content
stringlengths
86
994k
meta
stringlengths
288
619
A path in the 1-skeleton of a topological cell complex (with endpoints in the 0-skeleton) is a sequence P = v0, e1, v1, e2, …, vk. An elementary homotopy of P consists of replacing a subpath P' by another path Q', with the same endpoints, so that P' union Q' is contractible. One can require that P' union Q' is a circle, i.e., homeomorphic to a 1-sphere. Call this topological homotopy. If we replace the 1-skeleton by an arbitrary graph and the condition of contractibility by a list of allowed circles in the graph, we have combinatorial homotopy. This is the sort of homotopy involved in my recent characterization of associative multary quasigroups. Here the list of allowed circles has to satisfy a “linearity” condition; the combination of the graph and the linear class of allowed circles is called a “biased graph”. A particular lemma in the proof of the quasigroup theorem displays clearly the operation of combinatorial homotopy. The questions are: what does a topologist know (or want to know) about combinatorial homotopy, and how similar and how different are topological and combinatorial homotopy?
{"url":"https://www2.math.binghamton.edu/p/seminars/comb/abstract.200401zas","timestamp":"2024-11-05T12:51:19Z","content_type":"text/html","content_length":"18004","record_id":"<urn:uuid:f9163839-950f-4bb6-be3b-b68b373c4db3>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00090.warc.gz"}
NILSOCUBEZ – Nils O.'s Cubing Pages An earlier version of My 1st CFOP solution included RiDo’s Hunting Story for F2L. He uses three main algorithms for the three possible ways the corner piece and edge piece can orientated next to each Each of the algorithms is named after an animal with a similar “colour scheme” as the top layer of the two pieces: Crocodile (same colour), Tiger (two different colours) and Eagle (1 colour plus white “head”). The ways the corner pieces are “catching” the edge pieces are compared with the hunting method of the animals. I still use a great part of the algorithms, too, but in a more intuitive way. The three hunting algorithms start with the corner piece in an upper corner in the front (left or right) and the edge piece in the upper rear edge. The “target slot” (the corner in layer 1 and 2 where you want to position your two pieces) is the front corner underneath your corner piece. To get to this point you might need some preparation. If you already have the pair of pieces connected in the right way in the upper layer, then you don’t need the hunting algorithms – you have already “caught the prey”. To take the “prey” (the pair of edge and corner piece) into the “hideout of the predator” (the target slot) you start with the two pieces above the center and edge piece with the “upper layer” colour of your two pieces, so that they form a bicolour “L form”. Then you only need the three moves of the “L algorithm” to put them into their “slot” in the two bottom layers (next to the L form): You let the “target slot” jump upwards like a predator (L’/R), push the “prey” into the fangs of the predator (U/U’) and take the prey back into the hideout of the predator (L/R’). The “L algorithm” is also used as the final moves of the Crocodile and Eagle algorithms. If the two pieces are connected, but the orientation of the pieces is not correct, then you need to separate them, first (see below). If the corner piece is in the wrong front corner for the hunting algorithm, then you need another small algorithm to move it to the other corner. Crocodile Algorithm Tiger Algorithm Eagle Algorithm Basics: Glossary • 4LLL = 4-Look Last Layer A rather easy solution for the last layer with only 4 basic algorithms, 2 for OLL and 2 for PLL. Before each algorithm there has to be a pause to look on the cube to see how the cube has to be turned and how the next algorithm has to be applied. For faster methods with 3 or 2 looks a lot of additional algorithms have to be learned by a speed cuber. Anyway, to start, a CFOP method with 4LLL is good enough to get times well under 1 minute. • Algorithm A set of (mostly) 90 deg turns of one layer which have to be executed one after another to solve a part of the cube without destroying the already solved part of the cube. Some algorithms contain 180 deg turns of a layer, turns of two layers at a time or turns of the whole cube. Each turn is usually displayed as an upper-case letter for the layer (for clockwise turns) or an upper-case letter with a Prime symbol for counter-clockwise turns (p.e. U / U’). 180 deg turns are represented by a number 2 after the letter (or letter with Prime symbol, p.e. U2 / U’2). Turns of two layers at a time are displayed by lower-case letters (p.e. u / u’). Turns of the whole cube are displayed by a lower case letter of the axis (x / x’, y / y’ or z / z’) • CFOP = Cross – F2L – OLL – PLL (Fridrich) Method One of the preferred speed cubing method to solve the cube. First established by Jessica Fridrich in the 1980s. First the Cross of the Down Layer is solved. After that the rest of the first 2 layers (F2L) are solved together, followed by the orientation of the pieces of the last layer (OLL). After the right permutation of the pieces of the last layer (PLL) the cube is solved. Nowadays most records are hold by speed cubers using a variation of this method. • Crocodile Algorithm A pair of algorithms from RiDo’s Hunting Story for F2L for a set of corner and edge pieces of the same colour (like a crocodile with only one colour on its back). The crocodile waits just under the surface of the water, grabs the prey and takes it down under water. The way how the corner piece and the edge piece move in the Crocodile Algorithm is similar. • (White) Cross The first part of the cube that is usually solved is the Cross on the Down Layer. The easiest way for beginners is to build the Daisy on the Top Layer first an then one by one move the edge pieces into their right positions. • (Yellow) Cross Algorithm Algorithm of the 4LLL CFOP method for the orientation of the edge pieces in the last layer. The algorithm is used up to three times to finish the (yellow) cross. • Daisy The Daisy is the first part of an easy way to solve the Cross of the down layer (which is usually the white layer). In a first step all 4 edge pieces with white “stickers” are positioned around the upper (usually yellow) centre piece. The yellow center pieces with the 4 white edge pieces around it looks like a daisy. After completing the Daisy all 4 edge pieces are moved one by one into their right positions in the Down Layer to form the (white) Cross. • Eagle Algorithm □ A pair of algorithms from RiDo’s Hunting Story for F2L for a set of corner and edge pieces with the “white” side on top of the corner piece (like a Bald Eagle, the National Bird of the U.S.A. with a white head – RiDo explains it with an eagle in the white sky). The eagle flies in the sky, grabs the prey and takes it down to the ground. The way how the corner piece and the edge piece move in the Eagle Algorithm is similar. • F2L = First 2 Layers Algorithms to solve the corners of the first layer and the edges of the second layer together. The corner piece and the corresponding edge piece above are paired and positioned on their “slot” in the corner together in one move. The most intuitive method to learn F2L is RiDo’s Hunting Story • LBL = Layer by Layer Method For beginners, their first way to learn how to solve the cube is usually a Layer by Layer method. As the name says, the cube is solved layer by layer, starting with the lower (Down) layer. Usually the edges are solved first creating the (white) Cross, followed by the corners. Next are the 4 edge pieces that form the 2nd layer. There are many different solutions for the last layer. Some start with permutation and orientation of the edge pieces, followed by permutation and orientation of the corner pieces. Others use the OLL and PLL algorithms of the CFOP method. • OLL = Orientation Last Layer Algorithms to turn all pieces of the last layer with the same colour (usually yellow) facing upwards. The 4LLL method starts with the edge pieces (Cross Algorithm) followed by the corners (Fish • PLL = Permutation Last Layer The last algorithms of the CFOP method to solve the cube. All pieces of the last layer are moved to their right position. The 4LLL method starts with the corner pieces and finishes the solution of the cube positioning the edge pieces of the last layer. Advanced speed cubers learn a whole set of permutation algorithms (up to 21) to solve all edges and corners in a single step. • RiDo’s Hunting Story for F2L An intuitive way to solve the first two layers (F2L) of the cube, first presented by Rishi Doashi (RiDo) on his Youtube channel. It’s based on three animals with a typical colour scheme (crocodile, tiger and eagle) and the way they hunt. The colour scheme of the animals represents the orientation of the two pieces (edge and corner) that have to be joined and positioned together in their “slot” in the lower corner of the cube. The hunting method of the animals is similar to the moves of the predator (corner piece) and the prey (edge piece). The hunting story can be displayed as a set of six algorithms (a “left” and “right” Crocodile, Tiger and Eagle Algorithm). • Tiger Algorithm □ A pair of algorithms from RiDo’s Hunting Story for F2L for a set of corner and edge pieces of different (non-white) colours ( a tiger has two colours on its back). The tiger waits at the entrance of its cave, grabs the prey and takes it back into the cave. The way how the corner piece and the edge piece move in the Tiger Algorithm is similar. The algorithms (left and right version) have only 3 steps and can be learned to be executed very fast as Trigger Moves. • Trigger Algorithms / Trigger Moves A group of short algorithms that can be learned to be executed very fast by multiple repetitions of the algorithm. The more they are used, the faster and easier they can be executed. When they are included in a more complex algorithm the first move of the trigger algorithm works as a trigger for an “automatic” execution of the whole algorithm. The Tiger Algorithm is one of the easiest to learn 3-step Trigger Moves. Basics: Notation First things first: When you talk about solving a cube, the first you have to learn is the notation. That’s the way how a move on the cube is displayed in your instructions. The most common way of cube notation is a set of 90 deg turns of one or two layer or the whole cube, represented by letters and the Prime symbol. Every clockwise 90 deg turn of a layer is represented by a letter, every counter-clockwise 90 deg turn of a layer is represented by the same letter followed by the Prime symbol: The most commonly used shortcuts for turns on the cube are: • U / U’ – Up • D / D’ – Down • L / L’ – Left • R / R’ – Right • F / F’ – Front • B / B’ – Back • E / E’ – Equator • M / M’ – Middle For a 90 deg turn of the whole cube with “fixed” Top and Down colors the letter y / y’ is used. The notation with letters isn’t easy to learn for beginners and for cubers with a more visual way of learning (as myself). So I came back to the “square and arrow” notation from the Der Spiegel article from 1981. To get a relation to modern “letter based” notation I’ve added the corresponding letters to the symbols. For 3D representations I’ve decided to use classic 45 deg cavalier projections. This is a quick overview of the basic moves that I use in my instructions: Layer by Layer Method for Beginners These instructions are combination of different instructions I have found over the years. For each step I chose the algorithms that were the easiest to learn for me. My first instructions were from an article in the German magazine Der Spiegel from 1981. I still like the graphic representation of the moves from that article, so my graphics are based on it. The algorithms for edge orientation and corner orientation in the last layer are from that article, too. I use my own short version of the Daisy and White Cross part for the first layer, but it’s so intuitive that you will find the same move in many other instructions. I’ve found the algorithm for corner permutation years ago somewhere in the web, but I don’t remember where. It was much easier than the one I used before and I kept using it until I changed to the CFOP method. Most of the rest of the algorithms is from Robbie Gonzalez’ article on WIRED.com and his embedded YouTube videos. He learned his way of solving the cube layer by layer in under a minute from WCA cofounder Tyson Mao. A lot of the algorithms are based on a pair of simple left and right trigger moves that are easy to learn and fast to execute. So let’s start… First things first: To learn and understand the algorithms to solve the cube I use a combination of squares and arrows combined with the most commonly used “letter codes” for algorithm notation. An algorithm is a combination of single 90 deg turns of a single layer of the cube. Each algorithm has the purpose of solving a part of the cube without destroying the part that has been solved Usually a 180 deg turn of a layer is represented by a number 2 following the letter code. For my graphic representation I use a double symbol of the 90 deg turn instead. These are the basic turns of the cube that I use in my instructions. You can find further details about the notation here. First Layer 1.) The Daisy Beginners usually start to learn how to solve the cube with the white side as the Down side and the yellow side as the Up side. So the first part of the cube that is solved, is the (white) cross on the Down layer. As a first step the four edge pieces with white stickers are positioned around the yellow center piece of the Up side. With the yellow center and the white “leaves” the resulting cross looks like a daisy. If by coincidence you also get one or another white corner next tor the white egde pieces, you can ignore that. only the edge pieces are important. You won’t really need a detailed algorithm to solve the daisy. It’s quite intuitive, so with some practice by “playing around” you will be able to solve the daisy very quickly. Anyway, you can find some basic moves below. 2.) The (White) Cross Once you have solved the Daisy you can go on to solve the white Cross on the Down layer. The goal is not only to get the white side of the edge pieces next to the white center piece. We also want to have the edge pieces in the correct position right under their corresponding centre pieces. So you start with a first centre piece in the front and turn the Up Layer until the colour of the edge piece in the Daisy and the centre piece below have matching colours. Then you turn the Front layer by 180 deg to move the edge piece to the Down layer. Repeat that with the other three centre pieces and their corresponding edge pieces and you will have the white Cross solved in the Down 3.) (White) Corners To finish the first layer all corner pieces with a white side will have to be positioned in their right place in the Down layer. An easy way to do that is using a 3-step algorithm, a so-called Trigger Move. The more you practice this Trigger Move the faster you will be able to apply it. After a while you will be able to apply it automatically after triggering the move with its first step. There is a left and a right version of the algorithm. Depending on the situation you will need each of them for the next steps. The algorithms are represented by a square with three small squares inside forming a (chess) knight’s move pattern. You start with a corner piece in the Up layer with the white side of the corner piece on the side. Then you position this corner piece with its front side diagonally above the centre piece of the same colour. With the white cross solved before, you will have a pattern of a (chess) knight’s move on the Front side of your cube. You can either have a left or right knight’s move pattern. To position the corner piece in the Down layer you just apply the corresponding left or right algorithm. For all possible other positions and orientations of the corner pieces you can also find algorithms below. Repeat those algorithms until you have all 4 corner pieces in their right place in the Down After that you will not only have the white Down side solved but also the four sides of the down layer, forming (Tetris) Ts with the centre pieces above. Second Layer 4.) Edges of second layer The second layer has only 4 movable pieces: The edge pieces without white (Down side) or yellow (Up side) sides. The solution is quite easy: The edge piece is positioned, then the corner piece below the “target slot” of the edge piece is moved to the Up layer and back to its position in the Down layer, taking the edge piece with it. There are two sets of algorithms for that, a left and a right one. The algorithms are really easy to learn as they are mostly a combination of the Trigger Moves used before. For edge pieces in the second layer that aren’t positioned correctly, yet, there are two additional algorithms based on the previous ones. Last layer The sequence of the solution of the last layer and most of the algorithms are taken from the original solution printed in issue 4/1984 of the German magazine Der Spiegel. I’ve been using these algorithms for many years and still think that they quite are easy to learn. If you only want to solve your cube every once in a while, then they are still very useful. But if you really want to learn how to solve the cube and get faster, then I’d suggest to learn the OLL and PLL algorithms of the My 1st CFOP instructions instead. They are also quite easy to learn and it’s easier to get faster with them. 5.) Last Layer Edge Permutation The first step in the last layer is to put all the edge pieces into their correct position. In this step we don’t care about the orientation of the colour stickers of each piece, yet. The algorithm for that isn’t new: We just lift an edge piece from layer 2 into the upper layer and put the edge piece back into layer 2. This move also changes the position of two edge pieces in the upper layer and the orientation of the other two edge pieces in the upper layer. If you have two swap two opposite edge pieces you just have to apply the algorithm twice, turning the cube into the right position between them. 6.) Last Layer Edge Orientation Then the edges get their right orientation, so that after this step you get a complete “yellow cross” on your last layer. The algorithm is really simple, you just repeat R and E moves four times. This will mix up the cube quite a lot in layer 1 and 2 after the orientation of the first edge piece. So for the next edge peace be sure that you only twist the upper layer to put the next edge piece on the right side. Then you repeat the algorithm and Layer 1 and 2 will be solved again. There are always 0, 2 or 4 edge pieces that need a new orientation, never 1 or 3. So for all edge pieces you have to apply the algorithm 4 times, always with a U’ move after the algorithm. 7.) Last Layer Corner Permutation The next step is the correct position of all 3 edge pieces. The original algorithm from the Der Spiegel article had more than 20 steps, which made it quite hard to memorize. Years later I found a much shorter solution on a website, which only had 8 steps with the same effect. The corner piece in the rear left corner stays in position, while the algorithm moves the other the corner pieces counter clockwise. 8.) Last Layer Corner Orientation The last thing to do is the right orientation of the corner pieces in the upper layer. The algorithm is once more easy to learn and quite fast to apply. There are only 8 steps, only moving the front and side layer of the cube. You will always turn the front right corner piece. As in step 6 (Last layer Edge Orientation), the algorithm will mix up layer 1 and 2 when it’s applied. In this case you will need 3 repetitions before both layers will be solved again. So be sure once more that between two algorithms you only turn the upper layer and not the whole cube to position the next corner piece in the front right corner. Each corner piece might need one or two applications of the algorithm to get the right orientation (usually yellow on the upper side). To get all four corners right, you will need 0, 3 or 6 repetitions of the algorithm, plus a total of 4 U’ moves. You will never need 1, 2, 4 or 5 repetitions of the algorithm. If that’s the case, then at least one of the corner pieces has been twisted and forced into a wrong orientation. And that’s it! After Step 8.) you have solved your cube.
{"url":"http://cubez.nilso.eu/?author=1","timestamp":"2024-11-09T09:53:24Z","content_type":"text/html","content_length":"64566","record_id":"<urn:uuid:8997b73f-5557-4247-8ff8-c2ac6df4ca9e>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00314.warc.gz"}
Carton Measurement Calculator (include waste) - Textile Calculator Carton Measurement Calculator (include waste) Carton Measurement Calculator (include waste) When it comes to packaging, calculating the size of your carton correctly is crucial. Whether you’re shipping goods or storing items, understanding the exact measurement of your carton can save you time, money, and space. This is where the Carton Measurement Calculator (include waste) comes into play! In this article, we’ll dive deep into what this calculator is, why it’s important, and how you can easily calculate carton measurements, including wastage. Ready to simplify your packaging process? Let’s get started! Table of Contents Formula for Carton Measurement (Include Waste) The formula to calculate the carton measurement while including waste is simple. Here’s how it works: Measurement (including wastage)= [(Length+Width+6)×(Width+Height+4)×2×100]/100 Let’s break this down: • Length: This is the length of your carton in centimeters. • Width: This refers to the width of your carton in centimeters. • Height: This is the height of the carton. • 6 cm & 4 cm: These extra values account for wastage during production. They add additional space that may be lost during folding and other processes. By plugging your carton’s length, width, and height into this formula, you’ll be able to calculate its total area, including wastage. Using this calculator is straightforward. Follow these steps: 1. Measure Your Carton: Gather the length, width, and height of your carton in centimeters. 2. Plug in the Values: Enter these values into the formula. 3. Calculate the Area: Use the formula to get the final measurement, which includes wastage. 4. Order the Right Size: Use the calculated area to determine how much material you’ll need to produce the cartons. What Is a Carton Measurement Calculator? A Carton Measurement Calculator helps you determine the area of a carton in square centimeters. The calculation involves not only the dimensions of the carton but also the wastage that comes from cutting, folding, and other packaging processes. If you’ve ever ordered cartons for shipping or storage, you’ll know that wastage is inevitable. Including waste in your calculation ensures you won’t run short on packaging material. Why Is Including Waste Important? Wastage occurs during the production and assembly of cartons. When calculating the size of a carton, it’s essential to add a few extra centimeters to compensate for this waste. Not accounting for waste can lead to: • Incorrect carton sizes • Material shortages • Increased costs due to ordering extra materials By using a Carton Measurement Calculator (include waste), you can ensure that all aspects of the carton production are factored into the size, including the little extra material that might be lost during the cutting and folding processes. How to Use the Carton Measurement Calculator (Include Waste) Example: Calculating the Carton Size (Including Waste) Now, let’s walk through a real-world example to demonstrate how the Carton Measurement Calculator (including waste) works. Question: Imagine you need a carton that’s 50 cm long, 30 cm wide, and 20 cm high. How would you calculate the carton size, including waste? Solution: Using the formula: Measurement (including waste)= [(50+30+6)×(30+20+4)×2×100]/100 Now, let’s break this down step-by-step: 1. Add Length + Width + 6: 50+30+6=8650 + 30 + 6 = 8650+30+6=86 2. Add Width + Height + 4: 30+20+4=5430 + 20 + 4 = 5430+20+4=54 3. Multiply the results: 86×54=464486 \times 54 = 464486×54=4644 4. Multiply by 2: 4644×2=92884644 \times 2 = 92884644×2=9288 5. Final result: The total area (including wastage) is 9288 cm². So, the carton will require 9288 square centimeters of material, including waste. Application of the Carton Measurement Calculator (Include Waste) Now that you understand how to calculate the size of your carton with waste included, where can this be applied? • Shipping and Logistics: This is particularly useful for those in the shipping industry, where cartons of precise sizes are needed to avoid material waste. • Storage: When storing items, using this calculator helps to ensure you don’t run out of space or order more cartons than necessary. • Manufacturing: In production facilities, this calculator is essential to optimize material usage, ensuring cost-effectiveness without compromising on quality. With the Carton Measurement Calculator (include waste), you can ensure that your cartons are the perfect size, every time. Benefits of Using the Carton Measurement Calculator Using the Carton Measurement Calculator (include waste) offers a wide range of benefits, including: • Accuracy: Get the precise carton measurements, ensuring there’s no shortage or excess of material. • Cost Efficiency: Minimize costs by avoiding unnecessary wastage. • Simplicity: The calculator is easy to use, requiring only the basic dimensions of your carton. Carton Measurement: A Must for Businesses For businesses that rely on packaging, having the right measurements is crucial. Whether you’re a small eCommerce store or a large-scale manufacturer, accurate carton measurement is the key to efficiency. With wastage factored in, you won’t need to worry about last-minute surprises or additional costs. This makes the Carton Measurement Calculator (include waste) a go-to tool for anyone who needs reliable packaging solutions. Calculating the size of a carton can be tricky, especially when you factor in waste. However, with the Carton Measurement Calculator (include waste), the process is straightforward and accurate. By using a simple formula, you can quickly calculate the exact size of the carton you need, ensuring that both the product and the packaging are just right. The next time you need to order cartons for shipping or storage, remember to include the wastage factor. This small addition will save you time, reduce material waste, and keep your costs in check. So, what are you waiting for? Start measuring with accuracy today! What is the wastage factor in carton measurement? The wastage factor includes extra material that is typically lost during cutting, folding, or assembly of the carton. In our formula, we add 6 cm and 4 cm to account for this waste. Can I use the Carton Measurement Calculator for different units? This calculator specifically uses centimeters (cm). However, you can convert other units like inches to centimeters before using the calculator. Why should I add wastage to my calculations? Adding wastage helps to ensure that you have enough material for your cartons, even after factoring in losses from production processes. How accurate is the Carton Measurement Calculator? The calculator provides a highly accurate measurement by including wastage, making it more reliable than basic measurements. Is the Carton Measurement Calculator only for businesses? No! While it’s widely used by businesses, anyone who needs to order cartons or packaging material can benefit from this calculator.
{"url":"https://textilecalculator.com/carton-measurement-calculator-include-waste/","timestamp":"2024-11-05T17:07:26Z","content_type":"text/html","content_length":"185752","record_id":"<urn:uuid:80b88492-c14d-439e-9426-1e2729b35ec7>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00843.warc.gz"}
Card game You are playing a card game with a friend. For this game only the suit of the cards matters. The four suits are clubs, diamonds, hearts and spades, with the following values: │Suit │Symbol │Value│ │Clubs │♣ │1 │ │Diamonds │♦ │5 │ │Hearts │♥ │8 │ │Spades │♠ │14 │ Your friend selects a number n, and you must show cards whose total value equals n, by using the minimum possible number of cards. Assume that you have an unlimited number of cards of each suit. Input consists of several cases, each one with a natural number n between 0 and 500000. Input ends with a −1. For every n, print the corresponding result.
{"url":"https://jutge.org/problems/P54504_en","timestamp":"2024-11-13T01:03:01Z","content_type":"text/html","content_length":"24081","record_id":"<urn:uuid:8f92fb5a-ff91-4eb3-acd7-54e923384aa7>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00837.warc.gz"}
Practice Question • Subject 2. Assumptions of the Simple Linear Regression Model CFA Practice Question There are 676 practice questions for this topic. CFA Practice Question Examine the following residual by predicted plot. Which assumption is violated if we want to fit a linear regression model? A. Linearity B. Independence C. Normality D. Homoscedasticity Correct Answer: D This is an example of heteroscedasticity. It means that the variability in the response is changing as the predicted value increases. This is a problem, in part, because the observations with larger errors will have more pull or influence on the fitted model. User Contributed Comments 0 You need to log in first to add your comment.
{"url":"https://analystnotes.com/cfa_question.php?p=H2A4FXYHR","timestamp":"2024-11-09T13:23:44Z","content_type":"text/html","content_length":"19394","record_id":"<urn:uuid:ff00efd7-89b8-41f8-95ec-cf668caba8a5>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00646.warc.gz"}
How do you simplify (2sqrt(7)+35)/[sqrt(7)]? | Socratic How do you simplify #(2sqrt(7)+35)/[sqrt(7)]#? 1 Answer It would help if there was something that we could cancel out. At first glance, there is a $\sqrt{7}$ in the numerator and the denominator, so lets see if we can do something about that. If we factor $35$ we get; $35 = 5 \cdot 7$ But the $7$ can be further factored by taking the square root. $7 = {\sqrt{7}}^{2} = \sqrt{7} \cdot \sqrt{7}$ So $35$ becomes; $35 = 5 \cdot \sqrt{7} \cdot \sqrt{7}$ Now we can start simplifying the expression. $\frac{2 \sqrt{7} + 35}{\sqrt{7}} = \frac{2 \sqrt{7} + 5 \cdot \sqrt{7} \cdot \sqrt{7}}{\sqrt{7}}$ Move the $\sqrt{7}$ out of the numerator. $\frac{\sqrt{7} \left(2 + 5 \sqrt{7}\right)}{\sqrt{7}}$ Now the $\sqrt{7}$s cancel and we have; $\frac{\cancel{\sqrt{7}} \left(2 + 5 \sqrt{7}\right)}{\cancel{\sqrt{7}}} = 2 + 5 \sqrt{7}$ Impact of this question 1321 views around the world
{"url":"https://socratic.org/questions/how-do-you-simplify-2sqrt-7-35-sqrt-7#182949","timestamp":"2024-11-02T11:35:39Z","content_type":"text/html","content_length":"33801","record_id":"<urn:uuid:78c4e060-2e2e-481c-9489-e2e2cc43cebc>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00014.warc.gz"}
Temporally delayed linear modelling (TDLM) measures replay in both animals and humans There are rich structures in off-task neural activity which are hypothesized to reflect fundamental computations across a broad spectrum of cognitive functions. Here, we develop an analysis toolkit – temporal delayed linear modelling (TDLM) – for analysing such activity. TDLM is a domain-general method for finding neural sequences that respect a pre-specified transition graph. It combines nonlinear classification and linear temporal modelling to test for statistical regularities in sequences of task-related reactivations. TDLM is developed on the non-invasive neuroimaging data and is designed to take care of confounds and maximize sequence detection ability. Notably, as a linear framework, TDLM can be easily extended, without loss of generality, to capture rodent replay in electrophysiology, including in continuous spaces, as well as addressing second-order inference questions, for example, its temporal and spatial varying pattern. We hope TDLM will advance a deeper understanding of neural computation and promote a richer convergence between animal and human neuroscience. Human neuroscience has made remarkable progress in detailing the relationship between the representations of different stimuli during task performance (Haxby et al., 2014; Kriegeskorte et al., 2008; Barron et al., 2016). At the same time, it is increasingly clear that resting, off-task, brain activities are structurally rich (Smith et al., 2009; Tavor et al., 2016). An ability to study spontaneous activity with respect to task-related representation is important for understanding cognitive process beyond current sensation (Higgins et al., 2021). However, unlike the case for task-based activity, little attention has been given to techniques that can measure representational content of resting brain activity in humans. Unlike human neuroscience, representational content of resting activity is studied extensively in animal neuroscience. One seminal example is ‘hippocampal replay’ (Wilson and McNaughton, 1994; Skaggs and McNaughton, 1996; Louie and Wilson, 2001; Lee and Wilson, 2002): during sleep, and quiet wakefulness, place cells in the hippocampus (that signal self-location during periods of activity) spontaneously recapitulate old, and explore new, trajectories through an environment. These internally generated sequences are hypothesized to reflect a fundamental feature of neural computation across tasks (Foster, 2017; Ólafsdóttir et al., 2018; Pfeiffer, 2020; Carr et al., 2011; Lisman et al., 2017). Numerous methods have been proposed to analyse hippocampal replay (Davidson et al., 2009 ; Grosmark and Buzsáki, 2016; Maboudi et al., 2018). However, they are not domain general in that they are designed to be most suited for specific needs, such as particular task design, data modality, or research question (van der Meer et al., 2020; Tingley and Peyrache, 2020). Most commonly, these methods apply to invasive electrophysiology signals, aiming to detect sequences in a linear track during spatial navigation task (Tingley and Peyrache, 2020). As a result, they cannot be directly adapted for analysing human resting activity collected using non-invasive neuroimaging techniques. Furthermore, in rodent neuroscience, it is non-trivial to adapt these algorithms to even small changes in tasks (such as 2D foraging). This may be a limiting factor in taking replay analyses to more interesting and complex tasks, such as complex mazes (Rosenberg et al., 2021). Here, we introduce temporal delayed linear modelling (TDLM), a domain-general analysis toolkit, for characterizing temporal structure of internally generated neural representations in rodent electrophysiology as well as human neuroimaging data. TDLM is inspired by existing replay detection methods (Skaggs and McNaughton, 1996; Davidson et al., 2009; Grosmark and Buzsáki, 2016), especially those analysis of population of replay events (Grosmark and Buzsáki, 2016). It is developed based on the general linear modelling (GLM) framework and can therefore easily accommodate testing of ‘second-order’ statistical questions (van der Meer et al., 2020), such as whether there is more forward than reverse replay, or is replay strength changing over time, or differs between behavioural conditions. This type of question is ubiquitous in cognitive studies, but is typically addressed ad hoc in other replay detection methods (van der Meer et al., 2020). In TDLM, such questions are treated naturally as linear contrasts of effects in a GLM. Here, we show TDLM is suited to measure the average amount of replay across many events (i.e. replay strength) in linear modelling. This makes it applicable to both rodent electrophysiology and human neuroimaging. Applying TDLM on non-invasive neuroimaging data in humans, we, and others, have shown it is possible to measure the average sequenceness (propensity for replay) in spontaneous neural representations (Wimmer et al., 2020; Nour et al., 2021; Liu et al., 2019; Liu et al., 2021a). The results resemble key characteristics found in rodent hippocampal replay and inform key computational principles of human cognition (Liu et al., 2019). In the following sections, we first introduce the logic and mechanics of TDLM in detail, followed by a careful treatment of its statistical inference procedure. We test TDLM in both simulation (see section ‘Simulating MEG data’) and real human MEG/EEG data (see section ‘Human replay dataset’). We then turn to rodent electrophysiology and compare TDLM to existing rodent replay methods, extending TDLM to work on a continuous state space. Lastly, using our approach we re-analyse rodent electrophysiology data from Ólafsdóttir et al., 2016 (see section ‘Rodent replay dataset’) and show what TDLM can offer uniquely compared to existing methods in rodent replay analysis. To summarize, TDLM is a general, and flexible, tool for measuring neural sequences. It facilitates cross-species investigations by linking large-scale measurements in humans to single-neuron measurements in non-human species. It provides a powerful tool for revealing abstract cognitive processes that extend beyond sensory representation, potentially opening doors for new avenues of research in cognitive science. Temporal delayed linear modelling Our primary goal is to test for temporal structure of neural representations in humans. However, to facilitate cross-species investigation (Barron et al., 2021), we also want to extend this method to enable measurement of sequences in other species (e.g. rodents). Consequently, this sequence detection method has to be domain general. We chose to measure sequences in a decoded state space (e.g. posterior estimated locations in rodents [Grosmark and Buzsáki, 2016] or time course of task-related reactivations in humans [Liu et al., 2019]) as this makes results from different data types Ideally, a general sequence detection method should (1) uncover structural regularities in the reactivation of neural activity, (2) control for confounds that are not of interest, and (3) test whether this regularity conforms to a hypothesized structure. To achieve these goals, we developed the method under a GLM framework, and henceforth refer to it as temporal delayed linear modelling, that is, TDLM. Although TDLM works on a decoded state space, it still needs to take account of confounds inherent in the data where the state space is decoded from. This is a main focus of TDLM. The starting point of TDLM is a set of n time series, each corresponding to a decoded neural representation of a task variable of interest. This is what we call the state space, X, with dimension of time by states. These time series could themselves be obtained in several ways, described in detail in a later section (‘Getting the states’). The aim of TDLM is to identify task-related regularities in sequences of these representations. Consider, for example, a task in which participants have been trained such that n = 4 distinct sensory objects (A, B, C, and D) appear in a consistent order $:A→B→C→D$ (Figure 1a, b). If we are interested in replay of this sequence during subsequent resting periods (Figure 1c, d), we might want to ask statistical questions of the following form: 'Does the existence of a neural representation of A, at time T, predict the occurrence of a representation of B at time T+ $∆t$?' and similarly for $B→C$ and $C→D$. Task design and illustration of temporal delayed linear modelling (TDLM). In TDLM, we ask such questions using a two-step process. First, for each of the n^2 possible pairs of variables X[i] and X[j], we find the linear relation between the X[i] time series and the $∆t$ -shifted X[j] time series. These n^2 relations comprise an empirical transition matrix, describing how likely each variable is to be succeeded at a lag of $∆t$ by each other variable (Figure 1e). Second, we linearly relate this empirical transition matrix with a task-related transition matrix of interest (Figure 1f). This produces a single number that characterizes the extent to which the neural data follow the transition matrix of interest, which we call ‘sequenceness’. Finally, we repeat this entire process for all $∆t$ of interest, yielding a measure of sequenceness at each possible lag between variables and submit this for statistical inference (Figure 1g). Note that, for now, this approach decomposes a sequence (such as $A→B→C→D$) into its constituent transitions and sums the evidence for each transition. Therefore, it does not require that the transitions themselves are sequential: $A→B$ and $B→C$ could occur at unrelated times, so long as the within-pair time lag was the same. For interested readers, we address how to strengthen the inference by looking explicitly for longer sequences in Appendix 1: Multi-step sequences. Constructing the empirical transition matrix In order to find evidence for state-to-state transitions at some time lag $∆t$, we could regress a time-lagged copy of one state, $Xj$, onto another, $Xi$ (omitting residual term ε in all linear (1) ${X}_{j}\left(t+∆t\right)={X}_{i}\left(t\right){\beta }_{ij}$ Instead, TDLM chooses to include all states in the same regression model for important reasons, detailed in section ‘Moving to multiple linear regression’: (2) ${X}_{j}\left(t+∆t\right)={\sum }_{k=1}^{n}{X}_{k}\left(t\right){\beta }_{kj}$ In this equation, the values of all states $Xk$ at time t are used in a single multilinear model to predict the value of the single state $Xj$ at time $t+∆t$. The regression described in Equation 2 is performed once for each $Xj$, and these equations can be arranged in matrix form as follows: (3) $X\left(∆t\right)=X\beta$ Each row of X is a time point, and each of the n columns is a state. $X∆t$ is the same matrix as X, but with the rows shifted forwards in time by $∆t$. $βij$ is an estimate of the influence of $Xit$ on $Xjt+∆t$. $β$ is an $n×n$ matrix of weights, which we call the empirical transition matrix. To obtain $β$, we invert Equation 3 by ordinary least squares regression: (4) $\beta ={\left({X}^{T}X\right)}^{-1}{X}^{T}X\left(∆t\right)$ This inversion can be repeated for each possible time lag ( $∆t=1,2,3,…$), resulting in a separate empirical transition matrix β at every time lag. We call this step the first-level sequence Testing the hypothesized transitions The first-level sequence analysis assesses evidence for all possible state-to-state transitions. The next step in TDLM is to test for the strength of a particular hypothesized sequence, specified as a transition matrix,T. Therefore, we construct another GLM which relates T to the empirical transition matrix, β. We call this step the second-level sequence analysis: (5) $\beta =\sum _{r=1}^{r}Z\left(r\right)\ast {T}_{r}$ As noted above, $β$ is the empirical transition matrix obtained from first-stage GLM. It has dimension of $n$ by $n$, where $n$ is the number of states. Each entry in $β$ reflects the unique contribution of state i to state j at given time lag. Effectively, the above equation models this empirical transition matrix $β$ as a weighted sum of prespecified template matrices, $Tr$. Thus, $r$ is the number of regressors included in the second-stage GLM, and each scalar valued $Z(r)$ is the weight assigned to the $r$ th template matrix. Put in other words, $Tr$ constitutes the regressors in the design matrix, each of which has a prespecified template structure, for example, $Tauto$, $Tconst$, $TF$, and $TB$ (Figure 1h). $TF$ and $TB$ are the transpose of each other (e.g. red and blue entries in Figure 1b), indicating transitions of interest in forward and backward direction, respectively. In 1D physical space, $TF$ and $TB$ would be the shifted diagonal matrices with ones on the first upper and lower off diagonals. $Tconst$ is a constant matrix that models away the average of all transitions, ensuring that any weight on $TF$ and $TB$ reflects its unique contribution. $Tauto$ is the identity matrix. $Tauto$ models self-transitions to control for autocorrelation (equivalently, we could simply omit the diagonal elements from the regression). Z is the weights of the second-level regression, which is a vector with dimension of 1 by $r$. Each entry in Z reflects the strength of the hypothesized transitions in the empirical ones, that is, sequenceness. Repeating the regression of Equation 5 at each time lag ($Δt=1,2,3,…$) results in time courses of the sequenceness as a function of time lag (e.g. the solid black line in Figure 1f). $ZF$, $ZB$ are the forward and backward sequenceness, respectively (e.g. red and blue lines in Figure 1g). In many cases, Z[F] and Z[B] will be the final outputs of a TDLM analysis. However, it may sometimes also be useful to consider the quantity: (6) $D={Z}_{F}-{Z}_{B}$ $D$ contrasts forward and backward sequences to give a measure that is positive if sequences occur mainly in a forward direction and negative if sequences occur mainly in a backward direction. This may be advantageous if, for example, $ZF$ and $ZB$ are correlated across subjects (due to factors such as subject engagement and measurement sensitivity). In this case, $D$ may have lower cross-subject variance than either $ZF$ or $ZB$ as the subtraction removes common variance. Finally, to test for statistical significance, TDLM relies on a non-parametric permutation-based method. The null distribution is constructed by randomly shuffling the identities of the n states many times and re-calculating the second-level analysis for each shuffle (Figure 1g). This approach allows us to reject the null hypothesis that there is no relationship between the empirical transition matrix and the task-defined transition of interest. Note that there are many incorrect ways to perform permutations, which permute factors that are not exchangeable under the null hypothesis and therefore lead to false positives. We examine some of these later with simulations and real data. In some cases, it may be desirable to test slightly different hypotheses by using a different set of permutations; this is discussed later. If the time lag $Δt$ at which neural sequences exist is not known a priori, then we must correct for multiple comparisons over all tested lags. This can be achieved by using the maximum Z[F] across all tested lags as the test statistic (see details in section 'Correcting for multiple comparisons'). If we choose this test statistic, then any values of Z[F] exceeding the 95th percentile of the null distribution can be treated as significant at $α=0.05$ (e.g. the grey dotted line in Figure 1g). As described above, the input to TDLM is a set of time series of decoded neural representations or states. Here, we provide different examples of specific state spaces (X, with dimension of time by states) that we have worked with using TDLM. States as sensory stimuli The simplest case, perhaps, is to define a state in terms of a neural representation of sensory stimuli, for example, face, house. To obtain their associated neural representation, we present these stimuli in a randomized order at the start of a task and record whole-brain neural activity using a non-invasive neuroimaging method, for example, Magnetoencephalography (MEG) or Electroencephalography (EEG). We then train a model to map the pattern of recorded neural activity to the presented image (Figure 1—figure supplement 1). This could be any of the multitude of available decoding models. For simplicity, we used a logistic regression model throughout. The states here are defined in terms of stimuli-evoked neural activity. The classifiers are trained at 200 ms post-stimulus onset. For example, the stimuli are faces, buildings, body parts, and objects. Source localizing the evoked neural activity, we found that the activation patterns of stimuli in MEG signal are consistent with those reported in fMRI literature. For faces, activation peaked in a region roughly consistent with the fusiform face area (FFA) as well as the occipital face area (OFA). Activation for building stimuli was located between a parahippocampal place area (PPA) and retrosplenial cortex (RSC), a region also known to respond to scene and building stimuli. Activation for body part stimuli localized to a region consistent with the extrastriate body area (EBA). Activation for objects was in a region consistent with an object-associated lateral occipital cortex (LOC) as well as an anterior temporal lobe (ATL) cluster that may relate to conceptual processing of objects. Those maps are thresholded to display localized peaks. The full un-thresholded maps can be found at https://neurovault.org/collections/6088/. This is adapted from Wimmer et al., 2020. In MEG/EEG, neural activity is recorded by multiple sensor arrays on the scalp. The sensor arrays record whole-brain neural activity at millisecond temporal resolution. To avoid a potential selection bias (given the sequence is expressed in time), we choose whole-brain sensor activity at a single time point (i.e. spatial feature) as the training data fed into classifier training. Ideally, we would like to select a time point where the neural activity can be most truthfully read out. This can be indexed as the time point that gives the peak decoding accuracy. If the state is defined by the sensory features of stimuli, we can use a classical leave-one-out cross-validation scheme to determine the ability of classifiers to generalize to unseen data of the same stimulus type (decoding accuracy) at each time point (see Appendix 2 for its algorithm box). In essence, this cross-validation scheme is asking whether the classifier trained on this sensory feature can be used to classify the unseen data of the same stimuli (Figure 2a, b). Obtaining different state spaces. After we have identified the peak time point based on the cross-validation, we can train the decoding models based on the multivariate sensor data at this given time. Specifically, let us denote the training data, $M$, with dimension of number of observations, $b$, by number of sensors, $s$. The labels, Y, have dimension of $b$ by 1. The aim here is to obtain the classifier weights, W, so that $Y≈σ(MW)$. $σ$ is the logistic sigmoid function. Normally we apply L1 regularization on the inference of weights (we will detail the reasons in section ‘Regularization’): (7) $\mathrm{W}=\underset{W}{\mathrm{a}\mathrm{r}\mathrm{g}\mathrm{m}\mathrm{a}\mathrm{x}}{\left[\mathrm{l}\mathrm{o}\mathrm{g}\left(\mathrm{P}\left(\mathrm{Y}|\mathrm{M},\mathrm{W}\right)\right)+\ mathrm{b}{\lambda }_{L1}||\mathrm{W}||}_{1}\right]$ Next, we translate the data at testing time (e.g. during rest), R, from sensor space to the decoded state space: (8) $X=\sigma \left(\mathrm{R}\mathrm{W}\right)$ where R is the testing data, with dimension of time by sensors, and X is the decoded state space, with dimension of time by states. As well as sequences of sensory representations, it is possible to search for replay of more abstract neural representations. Such abstractions might be associated with the presented image (e.g. mammal vs. fish), in which case analysis can proceed as above by swapping categories for images (Wimmer et al., 2020). A more subtle example, however, is where the abstraction pertains to the sequence or graph itself. In space, for example, grid cells encode spatial coordinates in a fashion that abstracts over the sensory particularities of any one environment, and therefore can be reused across environments (Fyhn et al., 2007). In human studies, similar representations have been observed for the location in a sequence (Liu et al., 2019; Dehaene et al., 2015). For example, different sequences have shared representations for their second items (Figure 2). These representations also replay (Liu et al., 2019). However, to measure this replay we need to train decoders for these abstract representations. This poses a conundrum as it is not possible to elicit the abstract representations in the absence of the concrete examples (i.e., the sensory stimuli). Care is required to ensure that the decoders are sensitive to the abstract code rather than the sensory representations (see Appendix 2 for algorithm box of selecting time point for training abstract code). Useful strategies include training classifiers to generalize across stimulus sets and ensuring the classifiers are orthogonal to sensory representations (Figure 2—figure supplement 1; details in Liu et al., 2019). One way that excludes the possibility of sensory contamination is if the structural representations can be shown to sequence before the subjects have ever seen their sensory correlates (Liu et al., 2019). TDLM can also be used iteratively to ask questions about the ordering of different types of replay events (Figure 2d). This can provide for powerful inferences about the temporal organization of replay, such as the temporal structure between sequences, or the repeating pattern of the same sequence. This more sophisticated use of TDLM merits its own consideration and is discussed in Appendix 3: Sequences of sequences. Controlling confounds and maximizing sensitivity in sequence detection Here, we motivate the key features of TDLM. In standard linear methods, unmodelled temporal autocorrelation can inflate statistical scores. Techniques such as autoregressive noise modelling are commonplace to mitigate these effects (Colclough et al., 2015; Deodatis and Shinozuka, 1988). However, autocorrelation is a particular burden for analysis of sequences, where it interacts with correlations between the decoded neural variables. To see this, consider a situation where we are testing for the sequence $Xi→Xj$. TDLM is interested in the correlation between $Xi$ and lagged $Xj$ (see Equation 1). But if the $Xi$ and $Xj$ time series contain autocorrelations and are also correlated with one another, then $Xi(t)$ will necessarily be correlated with $Xj(t+Δt)$. Hence, the analysis will spuriously report sequences. Correlations between states are commonplace. Consider representations of visual stimuli decoded from neuroimaging data. If these states are decoded using an n-way classifier (forcing exactly one state to be decoded at each moment), then the n states will be anti-correlated by construction. On the other hand, if states are each classified against a null state corresponding to the absence of stimuli, then the n states will typically be positively correlated with one another. Notably, in our case, because these autocorrelations are identical between forward and backward sequences, one approach for removing them is to compute the difference measure described above ($D= ZF−ZB$). This works well as shown in Kurth-Nelson et al., 2016. However, a downside is it prevents us from measuring forward and backward sequences independently. The remainder of this section considers alternative approaches that allow for independent measurement of forward and backward sequences. Moving to multiple linear regression The spurious correlations above are induced because $Xj(t)$ mediates a linear relationship between $Xi(t)$ and $Xj(t+Δt)$. Hence, if we knew $Xj(t),$ we can solve the problem by simply controlling for it in a linear regression, as in Granger causality (Eichler, 2007): (9) ${X}_{j}\left(t+\mathrm{\Delta }t\right)={\beta }_{0}+{X}_{i}\left(t\right){\beta }_{ij}+{X}_{j}\left(t\right){\beta }_{jj}$ Unfortunately, we do not have access to the ground truth of $X$ because these variables have been decoded noisily from brain activity. Any error in $Xj(t)$ but not $Xi(t)$ will mean that the control for autocorrelation is imperfect, leading to spurious weight on $βij$, and therefore spurious inference of sequences. This problem cannot be solved without a perfect estimate of X, but it can be systematically reduced until negligible. It turns out that the necessary strategy is simple. We do not know ground truth $Xj(t)$, but what if we knew a subspace that included estimated $Xj(t)$? If we control for that whole subspace, we would be on safe ground. We can get closer and closer to this by including further co-regressors that are themselves correlated with estimated $Xj(t)$ with different errors from ground truth $Xj(t)$. The most straightforward approach is to include the other states of $X(t)$, each of which has different errors, leading to the multiple linear regression of Equation 2. Figure 3a shows this method applied to the same simulated data whose correlation structure induces false positives in the simple linear regression of Equation 1, and by the same logic, so too in cross-correlation. This is why previous studies based on a cross-correlation (Eldar et al., 2018; Kurth-Nelson et al., 2016) cannot look for sequenceness in forward and backward directions separately, but have to rely on their asymmetry. The multiple regression accounts for the correlation structure of the data and allows correct inference to be made. Unlike the simple subtraction method proposed above (Figure 3a, left panel), the multiple regression permits separate inference on forward and backward sequences. Effects of temporal, spatial correlations, and classifier regularization on temporal delayed linear modelling (TDLM). Oscillations and long timescale autocorrelations Equation 2 performs multiple regression, regressing each $Xj(t+Δt)$ onto each $Xi(t)$ whilst controlling for all other state estimates at time t. This method works well when spurious relationships between $Xi(t)$ and $Xj(t+Δt)$ are mediated by the subspace spanned by the other estimated states at time t (in particular, $Xj(t)$). One situation in which this assumption might be challenged is when replay is superimposed on a large neural oscillation. For example, during rest (which is often the time of interest in replay analysis), MEG and EEG data often express a large alpha rhythm, at around 10 Hz. If all states experience the same oscillation at the same phase, the approach correctly controls false positives. The oscillation induces a spurious correlation between $Xi(t)$ and $Xj(t+Δt)$, but, as before, this spurious correlation is mediated by $Xj(t)$. However, this logic fails when states experience oscillations at different phases. This scenario may occur, for example, if we assume there are travelling waves in cortex (Lubenov and Siapas, 2009; Wilson et al., 2001) because different sensors will experience the wave at different times and different states have different contributions from each sensor. MEG sensors can be seen as measures of local field potential on the scalp, which contain background neural oscillations. In humans, this is dominantly alpha during rest. In this case, $Xi(t)$ predicts $Xj(t+Δt)$ over and above $Xj(t)$. To see this, consider the situation where $Δt$ is $14τ$ (where $τ$ is the oscillatory period) and the phase shift between $Xi(t)$ and $Xj(t)$ is pi/2. Now every peak in $Xj(t+Δt)$ corresponds to a peak in $Xi(t)$ but a zero of $Xj(t)$. To combat this, we can include phase-shifted versions/more time points of $X(t)$. If dominant background oscillation is at alpha frequency (e.g. 10 Hz), neural activity at time T would be correlated with activity at time T + $τ$. We can control for that by including $X(t+τ)$, as well as $X(t)$, in the GLM (Figure 3b). Here, $τ$ = 100 ms if assuming the frequency is 10 Hz. Applying this method to the real MEG data during rest, we see much diminished 10 Hz oscillation in sequence detection during rest (Liu et al., 2019). As mentioned above, correlations between decoded variables commonly occur. The simplest type of decoding model is a binary classifier that maps brain activity to one of two states. These states will, by definition, be perfectly anti-correlated. Conversely, if separate classifiers are trained to distinguish each state’s representation from baseline (‘null’) brain data, then the states will often be positively correlated with each other. Unfortunately, positive or negative correlations between states reduce the sensitivity of sequence detection because it is difficult to distinguish between states within the sequence: collinearity impairs estimation of β in Equation 2. In Figure 3c, we show in simulation that the ability to detect real sequences goes down as the absolute value of a spatial correlation goes up. We took the absolute value here because the direction of correlation is not important, only the magnitude of the correlation matters. Ideally, the state decoding models should be as independent as possible. We have suggested the approach of training models to discriminate one state against a mixture of other states and null data ( Liu et al., 2019; Kurth-Nelson et al., 2016). This mixture ratio can be adjusted. Adding more null data causes the states to be positively correlated with each other, while less null data leads to negative correlation. We adjust the ratio to bring the correlation between states as close to zero as possible. In Figure 3d, we show in simulation the ensuing benefit for sequence detection. An alternative method is penalizing covariance between states in the classifier’s cost function (Weinberger et al., 1988). A key parameter in training high-dimensional decoding models is the degree of regularization. In sequence analysis, we are often interested in spontaneous reactivation of state representations, as in replay. However, our decoding models are typically trained on task-evoked data because this is the only time at which we know the ground truth of what is being represented. This poses a challenge insofar as the models best suited for decoding evoked activity at training may not be well suited for decoding spontaneous activity at subsequent tests. Regularizing the classifier (e.g. with an L1 norm) is a common technique for increasing out-of-sample generalization (to avoid overfitting). Here, it has the added potential benefit of reducing spatial correlation between classifier weights. During classifier training, we can impose L1 or L2 constraints over the inference of classifier coefficients, $W.$ This amounts to finding the coefficients, $W$, that maximize the likelihood of the data observations under the constraint imposed by the regularization term. L1 regularization can be phrased as maximizing the likelihood, subject to a regularization penalty on the L1 norm of the coefficient vector: (10) $\mathrm{W}=\underset{W}{\mathrm{a}\mathrm{r}\mathrm{g}\mathrm{m}\mathrm{a}\mathrm{x}}\left[\mathrm{l}\mathrm{o}\mathrm{g}\left(\mathrm{P}\left(\mathrm{Y}|\mathrm{M},\mathrm{W}\right)\right)+\ mathrm{b}{\lambda }_{L1}||\mathrm{W}|{|}_{1}\right]$ L2 regression can be viewed as a problem of maximizing the likelihood, subject to a regularization penalty on the L2 norm of the coefficient vector: (11) $\mathrm{W}=\underset{W}{\mathrm{a}\mathrm{r}\mathrm{g}\mathrm{m}\mathrm{a}\mathrm{x}}\left[\mathrm{l}\mathrm{o}\mathrm{g}\left(\mathrm{P}\left(\mathrm{Y}|\mathrm{M},\mathrm{W}\right)\right)+\ mathrm{b}{\lambda }_{L2}||\mathrm{W}|{|}_{2}\right]$ where M is the task data, with dimension of number of observations, $b$, by number of sensors, $s$. Y is the label of observations, a vector with dimension of $b$ by 1. $P(Y|M,W)=σ(MW)$, and $σ$ is the logistic sigmoid function. We simulate data with varying numbers of true sequences at 40 ms lag and find that the beta estimate of sequence strength at 40 ms positively relates to the number of sequences. We also find that L1 weight regularization is able to detect sequences more robustly than L2 regularization, while L2 performs no better than an unregularized model (Figure 3e). The L1 models also have much lower spatial correlation, consistent with L1 achieving better sequence detection by reducing the covariances between classifiers. In addition to minimizing spatial correlations, as discussed above, it can also be shown that L1-induced sparsity encodes weaker assumptions about background noise distributions into the classifiers as compared to L2 regularization (Higgins, 2019). This might be of special interest to researchers who want to measure replay during sleep. Here, the use of sparse classifiers is helpful as background noise distributions are likely to differ more substantially from the (awake state) training data. So far, we have shown how to quantify sequences in representational dynamics. An essential final step is assessing the statistical reliability of these quantities. All the tests described in this section evaluate the consistency of sequences across subjects. This is important because even in the absence of any real sequences of task-related representations spontaneous neural activity is not random but follows repeating dynamical motifs (Vidaurre et al., 2017). Solving this problem requires a randomized mapping between the assignment of physical stimuli to task states. This can be done across subjects, permitting valid inference at the group level. At the group level, the statistical testing problem can be complicated by the fact that sequence measures do not in general follow a known distribution. Additionally, if a state-to-state lag of interest ($Δt$) is not known a priori, it is then necessary to perform tests at multiple lags, creating a multiple comparisons problem over a set of tests with complex interdependencies. In this section, we discuss inference with these issues in mind. Distribution of sequenceness at a single lag If a state-to-state lag of interest ($Δt$) is known a priori, then the simplest approach is to compare the sequenceness against zero, for example, using either a signed-rank test or one-sample t test (assuming Gaussian distribution). Such testing assumes the data are centred on zero if there were no real sequences. We show this approach is safe in both simulation (assuming no real sequences) and real MEG data where we know there are no sequences. In simulation, we assume no real sequences, but state time courses are autocorrelated. At this point, there is no systematic structure in the correlation between the neuronal representations of different states (see later for this consideration). We then simply select the 40 ms time lag and compare its sequenceness to zero using either a signed-rank test or one-sample t test. We compare false-positive rates predicted by the statistical tests with false-positive rates measured in simulation (Figure 4a). We see the empirical false positives are well predicted by theory. We have tested this also on real MEG data. In Liu et al., 2019, we had one condition where we measured resting activity before the subjects saw any stimuli. Therefore, by definition these sensory stimuli could not be replayed, we can use classifiers from these stimuli (measured later) to test a false-positive performance of statistical tests on replay. Note, in our case, that each subject saw the same stimuli in a different order. They could not know the correct stimulus order when these resting data were acquired. These data provide a valid null for testing false positives. To obtain many examples, we randomly permute the eight different stimuli 10,000 times and then compare sequenceness (at 40 ms time lag) to zero using either a signed-rank test or one-sample t test across subjects. Again, predicted and measured false-positive rates match well (Figure 4b, left panel). This holds true across all computed time lags (Figure 4b, right panel). An alternative to making assumptions about the form of the null distribution is to compute an empirical null distribution by permutation. Given that we are interested in the sequence of states over time, one could imagine permuting either state identity or time. However, permuting time uniformly will typically lead to a very high incidence of false positives as time is not exchangeable under the null hypothesis (Figure 4c, blue colour). Permuting time destroys the temporal smoothness of neural data, creating an artificially narrow null distribution (Liu et al., 2019; Kurth-Nelson et al., 2016). This false positive also exists if we circular shift the time dimension of each state. This is because the signal is highly non-stationary. Replays come in bursts, as recently analysed ( Higgins et al., 2021), and this will break a circular shift (Harris, 2020). State permutation, on the other hand, only assumes that state identities are exchangeable under the null hypothesis, while preserving the temporal dynamics of the neural data represents a safer statistical test that is well within 5% false-positive rate (Figure 4c, purple colour). Correcting for multiple comparisons If the state-to-state lag of interest is not known, we have to search over a range of time lags. As a result, we then have a multiple comparison problem. Unfortunately, we do not as yet have a good parametric method to control for multiple testing over a distribution. It is possible that one could use methods that exploit the properties of Gaussian random fields, as is common in fMRI (Worsley et al., 1996), but we have not evaluated this approach. Alternatively, we could use Bonferroni correction, but the assumption that each computed time lag is independent is likely false and overly We recommend relying on state identity-based permutation. To control for the family-wise error rate (assuming $α=0.05$), we want to ensure there is a 5% probability of getting the tested sequenceness strength ($Stest$) or bigger by chance in *any* of the multiple tests. We therefore need to know what fraction of the permutations gives $Stest$ or bigger in any of their multiple tests. If any of the sequenceness scores in each permutation exceed $Stest$, then the maximum sequenceness score in the permutation will exceed $Stest$, so it is sufficient to test against the maximum sequenceness score in the permutation. The null distribution is therefore formed by first taking the peak of sequenceness across all computed time lags of each permutation. This is the same approach as used for family-wise error correction for permutations tests in fMRI data (Nichols, 2012), and in our case it is shown to behave well statistically (Figure 4d). We can choose which permutations to include in the null distribution. For example, consider a task with two sequences, $Seq1:A→B→C→D$ and $Seq2:E→F→G→H$. We can form the null distribution either by permuting all states (e.g. one permutation might be E $→F→A→B$, H $→C→E→D$), as implemented in Kurth-Nelson et al., 2016. Alternatively, we can form a null distribution which only includes transitions between states in different sequences (e.g. one permutation might be D $→G→A→E$, H $→C→F→B$), as implemented in Liu et al., 2019. In each case, permutations are equivalent to the test data under the assumption that states are exchangeable between positions and sequences. The first case has the advantage of many more possible permutations, and therefore may make more precise inferential statements in the tail. The second case may be more sensitive in the presence of a signal as the null distribution is guaranteed not to include permutations which share any transitions with the test data (Figure 4e). For example, in Figure 4e, the blue swaps are the permutations that only exchange state identity across sequences, as in Liu et al., 2019, while the red swaps are the permutations that permit all possible state identity permutations, as in Kurth-Nelson et al., 2016. Note that there are many more different state permutations in red swaps than in blue swaps. We can make different levels of inferences by controlling the range of the null distributions in the state permutation tests. Cautionary note on exchangeability of states after training Until now, all non-parametric tests have assumed that state identity is exchangeable under the null hypothesis. Under this assumption, it is safe to perform state identity-based permutation tests on $ZF$ and $ZB$. In this section, we consider a situation where this assumption is broken. More specifically, take a situation where the neural representation of states $A$ and $B$ is related in a systematic way or, in other words, the classifier on state $A$ is confused with state $B$, and we are testing sequenceness of $A→B$. Crucially, to break the exchangeability assumption, representations of $A$ and $B$ have to be systematically more related than other states, for example, $A$ and $D$. This cannot be caused by low-level factors (e.g. visual similarity) because states are counterbalanced across subjects, so any such bias would cancel at the population level. However, such a bias might be induced by task training. In this situation, it is, in principle, possible to detect sequenceness of $A→B$ even in the absence of real sequences. In the autocorrelation section above, we introduced protections against the interaction of state correlation with autocorrelation. These protections may fail in the current case as we cannot use other states as controls (as we do in the multiple linear regression) because $A$ has systematic relationship with $B$, but not other states. State permutation will not protect us from this problem because state identity is no longer exchangeable. Is this a substantive problem? After extensive training, behavioural pairing of stimuli can indeed result in increased neuronal similarity (Messinger et al., 2001; Sakai and Miyashita, 1991). These early papers involved long training in monkeys. More recent studies have shown induced representational overlap in human imaging within a single day (Kurth-Nelson et al., 2015; Barron et al., 2013; Wimmer and Shohamy, 2012). However, when analysed across the whole brain, such representational changes tend to be localized to discrete brain regions (Schapiro et al., 2013; Garvert et al., 2017), and as a consequence may have limited impact on whole-brain decodeability. Whilst we have not yet found a simulation regime in which false positives are found (as opposed to false negatives), there exists a danger in cases where, by experimental design, the states are not Uncovering temporal structure of neural representation is important, but it is also of interest to ask where in the brain a sequence is generated. Rodent electrophysiology research focuses mainly on the hippocampus when searching for replay. One advantage of whole-brain non-invasive neuroimaging over electrophysiology (despite many known disadvantages, including poor anatomical precision, low signal-noise ratio) is in its ability to examine neural activity in multiple other brain regions. Ideally, we would like a method that is capable of localizing sequences of more abstract representation in brain regions beyond hippocampus (Liu et al., 2019). We want to identify the time when a given sequence is very likely to unfold, so we can construct averages of independent data over these times. We achieve this by transforming from the space of original states, $Xorig$, to the space of sequence events, $Xseq$. First, based on the transition of interest, $T$, we can obtain the projection matrix, $Xproj$: (12) ${X}_{proj}={X}_{orig}×T$ If we know the state lag within sequence, $Δt$ (e.g. the time lag give rise to the strongest sequenceness), or have it a priori, we can obtain the time-lagged matrix, $Xlag$: (13) ${X}_{lag}={X}_{orig}\left(t-\mathrm{\Delta }\mathrm{t}\right)$ Then, we obtain state space with sequence event as states by element-wise multiply $Xproj$ and $Xlag$: (14) ${X}_{seq}={X}_{lag}.\ast {X}_{proj}$ Each element in $Xseq$ indicates the strength of a (pairwise) sequence at a given moment in time. At this stage, $Xseq$ is a matrix with number of time points as rows (same as $Xorig$), and with number of pairwise sequences (e.g. A->B; B->C; etc.) as columns. Now on this matrix, $Xseq$, we can either look for sequences of sequences (see Appendix 3), or sum over columns (i.e. average over pairwise sequence events), and obtain a score at each time point reflecting how likely it is to be a sequence member (Figure 5a). Source localization of replay onset. We can use this score to construct averages of other variables that might co-vary with replay. For example, if we choose time points when this score is high (e.g. 95th percentile) after being low for the previous 100 ms and construct an average time-frequency plot of the raw MEG data aligned to these times, we can reconstruct a time-frequency plot that is, on average, associated with replay onset (Figure 5b). Note that although this method assigns a score for individual replay events as an intermediary variable, it results in an average measure across many events. This approach is similar to spike-triggered averaging (Sirota et al., 2008; Buzsáki et al., 1983). Applying this to real MEG data during rest, we can detect increased hippocampal power at 120–150 Hz, at replay onset (Figure 5b, c). Source reconstruction in the current analysis was performed using linearly constrained minimum variance (LCMV) beamforming, a common method for MEG source localization. This is known to suffer from distal correlated sources (Hincapié et al., 2017). A better method may be Empirical Bayesian Beamfomer for accommodating correlated neural source as a priori (O'Neill, 2021). So far, we have introduced TDLM in the context of analysing human MEG data. Relatedly, its application on human EEG data was also explored (Appendix 4: Apply TDLM to human whole-brain EEG data). Historically, replay-like phenomena have been predominantly studied in rodents with electrophysiology recordings in the hippocampal formation (Davidson et al., 2009; Grosmark and Buzsáki, 2016; Tingley and Peyrache, 2020). This raises interesting questions: how does TDLM compare to the existing rodent replay methods, can TDLM be applied to spiking data for detecting rodent replays, and what are the pros and cons? In this section, we address these questions. Generality of graph- vs. line-based replay methods Given that TDLM works on the decoded state space, rather than sensor (with analogy to cell) level, we compared TDLM to rodent methods that work on the posterior decoded position (i.e., state) space, normally referred to as Bayesian-based methods (Tingley and Peyrache, 2020). (Note that these methods are typically Bayesian in how position is decoded from spikes [Zhang et al., 1998] but not in how replay is measured from decoded position.) Two commonly used methods are Radon transform (Davidson et al., 2009) and linear weighted correlation (Grosmark and Buzsáki, 2016). Both methods proceed by forming a 2D matrix, where one dimension is the decoded state (e.g. positions on a linear track), and the other dimension is time (note that the decoded state is embedded in 1D). The methods then try to discover if an ordered line is a good description of the relationship between state and (parametric) time. For this reason, we call this family of approaches ‘line The radon method uses a discrete Radon transform to find the best line in the 2D matrix (Toft, 1996) and then evaluates the radon integral, which will be high if the data lie on a line (Figure 6a). It compares this to permutations of the same data where the states are reordered (Tingley and Peyrache, 2020). The linear weighted correlation method computes the average correlation between the time and estimated position in the 1D embedding (Figure 6b). The correlation is non-zero provided there is an orderly reactivation along the state dimension. Temporal delayed linear modelling (TDLM) vs. existing rodent replay methods. Both methods are applied to decoded positions, where they are sorted based on the order in a linearized state space. TDLM also works on the decoded position space, but instead of directly measuring the relationship between position and time, it measures the transition strength for each possible state to state transitions (Figure 6c). This is a key difference between TDLM and these popular existing techniques. To reiterate, the latter rely on a continuous parametric embedding of behavioural states and time. TDLM is fundamentally different as it works on a graph and examines the statistical likelihood of some transitions happening more than others. This is therefore a more general approach that can be used for sequences drawn from any graph (e.g. 2D maze, Figure 6d), not just graphs with simple embeddings (like a linear track). For example, in a non-spatial decision-making task (Kurth-Nelson et al., 2016), all states lead to two different states and themselves can be arrived at from two other different states (Figure 6e). Existing ‘line search’ methods will not work because there is no linear relationship between time and states (Figure 6f). While continuous spaces can be analysed in TDLM by simply chunking the space into discrete states, TDLM in its original form may potentially be less sensitive for such analyses than techniques with built-in assumptions about the spatial layout of the state space, such as the linear relationship between time and reactivated states (Appendix 5 ‘Less sensitivity of TDLM to skipping sequences’). In essence, because TDLM works on a graph, it has no information about the Euclidean nature of the state space, while techniques that make assumptions about the linear relationship between space and time benefit from these assumptions. For example, detecting state 1 then state 5 then state 10 counts as replay in these techniques, but not in TDLM. However, TDLM can be extended to address this problem. For continuous state spaces, we first need to decide how to best discretize the space. If we choose a large scale, we will miss replays that occur predominantly within a spatial bin. If we choose a small scale, we will miss transitions that jump spatial bins. A simple solution is to apply TDLM at multiple different scales and take an (variance-weighted) average of the sequenceness measures across different scales. For example, when measuring replay at the same speed, we can average events that travel 5 cm in 10 ms together with events that travel 10 cm in 20 ms. Specifically, to perform multi-scale TDLM, we discretize position bins at multiple widths. This generates rate maps at multiple scales (e.g. 5 cm, 10 cm, 20 cm, 40 cm), and hence a multi-scale state space. For each replay speed of interest, we apply TDLM separately at each scale, and then take a variance-weighted average of replay estimates over all scales. (15) ${\beta }_{M}=\frac{\sum _{\mathbit{i}=1}^{\mathbit{n}}{\mathbit{\beta }}_{\mathbit{i}}/{\mathbit{V}}_{\mathbit{i}}}{\sum _{\mathbit{i}=1}^{\mathbit{n}}1/{\mathbit{V}}_{\mathbit{i}}}$ where $βi$ is the sequence strength of given speed (i.e. state-to-state lag) measured at scale $i$, $Vi$ is the variance of its $βi$ estimator, and $n$ is the number of scales. In the end, statistical testing is performed on the precision weighted averaged sequence strength, $βM$, in the same way as we do in the original TDLM. It is easy to see why this addresses the potential concerns raised above as some scales will capture the 1 -> 2 -> 3 transitions, whilst others will capture the 1 -> 10 -> 20 transitions: because the underlying space is continuous, we can average results of the same replay speed together, and this will reinstate the Euclidean assumptions. Applying multi-scale TDLM to real rodent data (place cells in CA1) We demonstrate the applicability of multi-scale TDLM by analysing CA1 place cell spiking data from Ólafsdóttir et al., 2016. In Ólafsdóttir et al., 2016, rats ran multiple laps on a 600 cm Z maze and were then placed in a rest enclosure for 1.5 hr (Figure 7a). The Z maze consists of three tracks, with its ends and corners baited with sweetened rice to encourage running from one end to the other. The animal’s running trajectory was linearized, dwell time and spikes were binned into 2 cm bins and smoothed with a Gaussian kernel (σ = 5 bins). We generated rate maps separately for inbound (track 1 -> track 2 -> track 3) and outbound (track 3 -> track 2 -> track 1) running (see details in section ‘Rodent replay dataset’). Temporal delayed linear modelling (TDLM) applied to real rodent data. As in Ólafsdóttir et al., 2016, cells recorded in CA1 were classified as place cells if their peak firing field during track running was above 1 Hz with a width of at least 20 cm (see an example in Figure 7b). The candidate replay events were identified based on multi-unit (MU) activity from place cells during rest time. Periods exceeding the mean rate by three standard deviations of MU activity were identified as possible replay events. Events less than 40 ms long, or which included activity from less than 15% of the recorded place cell ensemble, were rejected (see an example of putative replay event in Figure 7c), and the remaining events were labelled putative replay events. We analysed data from one full recording session (track running for generating rate map, post-running resting for replay detection) from Rat 2192 reported in Ólafsdóttir et al., 2016. Following the procedure described above, we identified 58 place cells and 1183 putative replay events. Replay analysis was then performed on the putative replay events, separately for inbound and outbound rate maps given the same position has a different decoded state depending on whether it was during an outbound or inbound run. A forward sequence is characterized by states from the outbound map occurring in the outbound order or states from the inbound map occurring in the inbound order. Conversely, a backward sequence is when states from the inbound map occur in the outbound order or states from the outbound map occur in the inbound order. Candidate events were decoded based on a rate map, transforming the ncells * ntime to nstates * ntime. Each entry in this state space represents the posterior probability of being in this position at a given time. Replay analysis was performed solely on this decoded state Note that TDLM is applied directly to the concatenated rather than individual replay events. This is because TDLM is a linear modelling framework. Applying TDLM on each single replay event and then averaging the beta estimates (appropriately weighted by the variances) is equivalent to running TDLM once on the concatenated replay events. It quantifies the average amount of replay across many events, which is different compared to existing replay methods that focus on single replay events. Because TDLM addresses statistical questions in linear modelling, it does not require secondary statistics to ask whether the ‘counts’ of individual events are more likely than chance or more likely in one situation than another. During the whole sleep period, TDLM identified a significant forward sequence for the outbound map with a wide speed range around from 1 to 10 m/s (Figure 7d, left panel), consistent with recent findings from Denovellis, 2020 on varying replay speed (similar results were obtained for inbound map, not shown here for simplicity). In our analysis, the fastest speed is up to 10 m/s, which is around 20× faster than its free running speed, representing approximately half a track-arm in a typical replay event, consistent with previous work (Lee and Wilson, 2002; Davidson et al., 2009; Karlsson and Frank, 2009; Nádasdy et al., 1999). As pointed out by van der Meer et al., 2020, there are two types of statistical questions: a ‘first-order’ sequence question, which concerns whether an observed sequenceness is different from random (i.e. do replays exist?); and a ‘second-order’ question, which requires a comparison of sequenceness across conditions (i.e. do replays differ?). Because it is embedded in a linear regression framework, TDLM is ideally placed to address such questions. There are two ways of asking such questions in linear modelling: contrasts and interactions. We explain them with examples here. After fitting a regression model, resulting in coefficients for different regressors, we can test hypotheses about these coefficients by constructing linear combinations of the coefficients that would be zero under the null hypothesis. For example, if we want to test whether effect A is greater than effect B, then we can compute the linear contrast A – B (which would be zero under the null hypothesis) and perform statistics on this new measure. If we want to test whether replay increases linearly over five conditions [A, B, C, D, E], we can compute the linear contrast −2*A – B + 0*C + D + 2*E (which would be zero under the null hypothesis) and perform statistics on this new measure. Statistics (within or across animals) can operate with these contrasts in exactly the same way as with the original coefficients from the linear model. Here, we demonstrate this by showing in our example dataset that there was a greater preponderance for forward than backward replay. We construct the contrast (forwards – backwards) and test it against zero using a multiple-comparison-controlled permutation test (Figure 7d, right panel, pink line). By constructing a different contrast (forwards + backwards), we can also show that the total replay strength across both types of replays was significant (Figure 7d, right panel, green line). A second method for performing second-order tests is to introduce them into the linear regression as interaction terms, and then perform inference on the regression weights for these interactions. This means changing Equation 2 to include new regressors. For example, if interested in how reactivations change over time, one could build new regressors ($Xtimek(t)$), obtained by element-wise multiplying the state regressor, e.g. $Xk(t)$ with time indices ($Xtimek(t)=Xk(t).∗time$). Now the first-level GLM is constructed as (omitting residual term ε, same as Equation 2): (16) ${X}_{j}\left(t+\mathrm{\Delta }t\right)=\sum _{k=1}^{n}{X}_{k}\left(t\right){\beta }_{kj}+Xtim{e}_{k}\left(t\right)\beta {t}_{kj}$ Example regressors in the design matrix can be seen in Figure 7e. The first regressor, $Xk(t)$, is one of the state reactivation regressors used in standard TDLM. The second regressor, $Xtimek(t)$, is the same as $Xk(t)$ multiplied by time. (There are k regressors of each form in regressor matrix.) Here, we chose to demean the time regressor before the interaction, so the early half of the regressor is negative and the late half is positive. This has no effect on the regression coefficients of the interaction term, but, by rendering the interaction approximately orthogonal to $Xk(t)$, it makes it possible to estimate the main effect and the interaction in the same regression. Note that the interaction regressor is orthogonal to the state reactivation regressor, so it will have no effect on the first-order regression terms. If we include such regressors for all states, then we can get two measures for each replay direction (sequence effect and time effect). The first tells us the average amount of replay throughout the sleep period (first order). The second tells us whether replay increases or decreases as time progresses through the sleep period (second order). Orthogonal tests in regions of interest When examining forward–backward replay above, we did separate inference for each replay speed, and then performed multiple comparison testing using the max-permutation method (see section 'Statistical inference'). We now take the opportunity to introduce another method common in human literature. To avoid such multiple comparison correction, it is possible to select a ‘region of interest’ (ROI), average the measure in question over that ROI, and perform inference on this average measure. Because we are now only testing one measure, there is no multiple comparison problem. Critical in this endeavour, however, is that we do not use the measure under test or anything that correlates with that measure as a means to define the ROI. This will induce a selection bias (Kriegeskorte et al., 2009). In the example in Figure 7f, we have used the average replay (forwards + backwards) to select the ROI. We are interested in speeds in which there is detectable replay on average across both directions and the whole sleep period (Figure 7d, right panel, green shaded area). If we select our ROI in this way, we cannot perform unbiased inference on first-order forward or backward replay because forward and backward regressors correlate with their sum (Figure 7f, statistical inference in the red rectangle is biased). However, we can perform unbiased inference on several second-order effects (Figure 7f, statistical inference in the green rectangle). We can test (forwards – backwards) assuming the difference of terms is orthogonal to their sum (as it is in this case). Further, we can test any interaction with time because the ROI is defined on the average over time and the interaction looks for differences as a function of time. When we perform these tests in our example dataset (Figure 7f, green rectangle), we confirm that there are more forward than backward replay on average. We further show that forward replay is decreasing with time during sleep, and that backward replay is increasing with time. Their difference (forwards – backwards) is also In addition to the time-varying effect, we can also test the spatial modulation effect, that is, how replay strength (at the same replay speed) changes as a function of its spatial content. For example, is replay stronger for transitions in the start of track compared to the end of the track? As an illustrative example, we have used the same ROI defined above and test the spatial modulation effect on forward replay. Note that this test of spatial modulation effect is also unbiased from the overall strength of forward replay, and thereby no selection bias in this ROI, as well. For visualization purposes, we have first plotted the estimated strength for each pairwise forward sequence (Figure 8a), separately within each scale (from 1 to 4, with increasing spatial scales). The pairwise sequences are ordered from the start of the track to the end of the track. Alongside the pairwise sequence plot, we have plotted the mean replay strength over all possible pairwise transitions (in red) in comparison to the mean of all control transitions (in grey; as expected, they are all around 0). Note that we cannot perform inference on the difference between the red and grey bars here because they have been selected from a biased ROI. It is simply for illustration purposes. We have therefore put them in red squares to match Figure 7f. Pairwise sequence and spatial modulation effect. To formally test the spatial modulation effect, we can use the exact same approach as outlined above in section 'Linear contrasts'. Here, we test a linear increase or decrease across different transitions. We take the linear contrast weight vector, $c$ ([-2,-1,0,1,2] for the largest scale, [-3:3] for the next scale, [-5:5] for the next scale, and [-12:12] for the smallest scale), and multiply these by the beta estimates of the transitions: (17) $contrast={c}^{T}\beta$ If this new measure, $contrast$, is different from zero, then there is a linear increase/decrease from one end of the track to the other. Note that this new contrast is no longer biased by the ROI selection as each transition contributed equally to the ROI selection, but we are now comparing between transitions. Inference on this contrast is therefore valid. We have therefore put them in green boxes to match Figure 7f (Figure 8b, c). Within the larger two scales, these contrasts are significantly negative (tested against permutations in exactly the same way as the ‘mean’ contrasts). Since we are still in the linear domain, we can now just average these contrasts across the four scales and get a single measure for spatial modulation of replay. This average measure is significantly negative (Figure 8b). Hence, on average, forward replay is stronger at the beginning of the track. We can do the same thing for backward replay. We found an opposite pattern, that is, strength of backward replay is stronger at the end of the track, and similarly, it is not significant in the smallest scale and becomes significant in the largest scale, and also significant on average across all scales (Figure 8c). Again, since we are in the linear domain, we can further contrast these contrasts, asking if this effect is different for forward and backward replay. We found that the difference is indeed significant (Figure 8d). This set of results is consistent with previous rodent literature (Diba and Buzsáki, 2007). Note that we would like to stress again that this analysis is not about a single replay event but is testing for average differences across all replay events. Notably, extra care needs to be exercised for second-order questions (compared to first-order ones). Problems can emerge due to biases in second-order inference, such as in behavioural sampling (e.g. track 1 may be experienced more than track 2 during navigation; this creates a bias when evaluating replay in tack 1 vs. track 2 during rest). Such issues are real but can be finessed by experimental design considerations of a sort commonly applied in the human literature. For example: 1. Ensure that biases that might occur within subjects will not occur consistently in the same direction across subjects (e.g. by randomizing stimuli across participants). 2. Compare across conditions in each subject. 3. Perform a random effects inference across the population by comparing against the between-subject variance. Such approaches are not yet common in rodent electrophysiology and may not be practical in some instances. In such cases, it remains important to be vigilant to guard against these biases with TDLM as with other techniques. If these approaches are feasible, the machinery for computing second-order inferences is straightforward in a linear framework like TDLM. We have now discussed the applicability of TDLM in relation to human MEG, as well as in rodent electrophysiology (with comparisons to standard replay detection methods). A preliminary attempt at detecting replay in human EEG is also shown in Appendix 4. We believe that this establishes TDLM as a domain-general sequence analysis method: TDLM works at the level of decoded state space, rather than the sensor/cell level of the data. It can be applied to a wide range of data types and settings in both humans and rodents, stimulating cross-fertilization across disciplines. It is based on the GLM framework, and this lends it flexibility for regressing out potential confounds while offering an intuitive understanding of the overall approach. In this section, we discuss the generality of TDLM. TDLM assesses the statistical likelihood of certain transitions on a graph. In its original form, TDLM works on discrete states (i.e. nodes in the graph). Continuous spaces can be incorporated by chunking them into discrete spaces. Furthermore, by averaging the same replay speeds measured at multiple scales of discretization (see section ‘TDLM for rodent replay’), the statistical benefits of an assumption of a Euclidean geometry can be recovered. The longer the time length, the more accurate the estimates in TDLM. This is because TDLM assesses sequence evidence based on a GLM framework, where time length is the sample size. Higher sample size will lead to more accurate estimates. In the case of rodent analysis, we recommend applying TDLM to aggregated replay events rather than to a single event because this results in (1) more time samples for estimation and (2) more activated states in the analysis time framework. Unlike other techniques which search for a single replay in a single event, this aggregation can be implemented without losing generality as TDLM is able to handle multiple sequences in the same data with respect to different directions, contents, or speeds. Furthermore, by aggregating linearly across all replay events of the same condition, it provides a natural measure for comparing replay strength, speed, and direction across different experimental conditions. TDLM has already proved important in human experiments where complex state spaces have been used (Wimmer et al., 2020; Liu et al., 2019; Liu et al., 2021a; Kurth-Nelson et al., 2016). We expect this generality will also be important as rodent replay experiments move beyond 1D tracks, for example, to foraging in 2D, or in complex mazes. TDLM is a domain-general analysis framework for capturing sequence regularity of neural representations. It is developed on human neuroimaging data and can be extended to other data sources, including rodent electrophysiology recordings. It offers hope for cross-species investigations on replay (or neural sequences in general) and potentially enable studies of complex tasks in both humans and animals. TDLM adds a new analysis toolkit to the replay field. It is especially suited for summarizing replay strength across many events, comparing replay strength between conditions, and analysing replay strength in complex behavioural paradigms. Its linear modelling nature makes it amenable to standard statistical tests and thereby allows wide use across tasks, modalities, and species. Unlike alternative tools, we have not shown TDLM applied to individual replay events. The temporal dynamics of neural states have been studied previously with MEG (Vidaurre et al., 2017; Baker et al., 2014). Normally such states are defined by common physiological features (e.g. frequency, functional connectivity) during rest and termed resting state networks (e.g. default mode network [Raichle et al., 2001]). However, these approaches remain agnostic about the content of neural activity. The ability to study the temporal dynamics of representational content permits richer investigations into cognitive processes (Higgins et al., 2021) as neural states can be analysed in the context of their roles with respect to a range of cognitive tasks. Reactivation of neural representations has also been studied previously (Tambini and Davachi, 2019) using approaches similar to the decoding step of TDLM or multivariate pattern analysis (MVPA) ( Norman et al., 2006). This has proven fruitful in revealing mnemonic functions (Wimmer and Shohamy, 2012), understanding sleep (Lewis and Durrant, 2011), and decision-making (Schuck et al., 2016). However, classification alone does not reveal the rich temporal structures of reactivation dynamics. We have described the application of TDLM mostly during off-task state in this paper. The very same analysis can be applied to on-task data to test for cued sequential reactivation (Wimmer et al., 2020) or sequential decision-making (Eldar et al., 2020). For example, the ability to detect sequences on-task allows us to tease apart clustered from sequential reactivation, where this may be important for dissociating decision strategies (Eldar et al., 2018) and their individual differences (Wimmer et al., 2020; Eldar et al., 2020). TDLM, therefore, may allow testing of neural predictions from process models such as reinforcement learning during task performance (Dayan and Daw, 2008), which have proved hard to probe previously (Wimmer et al., 2020; Nour et al., 2021; Liu et al., 2019; Liu et al., 2021a). In the human neuroimaging domain, we have mainly discussed the application of TDLM with respect to MEG data. In Appendix 4, we show that TDLM also works well with EEG data. This is not surprising given EEG and MEG are effectively measuring the same neural signature, namely local field potential (or associated magnetic field) on the scalp. We do not have suitable fMRI data to test TDLM. However, related work has suggested that it might be possible to measure sequential reactivation using fMRI (Schuck and Niv, 2019), but particular methodological caveats need to be considered (e.g. a bias from last events due to slow haemodynamic response) (Wittkuhn and Schuck, 2021). We believe that TDLM can deal with this, given it models out non-specific transitions, although further work is needed. In future, we consider it will be useful to combine the high temporal resolution available in MEG/EEG and the spatial precision available in fMRI to probe region-specific sequential In the rodent electrophysiology domain, we show what TDLM (its multi-scale version) has to offer uniquely compared to existing rodent replay methods. Most importantly, TDLM works on an arbitrary graph and its generality makes replay studies in complex mazes possible. Its linear framework makes the assessment of time-varying effect on replay (Figure 7) or other second-order sequence questions straightforward. In future work, a promising direction will be to further separate process noise (e.g. intrinsic variability within sequences) and measurement noise (e.g. noise in MEG recording). This might be achieved by building latent state-space models as have been explored recently in rodent community (Maboudi et al., 2018; Denovellis, 2020). Together, we believe that TDLM opens doors for novel investigations of human cognition, including language, sequential planning, and inference in non-spatial cognitive tasks (Eldar et al., 2018; Kurth-Nelson et al., 2016), as well as complicated tasks in rodents, for example, forging in 2D mazes. TDLM is particularly suited to test specific neural predictions from process models, such as reinforcement learning. We hope that TDLM can promote an across-species synthesis between experimental and theoretical neuroscience and, in so doing, shed novel light on neural computation. We simulate the data so as to be akin to human MEG. Task data for obtaining state patterns Request a detailed protocol We generate ground truth multivariate patterns (over sensors) of states. We then add random Gaussian noise on the ground truth state patterns to form the task data. We train a logistic regression classifier on the task data so as to obtain a decoding model for each of the state patterns. Later we use this decoding model to transform the resting-state data from sensor space (with dimension of time by sensors) to the state space (with dimension of time by states). Rest data for detecting sequences Request a detailed protocol First, to imitate temporal autocorrelations and spatial correlations commonly seen in human neuroimaging data, we generate the rest data using an autoaggressive model with multivariate (over sensors) Gaussian noise and add a dependence among sensors. In some simulations, we also add a rhythmic oscillation (e.g. 10 Hz). Second, we inject a sequence of state patterns in the rest data. The sequences follow the ground truth of state transitions of interest. The state-to-state time lag is assumed to follow a gamma distribution. We vary the number of sequences to be injected in the rest data to control the strength of sequences. Lastly, we project the rest data to the decoding model of states obtained from the task data. TDLM will then work on the decoded state space. An example of the MATLAB implementation is called ‘Simulate_Replay’ from the Github link: https://github.com/yunzheliu/TDLM (copy archived at swh:1:rev:015c0e90a14d3786e071345760b97141700d6c85), Liu, Participants were required to perform a series of tasks with concurrent MEG scanning (see details in Liu et al., 2019). The functional localizer task was performed before the main task and was used to train a sensory code for eight distinct objects. Note that the participants were provided with no structural information at the time of the localizer. These decoding models, trained on the functional localizer task, capture a sensory-level neural representation of stimuli (i.e. stimulus code). Following that, participants were presented with the stimuli and were required to unscramble the ‘visual sequence’ into a correct order, that is, the ‘unscrambled sequence’ based on a structural template they had learned the day before. After that, participants were given a rest for 5 min. At the end, stimuli were presented again in random order, and participants were asked to identify the true sequence identity and structural position of the stimuli. Data in this session are used to train a structural code (position and sequence) for the objects. MEG data acquisition, preprocessing, and source reconstruction Request a detailed protocol We follow the same procedure that has been reported in Liu et al., 2019. We have copied it here for references. 'MEG was recorded continuously at 600 samples/s using a whole-head 275-channel axial gradiometer system (CTF Omega, VSM MedTech), while participants sat upright inside the scanner. Participants made responses on a button box using four fingers as they found most comfortable. The data were resampled from 600 to 100 Hz to conserve processing time and improve signal-to-noise ratio. All data were then high-pass-filtered at 0.5 Hz using a first-order IIR filter to remove slow drift. After that, the raw MEG data were visually inspected, and excessively noisy segments and sensors were removed before independent component analysis (ICA). An ICA (FastICA, http://research.ics.aalto.fi/ica/fastica) was used to decompose the sensor data for each session into 150 temporally independent components and associated sensor topographies. Artefact components were classified by combined inspection of the spatial topography, time course, kurtosis of the time course, and frequency spectrum for all components. Eye-blink artefacts exhibited high kurtosis (>20), a repeated pattern in the time course and consistent spatial topographies. Mains interference had extremely low kurtosis and a frequency spectrum dominated by 50 Hz line noise. Artefacts were then rejected by subtracting them out of the data. All subsequent analyses were performed directly on the filtered, cleaned MEG signal, in units of femtotesla. All source reconstruction was performed in SPM12 and FieldTrip. Forward models were generated on the basis of a single shell using superposition of basis functions that approximately corresponded to the plane tangential to the MEG sensor array. LCMV beamforming (Van Veen et al., 1997) was used to reconstruct the epoched MEG data to a grid in MNI space, sampled with a grid step of 5 mm. The sensor covariance matrix for beamforming was estimated using data in either broadband power across all frequencies or restricted to ripple frequency (120–150 Hz). The baseline activity was the mean neural activity averaged over −100 ms to −50 ms relative to sequence onset. All non-artefactual trials were baseline corrected at source level. We looked at the main effect of the initialization of sequence. Non-parametric permutation tests were performed on the volume of interest to compute the multiple comparison (whole-brain corrected) p-values of clusters above 10 voxels, with the null distribution for this cluster size being computed using permutations (n = 5000 permutations)'. This data is from Ólafsdóttir et al., 2016. We analysed one full recording session (track running for generating rate map, post-running resting for replay detection) from Rat 2192. In Ólafsdóttir et al., 2016, rats ran multiple laps on a Z maze and were then placed in a rest enclosure. The two parallel sections of the Z (190 cm each) were connected by a diagonal section (220 cm). Animals were pretrained to run on the track. At the recording session, rats were placed at one end of the Z-track. The ends and corners of the track were baited with sweetened rice to encourage running from one end to the other. In each session, rats completed 20 full laps (30–45 min). Following the track session, rats were placed in the rest enclosure for 1.5 hr. Following Ólafsdóttir et al., 2016, when generating rate maps we excluded data from both the ends and corners because the animals regularly performed non-perambulatory behaviours there. Periods when running speed was less than 3 cm/s were also excluded. Running trajectories were then linearized, dwell time and spikes were binned into 2 cm bins and smoothed with a Gaussian kernel (σ = 5 bins). We generated rate maps separately for inbound (track 1 -> track 2 -> track 3) and outbound (track 3 -> track 2 -> track 1) running. As in Ólafsdóttir et al., 2016, cells recorded in CA1 were classified as place cells if their peak firing field during track running was above 1 Hz and at least 20 cm wide. The candidate replay events were identified based on MU activity from place cells during rest time. Only periods exceeding the mean rate by three standard deviations of MU activity were identified as putative replay events. Events less than 40 ms long or which included activity from less than 15% of the recorded place cell ensemble were rejected. We analysed data from one full recording session (track running for generating rate map, post-running resting for replay detection) of Rat 2192 reported in Ólafsdóttir et al., 2016. Following the procedure described above, we have identified 58 place cells and 1183 putative replay events. Replay analysis was then performed on the putative replay events, separately for inbound and outbound rate maps. Source code of TDLM can be found at https://github.com/yunzheliu/TDLM. TDLM can be used iteratively. One extension of TDLM of particular interest is: multi-step sequences. It asks about a consistent regularity among multiple states. So far, we introduced methods for quantifying the extent to which the state-to-state transition structure in neural data matches a hypothesized task-related transition matrix. An important limitation of these methods is that they are blind to hysteresis in transitions. In other words, they cannot tell us about multi-step sequences. In this section, we describe a methodological extension to measure evidence for sequences comprising more than one transition: for example, $A→B→C$. The key ingredient is controlling for shorter sub-sequences (e.g. $A→B$ and $B→C$) in order to find evidence unique to a multi-step sequence of interest. Assuming constant state-to-state time lag, $Δt$, between A and B, and between B and C. We can create a new state space AB by shifting B up $A→B$, and element-wise multiply it with state A. This new state AB measures the reactivation strength of $A→B$, with time lag $Δt$. In the same way, we can create a new state space, BC, AC, etc. Then we can construct the same first-level GLM on the new state space. For example, if we want to determine the evidence of $A→B→C$ at time lag $Δt$, we can regress AB onto state time course C at each $Δt$ (Equation 1). But we want to know the unique contribution of AB to C. More specifically, we want to test if the evidence of $A→B→C$ is stronger than $X→B→C$, where X is any other state but not A. Therefore, similar to Equation 2, we need to control CB, DB, when looking for evidence of AB of C. Applying this method, we show that TDLM successfully avoids false positives arising out of strong evidence for shorter length (see simulation results in Appendix 1—figure 1a, and see results obtained on human neuroimaging data in Appendix 1—figure 1b). This process can be generalized to any number of steps. TDLM, in its current form, assumes a constant intra-sequence state-to-state time lag. If there is a variability between state transitions TDLM can still cope, but not very elegantly. Assume there is a three-state sequence, $A→B→C$, with intra-sequence variance. TDLM will need to test all possible combinations of state-to-state time lags in $A→B$ and $B→C$. If there are $n$ number of time lag of interest in either of the two transitions, TDLM will then have to test $n2$ possible time lag combinations. This is a large search space and one that increases exponentially as a function of the length of a sequence. We note that this analysis is different from a typical rodent replay analysis which assesses the overall evidence for a sequence length (Davidson et al., 2009; Grosmark and Buzsáki, 2016). TDLM asks if there is more evidence for A -> B -> C, above and beyond evidence for B -> C, for example. If the main question of interest is ‘do we have evidence of A -> B -> C in general’, as normally is the case in the rodent replay analysis (Davidson et al., 2009; Grosmark and Buzsáki, 2016), we should not control for shorter lengths. Instead, we can simply average the evidence together, as implemented in Kurth-Nelson et al., 2016. Extension to temporal delayed linear modelling (TDLM): multi-step sequences. Pseudocode of sensory code and abstract code cross-validations We have detailed the use of either sensory or abstract representations as the states in TDLM. We now take a step further and use sequences themselves as states. Using this kind of hierarchical analysis, we can search for sequences of sequences. This is useful because it can reveal temporal structure not only within sequence, but also between sequences. The organization between sequences is of particular interest for revealing neural computations. For example, the forward and backward search algorithms hypothesized in planning and inference (Penny et al., 2013) can be cast as sequences of sequences problem: the temporal structure of forward and backward sequence. This can be tested by using TDLM iteratively. To look for sequences between sequences, we need first to define sequences as new states. To do so, the raw state course, for example, state B, needs to be shifted up by the empirical within-sequence time lag $Δt$ (determined by the two-level GLM), to align with the onset of state A, if assuming sequence $A→B$ exist (at time lag $Δt$). Then, we can element-wise multiply the raw state time course A with the shifted time course B, resulting in a new state AB. Each entry in this new state time course indicates the reactivation strength of sequence AB at a given time. The general two-level GLMs framework still applies, but now with one important caveat. The new sequence state (e.g. AB) is defined based on the original states (A and B), and where we are now interested in a reactivation regularity, that is, sequence, between sequences, rather than the original states. We need therefore to control for the effects of the original states. Effectively, this is like controlling for main effects (e.g. state A and shifted state B) when looking for their interaction (sequence AB). TDLM achieves this by including time-lagged original state regressors A, B, in addition to AB, in the first-level GLM sequence analysis. Specifically, let us assume that the sequence state matrix is $Xseq$, after transforming the original state space to sequence space based on the empirical within-sequence time lag $Δtw$. Each column at $Xseq$ is sequence state, denoted by $Sij$, which indicates the strength of sequence i -> j reactivation. The raw state i is $Xi$, and the shifted raw state j is $Xjw$ (by time lag $Δtw$). In the first level GLM, TDLM ask for the strength of a unique contribution of sequence state $Sij$ to $Smn$ while controlling for original states ($Xi$ and $Xjw$). For each sequence state $ij$, at each possible time lag $Δt$, TDLM estimated a separate linear model: (18) ${S}_{mn}={X}_{i}\left(\mathrm{\Delta }t\right){\beta }_{i}+{X}_{jw}\left(\mathrm{\Delta }t\right){\beta }_{j}+{S}_{ij}\left(\mathrm{\Delta }t\right){\beta }_{ij}\left(\mathrm{\Delta }t\right)$ Repeat this process for each sequence state separately at each time lag, resulting in a sequence matrix $βseq$. At the second-level GLM, TDLM asks how strong the evidence for a sequence of interest is compared to sequences that have the same starting state or end state at each time lag. This second-level GLM will be the same as Equation 5, but with additional regressors to control for sequences that share the same start or end state. In simulation, we demonstrate, applying this method, that TDLM can uncover hierarchical temporal structure: state A is temporally leading state B with 40 ms lag, and the sequence A -> B tends to repeat itself with a 140 ms gap (Appendix 3—figure 1a). One interesting application of this is to look for theta sequence (Mehta et al., 2002; McNaughton et al., 2006; Buzsáki and Moser, 2013). One can think of theta sequence, a well-documented phenomenon during rodent spatial navigation, as a neural sequence repeating itself in theta frequency (6–12 Hz). In addition to looking for temporal structure of the same sequence, the method is equally suitable when searching for temporal relationships between different sequences. For example, assuming two different types of sequences, one sequence type has a within-sequence time lag at 40 ms; while the other has a within-sequence time lag at 150 ms (Appendix 3—figure 1b, left and middle panel); and there is a gap of 200 ms between the two types of sequences (Appendix 3—figure 1b, right panel). These time lags are set arbitrarily for illustration purposes. TDLM can accurately capture the dynamics both within and between the sequences, supporting a potential for uncovering temporal relationships between sequences under the same framework. Apply TDLM to human whole-brain EEG data An autocorrelation is commonplace in neuroimaging data, including EEG and fMRI. TDLM is designed to specifically take care of this confound, and, on this basis, we should be able to work with EEG and fMRI data. We do not have suitable fMRI data available to test TDLM but are interested to investigate this in more depth in our future work. We had collected EEG data from one participant to test whether TDLM would *just* work. The task was designed to examine on-task sequential replay in decision-making by Dr. Toby Wise. This is a ‘T-maze’ like task, where a participant needs to choose a left or right path based on the value received at the end of the path. We could decode seven objects well on the whole-brain EEG data using just raw amplitude feature (same with our MEG-based analysis) and could detect fast backward sequenceness (peaked at 30 ms time lag) during choice/planning time (Appendix 4—figure 1), similar to our previous MEG findings (Kurth-Nelson et al., 2016). As this result is from one subject, we are cautious about making an excessive claim, but nevertheless we believe that the data show the TDLM approach is highly promising for EEG data. Sequence detection in EEG data (from one participant). Less sensitivity of TDLM to skipping sequences In a linear track where replays only go in a single direction, it is possible that TDLM is less sensitive compared to the linear correlation or the Radon method, given the latter assumes a parametric relationship between space and time. For example, if only the first and last states are activated, but not the intermediate states, the existing methods will report replay, but TDLM will not, because in existing methods space and time are parametric quantities (Appendix 5—figure 1). In contrast, TDLM only knows about transitions on a graph. Parametric relationship between space and time vs. graph transitions. No new data is used or generated in the current paper. Data relevant for current paper is available at https://github.com/YunzheLiu/TDLM (copy archived at https://archive.softwareheritage.org/ swh:1:rev:015c0e90a14d3786e071345760b97141700d6c85). This dataset is from Ólafsdóttir et al., 2016. 24. Thesis 1. Higgins C Uncovering temporal structure in neural data with statistical machine learning models University of Oxford. Impaired neural replay of inferred relationships in schizophrenia Cell In press. 65. Thesis 1. Toft PA The Radon Transform: Theory and Implementation Technical University of Denmark. 69. Conference Advances in neural information processing systems Conference on Neural Information Processing Systems. pp. 1473–1480. Article and author information Author details James S. McDonnell Foundation (JSMF220020372) The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. We thank Matthew A Wilson for help with rodent theta sequence analysis. We thank Elliott Wimmer and Toby Wise for helpful discussion and generous sharing of their data. We thank Matt Nour for helpful comments on a previous version of the manuscript. YL is also grateful for the unique opportunity provided by the Brains, Minds and Machines Summer Course. We acknowledge fundings from the Open Research Fund of the State Key Laboratory of Cognitive Neuroscience and Learning to YL, Wellcome Trust Investigator Award (098362/Z/12/Z) to RJD, Wellcome Trust Senior Research Fellowship (104765/Z/ 14/Z), and Principal Research Fellowship (219525/Z/19/Z), together with a James S McDonnell Foundation Award (JSMF220020372), to TEJB; and Wellcome Trust Senior Research Fellowship (212281/Z/18/Z) to CB. Both Wellcome Centres are supported by core funding from the Wellcome Trust: Wellcome Centre for Integrative Neuroimaging (203139/Z/16/Z), Wellcome Centre for Human Neuroimaging (091593/Z/10/Z). The Max Planck UCL Centre is a joint initiative supported by UCL and the Max Planck Society. Human subjects: The human dataset used in this study was reported in Liu et al 2019. All participants were recruited from the UCL Institute of Cognitive Neuroscience subject pool, had a normal or corrected-to-normal vision, no history of psychiatric or neurological disorders, and had provided written informed consent prior to the start of the experiment, which was approved by the Research Ethics Committee at University College London (UK), under ethics number 9929/002. © 2021, Liu et al. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited. Views, downloads and citations are aggregated across all versions of this paper published by eLife. A two-part list of links to download the article, or parts of the article, in various formats. Downloads (link to download the article as PDF) Open citations (links to open the citations from this article in various online reference manager services) Cite this article (links to download the citations from this article in formats compatible with various reference manager tools) 1. Yunzhe Liu 2. Raymond J Dolan 3. Cameron Higgins 4. Hector Penagos 5. Mark W Woolrich 6. H Freyja Ólafsdóttir 7. Caswell Barry 8. Zeb Kurth-Nelson 9. Timothy E Behrens Temporally delayed linear modelling (TDLM) measures replay in both animals and humans eLife 10:e66917. Further reading 1. Addiction is commonly characterized by escalation of drug intake, compulsive drug seeking, and continued use despite harmful consequences. However, the factors contributing to the transition from moderate drug use to these problematic patterns remain unclear, particularly regarding the role of sex. Many preclinical studies have been limited by small sample sizes, low genetic diversity, and restricted drug access, making it challenging to model significant levels of intoxication or dependence and translate findings to humans. To address these limitations, we characterized addiction-like behaviors in a large sample of >500 outbred heterogeneous stock (HS) rats using an extended cocaine self-administration paradigm (6 hr/daily). We analyzed individual differences in escalation of intake, progressive ratio (PR) responding, continued use despite adverse consequences (contingent foot shocks), and irritability-like behavior during withdrawal. Principal component analysis showed that escalation of intake, progressive ratio responding, and continued use despite adverse consequences loaded onto a single factor that was distinct from irritability-like behaviors. Categorizing rats into resilient, mild, moderate, and severe addiction-like phenotypes showed that females exhibited higher addiction-like behaviors, with a lower proportion of resilient individuals compared to males. These findings suggest that, in genetically diverse rats with extended drug access, escalation of intake, continued use despite adverse consequences, and PR responding are highly correlated measures of a shared underlying construct. Furthermore, our results highlight sex differences in resilience to addiction-like behaviors. 2. Interactions between excitatory and inhibitory neurons are critical to computations in cortical circuits but their organization is difficult to assess with standard electrophysiological approaches. Within the medial entorhinal cortex, representation of location by grid and other spatial cells involves circuits in layer 2 in which excitatory stellate cells interact with each other via inhibitory parvalbumin expressing interneurons. Whether this connectivity is structured to support local circuit computations is unclear. Here, we introduce strategies to address the functional organization of excitatory-inhibitory interactions using crossed Cre- and Flp-driver mouse lines to direct targeted presynaptic optogenetic activation and postsynaptic cell identification. We then use simultaneous patch-clamp recordings from postsynaptic neurons to assess their shared input from optically activated presynaptic populations. We find that extensive axonal projections support spatially organized connectivity between stellate cells and parvalbumin interneurons, such that direct connections are often, but not always, shared by nearby neurons, whereas multisynaptic interactions coordinate inputs to neurons with greater spatial separation. We suggest that direct excitatory-inhibitory synaptic interactions may operate at the scale of grid cell clusters, with local modules defined by excitatory-inhibitory connectivity, while indirect interactions may coordinate activity at the scale of grid cell modules. 3. Traumatic brain injury (TBI) caused by external mechanical forces is a major health burden worldwide, but the underlying mechanism in glia remains largely unclear. We report herein that Drosophila adults exhibit a defective blood–brain barrier, elevated innate immune responses, and astrocyte swelling upon consecutive strikes with a high-impact trauma device. RNA sequencing (RNA-seq) analysis of these astrocytes revealed upregulated expression of genes encoding PDGF and VEGF receptor-related (Pvr, a receptor tyrosine kinase), adaptor protein complex 1 (AP-1, a transcription factor complex of the c-Jun N-terminal kinase pathway) composed of Jun-related antigen (Jra) and kayak (kay), and matrix metalloproteinase 1 (Mmp1) following TBI. Interestingly, Pvr is both required and sufficient for AP-1 and Mmp1 upregulation, while knockdown of AP-1 expression in the background of Pvr overexpression in astrocytes rescued Mmp1 upregulation upon TBI, indicating that Pvr acts as the upstream receptor for the downstream AP-1–Mmp1 transduction. Moreover, dynamin-associated endocytosis was found to be an important regulatory step in downregulating Pvr signaling. Our results identify a new Pvr–AP-1–Mmp1 signaling pathway in astrocytes in response to TBI, providing potential targets for developing new therapeutic strategies for TBI.
{"url":"https://elifesciences.org/articles/66917","timestamp":"2024-11-04T12:07:32Z","content_type":"text/html","content_length":"586183","record_id":"<urn:uuid:367b78f5-d47e-4046-9688-a03be78440e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00818.warc.gz"}
How To Find The Base Of A Right Triangle The Pythagorean Theorem, an equation that shows the relationship between a right triangle's three sides, can help you to find the length of its base. A triangle that contains a 90-degree or right angle in one of its three corners is called a right triangle. A right triangle's base is one of the sides that adjoins the 90-degree angle. TL;DR (Too Long; Didn't Read) The Pythagorean Theorem is essentially, _a_^2 + _b_^2 = _c_^2. Add side _a_ times itself to side _b_ times itself to arrive at the length of the hypotenuse, or side _c_ times itself. The Pythagorean Theorem The Pythagorean Theorem is a formula that gives the relationship between the lengths of a right triangle's three sides. The triangle's two legs, the base and height, intersect the triangle's right angle. The hypotenuse is the side of the triangle opposite the right angle. In the Pythagorean theorem, the square of the hypotenuse is equal to the sum of the squares of the other two sides: \(a^2 + b^2 = c^2\) In this formula, a and b are the lengths of the two legs and c is the length of the hypotenuse. The ^2 signifies that a, b, and c are squared. A number squared is equal to that number multiplied by itself – for example, 4^2 is equal to 4 times 4, or 16. Finding the Base Using the Pythagorean theorem, you can find the base, a, of a right triangle if you know the lengths of the height, b, and the hypotenuse, c. Since the hypotenuse squared is equal to the height squared plus the base squared, then: \(a^2 = c^2 – b^2\) For a triangle with a hypotenuse of 5 inches and a height of 3 inches, find the base squared: \(c^2 – b^2 = (5 × 5) – (3 × 3) = 25 – 9 = 16\) \(\implies a = 4\) Since b^2 equals 9 , then a equals the number that, when squared, makes 16. When you multiply 4 by 4, you get 16, so the square root of 16 is 4. The triangle has a base that is 4 inches long. A Man Called Pythagoras The Greek philosopher and mathematician, Pythagoras, or one of his disciples, is attributed with the discovery of the mathematical theorem still used today to calculate the dimensions of a right triangle. To complete the calculations, you must know the dimensions of the longest side of the geometric shape, the hypotenuse, as well as another one of its sides. Pythagoras migrated to Italy in about 532 BCE because of the political climate in his own country. Besides being credited with this theorem, Pythagoras – or one of the members of his brotherhood – also determined the significance of numbers in music. None of his writings have survived, which is why scholars don't know if it was Pythagoras himself who discovered the theorem or one of the many students or disciples who were members of the Pythagorean brotherhood, a religious or mystical group whose principles influenced the work of Plato and Aristotle. Cite This Article Zamboni, Jon. "How To Find The Base Of A Right Triangle" sciencing.com, https://www.sciencing.com/base-right-triangle-8121815/. 3 November 2020. Zamboni, Jon. (2020, November 3). How To Find The Base Of A Right Triangle. sciencing.com. Retrieved from https://www.sciencing.com/base-right-triangle-8121815/ Zamboni, Jon. How To Find The Base Of A Right Triangle last modified March 24, 2022. https://www.sciencing.com/base-right-triangle-8121815/
{"url":"https://www.sciencing.com:443/base-right-triangle-8121815/","timestamp":"2024-11-09T14:11:15Z","content_type":"application/xhtml+xml","content_length":"72992","record_id":"<urn:uuid:23d2f7e9-9fab-49c7-a218-712fbc0c193d>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00844.warc.gz"}
Research - Brustein's research group Brustein's research group Welcome to the homepage of Ramy Brustein I am a Professor of Physics at Ben-Gurion University in Beer-Sheva, Israel. My main research interests are in the area of fundamental physics and include quantum gravity , cosmology and string Black holes & other Space-times with causal boundaries Interpreting the thermodynamic properties of black holes and other space-times with horizons and uncovering their underlying quantum statistical mechanics remains a challenge in spite of the intense efforts over the last 40 years. What does the black hole entropy measure, the degeneracy of microstates, entanglement entropy between the inside and outside of the horizon, or some intrinsic gravitational entropy? Is the quantum mechanics of space-times with causal boundaries unitary? If so, why do some of them look thermal and non-unitary in some approximation? We have recently proposed that the extreme approximation of treating spacetime as strictly classical geometric object is at the origin of many of these issues and one should take into account the quantum properties of spacetime itself to resolve them. We proposed that blackholes actually consist of a collection of superstrings in the highest temperature possible – the Hagedorn temperature, We argued that as a consequence, gravitational waves detectors should see specific signatures of the quantum nature of black holes and so will be able to probe some of the quantum aspects of black holes, specifically their excitation spectrum. The ongoing quest to detect gravitational waves in the cosmic microwave background is advancing. Our research in cosmology aims to understand what would be the possible implications if such a discovery is confirmed. In models of cosmic inflation, a detectable amplitude of gravitational waves indicates that the scale of inflation was high, close to the highest possible scale in nature. Our research focuses on the quantum aspects of high-scale inflationary models. One idea that we are investigating is that the state of superstrings in the highest temperature possible – the Hagedorn temperature, makes very similar predictions to models of high-scale semiclassical inflation. Another idea is that small-field models of inflation are the only viable and consistent class describing high-scale inflation because the large-field models have some inherent inconsistencies at these high scales as suggested by the swampland program. We expect interesting implications for the theory of inflation, for inflationary model-building and perhaps even some observable consequences.
{"url":"https://sites.physics.bgu.ac.il/ramyb/","timestamp":"2024-11-03T01:08:29Z","content_type":"text/html","content_length":"162207","record_id":"<urn:uuid:b9022c33-4738-4326-b822-15d9dfc44d15>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00815.warc.gz"}
The permimp-package -package is developed to replace the Conditional Permutation Importance (CPI) computation by the -function(s) of the applies a different implementation for the CPI, in order to mitigate some issues related to the implementation of the CPI in the -package. In addition, the CPI is also available for random forests grown by the -package. Finally, the package includes some plotting options. Although originally designed for prediction purposes, Random forests Breiman (2001) have become a popular tool to assess the importance of predictors. Several methods and measures have been proposed, one of the most popular ones is the Permutation Importance Breiman (2001), originally referred to as the Mean Decrease in Accuracy. Inspired by the contrast between the unconditional zero-order correlation between predictor and outcome, and the conditional standardized regression coefficient in multiple linear regression, Strobl et al. (2008) argued that in some cases the importance of a predictor, conditionally on (all) other predictors, may be of higher interest than the unconditional importance. Therefore, they proposed the Conditional Permutation Importance, which introduces a conditional permutation scheme that is based on the dependence between the predictors. The permimp-package presents a different implementation of this Conditional Permutation Importance. Unlike the original implementation (available in the party R-package of Hothorn, Hornik, and Zeileis (2006)), permimp can, in addition to random forests that were grown according to the unbiased recursive partitioning (cf. cforests; Hothorn, Hornik, and Zeileis (2006)), also deal with with random forests that were grown using the randomForest-package Liaw and Wiener (2002), which applies the original tree growing algorithm based on impurity reduction Breiman (2001). (In principle, the permimp can be extended to random forests grown by other packages, under the condition that tree-wise predictions are possible and OOB-information as well as the split points are available per tree.) We argue that the permimp-package can be seen as a replacement for the varimp-functions of the party package in R. This vignette has two main parts. The first part is tutorial-like and demonstrates functionality of the permimp-package (by also comparing it to original party::varimp-functions. The second part is more theoretical and explains the how and the why of the new Conditional Permutation Importance-implementation. Part I: permimp-tutorial A. The permimp-function The permimp-function replaces all the party::varimp-functions (varimp, varimpAUC, varimpsurv). To apply permimp-function, one needs a fitted random forest. Within this tutorial we will mainly focus on random forests-objects as obtained by the party::cforest-function (i.e., S4-objects of class "RandomForest"). As an example we will use the (cleaned) airquality-data set to fit random forest with 50 trees: library("party", quietly = TRUE) #> Attaching package: 'zoo' #> The following objects are masked from 'package:base': #> as.Date, as.Date.numeric airq <- subset(airquality, !(is.na(Ozone) | is.na(Solar.R))) cfAirq50 <- cforest(Ozone ~ ., data = airq, control = cforest_unbiased(mtry = 2, ntree = 50, minbucket = 5, minsplit = 10)) A.1. New Conditional Permutation Importance Let’s start by comparing the permimp and the varimp function for the conditional permutation importance. system.time(CPI_permimp <- permimp(cfAirq50, conditional = TRUE, progressBar = FALSE)) #> user system elapsed #> 0.25 0.00 0.27 system.time(CPI_varimp <- varimp(cfAirq50, conditional = TRUE)) #> user system elapsed #> 1.97 0.02 1.98 #> Solar.R Wind Temp Month Day #> 83.736656 209.786231 422.671385 1.820496 -7.668462 #> Solar.R Wind Temp Month Day #> 25.147792 114.250197 220.080351 1.952776 -1.265111 Three differences can easily be spotted: • perminp has a progressBar-argument. The default is progressBar = TRUE • permimp is faster than varimp. • The results are different. A.2. Different results? Why are the results different? There are two main reasons. First, permimp uses a different default threshold-value: permimp uses threshold = .95 while varimp uses threshold = 0.2. Check ?permimp and ?varimp. There is a good reason for using a higher default threshold value. When using equal threshold-values… CPI_permimp <- permimp(cfAirq50, conditional = TRUE, threshold = .2, progressBar = FALSE) #> Solar.R Wind Temp Month Day #> 26.9974775 122.2781497 204.0238116 -3.1201748 0.8442593 #> Solar.R Wind Temp Month Day #> 25.147792 114.250197 220.080351 1.952776 -1.265111 The results are more similar, but not quite identical. The remaining differences are explained by the second reason: the implementation of permimp differs from the varimp-implementation. Using a higher threshold-value makes the differences between the two implementations more pronounced. CPI_varimp <- varimp(cfAirq50, conditional = TRUE, threshold = .95) #> Solar.R Wind Temp Month Day #> 26.9974775 122.2781497 204.0238116 -3.1201748 0.8442593 #> Solar.R Wind Temp Month Day #> 36.758973 198.610059 257.916926 -2.537016 -2.530303 The differences between the two implementations (and why we believe the new implementation is more attractive), is explained in the second part of this document, as well as in this manuscript: Debeer and Strobl (2020). A.3. Backward Compatible with party when: asParty = TRUE By specifying asParty = TRUE, the permimp-function can be made backward compatible with the party::varimp-function. But permimp is a bit faster. To get exactly the same results, the random seeds should be exactly the same. system.time(CPI_asParty <- permimp(cfAirq50, conditional = TRUE, asParty = TRUE, progressBar = FALSE)) #> user system elapsed #> 0.45 0.00 0.45 system.time(CPI_varimp <- varimp(cfAirq50, conditional = TRUE)) #> user system elapsed #> 2.04 0.00 2.05 #> Solar.R Wind Temp Month Day #> 36.364271 136.732886 200.620728 3.179600 1.360632 #> Solar.R Wind Temp Month Day #> 36.364271 136.732886 200.620728 3.179600 1.360632 Note that with asParty = TRUE the default threshold-value is automatically set back to 0.2. A.4. Different Output: VarImp-object A less obvious difference between permimp and varimp is the object that it returns. permimp returns an S3-class object: VarImp, rather than a named numerical vector. A VarImp object is a named list with four elements: 1. $values: holds the computed variable importance values. 2. $perTree: holds the variable importance values per tree (averaged over the permutations when nperm > 1). 3. $type: the type of variable importance. 4. $info: other relevant information about the variable importance, such as the used threshold. ## varimp returns a named numerical vector. #> Named num [1:5] 36.36 136.73 200.62 3.18 1.36 #> - attr(*, "names")= chr [1:5] "Solar.R" "Wind" "Temp" "Month" ... ## permimp returns a VarImp-object. #> List of 4 #> $ values : Named num [1:5] 36.36 136.73 200.62 3.18 1.36 #> ..- attr(*, "names")= chr [1:5] "Solar.R" "Wind" "Temp" "Month" ... #> $ perTree:'data.frame': 50 obs. of 5 variables: #> ..$ Solar.R: num [1:50] 117.35 0 1.81 0 58.22 ... #> ..$ Wind : num [1:50] 118.8 430.7 141.5 171 78.1 ... #> ..$ Temp : num [1:50] 374 433 118 -1 175 ... #> ..$ Month : num [1:50] -4.59 0 34.93 -11.96 -11.9 ... #> ..$ Day : num [1:50] 0 18.93 0 7.18 0 ... #> $ type : chr "Conditional Permutation" #> $ info :List of 4 #> ..$ threshold : num 0.2 #> ..$ conditioning: chr "as party" #> ..$ outcomeType : chr "regression" #> ..$ errorType : chr "MSE" #> - attr(*, "class")= chr "VarImp" ## the results of permimp(asParty = TRUE) and varimp() are exactly the same. all(CPI_asParty$values == CPI_varimp) #> [1] TRUE An advantage of the VarImp-object, is that the $perTree-values can be used to inspect the distribution of the importance values across the trees in a forest. For instance, the plotting function (demonstrated below) can be used to visualize this distribution of per tree importance values. A.5. Unconditional Permutation Importance: permimp = varimp Of course, there is also the option to compute the unconditional permutation importance. Both using the original and the split wise permutation algorithm. Here, there are no differences between permimp and varimp. That is, permimp simply uses the party varimp code, making the asParty argument redundant in this case. Note, however, that permimp still returns a VarImp-object. ## Original Unconditional Permutation Importance PI_permimp <- permimp(cfAirq50, progressBar = FALSE, pre1.0_0 = TRUE) PI_varimp <- varimp(cfAirq50, pre1.0_0 = TRUE) #> Solar.R Wind Temp Month Day #> 104.19612764 345.36320352 582.09815801 18.04859049 0.01880503 #> Solar.R Wind Temp Month Day #> 104.19612764 345.36320352 582.09815801 18.04859049 0.01880503 ## Splitwise Unconditional Permutation Importance PI_permimp2 <- permimp(cfAirq50, progressBar = FALSE) PI_varimp2 <- varimp(cfAirq50) #> Solar.R Wind Temp Month Day #> 81.935250 451.459770 580.918085 21.851431 -4.613963 #> Solar.R Wind Temp Month Day #> 81.935250 451.459770 580.918085 21.851431 -4.613963 For more detailed information check ?permimp. B. Methods for VarImp-objects B.1. plot Visualizing the variable importance values (as a VarImp-object) is easy using the plot method. Its main features include: • Four plot types:`type = c(“bar,” “box,” “dot,” “rank”). • Predictors automatically ordered according to importance value (high to low). Setting the argument sort = FALSE renders the original order (cf. the cforest call). • with argument horizontal = TRUE horizontal plots are made. • Optional visualization of the $perTree importance value distribution with the interval argument. With type = "box", the distribution of the $perTree-values is automatically visualized. We would suggest to only use the visualization of the $perTree importance value distribution, when there are enough trees (>= 500) in the random forest. Therefore, we first fit a new, bigger random forest, and compute the permutation importance. ## fit a new forest with 500 trees cfAirq500 <- cforest(Ozone ~ ., data = airq, control = cforest_unbiased(mtry = 2, ntree = 500, minbucket = 5, minsplit = 10)) ## compute permutation importance PI_permimp500 <- permimp(cfAirq500, progressBar = FALSE) ## different plots, all easy to make ## barplot plot(PI_permimp500, type = "bar") ## barplot with visualization of the distribution: an ## interval between the .25 and .75 quantiles of the per ## Tree values is added to the plot plot(PI_permimp500, type = "bar", interval = "quantile") Additionally you can: • Use your favorite colors with arguments col and intervaColor. • Use different a quantile interval with intervalProbs = c(<lower_quantile>, <upper_quantile>). • Choose your own title the title with main • Modify the margin with margin. Although we would advise against this in most situations, you can also: • Plot only the <integer value> predictors with the highest values with nVar = <integer value>. • Visualize the distribution using the standard deviation of the perTree values with interval = "sd". This is almost always a very bad idea, because it falsely suggests that the distribution is symmetric. Please don’t use this option. For more detailed information check ?plot.VarImp. B.2. Other VarImp-methods (Currently) there are three more VarImp-methods: • print: prints the $values • ranks: prints the (reverse) rankings of the $values • subset: creates a subset that is itself also aVarImp-object. Only to be used in very limited settings, and when you know what you are doing. Other related functions are: • as.VarImp: creates a VarImp-object from a matrix/data.frame of perTree values, or from a numerical vector of importance values. • is.VarImp: checks if an object is of the VarImp-class. C. permimp applied to randomForest-objects As mentioned in the introduction, the permimp-package can also deal with with random forests that were grown using the randomForest-package Liaw and Wiener (2002), which applies the original tree growing algorithm based on impurity reduction Breiman (2001). Let’s first grow a (small) forest. library("randomForest", quietly = TRUE) #> randomForest 4.6-14 #> Type rfNews() to see new features/changes/bug fixes. rfAirq50 <- randomForest(Ozone ~ ., data = airq, mtry = 2, replace = FALSE, nodesize = 7, keep.forest = TRUE, keep.inbag = TRUE) Note that keep.forest = TRUE and keep.inbag = TRUE. The permimp-function requires information about which observations were in-bag (IB) or out-of-bag (OOB), as well as information about the split points in each tree. Without this information, the (Conditional) Permutation Importance algorithm cannot be executed. CPI_permimpRF <- permimp(rfAirq50, conditional = TRUE, progressBar = FALSE) plot(CPI_permimpRF, horizontal = TRUE) When calling permimp for a randomForest object form the randomForest-package, a menu is prompted that ask whether you are sure that the data-objects used to fit the random forest have not changed. This is because the permimp computations rely on those data-objects, and automatically search for them in the environment. If these data-objects have changed, the permimp results can be distorted. Part II: New Conditional Permutation Implementation This part explains the new implementation of the conditional permutation importance, and discusses the differences with the original implementation in party, as described by Strobl et al. (2008). First the the idea behind the conditional implementation is briefly recapitulated, followed by a discussion of the original implementation. Then the new implementation is explained, and the main differences with the original are emphasized. Finally, some practical implications of the new implementation are given, and the interpretation and possible use of the threshold value are discussed. A. Recapitulation: Conditional Permutation Importance. A researcher may be interested in whether a predictor \(X\) and the outcome \(Y\) are independent. The “null-hypothesis” is then \(P(Y | X) = P(Y)\). This corresponds with the unconditional permutation importance. When \(X\) and \(Y\) are indeed independent, permuting \(X\) should not significantly change the prediction accuracy of the tree/forest. The expected permutation importance value is zero. However, a researcher may also be interested in the conditional independence of \(X\) and \(Y\), conditionally on the values of some other predictors \(Z\). The “null-hypothesis” is then \(P(Y | X, Z) = P(Y | Z)\). Rather than “completely” permuting the \(X\) values, the \(X\) values can be permuted conditionally, given their corresponding \(Z\) values. This corresponds to the conditional permutation scheme. When \(X\) and \(Y\) are conditionally independent, ideally, a conditional importance measure should be zero. If \(X\) and \(Z\) are independent, both permutation schemes will give the same results. Or in practice, similar importance values. Yet a dependence between \(X\) and \(Z\) will result in differences between the unconditional and the conditional permutation schemes, and the corresponding importance values. Strobl et al. (2008) proposed to specify a partitioning (grid) of the predictor space based on \(Z\) (for each tree), in order to (conditionally) permute the values of \(X\) withing each partition (i.e., cell in the grid). According to Strobl et al. (2008) this partitioning should (1) be applicable to variables of all types; (2) be as parsimonious as possible, but (3) be also computationally feasible. Therefore they suggested to define the partitioning grid for each tree by means of the partitions of the predictor space induced by that tree. More precisely, using all the split points for \(Z\) in the tree, \(Z\) is discretized and the complete predictor space is partitioned using the discretized \(Z\). Note that this partitioning does not correspond with the recursive partitioning of a tree. In a tree only the top node splits the complete predictor space, all the following splits are conditional on the parent nodes. In contrast, for the conditional permutation grid, all the split points split the complete predictor space, which leads to a more fine-grained grid. In practice, the number of observations is finite. In situation with a relatively low number of observations, the grid for the conditional permutation may become to fine grained, making conditionally permuting practically infeasible. Therefore, the selection of \(Z\) (the predictors to condition on) is not a sinecure. B. Original Implementation (party::varimp) In their original implementation (cf, party::varimp), Strobl et al. (2008) argued to only include those variables in \(Z\) whose empirical correlation with \(X\) exceeds a certain moderate threshold. For continuous variables the Pearson correlation could be used, but for the general case they proposed to use the conditional inference framework promoted by Hothorn, Hornik, and Zeileis (2006). Applying this framework provides p-values, which have the advantage that they are comparable for variables of all types, and that they can serve as an intuitive and objective means of selecting the variables Z to condition on. The original implementation can be described as follows: For every predictor \(X\) 1. Test which other predictors are related to \(X\), applying the conditional inference framework (Hothorn et al. 2006) using the full data/training set. 2. Only include those other predictors in \(Z\) for which the \(p\)-value of the test is smaller than (1 - threshold). 3. Within each tree: 1. Gather all the split points for every predictor in \(Z\). 2. Discretize the predictors in \(Z\) using the gathered split points, and create a partitioning of the predictor space. 3. Within each partition, permute the values of predictor \(X\). Some issues There are, however, two important issues with this implementation: 1. For (some of) the tests in the conditional inference framework, the \(p\)-values not only depend on the strength of the (cor)relation, but also on the sample size. For instance, in big samples small correlations can also lead to small \(p\)-values. 2. (Some of the tests in) the conditional inference framework only test for linear dependence. For instance, for continuous variables a correlation test is used. Of course, dependence between variables is not limited to linear dependence. As a result, when \(X\) and \(W\) are continuous and have a U-shaped independence structure, \(W\) will not be included in \(Z\). C. New Implementation (permimp). The new implementation tries to mitigate the two issues raised above, by taking advantage of the fact that within each tree not the original values of the predictors, but only the partitions are important for the prediction of the outcome. That is, one can argue that the tree-based partitioning rather than the original values should be used to decide which other predictors should be included in \(Z\). Applying this rationale, the new implementation can be described as follows: In every tree, for every predictor with splits in the tree: 1. Discretize the in-bag values for each predictor using the split points: \(X\) => \(X_d\). 2. For every discretized \(X_d\): 1. Test which other discretized predictors \(W_d\) are related to \(X_d\), applying a \(\chi^2\)-independence tests (using only the in-bag values). 2. Only include those other predictors \(W\) in \(Z\) for which the \(p\)-value of the test is smaller than (1 - threshold). 3. Create the partitioning of the predictor space using the discretized \(Z\). 4. Within each partition, permute the values of predictor \(X\). Important implications The \(\chi^2\)-independence test does not (directly) depend on sample size. Therefore, the new implementation is less sensitive to the number of observations. In addition, the \(\chi^2\)-independence test is not limited to linear dependence. Hence, the new implementation mitigates the two issue raised above. Because of this, the threshold-value is easier to use and interpret (see below). Under the new implementation it is possible that \(Z\) differs across trees. Yet this is also the case under the original implementation, since not all predictors in \(Z\) are used as splitting variable in each tree. In addition, due to the randomness in random forests (subsampling/bootstrapping and mtry selection), it is very unlikely that there are two trees in the forest with exactly the same splitting points. Therefore, the conditional sampling scheme almost surely differs across trees. D. How to Use the Threshold. The threshold-value can be interpreted as a tuning parameter to make the permutation more or less conditional. A threshold = 0 and a threshold = 1 corresponding to permuting as conditional as possible and permuting completely unconditional, respectively. A threshold = .95, the default in permimp, only includes those \(W\) in \(Z\) for which \(W_d\) and \(X_d\) are dependent (with \(\alpha \)-level = .05). Yet threshold values smaller than threshold = .5 generally make the selection of the predictors to condition on too greedy, without a meaningful impact on the CPI pattern. Therefore, we recommend using threshold values between .5 and 1. Some research questions are best answered with a more marginal importance measure, while other questions are better answered using a more partial importance measure. In many situations, however, it is not clear which measure best fits the research question. Therefore, we argue that in these cases it can be interesting to evaluate the importance (rankings) of the predictors for different threshold-values. This strategy can provide more insight in how the conditioning affects the permutation importance values. In the original implementation, setting a sensible threshold proved to be hard, because the practical meaning of the threshold depended on the sample size and on the type of variables (cf. the issues raised above). In the new implementation, the threshold’s interpretation is clearer and more stable. In addition, the simulation studies by Debeer and Strobl (2020) suggest that the new implementation (a) allows a more gradual shift from unconditional to conditional; and (b) gives more stable importance measure computations. As an additional feature, the permimp can provide some diagnostics about the conditional permutation. When thresholdDiagnostics = TRUE, the permimp-function monitors whether or not a conditional permutation scheme was feasible for each predictor \(X\) in each tree. This information is translated in messages that suggest to either or decrease the threshold. First, it is possible that the conditioning grid is so fine-grained that permuting \(X\) conditionally cannot lead to observations ending up in a different end-node of the tree. In other words, the prediction accuracy before and after permuting will be always equal. If this issue occurs in more than 50 percent of the trees that include \(X\) as a splitting variable, permimp will produce a note, and suggest to increase the threshold-value. A higher threshold-value may result in a less fine-grained partitioning, making the conditional permutation feasible again. Second, it is possible that there are no \(W\) in the tree for which the \(\chi^2\)-independence test between \(W_d\) and \(X_d\) is smaller than (1 - threshold). This implies \(Z\) will be an empty set, and conditionally permuting is impossible. That is, without a partitioning/grid, it is equal to unconditionally permuting. If this issue occurs in more than 50 percent of the trees that include \(X\) as a splitting variable, permimp will produce a note, and suggest to decrease the threshold-value. A lower threshold-value includes more \(W\) in \(Z\), making the conditional permutation feasible again.
{"url":"https://cran.gedik.edu.tr/web/packages/permimp/vignettes/permimp-package.html","timestamp":"2024-11-06T20:36:44Z","content_type":"text/html","content_length":"108221","record_id":"<urn:uuid:4c0165fa-6b2f-4bd2-9e27-b7052ccf2c26>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00019.warc.gz"}
How to chart sunrise and sunset In this video, we'll look at how to plot average daylight hours per month in a chart, using sunrise and sunset data. In this video, we'll look at how to chart average daylight hours for each month of the year, using sunrise and sunset data. This is the final chart. This project has a couple of interesting challenges. First let's look at the available data. You can see we have data for both sunrise and sunset. Both columns contain valid Excel times. Now, if I try to create a column chart with just this data, we'll have some problems. The clustered column option isn't really useful. Stacked columns are closer to what we want. But, if you look closely, you can see that both sunrise and sunset are plotted as durations, instead of starting points, which isn't going to work. There are several ways to approach this. Personally, I like to use helper columns, since helper columns let you to work directly on the worksheet with formulas, and you can work step by step. First, we need a column for total daylight hours. This is pretty simple, I can just subtract sunrise from sunset. Both values are times, so the result can also be formatted as time. However, I don't want AM/PM, just hours and minutes. Now I can plot daylight hours stacked on sunrise times. Notice I am not plotting sunset data. To add hours between sunset and midnight, I need another helper column for evening hours. The formula is simple but the concept is a little tricky. Excel time is recorded as fractions of a day. Because there are 24 hours in a day, one hour is 1/24, and 24 hours is 24/24, or 1. This means I can subtract the sunset time from 1 to get evening hours. When I add evening hours to the chart as another data series, we have a sensible chart and can tidy things up. First, I'll give the chart a title, and delete the legend, and adjust colors. Then I'll bump up the column width and add data labels to show total daylight hours directly on the chart. Next, I'll format the y-axis to line up on the clock. 3 hours in Excel is 3/24, or .125 If I set max to 1 for 24 hours, and use .125 as the major unit, the axis shows 3 hour increments, which is pretty easy to read. I can use number formatting to show hours only. Finally, I can make daylight hours easier to read by converting to decimal values in another helper column. The formula is 24 * daylight hours. Back in the chart, I'll can change data labels to show these new decimal hours. And we have our final chart.
{"url":"https://exceljet.net/videos/how-to-chart-sunrise-and-sunset","timestamp":"2024-11-09T11:11:40Z","content_type":"text/html","content_length":"36107","record_id":"<urn:uuid:31e4c09b-4b89-40e5-8231-7d1c85a4c229>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00807.warc.gz"}
OUTPUT Statement The TPSPLINE Procedure OUTPUT OUT=SAS-data-set <keyword … keyword> ; The OUTPUT statement creates a new SAS data set that contains diagnostic measures calculated after fitting the model. All the variables in the original data set are included in the new data set, along with variables created by specifying keywords in the OUTPUT statement. These new variables contain the values of a variety of statistics and diagnostic measures that are calculated for each observation in the data set. If no keyword is present, the data set contains only the original data set and predicted Details about the specifications in the OUTPUT statement are as follows.
{"url":"http://support.sas.com/documentation/cdl/en/statug/65328/HTML/default/statug_tpspline_syntax08.htm","timestamp":"2024-11-02T12:51:03Z","content_type":"application/xhtml+xml","content_length":"18284","record_id":"<urn:uuid:e1ecfda7-9d77-4e8b-96a2-b5332db80d54>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00645.warc.gz"}
ADHD and math learning disabilities (dyscalculia) It's not unusual to struggle with math homework. Still, some ADHDers face more difficulty understanding numbers and equations than neurotypical people. A condition called dyscalculia — sometimes referred to as "math dyslexia" — may be to blame. Dyscalculia is a learning disability frequently associated with ADHD. Dyscalculia may cause struggles in the classroom or cause issues with everyday tasks like grocery shopping, cooking, and money Let's learn about dyscalculia and how adults and children with ADHD can manage this learning hurdle. Too long; didn't read • People with ADHD may be diagnosed with dyscalculia, a math learning disability. • Dyscalculia can occur on its own (without having ADHD or other neurodivergent conditions). • Recognition and treatment in childhood may reduce the effects of dyscalculia in adulthood. Math learning disabilities and dyscalculia A math learning disability is an umbrella term for any learning condition that affects someone's ability to learn and understand math. Dyscalculia is a math learning disability that makes it hard to comprehend numbers, amounts, patterns, and decimal place values. A person with dyscalculia struggles to process numbers quickly. For example, if someone with dyscalculia were to glance at a small group of people, they would struggle with quickly counting how many people were there. What causes dyscalculia? Math learning disabilities can arise due to: • Abnormalities in brain development • Neurodevelopmental conditions (ADHD, autism, etc.) • Environmental factors, like problems at home.^1 Dyscalculia and ADHD Up to 60% of people with ADHD are also diagnosed with a learning disorder.^2 Combined with ADHD symptoms like inattention and hyperactivity, dyscalculia can affect one's ability to complete math Examples of math challenges for ADHDers: • An inability to learn how to solve an equation, despite determination and effort • Difficulties focusing on homework • Confusing math equation symbols for subtraction (-), addition (+), multiplication (x), and division (÷) • Running out of time to finish tests Dyscalculia + ADHD = executive dysfunction Executive dysfunction refers to a deficit in executive control, which includes cognitive skills like time management, self-motivation, and thought organization. ADHD and dyscalculia can prompt feelings of frustration and inadequacy, triggering even more dysfunction in these executive skills. In addition, both conditions can adversely affect a person's ability to start and complete tasks. ADHD and dyscalculia leads to a subtraction in working memory Individuals with dyscalculia, ADHD, or a combination of both will typically experience working memory deficits, reducing retention and concentration abilities. A working memory impairment causes problems in planning, organizing, and task initiation and completion. What are the signs that someone has dyscalculia? Symptoms of dyscalculia include: • Difficulty counting numbers • Inability to learn and recall standard number information • Trouble learning math concepts and phrases • Challenges with keeping score during games • Inability to count backward • Mathematics anxiety • Taking an extended time to solve math problems Dyscalculia diagnosis and treatment A professional diagnosis of dyscalculia requires patients to take a standardized test below their age or grade level and a few additional psychological assessments.^3 The earlier dyscalculia is detected, the sooner it can be managed to reduce challenges. While no medications are currently available for this learning impairment, you can pursue other strategies to manage dyscalculia, many of which overlap with ADHD treatment methods. Dyscalculia treatment options: • Assistive technology • Visiting learning specialists or neuropsychologists • Repetitive instruction of math concepts • Multisensory instruction • Cognitive behavioral therapy (CBT) ADHD and dyscalculia management hacks Difficulties interpreting math equations and numbers can lead to challenges beyond school and work, as math concepts often go beyond the classroom. The best way to manage both dyscalculia and ADHD is to create suitable accommodations at work, school, and home. In addition, both conditions require your understanding of the causes and symptoms so you can find the right tools and resources to manage them. Here are nine strategies for managing both conditions if you're an adult with ADHD and dyscalculia. 1. Find a quiet area to do your work 2. Listen to calming music or white noise 3. Break down tasks into smaller components 4. Talk or write out each problem 5. Draw the math problem 6. Address negative self-talk 7. Be transparent about the situation with others (it's ok!) 8. Build up self-confidence 9. Hire a tutor for additional help ADHD and postural sway: when staying upright is a balancing act Sep 4 min read Telephobia and ADHD: Why phone calls give you anxiety Aug 30 min read
{"url":"https://www.getinflow.io/post/dyscalculia-adhd-math-learning-disability","timestamp":"2024-11-08T23:28:49Z","content_type":"text/html","content_length":"71955","record_id":"<urn:uuid:099ad7d6-7106-434a-acc1-97cfeeadd609>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00659.warc.gz"}
The current price of a non-dividend paying stock is $90. Use a two-step binomial tree to... The current price of a non-dividend paying stock is $90. Use a two-step binomial tree to... 1. The current price of a non-dividend paying stock is $90. Use a two-step binomial tree to value a European call option on the stock with a strike price of $88 that expires in 6 months. Each step is 3 months, the risk free rate is 5% per annum with continuous compounding. What is the option price when u = 1.2 and d = 0.8? Assume that the option is written on 100 shares of stock. Two-step binomial model can be used to calculate option price. In this case various variables needs to be worked out. Using this model, in above mentioned example, value of call option is 11.50 $. Please refer pictures for in detail explanation
{"url":"https://justaaa.com/finance/68332-the-current-price-of-a-non-dividend-paying-stock","timestamp":"2024-11-03T03:54:58Z","content_type":"text/html","content_length":"40428","record_id":"<urn:uuid:ea2ca13f-cbc5-4234-852a-c8bbe9226a23>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00232.warc.gz"}
Standard Deviation: 6 Steps to CalculationStandard Deviation: Everything You Need to Know Standard Deviation: 6 Steps to Calculation The Standard Deviation of a set of data describes the amount of variation in the data set by measuring, and essentially averaging, how much each value in the data set varies from the calculated mean. The formula for standard deviation depends on whether you are analyzing population data, in which case it is called σ or estimating the population standard deviation from sample data, which is called To understand standard deviation, you must first know what a normal curve, or bell curve, looks like. This is important because data distributed in this way exhibits specific characteristics, namely as it relates to the mean and standard deviation. It allows you to make assumptions about the data. The mean of a normal curve is the middle of the curve (or the peak of the bell) with equal amount of data on both sides, while the standard deviation quantifies the variability of the curve (in other words, how wide or narrow the curve is). The assumption we can make about the data that follows a normal curve is that the area under the curve is relative to how many standard deviations we are away from the mean. The area between plus and minus one standard deviation from the mean contains 68% of the data. Two standard deviations contains 95% of the data and three standard deviations contains 99.8% of data. Real-life example: Let’s say we want to create grab-and-go donut hole boxes in our local donut shop. We notice that customers buy 20 donut holes on average when they order them fresh from the counter and the standard deviation of the normal curve is 5. If we want to serve 95% of customers interested in donut holes, we should offer sizes two standard deviations away from the mean, on both sides of the mean. So, sizes of 10 (20-5-5), 15 (20-5), 20 (the average), 25 (20+5) and 30 (20+5+5). How Does Standard Deviation Relate to Six Sigma? First and foremost, it’s important to understand that a standard deviation is also known as sigma (or σ). And Six Sigma is a methodology in which the goal is to limit defects to six “sigmas,” three above the mean and three below the mean. Anything beyond those limits requires improvements. Because three standard deviations contains 99.8% of the data in a set, Six Sigma requires continuous refinement to consider improvements that fall within that 0.2% of data in the set. Steps to Calculate Standard Deviation Follow these two formulas for calculating standard deviation. The first formula is for calculating population data and the latter is if you’re calculating sample data. The formula for standard deviation depends on whether you are analyzing population data, in which case it is called σ or estimating the population standard deviation from sample data, which is called The steps to calculating the standard deviation are: 1. Calculate the mean of the data set (x-bar or 1. μ) 2. Subtract the mean from each value in the data set. 3. Square the differences found in step 2 4. Add up the squared differences found in step 3 5. Divide the total from step 4 by either N (for population data) or (n – 1) for sample data (Note: At this point, you have the variance of the data) 6. Take the square root of the result from step 5 to get the standard deviation. Step 1: The average depth of this river, x-bar, is found to be 4’. Step 5: The sample variance can now be calculated: Step 6: To find the sample standard deviation, calculate the square root of the variance: Why is Standard Deviation Important? Standard deviation is important because it measures the dispersion of data – or, in practical terms, volatility. It indicates how far from the average the data spreads. This helps you determine the limitations and risks inherent in decisions based on that data. Real-life example: When considering investing in a stock, you can use standard deviation to determine risk. A stock with an average price of $50 and a standard deviation of $10 can be assumed to close 95% of the time (two standard deviations) between $30 ($50-$10-$10) and $70 ($50+$10+$10). It’s safe to assume that 5% of the time, it will plummet or soar outside of this range. If you were to compare this to a stock that has an average price of $50 but a standard deviation of $1, then it can be assumed with 95% certainty that the stock will close between $48 and $52. The second stock is less risky, more stable. The higher the standard deviation in relation to the mean, the higher the risk. Blue-chip stocks, for example, would have a fairly low standard deviation in relation to the Standard deviation has many practical applications, but you must first understand what it’s telling you about the data. Additionally, standard deviation is essential to understanding the concept and parameters around the Six Sigma methodology.
{"url":"https://www.sixsigmadaily.com/standard-deviation-6-steps-to-calculation/","timestamp":"2024-11-09T01:04:54Z","content_type":"text/html","content_length":"79935","record_id":"<urn:uuid:f913df13-0bee-461e-8418-8a29b5499518>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00148.warc.gz"}
What is wrong with rote learning? The Victorian curriculum seems to be built on one of two assumptions: either people do not have memories, or else their memories need to be left unused. To my knowledge, the students are never asked to memorise poems or mathematical definitions. A year level coordinator once told me about an English teacher who made the students repeat a poem until they could recite it by heart. That took place during an excursion. When they were back at school, many students commented that they had never realised one could remember something if one repeated it over and over! These were 15 year olds who had not developed strategies for memorisation. I once read about Western hostages in Beirut, and how they kept themselves sane by reciting their favourite poems. The other day I was describing a project on the Pythagorean Theorem to my nephew, a French educated 17-year old from the middle east. I first asked him if he knew the theorem. He thought for a couple of seconds and said: "Dans un triangle carré, Le carré de l'hypoténuse est égal à la somme des carrés des deux autres côtés" (In a right angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides). Did the fact that he knew it by heart mean that he understood little of it? Not at all. It gave him the necessary vocabulary to describe his understanding. As we walked on, I described the geometric proofs that my students had to describe as part of their project. He had never seen those proofs before, but we could discuss them abstractly, without having the pictures in front of us. I could use words like "somme" (sum), "surface" (area) and "longueur" (length), with which he was entirely comfortable. I then mentioned the concept of a proof by induction, something taught in the first year of a science degree at a typical Australian university. He said, "yes, I know what that is. Induction is the opposite of deduction in that you begin with a particular case and generalise. In deduction, you apply a general rule to a particular case." Let me temper all this by stating that I am not a "back to basics" teacher. I teach algorithms and shortcuts only when absolutely necessary. I do believe that mathematics needs a context, and that understanding is paramount. My complaint is that we seem to have thrown the baby out with the bath water. We often speak as though learning by rote is a poor to learning with understanding. I think that we can use some rote learning to understanding. Let me know what you think. Anonymous said… yes, i agree totally.we have got the "rote vs understanding" out of balance in my opinion. In combination, there is powerful learning to be had! Here here, Elias! Rambling Teacher said… Always good to see your comments, TB. Thanks for your contributions to this blog. Unknown said… To the extent that mathematics is like another language, it requires of students that they learn it as such. To learn another language, students first need to manipulate individual sounds and words in that language with only a limited understanding of those elements. This is how we all learn to speak our And to learn to fluency requires, in my opinion, a far greater focus on "mindless" manipulation of words and ideas than it does a focus on objective, deep understanding of the language. Dr. P. said… Your post reminded me of this paper by Wu Basic Skills Versus Conceptual Understanding* A Bogus Dichotomy in Mathematics Education Rambling Teacher said… Thanks for your comments, Mr. Person and Dr. P. The University of Melbourne's website used to carry a paper by Professor David Clarke in which he argues against other false dichotomies, such as telling vs not telling and teacher-centred vs student-centred classrooms. Anonymous said… I am a student of a Buddhist teacher who thinks rote learning is the way we should be taught. I have no problem withrote learning if at some point there is follow-up, but what I find on a consistent basis is people "parrot" and there is no understanding of what they have said. If one doesn't understand what is parroted, the person cannot explain it as they have no understanding or real knowledge. It starts to seem like our current corporate culture with it's slogans about ethics, customer care, team work etc., but no one reallyhas any understanding of what they are repeating. Conversations withpeople are starting to portray this, people repeat things without any thought and if you question it youfind silence or stumbling stupidity. Roter learning must be followed up with an explanation. It this were the way to go then there would be no problem with education inthe u.s., as everyone would know the answers to the questions on standardized test. The reasonthey don't is that they have memorized the question, answer and if you ask with a different word order, theydon't recognize what you are saying. I have been told too many times my answer is wrong, because it wasn't answered EXACTLY as one remembered. They also are unable to explain in their own words because they haveno idea what it means. GS said… You're quite right, Mr. Elias. I often tell my students that knowing their multiplication tables is important because it makes one totally familiar with the _concept_ of multiplication, which is important when you start dealing with algebra. A lot of algebraic misconceptions occur when students forget that things are being multiplied. This is just one example. Most people come to understand something by relating it to things they're already familiar with. To build up a stock of familiar facts and concepts, you need good grounding and practice in "the basics".
{"url":"http://www.ramblingteacher.com/2006/08/what-is-wrong-with-rote-learning.html?showComment=1171755060000","timestamp":"2024-11-02T22:06:34Z","content_type":"text/html","content_length":"91786","record_id":"<urn:uuid:cd75d829-3aa4-4627-b0e5-ad6bff4400a6>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00142.warc.gz"}
The Effect of Debt to Equity Ratio and Return on Equity on Stock Return with Dividend Policy as Intervening Variables in Subsectors Property and Real Estate on Bei The Effect of Debt to Equity Ratio and Return on Equity on Stock Return with Dividend Policy as Intervening Variables in Subsectors Property and Real Estate on Bei () 1. Introduction Economic development at this time resulted in increasingly fierce business competition. For that, every company must be more leverage and always be more careful in running the company to obtain the profit (Thaib et al., 2020). A company, however, may evaluate the type and level of risk before it is adopted as a necessary condition in business operations (Nugraha et al., 2019). As the economy grows, the property and real estate industries also develop. At present, the profits earned by property companies listed on the Indonesia Stock Exchange (IDX) are experiencing a decline in aspects of their financial statements. This resulted in a lot of impacts experienced by the company. This is caused by various factors that occur. However, property companies listed on the IDX still make the best contribution to consumers. The decline in several aspects of financial statements experienced by property companies due to the impact of the existing global financial crisis. There is also a depreciation in the rupiah and an increase in fuel prices, which has caused uncertainty and rising inflation. This has resulted in doubts and delays in investing in the private sector and eroding national economic resilience (Pratiwi, 2018). Research in the property sector is said to be important because the need for boards is one of the basic human needs. Activities in the property industry can be used as an indicator of how active economic activity is taking place (Utami, 2017). Every economic activity basically will always require property products as a factor of production. The development of property activities has a multiplier effect on the development of other sectors that are directly or indirectly related (Marzuki & Newell, 2019). In the case, if there is no demand for property products, it indicates that the economy is in an underdeveloped condition. Property is arguably the sector that never dies. When other sectors collapsed due to economic crisis, the property business showed an anomaly. In various places, there are houses and shophouses, apartments, shopping centers, office centers, condominiums and housing (Pratiwi, 2018). The motivation of investors investing in the capital market is to get a return. Stock return is the level of income obtained by subtracting the current closing price from the previous stock closing price divided by the previous year’s closing price (Khadafi et al., 2014). Therefore, investors need an assessment of the company’s financial performance before investing capital in that company. The performance appraisal of a company can be done by analyzing the company’s financial ratios. Financial ratio analysis is conducted to determine the strengths and weaknesses of company’s financial performance (Santosa, 2016). These financial ratios provide information about the short-term and long-term opportunities on a company’s. Debt to Equity Ratio (DER) and Return on Equity (ROE) are two important indicators that investors generally use in assessing the performance of a company. Furthermore, Debt to Equity Ratio (DER) and Return on Equity (ROE) are considered to have an important role in dividend policy, where this policy can ultimately maximize stock returns. According to Lesakova (2007), profitability ratio is ratio intended to measure the effectiveness of management that is reflected in return for the investment result through the company’s activities or in other words, measuring the company’s overall performance and efficiency in the management of liabilities and capital. Return on Equity (ROE) is included in the profitability ratio. According to Komala and Nugroho (2013), one of the main reasons operate the company is to generate profit that will be beneficial for shareholders. The success Measures of achievement of this reason is the number of ROE. Meanwhile, Debt to Equity Ratio (DER) is one of the solvency ratios. According to Khadafi et al. (2014), Debt to Equity Ratio (DER) is a ratio used to assess the debt to equity by comparing the entire debt, including current liabilities with the overall of equity. DER for every company is different, depending on the characteristics of business and the diversity of cash flow. Study related the determinant factors that influence stock returns and dividend policy have been widely studied by academics, but no studies have investigated this issue on the property sector in Indonesia. Based on the background and problem formulations that have been described previously, the purpose of this study is to analyze the effect of Debt Equity Ratio (DER) and Return On Equity (ROE) on stock returns in the property and real estate companies that listed on the Indonesia Stock Exchange (IDX) over the period of 2014-2018. Furthermore, this study also aims to examine whether Dividend Policy can mediate the relationship of Return on Equity and Debt Equity Ratio on stock returns in the property and real estate companies in Indonesia. This research is expected to be used as a reference by the property company managers in Indonesia to improve their future financial performance and also can be used by shareholders as considerations before making investment decisions to the Indonesia’s property companies in the capital market. 2. Literature Review 2.1. The Effect of Debt to Equity Ratio (DER) on Stock Returns According to Horne and Wachowicz (2005), Debt to Equity Ratio is a comparison between total debt or total debts with total shareholder’s equity. This ratio is sought by comparing all debt, including current debt and all equity. For banks (creditors), the greater this ratio, the more unprofitable because the greater the risk borne by the failure that may occur in the company. The company’s ability to pay debts funded by its own capital can be measured using Debt to Equity Ratio (DER). DER is a ratio that reflected the share of own capital that is used as collateral for all debts (Khadafi et al., 2014). The higher of DER indicates the high capital dependence of company’s toward outsiders so that the company’s interest expenses would be heavier (Izuddin, 2020). It is certainly would reduce the rights of the company’s shareholders because the rate of return is getting smaller. Moreover, the higher of DER caused the greater the company’s liabilities compared to the company’s own equity. The higher of DER tends to decrease stock return because the high level of debts indicates the company’s burden and certainly reducing profits. In other words, DER did not have significant effects on stock returns. Study by Nurmasari (2017) and Siburian (2013) also found that DER did not influence stock return. H[1]: Debt to Equity Ratio did not have significant effects on Stock Return. 2.2. The Effect of Return on Equity (ROE) on Stock Returns Return on Equity (ROE) is the rate of return that achieved by the company for each currency unit that further becomes the company’s capital. According to Brigham and Houston (2012), the notion of ROE is the net ratio of ordinary equity measures the rate of return on ordinary shareholder investment. This Return on Equity Ratio shows the efficient use of own capital. If this ratio is higher, the better. That means the company’s position will be stronger, and vice versa. Return on Equity is calculated by dividing net income with shareholder equity. In this context, how large the company provides yield every year per one currency that investing by the company investors (Tang, 2016). ROE is a measure of the return that achieved by investors from his investment in a company. The higher results lead to better the stock return. According to Berggrun et al. (2020), profitability affects stock returns. Return on Equity (ROE) is a measure of a company’s ability to generate profits using their own capital. The increasing ROE value indicates the company’s performance is getting better. Conditions like this will be a special attraction for the old investors to keep investing their shares and potential investors to investment in the company (Brigham & Houston, 2012). This condition will encourage an increase in stock prices which in turn will increase stock returns. Based on previous studies, Susilowati and Turyanto (2011) and Aziz (2012) found that ROE has a positive and significant effect on stock returns H[2]: Return On Equity has a positif and significant effects on Stock Return. 2.3. The Effect of Debt to Equity Ratio (DER) on Devidend Payout Ratio (DPR) Dividend is the distribution of profits/profits made by a company to shareholders on the profits obtained by the company (Halim, 2015). Whereas dividend policy is a decision to divide the profits obtained by the company to shareholders as dividends or will retain in the form of retained earnings to be used as investment financing in the future. If the company chooses to distribute profits as dividends, it will reduce retained earnings and then reduce the total source of internal funding or internal financing. Conversely, if the company chooses to hold the profits obtained, then the ability to form internal funds will be even greater (Sartono, 2014). According to Syamsudin (2011) Dividend payments differ from interest payments because dividends cannot reduce the amount of taxes paid by companies, because the funds are taken from net income after taxes (earning after taxes). Dividend payout ratio measurement is a part that is integrated with the company’s funding decision. Dividend Payout Ratio, which is the ratio that shows the results of the comparison between cash dividends per share and earnings per share. Pilbeam (2010: p. 224) stated that the greater DER of a firm will encourage the more of a firm’s earnings have to be devoted to interest payment of the firm’s debt, and consequently less money is available for shareholders. It indicates that the higher level of DER leads to the composition of debt is also higher and certainly reflected a lower ability of firms’ to pay dividends (Gill et al., 2010). This condition encourages the company has to pay its obligations instead of distributing its profits in the form of dividends. Moreover, if the debt to equity value of the company is high, it indicates that the dividend payout ratio distributed by the company is not as expected by investors. Study by Marlina and Danica (2009) proved that debt to equity ratio did not have a significant effect on dividend payout ratio. The higher this ratio, the bigger the obligations and the lower the ratio as well as encourage the company’s ability to fulfill their obligations. The increase in debt owned by the company will affect the size of the company’s net income available to shareholders, including the distribution of the dividend payout ratio. H[3]: Debt to Equity Ratio did not have significant effects on Devidend Payout Ratio. 2.4. The Effect of Return on Equity (ROE) on Devidend Payout Ratio (DPR) Study by Carlo (2014) and Hanif and Bustamam (2017) found that ROE has a positive and significant effect on the DPR. This means that if ROE increases, the DPR will also increase. Conversely, if ROE has decreased, the DPR will also experience a decline. ROE is a profitability ratio that describes the company’s ability to generate net profit after tax using its own capital. The higher of ROE reflected the higher level of profit that obtained by the company owner. The high profit level of company owners will encourage to increase the company’s ability to pay dividends. Based on the smoothing theory (Lintner, 1956), the size of the dividend depends on the company’s current profits and previous dividends. Based on this theory, the higher of profit lead to the higher of portion profits that share as dividends. H[4]: Return On Equity (ROE) has a positive and significant effect on Dividend Payout Ratio. 2.5. Dividend Payout Ratio (DPR) Mediate the Effect of Debt to Equity Ratio (DER) on Stock Returns The results of previous research conducted by Annisa and Chabachib (2017) found that DER has a significant effect on stock returns with dividends as the mediating variable. A company with a low DER means that it has a low level of debt as well, a low level of debt is considered to be able to increase the company’s earnings, if the company’s income increases, the dividends distributed will also increase, this dividend increase will also have an impact on increasing stock returns (Santosa, 2016). An increase or decrease in debt will in turn affect the size of net income that obtained to shareholders, including dividends received, because the obligation to pay debt takes precedence than pay dividends. Thus, the lower of debt ratio indicates the higher of profitability ratio on a company’s. With the increasing profitability, the company will increase their ability to pay dividends (Brigham & Houston, 2012). In accordance with the signaling hypothesis theory, it can be stated that a lower DER indicates that the company will have increased revenue, so this is good news or a good signal for investors and will have an impact on increasing dividends paid, so that it will also increase their stock return. H[5]: Dividend Payout Ratio (DPR) mediates the effect of Debt to Equity Ratio on Stock Return. 2.6. Dividend Payout Ratio (DPR) Mediate the Effect of Return on Equity (ROE) on Stock Returns A company with a high Return on Equity (ROE) shows that they have sufficient profit to pay dividends to their shareholders. The higher of ROE value shows that the higher of dividends distributed to shareholders and directly shows stock return also will increase. Dividend policy provides information about the company’s future profit growth, this information will invite responses from investors which will affect the company’s returns (Baah et al., 2014). In accordance with the signaling hypothesis theory, it can be stated that a higher ROE indicates that the company has high profits, so this becomes good news or a good signal for investors and will have an impact on increasing dividends paid, so that it will also increase its stock returns. The results of previous research that conducted by Santosa (2016) and Nareshwari (2016) found that profitability ratio that reflected by ROE value has a significant effect on stock returns with DPR as a mediating variable. H[6]: Dividend Payout Ratio (DPR) mediates the effect of Return On Equity on Stock Return. 3. Research Method 3.1. Data Data that used in this study is secondary data that obtained online from the Indonesia Stock Exchange (IDX) website. We collecting annual data of Debt Equity Ratio (DER), Return On Equity (ROE), Dividend Payout Ratio (DPR) and Stock Return (SR) over the period of 2014-2018 from the financial statement and annual report of the property and real estate companies in Indonesia. Total Real Estate and Property companies listed on the Indonesia Stock Exchange (IDX) are 54 companies, but only annual data for 18 companies that available for the period of 2014-2018. Therefore, in this study we only use annual data of 18 Real Estate and Property companies in Indonesia. 3.2. Analysis Method The data analysis method used in this study uses multiple linear regression with the help of the SPSS (Statistical Product and Service Solutions program) program. Based on the hypothesis in this study, the data analysis method used is quantitative analysis to calculate or estimate quantitatively from several factors individually or together with the related variables. The functional relationship between one variable related to the independent variable can be done with multiple linear regression. The research framework is the relationship between one indicator to another indicator that studied. For this framework to connect and explain a topic to be discussed. This flow of thought is expected to provide direction and an overview of the variables to be examined. The variables that will be used in analyzing stock returns are financial ratios that include Debt to Equity Ratio and Return on Equity. The research framework in study can be described as follows (Figure Based on research framework above, we then develop two-equation models as follows: $\text{DPR}=\alpha +{\beta }_{1}\text{DER}+{\beta }_{2}\text{ROE}+\epsilon$(1) $\text{SR}=\alpha +{\beta }_{1}\text{DER}+{\beta }_{2}\text{ROE}+{\beta }_{3}\text{DPR}+\epsilon$(2) where, $\alpha$ is constanta; ${\beta }_{i}\left(i=1,2,3\right)$ are the coefficients of independent variables; and $\epsilon$ is error term. Equation (1) uses to estimate the effect of Debt to Equity Ratio (DER) and Return on Equity (ROE) on Dividend Payout Ratio (DPR), while Equation (2) uses to estimate the effect of Debt to Equity Ratio (DER) and Return on Equity (ROE), and Dividend Payout Ratio (DPR) on Stock Return (SR). According to (Hakimah et al., 2019), a mediating/intervening variable influences the relationship between the independent variables dan the dependent variable. In this study, Dividend Payout Ratio (DPR) determined as a mediating/intervening variable that expected to improve the impact of Debt to Equity Ratio (DER) and Return on Equity (ROE) on Stock Return (SR). Furthermore, in order to examine the effect of mediating/intervening variable, we applied Sobel test that developed by Sobel (1982, 1986). The formula of the Sobel test can be written as follows: where ${S}_{ab}$ is the standard error of indirect effect; ${S}_{a}$ is the standard error of coefficient a; ${S}_{b}$ is the standard error of coefficient b; a is the coefficient of Debt to Equity Ratio (DER) and Return on Equity (ROE) toward Dividend Payout Ratio (DPR) in model 1, respectively; b is the coefficient of Dividend Payout Ratio (DPR) toward Stock Return (SR) in model 2. Meanwhile, the significance of indirect effect between each independent variables toward the dependent variable through mediating/intervening variable determines using formula as follows: Futhermore, the value of t[count] will compare with the value of t[table] (i.e. 1.96). We concluded existence of indirect effect if the value of t[count] higher than the value of t[table], and rejected existence of indirect effect if the value of t[count] lower than the value of t[table]. 4. Results Table 1 reports the result of estimation model 1. It can be seen that the coefficients of Debt to equity Ratio (DER) and Return on Equity (ROE) are positive and statistically significant influences Dividend Payout Ratio (DPR). First, this result shows that a rise 1% of Debt to equity Ratio (DER) will cause Dividend Payout Ratio (DPR) increases 0.38%, vice versa. This result similar as finding by Rehman & Takumi (2012) which also found a positive effect Debt to Equity Ratio on Dividend Payout Ratio, but contrary with finding by Komrattanapanya and Suntraruk (2013), Labhane and Das (2015), as well as Yasa and Wirawati (2016) who found that Debt to Equity Ratio has a negative effect on Dividend Payout Ratio (DPR). Second, our result shows that a rise 1% of Return on Equity (ROE) will cause Dividend Payout Ratio (DPR) increases 0.66%. The findings similar as study by Komala and Nugroho (2013), Sumampow and Murni (2016), Kartika (2015), Yudiana and Yadnyana (2016), as well as Yasa & Wirawati (2016) which also found that Return on Equity (ROE) has a significant positive effect to Dividend Payout Ratio (DPR). Nevertheless, there findings contrast with study by Komala and Nugroho (2013) and Maladjian and Khoury (2014) who found a negative effect Return on Equity (ROE) on Dividend Payout Ratio (DPR). Table 1. The result of equation model 1. Note: ** and *** denotes significant at 5% and 1% levels, respectively. The value of F-statistics shows that Debt to equity Ratio (DER) and Return on Equity (ROE) jointly significant influences Dividend Payout Ratio (DPR). The value of R-square indicated that the capability of all independent variables to interpreting the change of dependent variable is only 20%, while the rest is influences by other indicators that not accounted on the equation models. Meanwhile, the value of Durbin-Watson statistics indicated there is no serial correction issue in the equation model 1. Table 2 reports the result of estimation model 2. It can be seen that the coefficients of Debt to equity Ratio (DER), Return on Equity (ROE) and Dividend Payout Ratio (DPR) are positive and statistically significant influences Stock Return (SR). First, our result shows that a rise 1% of Debt to equity Ratio (DER) will cause Stock Return (SR) increases 0.24%. This finding indicates that higher the value of DER, the more equity funding through debt. It implies that the capital structure of the business more use of debts. The increase in DER can also be caused by the value of own capital is much smaller when compared to debt on external parties. Increasing the use of debt will increase company capital but if the company is unable to manage these funds effectively and efficiently, so a high debt equity ratio will reflect the high debt held by the company. If the company’s Debt to Equity Ratio (DER) is high, there is a possibility that the company’s stock price will be low because if the company makes a profit, the company tends to use that profit to pay its debt compared to dividing the dividend. Nevertheless, our findings opposite with findings by Nurmasari (2017) who found that DER did not any effect on Stock Return (SR). Second, our result found that a rise 1% of Return on Equity (ROE) will cause Stock Return (SR) increases 0.63%. These results indicate an increase and decrease in stock returns will be influenced by Return on Equity information. This ratio illustrates the rate of return on own capital in generating net income. If the Property and Real Estate company shows an increase in ROE, then the company has good management in managing its capital optimally to generate net profit. So that it can improve the welfare and trust of investors who invest their capital in the Property and Real Estate company. Investor confidence will be followed by increased demand for these shares. Then the request will be followed by a rising stock price and then the stock return that will be obtained will also Table 2. The result of equation model 2. Note: ***, **, * denotes significant at 1%, 5% and 10% levels, respectively. The positive influence between Return on Equity on stock returns, shows that investors use ROE contained in the issuer’s financial statements as an analysis tool to obtain a decent stock return. The results of this study support the research conducted by Aziz (2012) and Susilowaty (2011) who also found that Return On Equity has a positive and significant effect on stock returns. Third, our result indicates that a rise 1% of Dividend Payout Ratio (DPR) will cause Stock Return (SR) increases 0.55%. This finding similar as study by Rehman & Takumi (2012) who also found that a rise in Debt to equity ratio (DER) will caused Dividend Payout Ratio (DPR) also increased. The value of F-statistics shows that Debt to equity Ratio (DER), Return on Equity (ROE) and Dividend Payout Ratio (DPR) jointly significant influences Stock Return (SR). The value of R-square indicated that the capability of all independent variables to interpreting the change of dependent variable is only 59%, while the rest is influences by other indicators that not accounted on the equation models. Meanwhile, the value of Durbin-Watson statistics indicated there is no serial correction issue in the equation model 1. The result of Sobel test for estimates the indirect effect of Debt to equity Ratio (DER) on Stock Return (SR) through Dividend Payout Ratio (DPR) as mediating variable is as follows: Then, the value of t[count] obtaining as follows: The value of t[count] (2.297) greater than t[count] (1.96). Therefore, it can be concluded that there is an indirect effect running from Debt to equity Ratio (DER) to Stock Return (SR) through Dividend Payout Ratio (DPR). this finding indicates that a company with a low DER has a low debt level as well, a low debt level is considered to be able to increase the company’s revenue, if the company’s revenue increases, the dividends distributed will also increase, this increase in dividends will also have an impact on increasing stock returns. An increase or decrease in debt will in turn affect the size of the net profit available to par shareholders including dividends received because the obligation to pay debts takes precedence over dividend distribution. Thus the lower the debt ratio will further increase the profitability ratio of a company. The results of this research that support this research conducted by Nazir et al. (2012) who stated that there is an influence of leverage ratio (DER) on stock returns with dividends as a mediating variable. The result of Sobel test for estimates the indirect effect of Return On Equity (ROE) on Stock Returns through Dividend Payout Ratio (DPR) as mediating variable is as follows: Then, the value of t[count] obtaining as follows: From the results, the value of t[count] is 3.748, it is greater than t[table] at 5% significant level (1.96). Based on this finding, it can be concluded that there is an indirect effect running from Return on Equity (ROE) to Stock Return (SR) through Dividend Payout Ratio (DPR). This finding implies that a company with a high Return on Equity (ROE) has enough profit to pay dividends to its shareholders. The higher ROE, the dividends distributed to shareholders will also be higher, so stock returns will also go up. Dividend policy provides information about the company’s profit growth in the future, this information will invite a response from investors which in turn will affect the company’s return (Baah et al., 2014). In accordance with the signaling hypothesis theory, it can be stated that the higher ROE shows that the company has a high profit as well, so this becomes good news or a good signal for investors and will have an impact on increasing dividends paid, thus also increasing stock returns. The result supported the study conducted by Rahmaninia (2016) who stated that there is an effect of the profitability ratio (ROE) on stock returns with dividends as a mediating variable. 5. Conclusion Based on our study, we concluded that DER (Debt to Equity Ratio) and Return on Equity (roe) have significant effects to dividen payout ratio (dpr), while an increase or decrease in Stock Return (SR) influenced by DER (Debt to Equity Ratio), Return on Equity (ROE) and Dividen Payout ratio (DPR). Moreover, we also found that DPR (Dividen Payout ratio), as mediating variable, stimulates the effect of Debt to Equity Ratio (DER) and Return on Equity (ROE) toward Stock Return (SR) on the property and real estate companies in Indonesia. Based on these findings, we recommended companies to pay attention to the quality of the Debt to Equity ratio (DER) and Return on Equity (ROE) in the company in order to provide more benefit to the shareholders. Furthermore, investors who are interested in investing in stock securities should always pay attention to the company’s financial and non-financial factors. One of them is by paying attention to solvency, profitability, stock returns, and dividend policy. This is because there are many factors that affect the return on investment. Investors who expect returns in the form of dividends should pay attention to the company’s Return on Equity (ROE), because ROE has the greatest and most significant effect on the Dividend Payout Ratio (DPR). Overall, our results show that although theoretically not all variables affect stock returns, our research proves that DER, ROE, and DPR have a significant effect on stock returns. This is possible because of differences in objects, periods, samples used in the study. Nevertheless, our study has several limitations. First, we only consider the Debt to Equity Ratio (DER), Return on Equity (ROE) and Dividend Payout Ratio (DPR) as indicators that affect stock returns. Second, in our research we only collected annual data for eighteen property and real estate companies in Indonesia over the period of 2014-2018. Therefore, further studies are expected to consider other indicators as exogenous variables to investigate the factors that affect the company’s stock returns. Moreover, we also suggest future studies to improve the number of samples and periods of data that use in the study for better results. Furthermore, this study is expected to be useful to enrich the concepts or theories that support the development of knowledge about financial management, especially study related to investment decisions, solvency ratios, profitability ratios, and dividend policies.
{"url":"https://scirp.org/journal/paperinformation?paperid=102690","timestamp":"2024-11-08T08:14:34Z","content_type":"application/xhtml+xml","content_length":"165863","record_id":"<urn:uuid:39275f7d-e570-4072-b38c-a2dbbc21777c>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00665.warc.gz"}
Physics:Bose–Einstein condensation (network theory) Short description: Occurrence in network theory Bose–Einstein condensation in networks^[1] is a phase transition observed in complex networks that can be described by the Bianconi–Barabási model.^[2] This phase transition predicts a "winner-takes-all" phenomena in complex networks and can be mathematically mapped to the mathematical model explaining Bose–Einstein condensation in physics. In physics, a Bose–Einstein condensate is a state of matter that occurs in certain gases at very low temperatures. Any elementary particle, atom, or molecule, can be classified as one of two types: a boson or a fermion. For example, an electron is a fermion, while a photon or a helium atom is a boson. In quantum mechanics, the energy of a (bound) particle is limited to a set of discrete values, called energy levels. An important characteristic of a fermion is that it obeys the Pauli exclusion principle, which states that no two fermions may occupy the same state. Bosons, on the other hand, do not obey the exclusion principle, and any number can exist in the same state. As a result, at very low energies (or temperatures), a great majority of the bosons in a Bose gas can be crowded into the lowest energy state, creating a Bose–Einstein condensate. Bose and Einstein have established that the statistical properties of a Bose gas are governed by the Bose–Einstein statistics. In Bose–Einstein statistics, any number of identical bosons can be in the same state. In particular, given an energy state ε, the number of non-interacting bosons in thermal equilibrium at temperature T = 1/β is given by the Bose occupation number [math]\displaystyle{ n(\varepsilon)=\frac{1}{e^{\beta(\varepsilon-\mu)}-1} }[/math] where the constant μ is determined by an equation describing the conservation of the number of particles [math]\displaystyle{ N=\int d\varepsilon \, g(\varepsilon) \, n(\varepsilon) }[/math] with g(ε) being the density of states of the system. This last equation may lack a solution at low enough temperatures when g(ε) → 0 for ε → 0. In this case a critical temperature T[c] is found such that for T < T[c] the system is in a Bose-Einstein condensed phase and a finite fraction of the bosons are in the ground state. The density of states g(ε) depends on the dimensionality of the space. In particular [math]\displaystyle{ g(\varepsilon)\sim \varepsilon^{\frac{d-2}{2}} }[/math] therefore g(ε) → 0 for ε → 0 only in dimensions d > 2. Therefore, a Bose-Einstein condensation of an ideal Bose gas can only occur for dimensions d > 2. The concept The evolution of many complex systems, including the World Wide Web, business, and citation networks, is encoded in the dynamic web describing the interactions between the system’s constituents. The evolution of these networks is captured by the Bianconi-Barabási model, which includes two main characteristics of growing networks: their constant growth by the addition of new nodes and links and the heterogeneous ability of each node to acquire new links described by the node fitness. Therefore the model is also known as fitness model. Despite their irreversible and nonequilibrium nature, these networks follow the Bose statistics and can be mapped to a Bose gas. In this mapping, each node is mapped to an energy state determined by its fitness and each new link attached to a given node is mapped to a Bose particle occupying the corresponding energy state. This mapping predicts that the Bianconi–Barabási model can undergo a topological phase transition in correspondence to the Bose–Einstein condensation of the Bose gas. This phase transition is therefore called Bose-Einstein condensation in complex networks. Consequently addressing the dynamical properties of these nonequilibrium systems within the framework of equilibrium quantum gases predicts that the “first-mover-advantage,” “fit-get-rich (FGR),” and “winner-takes-all” phenomena observed in a competitive systems are thermodynamically distinct phases of the underlying evolving networks.^[1] The mathematical mapping of the network evolution to the Bose gas Starting from the Bianconi-Barabási model, the mapping of a Bose gas to a network can be done by assigning an energy ε[i] to each node, determined by its fitness through the relation^[1]^[3] [math]\displaystyle{ \varepsilon_i=-\frac{1}{\beta}\ln{\eta_i} }[/math] where β = 1 / T . In particular when β = 0 all the nodes have equal fitness, when instead β ≫ 1 nodes with different "energy" have very different fitness. We assume that the network evolves through a modified preferential attachment mechanism. At each time a new node i with energy ε[i] drawn from a probability distribution p(ε) enters in the network and attach a new link to a node j chosen with [math]\displaystyle{ \Pi_j=\frac{e^{-\beta\varepsilon_j}k_j}{\sum_r e^{-\beta\varepsilon_r}k_r}. }[/math] In the mapping to a Bose gas, we assign to every new link linked by preferential attachment to node j a particle in the energy state ε[j]. The continuum theory predicts that the rate at which links accumulate on node i with "energy" ε[i] is given by [math]\displaystyle{ \frac{\partial k_i(\varepsilon_i,t,t_i)}{\partial t}=m\frac{e^{-\beta\varepsilon_i}k_i(\varepsilon_i,t,t_i)}{Z_t} }[/math] where [math]\displaystyle{ k_i(\varepsilon_i,t, t_i) }[/math] indicating the number of links attached to node i that was added to the network at the time step [math]\displaystyle{ t_i }[/math]. [math]\displaystyle{ Z_t }[/math] is the partition function, defined as: [math]\displaystyle{ Z_t=\sum_i e^{-\beta\varepsilon_i}k_i(\varepsilon_i,t,t_i). }[/math] The solution of this differential equation is: [math]\displaystyle{ k_i(\varepsilon_i,t,t_i)=m\left(\frac{t}{t_i}\right)^{f(\varepsilon_i)} }[/math] where the dynamic exponent [math]\displaystyle{ f(\varepsilon) }[/math] satisfies [math]\displaystyle{ f(\varepsilon)=e^{-\beta(\varepsilon-\mu)} }[/math], μ plays the role of the chemical potential, satisfying the equation [math]\displaystyle{ \int d\varepsilon \, p(\varepsilon) \frac{1}{e^{\beta(\varepsilon-\mu)}-1}=1 }[/math] where p(ε) is the probability that a node has "energy" ε and "fitness" η = e^−βε. In the limit, t → ∞, the occupation number, giving the number of links linked to nodes with "energy" ε, follows the familiar Bose statistics [math]\displaystyle{ n(\varepsilon)=\frac{1}{e^{\beta(\varepsilon -\mu)}-1}. }[/math] The definition of the constant μ in the network models is surprisingly similar to the definition of the chemical potential in a Bose gas. In particular for probabilities p(ε) such that p(ε) → 0 for ε → 0 at high enough value of β we have a condensation phase transition in the network model. When this occurs, one node, the one with higher fitness acquires a finite fraction of all the links. The Bose–Einstein condensation in complex networks is, therefore, a topological phase transition after which the network has a star-like dominant structure. Bose–Einstein phase transition in complex networks The mapping of a Bose gas predicts the existence of two distinct phases as a function of the energy distribution. In the fit-get-rich phase, describing the case of uniform fitness, the fitter nodes acquire edges at a higher rate than older but less fit nodes. In the end the fittest node will have the most edges, but the richest node is not the absolute winner, since its share of the edges (i.e. the ratio of its edges to the total number of edges in the system) reduces to zero in the limit of large system sizes (Fig.2(b)). The unexpected outcome of this mapping is the possibility of Bose–Einstein condensation for T < T[BE], when the fittest node acquires a finite fraction of the edges and maintains this share of edges over time (Fig.2(c)). A representative fitness distribution ρ(η) that leads to a condensations [math]\displaystyle{ \rho(\eta)=(\lambda+1)(1-\eta)^\lambda }[/math] with λ = 1. However, the existence of the Bose–Einstein condensation or the fit-get-rich phase does not depend on the temperature or β of the system but depends only on the functional form of the fitness distribution ρ(ν) of the system. In the end, β falls out of all topologically important quantities. In fact, it can be shown that Bose–Einstein condensation exists in the fitness model even without mapping to a Bose gas.^[4] A similar gelation can be seen in models with superlinear preferential attachment,^[5] however, it is not clear whether this is an accident or a deeper connection lies between this and the fitness model. Bose–Einstein condensation in evolutionary models and ecological systems In evolutionary models, each species reproduces proportionally to its fitness. In the infinite alleles model, each mutation generates a new species with a random fitness. This model was studied by the statistician J. F. C. Kingman and is known as the "house of cards" models.^[6] Depending on the fitness distribution, the model shows a condensation phase transition. Kingman did not realize that this phase transition could be mapped to a Bose–Einstein condensation.
{"url":"https://handwiki.org/wiki/Physics:Bose%E2%80%93Einstein_condensation_(network_theory)","timestamp":"2024-11-03T13:47:29Z","content_type":"text/html","content_length":"73793","record_id":"<urn:uuid:1f22cdea-8d72-4d21-be76-11c5e01735ac>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00278.warc.gz"}
Neural Networks Lecture What is a neural network • The single Neuron □ Weighted Input □ Activation • The network model □ Input/Output □ Weights □ Activation Function • The Tensor Model Output and Loss Function • Classification versus Regression □ Two-class classification (0 or 1) □ Regression \(y=f(x)\in \mathbb{R}\) □ Multiclass \(y=(y_1,y_2,\ldots,y_n)\) ☆ \(y_i=1\) means membership in class 1 • Soft-decision: \(y\) is a continuous variable □ higher values are more probably one • Loss functiion □ Mean-squared error (common for regression) \[L = (x-y)^2\] □ Cross Entropy (common for classification) \[L = \log \frac{ \exp x_{y} } { \sum \exp x_i }\] □ There are others • Optimisation problem □ tune the weights to minimise the loss function □ if the activation function is differentiable, the entire system is □ different optimisation algorithms; trust the API or do a more advanced module Activation Functions • Threshold functions • Approximations to the threshold function • Logistic: \(f(x) = \frac1{1+e^{-\beta x}}\) • ReLU: \(f(x)=\max(x,0)\) Two main contenders. • TensorFlow • PyTorch □ A replacement for NumPy to use the power of GPUs and other accelerators. □ An automatic differentiation library that is useful to implement neural networks. Note that PyTorch replaces NumPy; i.e. it is primarily a python tool, and operaes in the object oriented framework of python. The reason for using PyTorch in these examples is primarily that I have lately been working off some code created by some final year students this Spring, and they happened to choose PyTorch. The choice of TensorFlow or PyTorch is otherwise arbitrary. Sample Program Loss Functions and Evaluation • Accuracy: ratio of correctly classified items • What is the difference between a rate and a probability? • Statistics □ Standard deviation □ Hypothesis Tests □ Confidence Interval • Other heuristics Computational Power • Neural Networks are Computationally Expensive • GPU or CPU - what’s the difference? □ what resources do you have? • Remedies □ Reduce image resolution □ Reduce number of images □ Reduce number of epochs • In particular, it is necessary to sacrifice accuracy during development and testing. • In the final stages you may need big datasets to achieve satisfactory results, and then you may need more computing power.
{"url":"http://www.hg.schaathun.net/maskinsyn/ANN","timestamp":"2024-11-11T01:54:44Z","content_type":"application/xhtml+xml","content_length":"12442","record_id":"<urn:uuid:3b766b5f-c676-49d7-956f-a67c43ed869f>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00111.warc.gz"}
Correction For Spatial And Temporal Auto-Correlation In Panel Data: Using R To Estimate Spatial HAC Errors Per Conley Darin Christensen and Thiemo Fetzer tl;dr: Fast computation of standard errors that allows for serial and spatial auto-correlation. Economists and political scientists often employ panel data that track units (e.g., firms or villages) over time. When estimating regression models using such data, we often need to be concerned about two forms of auto-correlation: serial (within units over time) and spatial (across nearby units). As Cameron and Miller (2013) note in their excellent guide to cluster-robust inference, failure to account for such dependence can lead to incorrect conclusions: “[f]ailure to control for within-cluster error correlation can lead to very misleadingly small standard errors…†(p. 4). Conley (1999, 2008) develops one commonly employed solution. His approach allows for serial correlation over all (or a specified number of) time periods, as well as spatial correlation among units that fall within a certain distance of each other. For example, we can account for correlated disturbances within a particular village over time, as well as between that village and every other village within one hundred kilometers. As with serial correlation, spatial correlation can be positive or negative. It can be made visually obvious by plotting, for example, residuals after removing location fixed effects. Example Visualization of Spatial Correlation from Radil, S. Matthew, Spatializing Social Networks: Making Space for Theory In Spatial Analysis, 2011. We provide a new function that allows R users to more easily estimate these corrected standard errors. (Solomon Hsiang (2010) provides code for STATA, which we used to test our estimates and benchmark speed.) Moreover using the excellent lfe, Rcpp, and RcppArmadillo packages (and Tony Fischetti’s Haversine distance function), our function is roughly 20 times faster than the STATA equivalent and can scale to handle panels with more units. (We have used it on panel data with over 100,000 units observed over 6 years.) This demonstration employs data from Fetzer (2014), who uses a panel of U.S. counties from 1999-2012. The data and code can be downloaded here. STATA Code: We first use Hsiang’s STATA code to compute the corrected standard errors (spatHAC in the output below). This routine takes just over 25 seconds. cd "~/Dropbox/ConleySEs/Data" clear; use "new_testspatial.dta" tab year, gen(yy_) tab FIPS, gen(FIPS_) timer on 1 ols_spatial_HAC EmpClean00 HDD yy_*FIPS_2-FIPS_362, lat(lat ) lon(lon ) t(year) p(FIPS) dist(500) lag(5) bartlett disp # *----------------------------------------------- # * Variable | OLS spatial spatHAC # *-------------+--------------------------------- # * HDD | -0.669 -0.669 -0.669 # * | 0.608 0.786 0.838 timer off 1 timer list 1 # 1: 24.8 / 3 = 8.2650 R Code: Using the same data and options as the STATA code, we then estimate the adjusted standard errors using our new R function. This requires us to first estimate our regression model using the felm function from the lfe package. # Loading sample data: dta_file <- "~/Dropbox/ConleySEs/Data/new_testspatial.dta" DTA <-data.table(read.dta(dta_file)) setnames(DTA, c("latitude", "longitude"), c("lat", "lon")) # Loading R function to compute Conley SEs: ptm <-proc.time() # We use the felm() from the lfe package to estimate model with year and county fixed effects. # Two important points: # (1) We specify our latitude and longitude coordinates as the cluster variables, so that they are included in the output (m). # (2) We specify keepCx = TRUE, so that the centered data is included in the output (m). m <-felm(EmpClean00 ~HDD -1 |year +FIPS |0 |lat +lon, data = DTA[!is.na(EmpClean00)], keepCX = TRUE) coefficients(m) %>%round(3) # Same as the STATA result. We then feed this model to our function, as well as the cross-sectional unit (county FIPS codes), time unit (year), geo-coordinates (lat and lon), the cutoff for serial correlation (5 years), the cutoff for spatial correlation (500 km), and the number of cores to use. SE <-ConleySEs(reg = m, unit = "FIPS", time = "year", lat = "lat", lon = "lon", dist_fn = "SH", dist_cutoff = 500, lag_cutoff = 5, cores = 1, verbose = FALSE) sapply(SE, sqrt) %>%round(3) # Same as the STATA results. OLS Spatial Spatial_HAC 0.608 0.786 0.837 proc.time() -ptm user system elapsed 1.619 0.055 1.844 Estimating the model and computing the standard errors requires just over 1 second, making it over 20 times faster than the comparable STATA routine. R Using Multiple Cores: Even with a single core, we realize significant speed improvements. However, the gains are even more dramatic when we employ multiple cores. Using 4 cores, we can cut the estimation of the standard errors down to around 0.4 seconds. (These replications employ the Haversine distance formula, which is more time-consuming to compute.) pkgs <-c("rbenchmark", "lineprof") invisible(sapply(pkgs, require, character.only = TRUE)) bmark <-benchmark(replications = 25, columns = c('replications','elapsed','relative'), ConleySEs(reg = m, unit = "FIPS", time = "year", lat = "lat", lon = "lon", dist_fn = "Haversine", lag_cutoff = 5, cores = 1, verbose = FALSE), ConleySEs(reg = m, unit = "FIPS", time = "year", lat = "lat", lon = "lon", dist_fn = "Haversine", lag_cutoff = 5, cores = 2, verbose = FALSE), ConleySEs(reg = m, unit = "FIPS", time = "year", lat = "lat", lon = "lon", dist_fn = "Haversine", lag_cutoff = 5, cores = 4, verbose = FALSE)) bmark %>%mutate(avg_eplased = elapsed /replications, cores = c(1, 2, 4)) replications elapsed relative avg_eplased cores 1 25 23.48 2.095 0.9390 1 2 25 15.62 1.394 0.6249 2 3 25 11.21 1.000 0.4483 4 Given the prevalence of panel data that exhibits both serial and spatial dependence, we hope this function will be a useful tool for applied econometricians working in R. Feedback Appreciated: Memory vs. Speed Tradeoff This was Darin’s first foray into C++, so we welcome feedback on how to improve the code. In particular, we would appreciate thoughts on how to overcome a memory vs. speed tradeoff we encountered. (You can email Darin at darinc[at]stanford.edu.) The most computationally intensive chunk of our code computes the distance from each unit to every other unit. To cut down on the number of distance calculations, we can fill the upper triangle of the distance matrix and then copy it to the lower triangle. With [math]N[/math] units, this requires only  [math](N (N-1) /2)[/math] distance calculations. However, as the number of units grows, this distance matrix becomes too large to store in memory, especially when executing the code in parallel. (We tried to use a sparse matrix, but this was extremely slow to fill.) To overcome this memory issue, we can avoid constructing a distance matrix altogether. Instead, for each unit, we compute the vector of distances from that unit to every other unit. We then only need to store that vector in memory. While that cuts down on memory use, it requires us to make twice as many   [math](N (N-1))[/math]  distance calculations. As the number of units grows, we are forced to perform more duplicate distance calculations to avoid memory constraints – an unfortunate tradeoff. (See the functions XeeXhC and XeeXhC_Lg in R version 3.2.2 (2015-08-14) Platform: x86_64-apple-darwin13.4.0 (64-bit) Running under: OS X 10.10.4 (Yosemite) [1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8 attached base packages: [1] stats graphics grDevices utils datasets methods [7] base other attached packages: [1] RcppArmadillo_0.5.400.2.0 Rcpp_0.12.0 [3] geosphere_1.4-3 sp_1.1-1 [5] lfe_2.3-1709 Matrix_1.2-2 [7] ggplot2_1.0.1 foreign_0.8-65 [9] data.table_1.9.4 dplyr_0.4.2 [11] knitr_1.11 loaded via a namespace (and not attached): [1] Formula_1.2-1 magrittr_1.5 MASS_7.3-43 [4] munsell_0.4.2 xtable_1.7-4 lattice_0.20-33 [7] colorspace_1.2-6 R6_2.1.1 stringr_1.0.0 [10] plyr_1.8.3 tools_3.2.2 parallel_3.2.2 [13] grid_3.2.2 gtable_0.1.2 DBI_0.3.1 [16] htmltools_0.2.6 yaml_2.1.13 assertthat_0.1 [19] digest_0.6.8 reshape2_1.4.1 formatR_1.2 [22] evaluate_0.7.2 rmarkdown_0.8 stringi_0.5-5 [25] compiler_3.2.2 scales_0.2.5 chron_2.3-47 [28] proto_0.3-10 Leveraging R for Econ Job Market [UPDATE] I just was told that the new features on EJM actually allow you to download an XLS spreadsheet of the job listings on EJM. This is accessible when you login to myeconjobmarket.org and is part of their new AIMS (Application and Interview Management System). I wanted to describe a little helper I am using to help refine the places I want to apply at since I am going to be on the Economics Job Market this year. The two main websites were job openings are advertised are: Now JOE has a really nice feature where you can download simply all job openings into a nice spreadsheet. This allows you to browse through and refine your search. Econ Job Market does not have such a feature. The Listing page is quite annoying… If you want more details for a job opening, such as a description of the fields and application requirements, you will have to click on “more info…”. In the JOE spreadsheet, you have all that information at once. I wanted to create a JOE like spreadsheet using the openings from EJM. Some of which, of course, do overlap. But for some reason, some jobs are only on EJM but not on JOE. So how can we use R to help us do that? The first thing I wanted to do is get the simple listings on the main application page from EJM as a spreadsheet. You can simply download the main listings file and extract the pieces of information. Most important is the “posid” field, which is the position ID contained in the EJM URL. This will give you a direct link to the HTML page of the job opening and it also tells you whether you can apply through EJM. This leaves you with a listing of EJM links to jobs, their position ID and the EJM Application Link in case the Job Opening accepts applications through EJM. Now you can proceed to simply download all the HTML files using a batch downloader such as DownThemAll. If you want to do that, you can print out a list: cat(EJM$url, sep="\n") and enter them into Down Them All. Alternatively you can iterate through the list of links and send HTTP queries to get the details from each job separately. This is part of the next lines of code: This renders us with a nice csv that we can browse quickly through in Excel…for convenience you can download the listings as of 23-10-2014 below. Good luck for your applications! Is rainfall reporting endogenous to conflict? For my paper on the impact of social insurance on the dynamics of conflict in India, I use some new remote sensed weather data. The data comes from the Tropical Rainfall Measuring Mission (TRMM) satellites. The satellite carries a set of five instruments, and is essentially a rainfall radar located in outer space. As a robustness check I needed to verify that my main results go through using other rainfall data. In the paper I try to make a humble case in favour of using remote sensed data where possible. The key reason being that the  TRMM data comes from the same set of instruments over time, rather than from input sources that could be varying with e.g., economic conditions. This is a problem that has been identified by climatologist, who try to correct for systematic biases that could arise from the fact that weather stations are more likely to be located in places with a lot of economic At first I was a bit reluctant as it is quite heavy data that needs to be processed. Nevertheless, thorough analysis required me to jump the hoop and obtain secondary rainfall data sources. I chose the GPCC monthly rainfall data for verification of my results, since these have been used by many other authors in the past in similar contexts. The data is based on rain gauge measurements and is available for the past 100 years. The raw data is quite heavy ; the monthly rainfall rate data  for the whole world at at 0.5 degree resolution would amount to about 150 million rows of data for the period from 1961-2010. If you drop the non-land grid cells, this reduces the size dramatically to only 40 million rows. Below is a bit of code that  loads in the data once you have downloaded the ASCII source files from the GPCC website. On my personal website, I make a dta and an rdata file available for the whole world. There are three variables appearing in that order: (1) the rainfall rate, (2) the rainfall normals and (3) an integer that gives the number of reporting rain gauges that fall in a grid cell in a particular month. It turns out that all my results are robust to using this data. However, I do find something that is quite neat. It turns out that, if a district experienced some insurgency related conflict in the previous year,  it is less likely that this district has an active rain gauge reporting data in subsequent years. While it is a no-brainer that places with severe conflict do not have functioning weather reporting, these results suggest  that reporting may also be systematically affected in places with relatively low intensity of conflict – as is the case of India. While I do not want to overstate the importance of this, it provides another justification of why it makes sense for economists to be using  remotely sensed weather data. This is not to say that ground based data is not useful. Quite the reverse, ground based data is more accurate in many ways, which makes it very important for climatologist. As economist, we are worried about systematic measurement error that correlates with the economic variables we are studying. This is were remote sensed data provides advantages as it does not “decide” to become less accurate in places that are e.g. less developed, suffer from conflict or simply, have nobody living there. Here the function to read in the data and match to district centroids, you need some packages. #########LOAD GPPC NOTE THAT YOU NEED TO SUBSET THE DATA IF YOU DONT WANT TO END UP WITH A HUGE loadGPCC<-function(ff, COORDS) { ff,sep=""), header=FALSE, skip=14)))) ###YOU COULD SUBSET THE DATA BY EXTENT HERE IF YOU DONT WANT TO GET IT FOR THE WHOLE WORLD ##E.G. SUBSET FOR BY BOUNDING BOX ##temp<-temp[x>=73 & x<=136 & y>=16 & y<=54] temp<-cbind("year"= yr, "month"=month, temp) ###THIS DEFINES THE GRID STRUCTURE OF THE DATA ###YOU MAY NEED TO ADJUST IF YOU WORK WITH A COORDS<-do.call("rbind", lapply(ys, function(x) cbind("x"=xs,"y"=x))) system.time(GPCC<-do.call("rbind", lapply(1:length(ffs), function(x) loadGPCC(ffs[x], COORDS)))) ###MATCHING THIS TO SHAPEFILE? ###find nearest lat / lon pair ##you may want to vectorise this for(k in 1:nrow(CENTROIDS)) { cat(k," ") GPCC.coords[, c("delx","dely"), with=F]) NEAREST<-rbind(NEAREST, cbind(CENTROIDS[k],GPCC.coords[which(temp== Deploying Shiny Server on Amazon: Some Troubleshoots and Solutions I really enjoyed Treb Allen‘s tutorial on deploying a Shiny server on an Amazon Cloud Instance. I used this approach for my shiny app that is a map highlighting the economic impact of the recent shale oil and gas boom on the places where the actual extraction happens. The easiest way to proceed is to use the AMI Image, which basically is like a virtual box image just running on Amazon Cloud. It has the basic Shiny-server up and running. Along the way, I came across a few troubleshoots for which there are simple solutions. I cant seem to access the Shiny server through the Browser? Right after the installation and setting up of the Amazon Instance, I tried to access the shiny server using the public DNS, in my case that was Public DNS: ec2-54-84-227-28.compute-1.amazonaws.com However, this did not work since the shiny-server is listening on port 3838 and you need to allow incoming traffic on that port. The way to manage that in the EC2 Dashboard is to go change the security group that is assigned to the instance that you are running. You need to add a rule to allow incoming traffic on port 3838. Once this is done, you should be able to go to your public DNS, in my case the request URL in the browser now is: ec2-54-72-74-90.eu-west-1.compute.amazonaws.com:3838/shale-economic-impact/ in your browser Where are the shiny-apps located? The standard shiny apps that are preinstalled are  located in “/var/shiny-server/www” If you ssh into your EC2 instance, you can go to that folder. I installed packages, but my shiny application can not load them? The problem is most likely that you are logged in as ec2-user, where you have your own dedicated library path. In order to install R packages system wide, you need to change to root by doing: sudo -i ##install now your R packages, R CMD INSTALL ... The exit part is important as then you turn off administrator rights. When I run the app, I get Javascript Error  “The application unexpectedly exited. Diagnostic information has been dumped to the JavaScript error console.”? It could be that your EC2 instance is not powerful enough. I had that problem because the dataset that was loaded was too big, which creates a time-out. One way to overcome this is to start a medium instance rather than a micro instance. Please be aware that this is not part of the free usage tier and you will be billed for usage. However, an alternative simple  fix by editing the config file. It could be that you are hitting a time-out. In the shiny-server configuration help, there are two timeouts that can be set in the free shiny server version. app_init_timeout — Describes the amount of time (in seconds) to wait for an application to start. After this many seconds if the R process still has not become responsive, it will be deemed an unsuccessful startup and the connection will be closed. app_idle_timeout — Defines the amount of time (in seconds) an R process with no active connections should remain open. After the last connection disconnects from an R process, this timer will start and, after the specified number of seconds, if no new connections have been created, the R process will be killed. It could be that the javascript error is thrown, because the R process was killed.  You can edit the configuration file to increase the time-out periods, adding: # Instruct Shiny Server to run applications as the user "shiny" run_as shiny; # Define a server that listens on port 3838 server { listen 3838; # Define a location at the base URL location / { # Host the directory of Shiny Apps stored in this directory site_dir /var/shiny-server/www; # Log all Shiny output to files in this directory log_dir /var/log/shiny-server; # When a user visits the base URL rather than a particular application, # an index of the applications available in this directory will be shown. directory_index off; app_init_timeout 250; This brings us right to the next question,… Where do I find my shiny server configuration file? There is a hard coded configuration file, but in the search path there is one located in: here you can do the above edits. After you have  done the edits you want to reload the configuration… How do I reload the Configuration File, how to start or stop the shiny server? #reload without restarting sudo reload shiny-server #stop the shiny server sudo stop shiny-server #start it... sudo stop shiny-server Copying files from your local machine to the AWS Instance? You can use “scp” for secure copying, e.g. To download files from your instance: scp -i frackingshiny-eu-west-1.pem ec2-user@ec2-54-72-74-90.eu-west-1.compute.amazonaws.com:/var/shiny-server/www/shale-economic-impact.zip To upload files to your instance: scp -r -i frackingshiny-eu-west-1.pem “/Users/thiemo/shale-economic-impact.zip” ec2-user@ec2-54-72-74-90.eu-west-1.compute.amazonaws.com:/var/shiny-server/www/ I plan to add more troubleshoots – if you have come across some error for which you had to find a solution, feel free to comment and I ll amend the list. Regressions with Multiple Fixed Effects – Comparing Stata and R In my paper on the impact of the recent fracking boom on local economic outcomes, I am estimating models with multiple fixed effects. These fixed effects are useful, because they take out, e.g. industry specific heterogeneity at the county level – or state specific time shocks. The models can take the form: [math]y_{cist} = \alpha_{ci} + b_{st} + \gamma_{it}+ X_{cist}'\beta + \epsilon_{cist}[/math]  where [math]\alpha_{ci}[/math] is a set of county-industry, [math]b_{ci}[/math] a set of state-time and [math]\gamma_{it}[/math] is a set of industry-time fixed effects. Such a specification takes out arbitrary state-specific time shocks and industry specific time shocks, which are particularly important in my research context as the recession hit tradable industries more than non-tradable sectors, as is suggested in Mian, A., & Sufi, A. (2011). What Explains High Unemployment ? The Aggregate Demand Channel. How can we estimate such a specification? Running such a regression in R with the lm or reg in stata will not make you happy, as you will need to invert a huge matrix. An alternative in Stata is to absorb one of the fixed-effects by using xtreg or areg. However, this still leaves you with a huge matrix to invert, as the time-fixed effects are huge; inverting this matrix will still take ages. However, there is a way around this by applying the Frisch-Waugh Lovell theorem iteratively (remember your Econometrics course?); this basically means you iteratively take out each of the fixed effects in turn by demeaning the data by that fixed effect. The iterative procedure is described in detail in Gaure (2013), but also appears in Guimaraes and Portugal(2010). Simen Gaure has developed an R-package called lfe, which performs the demeaning for you and also provides the possibility to run instrumental variables regressions; it theoretically supports any dimensionality of fixed effects. The key benefit of Simen Gaure’s implementation is the flexibility, the use of C in the background for some of the computing and its support for multicore processing, which speeds up the demeaning process dramatically, especially the larger your samples get.. In Stata there is a package called reg2hdfe and reg3hdfe which has been developed by Guimaraes and Portugal (2010). As the name indicates, these support only fixed effects up to two or three Lets see how – on the same dataset – the runtimes of reg2hdfe and lfe compare. Comparing Performance of Stata and R I am estimating the following specification [math]y_{cist} = \alpha_{ci} + b_{sit} + \gamma_{it}+ X_{cist}'\beta + \epsilon_{cist}[/math]  where [math]\alpha_{ci}[/math] is a set of county-industry, [math]b_{ci}[/math] a set of state-time fixed effects. There are about 3000 counties in the dataset and 22 industries. Furthermore, there are 50 states and the time period is also about 50 quarters. This means – in total – there are 3000 x 22 = 66,000 county-industry fixed effects to be estimated and 22 x 50 x 50 = 55,000 time fixed effects to be estimated. The sample I work with has sufficient degrees of freedom to allow the estimation of such a specification – I work with roughly 3.7 million observations. I have about 10 covariates that are in [math]X_{cist}[/math], i.e. these are control variables that vary within county x industry over state x industry x time. Performance in Stata In order to time the length of a stata run, you need to run set rmsg on, which turns on a timer for each command that is run. The command I run in stata is reg2hdfe logy x1-x10, id1(sitq ) id2(id) cluster(STATE_FIPS ) You should go get a coffee, because this run is going to take quite a bit of time. In my case, it took t=1575.31, or just about 26 minutes. Performance in R In order to make the runs of reg2hdfe and lfe, we need to set the tolerance level of the convergence criterion to be the same in both. The standard tolerance in Stata is set at $$1e^{-6}$$, while for lfe package it is set at $$1e^{-8}$$. In order to make the runs comparable you can set the options in the R package lfe options explicitly: The second change we need to make is to disallow lfe to use multiple cores, since reg2hdfe uses only a single thread. We can do this by setting: Now lets run this in R using: system.time(summary(felm(log(y) ~ x1 + x2 +x3 +x4 + x5 + x6 + x7 +x8 + x9 + x10 + G(id)+G(sitq), data=EMP, cluster=c("STATE_FIPS")))) The procedure converges in a lot quicker than Stata… user system elapsed 208.450 23.817 236.831 It took a mere 4 minutes. Now suppose I run this in four separate threads… user system elapsed 380.964 23.540 177.520 Running this on four threads saves about one minute in processing time; not bad, but not too much gained; the gains from multi-threading increase, the more fixed-effects are added and the larger the samples are. Classi-Compare of Raster Satellite Images – Before and After For my research on the effect of power outages on fertility , we study a period of extensive power rationing that lasted for almost a whole year and affected most of Latin America, but in particular, it affected Colombia. The key difficult was to determine which areas were exposed to the power-outage and the extent to which this was the case. This is not straightforward, since there does not exist household- or even municipality level consumption data. But here is how R and Satellite Data can help. In particular, we study the night light series obtained from the Defense Meterological Sattelite Program, which has been discussed by Jeffrey before. We simply look for abnormal variation in municipality level light-emitting intensity from 1992 to 1993. Here is some code that generates some Raster-Maps using the package rasterVis , and uses jQuery to generate a fancy before and after comparison to highlight the year-on-year changes in light intensity of 1992 compared to 1993. ###load the raster images f151 = raster(tif) f152 = raster(tif) ##crop a smaller window to plot e = extent(-78,-72,2,8) #e = extent(-80,-78,-4.6,-2) rn= crop(f151, e) rn2= crop(f152, e) ### do a logarithmic transformation to highlight places that receive not much, but some light. p <- levelplot(rn, layers=1, margin=FALSE,col.regions = gray(0:100/100)) p + layer(sp.polygons(COLPOB, lwd=.25, linetype=2, col='darkgray')) p <- levelplot(rn2, layers=1, margin=FALSE,col.regions = gray(0:100/100)) p + layer(sp.polygons(COLPOB, lwd=.25, linetype=2, col='darkgray')) Now with this together, you can create a fancy slider as I have seen on KFOR — comparing satellite pictures of towns before and after a tornado went through them. The code is essentially just borrowed from that TV station and it loads the javascript from their server; it is essentially just a clever use of jQuery and is maybe something that could or is already implemented in an R reporting package? Do you know of such a function? Anyways, all you need is a slider.html page that contains the code referring to the two picture sources; the code is simple: <!DOCTYPE html> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script> <script src="http://cache.ltvcms.com/localtv/tornado2/js/jquery.classycompare.js"></script> <link rel="stylesheet" type="text/css" href="http://cache.ltvcms.com/localtv/tornado2/css/jquery.classycompare.css"> <style type="text/css">.sample1 {width:725px; height:725px;}.sample2 {width:725px; height:725px;}.sample3 {width:725px; height:725px;}</style> <div id="wrapper"> <div class="container_6 clearfix"> <section class="main-section grid_6"> <div class="main-content"> <section class="clearfix"> <div class="container" style="position:relative"> <div class="sample1"> <img src="1992municio.png" <img src="1993municio.png" $(window).load(function() { caption: true, reveal: 0.5 </script> </div> This is how it looks — I know the stuff is not perfectly aligned, partly because when cropping the picture I made a mistake and could not be bothered with fixing it. Have fun! Computing Maritime Routes in R Thanks to the attention my paper on the cost of Somali piracy has received, a lot of people have approached me to ask how I computed the maritime routes. It is not a very difficult task using R. The key ingredient is a map of the world, that can be rasterized into a grid; all the landmass needs to be assigned an infinite cost of crossing and last but not least — one needs to compute the actual What packages do I need? The package gdistance does most of the actual work of computing the routes. The wrld_simpl map provides what is needed to generate a raster. Generating a Raster #create a raster from shape files shp <- wrld_simpl r <- raster() r <-rasterize(shp, r, progress='text') After the raster is generated, we can proceed by making landmass impassable for vessels. #make all sea = -999 r[is.na(r)] <- -999 #this turns all landmass to missing r[r>-999] <- NA #assign unit cost to all grid cells in water r[r==-999] <- 1 There are a few more things to do, such as opening up the Suez Canal and some other maritime passages — one needs to find the right grid cells for this task. In the next step we can transform the raster into a transition layer matrix, that comes from the gdistance package. It is a data construct that essentially tells us how one can move from one cell to the other — you can allow diagonal moves by allowing the vessel to move into all 8 adjacent grid cells. There is also a geo-correction necessary, as the diagonals are longer distances than the straight-line moves. tr <- transition(r, mean, directions = 8) tr <- geoCorrection(tr, "c") Well — and thats basically it — of course, there are a few bits and pieces that need additional work — like adding heterogenuous costs as one can imagine exist due to maritime currents and so on. Furthermore, there is a whole logic surrounding the handling of the output and the storing in a local database for further use and so on. But not to bore you with that — how can I obtain the distance between A and B? This uses Dijkstra’s Algorithm and is called through the gdistance function “shortestPath”. AtoB <- shortestPath(tr, as.numeric(start[1:2]), as.numeric(end[1:2]), output = "SpatialLines") Using this output, you can then generate fancy graphs such as … Starting Multiple Stata Instances on Mac I found it useful to have multiple Stata instances running on my Mac, in particular, if I use one instance to clean the data before running merge commands. It is always annoying if the merging does not work out or throws an error and then, one would have to clear the current instance and open the DTA file that was messing up the merge. Its a simple command that allows you to open multiple Stata instances on a Mac: open -n /Applications/Stata12_OSX/StataSE.app You can also define an alias command in your .bash_profile, alias stata='open -n /Applications/Stata12_OSX/StataSE.app' Good luck! R function: generate a panel data.table or data.frame to fill with data I have started to work with R and STATA together. I like running regressions in STATA, but I do graphs and setting up the dataset in R. R clearly has a strong comparative advantage here compared to STATA. I was writing a function that will give me a (balanced) panel-structure in R. It then simply works by joining in the additional data.tables or data.frames that you want to join into it. It consists of two functions: timeVector <- function(starttime,endtime,timestep="months") { starttime<- as.POSIXct(strptime(starttime, '%Y-%m-%d')) endtime<- as.POSIXct(strptime(endtime, '%Y-%m-%d')) if(timestep=="quarters") { ret<-seq(from=as.POSIXct(starttime), to=as.POSIXct(endtime), by=timestep) quarter <- gsub("(^[123]{1}$)", 1, month(ret)) quarter <- gsub("(^[456]{1}$)", 2, quarter) quarter <- gsub("(^[789]{1}$)", 3, quarter) quarter <- as.numeric(gsub("(^[102]{2}$)", 4, quarter)) } else { ret<-seq(from=as.POSIXct(starttime), to=as.POSIXct(endtime), by=timestep) This first function generates the time-vector, you need to tell it what time-steps you want it to have. panelStructure <- function(group,timevec) { tt2 <- as.character(sort(rep(group,length(timevec)))) mat <- cbind("group"=data.frame(tt2),"timevec"=data.frame(tt)) This second function then generates the panel-structure. You need to give it a group vector, such as for example a vector of district names and you need  to pass it the time vector that the other function created. Hope this is helpful to some of you. Removing Multibyte Characters from Strings I was a bit annoyed by the error when loading a dataset that contains multi-byte characters. R basically just chokes on them. I have not really understood the intricacies of this, but it was basically just an annoyance and since I did not really use these characters in the strings containing them, I just wanted to remove them. The easiest solution was to use Vim with the following search and replace: Downloading All Your Pictures From iPad or iPhone I really disklike iTunes, it is the worst piece of software I have ever come accross.  I would say that Windows has been getting better and better. I had the following problem: I uploaded quite a few pictures via iTunes onto my iPad, just because its nice to look at pictures on that machine. However, the machine with which I did the syncing broke and needed repair and somehow, I forgot to save these pictures onto a  hard drive for backup. So the only place where these pictures now rest is on my iPad. iTunes wont allow you to copy pictures on your iPad onto a machine (only the pictures that you atually take with the iPad). This is because, these pictures *should*  be on the machine with which you synced your iPad in the first place. However, this was not true in my case anymore. Now you could either invest some money and purchase an app that allows you to copy your picture albums from the iPad onto a Windows machine. There is e.g. CopyTrans Suite, which is a bit costly and in the version I tried, did not copy the full resolution of the pictures (which is a rip-off!). So I was looking into a cheap and quick solution to get the original full resolution pictures down from my iPad. Setting things up: installing free app “WiFi Photo” This app basically makes your photo albums available on a local webserver. Once you start the app on the iPad, it tells you an URL to brows to on your local machine. There you can see all the pictures that are on your iPad. You could now use this app to manually download the pictures, however, it is limited to 100 pictures at once and you will not get the full resolution pictures if you do a batch download. If you browse through the app, you will notice that the URL to the full resolution pictures has the following form: where the “0” stands for the album ID. If you have, say 2 albums on the iPad, this would take values “0” or “1”. Images are stored as consecutive numbers in each album, so the following link would go to picture number 564 in full resolution in album 0. So we will exploit this structure to do an automated batch download. Doing an automated batch download First, in order for this to work you need to get a a local PHP installation up and running. If you are really lazy, you could just install XAMPP. However, you can implement the code in any other coding language, e.g. in R as well. To download all the pictures, you need to adjust and run the following script for($k=0;$k<=3;$k++) { for($i=1;$i<=1000;$i++) { //adjust this $url = "http://192.168.1.9:15555/".$k."/fr_".$i.".jpg"; //adjust this $fn = "C:/Dokumente und Einstellungen/Thiemo/Desktop/Kolumbien/".$k."-".$i.".jpg"; //to make sure you dont redownload a file already downloaded if you want //to run the script several times if(!file_exists($fn)) { if($content = file_get_contents($url)) { $fp = fopen($fn,"a+"); fwrite($fp, $content); What this script does it iterates through the albums (the first loop), in my case I have four albums. The second loop then iterates through the pictures, I simply assume that there are at most 1000 pictures in each album. Clearly, this can be made smarter, i.e. automatically find out how many pictures in each album, but this works and thats all we need. I would recommend running the script a few times, as sometimes it is not able to retrieve the content and then, no file is created. By adding the “file_exists” check, I make sure that no picture, that has been downloaded already, is downloaded again. So if you run the script several times, it will be quicker and quicker to also pick up the last missing pictures. Running the script takes some time as it needs to copy down each picture, and in my case this were a rough 2000 pictures. But now, they are back in the safe haven of my local machine. Microfinance in India: Getting a sense of the geographic distribution I am working on a review paper on microfinance in India and use data from the MIX market. Today, I was amazed by how quick I conjured a map of India with the headquarters of the microfinance institutions that report data to the MIX market depicted on that map. Ideally, I would have more geolocation data – but this is hard to come by. But what we can clearly see is the clustering of institutions in big cities and in the south, which was hit hardest by the recent crisis. Microfinance Institutions across India I dont think anybody has produced such a map before. In fact, I can do this for all institutions reporting data around the world, which may be interesting to see. Also, I already tried to make the size of the dot proportional to e.g. measures of real yield or color-coding the nearest neighborhood (say the neigbhouring districts) by the average loan sizes reported. Lots of things to do. Maybe thats something for the guys at MIX Market or for David Roodman who, I think has finished his open book. The key difficulty was actually not in plotting the map (though it took some time), but in obtaining geo-data on where the headquarters of the microfinance institutions are located. I managed to obtain this data – though its not perfect – by making calls to the Google MAP API via a PHP script., basically using the following two functions: Continue reading Microfinance in India: Getting a sense of the geographic distribution R Function Binding Vectors and Matrices of Variable Length, bug fixed Now this is something very geeky, but useful. I had to bind two matrices or vectors together to become a bigger matrix. However, they need not have the same number of rows or even the same row names. The standard cbind() functions require the vectors or matrices to be compatible. The matching is “stupid”, in the sense that it ignores any order or assumes that the elements which are to be joined into a matrix have the same row names. This, of course, need not be the case. A classical merge command would fail here, as we dont really know what to merge by and what to merge on. Ok… I am not being clear here. Suppose you want to merge two vectors A 2 B 4 C 3 G 2 B 1 C 3 E 1 now the resulting matrix should be A 2 NA B 4 1 C 3 3 E NA 1 G NA 2 Now the following Rfunction allows you to do this. It is important however, that you assign rownames to the objects to be merged (the A,B,C,E,G in the example), as it does matching on these. cbindM <- function(A, v, repl=NA) { dif <- setdiff(union(rownames(A),rownames(v)),intersect(rownames(A),rownames(v))) #if names is the same, then a simple cbind will do if(length(dif)==0) { A<- cbind(A,v[match(rownames(A),rownames(v))]) rownames(A) <- rownames(v) } else if(length(dif)>0) {#sets are not equal, so either matrix is longer / shorter #this tells us which elements in dif are part of A (and of v) respectively for(i in dif) { if(is.element(i,rownames(A))) { #element is in A but not in v, so add it to v and then a temp<-matrix(data =repl, nrow = 1, ncol = ncol(v), byrow = FALSE, dimnames =list(i)) v <- rbind(v,temp) } else { # element is in v but not in A, so add it to A temp<-matrix(data = repl, nrow = 1, ncol = ncol(A), byrow = FALSE, dimnames =list(i)) Note: 09.11.2011: I fixed a bug and added a bit more functionality. You can now tell it, with what you want the missing data to be replaced. Its standard to replace it with NA but you could change it to anything you want.
{"url":"http://freigeist.devmag.net/category/programming","timestamp":"2024-11-04T14:09:49Z","content_type":"text/html","content_length":"102087","record_id":"<urn:uuid:319c6b68-bb6e-40c9-90e4-d7ab0c44b1f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00439.warc.gz"}
How dense is the singularity of a black hole? Typically, M for a black hole in our galaxy is around 10 times the mass of the Sun, but for supermassive black holes at the centers of galaxies it can be millions or even billions. What is its density?…Density of Black Holes. Material ρ / g/cm3 The Inner Core ~13.000 Uranium 19.100 Iridium 22.500 The core of the Sun ~150.000 Are all black hole singularities the same size? They are not identical because the mass of those points would be different. The density is not an issue. So how does one have a larger horizon. Again it’s the mass that decides the radius of the event horizon, not the density. Can the singularity of a black hole have infinite density? A singularity is a point in space where there is a mass with infinite density. Singularities are predicted to exist in black holes by Einstein’s theory of general relativity, which is a theory that has done remarkably well at matching experimental results. Is the singularity infinitely dense? A singularity is a place of infinite density, and that’s not really a thing. A black hole singularity is a point in spacetime – like you live in the universe and you can point – there’s a singularity like right over there, or over there or over there. What is the difference between a singularity and a black hole? The singularity is the centre of a black hole. A black hole by definition is a region of space with a very strong gravitational field where even light cannot escape past a certain point, called the event horizon or “the point of no return.” Is a black hole more dense than a neutron star? Black holes are astronomical objects that have such strong gravity, not even light can escape. Neutron stars are dead stars that are incredibly dense. Both objects are cosmological monsters, but black holes are considerably more massive than neutron stars. How can a black hole be infinitely dense? Around the singularity, particles and materials are compressed. As matter collapses into a black hole, its density becomes infinitely large because it must fit into a point that, according to equations, is so small that it has no dimensions. Is a singularity infinitely small? At the center of a black hole is what physicists call the “singularity,” or a point where extremely large amounts of matter are crushed into an infinitely small amount of space. What is the maximum density of a black hole? The matter density needed to form such a black hole is extremely high – about 2 x 1019 kg per cubic metre. That’s more extreme than the density of an atomic nucleus. Animated simulation of gravitational lensing caused by a black hole going past a background galaxy. Are black holes a singularity? In the center of a black hole is a gravitational singularity, a one-dimensional point which contains a huge mass in an infinitely small space, where density and gravity become infinite and space-time curves infinitely, and where the laws of physics as we know them cease to operate.
{"url":"https://profound-information.com/how-dense-is-the-singularity-of-a-black-hole/","timestamp":"2024-11-11T17:58:35Z","content_type":"text/html","content_length":"58969","record_id":"<urn:uuid:11655712-1dde-40d8-932d-e581de253dbb>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00869.warc.gz"}
Understanding Logistic Regression - Uhura Solutions We give a short introduction to the logistic regression model. Logistic regression is simply an extension of the linear regression model. We introduce a few new statistical concepts, but they are relatively simple and within reach of anyone who can use linear models. Logistic regression provides the foundation for more sophisticated machine learning techniques. Author: Goran Sukovic, PhD in Mathematics, Faculty of Natural Sciences and Mathematics, University Many algorithms and techniques in machine learning are borrowed from statistics. To describe properties of population growth in ecology, statisticians developed the logistic function, also called the sigmoid function, given by The graph of the sigmoid function is an S-shaped curve that can take any real-valued number and map it to a value between 0 and 1, as it’s shown in Figure 1 (input values are taken from segment [-5,5 This article discusses the basics of Logistic Regression, especially binary logistic regression, which is an example of a generalized linear model. In Binary Logistic Regression, we have a binary output (response variable) which is related to a set of discrete and/or continuous input or explanatory variables. Logistic regression models the probabilities for classification problems with two possible outcomes. It can be seen as an extension of the linear regression model for classification problems. In linear regression, the expected values of the output are modeled based on a linear combination of values taken by the input variables. On the other hand, in logistic regression the probability and/or odds of the output variable taking a particular value is modeled based on a combination of values taken by the input variables. Before we dig deep into details of logistic regression, we need to explain probability and odds. The probability that an event will occur is the fraction of times you expect to see that event in many trials. If the probability of an event occurring is Y, then the probability of the event not occurring is 1-Y. 1. If the probability of an event is 0.70 (70%), then the probability that the event will not occur is 1-0.70 = 0.30, or 30%. 2. When you toss a coin three times there are eight equally likely outcomes. Let A denote an event that we got three heads. The probability that event A occurs is P(A)= 1/8 = 0.125 or 12.5%, and the probability that event A will not occur is 1-1/8 = 7/8. 3. Let (a,b) denotes a possible outcome of rolling the two die, with a number on the top of the first die and b the number on the top of the second die. Let X denotes an event that the sum of the two dice is equal to 5. There are 36 possibilities for (a,b), but only 4 of them are “good” (i.e. (1,4), (2,3), (3,2), (4,1)). The probability that event X occurs is P(X)= 4/36, and the probability that event X will not occur is 1-4/36 = 32/36. The odds are defined as the probability that the event will occur divided by the probability that the event will not occur. The odds of event X are given by P(X)/(1-P(X)). Example: Odds from the previous example: 1. If the probability of an event is 0.70 (70%), then the probability that the event will not occur is 1-0.70 = 0.30, or 30%. Odds of the event is 0.7/0.3 = 2.333 2. For tossing a coin three times we can calculate the odds as (1/8)/(7/8)=1/7=0.1428. 3. Odds for an event X is (4/36)/(1-4/36)=1/8=0.125 4. If your favorite basketball team plays 60 games and wins 45 times and loses the other 15 times, the probability of winning is 45/60 = 0.75 or 75%, but the odds of the team winning are 75/25 = 3 or “3 wins to 1 loss.” Logistic regression can be applied, for example, in the following scenarios: • modeling the probabilities of an output variable as a function of some input variables, e.g. “success” on the exam as a function of gender and hours spent on preparing for the exam; • describing the differences between individuals in separate groups as a function of input variables, e.g. students admitted and rejected as a function of gender; • predicting probabilities that individuals fall into one of the two categories as a function of input variables, e.g. what is the probability that a student is passed given his/her gender and hours preparing for the exam. Logistic regression is usually used as a supervised classification algorithm. In a classification problem, the target variable (or output), y, can take only discrete values for a given set of features (or inputs), X. For example, we can predict the student’s score on the exam based on gender and hours spent on preparation, which is regression. For classification, we can predict that this student “passes” the exam based on gender and hours spent on preparation. As the name suggests, logistic regression IS a regression model. We build a regression model to predict the probability (which is a real value) that a given data entry belongs to the “positive” category (or category numbered as “1”). Just like linear regression assumes that the data follows a linear function, logistic regression models the data using the sigmoid function. More precisely, we use linear regression to estimate the log of odds: Using elementary math operations we can easily conclude or in a case when n=1: Now we have to estimate coefficients from the given data. This can be done using a maximum-likelihood estimation (MLE). Many machine learning algorithms use maximum-likelihood estimation as a systematic way of parameter estimation. To give you the idea behind MLE, let us look at an example. Example: We have a bag that contains three balls, either red (R) or blue (B), but we have no information in addition to this. Thus, the number of blue balls, call it θ, might take values from the set {0, 1, 2, 3}. We can choose four balls at random from the bag with replacement. Let xi, i=1,2,3,4, denotes the color of the ball in its drawing from a bag. After doing our experiment, the following values are observed: x1=B, x2=R, x3=B, x4=B. Thus, we observe three blue balls and one red ball. What is the most probable value for parameter θ? For each possible value of θ we will find the probability of the observed sample, (x1, x2, x3, x4) = (B, R, B, B). P(xi=B) =θ/3, P(xi=R) =1-θ/3, i=1, 2, 3, 4. P(x0=B, x1=R, x2=B, x3=B) = P(x0=B)·P(x1=R)·P(x2=1)·P(x3=1) = θ/3⋅(1-θ/3)⋅θ/3⋅θ/3= (θ/3)^3⋅(1-θ/3) Note that the joint probability mass function (PMF) depends on θ, so we can write it as P(x1,x2,x3,x4;θ). We obtain the values given in the table for the probability of P(B, R, B, B). It makes sense that the probability of observed sample for θ=0 and θ=3 is zero, because our sample included both red and blue balls. From the table we can conclude that the probability of the observed data is maximized for θ=2. In other words, the observed data is most likely to occur for θ=2, so we may choose value 2 as our estimate of θ. This is called the maximum likelihood estimate (MLE) of θ. In practice, MLE is usually done using the natural logarithm of the likelihood, known as the log-likelihood function. Our resulting model will predict a value very close to 1 for the “positive” class and value very close to 0 for the other class. Why is it a good idea to use maximum-likelihood for logistic regression? The search procedure seeks values for the coefficients that minimize the error in the probabilities predicted by the model to those in the data. We will not delve into the math of maximum likelihood (if you want a more detailed math approach, check https://www.geeksforgeeks.org/understanding-logistic-regression/ or An Introduction to Statistical Learning: with Applications in R, pages 130-137). Using training data, we can use a minimization algorithm to find the best values for the coefficients. This is often implemented using gradient descent, BFGS (Broyden–Fletcher–Goldfarb–Shanno algorithm), L-BFGS (BFGS with limited memory), Conjugate Gradient, or some other numerical optimization algorithm. Example (adapted from https://machinelearningmastery.com/logistic-regression-for-machine-learning/): We try to predict gender (male or female) based on height (in centimeters). Our learned coefficients for logistic regression are β0 = -100 and β1 = 0.6. If we know the height 165, what is the probability that the person is a man? Using the equation above we can calculate the probability of males given a height of 160cm or, more formally, P(X=’male’ | height=160). We can use spreadsheets or calculators and finally we got: P(X=’male’ | height=160) =0.0179862. The probability is near zero, so we can say that the person is a female. In this example, we use the probabilities directly, but if we want to use linear regression for the classification we have to introduce the so-called “decision boundary.” For example, we can say: a person is female if g(β_0+β_1 x)<0.5, and the person is male if g(β_0+β_1 x)⩾0.5. Thanks for reading this article. If you like it, please recommend and share it.
{"url":"https://uhurasolutions.com/2020/09/18/understanding-logistic-regression/","timestamp":"2024-11-07T20:11:42Z","content_type":"text/html","content_length":"92797","record_id":"<urn:uuid:753010ac-03f0-4fb6-9f15-6369eca46791>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00028.warc.gz"}
Creating a simple sequence formula for the column Is there a simple method to create a sequence column in a helper sheet based on a specific number to start the sequence? To give an example, I want to sequence start with the number 25 and increase each row by 1 (ie. 25, 26, 27, etc.), not by any specific project or identifier, simply starting with 25. I thought selecting the row above + 1 would be a simple formula conversion to column, but that prompts the syntax error. • @mcDosse_012 It sounds like you tried to make your +1 formula into a column formula (which won't work because it has specific cell references). It should work fine if you make it a cell formula. Hope this helps! Help Article Resources
{"url":"https://community.smartsheet.com/discussion/131358/creating-a-simple-sequence-formula-for-the-column","timestamp":"2024-11-05T04:08:24Z","content_type":"text/html","content_length":"409630","record_id":"<urn:uuid:18066e5e-89e1-4ef0-8de2-9803214e7b1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00404.warc.gz"}
To Write Mathematical Proof By Polya Method Kodirun, Kodirun (2011) Developing Students Ability To Write Mathematical Proof By Polya Method. PROCEEDINGS International Seminar and the Fourth National Conference on Mathematics Education. ISSN P - 19.pdf Download (66kB) | Preview Both writing and reading a proof is equally not easy. Some mathematicians attested that students found difficulties in mathematical proving. Mathematics and mathematics education experts like Jones (1997, 2001), Weber (2001), and Smith (2006) found that difficulty in proof writing is due to: lack of theorem and concept understanding, lack of proving ability, and there is a teaching-learning process that unites with the subject. So, it is truly needed that class of writing proof in order to help to generate students’ ability to do mathematical proving. Polya method is going to be that purpose. Keywords: mathematics proof, Polya method Actions (login required)
{"url":"http://eprints.uny.ac.id/990/","timestamp":"2024-11-03T04:18:48Z","content_type":"application/xhtml+xml","content_length":"21404","record_id":"<urn:uuid:cb1aac51-52d4-4f9b-8dc8-b8911e69de32>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00461.warc.gz"}
Basic College Mathematics (10th Edition) Chapter 7 - Measurement - Summary Exercises - U.S. and Metric Measurement Units - Page 504 24 Work Step by Step 21/4 lb to ounces Use a unit fraction with ounces (the unit for your answer) in the numerator, and pounds (the unit being changed) in the denominator. Because16 ounces (oz) = 1 pound (lb), the necessary unit fraction is (16 oz)/(1 lb) Multiply 21/4 lb times the unit fraction. 9/4* (16 oz)/(1 lb) = 36 oz
{"url":"https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapter-7-measurement-summary-exercises-u-s-and-metric-measurement-units-page-504/24","timestamp":"2024-11-10T11:24:15Z","content_type":"text/html","content_length":"65734","record_id":"<urn:uuid:75fd2f5b-8749-4995-aec5-d4de60b0650a>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00354.warc.gz"}
Astronomy Without A Telescope - Galactic Gravity Lab Astronomy Without A Telescope – Galactic Gravity Lab Many an alternative theory of gravity has been dreamt up in the bath, while waiting for a bus – or maybe over a light beverage or two. These days it’s possible to debunk (or otherwise) your own pet theory by predicting on paper what should happen to an object that is closely orbiting a black hole – and then test those predictions against observations of S2 and perhaps other stars that are closely orbiting our galaxy’s central supermassive black hole – thought to be situated at the radio source Sagittarius A*. S2, a bright B spectral class star, has been closely observed since 1995 during which time it has completed over one orbit of the black hole, given its orbital period is less than 16 years. S2’s orbital dynamics can be expected to differ from what would be predicted by Kepler’s 3^rd law and Newton’s law of gravity, by an amount that is three orders of magnitude greater than the anomalous amount seen in the orbit of Mercury. In both Mercury’s and S2’s cases, these apparently anomalous effects are predicted by Einstein’s theory of general relativity, as a result of the curvature of spacetime caused by a nearby massive object – the Sun in Mercury’s case and the black hole in S2’s case. S2 travels at an orbital speed of about 5,000 kilometers per second – which is nearly 2% of the speed of light. At the periapsis (closest-in point) of its orbit, it is thought to come within 5 billion kilometres of the Schwarzschild radius of the supermassive blackhole, being the boundary beyond which light can no longer escape – and a point we might loosely regard as the surface of the black hole. The supermassive black hole’s Schwarzschild radius is roughly the distance from the Sun to the orbit of Mercury – and at periapsis, S2 is roughly the same distance away from the black hole as Pluto is from the Sun. The supermassive black hole is estimated to have a mass of roughly four million solar masses, meaning it may have dined upon several million stars since its formation in the early universe – and meaning that S2 only manages to cling on to existence by virtue of its stupendous orbital speed – which keeps it falling around, rather than falling into, the black hole. For comparison, Pluto stays in orbit around the Sun by maintaining a leisurely orbital speed of nearly 5 kilometers per second. Some astrometrics of S2's orbit around the supermassive black hole Sagittarius A* at the center of the Milky Way. Credit: Schödel et al (2002), published in Nature. The detailed data set of S2’s astrometric position (right ascension and declination) changes over time – and from there, its radial velocity calculated at different points along its orbit – provides an opportunity to test theoretical predictions against observations. For example, with these data, it’s possible to track various non-Keplerian and non-Newtonian features of S2’s orbit including: – the effects of general relativity (from a external frame of reference, clocks slow and lengths contract in stronger gravity fields). These are features expected from orbiting a classic Schwarzschild black hole; – the quadrapole mass moment (a way of accounting for the fact that the gravitational field of a celestial body may not be quite spherical due to its rotation). These are additional features expected from orbiting a Kerr black hole – i.e. a black hole with spin; and – dark matter (conventional physics suggests that the galaxy should fly apart given the speed it’s rotating at – leading to the conclusion that there is more mass present than meets the eye). But hey, that’s just one way of interpreting the data. If you want to test out some alternative theories – like, say Oceanic String Space Theory – well, here’s your chance. Further reading: Iorio, L. (2010) Long-term classical and general relativistic effects on the radial velocities of the stars orbiting Sgr A*. 31 Replies to “Astronomy Without A Telescope – Galactic Gravity Lab” 1. Alpha centauries located about 5 light years from earth, Ross 154 located about 10 light years from earth, HD 10180 is located 127 light years from earth, M87 and NGC 1097 is about 50 million light years from earth, .. is located 600 million light years, 1000 million light years, 10,000 million light years……………………….. How far we could observe our deep space because Day will come we can never make any giant telescope to meet further mysteries. We observe our universe as if it is a boundless hence common question arises: Is our universe boundless? Is our universe bounded by certain limit but for other reason it appears as an unlimited form? 2. The orbital speed of galactic stars around their supermassive black hole is the SAME with varying distances, and is too fast for outermosts not to fly away, which is the reason why dark matter only has extra gravity and has never been seen or proven. Consider our sun seems round, but as it streams around the galaxy it is a plasma gas interacting with black hole quantum fields. The outermost stars in the smallest of dwarf galaxies always require the most amount of dark matter to fix the newtonian gravity problem. Why not assume that these outer dwarf stars vastly distort their spherical shape being plasma that defies newtonian gravity laws the farther outside the spiral arms away from the central black hole the more positive cosmic rays flood interstellar space, causing the distorted plasma stars to orbit the galaxy at the same speed ! When observing a smoke ring, it stays intact because of coherent charged particles attracting themselves seperately from the atmosphere charges. A galaxy looks like a smoke ring and defies gravity too. The smoke ring theory is my gravity theory to explain fictious dark matter “gravity only” requirements. 3. @ Steve Nerlich, At the third paragraph, in the fifth line, it should be singular possessive black hole’s, not collective singular possessive black holes’, since we are referring to the single supermassive black hole Sagittarius A* at the centre of the Milky Way. Reference: The Apostrophe Protection Society. 😎 4. Gravity equations need quantum computation at small matter scales best represented by Qubits because the square of the force at distance also doubles the numbers of qubits proportionally similar to newtons equation F g = M1 M2 / D2 . Consider each qubit a planck scale area where black hole event horizons allow instantaneous measurements of any choice existing simulataneously in both states, unlike bits that are either one (1) normal matter gravity OR zero (0) anti-matter dark energy . Bits cannot be both 1 and 0 like information qubits are in quantum gravity that should demise dark matter gravity waves. 5. @IVAN3MAN_AT_LARGE Thanks – apostrophe catastrophe averted. I’m not sure we collectively do observe our universe as if it is boundless (although it might be finite and unbounded) and I’m confident the telescopes will just keep getting bigger. I’d prefer to think our capacity for solving mysteries is boundless – although there’s probably more mysteries out there than we will ever have the time to solve. 6. Tiniest particles smaller then dust behaving as solid objects might orbit the smoke rings center thousands of times. Hooke’s Law will determine the amount of Dark Matter that is required present in a galaxy shape ! It is Newtonian Gravity TOO ! What’s left to explain DM, when we can go back to the good old days of cosmology again!? crowell explains unites quantum mechanics with special relativity basically a theory of everything by mathematical proof that everybody should accept. the opposite charged attractive force increases the farther is the star from the black hole center. They really are wasting time looking for dark matter particles underground when they should be studying Crowell Hooke’s Law when observing predictable dark matter halo angles with visible matter structures. Newtonian objects need a center of mass and require dark matter without Hooke’s Law. Thank you GREATLY for sharing your BRILLANCE. Those other jerks can do the same, but I do the opposite. probably because they smoke and blow qubits simultaneously, a bad thing to do 7. @ Jimhenson: “When observing a smoke ring, it stays intact because of coherent charged particles attracting themselves seperately from the atmosphere charges” That’s an interesting prediction you just made. I suggest you get an electroscope, blow some smoke rings and make observations. Possible Nobel prize ahead. 8. @ Jimhenson: When observing a smoke ring, it stays intact because of coherent charged particles attracting themselves seperately from the atmosphere charge This sentence only either means: You have no clue what charge is. Charge attracting themselves? Charge staying together? They repel each other!!! A galaxy looks like a smoke ring and defies gravity too. Birds look like a galaxy too when seen from the front and also defies gravity. 9. Good catch! I’ve seen the MOND believers claim that it explains everything gravitational about galaxies. (The last hang out, I take it.) Clearly that is no longer the case, and to top it off the predictions involve dark matter. I accelerate spit masses in their general direction, relatively speaking. 😮 @ Jimhenson: When observing a smoke ring, it stays intact because of coherent charged particles attracting themselves seperately from the atmosphere charges. And of course dolphins making water rings, something that they can do by simply twisting their head fast enough, by your model show that they are masters of the depth charge. [Do see the youtube, it is a sunday relaxing treat!] Which once again makes us wonder what you smoke before you blow. Reference: The Apostrophe Protection Society. … apostrophe catastrophe averted. Right. Except that it is so much more vital in the language of love: “La Société de Protection d’Apostrophe” … “catastrophe d’apostrophe évitée.” Face it, the french do some things better. (\_/) 11. perhaps neutralino particles are common fundamental short-lived particles everywhere even inside us there are mini-black holes giving existance to visible matter and producing opposite charged anti-matter pairs that increase charges forces with greater distances by Hooke’s law, and will balance how dark energy expansion accelerates of the universe? Could positive gravity and negative dark energy be united as one force by neutralinos having both gravity and electromagnetism? And the Universe be a particle ? 12. The topic here involves the semi-relativistic motion of a star near the galactic BH. The motion will exhibit relativistic corrections. These corrections are the violation of conservation of the semi-major axis direction of the orbit (orbital precession), and if the black hole has a large angular momentum there should then be a Lense-Thirring effect. For some information on this take a look at the UCLA Galactic Center and you will find the file illustrating the motion of stars in the Milky Way center. This research is being extended to stellar motion in the Sagittarius A* galactic center. The motion of stars in a galaxy at large, say far from the center of the galaxy do indeed have constant velocity. This is due to dark matter. This dark matter is something which can be seen as a constant mass-energy density of some sort that pervades the space the galaxy occupies. We start with the Poisson equation, which everyone should know, nabla*F = -4pi Gm rho where F is force with directions along z, y and z and nabla* represents directional derivatives or a sum of derivatives along those directions. G is the gravitational constant and rho is the density of matter as the source of gravity and m is the mass of a small test mass. If I integrate over a volume (assumed to be a ball with a 2-dim sphere boundary) that contains all of the matter density rho we get int nabla*F = -4pi GMm, The integration over the volume of the force F is equal by Stokes’ law to an integration of the force evaluated on the spherical boundary of the 3-D ball or Int nabla*F = int dS*F = -4pi r^2F Equating that to the 4pi GM we get F = -GMm/r^2. So we recover Newtonian gravity! That should not be surprising, and if you use this in an F = ma second law and use the acceleration a = v^2/r for centripetal acceleration you get Kepler’s law. Now let us assume something a bit different with the density of matter. We assume that the matter is in a continuum that extends beyond the sphere of integration above. Therefore while rho is constant the mass is M = rho*4pi r^3/3. So 4pi r^2F = -Gm rho*4pi r^3/3 and so the force is F = -Gm rho r. This means the force increases as the distance from the center increases! This is the equation of motion for a spring, or Hooke’s law. This more or less reproduces the motion of stars in a galaxy. There are some smaller deviations from this due to local concentrations of matter such as galactic SMBHs, but largely this is why it is thought there is this dark matter than pervades the space a galaxy sit within. The smoke ring theory is my gravity theory to explain fictious dark matter “gravity only” requirements. Where have I heard such statements before? 14. Torbjorn: “La Société de Protection DE L’Apostrophe”. Yes we do ^_^ although not the things we should be doing, probably 😉 Thanks for the awesome video! All this about smoke rings just brings me back some 30 years back… In one of the first Scientific American I ever read, Martin Gardner had a paper about smoke rings. I felt compelled to write him about it (I certainly had my own crazy theory about them to pester him with, thankfully I forgot the contents of that letter) and he actually answered me! 15. The DM is some form of elementary particle. I think it is the neutralino, which is a condensate of superymmetric pairs of the photon, neutral Higgs, and Z-electroweak boson. It turns out the quantum numbers for the supersymmetric pairs of these particles are the same and the three supersymmetric pairs of these particles (photino, Higgsino and Zino) form a single particle state. Some data from PAMELA suggests that gamma ray production near the galaxy center is from neutralino decays. These particles then form a gas of sorts that is cold and very weakly interacting. It does though result in a gravity field. The mass per neutralino particle is on the order of 1-10 TeV, so it is associated with a gravitational field. Einstein lensing by galaxies has revealed the presence of this DM gas. So our galaxy is immersed in this “stuff” and the gravitational result is a force F = -Gm rho r/3 (I forgot the 3 earlier), which is the dynamics of a body moving in space as connected by a spring. The upshot of this motion is that the periodicity is constant, which is set by the spring constant k with in this case is k = Gm rho r/3. The dynamical equation in one dimension is x” = (k/m)x, which by elementary differential equations is a cosine or sine solution. So for the F = ma motion of a particle in a circular orbit the F = -kr and the acceleration is v^2/r, which results in v^2 = kr^2/m or v = sqrt(k/m)r. So if stars moved entirely according to this DM induced spring motion the galaxy would rotate similar to a solid disk. Yet there are other gravitational interactions with stars, the galaxy core, and so forth. So the motion of stars in the outer arms of a galaxy has this non-Keplerian motion due to the composition of this “spring motion” from DM and the attraction to matter clumps at the center of the galaxy. 16. I forgot to mention detection. We detect the presence of DM on a large scale in a number of ways, such as gravitational lensing or indirectly by the dynamics of stars in a galaxy. However, we do want to know that DM is composed of particle. The neutralino should interact very weakly by the weak interaction, the interaction responsible for beta decay, with regular matter. So the idea with detectors underground is to set up a crystal at very cold temperatures so that a neutralino particle might weakly interact with the crystal lattice. This will cause the crystal lattice to vibrate with a characteristic set of frequencies with what are called phonons — quanta of lattice vibration in a crystal. So far the data is slightly suggestive, but the statistics are too limited to draw any solid conclusion at this point. I have this crazy idea that a crystalline lattice might exhibit a supersymmetric pair of a phonon. It turns out that nuclei can exhibit a form of supersymmetry, and this has been found experimentally. The thought occurred to me that something similar might be possible in solid state physics. An optical phonon is similar to a photon, and if it can enter into a supersymmetric pairing then maybe this will exhibit physics with neutralinos, in particular an entanglement state. 17. LBC just curious, why a crystal as detector? 18. Also a questions, you have the galaxy and that is very big thing. So might and gravitational influences takes many years to get to the other end and influence the stars at the other side of the galaxy. Do the galaxy models compensate for this? Or is the sheer number of stars cancelling out this gravitational delay effect? 19. Could positive gravity and negative dark energy be united as one force by neutralinos having both gravity and electromagnetism? And the Universe be a particle ? 20. Dark energy has negative pressure! The equation of state is p = w*rho, where w = -1. If the subject of dark energy cycles back again I will write once again on the standard FLRW model and how this can be derived. I have done that a couple of times here. The neutralino is electrically neutral and beside gravitation they only interacts by weak interactions. And BTW there are 4 eigenstates of the neutralino as well. If neutralinos interacted by electromagnetism it would be luminous matter — not dark matter. For a static situation, where the distribution of DM is constant over time there is no real problem with a delay process. The causal delay, such as sending a signal on a light cone occurs if there is some change in a field, whether electromagnetic or gravitational. The field response then propagates along a null ray in spacetime and the communication of information about a change in that field exhibits a causal delay with distance. With gravity that is a bit strange, for it is a gravity wave that has to propagate as a change in spacetime along a null direction in spacetime. Yet this is understood, and weak gravity waves are similar to electromagnetic waves with little nonlinearities. Why a crystal? For crystals the spectra of phonons is known or computable. Most metals and semi-conductors have a regular lattice structure in the placement of atoms. This contrasts with glasses or amorphous solids. In that case you have not ordered structure and the phonon structure is strange. There you get frustrated systems and spin glass dynamics. It also has to be very cold to eliminate thermal vibrations in the solid as much as possible and to put the electrons almost entirely on their Fermi surface. 21. OLAF: [J]ust curious, why a crystal as detector? This may help to answer your question: Phononic crystal 22. Manu, Gardner the one man Institution. Thanks for sharing! 23. @ Jimhenson: your BRILLANCE. You don’t get it. I’m not claiming brilliance, I’m pointing out that your ideas (and exposition) is far from brilliant, actually useless. If they can’t stand up to predicting the close at hand Earth phenomena they should do, how can they predict anything out there? 24. If anyone is wondering, Oceanic String Space Theory is a fictional alternative theory out of ‘Ome – The Parallel’ by Justin Jackson (co-presenter of This Week In Science). 25. @ Jimhenson: “Those other jerks” may have been heavy handedly pouring irony on you but so far we’ve been polite. 26. @ Jim Henson The problem here is that it is pretty clear you have a weak knowledge of this subject. I suppose I can admire what seems to be an energetic interest, but your grasp on things does not reach far. If you have this energetic interest you might want to use it to learn this subject at some greater depth. Martin Gardner is mentioned above, and he wrote some good articles and books. You might then try to understand physics and relativity by looking at some of the elementary theory and its maths. 27. How much, would time slow down by dark matter gravity increases on an earth like planet orbiting a star that is at a large distance from the center of the galaxy? Its star speed is constant, but newtonian mass gravity increases with distance which could slow time? assuming that the star cluster needs the maximum amount of dark matter to keep its orbit around the galactic center black hole and obeys Hooke’s law that supplies the extra mass gravity with or without neutralinos wouldn’t the effect be the same? Would time slow down for life on planets that are in galaxies having more dark matter halos? I have this crazy idea that a crystalline lattice might exhibit a supersymmetric pair of a phonon. But that requires spin, doesn’t it? The problem for me then is that phonons are collective quasiparticles. It is easy to see that we can have rotational modes in some geometries, say along a carbon nanotube. Standing waves would be your zero angular momentum phonons, and you would have left- and right rotating modes. (I don’t think it has been demonstrated yet, mind.) But they don’t couple to the lattice in the simplest model, so you would have to look for such effects. The optical phonons suggest themselves there, actually. If they do “quantize” while rotating, you would have spin – no spin analogues. Then you would have “phair” them. Quite a program you have there. 29. Optical photons carry electric and magnetic field and exhibit helicity states. These serve as spins. Raman scattering is the scattering of a photon with wave vector k on a solid where it absorbs or emits some energy E = hk’/c and the photon is re-emitted with vector k” = k + k’. This usually involves the change in the state of an electron in the solid. If the electron is put in an excited state it may relax to a lower state by flipping it state by a unit of spin. The energy is released in a phonon with the opposite unit of spin. So optical phonons do have spin. The idea is that there is a vacuum phonon state, which could exhibit supersymmetric physics. Nambu proposed something similar to this 20 years ago. The idea is the neutralino might interact with a SUSY pair of a phonon in an entanglement. I have yet to bend metal on this idea. 30. The slowing down of time is a relative thing. For us embedded in the galaxy we notince nothing in particular. However, if we compared our clock rates with the rate of a clock between us and Andromedea we would find our clocks are a little bit slow, but a bout 10^{-8} seconds slower per every second. Not much. 31. Seems to me that Hooke’s Law of motion applies for black hole gravity, and does the inverse of newtonian planet gravity, by increasing force with distance, kinda like dark energy. Neutralino particles have this mass to explain the extra gravity, but why couldn’t Hooke’s Law, along with the Cassimir force distance, explain dark matter and dark energy? When will new WMAP data be analyized to examine dark energy? Somehow black holes are fundamental to dark matter and dark energy, but there is not consumption, and not expansion of black holes, just visible matter consumed and expanded. galaxies greatly receding us by dark energy do not have their black hole centers moving away from us at increasingly relativistic speeds, only the galactic visible matter stars supernova type la do this as far as I know. This is probably because the universe shape is curved and flat and black holes are singularities.
{"url":"https://www.universetoday.com/72315/astronomy-without-a-telescope-galactic-gravity-lab/","timestamp":"2024-11-07T10:24:21Z","content_type":"text/html","content_length":"234701","record_id":"<urn:uuid:856039e4-4937-4908-aa33-75fa7fb70f2b>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00769.warc.gz"}
Problem of the Week Day 2: Week of 8/20/12 - 8/24/12 Today is day 2 of the problem of the week. Remember to use yesterday's answers in today's problem. Easy: For this problem, you will need to bring up yesterday's problem as well. Look at the matrix from yesterday and find the highest number there. Let's call that number g. g = Now, find the second highest number in the matrix. Let's call that number s. s = Finally, find the average of all of the numbers in the matrix. Round that to the nearest hundredth. That number will be called m. m = Now that we have those, we can begin the problem. Take a right triangle with the following side lengths. All of the measurements are in millimeters. a = sgm + x b = ___ c = y Try to determine the length of b. b = Hard: This problem is trigonometric, but is also a real world problem. For my science fair experiment in 2011, I had to solve almost the same exact problem and it was actually really cool to see the real world application of trigonometry. I hope you find the applications of trigonometry as cool as I did while you complete this word problem. Before you begin, complete these two calculations: f = x + (a + b)/(y + z) f = g = x + a + b g = Now for the fun part. Say you need to create a wooden ramp that is f meters long and is propped at an a° angle. You will prop it up with another piece of wood g centimeters up from the bottom and you want the support to make it an a° angle ramp. How many centimeters long should your support be to achieve this angle? Round to the nearest centimeter. s = Good luck, and remember to save your answers for tomorrow. 1 comment: 1. Math comes with more tricks and styles to solve a single problem some time we use these tricks for making puzzles and to discover new simple methods. Maths tricks like these type are more liked by me as they are quite refreshing and good.
{"url":"http://coolmathstuff123.blogspot.com/2012/08/problem-of-week-day-2-week-of-82012.html","timestamp":"2024-11-03T23:33:42Z","content_type":"text/html","content_length":"74287","record_id":"<urn:uuid:98a7851a-78c3-4206-ab28-9f7aaf4d27f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00881.warc.gz"}
On the probability description of physical reality We proposed the concept of reciprocal wave function which is interpreted as the relative non-finding probability amplitude that represents the physical reality in a probabilistic way, like the wave function does. In this note we explore a further question which arises naturally: Why this novel form of probability description of physical reality could exist in quantum theory. PDF (This a new version of the previous work.)
{"url":"http://www.physzhang.com/on-the-probability-description-of-physical-reality.html","timestamp":"2024-11-10T11:41:29Z","content_type":"text/html","content_length":"30785","record_id":"<urn:uuid:4ae0e242-7956-4770-909c-cdffff1042cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00406.warc.gz"}
Spline surfaces We have already learned how to use cubic splines to make curves. We can also use bicubic splines to make surfaces. In this case, rather than defining x,y,z in terms of a single parameter t describing parametric movement along a path, we define x,y,z in terms of two parameters u,v which describe parametric position upon a surface. This gives us a piece of spline surface that looks like a warped square tile in space. To make more complex spline surfaces, we can piece together multiple tiles, matching their positions and surface normals along the edges between them where they join. This is, for example, how the teapot that I showed in class was made. The general formulation for a single bicubic surface tile is: x(u, v) = au^3v^3 + bu^3v^2 + ... p with similar equations defining y(u,v) and z(u,v). For each of these equations we need sixteen coefficients [a, b, ... p], for all of the sixteen possible products of u and v u^3v^3 to u^3v^2 through Conveniently, we can describe all 16 possible coefficients by the following matrix/vector product: If we refer to the above matrix of coeffients as C, then the above equation looks like this: U C V^T If you have three such 4×4 coeffient matrices, C[x], C[y] and C[z], one for each dimension, then you can then evaluate (x,y,z) locations on the bicubic spline surface by iterating over values of u and v: for (int u = 0.0 ; u <= 1.0 ; u += ε) for (int v = 0.0 ; v <= 1.0 ; v += ε) drawPoint(eval(C[x],u,v), eval(C[y],u,v), eval(C[z],u,v)); Through variations on the above code, you can also draw curved lines along the u,v surface, as well as small polygons which can be shaded and z-buffered. Each of these polygons has four vertices: {eval(C[x],u,v), eval(C[y],u,v), eval(C[z],u,v)} {eval(C[x],u+ε,v), eval(C[y],u+ε,v), eval(C[z],u+ε,v) {eval(C[x],u+ε,v+ε), eval(C[y],u+ε,v+ε), eval(C[z],u+ε,v+ε) {eval(C[x],u,v+ε), eval(C[y],u,v+ε), eval(C[z],u,v+ε) Just as with spline curves, we generally want to find coeffients to define the geometry of a spline surface in a way that is convenient to the user, such as a Hermite or Bezier or B-Spline surface description. Then behind the scenes we convert to the polynomial formulation above. The general trick for doing this conversion is to use the same Hermite, Bezier etc. conversion matrix that we used for spline curves, but to apply it twice: once for u and once for v. For example, lets say we have a Bezier description of the surface, in which we describe, each of x, y, and z on the surface by sampling u and v every 1/3 unit. This gives a matrix of 16 knot locations in (u,v): (0/3,0/3) (1/3,0/3) (2/3,0/3) (3/3,0/3) (0/3,1/3) (1/3,1/3) (2/3,1/3) (3/3,1/3) P = (0/3,2/3) (1/3,2/3) (2/3,2/3) (3/3,2/3) (0/3,3/3) (1/3,3/3) (2/3,3/3) (3/3,3/3) At each of those locations the x coordinate has a value, as do y and z. Our goal is to convert this matrix into a matrix C of coefficients. Fortunately, this is easy to do, since we can just use our Bezier matrix B on both sides to convert coordinates in both u and v: When evaluating U C V^T, we can use use the coefficent matrix given by C = B P B^T.
{"url":"https://mrl.cs.nyu.edu/~perlin/courses/spring2009/splines4.html","timestamp":"2024-11-02T17:44:15Z","content_type":"text/html","content_length":"5919","record_id":"<urn:uuid:71416c85-bf84-4d2f-b7b5-815807418c5f>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00336.warc.gz"}
One way ANOVA Example in R-Quick Guide » Data Science Tutorials One way ANOVA Example in R, the one-way analysis of variance (ANOVA), also known as one-factor ANOVA, is an extension of the independent two-sample t-test for comparing means when more than two groups are present. The data is divided into numerous groups using one single grouping variable in one-way ANOVA (also called factor variable). The basic premise of the one-way ANOVA test is described in this lesson, which also includes practical ANOVA test examples in R software. Hypotheses for ANOVA tests: The null hypothesis is that the means of the various groups are identical. Alternative hypothesis: At least one sample mean differs from the rest. You can use the t-test if you just have two groups. The F-test and the t-test are equivalent in this scenario. The ANOVA test is described in this section. Only when the observations are gathered separately and randomly from the population described by the factor levels can the ANOVA test be used. Each factor level’s data is normally distributed. The variance in these typical populations is similar. (This can be verified using Levene’s test.) What is the one-way ANOVA test? Assume we have three groups to compare (A, B, and C): Calculate the common variance, often known as residual variance or variance within samples (S2within). Calculate the difference in sample means as follows: Calculate the average of each group. Calculate the difference in sample means (S2between) As the ratio of S2between/S2within, calculate the F-statistic. Note that a lower ratio (ratio 1) suggests that the means of the samples being compared are not significantly different. A greater ratio, on the other hand, indicates that the differences in group means are significant. In R, visualize your data and do one-way ANOVA. We’ll use the PlantGrowth data set that comes with R. It provides the weight of plants produced under two distinct treatment conditions and a control condition. data <- PlantGrowth We utilize the function sample n() [in the dplyr package] to get a sense of how the data looks. The sample n() function prints out a random selection of observations from the data frame: To display a sample at random dplyr::sample_n(data, 10) weight group 1 5.87 trt1 2 4.32 trt1 3 3.59 trt1 4 5.18 ctrl 5 5.14 ctrl 6 4.89 trt1 7 5.12 trt2 8 4.81 trt1 9 4.50 ctrl 10 4.69 trt1 The column “group” is known as a factor in R, while the different categories (“ctr”, “trt1”, “trt2”) are known as factor levels. The levels are listed in alphabetical order. [1] "ctrl" "trt1" "trt2" If the levels are not in the correct order automatically, reorder them as follows: data$group <- ordered(data$group, levels = c("ctrl", "trt1", "trt2")) The dplyr package can be used to compute summary statistics (mean and sd) by groups. Calculate summary statistics by groups – count, mean, and standard deviation: group_by(data, group) %>% count = n(), mean = mean(weight, na.rm = TRUE), sd = sd(weight, na.rm = TRUE) group count mean sd <ord> <int> <dbl> <dbl> 1 ctrl 10 5.03 0.583 2 trt1 10 4.66 0.794 3 trt2 10 5.53 0.443 Visualize your information Read R base graphs to learn how to utilize them. For an easy ggplot2-based data visualization, we’ll use the ggpubr R tool. Best online course for R programming Visualize your data with ggpubr: ggboxplot(data, x = "group", y = "weight", color = "group", palette = c("#00AFBB", "#E7B800", "#FC4E07"), order = c("ctrl", "trt1", "trt2"), ylab = "Weight", xlab = "Treatment") One-way ANOVA Test in R Add error bars: mean_se ggline(data, x = "group", y = "weight", add = c("mean_se", "jitter"), order = c("ctrl", "trt1", "trt2"), ylab = "Weight", xlab = "Treatment") R’s one-way ANOVA test Type the following scripts if you still want to utilise R basic graphs: boxplot(weight ~ group, data = data, xlab = "Treatment", ylab = "Weight", frame = FALSE, col = c("#00AFBB", "#E7B800", "#FC4E07")) For plot means plotmeans(weight ~ group, data = data, frame = FALSE, xlab = "Treatment", ylab = "Weight", main="Mean Plot with 95% CI") Calculate the one-way ANOVA test. We want to see if the average weights of the plants in the three experimental circumstances vary significantly. This question can be answered using the R function aov(). Summary of the function The analysis of the variance model is summarised using aov(). Perform an analysis of variance. res.aov <- aov(weight ~ group, data = data) Summary of the analysis Df Sum Sq Mean Sq F value Pr(>F) group 2 3.766 1.8832 4.846 0.0159 * Residuals 27 10.492 0.3886 Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 The columns F value and Pr(>F) in the output corresponding to the p-value of the test. The results of one-way ANOVA testing should be interpreted. We can conclude that there are significant differences between the groups highlighted with “*” in the model summary because the p-value is less than the significance level of 0.05. Multiple pairwise comparisons between group means A significant p-value implies that some of the group means are different in a one-way ANOVA test, but we don’t know which pairs of groups are different. Multiple pairwise comparisons can be performed to see if the mean differences between certain pairs of groups are statistically significant. Multiple Tukey pairwise comparisons We can compute Tukey HSD (Tukey Honest Significant Differences, R function: TukeyHSD()) for doing numerous pairwise comparisons between the means of groups because the ANOVA test is significant. The fitted ANOVA is passed to TukeyHD() as an input. Tukey multiple comparisons of means 95% family-wise confidence level Fit: aov(formula = weight ~ group, data = data) diff lwr upr p adj trt1-ctrl -0.371 -1.0622161 0.3202161 0.3908711 trt2-ctrl 0.494 -0.1972161 1.1852161 0.1979960 trt2-trt1 0.865 0.1737839 1.5562161 0.0120064 lwr, upr: the lower and upper-end points of the confidence interval at 95 percent (default) p adj: p-value after multiple comparisons adjustment. Only the difference between trt2 and trt1 is significant, as shown by the output, with an adjusted p-value of 0.012. Using the multcomp package, perform several comparisons. The function glht() [in the multcomp package] can be used to do multiple comparison processes for an ANOVA. General linear hypothesis tests are abbreviated as glht. The following is a simplified glht(model, lincft) model: a model that has been fitted, such as an object returned by aov (). lincft() specifies the linear hypotheses that will be tested. Objects provided from the function mcp are used to specify multiple comparisons in ANOVA models (). For a one-way ANOVA, use glht() to make numerous pairwise comparisons: summary(glht(res.aov, linfct = mcp(group = "Tukey"))) Simultaneous Tests for General Linear Hypotheses Multiple Comparisons of Means: Tukey Contrasts Fit: aov(formula = weight ~ group, data = my_data) Linear Hypotheses: Estimate Std. Error t value Pr(>|t|) trt1 - ctrl == 0 -0.3710 0.2788 -1.331 0.391 trt2 - ctrl == 0 0.4940 0.2788 1.772 0.198 trt2 - trt1 == 0 0.8650 0.2788 3.103 0.012 * Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Adjusted p values reported -- single-step method) T-test with pairs Pairwise comparisons across group levels with various testing corrections can also be calculated using the pairewise.t.test() function. pairwise.t.test(data$weight, data$group p.adjust.method = "BH") Pairwise comparisons using t tests with pooled SD data: data$weight and data$group ctrl trt1 trt1 0.194 - trt2 0.132 0.013 P value adjustment method: BH The output is a table of pairwise comparison p-values. The Benjamini-Hochberg method was used to alter the p-values in this case. The validity of ANOVA assumptions should be checked. The ANOVA test assumes that the data are normally distributed and that group variance is uniform. With certain diagnostic plots, we can verify this. Examine the assumption of homogeneity of variance. The residuals versus fits plot can be used to assess variance homogeneity. There are no clear correlations between residuals and fitted values (the mean of each group) in the plot below, which is good. As a result, we can assume that the variances are homogeneous. 1: Variance homogeneity plot(res.aov, 1) ANOVA (one-way) Outliers are discovered in R Points 17, 15, 4, which can have a significant impact on normality and homogeneity of variance. To meet the test assumptions, it can be beneficial to remove outliers. The homogeneity of variances can also be checked using Bartlett’s or Levene’s tests. Levene’s test is recommended since it is less sensitive to deviations from normal distribution. The leveneTest() method [from the car package] will be used: leveneTest(weight ~ group, data = data) Levene’s Test for Homogeneity of Variance (center = median) Df F value Pr(>F) group 2 1.1192 0.3412 The p-value is not less than the significance level of 0.05, as seen in the output above. This indicates that there is no indication that the variance across groups is statistically significant. As a result, we can infer that the variations in the different treatment groups are homogeneous. Relaxing the premise of homogeneity of variance The traditional one-way ANOVA test assumes that all groups have similar variances. The homogeneity of variance assumption was fine in our case: the Levene test was not significant. In a circumstance where the homogeneity of variance assumption is violated, how do we save our ANOVA test? An alternate approach (e.g., the Welch one-way test) that does not necessitate the use of assumptions in the one-way function. test(). ANOVA test with no equal variance assumption oneway.test(weight ~ group, data = data) one-way analysis of means (not assuming equal variances) data: weight and group F = 5.181, num df = 2.000, denom df = 17.128, p-value = 0.01739 Pairwise t-tests with no assumption of equal variances pairwise.t.test(data$weight, data$group, p.adjust.method = "BH", pool.sd = FALSE) Pairwise comparisons using t-tests with non-pooled SD data: data$weight and data$group ctrl trt1 trt1 0.250 - trt2 0.072 0.028 P value adjustment method: BH Examine the presumption of normality. Residuals normality plot The residuals quantiles are displayed against the normal distribution quantiles in the graph below. Also plotted is a 45-degree reference line. The residuals’ normal probability plot is used to verify that the residuals are normally distributed. It should be about in a straight line. 2: Normality plot(res.aov, 2) ANOVA (one-way) R-based test We can infer normality because all of the points lie roughly along this reference line. Extract the residuals aov_residuals <- residuals(object = res.aov ) Run Shapiro-Wilk test Shapiro-Wilk normality test shapiro.test(x = aov_residuals ) data: aov_residuals W = 0.96607, p-value = 0.4379 The Shapiro-Wilk test on the ANOVA residuals (W = 0.96, p = 0.43), which finds no evidence of normality violation, supports the previous conclusion. ANOVA test with a non-parametric alternative The Kruskal-Wallis rank-sum test is a non-parametric alternative to one-way ANOVA that can be employed when the ANOVA assumptions are not met. kruskal.test(weight ~ group, data = data) Kruskal-Wallis rank-sum test data: weight by group Kruskal-Wallis chi-squared = 7.9882, df = 2, p-value = 0.01842
{"url":"https://datasciencetut.com/one-way-anova-example-in-r-quick-guide/","timestamp":"2024-11-06T22:09:29Z","content_type":"text/html","content_length":"121724","record_id":"<urn:uuid:47fece55-b4bd-4592-aa92-62c30d1d3432>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00211.warc.gz"}
Regularity -- compute Castelnuovo-Mumford regularity of a homogeneous ideal Regularity is a package for computing the Castelnuovo-Mumford regularity of homogeneous ideals in a polynomial ring without having to compute a minimal free resolution of the homogeneous ideal This package is based on two articles by Bermejo and Gimenez: Saturation and Castelnuovo-mumford Regularity, Journal of Algebra 303/2006 and Computing the Castelnuovo-Mumford Regularity of some subschemes of P^n using quotients of monomial ideals, Journal of Pure and Applied Algebra 164/2001.
{"url":"https://macaulay2.com/doc/Macaulay2/share/doc/Macaulay2/Regularity/html/index.html","timestamp":"2024-11-04T05:52:17Z","content_type":"text/html","content_length":"7185","record_id":"<urn:uuid:ae2163e3-f87d-4f6a-872e-f9f44033da08>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00579.warc.gz"}
Implementing ‘le chiffre indéchiffrable’ in Scheme Implementing 'le chiffre indéchiffrable' in Scheme I have started to follow Jonathan Katz’s excellent Cryptography course on Coursera. Of course, as I often do, I have been sidetracked by trying to implement some of the examples mentioned in the course. Which means that I haven’t managed to escape the first set of lectures. The shift cipher was easy enough to implement quickly in Racket, and presented few interesting difficulties. Convert an input string into a list, and then map the shift (addition modulo the length of the alphabet) onto each character in the list. The Vigenère cipher, however, was more interesting. The first interesting thing I learnt about Vigenère is that it wasn’t broken for 300 years, resulting in it being given the name “le chiffre indéchiffrable” (the indecipherable cipher). Implementing in Racket Scheme also resulted in some challenges, partly arising from the general idea in Scheme that “some assembly may be required” so some basic things that one might expect to exist in a common library in some other language, must be implemented yourself in Scheme. Of course, the genius of Scheme is that “some assembly is required” which helped me learn more than I would have by using another programming language. My first challenge was “crypto randint” - the idea of a cryptographically secure random number within a range. For Vigenère, all operations are on the alphabet a-z, and your key is chosen from these letters too. So I needed a key of fourteen (the traditional Vigenère key length) alphabetic (ASCII) characters. Racket presents you with a single API call for obtaining one or more cryptographically-secure bytes. A random byte may or may not be an ASCII a-z character. In practice, I couldn’t see a better solution that simply continuing to ask for a crypto-secure random byte and checking whether it was in the range, repeatedly, until I had what I needed. I imagine though, that if the range is very small, this might conceivably take a (relatively) long time. ;;; return integers between min and max, derived from a crypto random byte (define (crypto_randint min max) (let ([y (integer-bytes->integer (crypto-random-bytes 1) #f)]) (cond [(<= min y max) y] [else (crypto_randint min max)]))) As you can see, the ‘cond’ forces repeated calls to the same function until an integer within the range is obtained. This made my Vigenère key generation look like this: ;;; generate a Vigenere key of length keysize, ;;; passing in the result so that it can be compared ;;; with the requested keysize as a bound ;;; new bytes are added until the result length matches the keysize (define (vig-genkey-bytes keysize result) (if (= (bytes-length result) keysize) (bytes-append result (integer->integer-bytes (crypto_randint 97 122) 1 #f))))) (define (vig-genkey keysize) (vig-genkey-bytes keysize "")) Note the seemingly usual Lispy conceit of passing the result as a parameter in vig-genkey-bytes and then having a function that passes in the empty string as the initial value of the result, to start the process. The more complicated work was still to come. Because I have spent the better part of 25 years coding in non-functional languages, returning to functional programming has been mind-blowing. My next challenge was to figure out how I would encipher an arbitrary length ciphertext with a fourteen-character key. In Vigenère, the key is “expanded” by repeating the key until its size matches the plaintext message, and then a character from the key is used to shift the corresponding character from the plaintext. So the solution in a Lisp should have perhaps been totally obvious to me, but eluded me for more than a week as I tried to figure out how I could do a “nested loop” approach in Racket, mapping a function onto each character of the plaintext, yes, but then “inside” that, counting the position in the “key” list. Perhaps one can do that, but I did not succeed in figuring it out, and I totally ignored the obvious solution as I was seduced by memories of procedural programming. But eventually, I came to understand that the first task was to expand my fourteen-character key to the length of the ciphertext. In addition to implementing ‘expand’, I also did ‘truncate’. They look like this: (define (truncate len lst) (cond [(null? lst) lst] [(> len (length lst)) lst] [(= len 0) '()] (cons (car lst) (truncate (sub1 len) (cdr lst)))])) (define (expand-iter len lst olst nlst) (cond [(= len 0) nlst] [(null? lst) (expand-iter len olst olst nlst)] (expand-iter (sub1 len) (rest lst) olst (append nlst (list (first lst))))])) (define (expand len lst) (expand-iter len lst lst '())) Implementing truncate was easy enough. The key notion in expand is to pass your work in as a parameter and use recursion to do the work of list traversal, while counting down from the required expanded list size. In expand-iter there are two interesting recursive calls - one when your key list is null, but you haven’t yet reached the expanded list size ((= len 0) nlst) and the other when you have neither reached the required expanded list size, nor is your key list empty, in which case, you keep appending the current head of the key list to the end of the expanded key list. Once I had implemented expand I could finally implement the encipherment function by using map with two lists - the first list being the plaintext, and the second being the expanded key list. (define (vig-encr privkey message) ;;; create the lists for map to work on by converting the input string to a list ;;; and in the case of the private key, expand the key so the list is the same ;;; size as the message, to make map over two lists possible (let* ([y (bytes->list message)] [k (expand (length y) (bytes->list privkey))]) ;;; convert from list to string at the end -- might be ;;; inefficient? so can run a map on the list items (map (lambda (chr keychr) ;;; use char->integer to get the ASCII char number, ;;; and then use the *position* of that letter in the alphabet ;;; convert back to a char (char->integer #\a) ;;; modulo the keyspace size! (+ (- (char->integer #\a)) (char->integer #\a))) The only part of this that’s significantly different from the shift cipher implementation is the map call being given two lists, and the creation of the expanded key in: (let* ([y (bytes->list message)] [k (expand (length y) (bytes->list privkey))]) Note: let* (as opposed to let) lets (haha) me use a previously bound variable within the binding stage of the let to create the expanded key list. After the weeks-long (kind of pleasant) pain of implementing encryption, decryption came quickly! (define (vig-decr privkey message) (let* ([y (bytes->list message)] [k (expand (length y) (bytes->list privkey))]) (map (lambda (chr keychr) (char->integer #\a) (char->integer #\a)) (char->integer #\a)) **Finishing up… OK, so now you’ve seen how to implement the Vigenère cipher in Racket, your assignment is this: Given the Vigenère key: #"glxceeijhxybag" and the ciphertext #"czagkscolfaiatmvbcm", what is the English plaintext? Hint: you may need to employ a social engineering technique to discover the answer.
{"url":"https://frumioj.github.io/scheme/racket/cryptography/crypto/2021/04/03/vigeneres-in-scheme.html","timestamp":"2024-11-12T22:11:07Z","content_type":"text/html","content_length":"24760","record_id":"<urn:uuid:acb43d93-c6ba-4650-ad17-e33a4bb7b880>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00038.warc.gz"}
10th dimension explained The tenth dimension is the vertical axis that aligns the previous nine dimensions. Xem ni dung ph tin t cc tc gi sau y: Jessica Tessica (Jess) / Eddie(@tessicavision), August Gonalves (@new_age_mythbuster), Goureesh 99:99 Light(@portalofmerlinsabundance), August Gonalves(@new_age_mythbuster), Conscious.Ascension(@conscious.ascension) . Imagining the Tenth Dimension, a new way of thinking about time, space, and string theory, a book by Rob Bryanton Imagining the Tenth Dimension - A Book by Rob Bryanton This animation illustrates the concepts presented in chapter one of the book For instance a 10, 000 dimensional cube has 20,000 pieces of 9,999 dimensional cube borders , and it So the 10th is where all imaginable universe could occur. Answer (1 of 6): How do you explain 1D to a child? 7th Dimension. It is true that various versions of string theory predict more than 4 spacetime dimensions: 10, 11 or 26, depending on the theory. This is the realm of eternity. Its the dimension of light. I need you to understand that you have been lied to. You have no doubt been told that time is the fourth dimension. Perhaps you have even been to Of what? You can define anything as a dimension. If youre talking about dimensions in the realm of studying physics I assume you are talking about 10D space is a manifold which can be still analyzed via imbedding into Euclidean vector space. 6th Dimension. Its a great way to escape from the present moment and drift into a place where all the problems we perceive are simply a matter of being trapped in the third dimension. Its a great way to escape from the present moment and drift into a place where all the problems we perceive are simply a matter of being trapped in the third dimension. However: 1. %3E How many dimensions are there in our universe? To answer this question, I want you to come with me on a little journey back in the ancient time They are:Your body and any direct ailments, pain or problemsYour feelings, senses and emotional issuesYour sense of individual self, self-esteem and empowerment issuesYour self-love, and love relationship issuesYour ability to speak your truth and ask for what you really need not just what you want in your lifeMore items O is for Omniverse - Like a children's alphabet book with big picture ideas. The latest variants of string theory assert that there are ten spatial dimensions and one time dimension (however, this number is somewhat in flux, as it has gone from 6s+1t to 25s+1t). Superstring theory, one of the leading theories today to explain the nature of our universe, contends that there are 10 dimensions. Einsteins Special Theory of Relativity explained that the speed of light is a constant and that at great speeds, time slows down (relatively speaking) and space becomes distorted. The theory of superstrings involves the existence of nine dimensions of space and one dimension of time (a total of 10 dimensions). We start with a point (but what's the point?) The latest variants of string theory assert that there are ten spatial dimensions and one time dimension (however, this number is somewhat in flux, as it has gone from 6s+1t to 25s+1t). What is the fourth dimension in the Bible? The industrial air filter market size is expected to grow from $6.44 Bn in 2021 to $10.97 Bn by 2028; it is estimated to grow at a CAGR of 7.9% during 20212028.New York, Feb. 16, 2022 (GLOBE But in 5th dimension you can choose both the option at same time. 7 billion billion billion). Superstring theory, one of the leading theories today to explain the nature of our universe, contends that there are 10 dimensions. Fans of 'A Wrinkle In Time' will recognize some of the concepts presented here. Answer (1 of 15): This video is pure nonsense. All e-books $10 each. It's not even remotely accurate, and it's not consistent with our knowledge of reality. Answer: Just like any other dimension after the dimesion 4.. must be a so called target type of dimension or just : target dimension, it cannot be spatial. This will blow you mind. In this dimension, one surrenders to the gentle flow of the spirits evolution to fit all of the souls possibilities. When a persons consciousness is simultaneously aware in all nine dimensions equally, it Ability to see light geometries, and follow soul purpose. All is evil in the 10th dimension Well, apart from all the evil stuff! Imagining the Tenth Dimension - the book that started it all. What is the 10th dimension? The video covers the details of all the higher dimensions. It is all theory though mathematically proved. Entering a room through its walls is only possible via the Fourth Dimension. The dimensions are states of consciousness where everything on those dimensions vibrates, communicates and perceives in. When a persons consciousness is simultaneously aware in all nine dimensions equally, it However, the For about 1000 years, Rabbinical scholars have asserted from their reading of Genesis 1 that the Universe consists of 10 dimensions, the 10th of which is so small as to be undetectable. Ten Dimensions Explained [ https://m.youtube.com/watch?v=p4Gotl9vRGs ] It will explain about it The second, posited by Swedish physicist Oskar Klein, is that it is a dimension unseen by humans where the forces of gravity and electromagnetism unite to create a simple but graceful theory of the fundamental forces. Many believe that when the Church is raptured, well simply step into a parallel dimension. 10 is a number, a single solitary point, it would be the zeroth dimension. It has no direction of travel it just is. First dimension allows for Its the dimension where the soul tunes in to evolve itself. It's meant to illustrate Rob Bryanton's book Imagining the 10th Dimension, which largely concerns itself with superstring theory. The question alleges that higher dimensions have anything to do with the multiple world hypothesis First of all there are different definitions for dimensionality. This is the realm of soul. Twice, Jesus entered the room of the disciples without using a door (John 20:19-23, 26-29). Thats nine By Laurie Brenner. Imagining the Tenth Dimension, a new way of thinking about time and space, a book by Rob Bryanton In order to enjoy the media-rich nature of the Tenth Dimension website, you will need to have JavaScript enabled. - Rob's new book! 10th dimension: simple words Where there are infinite amount of universe, with endless possibilities from infinite start condition (our start condition is the Big Bang theory) There could be anything, there could be a universe that contains you, who could fly and is What if the 10 th dimension in PT and in the 9-11-geometry explained in this article embodies this boundary, functioning as a hyper-dimensional portal through which consciousness hallucinates material reality by thinking the wave-like quantum state of matter into static material reality.. Comprehending the link between thought/consciousness and material reality, The awakened higher heart center between the heart and throat, bursting through the limitations of the previous dimensions. Deloitte analysts explain the potential benefits of wider adoption of artificial intelligence in a new report: The AI Dossier. Every being living or in spirit exist in several aspects of dimensions, either moving from, to or ascend towards. It is very complex to explane because we can only observe 1d,2d and 3d but we can observe a little bit of the 4th dimension which is time we can on What could there be? Thats nine I see some of the answers here take the idea of time to explain higher dimensions; but time is just a mathematical dimension. There are seriously m Since its second issue, the magazine has had color covers, all covers painted by Avi Katz.. The Superstring theory says that there are 10 dimensions of reality existing in the universe other than the three familiar dimensions of length, height, and depth. Phenomena like the parallel universe or alternate realities come to our mind when we hear the term different dimensions. Apart from these 3 discernible dimensions, the scientists believe that could be many more other dimensions. According to some researches extra dimensions may be curled up at extremely small scales. It explains very well what dimensions are.-> Everything Forever - Giorbran's masterpiece, bequeathed to Rob on Gevin's death. Phalanges, the 10th dimension + the F train (actual title they created) Your phalanges are the bones of your fingers. Moving on then: 5D or Fifth-Dimension. We only have access to 3 1/2 of these dimensions now so we dont know what the In mathematics, the dimension of an object is, roughly speaking, the number of degrees of freedom of a point that moves on this object. In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. I couldnt understand all the dimensions, I gave up on some of them. The magazine publishes a mixture of fiction and non-fiction. In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings.String theory describes how these strings propagate through space and interact with each other. VIDEO-> To help you understand dimensions you should watch Dr.Quantum In Flatland. These different dimensions influence the universe, its fundamental forces of nature, and all the elementary particles it is composed of. Originally Answered: What is the 10th dimension? There could be anime worlds, narutobleach yay! The Tenth Dimension Explained. 1) The extra dimension is small and round, though not nearly as small as the truly minuscule dimensions associated with string theory. Many believe that when the Church is raptured, well simply step into a parallel dimension. The answer is no but not for the reasons most people might think. There are only 10 dimensions which mathematically describe our Universe and one It is the shamans world tree, the Qabalistic Tree of Life, and the spire running through the dimensions. The Bible also contains examples of appearances which also are easily explained with the use of the Fourth Dimension. This is the realm of infinity. The Tenth Dimension is a conscious entity - a Universe. It is not, however, a universe of form. (Form begins at the Ninth Dimension.) Through the eyes of form, a universe is born, develops, matures, and dies. This cycle marks a Cosmic Day. Numerologically, the combination of the 1 and 0 represents the union of The Something and The Nothing. Moving into more brainstorming, assume cases of the situation where at the same time you are actually in One way to picture higher dimensions is adding parameters rather than directions. For example, we can picture 3D Euclidean space, but have a fourth Suppose you have to choose your career as a cricketer or as a musician. 11th dimension: The 11th dimension is a characteristic of space-time that has been proposed as a possible answer to questions that arise in superstring theory. If we talk about fourth dimension, then you can just choose one option as your career, a cricketer or a musician. The Tenth Dimensional equivalent Imagining the fourth dimension is hard but 6 more dimensions after that is mind boggling. It can be up to about 1 Integration of spiritual awareness, self realization and unified polarities. Finally, in the tenth dimension, we find ourselves at a point where everything possible and imaginable is open. Welcome to the Tenth Dimension (text-based version) In string theory, physicists tell us that the subatomic particles that make up our universe are created within ten spatial dimensions (plus an eleventh dimension of "time") by the vibrations of exquisitely small "superstrings". This will blow you mind. Nitin Mittal and The Tenth Dimension is the moral opposite of Crash's home dimension. 11 Dimensions Explained:-0th dimension-There is no length, no breadth and no height of an object in the zeroth dimension. The Tenth Dimension Explained. The tenth dimension is the vertical axis that aligns the previous nine dimensions. Everything that is considered benevolent in Crash's dimension is malevolent in the Tenth Dimension and vice-versa whereas Crash's dimension is bright and colorful, the Tenth Dimension is dark and gloomy. https://youtu.be/0ca4miMMaCE. For about 1000 years, Rabbinical scholars have asserted from their reading of Genesis 1 that the Universe consists of 10 dimensions, the 10th of which is so small as to be undetectable. 10th dimension: simple words Where there are infinite amount of universe, with endless possibilities from infinite start condition (our start cond 10th dimension. What if the 10 th dimension in PT and in the 9-11-geometry explained in this article embodies this boundary, functioning as a hyper-dimensional portal through which consciousness hallucinates material reality by thinking the wave-like quantum state of matter into static material reality.. Comprehending the link between thought/consciousness and material reality, If you can think it, there's one out there, if you can't think it, the Continue Reading More answers below Is gravity the fifth dimension? Answer: Since actually nobody really understands and knows the answer to your question, I may deposit my ten cents and try. If he understands a dimension, I think he can understand any number of it. In simple words in 5th dimension you can go left and right in time and see your second carrrer. In fact, the Superstring Theory asserts that the universe has 10 different dimensions. A concept which is kinda explained by physics using the 10thdimension. 11 dimensions have been explained in this video. 8th Dimension. The Tenth Dimension (Ha'meimad Ha'asiri) is a magazine published in Israel and is the official magazine of the Israeli Society for Science Fiction and Fantasy.The magazine was first published in 1996. A. It is the shamans world tree, the Qabalistic Tree of Life, and the spire running through the dimensions. For example, the dimension of a point is zero; the The fifth dimension has two definitions: the first is that its a name of a 1969 pop-vocal group. Through simple but clean animation along with a narrator, the intent is to help us envision what the tenth dimension is like. In other words the basic notion is a dimension. The vibration of A. Khm ph cc video ngn lin quan n 9th dimension explained trn TikTok. In mathematics. This scale is so small that we can not see them with our experiments based on current technologies. The term is derived from Latin, referring to a row of soldiers. Like 7*10^27 (a.k.a. 10th dimension explained
{"url":"http://bfmsogutma.com.tr/vaseline/best/a/10681808a2f840bc8820de-10th-dimension-explained","timestamp":"2024-11-08T05:18:52Z","content_type":"text/html","content_length":"47722","record_id":"<urn:uuid:863f63d0-86b9-4b9e-b566-e90364c87a67>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00684.warc.gz"}
Power Factor and Conversion Formula between VA and W When we talk about electrical power, we often use the terms VA and W. VA stands for Volt-Ampere, which is the unit of apparent power, and W stands for Watt, which is the unit of real power. Apparent power and real power are two different types of power that are used in different ways in electrical systems. Apparent power is the product of the voltage and current in an electrical system, while real power is the actual power that is used to do work, such as lighting a bulb or running a motor. The difference between the two is called reactive power, which is measured in VAR (Volt-Ampere Reactive). The ratio of real power to apparent power is called the power factor, and it is a measure of how efficiently the electrical system is using the power. A power factor of 1 means that all of the apparent power is being used as real power, while a power factor of less than 1 means that some of the apparent power is being wasted as reactive power. In order to convert VA to W, we need to know the power factor of the system. The conversion formula is: W = VA x Power Factor For example, if a system has an apparent power of 1000 VA and a power factor of 0.8, the real power would be: W = 1000 x 0.8 = 800 W Conversely, to convert W to VA, we can use the following formula: VA = W / Power Factor For example, if a system has a real power of 800 W and a power factor of 0.8, the apparent power would be: VA = 800 / 0.8 = 1000 VA Understanding the difference between VA and W and knowing how to convert between them is important in designing and maintaining electrical systems, as it ensures that the system is operating efficiently and effectively. LuShan, est. 1975, is a Chinese professional manufacturer specializing in power transformers and reactors for 48 years. Leading products are single-phase transformer, three-phase transformers, DC inductors, AC reactors, filtering reactor, expoxy resin high-voltage transformer and intermediate, high-frequency products. Our transformers and reactors are widely used in 10 application areas: rapid transit, construction machinery, renewable energy, intelligent manufacturing, medical equipment, coal mine explosion prevention , excitation system, vacuum sintering, central air conditioning. Know more about power transformer :https://www.lstransformer.com/Transformers
{"url":"https://www.lstransformer.com/Faq/transformerVAW","timestamp":"2024-11-11T14:48:40Z","content_type":"text/html","content_length":"20220","record_id":"<urn:uuid:11ef7f8d-a605-4bfd-99ec-2e32dde74874>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00489.warc.gz"}
DAAD: Differenzengleichungen und Computeralgebra - RISC - Johannes Kepler University DAAD: Differenzengleichungen und Computeralgebra Project Description Doktoratstipentium von Manuel Kauers Budget: 14.425,66 Eur. Project Lead Project Duration 01/01/2004 - 31/12/2004 A Computer Proof of Turan's Inequality Stefan Gerhold, Manuel Kauers Journal of Inequalities in Pure and Applied Mathematics 7(2), pp. 1-4. May 2006. Article 42. [ps] author = {Stefan Gerhold and Manuel Kauers}, title = {{A Computer Proof of Turan's Inequality}}, language = {english}, abstract = {We show how Turan's inequality $P_n(x)^2-P_{n-1}(x)P_{n+1}(x)\geq0$ for Legendre Polynomials and related inequalities can be proven by means of a computer procedure. The use of this procedure simplifies the daily work with inequalities. For instance, we have found the stronger inequality $|x|P_n(x)^2-P_{n-1}(x)P_{n+1}(x)\geq0$ ($-1\leq x\leq 1$) effortlessly with the aid of our journal = {Journal of Inequalities in Pure and Applied Mathematics}, volume = {7}, number = {2}, pages = {1--4}, isbn_issn = {?}, year = {2006}, month = {May}, note = {Article 42}, refereed = {yes}, length = {4} Indefinite summation with unspecified summands M. Kauers, C. Schneider Discrete Math. 306(17), pp. 2073-2083. 2006. ISSN 0012-365X. Preliminary version online. [doi] [pdf] author = {M. Kauers and C. Schneider}, title = {{Indefinite summation with unspecified summands}}, language = {english}, journal = {Discrete Math.}, volume = {306}, number = {17}, pages = {2073--2083}, isbn_issn = {ISSN 0012-365X}, year = {2006}, note = {Preliminary version online}, refereed = {yes}, length = {11}, url = {https://doi.org/10.1016/j.disc.2006.04.005} Application of unspecified sequences in symbolic summation M. Kauers, C. Schneider In: Proceedings of ISSAC'06, Jean-Guillaume Dumas (ed.), Proceedings of ISSAC'06, pp. 177-183. 2006. ACM Press, [doi] [ps] [pdf] author = {M. Kauers and C. Schneider}, title = {{Application of unspecified sequences in symbolic summation}}, booktitle = {{Proceedings of ISSAC'06}}, language = {english}, abstract = { We consider symbolic sums which contain subexpressions that represent unspecified sequences. Existing symbolic summation technology is extended to sums of this kind. We show how this can be applied in the systematic search for general summation identities. Both, results about the non-existence of identities of a certain form, and examples of general families of identities which we have discovered automatically are included in the paper.}, pages = {177--183}, publisher = {ACM Press}, isbn_issn = {?}, year = {2006}, editor = {Jean-Guillaume Dumas}, refereed = {yes}, length = {7}, conferencename = {ISSAC'06}, url = {https://doi.org/10.1145/1145768.1145800} A Procedure for Proving Special Function Inequalities Involving a Discrete Parameter Stefan Gerhold, Manuel Kauers In: Proceedings of ISSAC '05, Manuel Kauers (ed.), pp. 156-162. 2005. ACM Press, ISBN 1-59593-095-705/0007. [ps] author = {Stefan Gerhold and Manuel Kauers}, title = {{A Procedure for Proving Special Function Inequalities Involving a Discrete Parameter}}, booktitle = {{Proceedings of ISSAC '05}}, language = {english}, abstract = {We define a class of special function inequalities that contains many classical examples, such as the Cauchy-Schwarz inequality, and introduce a proving procedure based on induction and Cylindrical Algebraic Decomposition. We present an array of non/trivial examples that can be done by our method and have not been proven automatically before. Some difficult well-known inequalities such as the Askey-Gasper inequality and Vietoris's inequality lie in our class as well, but we do not know if our proving procedure terminates on them.}, pages = {156--162}, publisher = {ACM Press}, isbn_issn = {ISBN 1-59593-095-705/0007}, year = {2005}, editor = {Manuel Kauers}, refereed = {yes}, length = {7} A Computer Proof of Turan's Inequality Stefan Gerhold, Manuel Kauers SFB F013. Technical report no. 2005-15, Altenbergerstrasse 69, September 2005. [ps] author = {Stefan Gerhold and Manuel Kauers}, title = {{A Computer Proof of Turan's Inequality}}, language = {english}, abstract = {We show how Turan's inequality $P_n(x)^2-P_{n-1}(x)P_{n+1}(x)\geq0$ for Legendre polynomials and related inequalities can be proven by means of a computer procedure. The use of this procedure simplifies the daily work with inequalities. For instance, we have found the stronger inequality $|x|P_n(x)^2-P_{n-1}(x)P_{n+1}(x)\geq0$, $-1\leq x\leq 1$, effortlessly with the aid of our number = {2005-15}, address = {Altenbergerstrasse 69}, year = {2005}, month = {September}, institution = {SFB F013}, keywords = {Turan's inequality, Cylindrical Algebraic Decomposition}, length = {3} Algorithms for Nonlinear Higher Order Difference Equations Manuel Kauers RISC-Linz. PhD Thesis. October 2005. [ps] [ps] author = {Manuel Kauers}, title = {{Algorithms for Nonlinear Higher Order Difference Equations}}, language = {english}, abstract = {In this thesis, new algorithmic methods for the treatment of special sequences are presented. The sequences that we consider are described by systems of difference equations (recurrences). These systems may be coupled, non-linear, and/or higher order. The class of sequences defined in this way (admissible sequences) contains a lot of sequences which are of interest in various mathematical applications. While some of these sequences can be handled also with known Algorithms, for many others no adequate methods were available up to now.In the center of our interest, there are algorithms for automatically proving known identities of admissible sequences, and for automatically discovering new ones. By "finding new identities", we mean in particular solving of difference equations in closed form, finding closed forms for symbolic sums, and finding algebraic dependencies of given sequences. In addition, we present a procedure by which some inequalities of admissible sequences can be proven automatically. For their algorithmic treatment, admissible sequences are represented as elements of certain special difference rings.In these difference rings, computations are then carried out, whose results can be interpreted as statements about the original admissible sequences. Known techniques for commutative multivariate polynomial rings, especially the theory of Gr\"obner bases, are applied to this end.Part of the present thesis is an implementation of the presented algorithms in form of a software package for the computer algebra system Mathematica. With the aid of our software, we succeeded in proving a lot of identities and inequalities from the literature automatically for the first time. Additionally, with the same software, we have found some identities which were probably unknown up to now.}, year = {2005}, month = {October}, translation = {0}, school = {RISC-Linz}, length = {155} Computer Proofs for Polynomial Identities in Arbitrary Many Variables Manuel Kauers In: Proceedings of ISSAC 2004, Jaime Gutierrez (ed.), pp. 199-204. 2004. ACM Press, ISBN 1-58113-827-X. [ps] author = {Manuel Kauers}, title = {{Computer Proofs for Polynomial Identities in Arbitrary Many Variables}}, booktitle = {{Proceedings of ISSAC 2004}}, language = {english}, pages = {199--204}, publisher = {ACM Press}, isbn_issn = {ISBN 1-58113-827-X}, year = {2004}, editor = {Jaime Gutierrez}, refereed = {yes}, length = {6} ZET User Manual Manuel Kauers SFB F13. Technical report no. 2004-05, 2004. [ps] author = {Manuel Kauers}, title = {{ZET User Manual}}, language = {english}, number = {2004-05}, year = {2004}, institution = {SFB F13}, length = {31}
{"url":"https://www1.risc.jku.at/pj/daad-differenzengleichungen-und-computeralgebra/","timestamp":"2024-11-05T19:01:16Z","content_type":"text/html","content_length":"37289","record_id":"<urn:uuid:42b311db-aaf3-49b8-a577-ca2e72d015d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00261.warc.gz"}
10.E: Correlation and Regression (Exercises) Last updated Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vectorC}[1]{\textbf{#1}} \) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\ evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\ newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y} \) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real} {\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec} [3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array} {r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp #2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\ wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\ newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var} {\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\ bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\ widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 10.1 Linear Relationships Between Variables 1. A line has equation \(y=0.5x+2\). 1. Pick five distinct \(x\)-values, use the equation to compute the corresponding \(y\)-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the \(y\)-intercept. 2. A line has equation \(y=x-0.5\). 1. Pick five distinct \(x\)-values, use the equation to compute the corresponding \(y\)-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the \(y\)-intercept. 3. A line has equation \(y=-2x+4\). 1. Pick five distinct \(x\)-values, use the equation to compute the corresponding \(y\)-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the \(y\)-intercept. 4. A line has equation \(y=-1.5x+1\). 1. Pick five distinct \(x\)-values, use the equation to compute the corresponding \(y\)-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the \(y\)-intercept. 5. Based on the information given about a line, determine how \(y\) will change (increase, decrease, or stay the same) when \(x\) is increased, and explain. In some cases it might be impossible to tell from the information given. 1. The slope is positive. 2. The \(y\)-intercept is positive. 3. The slope is zero. 6. Based on the information given about a line, determine how \(y\) will change (increase, decrease, or stay the same) when \(x\) is increased, and explain. In some cases it might be impossible to tell from the information given. 1. The \(y\)-intercept is negative. 2. The \(y\)-intercept is zero. 3. The slope is negative. 7. A data set consists of eight \((x,y)\) pairs of numbers: \[\begin{matrix} (0,12) & (4,16) & (8,22) & (15,28)\\ (2,15) & (5,14) & (13,24) & (20,30) \end{matrix}\] 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between \(x\) and \(y\) appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between \(x\) and \(y\) appears to be linear or not linear. 8. A data set consists of ten \((x,y)\) pairs of numbers: \[\begin{matrix} (3,20) & (6,9) & (11,0) & (14,1) & (18,9)\\ (5,13) & (8,4) & (12,0) & (17,6) & (20,16) \end{matrix}\] 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between \(x\) and \(y\) appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between \(x\) and \(y\) appears to be linear or not linear. 9. A data set consists of nine \((x,y)\) pairs of numbers: \[\begin{matrix} (8,16) & (10,4) & (12,0) & (14,4) & (16,16)\\ (9,9) & (11,1) & (13,1) & (15,9) & \end{matrix}\] 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between \(x\) and \(y\) appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between \(x\) and \(y\) appears to be linear or not linear. 10. A data set consists of five \((x,y)\) pairs of numbers: \[\begin{matrix} (0,1) & (2,5) & (3,7) & (5,11) & (8,17) \end{matrix}\] 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between \(x\) and \(y\) appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between \(x\) and \(y\) appears to be linear or not linear. 11. At \(60^{\circ}F\) a particular blend of automotive gasoline weights \(6.17\) lb/gal. The weight \(y\) of gasoline on a tank truck that is loaded with \(x\) gallons of gasoline is given by the linear equation \[y=6.17x\] 1. Explain whether the relationship between the weight \(y\) and the amount \(x\) of gasoline is deterministic or contains an element of randomness. 2. Predict the weight of gasoline on a tank truck that has just been loaded with \(6,750\) gallons of gasoline. 12. The rate for renting a motor scooter for one day at a beach resort area is \(\$25\) plus \(30\) cents for each mile the scooter is driven. The total cost \(y\) in dollars for renting a scooter and driving it \(x\) miles is \[y=0.30x+25\] 1. Explain whether the relationship between the cost \(y\) of renting the scooter for a day and the distance \(x\) that the scooter is driven that day is deterministic or contains an element of 2. A person intends to rent a scooter one day for a trip to an attraction \(17\) miles away. Assuming that the total distance the scooter is driven is \(34\) miles, predict the cost of the 13. The pricing schedule for labor on a service call by an elevator repair company is \(\$150\) plus \(\$50\) per hour on site. 1. Write down the linear equation that relates the labor cost \(y\) to the number of hours \(x\) that the repairman is on site. 2. Calculate the labor cost for a service call that lasts \(2.5\) hours. 14. The cost of a telephone call made through a leased line service is \(2.5\) cents per minute. 1. Write down the linear equation that relates the cost \(y\) (in cents) of a call to its length \(x\). 2. Calculate the cost of a call that lasts \(23\) minutes. Large Data Set Exercises Large Data Sets not available 15. Large \(\text{Data Set 1}\) lists the SAT scores and GPAs of \(1,000\) students. Plot the scatter diagram with SAT score as the independent variable (\(x\)) and GPA as the dependent variable (\(y \)). Comment on the appearance and strength of any linear trend. 16. Large \(\text{Data Set 12}\) lists the golf scores on one round of golf for \(75\) golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Plot the scatter diagram with golf score using the original clubs as the independent variable (\(x\)) and golf score using the new clubs as the dependent variable (\(y\)). Comment on the appearance and strength of any linear trend. 17. Large \(\text{Data Set 13}\) records the number of bidders and sales price of a particular type of antique grandfather clock at \(60\) auctions. Plot the scatter diagram with the number of bidders at the auction as the independent variable (\(x\)) and the sales price as the dependent variable (\(y\)). Comment on the appearance and strength of any linear trend. 1. Answers vary. 2. Slope \(m=0.5\); \(y\)-intercept \(b=2\). 1. Answers vary. 2. Slope \(m=-2\); \(y\)-intercept \(b=4\). 1. \(y\) increases. 2. Impossible to tell. 3. \(y\) does not change. 1. Scatter diagram needed. 2. Involves randomness. 3. Linear. 1. Scatter diagram needed. 2. Deterministic. 3. Not linear. 1. Deterministic. 2. \(41,647.5\) pounds. 1. \(y=50x+150\). 2. \(\$275\). 15. There appears to a hint of some positive correlation. 17. There appears to be clear positive correlation. 10.2 The Linear Correlation Coefficient With the exception of the exercises at the end of Section 10.3, the first Basic exercise in each of the following sections through Section 10.7 uses the data from the first exercise here, the second Basic exercise uses the data from the second exercise here, and so on, and similarly for the Application exercises. Save your computations done on these exercises so that you do not need to repeat them later. 1. For the sample data \[\begin{array}{c|c c c c c} x &0 &1 &3 &5 &8 \\ \hline y &2 &4 &6 &5 &9\\ \end{array}\] 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 2. For the sample data \[\begin{array}{c|c c c c c} x &0 &2 &3 &6 &9 \\ \hline y &0 &3 &3 &4 &8\\ \end{array}\] 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 3. For the sample data \[\begin{array}{c|c c c c c} x &1 &3 &4 &6 &8 \\ \hline y &4 &1 &3 &-1 &0\\ \end{array}\] 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 4. For the sample data \[\begin{array}{c|c c c c c} x &1 &2 &4 &7 &9 \\ \hline y &5 &5 &6 &-3 &0\\ \end{array}\] 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 5. For the sample data \[\begin{array}{c|c c c c c} x &1 &1 &3 &4 &5 \\ \hline y &2 &1 &5 &3 &4\\ \end{array}\] 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 6. For the sample data \[\begin{array}{c|c c c c c} x &1 &3 &5 &5 &8 \\ \hline y &5 &-2 &2 &-1 &-3\\ \end{array}\] 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 7. Compute the linear correlation coefficient for the sample data summarized by the following information: \[n=5\; \; \sum x=25\; \; \sum x^2=165\\ \sum y=24\; \; \sum y^2=134\; \; \sum xy=144\\ 1\ leq x\leq 9\] 8. Compute the linear correlation coefficient for the sample data summarized by the following information: \[n=5\; \; \sum x=31\; \; \sum x^2=253\\ \sum y=18\; \; \sum y^2=90\; \; \sum xy=148\\ 2\ leq x\leq 12\] 9. Compute the linear correlation coefficient for the sample data summarized by the following information: \[n=10\; \; \sum x=0\; \; \sum x^2=60\\ \sum y=24\; \; \sum y^2=234\; \; \sum xy=-87\\ -4\ leq x\leq 4\] 10. Compute the linear correlation coefficient for the sample data summarized by the following information: \[n=10\; \; \sum x=-3\; \; \sum x^2=263\\ \sum y=55\; \; \sum y^2=917\; \; \sum xy=-355\\ -10\leq x\leq 10\] 11. The age \(x\) in months and vocabulary \(y\) were measured for six children, with the results shown in the table. \[\begin{array}{c|c c c c c c c} x &13 &14 &15 &16 &16 &18 \\ \hline y &8 &10 &15 &20 &27 &30\\ \end{array}\] Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 12. The curb weight \(x\) in hundreds of pounds and braking distance \(y\) in feet, at \(50\) miles per hour on dry pavement, were measured for five vehicles, with the results shown in the table. \[\ begin{array}{c|c c c c c c } x &25 &27.5 &32.5 &35 &45 \\ \hline y &105 &125 &140 &140 &150 \\ \end{array}\] Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 13. The age \(x\) and resting heart rate \(y\) were measured for ten men, with the results shown in the table. \[\begin{array}{c|c c c c c c } x &20 &23 &30 &37 &35 \\ \hline y &72 &71 &73 &74 &74 \\ \end{array}\\ \begin{array}{c|c c c c c c } x &45 &51 &55 &60 &63 \\ \hline y &73 &72 &79 &75 &77 \\ \end{array}\\\] Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 14. The wind speed \(x\) in miles per hour and wave height \(y\) in feet were measured under various conditions on an enclosed deep water sea, with the results shown in the table, \[\begin{array}{c|c c c c c c } x &0 &0 &2 &7 &7 \\ \hline y &2.0 &0.0 &0.3 &0.7 &3.3 \\ \end{array}\\ \begin{array}{c|c c c c c c } x &9 &13 &20 &22 &31 \\ \hline y &4.9 &4.9 &3.0 &6.9 &5.9 \\ \end{array}\\\] Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 15. The advertising expenditure \(x\) and sales \(y\) in thousands of dollars for a small retail business in its first eight years in operation are shown in the table. \[\begin{array}{c|c c c c c } x &1.4 &1.6 &1.6 &2.0 \\ \hline y &180 &184 &190 &220 \\ \end{array}\\ \begin{array}{c|c c c c c c } x &2.0 &2.2 &2.4 &2.6 \\ \hline y &186 &215 &205 &240 \\ \end{array}\\\] Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 16. The height \(x\) at age \(2\) and \(y\) at age \(20\), both in inches, for ten women are tabulated in the table. \[\begin{array}{c|c c c c c } x &31.3 &31.7 &32.5 &33.5 &34.4\\ \hline y &60.7 & 61.0 &63.1 &64.2 &65.9 \\ \end{array}\\ \begin{array}{c|c c c c c } x &35.2 &35.8 &32.7 &33.6 &34.8 \\ \hline y &68.2 &67.6 &62.3 &64.9 &66.8 \\ \end{array}\\\] Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 17. The course average \(x\) just before a final exam and the score \(y\) on the final exam were recorded for \(15\) randomly selected students in a large physics class, with the results shown in the table. \[\begin{array}{c|c c c c c } x &69.3 &87.7 &50.5 &51.9 &82.7\\ \hline y &56 &89 &55 &49 &61 \\ \end{array}\\ \begin{array}{c|c c c c c } x &70.5 &72.4 &91.7 &83.3 &86.5 \\ \hline y &66 & 72 &83 &73 &82 \\ \end{array}\\ \begin{array}{c|c c c c c } x &79.3 &78.5 &75.7 &52.3 &62.2 \\ \hline y &92 &80 &64 &18 &76 \\ \end{array}\\\] Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 18. The table shows the acres \(x\) of corn planted and acres \(y\) of corn harvested, in millions of acres, in a particular country in ten successive years. \[\begin{array}{c|c c c c c } x &75.7 & 78.9 &78.6 &80.9 &81.8\\ \hline y &68.8 &69.3 &70.9 &73.6 &75.1 \\ \end{array}\\ \begin{array}{c|c c c c c } x &78.3 &93.5 &85.9 &86.4 &88.2 \\ \hline y &70.6 &86.5 &78.6 &79.5 &81.4 \\ \end Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 19. Fifty male subjects drank a measured amount \(x\) (in ounces) of a medication and the concentration \(y\) (in percent) in their blood of the active ingredient was measured \(30\) minutes later. The sample data are summarized by the following information. \[n=50\; \; \sum x=112.5\; \; \sum y=4.83\\ \sum xy=15.255\; \; 0\leq x\leq 4.5\\ \sum x^2=356.25\; \; \sum y^2=0.667\] Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 20. In an effort to produce a formula for estimating the age of large free-standing oak trees non-invasively, the girth \(x\) (in inches) five feet off the ground of \(15\) such trees of known age \ (y\) (in years) was measured. The sample data are summarized by the following information. \[n=15\; \; \sum x=3368\; \; \sum y=6496\\ \sum xy=1,933,219\; \; 74\leq x\leq 395\\ \sum x^2=917,780\; \; \sum y^2=4,260,666\] Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 21. Construction standards specify the strength of concrete \(28\) days after it is poured. For \(30\) samples of various types of concrete the strength \(x\) after \(3\) days and the strength \(y\) after \(28\) days (both in hundreds of pounds per square inch) were measured. The sample data are summarized by the following information. \[n=30\; \; \sum x=501.6\; \; \sum y=1338.8\\ \sum xy= 23,246.55\; \; 11\leq x\leq 22\\ \sum x^2=8724.74\; \; \sum y^2=61,980.14\] Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 22. Power-generating facilities used forecasts of temperature to forecast energy demand. The average temperature \(x\) (degrees Fahrenheit) and the day’s energy demand \(y\) (million watt-hours) were recorded on \(40\) randomly selected winter days in the region served by a power company. The sample data are summarized by the following information. \[n=40\; \; \sum x=2000\; \; \sum y=2969\\ \ sum xy=143,042\; \; 40\leq x\leq 60\\ \sum x^2=101,340\; \; \sum y^2=243,027\] Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. Additional Exercises 23. In each case state whether you expect the two variables \(x\) and \(y\) indicated to have positive, negative, or zero correlation. 1. the number \(x\) of pages in a book and the age \(y\) of the author 2. the number \(x\) of pages in a book and the age \(y\) of the intended reader 3. the weight \(x\) of an automobile and the fuel economy \(y\) in miles per gallon 4. the weight \(x\) of an automobile and the reading \(y\) on its odometer 5. the amount \(x\) of a sedative a person took an hour ago and the time \(y\) it takes him to respond to a stimulus 24. In each case state whether you expect the two variables \(x\) and \(y\) indicated to have positive, negative, or zero correlation. 1. the length \(x\) of time an emergency flare will burn and the length \(y\) of time the match used to light it burned 2. the average length \(x\) of time that calls to a retail call center are on hold one day and the number \(y\) of calls received that day 3. the length \(x\) of a regularly scheduled commercial flight between two cities and the headwind \(y\) encountered by the aircraft 4. the value \(x\) of a house and the its size \(y\) in square feet 5. the average temperature \(x\) on a winter day and the energy consumption \(y\) of the furnace 25. Changing the units of measurement on two variables \(x\) and \(y\) should not change the linear correlation coefficient. Moreover, most change of units amount to simply multiplying one unit by the other (for example, \(1\) foot = \(12\) inches). Multiply each \(x\) value in the table in Exercise 1 by two and compute the linear correlation coefficient for the new data set. Compare the new value of \(r\) to the one for the original data. 26. Refer to the previous exercise. Multiply each \(x\) value in the table in Exercise 2 by two, multiply each \(y\) value by three, and compute the linear correlation coefficient for the new data set. Compare the new value of \(r\) to the one for the original data. 27. Reversing the roles of \(x\) and \(y\) in the data set of Exercise 1 produces the data set \[\begin{array}{c|c c c c c} x &2 &4 &6 &5 &9 \\ \hline y &0 &1 &3 &5 &8\\ \end{array}\] Compute the linear correlation coefficient of the new set of data and compare it to what you got in Exercise 1. 28. In the context of the previous problem, look at the formula for \(r\) and see if you can tell why what you observed there must be true for every data set. Large Data Set Exercises Large Data Sets not available 29. Large \(\text{Data Set 1}\) lists the SAT scores and GPAs of \(1,000\) students. Compute the linear correlation coefficient \(r\). Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the first large data set problem for Section 10.1. 30. Large \(\text{Data Set 12}\) lists the golf scores on one round of golf for \(75\) golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Compute the linear correlation coefficient \(r\). Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the second large data set problem for Section 10.1. 31. Large \(\text{Data Set 13}\) records the number of bidders and sales price of a particular type of antique grandfather clock at \(60\) auctions. Compute the linear correlation coefficient \(r\). Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the third large data set problem for Section 10.1. 1. \(r=0.921\) 3. \(r=-0.794\) 5. \(r=0.707\) 7. \(0.875\) 9. \(-0.846\) 11. \(0.948\) 13. \(0.709\) 15. \(0.832\) 17. \(0.751\) 19. \(0.965\) 21. \(0.992\) 23. .921 1. zero 2. positive 3. negative 4. zero 5. positive 25. same value 27. same value 29. \(r=0.4601\) 31. \(r=0.9002\) 10.3 Modelling Linear Relationships with Randomness Present 1. State the three assumptions that are the basis for the Simple Linear Regression Model. 2. The Simple Linear Regression Model is summarized by the equation \[y=\beta _1x+\beta _0+\varepsilon\] Identify the deterministic part and the random part. 3. Is the number \(\beta _1\) in the equation \(y=\beta _1x+\beta _0\) a statistic or a population parameter? Explain. 4. Is the number \(\sigma\) in the Simple Linear Regression Model a statistic or a population parameter? Explain. 5. Describe what to look for in a scatter diagram in order to check that the assumptions of the Simple Linear Regression Model are true. 6. True or false: the assumptions of the Simple Linear Regression Model must hold exactly in order for the procedures and analysis developed in this chapter to be useful. 1. The mean of \(y\) is linearly related to \(x\). 2. For each given \(x\), \(y\) is a normal random variable with mean \(\beta _1x+\beta _0\) and a standard deviation \(\sigma\). 3. All the observations of \(y\) in the sample are independent. 3. \(\beta _1\) is a population parameter. 5. A linear trend. 10.4 The Least Squares Regression Line For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2. 1. Compute the least squares regression line for the data in Exercise 1 of Section 10.2. 2. Compute the least squares regression line for the data in Exercise 2 of Section 10.2. 3. Compute the least squares regression line for the data in Exercise 3 of Section 10.2. 4. Compute the least squares regression line for the data in Exercise 4 of Section 10.2. 5. For the data in Exercise 5 of Section 10.2 1. Compute the least squares regression line. 2. Compute the sum of the squared errors \(\text{SSE}\) using the definition \(\sum (y-\hat{y})^2\). 3. Compute the sum of the squared errors \(\text{SSE}\) using the formula \(SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}\). 6. For the data in Exercise 6 of Section 10.2 1. Compute the least squares regression line. 2. Compute the sum of the squared errors \(\text{SSE}\) using the definition \(\sum (y-\hat{y})^2\). 3. Compute the sum of the squared errors \(\text{SSE}\) using the formula \(SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}\). 7. Compute the least squares regression line for the data in Exercise 7 of Section 10.2. 8. Compute the least squares regression line for the data in Exercise 8 of Section 10.2. 9. For the data in Exercise 9 of Section 10.2 1. Compute the least squares regression line. 2. Can you compute the sum of the squared errors \(\text{SSE}\) using the definition \(\sum (y-\hat{y})^2\)? Explain. 3. Compute the sum of the squared errors \(\text{SSE}\) using the formula \(SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}\). 10. For the data in Exercise 10 of Section 10.2 1. Compute the least squares regression line. 2. Can you compute the sum of the squared errors \(\text{SSE}\) using the definition \(\sum (y-\hat{y})^2\)? Explain. 3. Compute the sum of the squared errors \(\text{SSE}\) using the formula \(SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}\). 11. For the data in Exercise 11 of Section 10.2 1. Compute the least squares regression line. 2. On average, how many new words does a child from \(13\) to \(18\) months old learn each month? Explain. 3. Estimate the average vocabulary of all \(16\)-month-old children. 12. For the data in Exercise 12 of Section 10.2 1. Compute the least squares regression line. 2. On average, how many additional feet are added to the braking distance for each additional \(100\) pounds of weight? Explain. 3. Estimate the average braking distance of all cars weighing \(3,000\) pounds. 13. For the data in Exercise 13 of Section 10.2 1. Compute the least squares regression line. 2. Estimate the average resting heart rate of all \(40\)-year-old men. 3. Estimate the average resting heart rate of all newborn baby boys. Comment on the validity of the estimate. 14. For the data in Exercise 14 of Section 10.2 1. Compute the least squares regression line. 2. Estimate the average wave height when the wind is blowing at \(10\) miles per hour. 3. Estimate the average wave height when there is no wind blowing. Comment on the validity of the estimate. 15. For the data in Exercise 15 of Section 10.2 1. Compute the least squares regression line. 2. On average, for each additional thousand dollars spent on advertising, how does revenue change? Explain. 3. Estimate the revenue if \(\$2,500\) is spent on advertising next year. 16. For the data in Exercise 16 of Section 10.2 1. Compute the least squares regression line. 2. On average, for each additional inch of height of two-year-old girl, what is the change in the adult height? Explain. 3. Predict the adult height of a two-year-old girl who is \(33\) inches tall. 17. For the data in Exercise 17 of Section 10.2 1. Compute the least squares regression line. 2. Compute \(\text{SSE}\) using the formula \(SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}\). 3. Estimate the average final exam score of all students whose course average just before the exam is \(85\). 18. For the data in Exercise 18 of Section 10.2 1. Compute the least squares regression line. 2. Compute \(\text{SSE}\) using the formula \(SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}\). 3. Estimate the number of acres that would be harvested if \(90\) million acres of corn were planted. 19. For the data in Exercise 19 of Section 10.2 1. Compute the least squares regression line. 2. Interpret the value of the slope of the least squares regression line in the context of the problem. 3. Estimate the average concentration of the active ingredient in the blood in men after consuming \(1\) ounce of the medication. 20. For the data in Exercise 20 of Section 10.2 1. Compute the least squares regression line. 2. Interpret the value of the slope of the least squares regression line in the context of the problem. 3. Estimate the age of an oak tree whose girth five feet off the ground is \(92\) inches. 21. For the data in Exercise 21 of Section 10.2 1. Compute the least squares regression line. 2. The \(28\)-day strength of concrete used on a certain job must be at least \(3,200\) psi. If the \(3\)-day strength is \(1,300\) psi, would we anticipate that the concrete will be sufficiently strong on the \(28^{th}\) day? Explain fully. 22. For the data in Exercise 22 of Section 10.2 1. Compute the least squares regression line. 2. If the power facility is called upon to provide more than \(95\) million watt-hours tomorrow then energy will have to be purchased from elsewhere at a premium. The forecast is for an average temperature of \(42\) degrees. Should the company plan on purchasing power at a premium? Additional Exercises 23. Verify that no matter what the data are, the least squares regression line always passes through the point with coordinates \((\bar{x},\bar{y})\). Hint: Find the predicted value of \(y\) when \(x 24. In Exercise 1 you computed the least squares regression line for the data in Exercise 1 of Section 10.2. 1. Reverse the roles of x and y and compute the least squares regression line for the new data set \[\begin{array}{c|c c c c c c} x &2 &4 &6 &5 &9 \\ \hline y &0 &1 &3 &5 &8\\ \end{array}\] 2. Interchanging x and y corresponds geometrically to reflecting the scatter plot in a 45-degree line. Reflecting the regression line for the original data the same way gives a line with the equation \(\bar{y}=1.346x-3.600\). Is this the equation that you got in part (a)? Can you figure out why not? Hint: Think about how x and y are treated differently geometrically in the computation of the goodness of fit. 3. Compute \(\text{SSE}\) for each line and see if they fit the same, or if one fits the data better than the other. Large Data Set Exercises Large Data Sets not available 25. Large \(\text{Data Set 1}\) lists the SAT scores and GPAs of \(1,000\) students. 1. Compute the least squares regression line with SAT score as the independent variable (\(x\)) and GPA as the dependent variable (\(y\)). 2. Interpret the meaning of the slope \(\widehat{\beta _1}\) of regression line in the context of problem. 3. Compute \(\text{SSE}\) the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the GPA of a student whose SAT score is \(1350\). 26. Large \(\text{Data Set 12}\) lists the golf scores on one round of golf for \(75\) golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). 1. Compute the least squares regression line with scores using the original clubs as the independent variable (\(x\)) and scores using the new clubs as the dependent variable (\(y\)). 2. Interpret the meaning of the slope \(\widehat{\beta _1}\) of regression line in the context of problem. 3. Compute \(\text{SSE}\) the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the score with the new clubs of a golfer whose score with the old clubs is \(73\). 27. Large \(\text{Data Set 13}\) records the number of bidders and sales price of a particular type of antique grandfather clock at \(60\) auctions. 1. Compute the least squares regression line with the number of bidders present at the auction as the independent variable (\(x\)) and sales price as the dependent variable (\(y\)). 2. Interpret the meaning of the slope \(\widehat{\beta _1}\) of regression line in the context of problem. 3. Compute \(\text{SSE}\) the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the sales price of a clock at an auction at which the number of bidders is seven. 1. \(\hat{y}=0.743x+2.675\) 3. \(\hat{y}=-0.610x+4.082\) 5. \(\hat{y}=0.625x+1.25,\; SSE=5\) 7. \(\hat{y}=0.6x+1.8\) 9. \(\hat{y}=-1.45x+2.4,\; SSE=50.25\) (cannot use the definition to compute) 1. \(\hat{y}=4.848x-56\) 2. \(4.8\) 3. \(21.6\) 1. \(\hat{y}=0.114x+69.222\) 2. \(73.8\) 3. \(69.2\), invalid extrapolation 1. \(\hat{y}=42.024x+119.502\) 2. increases by \(\$42,024\) 3. \(\$224,562\) 1. \(\hat{y}=1.045x-8.527\) 2. \(2151.93367\) 3. \(80.3\) 1. \(\hat{y}=0.043x+0.001\) 2. For each additional ounce of medication consumed blood concentration of the active ingredient increases by \(0.043\%\) 3. \(0.044\%\) 1. \(\hat{y}=2.550x+1.993\) 2. Predicted \(28\)-day strength is \(3,514\) psi; sufficiently strong 1. \(\hat{y}=0.0016x+0.022\) 2. On average, every \(100\) point increase in SAT score adds \(0.16\) point to the GPA. 3. \(SSE=432.10\) 4. \(\hat{y}=2.182\) 1. \(\hat{y}=116.62x+6955.1\) 2. On average, every \(1\) additional bidder at an auction raises the price by \(116.62\) dollars. 3. \(SSE=1850314.08\) 4. \(\hat{y}=7771.44\) 10.5 Statistical Inferences About β1 For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2 and Section 10.4. 1. Construct the \(95\%\) confidence interval for the slope \(\beta _1\) of the population regression line based on the sample data set of Exercise 1 of Section 10.2. 2. Construct the \(90\%\) confidence interval for the slope \(\beta _1\) of the population regression line based on the sample data set of Exercise 2 of Section 10.2. 3. Construct the \(90\%\) confidence interval for the slope \(\beta _1\) of the population regression line based on the sample data set of Exercise 3 of Section 10.2. 4. Construct the \(99\%\) confidence interval for the slope \(\beta _1\) of the population regression line based on the sample data set of Exercise 4 of Section 10.2. 5. For the data in Exercise 5 of Section 10.2 test, at the \(10\%\) level of significance, whether \(x\) is useful for predicting \(y\) (that is, whether \(\beta _1\neq 0\)). 6. For the data in Exercise 6 of Section 10.2 test, at the \(5\%\) level of significance, whether \(x\) is useful for predicting \(y\) (that is, whether \(\beta _1\neq 0\)). 7. Construct the \(90\%\) confidence interval for the slope \(\beta _1\) of the population regression line based on the sample data set of Exercise 7 of Section 10.2. 8. Construct the \(95\%\) confidence interval for the slope \(\beta _1\) of the population regression line based on the sample data set of Exercise 8 of Section 10.2. 9. For the data in Exercise 9 of Section 10.2 test, at the \(1\%\) level of significance, whether \(x\) is useful for predicting \(y\) (that is, whether \(\beta _1\neq 0\)). 10. For the data in Exercise 10 of Section 10.2 test, at the \(1\%\) level of significance, whether \(x\) is useful for predicting \(y\) (that is, whether \(\beta _1\neq 0\)). 11. For the data in Exercise 11 of Section 10.2 construct a \(90\%\) confidence interval for the mean number of new words acquired per month by children between \(13\) and \(18\) months of age. 12. For the data in Exercise 12 of Section 10.2 construct a \(90\%\) confidence interval for the mean increased braking distance for each additional \(100\) pounds of vehicle weight. 13. For the data in Exercise 13 of Section 10.2 test, at the \(10\%\) level of significance, whether age is useful for predicting resting heart rate. 14. For the data in Exercise 14 of Section 10.2 test, at the \(10\%\) level of significance, whether wind speed is useful for predicting wave height. 15. For the situation described in Exercise 15 of Section 10.2 1. Construct the \(95\%\) confidence interval for the mean increase in revenue per additional thousand dollars spent on advertising. 2. An advertising agency tells the business owner that for every additional thousand dollars spent on advertising, revenue will increase by over \(\$25,000\). Test this claim (which is the alternative hypothesis) at the \(5\%\) level of significance. 3. Perform the test of part (b) at the \(10\%\) level of significance. 4. Based on the results in (b) and (c), how believable is the ad agency’s claim? (This is a subjective judgement.) 16. For the situation described in Exercise 16 of Section 10.2 1. Construct the \(90\%\) confidence interval for the mean increase in height per additional inch of length at age two. 2. It is claimed that for girls each additional inch of length at age two means more than an additional inch of height at maturity. Test this claim (which is the alternative hypothesis) at the \ (10\%\) level of significance. 17. For the data in Exercise 17 of Section 10.2 test, at the \(10\%\) level of significance, whether course average before the final exam is useful for predicting the final exam grade. 18. For the situation described in Exercise 18 of Section 10.2, an agronomist claims that each additional million acres planted results in more than \(750,000\) additional acres harvested. Test this claim at the \(1\%\) level of significance. 19. For the data in Exercise 19 of Section 10.2 test, at the \(1/10\)th of \(1\%\) level of significance, whether, ignoring all other facts such as age and body mass, the amount of the medication consumed is a useful predictor of blood concentration of the active ingredient. 20. For the data in Exercise 20 of Section 10.2 test, at the \(1\%\) level of significance, whether for each additional inch of girth the age of the tree increases by at least two and one-half years. 21. For the data in Exercise 21 of Section 10.2 1. Construct the \(95\%\) confidence interval for the mean increase in strength at \(28\) days for each additional hundred psi increase in strength at \(3\) days. 2. Test, at the \(1/10\)th of \(1\%\) level of significance, whether the \(3\)-day strength is useful for predicting \(28\)-day strength. 22. For the situation described in Exercise 22 of Section 10.2 1. Construct the \(99\%\) confidence interval for the mean decrease in energy demand for each one-degree drop in temperature. 2. An engineer with the power company believes that for each one-degree increase in temperature, daily energy demand will decrease by more than \(3.6\) million watt-hours. Test this claim at the \(1\%\) level of significance. Large Data Set Exercises Large Data Sets not available 23. Large \(\text{Data Set 1}\) lists the SAT scores and GPAs of \(1,000\) students. 1. Compute the \(90\%\) confidence interval for the slope \(\beta _1\) of the population regression line with SAT score as the independent variable (\(x\)) and GPA as the dependent variable (\(y 2. Test, at the \(10\%\) level of significance, the hypothesis that the slope of the population regression line is greater than \(0.001\), against the null hypothesis that it is exactly \(0.001 24. Large \(\text{Data Set 12}\) lists the golf scores on one round of golf for \(75\) golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). 1. Compute the \(95\%\) confidence interval for the slope \(\beta _1\) of the population regression line with scores using the original clubs as the independent variable (\(x\)) and scores using the new clubs as the dependent variable (\(y\)). 2. Test, at the \(10\%\) level of significance, the hypothesis that the slope of the population regression line is different from \(1\), against the null hypothesis that it is exactly \(1\). 25. Large \(\text{Data Set 13}\) records the number of bidders and sales price of a particular type of antique grandfather clock at \(60\) auctions. 1. Compute the \(95\%\) confidence interval for the slope \(\beta _1\) of the population regression line with the number of bidders present at the auction as the independent variable(\(x\)) and sales price as the dependent variable (\(y\)). 2. Test, at the \(10\%\) level of significance, the hypothesis that the average sales price increases by more than \(\$90\) for each additional bidder at an auction, against the default that it increases by exactly \(\$90\). 1. \(0.743\pm 0.578\) 3. \(-0.610\pm 0.633\) 5. \(T=1.732,\; \pm t_{0.05}=\pm 2.353\), do not reject \(H_0\) 7. \(0.6\pm 0.451\) 9. \(T=-4.481,\; \pm t_{0.005}=\pm 3.355\), reject \(H_0\) 11. \(4.8\pm 1.7\) words 13. \(T=2.843,\; \pm t_{0.05}=\pm 1.860\), reject \(H_0\) 1. \(42.024\pm 28.011\) thousand dollars 2. \(T=1.487,\; \pm t_{0.05}=\pm 1.943\), do not reject \(H_0\) 3. \(t_{0.10}=1.440\), reject \(H_0\) 17. \(T=4.096,\; \pm t_{0.05}=\pm 1.771\), reject \(H_0\) 19. \(T=25.524,\; \pm t_{0.0005}=\pm 3.505\), reject \(H_0\) 1. \(2.550\pm 0.127\) hundred psi 2. \(T=41.072,\; \pm t_{0.005}=\pm 3.674\), reject \(H_0\) 1. \((0.0014,0.0018)\) 2. \(H_0:\beta _1=0.001\; vs\; H_a:\beta _1>0.001\). Test Statistic: \(Z=6.1625\). Rejection Region: \([1.28,+\infty )\). Decision: Reject \(H_0\) 1. \((101.789,131.4435)\) 2. \(H_0:\beta _1=90\; vs\; H_a:\beta _1>90\). Test Statistic: \(T=3.5938,\; d.f.=58\). Rejection Region: \([1.296,+\infty )\). Decision: Reject \(H_0\) 10.6 The Coefficient of Determination For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2, Section 10.4, and Section 10.5. 1. For the sample data set of Exercise 1 of Section 10.2 find the coefficient of determination using the formula \(r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}\). Confirm your answer by squaring \(r\) as computed in that exercise. 2. For the sample data set of Exercise 2 of Section 10.2" find the coefficient of determination using the formula \(r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}\). Confirm your answer by squaring \(r\) as computed in that exercise. 3. For the sample data set of Exercise 3 of Section 10.2 find the coefficient of determination using the formula \(r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}\). Confirm your answer by squaring \(r\) as computed in that exercise. 4. For the sample data set of Exercise 4 of Section 10.2 find the coefficient of determination using the formula \(r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}\). Confirm your answer by squaring \(r\) as computed in that exercise. 5. For the sample data set of Exercise 5 of Section 10.2 find the coefficient of determination using the formula \(r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}\). Confirm your answer by squaring \(r\) as computed in that exercise. 6. For the sample data set of Exercise 6 of Section 10.2 find the coefficient of determination using the formula \(r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}\). Confirm your answer by squaring \(r\) as computed in that exercise. 7. For the sample data set of Exercise 7 of Section 10.2 find the coefficient of determination using the formula \(r^2=(SS_{yy}-SSE)/SS_{yy}\). Confirm your answer by squaring \(r\) as computed in that exercise. 8. For the sample data set of Exercise 7 of Section 10.2 find the coefficient of determination using the formula \(r^2=(SS_{yy}-SSE)/SS_{yy}\). Confirm your answer by squaring \(r\) as computed in that exercise. 9. For the sample data set of Exercise 7 of Section 10.2 find the coefficient of determination using the formula \(r^2=(SS_{yy}-SSE)/SS_{yy}\). Confirm your answer by squaring \(r\) as computed in that exercise. 10. For the sample data set of Exercise 7 of Section 10.2 find the coefficient of determination using the formula \(r^2=(SS_{yy}-SSE)/SS_{yy}\). Confirm your answer by squaring \(r\) as computed in that exercise. 11. For the data in Exercise 11 of Section 10.2 compute the coefficient of determination and interpret its value in the context of age and vocabulary. 12. For the data in Exercise 12 of Section 10.2" compute the coefficient of determination and interpret its value in the context of vehicle weight and braking distance. 13. For the data in Exercise 13 of Section 10.2 compute the coefficient of determination and interpret its value in the context of age and resting heart rate. In the age range of the data, does age seem to be a very important factor with regard to heart rate? 14. For the data in Exercise 14 of Section 10.2 compute the coefficient of determination and interpret its value in the context of wind speed and wave height. Does wind speed seem to be a very important factor with regard to wave height? 15. For the data in Exercise 15 of Section 10.2 find the proportion of the variability in revenue that is explained by level of advertising. 16. For the data in Exercise 16 of Section 10.2 find the proportion of the variability in adult height that is explained by the variation in length at age two. 17. For the data in Exercise 17 of Section 10.2 compute the coefficient of determination and interpret its value in the context of course average before the final exam and score on the final exam. 18. For the data in Exercise 18 of Section 10.2 compute the coefficient of determination and interpret its value in the context of acres planted and acres harvested. 19. For the data in Exercise 19 of Section 10.2 compute the coefficient of determination and interpret its value in the context of the amount of the medication consumed and blood concentration of the active ingredient. 20. For the data in Exercise 20 of Section 10.2 compute the coefficient of determination and interpret its value in the context of tree size and age. 21. For the data in Exercise 21 of Section 10.2 find the proportion of the variability in \(28\)-day strength of concrete that is accounted for by variation in \(3\)-day strength. 22. For the data in Exercise 22 of Section 10.2 find the proportion of the variability in energy demand that is accounted for by variation in average temperature. Large Data Set Exercises Large Data Sets not available 23. Large \(\text{Data Set 1}\) lists the SAT scores and GPAs of \(1,000\) students. Compute the coefficient of determination and interpret its value in the context of SAT scores and GPAs. 24. Large \(\text{Data Set 12}\) lists the golf scores on one round of golf for \(75\) golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Compute the coefficient of determination and interpret its value in the context of golf scores with the two kinds of golf clubs. 25. Large \(\text{Data Set 13}\) records the number of bidders and sales price of a particular type of antique grandfather clock at \(60\) auctions. Compute the coefficient of determination and interpret its value in the context of the number of bidders at an auction and the price of this type of antique grandfather clock. 1. \(0.848\) 3. \(0.631\) 5. \(0.5\) 7. \(0.766\) 9. \(0.715\) 11. \(0.898\); about \(90\%\) of the variability in vocabulary is explained by age 13. \(0.503\); about \(50\%\) of the variability in heart rate is explained by age. Age is a significant but not dominant factor in explaining heart rate. 15. The proportion is \(r^2=0.692\) 17. \(0.563\); about \(56\%\) of the variability in final exam scores is explained by course average before the final exam 19. \(0.931\); about \(93\%\) of the variability in the blood concentration of the active ingredient is explained by the amount of the medication consumed 21. The proportion is \(r^2=0.984\) 23. \(r^2=21.17\%\) 25. \(r^2=81.04\%\) 10.7 Estimation and Prediction For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in previous sections. 1. For the sample data set of Exercise 1 of Section 10.2 1. Give a point estimate for the mean value of \(y\) in the sub-population determined by the condition \(x = 4\). 2. Construct the \(90\%\) confidence interval for that mean value. 2. For the sample data set of Exercise 2 of Section 10.2 1. Give a point estimate for the mean value of \(y\) in the sub-population determined by the condition \(x = 4\). 2. Construct the \(90\%\) confidence interval for that mean value. 3. For the sample data set of Exercise 3 of Section 10.2 1. Give a point estimate for the mean value of \(y\) in the sub-population determined by the condition \(x = 7\). 2. Construct the \(95\%\) confidence interval for that mean value. 4. For the sample data set of Exercise 4 of Section 10.2 1. Give a point estimate for the mean value of \(y\) in the sub-population determined by the condition \(x = 2\). 2. Construct the \(80\%\) confidence interval for that mean value. 5. For the sample data set of Exercise 5 of Section 10.2 1. Give a point estimate for the mean value of \(y\) in the sub-population determined by the condition \(x = 1\). 2. Construct the \(80\%\) confidence interval for that mean value. 6. For the sample data set of Exercise 6 of Section 10.2 1. Give a point estimate for the mean value of \(y\) in the sub-population determined by the condition \(x = 5\). 2. Construct the \(95\%\) confidence interval for that mean value. 7. For the sample data set of Exercise 7 of Section 10.2 1. Give a point estimate for the mean value of \(y\) in the sub-population determined by the condition \(x = 6\). 2. Construct the \(99\%\) confidence interval for that mean value. 3. Is it valid to make the same estimates for \(x = 12\)? Explain. 8. For the sample data set of Exercise 8 of Section 10.2 1. Give a point estimate for the mean value of \(y\) in the sub-population determined by the condition \(x = 12\). 2. Construct the \(80\%\) confidence interval for that mean value. 3. Is it valid to make the same estimates for \(x = 0\)? Explain. 9. For the sample data set of Exercise 9 of Section 10.2 1. Give a point estimate for the mean value of \(y\) in the sub-population determined by the condition \(x = 0\). 2. Construct the \(90\%\) confidence interval for that mean value. 3. Is it valid to make the same estimates for \(x = -1\)? Explain. 10. For the sample data set of Exercise 9 of Section 10.2 1. Give a point estimate for the mean value of \(y\) in the sub-population determined by the condition \(x = 8\). 2. Construct the \(95\%\) confidence interval for that mean value. 3. Is it valid to make the same estimates for \(x = 0\)? Explain. 11. For the data in Exercise 11 of Section 10.2 1. Give a point estimate for the average number of words in the vocabulary of \(18\)-month-old children. 2. Construct the \(95\%\) confidence interval for that mean value. 3. Construct the \(95\%\) confidence interval for that mean value. 12. For the data in Exercise 12 of Section 10.2 1. Give a point estimate for the average braking distance of automobiles that weigh \(3,250\) pounds. 2. Construct the \(80\%\) confidence interval for that mean value. 3. Is it valid to make the same estimates for \(5,000\)-pound automobiles? Explain. 13. For the data in Exercise 13 of Section 10.2 1. Give a point estimate for the resting heart rate of a man who is \(35\) years old. 2. One of the men in the sample is \(35\) years old, but his resting heart rate is not what you computed in part (a). Explain why this is not a contradiction. 3. Construct the \(90\%\) confidence interval for the mean resting heart rate of all \(35\)-year-old men. 14. For the data in Exercise 14 of Section 10.2 1. Give a point estimate for the wave height when the wind speed is \(13\) miles per hour. 2. One of the wind speeds in the sample is \(13\) miles per hour, but the height of waves that day is not what you computed in part (a). Explain why this is not a contradiction. 3. Construct the \(95\%\) confidence interval for the mean wave height on days when the wind speed is \(13\) miles per hour. 15. For the data in Exercise 15 of Section 10.2 1. The business owner intends to spend \(\$2,500\) on advertising next year. Give an estimate of next year’s revenue based on this fact. 2. Construct the \(90\%\) prediction interval for next year’s revenue, based on the intent to spend \(\$2,500\) on advertising. 16. For the data in Exercise 16 of Section 10.2 1. A two-year-old girl is \(32.3\) inches long. Predict her adult height. 2. Construct the \(95\%\) prediction interval for the girl’s adult height. 17. For the data in Exercise 17 of Section 10.2 1. Lodovico has a \(78.6\) average in his physics class just before the final. Give a point estimate of what his final exam grade will be. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for Lodovico’s final exam grade at the \(90\%\) level of confidence. 18. For the data in Exercise 18 of Section 10.2 1. This year \(86.2\) million acres of corn were planted. Give a point estimate of the number of acres that will be harvested this year. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for the number of acres that will be harvested this year, at the \(99\%\) level of confidence. 19. For the data in Exercise 19 of Section 10.2 1. Give a point estimate for the blood concentration of the active ingredient of this medication in a man who has consumed \(1.5\) ounces of the medication just recently. 2. Gratiano just consumed \(1.5\) ounces of this medication \(30\) minutes ago. Construct a \(95\%\) prediction interval for the concentration of the active ingredient in his blood right now. 20. For the data in Exercise 20 of Section 10.2 1. You measure the girth of a free-standing oak tree five feet off the ground and obtain the value \(127\) inches. How old do you estimate the tree to be? 2. Construct a \(90\%\) prediction interval for the age of this tree. 21. For the data in Exercise 21 of Section 10.2 1. A test cylinder of concrete three days old fails at \(1,750\) psi. Predict what the \(28\)-day strength of the concrete will be. 2. Construct a \(99\%\) prediction interval for the \(28\)-day strength of this concrete. 3. Based on your answer to (b), what would be the minimum \(28\)-day strength you could expect this concrete to exhibit? 22. For the data in Exercise 22 of Section 10.2 1. Tomorrow’s average temperature is forecast to be \(53\) degrees. Estimate the energy demand tomorrow. 2. Construct a \(99\%\) prediction interval for the energy demand tomorrow. 3. Based on your answer to (b), what would be the minimum demand you could expect? Large Data Set Exercises Large Data Sets not available 23. Large \(\text{Data Set 1}\) lists the SAT scores and GPAs of \(1,000\) students. 1. Give a point estimate of the mean GPA of all students who score \(1350\) on the SAT. 2. Construct a \(90\%\) confidence interval for the mean GPA of all students who score \(1350\) on the SAT. 24. Large \(\text{Data Set 12}\) lists the golf scores on one round of golf for \(75\) golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). 1. Thurio averages \(72\) strokes per round with his own clubs. Give a point estimate for his score on one round if he switches to the new clubs. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for Thurio’s score on one round if he switches to the new clubs, at \(90\%\) confidence. 25. Large \(\text{Data Set 13}\) records the number of bidders and sales price of a particular type of antique grandfather clock at \(60\) auctions. 1. There are seven likely bidders at the Verona auction today. Give a point estimate for the price of such a clock at today’s auction. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for the likely sale price of such a clock at today’s sale, at \(95\%\) confidence. 1. \(5.647\) 2. \(5.647\pm 1.253\) 1. \(-0.188\) 2. \(-0.188\pm 3.041\) 1. \(1.875\) 2. \(1.875\pm 1.423\) 1. \(5.4\) 2. \(5.4\pm 3.355\) 3. invalid (extrapolation) 1. \(2.4\) 2. 2.4±1.4742.4±1.474\(2.4\pm 1.474\) 3. valid (\(-1\) is in the range of the \(x\)-values in the data set) 1. \(31.3\) words 2. \(31.3\pm 7.1\) words 3. not valid, since two years is \(24\) months, hence this is extrapolation 1. \(73.2\) beats/min 2. The man’s heart rate is not the predicted average for all men his age. 3. \(73.2\pm 1.2\) beats/min 1. \(\$224,562\) 2. \(\$224,562 \pm \$28,699\) 1. \(74\) 2. Prediction (one person, not an average for all who have average \(78.6\) before the final exam) 3. \(74\pm 24\) 1. \(0.066\%\) 2. \(0.066\pm 0.034\%\) 1. \(4,656\) psi 2. 4,656±321\(4,656\pm 321\) psi 3. \(4,656-321=4,335\) psi 1. \(2.19\) 2. \((2.1421,2.2316)\) 1. \(7771.39\) 2. A prediction interval. 3. \((7410.41,8132.38)\) 10.8 A Complete Example The exercises in this section are unrelated to those in previous sections. 1. The data give the amount \(x\) of silicofluoride in the water (mg/L) and the amount \(y\) of lead in the bloodstream (μg/dL) of ten children in various communities with and without municipal water. Perform a complete analysis of the data, in analogy with the discussion in this section (that is, make a scatter plot, do preliminary computations, find the least squares regression line, find \(SSE,\; s_\varepsilon\) and \(r\), and so on). In the hypothesis test use as the alternative hypothesis \(\beta _1>0\), and test at the \(5\%\) level of significance. Use confidence level \ (95\%\) for the confidence interval for \(\beta _1\). Construct \(95\%\) confidence and predictions intervals at \(x_p=2\) at the end. \[\begin{array}{c|c c c c c} x &0.0 &0.0 &1.1 &1.4 &1.6 \\ \ hline y &0.3 &0.1 &4.7 &3.2 &5.1\\ \end{array}\\ \begin{array}{c|c c c c c} x &1.7 &2.0 &2.0 &2.2 &2.2 \\ \hline y &7.0 &5.0 &6.1 &8.6 &9.5\\ \end{array}\] 2. The table gives the weight \(x\) (thousands of pounds) and available heat energy \(y\) (million BTU) of a standard cord of various species of wood typically used for heating. Perform a complete analysis of the data, in analogy with the discussion in this section (that is, make a scatter plot, do preliminary computations, find the least squares regression line, find \(SSE,\; s_\ varepsilon\) and \(r\), and so on). In the hypothesis test use as the alternative hypothesis \(\beta _1\), and test at the \(5\%\) level of significance. Use confidence level \(95\%\) for the confidence interval for \(\beta _1\). Construct \(95\%\) confidence and predictions intervals at \(x_p=5\) at the end. \[\begin{array}{c|c c c c c} x &3.37 &3.50 &4.29 &4.00 &4.64 \\ \hline y & 23.6 &17.5 &20.1 &21.6 &28.1\\ \end{array}\\ \begin{array}{c|c c c c c} x &4.99 &4.94 &5.48 &3.26 &4.16 \\ \hline y &25.3 &27.0 &30.7 &18.9 &20.7\\ \end{array}\] Large Data Set Exercises Large Data Sets not available 3. Large Data Sets 3 and 3A list the shoe sizes and heights of \(174\) customers entering a shoe store. The gender of the customer is not indicated in Large Data Set 3. However, men’s and women’s shoes are not measured on the same scale; for example, a size \(8\) shoe for men is not the same size as a size \(8\) shoe for women. Thus it would not be meaningful to apply regression analysis to Large Data Set 3. Nevertheless, compute the scatter diagrams, with shoe size as the independent variable (\(x\)) and height as the dependent variable (\(y\)), for (i) just the data on men, (ii) just the data on women, and (iii) the full mixed data set with both men and women. Does the third, invalid scatter diagram look markedly different from the other two? 4. Separate out from Large Data Set 3A just the data on men and do a complete analysis, with shoe size as the independent variable (\(x\)) and height as the dependent variable (\(y\)). Use \(\alpha =0.05\) and \(x_p=10\) whenever appropriate. 5. Separate out from Large Data Set 3A just the data on women and do a complete analysis, with shoe size as the independent variable (\(x\)) and height as the dependent variable (\(y\)). Use \(\ alpha =0.05\) and \(x_p=10\) whenever appropriate. 1. \[\sum x=14.2,\; \sum y=49.6,\; \sum xy=91.73,\; \sum x^2=26.3,\; \sum y^2=333.86\\ SS_{xx}=6.136,\; SS_{xy}=21.298,\; SS_{yy}=87.844\\ \bar{x}=1.42,\; \bar{y}=4.96\\ \widehat{\beta _1}=3.47,\; \ widehat{\beta _0}=0.03\\ SSE=13.92\\ s_\varepsilon =1.32\\ r = 0.9174, r^2 = 0.8416\\ df=8, T = 6.518\] The \(95\%\) confidence interval for \(\beta _1\) is: \((2.24,4.70)\) At \(x_p=2\) the \(95\%\) confidence interval for \(E(y)\) is \((5.77,8.17)\) At \(x_p=2\) the \(95\%\) confidence interval for \(y\) is \((3.73,10.21)\) 3. The positively correlated trend seems less profound than that in each of the previous plots. 5. The regression line: \(\hat{y}=3.3426x+138.7692\). Coefficient of Correlation: \(r = 0.9431\). Coefficient of Determination: \(r^2 = 0.8894\). \(SSE=283.2473\). \(s_e=1.9305\). A \(95\%\) confidence interval for \(\beta _1\): \((3.0733,3.6120)\). Test Statistic for \(H_0: \beta _1=0: T=24.7209\). At \(x_p=10\), \(\hat{y}=172.1956\); a \(95\%\) confidence interval for the mean value of \(y\) is: \((171.5577,172.8335)\); and a \(95\%\) prediction interval for an individual value of \(y\) is: \((168.2974,176.0938)\).
{"url":"https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Exercises_(Introductory_Statistics)/Exercises%3A_Shafer_and_Zhang/10.E%3A_Correlation_and_Regression_(Exercises)","timestamp":"2024-11-09T22:59:53Z","content_type":"text/html","content_length":"265559","record_id":"<urn:uuid:e8d12001-b234-4ac4-8919-2ad10d9ac824>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00540.warc.gz"}
Approximation algorithms for the priority facility location problem with submodular penalties Operations Research Transactions Approximation algorithms for the priority facility location problem with submodular penalties WANG Ying^1, WANG Fengmin^2, XU Dachuan^2,*, XU Wenqing^2,3 1. 1. Department of Science, Taiyuan Institute of Technology, Taiyuan 030008, China; 2. College of Applied Sciences, Beijing University of Technology, Beijing 100124, China; 3. Department of Mathematics and Statistics, California State University, Long Beach, CA 90840, USA
{"url":"https://www.jorsc.shu.edu.cn/EN/abstract/abstract203.shtml","timestamp":"2024-11-04T18:54:05Z","content_type":"text/html","content_length":"61543","record_id":"<urn:uuid:584bf2d4-6182-4ea5-9567-d421c339d2dc>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00370.warc.gz"}
When writing a document that contains some field-specific concepts it might be convenient to add a glossary. A glossary is a list of terms in a particular domain of knowledge with definitions for those terms. This article explains how to create one. Important advisory note: Your project’s main file should always be in the root directory (outside of any folders), to ensure that all of the compilation steps will be run in the correct directory and to ensure that the required auxiliary files are available, for instance, when creating a glossary or adding an index. Let's start with a simple example. description={Is a markup language specially suited for scientific documents} description={Mathematics is what mathematicians do} \title{How to create a glossary} \author{ } \date{ } The \Gls{latex} typesetting markup language is specially suitable for documents that include \gls{maths}. The following image shows the Glossary produced by the example above: To create a glossary the package glossaries has to be imported. This is accomplished by the line in the preamble. The command \makeglossaries must be written before the first glossary entry. Each glossary entry is created by the command \newglossaryentry which takes two parameters, then each entry can be referenced later in the document by the command \gls. See the subsection about terms for a more complete description. The command \printglossaries is the one that will actually render the list of words and definitions typed in each entry, with the title "Glossary". In this case it's shown at the end of the document, but \printglossaries can be used in any other location. Terms and Acronyms Usually there are two types of entries in a glossary: terms and their definitions, or acronyms and their meaning. This two types can be printed separately in your LaTeX document. description={Is a mark up language specially suited for scientific documents} description={Mathematics is what mathematicians do} description={A mathematical expression} \newacronym{gcd}{GCD}{Greatest Common Divisor} \newacronym{lcm}{LCM}{Least Common Multiple} The \Gls{latex} typesetting markup language is specially suitable for documents that include \gls{maths}. \Glspl{formula} are rendered properly an easily once one gets used to the commands. Given a set of numbers, there are elementary methods to compute its \acrlong{gcd}, which is abbreviated \acrshort{gcd}. This process is similar to that used for the \acrfull{lcm}. The following image shows part of the output produced by the example above: The following subsections explain how to create each of the list types. As seen in the introduction, terms are defined by means of the command \newglossaryentry description={Mathematics is what mathematicians do} description={Is a markup language specially suited for scientific documents} description={A mathematical expression} The \Gls{latex} typesetting markup language is specially suitable for documents that include \gls{maths}. \Glspl{formula} are rendered properly an easily once one gets used to the commands. The following image shows the Glossary created by the example above: Let's see in more detail the syntax of each parameter passed to the command \newglossaryentry. The first term defined in the example is "mathematics". • maths. This first parameter is the label of this term and is used to reference it within the document with gls • name=mathematics. Includes The word to be defined, in this case "mathematics". It's recommended to write it in lowercase letters and singular form. • description={Mathematics is what mathematicians do}. Inside the braces is the definition of the current term. After you have defined the terms, to use them while you are typing your LaTeX file use one of the commands describe below: \gls{ } To print the term, lowercase. For example, \gls{maths} prints mathematics when used. \Gls{ } The same as \gls but the first letter will be printed in uppercase. Example: \Gls{maths} prints Mathematics \glspl{ } The same as \gls but the term is put in its plural form. For instance, \glspl{formula} will write formulas in your final document. \Glspl{ } The same as \Gls but the term is put in its plural form. For example, \Glspl{formula} renders as Formulas. Finally, to print the glossary use the command An acronym is a word formed from the initial letters in a phrase. Below is an example of acronyms in LaTeX \newacronym{gcd}{GCD}{Greatest Common Divisor} \newacronym{lcm}{LCM}{Least Common Multiple} Given a set of numbers, there are elementary methods to compute its \acrlong{gcd}, which is abbreviated \acrshort{gcd}. This process is similar to that used for the \acrfull{lcm}. The following image shows part of the output produced by the example above: To use acronyms an additional parameter must be used when importing the glossaries package. The line to be added to the preamble is Once this line is added, the command \newacronym will declare a new acronym. For the sake of an example, below is a description of the command \newacronym{gcd}{GCD}{Greatest Common Divisor} • gcd is the label, used latter in the document to reference this acronym. • GCD the acronym itself. Usually acronyms are written in capital letters. • Greatest Common Divisor is the phrase this acronym is used for. After the acronyms have been included in the preamble, they can be used by means on the next commands: \acrlong{ } Displays the phrase which the acronyms stands for. Put the label of the acronym inside the braces. In the example, \acrlong{gcd} prints Greatest Common Divisor. \acrshort{ } Prints the acronym whose label is passed as parameter. For instance, \acrshort{gcd} renders as GCD. \acrfull{ } Prints both, the acronym and its definition. In the example the output of \acrfull{lcm} is Least Common Multiple (LCM). To print the list of acronyms use the command The acronyms list needs a temporary file generated by \printglossary to work, thereby you must add said command right before the line \printglossary[type=\acronymtype] and compile your document, once you've compiled your document for the first time you can remove the line \printglossary. Changing the title of the Glossary If you want to change the default title of the glossary for something else, this is straightforward, two parameters must be added when printing the glossary. Below is an example. description={Mathematics is what mathematicians do} description={Is a markup language specially suited for scientific documents} description={A mathematical expression} The \Gls{latex} typesetting markup language is specially suitable for documents that include \gls{maths}. \Glspl{formula} are rendered properly an easily once one gets used to the commands. \printglossary[title=Special Terms, toctitle=List of terms] The following image shows part of the output produced by the example above: Notice that the command \printglossary has two comma-separated parameters: • title=Special Terms is the title to be displayed on top of the glossary. • toctitle=List of terms this is the entry to be displayed in the table of contents. See the next section. Show the glossary in the table of contents For the glossary to show up in the table of contents put in the preamble of your document description={Mathematics is what mathematicians do} description={Is a markup language specially suited for scientific documents} description={A mathematical expression} \section{First Section} The \Gls{latex} typesetting markup language is specially suitable for documents that include \gls{maths}. \Glspl{formula} are rendered properly an easily once one gets used to the commands. \printglossary[title=Special Terms, toctitle=List of terms] The following image shows the content of the 2 pages produced by the example above. Note how the command \printglossary[title=Special Terms, toctitle=List of terms] produces different titles for the table of contents ("List of terms") and the corresponding heading used in the text ("Special Terms"): Compiling the glossary To compile a document that contains a glossary in Overleaf you don't have to do anything special, but if you add new terms to the glossary once you compiled it, make sure to click on Clear cached files first under logs option). If you are compiling the document, for instance one called glossaries.tex, using pdflatex on your local machine, you have to use these commands: pdflatex glossaries.tex makeglossaries glossaries pdflatex glossaries.tex Reference guide Styles available for glossaries The command \setglossarystyle{style} must be inserted before \printglossaries. Below a list of available styles: • list. Writes the defined term in boldface font • altlist. Inserts newline after the term and indents the description. • listgroup. Group the terms based on the first letter. • listhypergroup. Adds hyperlinks at the top of the index. Further reading For more information see: Overleaf guides LaTeX Basics Figures and tables References and Citations Document structure Field specific Class files Advanced TeX/LaTeX
{"url":"https://cs.overleaf.com/learn/latex/Glossaries","timestamp":"2024-11-06T01:00:43Z","content_type":"text/html","content_length":"81399","record_id":"<urn:uuid:6e1bc710-22c0-48f9-98be-0c0fe5a65826>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00204.warc.gz"}
🔥 Featured category Category “Fractal” (See also: all categories, featured categories, featured articles, all articles. Sort articles by name, created, edited) Fractals are mathematical sets that exhibit self-similarity at different scales. Why are they called 'Fractals'? Because they are not exactly two dimensional (or three dimensional or four dimensional or one dimensional) they have a 'fractional dimension' (like 2.5) Articles with Category Fractal (17…)
{"url":"https://wiki.secretgeek.net/Category/fractal","timestamp":"2024-11-14T12:23:09Z","content_type":"text/html","content_length":"35588","record_id":"<urn:uuid:4b63d866-fb0c-4fea-ace8-989179087649>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00238.warc.gz"}
Compare with parent nodes: Difference | Graphs | index | IndexBasicStat | Indexch | indexnl | Sitemap | Support | t-test | Variation F-test | Variation Levene test | Whatsnew | Wilcoxon–Mann–Whitney-test | Compare with Here you can put in the column to compare for a statistical test, Variation F-test When this is done all these test are computed and displayed in the result array. Data-sets; B, C and D are compared with data-set A, by putting A in the compare with Data file
{"url":"http://develve.net/Compare%20with.html","timestamp":"2024-11-14T17:23:00Z","content_type":"text/html","content_length":"8500","record_id":"<urn:uuid:f5b42952-7467-48e8-8e7d-29cdfc115ec9>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00799.warc.gz"}
(Re-)Exploring HDB Resale Flat Data in 17 Graphs A New Approach About a year ago, I published my first post on data&stuff. I applied econometric techniques to develop three least squares regression models to explain HDB resale flat prices. A year on, I’m re-visiting the expanded dataset (now includes an additional year of data) with new skills and knowledge. This time, I intend to apply proper data science techniques to accurately predict prices. In this first post, I perform exploratory data analysis (EDA) on the dataset. In subsequent posts, I will develop a more complex regression model to predict resale flat prices. # Import import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import pandas as pd import warnings # Settings %matplotlib inline # Read data hdb = pd.read_csv('resale-flat-prices-based-on-registration-date-from-jan-2015-onwards.csv') Target: Resale Prices As we can see, resale prices are right-skewed (mean is to the right of the median). The mean resale price transacted was a whopping $440,000. Singaporeans must be crazy rich to afford a resale flat in this era. Date and Month Purchased First, note that the month feature combines both the month and the year. Let’s split these up while preserving the original notation. # Rename month variable hdb = hdb.rename(columns={'month': 'year_mth'}) # Add variables for month and year hdb['year'] = pd.to_numeric(hdb.year_mth.str[:4]) hdb['month'] = pd.to_numeric(hdb.year_mth.str[5:]) From the graph below, we find that there are “hot” and “cold” periods for buying resale flats, with a surge in recent months. We note how lots of transactions take place on a regular basis: at least 1,000 per month. At the median price, that’s approximately $4.4 billion transacted per month. Relation with Target Plotting the median resale price from 2015 onwards, we find that the median price has remained stable over time. In addition, the variation in prices has remained relatively wide. Hence, as in my first post on HDB resale flat prices, we will assume that the relationship between the flat characteristics and resale flat prices is stable for all transactions in the dataset. In other words, we treat the transactions as having occurred within a single, stable time period. Relation with Target We find high variability in resale flat prices across the respective towns. This tells us that towns are an important factor in predicting resale flat prices. Flat Type Relation to Target Naturally, we would expect flats that are “high SES” to have a higher resale price: Storey Range Relation to Target Conventional wisdom would tell us that the higher the storey, the nicer the view. The nicer the view, the higher the resale price. The data appears to agree. Floor Area Relation to Target Conventional wisdom would also suggest a positive relationship between floor area and price. Yet again, the data appears to agree. Flat Model Relation to Target There appears to be high variability in resale prices across flat types. This suggests that flat types will be useful for prediction. Lease Commencement Date Although we expect a higher price for later lease commencement dates, the relationship is not all that clear. Perhaps remaining lease is a bigger factor. Relation to Target Remaining Lease Relation to Target We find a positive relationship between resale price and the remaining years in lease from 50 to 90 years. However, from 90 years onwards (referring to Build-to-Order (BTO) flats sold in the last 5 years), the relationship weakens substantially, and the variation increases substantially as well. This suggests that we could create a special category for transactions of flats with 90 or more years remaining in their leases to predict resale flat prices. Click here for the full Jupyter notebook. Credits for image: Public Service Division Credits for data: Data.gov.sg
{"url":"https://chrischow.github.io/dataandstuff/2018-09-02-re-exploring-hdb-resale-flat-data/","timestamp":"2024-11-02T09:07:37Z","content_type":"text/html","content_length":"25107","record_id":"<urn:uuid:f64a092d-1c9e-48fd-bcc6-38fcc5f60608>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00745.warc.gz"}
Can You Match the Fraction to the Decimal Amount? LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks Can you match 5/10 to its decimal amount? If you divide 5 by 10, you'll get .50, which is another way of saying "one half." Here's another way of looking at this — if you take 10 and divide it by 2, you'll get 5, which is exactly half of 10. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks Can you figure out what 3/4 is in decimal form? Dividing 3 by 4 will equal out to .75. In percent form, this would be 75% because you would move the decimal place over to the right by two places. 3/4 is also a common unit of measurement for baking LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks Take a guess what 1/3 is in decimal form! If you divide 1 by 3, you'll get .333. This is also known as a repeating decimal because the ".333" part goes on forever. There will always be an infinite amount of "3's" after the decimal point. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks What is the decimal equivalent of 1/5? 1/5 equals out to .20, which can also be phrased as 20% because you're moving the decimal place over to the right two times. Some people also write this as ".2" which is the same as ".20." LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks How about matching 1/10 to its decimal form? 1/10 is the same thing as .1 or .10. It doesn't matter how you write this either, because there will always be "invisible zeros" after the "1" part. You could even write this as .1000000. We've got a decimal of .125, but what is its fraction form? .125 is the equivalent of 1/8. In percent form, this would be 12.5% since the decimal place would be moved between the "2" and the "5" number. You could also write this as .12500000. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks Can you match 2/9 to its correct decimal form? 2/9 is the same thing as .222. The "2's" in this decimal form actually repeat forever, which is also known as a repeating decimal. For instance, you could write this as .2222222222222222. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks Do you know the decimal equivalent of 5/6? If you divide 5 by 6, you'll get .833. But did you know that the "3's" in this decimal form are actually repeating decimals? So you could write this as ".833333333," and it would mean the same thing. .80 is equivalent to which of these fractions? 4/5=.80. Here's one way to remember this: 5/5=1, which is the same as 100%. So if 4/5=.80, or 80%, then this difference would be 20%. So when dealing with fractions from 1/5 to 5/5, remember that everything is separated by 20%. If you're doing well so far, then you probably know what 63/64 is in decimal form! 63/64 can also be written as .984. Remember that 64/64=1. So that means that 63/64 is really close to 1, so the best option to choose from would be .984, which is also written as 98.4%. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks Try and guess what the decimal form of 9/16 is! .5625 is the same thing as 9/16. Look at it this way: 9/18 would equal out to .50, right? So that means that 9/16 would be somewhat close to that .50 number, so the best option to choose from would be .5625. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks How about matching 1/11 to its decimal form? 1/11 can also be written as .0909. Here's why: if 1/10 is written as .10 or 10%, then that means that 1/11 must also be close to that .10 number. So the best option would be .0909, or, 9.09%. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks What is the decimal equivalent of 3/18? The fraction 3/18 is written as .166, and since the "6" number repeats itself forever, this is also known as a repeating decimal. So, you could write this as .166666, and it would mean the same Want to take a guess at what the decimal form of -2/100 is? Don't let the negative sign scare you, because -2/100 just means -.02. When dividing any number by 100, you're just moving the decimal place over to the left two times. Another example would be 1/ 100, which is .01. How about the decimal equivalent of this fraction: 1000/1000? 1000/1000 is the same thing as 1. If you divide any number by itself, you'll always get the number 1. It could even be a really big number like 1,000,000/1,000,000 and you'll still get 1. 1/32 can also be written as ... Specifically, the answer is .03125. This one may seem easy to remember because the "31" part in the decimal sounds pretty close to the "32" part in the fraction, right? Just remember to add that 0 before "31." Can you match .0625 to its fraction form? The fraction 1/16 and .0625 mean the exact same thing in this context. In percent form, this is also written as 6.25% since you're moving the decimal place over to the right two times. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks One of these decimals matches to 4/9, but which one is it? 4/9 is also written as .444. All of those "4" numbers repeats itself forever, so you could also write this as .444444 and it would mean the same thing. This is called a recurring decimal. Here's one for you to try: 50/101. So here's why 50/101 equals out to .495 — you might already know that 50/100 is the same thing as .50, right? So this means that .495 is really close to .50, so it would be the best option in this How about calculating the decimal form of this fraction: 39/78? No tricks here, because 39/78 is the same thing as .50. Look at it this way: if you take 78 and divide it by 2, then you'll get 39 because 39 is exactly half of 78. You could also multiply 39 by 2 to get 78. Here's a tricky one for you to try: 1/.50 Fractions with decimals in them may seem scary at first but let's work this one out together. 1/.50 is another way of doubling the numerator (which is 1 in the fraction), so the answer would be 2. Another example would be 2/.50, which would be 4 because you're just doubling that 2 number. How about giving this one a try: 100/3 100/3 is the same thing as 33.3, and that "3" number after the decimal repeats itself forever. Here's another way of looking at this: how many times does 3 go into 100? 33.3 times because if you multiply 3 by 33.3, you get 100. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks You're doing great so far, so give this one a try: 12/5 12/5 is equivalent to 2.4. Here's one way of looking at this: 5 goes into 10 two times, right? So since 12 is pretty close to 10, that means that 2.4 must be the best option for this question. 8.33 may be a repeating decimal, but what is its fraction equivalent? Here's why the answer to this question is 25/3 — we know that 24/3 would be 8, right? So since 25 is just a little bit higher than 24, that means that 8.33 would be the best option. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks What is the decimal equivalent of 12/11? 12/11 can also be written as 1.09. Here's one trick to remember: if the top number in the fraction is bigger than the bottom number in the same fraction, then the decimal will always start at at least "1" followed by a decimal point. 400/1000 may look scary, but you can figure this out, right? The fraction 400/1000 equals out to .40. Just remember this little trick when dividing a number by 1000: move the decimal place over three times to the left, because 1000 has three zeros in it. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks How about giving this fraction a try: 9/10 .90 is the answer to 9/10. When dividing a number by 10, just move the decimal place over to the left one time. This is because the number 10 has one zero in it. Another example would be 7/10, which is .70. This one might be a little tricky, but we believe in you: .50/.25 Don't worry, because this one is not as tricky as it looks. Look at this fraction like this: (1/2)/(1/4). So, you're basically multiplying 1/2 by 4, which equals out to 2.0 (because half of 4 is 2). Don't let this fraction fool you: -3/-50 When multiplying or dividing two negative numbers together, remember that the answer is always positive. So with this question, the answer would be a positive .06 from the fraction -3/-50. What's the fraction equivalent of this decimal: .25? That's right, all of the fractions in this question mean the same thing: .25. This decimal is also written as 1/4, and 2/8 and 4/16 can all be reduced down to 1/4. Another example would be 8/32, which is also .25. LueratSatichob / DigitalVision Vectors / Getty Images, HowStuffWorks 6/5 can also be written as which of these decimals? Here's a trick that we have for you: if 5/5 is equivalent to 1, then that means that 6/5 is close to that "1" number. So, the next closest number would be 1.2. 7/5 would be equivalent to 1.4, 8/5= 1.6, and so forth. Which of these is not the fraction equivalent of .75? Another way of writing .75 is by using the fraction 3/4. So, all of the fractions above equals out to 3/4, except for 12/15. But it comes close, as this fraction is the equivalent of .80. Match this fraction to its decimal equivalent: 2.5/3 2.5/3 is the same thing as .833. This is also known as a repeating decimal since all of those "3" numbers after the 8 repeat forever. You could also write this decimal as .833333333. Try and guess what the decimal equivalent of this fraction is: 8/80 8/80 is the equivalent of the decimal .10. Here's one way of looking at this — the fraction 80/8 is 10, right? So if we just flip the top and bottom numbers of this fraction around, then we get .10. How about calculating the decimal equivalent of this fraction: 15/200 15/200 means the same thing as .075. In percent form, this would also be 7.5% since you're moving the decimal place over to the right two times. And look at it this way: if 15/20 is .75, then with 15 /200, you would just add an extra zero in front of the 7 number. What is the decimal equivalent of 65/2? 32.5 is the equivalent of 65/2. Another way of looking at this is by understanding that half of 65 is 32.5. If you take half of an odd number, you'll always get a decimal number as the answer. How about trying to solve this problem: 100/99? The fraction 100/99 equals out to 1.01. This is because 100/100 is equivalent to 1, so 100/99 is very close to this "1" number. Thus, 1.01 would be the best option for this question. Take a guess at what the decimal form of 5/75 is! .066 is the decimal equivalent of 5/75. Specifically, the answer is .06666667. Its percent form would also be 6.6% since you would be moving the decimal place over to the right two times. What is the decimal equivalent of 9.9/2.2? Here's one way of looking at this problem. If you remove the decimals and the second digits from this fraction, you get 9/2. And did you know that 9/2 also equals out to 4.5, which is the same decimal equivalent as 9.9/2.2! Let's figure out what the decimal form of 1/1.1 is! Here's why the answer is .909 — if you take 1/1, the answer would be 1, right? So for the fraction 1/1.1, the solution needs to be as close to the number "1" as possible. So for this question, that would be .909.
{"url":"https://lahore.zoo.com/quiz/can-you-match-the-fraction-to-the-decimal-amount","timestamp":"2024-11-12T03:51:54Z","content_type":"text/html","content_length":"418157","record_id":"<urn:uuid:902e98e3-69b7-4833-ba1c-8a202f02c675>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00155.warc.gz"}
Solving Equations With Rational Numbers Worksheet 2024 - NumbersWorksheets.com Solving Equations With Rational Numbers Worksheet Solving Equations With Rational Numbers Worksheet – A Rational Numbers Worksheet will help your youngster be more informed about the concepts behind this ratio of integers. In this particular worksheet, students are able to resolve 12 diverse troubles associated with realistic expression. They will likely figure out how to grow 2 or more numbers, group them in couples, and find out their products and services. They may also process simplifying reasonable expression. When they have enhanced these methods, this worksheet will certainly be a useful device for continuing their scientific studies. Solving Equations With Rational Numbers Worksheet. Realistic Amounts can be a rate of integers There are two kinds of phone numbers: rational and irrational. Realistic numbers are understood to be whole numbers, whilst irrational numbers usually do not perform repeatedly, and get an limitless variety of digits. Irrational phone numbers are low-zero, no-terminating decimals, and rectangular origins that are not excellent squares. These types of numbers are not used often in everyday life, but they are often used in math applications. To define a reasonable amount, you need to realize what a reasonable quantity is. An integer is actually a total amount, plus a realistic number is a percentage of two integers. The rate of two integers may be the number at the top split through the number on the bottom. If two integers are two and five, this would be an integer, for example. There are also many floating point numbers, such as pi, which cannot be expressed as a fraction. They may be produced in to a small percentage A logical number has a numerator and denominator which are not no. Consequently they can be depicted being a small percentage. In addition to their integer numerators and denominators, realistic numbers can also have a unfavorable worth. The negative worth must be placed left of and its particular definite importance is its extended distance from absolutely no. To simplify this instance, we will claim that .0333333 is a fraction that can be published like a 1/3. Along with bad integers, a realistic number can also be created in a portion. By way of example, /18,572 is a logical quantity, while -1/ will not be. Any small percentage made up of integers is rational, so long as the denominator does not include a and may be published being an integer. Similarly, a decimal that ends in a level is another realistic quantity. They can make sense Regardless of their name, logical amounts don’t make very much feeling. In mathematics, they can be one organizations with a unique span around the number range. Which means that if we matter some thing, we can easily purchase the shape by its proportion to its initial number. This holds correct even when there are endless rational phone numbers between two certain phone numbers. If they are ordered, in other words, numbers should make sense only. So, if you’re counting the length of an ant’s tail, a square root of pi is an integer. In real life, if we want to know the length of a string of pearls, we can use a rational number. To discover the length of a pearl, as an example, we could count up its width. Just one pearl weighs about ten kilos, which is a rational variety. Additionally, a pound’s excess weight is equal to ten kgs. Therefore, we should certainly break down a lb by 15, without be concerned about the duration of just one pearl. They can be expressed as being a decimal If you’ve ever tried to convert a number to its decimal form, you’ve most likely seen a problem that involves a repeated fraction. A decimal number could be composed as a a number of of two integers, so 4 times 5 is equal to 8. A similar issue necessitates the repeated small percentage 2/1, and either side needs to be split by 99 to find the right answer. But how do you make your conversion? Here are some examples. A reasonable amount can be developed in various forms, such as fractions as well as a decimal. One way to stand for a reasonable number within a decimal would be to break down it into its fractional counterpart. You can find 3 ways to split a logical number, and all these techniques yields its decimal equal. One of these brilliant techniques is usually to divide it into its fractional equivalent, and that’s what’s known as the terminating decimal. Gallery of Solving Equations With Rational Numbers Worksheet Solving Equations With Rational Numbers Worksheet Pdf Worksheet Solving Equations With Rational Numbers Worksheet 8 Rational Numbers 7th Grade Math Worksheets Worksheeto Leave a Comment
{"url":"https://numbersworksheet.com/solving-equations-with-rational-numbers-worksheet/","timestamp":"2024-11-07T23:51:43Z","content_type":"text/html","content_length":"56369","record_id":"<urn:uuid:fe9aac93-574d-4d7c-937c-e999b6496371>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00763.warc.gz"}
Trailing Bits 1. We can see that a binary stream containing flag is given to us . 2. But the trailing bits and the starting bits were missing. As we know that 1 byte = 8 bits therefore we can check after how many initial bits proper bytes are there( which came out to be after initial bits . 3. To check for proper final bits i took the length of whole binary stream and sliced its first bits . After that , i took out the length of the stream and took its modulus with 8 ( which came out to be 6) so our proper byte stream would be original_stream[2:-6]. After decoding it we get the flag in it.
{"url":"https://ctftime.org/writeup/22978","timestamp":"2024-11-03T04:26:00Z","content_type":"text/html","content_length":"15136","record_id":"<urn:uuid:412e164c-85ad-4423-b214-a0a733997638>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00202.warc.gz"}
American Mathematical Society Stability of the travelling wave solution of the FitzHugh-Nagumo system HTML articles powered by AMS MathViewer Trans. Amer. Math. Soc. 286 (1984), 431-469 DOI: https://doi.org/10.1090/S0002-9947-1984-0760971-6 PDF | Request permission Travelling wave solutions for the FitzHugh-Nagumo equations have been proved to exist, by various authors, close to a certain singular limit of the equations. In this paper it is proved that these waves are stable relative to the full system of partial differential equations; that is, initial values near (in the sup norm) to the travelling wave lead to solutions that decay to some translate of the wave in time. The technique used is the linearised stability criterion; the framework for its use in this context has been given by Evans [6-9]. The search for the spectrum leads to systems of linear ordinary differential equations. The proof uses dynamical systems arguments to analyse these close to the singular limit. References M. Bramson, Kolmogorov nonlinear diffusion equations, Mem. Amer. Math. Soc. (to appear). S. Dunbar, Travelling waves of diffusive Volterra-Lother interaction equations, Ph.D. Thesis, Univ. of Minnesota, 1981. J. Feroe, Temporal stability of solitary impulse solutions of a nerve equation, Biophys. J. 21 (1978), 103-110. R. FitzHugh, Impulses and physiological states in theoretical models of nerve membranes, Biophys. J. 1 (1961), 445-466. R. Langer, Existence of homoclinic travelling wave solutions to the FitzHugh-Nagumo equations. Ph.D. Thesis, Northeastern Univ., 1980. J. Nagumo, S. Arimoto and S. Yoshizawa, An active pulse transmission line simulating nerve axons, Proc IRL 50 (1960), 2061-2070. D. Terman, Threshold phenomena in nonlinear diffusion equations, Ph.D. Thesis, Univ. of Minnesota, 1980. Similar Articles • Retrieve articles in Transactions of the American Mathematical Society with MSC: 35B35, 35K55, 92A09 • Retrieve articles in all journals with MSC: 35B35, 35K55, 92A09 Bibliographic Information • © Copyright 1984 American Mathematical Society • Journal: Trans. Amer. Math. Soc. 286 (1984), 431-469 • MSC: Primary 35B35; Secondary 35K55, 92A09 • DOI: https://doi.org/10.1090/S0002-9947-1984-0760971-6 • MathSciNet review: 760971
{"url":"https://www.ams.org/journals/tran/1984-286-02/S0002-9947-1984-0760971-6/","timestamp":"2024-11-12T01:50:47Z","content_type":"text/html","content_length":"69628","record_id":"<urn:uuid:782c153a-8b6b-408a-930b-51f70f946f49>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00404.warc.gz"}
Mathematical Functions - UvoCorpEssays Mathematical Functions Click here to get an A+ paper at a Discount Accuracy and precision in the calculation of doses, dosages, and rates of infusion of intravenous solutions are often based on percents, ratios, and proportions. The exercises for this CheckPoint provide opportunities to perform various mathematical functions pharmacy technicians must master Resources: Ch. 2 & 3 of Pharmaceutical Calculations for Pharmacy Technicians: A Worktext and Equation Editor Read the following scenario: Assume the role of a pharmacy technician. A pharmacist gives you a physician’s order sheet, a prescription, and asks you to prepare a 2% solution of sodium chloride You check the stock in the pharmacy but discover you have only a 3% solution of NaCl. Hint: 2g NaCl:100mL of solution::3g NaCL:x mL of 3% solution. Show all your calculations in a Microsoft® Word document, using Equation Editor. Complete the following exercises. Refer to p. 22 for worked examples. 1. Solve the equation for x to determine how many mL of 3% solution you need. 2. Convert 3% to a decimal. 3. Convert 2% to a fraction. 4. Percents are often used to show the strength of solutions. Which solution is stronger, the 2% or the 3%? 5. What does 3% of sodium chloride mean? 6. Referring to the proportion regarding NaCl in the scenario, show the product of the means equals the product of the extremes. 7. Convert 12% to a fraction. 8. What percent of 15 ounces is 5 ounces? 9. Convert 1/5 to a percent. 10. Convert 33% to a decimal. Complete the following exercises: 1. Critical Thinking, p. 25: Problem 32. Explain whether or not 20:25 = 4:5 is a true portion. Refer to Example 2 on p. 22.3. Stop and Review, p. 22: Problems 1a, 1c, 1f, 1h, and 1l Post your work and answers to all problems as an attachment. Click here to get an A+ paper at a Discount Leave a Comment
{"url":"http://uvocorpessays.org/%E2%80%8Emathematical-functions/","timestamp":"2024-11-02T02:12:41Z","content_type":"text/html","content_length":"95933","record_id":"<urn:uuid:71d739b2-e709-40d4-8fce-3902b571ff21>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00878.warc.gz"}
Vibrations account for interference Energy and Thermal Physics Vibrations account for interference Physics Narrative for 14-16 Coherence is difficult to arrange Now let's work out how the trip times and frequencies combine to predict the different fixed patterns of displacements. These depend on the geometry of the situation. In each case here the vibrations at the sources are in step. This is a rather special situation, and it's often not easy to arrange. Remember that light, just as an example, has a frequency in the terahertz range. So keeping vibrations in step at this kind of frequency is a pretty stringent condition – not easy to realise in practice. Where there is a single source that provides both beams (for both cases where there are reflections, or for the pair of slits), it's a bit easier to see how this might be arranged. The essential idea is rather simple. Each path is of a fixed distance, from the source to the detector. This introduces a fixed delay between the displacement at the source and the displacement at the detector. This delay is just the trip time, calculated, as ever, from the distance and speed of propagation of the waves. For two sources and one detector there will be a pair of trip times. The difference between the the two trip times determines whether the two contributions, one from each source, are in step or out of step. How interference comes about Interference occurs when coherent beams both arrive at a point. The vibrations in the beams may be in or out of step. If the vibrations of the two are in step then you'll see constructive superposition. The contribution from one beam adds to the contribution from the other beam to give a large resultant amplitude. If both beams have the same amplitude – so they are the same intensity or brightness – then the resultant amplitude will be twice the amplitude of either beam because the amplitudes simply add. If the vibrations of the two are completely out of step, then the two contributing amplitudes will add to give a resultant amplitude of zero. This will lead to a point where there is no illumination. As you scan across the possibilities, varying the trip times of the two beams systematically, so the two contributions move from being in step, through being partially in step, to being completely out of step. The two contributions combine in exactly the same way: they add. But the resultants are different. These resultants predict different brightnesses, or different loudnesses, or, more generally, just different intensities. Here there are two distinct, and apparently physical, beams, both of which are modelled by paths. Yet, if you remember the work on paths from episode 01, paths can also be used to explain why certain rays are drawn. As you delve more and more deeply into the nature of radiating, you'll get more and more entangled with paths.
{"url":"https://spark.iop.org/vibrations-account-interference","timestamp":"2024-11-04T10:49:26Z","content_type":"text/html","content_length":"38907","record_id":"<urn:uuid:8da3ecac-85cf-4ba4-94d9-47289dbe5571>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00302.warc.gz"}
Re: GDL prob with text anchor pt rot and maintain text orien 2009-08-14 08:53 PM 2009-08-14 08:53 PM I am adding some graphical hotspots to some 2D symbols so I can move the text when necessary. I also have 3 options for text orientation horizontal, vertical, and 'smart' (rotates w/sym but mirrors so not seem upside down). When I have the text orientation options the anchor point of the text no longer is the same as the graphical hotspots. I tried many ADD2 and ROT2 options for the text location textx and texty with no acceptable results.Can someone point me in the right direction? ! graphical hotspots seripting !!!!!Text ---------------------------------------- PEN txtpen IF tr=1 THEN !HORIZONTAL TEXT ROT2 -symb_rotangle IF tr=2 THEN ! VERTICLE TEXT ROT2 -symb_rotangle+90 IF tr=3 THEN !SMART ORIENTATION mul2 1-2*symb_mirrored,1 IF symb_rotangle>90 AND symb_rotangle<270 THEN ROT2 180 ROT2 0 DEFINE STYLE "symtextstyle" font, 1/(2.83464567)*fontsz, txtanchor, txtFaceCode SET STYLE "symtextstyle" TEXT2 textx,texty , switch_text DEL 1 2009-08-19 02:35 AM 2009-08-19 02:35 AM 2009-08-19 04:05 AM 2009-08-19 04:05 AM 2009-08-19 09:21 AM 2009-08-19 09:21 AM 2009-08-19 01:51 PM 2009-08-19 01:51 PM 2009-08-19 04:23 PM 2009-08-19 04:23 PM
{"url":"https://community.graphisoft.com/t5/Libraries-objects/GDL-prob-with-text-anchor-pt-rot-and-maintain-text-orient/m-p/172004","timestamp":"2024-11-11T17:49:33Z","content_type":"text/html","content_length":"428707","record_id":"<urn:uuid:30a19d9a-f340-4716-92e7-167cd9f0386c>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00486.warc.gz"}
The Castellated Rhombicosidodecahedral Prism The castellated rhombicosidodecahedral prism is a CRF polychoron bounded by 96 cells (2 rhombicosidodecahedra, 24 pentagonal pyramids, 40 tetrahedra, 30 bilunabirotundae), 424 polygons (280 triangles, 60 squares, 84 pentagons), 492 edges, and 164 vertices. This is the first known non-trivial CRF polychoron that contains the Johnson solid bilunabirotunda (J91) as cells. Prior to its discovery, polytopes containing unusual Johnson solids near the end of Norman Johnson's list, such as J91, were thought to be unlikely to close up in a CRF way. This idea was overturned on February 4, 2014, when the castellated rhombicosidodecahedral prism was constructed and verified to be CRF. This landmark discovery led to the subsequent discovery of numerous other CRF polychora that contain bilunabirotundae and triangular hebesphenorotundae as cells. We shall explore the structure of the castellated rhombicosidodecahedral prism using its parallel projections into 3D. Centered on Rhombicosidodecahedron The Near Side The above image shows the nearest rhombicosidodecahedral cell to the 4D viewpoint. For clarity, all other cells are rendered in a light transparent color. This cell forms the top of the castellated prism. Its triangular faces are joined to 20 tetrahedral cells, shown below: These tetrahedra appear rather flat, because they actually lie at a sharp angle to the rhombicosidodecahedron, and are foreshortened by the parallel projection. In 4D, however, they are perfectly regular tetrahedra. The pentagonal faces of the rhombicosidodecahedron are joined to 12 pentagonal pyramids: These pentagonal pyramids lie at a shallower angle from the nearest cell, and protrude quite a bit outwards. Notice that their triangular faces appear to be coplanar with the square faces of the rhombicosidodecahedron and the outer triangular faces of the tetrahedra. The projection envelope here is, therefore, a rhombic triacontahedron. In 4D, of course, they are not coplanar; but they appear as such here because this is where these cells connect with the 30 bilunabirotundae, which lie on the limb of the projection, at 90° to the current 4D viewpoint. The Equator The next image shows these 30 bilunabirotunda cells: The bilunabirotundae have been foreshortened into rhombuses, because they lie at a 90° angle to the current 4D viewpoint. They are connected to each other via their pentagonal faces, which have been foreshortened here to the edges of the rhombuses. Past this point, we reach the far side of the castellated rhombicosidodecahedral prism, where there are another 20 tetrahedra, 12 pentagonal pyramids, and the antipodal rhombicosidodecahedron, in an arrangement that mirrors what we saw above. The following table summarizes the cell counts for the castellated rhombicosidodecahedral prism, as seen from the current 4D viewpoint: Near side 1 12 20 0 Equator 0 0 0 30 Far side 1 12 20 0 Total 2 24 40 30 Centered on a Bilunabirotunda The previous projections give us a good overview of the global structure of the castellated rhombicosidodecahedral prism. However, that 4D viewpoint failed to show us the details of the most interesting part of this polychoron: the bilunabirotunda cells, since they lay parallel to the line-of-sight. So now, we look at the castellated prism again from a different viewpoint, this time a side-view centered on one of the bilunabirotundae. The Near Side The above image shows the nearest bilunabirotunda to the 4D viewpoint. For the sake of clarity, we have rendered the other cells in a light transparent color. The pentagonal faces of this cell are joined to 4 other bilunabirotunda, two of which are shown below: And the other 2 are shown next: Between the square faces of these cells are 4 tetrahedra, shown below in green: For the sake of clarity, we show the 4 bilunabirotunda around the nearest cell only in outline. These tetrahedra fill up the gaps between the 5 bilunabirotundae. Here are all the cells so far shown in full: The left and right edges of the nearest bilunabirotunda are connected to yet another two pairs of bilunabirotundae: Again, we show the initial 4 bilunabirotundae only in outline, so that the new cells are more clearly seen. Notice that they look somewhat deformed from the usual bilunabirotunda. This is an artifact of the projection; they lie close to the limb of the polytope and are significantly slanting into the fourth direction. In 4D, all the bilunabirotundae are identical. The pentagonal depressions at the top and bottom of the nearest cell's left and right edges are where 4 pentagonal pyramids are positioned: Here are all the cells we've seen so far: The gaps between the square faces of the 4 new bilunabirotundae and the the previous four are, of course, filled with more tetrahedra: 12 more of them, 6 on top, 6 on the bottom. The slight indentations visible in the front of the projection, as well as equivalent indentations in the back (not obvious from this 3D viewpoint), are where 4 more bilunabirotundae are fitted: Two more pairs of pentagonal pyramids fit into the gaps at the front and the back of the projection: These pentagonal pyramids appear distorted, because they lie close to the limb of the projection and are significantly slanting into the fourth direction. In reality, they are not distorted; this is merely a foreshortening caused by the projection into 3D. These are all the cells that lie on the near side of the castellated rhombicosidodecahedral prism from this 4D viewpoint. We now come to the limb of the projection where the cells lie at a 90° angle from the 4D viewpoint. The Limb There are 4 bilunabirotundae that lie on the limb of the projection; these are shown below: They appear flattened into non-uniform hexagons and octagons, because they lie at a 90° angle from the 4D viewpoint. In reality, they are identical to the other bilunabirotunda cells. The bottom edges of their projection images, which are actually squares, are joined to the bottom rhombicosidodecahedron cell of the polytope: As with the bilunabirotundae, this rhombicosidodecahedron lies at a 90° angle to the 4D viewpoint, so it appears flattened into dodecagon. It is, of course, perfectly uniform in 4D. Four pentagonal pyramids, shown in magenta below, connect this rhombicosidodecahedron to the two bilunabirotundae on the left and right: The front and back bilunabirotundae are connected by 4 tetrahedra: These pentagonal pyramids and tetrahedra, of course, also occur on the top sides of the bilunabirotundae: Finally, the other rhombicosidodecahedron appears on the top of the projection: These are all the cells that lie on the limb of the projection. Past this point, we get to the far side of the polytope, where the arrangement of cells exactly mirrors that of the near side. The four remaining pentagonal gaps are not filled by any cells; they are where 4 of the bilunabirotundae on the near side touch their counterparts on the far side. This 4D viewpoint best shows the prism-like structure of this CRF polychoron. It resembles the rhombicosidodecahedral prism, except that it has protrusions from its side cells that form the band of pentagonal pyramids, tetrahedra, and bilunabirotundae. These protruding pentagonal pyramids and tetrahedra around the rim of the prism-like shape are like battlements of a castle; hence the name castellated rhombicosidodecahedral prism. The following table summarizes the cell counts for the castellated rhombicosidodecahedral prism from this 4D viewpoint: Near side 0 8 4 + 12 = 16 1 + 4 + 4 + 4 = 13 Limb 2 8 8 4 Far side 0 8 16 13 Total 2 24 40 30 The coordinates of the castellated rhombicosidodecahedral prism, centered on the origin with edge length 2, are all changes of sign of: • (0, φ, φ^3, 0) • (φ, φ^3, 0, 0) • (φ^2, φ^2, φ^2, 0) • (φ^3, 0, φ, 0) • (1, 1, φ^3, φ) • (0, φ^3, φ^2, 1) • (1, φ^3, 1, φ) • (φ^2, 0, φ^3, 1) • (φ^3, 1, 1, φ) • (φ^3, φ^2, 0, 1) • (φ, 2φ, φ^2, φ) • (0, φ^2, 2+φ, φ) • (φ^2, φ, 2φ, φ) • (φ^2, 2+φ, 0, φ) • (2φ, φ^2, φ, φ) • (2+φ, 0, φ^2, φ) where φ=(1+√5)/2 is the Golden Ratio.
{"url":"http://www.qfbox.info/4d/H101_castprism","timestamp":"2024-11-04T11:26:20Z","content_type":"text/html","content_length":"19955","record_id":"<urn:uuid:26881ace-d53a-4125-a7cb-0f0ffce5ac1d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00051.warc.gz"}
157 research outputs found Constrictions in fractional quantum Hall (FQH) systems not only facilitate backscattering between counter-propagating edge modes, but also may reduce the constriction filling fraction $u_c$ with respect to the bulk filling fraction $u_b$. If both $u_b$ and $u_c$ correspond to incompressible FQH states, at least part of the constriction region is surrounded by composite edges, whose low energy dynamics is characterized by a charge mode and one or several neutral modes. In the incoherent regime, decay of neutral modes describes the equilibration of composite FQH edges, while in the limit of coherent transport, the presence of neutral modes gives rise to universal conductance fluctuations. In addition, neutral modes renormalize the strength of scattering across the constriction, and thus can determine the relative strength of forward and backwards scattering.Comment: corrected description of the results of Ref. [10], Ref. [17] adde We study the effect of inhomogeneities in Hall conductivity on the nature of the Zero Resistance States seen in the microwave irradiated two-dimensional electron systems in weak perpendicular magnetic fields, and we show that time-dependent domain patterns may emerge in some situations. For an annular Corbino geometry, with an equilibrium charge density that varies linearly with radius, we find a time-periodic non-equilibrium solution, which might be detected by a charge sensor, such as an SET. For a model on a torus, in addition to static domain patterns seen at high and low values of the equilibrium charge inhomogeneity, we find that, in the intermediate regime, a variety of nonstationary states can also exist. We catalog the possibilities we have seen in our simulations. Within a particular phenomenological model, we show that linearizing the nonlinear charge continuity equation about a particularly simple domain wall configuration and analyzing the eigenmodes allows us to estimate the periods of the solutions to the full nonlinear equation.Comment: Submitted to PR We examine the relation between different electronic transport phenomena in a Fabry-Perot interferometer in the fractional quantum Hall regime. In particular, we study the way these phenomena reflect the statistics of quantum Hall quasi-particles. For two series of states we examine, one abelian and one non-abelian, we show that the information that may be obtained from measurements of the lowest order interference pattern in an open Fabry-Perot interferometer is identical to the one that may be obtained from the temperature dependence of Coulomb blockade peaks in a closed interferometer. We argue that despite the similarity between the experimental signatures of the two series of states, interference and Coulomb blockade measurements are likely to be able to distinguish between abelian and non-abelian states, due to the sensitivity of the abelian states to local perturbations, to which the non-abelian states are insensitive.Comment: 10 pages. Published versio Inspired by creation of a fast exchange-only qubit (Medford et al., Phys. Rev. Lett., 111, 050501 (2013)), we develop a theory describing the nonlinear dynamics of two such qubits that are capacitively coupled, when one of them is driven resonantly at a frequency equal to its level splitting. We include conditions of strong driving, where the Rabi frequency is a significant fraction of the level splitting, and we consider situations where the splitting for the second qubit may be the same or different than the first. We demonstrate that coupling between qubits can be detected by reading the response of the second qubit, even when the coupling between them is only of about $1\%$ of their level splittings, and calculate entanglement between qubits. Patterns of nonlinear dynamics of coupled qubits and their entanglement are strongly dependent on the geometry of the system, and the specific mechanism of inter-qubit coupling deeply influences dynamics of both qubits. In particular, we describe the development of irregular dynamics in a two-qubit system, explore approaches for inhibiting it, and demonstrate existence of an optimal range of coupling strength maintaining stability during the operational time.Comment: 11 pages, 6 figures; One additional figure with changes to the text about the results. Additional references include We consider quantum Hall states at even-denominator filling fractions, especially $u=5/2$, in the limit of small Zeeman energy. Assuming that a paired quantum Hall state forms, we study spin ordering and its interplay with pairing. We give numerical evidence that at $u = 5/2$ an incompressible ground state will exhibit spontaneous ferromagnetism. The Ginzburg-Landau theory for the spin degrees of freedom of paired Hall states is a perturbed CP$^2$ model. We compute the coefficients in the Ginzburg-Landau theory by a BCS-Stoner mean field theory for coexisting order parameters, and show that even if repulsion is smaller than that required for a Stoner instability, ferromagnetic fluctuations can induce a partially or fully polarized superconducting state We analyze the linear response of a half filled Landau level to long wavelength and low frequency driving forces, using Fermi liquid theory for composite fermions. This response is determined by the composite fermions quasi--particle effective mass, $m^*$, and quasi--particle Landau interaction function $f(\theta-\theta')$. Analyzing infra--red divergences of perturbation theory, we get an exact expression for $m^*$, and conjecture the form of the $f(\theta-\theta')$. We then conclude that in the limit of infinite cyclotron frequency, and small ${\bf q},\omega$, the composite fermion excitation spectrum is continuous for $0<\omega<\gamma \frac{e^2}{\epsilon h}q$, with $\gamma$ an unknown number. For fractional quantum Hall states near a half filled Landau level, we derive an exact expression for the energy gap.Comment: 4 pages, RevTeX. This paper, being short and non-technical, could serve as a useful starting point for reading our manuscript cond-mat/9502032. The present paper does, however, include results not published in the forme
{"url":"https://core.ac.uk/search/?q=author%3A(Halperin%2C%20Bertrand%20I.)","timestamp":"2024-11-08T08:39:41Z","content_type":"text/html","content_length":"122932","record_id":"<urn:uuid:737f5f68-0f3b-4c29-949e-63633032b6d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00719.warc.gz"}
to cubic meter Category: main menu • beach sand menu • Cups Metric beach sand conversion Amount: 1 cup Metric (cup) of volume Equals: 0.00025 cubic meters (m3) in volume Converting cup Metric to cubic meters value in the beach sand units scale. TOGGLE : from cubic meters into cups Metric in the other way around. CONVERT : between other beach sand measuring units - complete list. Conversion calculator for webmasters. Beach sand weight vs. volume units Beach sand has quite high density, it's heavy and it easily leaks into even tiny gaps or other opened spaces. No wonder it absorbs and conducts heat energy from the sun so well. However, this sand does not have the heat conductivity as high as glass does, or fireclay and firebricks, or dense concrete. A fine beach sand in dry form was used for taking these measurements. Convert beach sand measuring units between cup Metric (cup) and cubic meters (m3) but in the other reverse direction from cubic meters into cups Metric. conversion result for beach sand: From Symbol Equals Result To Symbol 1 cup Metric cup = 0.00025 cubic meters m3 Converter type: beach sand measurements This online beach sand from cup into m3 converter is a handy tool not just for certified or experienced professionals. First unit: cup Metric (cup) is used for measuring volume. Second: cubic meter (m3) is unit of volume. beach sand per 0.00025 m3 is equivalent to 1 what? The cubic meters amount 0.00025 m3 converts into 1 cup, one cup Metric. It is the EQUAL beach sand volume value of 1 cup Metric but in the cubic meters volume unit alternative. How to convert 2 cups Metric (cup) of beach sand into cubic meters (m3)? Is there a calculation formula? First divide the two units variables. Then multiply the result by 2 - for example: 0.00025 * 2 (or divide it by / 0.5) 1 cup of beach sand = ? m3 1 cup = 0.00025 m3 of beach sand Other applications for beach sand units calculator ... With the above mentioned two-units calculating service it provides, this beach sand converter proved to be useful also as an online tool for: 1. practicing cups Metric and cubic meters of beach sand ( cup vs. m3 ) measuring values exchange. 2. beach sand amounts conversion factors - between numerous unit pairs variations. 3. working with mass density - how heavy is a volume of beach sand - values and properties. International unit symbols for these two beach sand measurements are: Abbreviation or prefix ( abbr. short brevis ), unit symbol, for cup Metric is: Abbreviation or prefix ( abbr. ) brevis - short unit symbol for cubic meter is: One cup Metric of beach sand converted to cubic meter equals to 0.00025 m3 How many cubic meters of beach sand are in 1 cup Metric? The answer is: The change of 1 cup ( cup Metric ) volume unit of beach sand measure equals = to volume 0.00025 m3 ( cubic meter ) as the equivalent measure within the same beach sand substance type. In principle with any measuring task, switched on professional people always ensure, and their success depends on, they get the most precise conversion results everywhere and every-time. Not only whenever possible, it's always so. Often having only a good idea ( or more ideas ) might not be perfect nor good enough solution. If there is an exact known measure in cup - cups Metric for beach sand amount, the rule is that the cup Metric number gets converted into m3 - cubic meters or any other beach sand unit absolutely exactly.
{"url":"https://www.traditionaloven.com/building/beach-sand/convert-cup-mtr-si-beach-sand-to-cubic-metre-beach-sand-m3.html","timestamp":"2024-11-03T13:40:59Z","content_type":"text/html","content_length":"40417","record_id":"<urn:uuid:94efd1ed-9f0f-45e3-8338-1af107cee288>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00746.warc.gz"}
Algebra Homework Fast Are you tired of spending hours on end trying to finish your algebra homework? Do you dread having to sit down and do math problems? If so, you’re not alone. Many students struggle with algebra and find it to be a stressful and time-consuming task. However, there are nifty ways to make your algebra homework go by faster and with less stress. Easy way is to ask for help: https://domyhomework123.com/math. Tougher way is to get through content of our website. In this post, we’ll explore how to finish algebra homework faster and without the usual stress that accompanies the process. From pre-algebra homework to college algebra homework, we will provide tips to help you complete them more effectively. Homework algebra: what it entails Algebra is a branch of mathematics that deals with mathematical equations and structures. Homework assignments in algebra often involve solving equations, graphing functions, and simplifying expressions using mathematical properties and rules. Algebra homework assignments usually build on concepts taught in class and can range in difficulty from basic to advanced. This can include solving linear equations, quadratic equations, systems of equations, and polynomials. It may also include working with algebraic concepts such as exponents, logarithms, and functions. How to do all your algebra homework fast and without stress To effectively start and complete your algebra homework correctly, here are some tips you need to know: • Understand the concept before attempting the problem One of the main reasons why algebra can be difficult is because it builds on itself. If you need help understanding the concept, finding the algebra homework answer can be hard. When unsure, ask your teacher or tutor for clarification. They can help you understand the concept and provide the guidance and support you need to solve the problem. • Practice, practice, practice Set aside some time each day to practice solving algebra problems. This will help you become more efficient and confident in solving problems. The more you practice, the better you’ll get at algebra. A calculator can be a great tool to help you solve algebra problems. It can save you time and help you check your work. However, please don’t rely on it too much. It’s important to understand how to solve the problem by hand so you can check your work and ensure you understand the concept. Sitting and working on algebra for hours on end can be draining. Taking breaks is also important when doing algebra homework. Take short breaks throughout your study session to give your brain a chance to rest and reboot. This will naturally help you stay focused and motivated. If you need help with a concept or a problem, feel free to ask for help. Talk to your teacher or a tutor. They can provide the guidance and support you need to understand the concept and solve the It is not recommended to cheat on algebra homework. Most kids often ask about how to cheat on algebra homework, but they shouldn’t be encouraged. Cheating undermines their understanding of the subject and leaves severe consequences on their academic journey. Most parents reading this will wonder, “how can parents help their kids with algebra homework?” Well, it’s pretty straightforward, and you don’t have to endorse cheating. For parents, one of the best ways to help their kids with algebra homework is to be supportive and encouraging. Always let your kids know you believe in their ability to understand and solve problems. Be there to help them when needed, but also allow them to work through some of the problems independently. Algebra homework can be daunting for students and parents trying to help their kids with their homework. However, with the right approach, it is possible to help your kids do algebra homework in the best way possible. Following these tips, you can do your algebra homework quickly and without stress. Remember, algebra is a building block for many other mathematical concepts. By mastering it, you’ll be well on your way to excelling in math and other subjects.
{"url":"https://openmanagement.org/algebra-homework.html","timestamp":"2024-11-08T10:38:36Z","content_type":"text/html","content_length":"50663","record_id":"<urn:uuid:fd59080a-0a53-4bdf-89f2-4531b2fb95bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00280.warc.gz"}
tehtuner 0.3.0 Adds support for classification trees in Step 2 by setting step2 = 'classtree' with a given threshold of threshold. Adds the print.tunevt method. tehtuner 0.2.1 Fixes a bug where zbar was calculated using the mean difference in the first column of the data instead of using the location of the variable Y. tehtuner 0.2.0 Adds the parallel option to tunevt to support parallel backends. tehtuner 0.1.1 This patch reconciles an invalid URI in the tunevt documentation’s references. tehtuner 0.1.0 This is a new package that implements the Virtual Twins algorithm for subgroup identification (Foster et al., 2011) while controlling the probability of falsely detecting differential treatment effects when the conditional treatment effect is constant across the population of interest. These methods were originally presented in Wolf et al. (2022). • Foster, J. C., Taylor, J. M., & Ruberg, S. J. (2011). Subgroup identification from randomized clinical trial data. Statistics in Medicine, 30(24), 2867–2880. https://doi.org/10.1002/sim.4322 • Wolf, J. M., Koopmeiners, J. S., & Vock, D. M. (2022). A permutation procedure to detect heterogeneous treatment effects in randomized clinical trials while controlling the type-I error rate. Clinical Trials. https://doi.org/10.1177/17407745221095855 Key function • tunevt() fits a Virtual Twins model using user-specified Step 1 and Step 2 models with parameter selection to control the probability of a false discovery.
{"url":"https://cran.fhcrc.org/web/packages/tehtuner/news/news.html","timestamp":"2024-11-14T20:59:11Z","content_type":"application/xhtml+xml","content_length":"3251","record_id":"<urn:uuid:216ba5a4-9bb0-4d3e-b87c-3fdf0b186434>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00772.warc.gz"}
A Comprehensive Look at Fluid Mechanics Fluid mechanics is a fundamental branch of physics that studies the behavior of fluids, such as liquids and gases, in motion. It is a crucial area of study in the field of classical mechanics, which focuses on the motion of objects and systems under the influence of forces. In this article, we will take a comprehensive look at fluid mechanics and explore its various principles and applications. Whether you are a physics researcher or just have a general interest in classical mechanics, this article will provide you with a thorough understanding of fluid mechanics and its importance in our daily lives. So, let's dive into the world of fluid mechanics and discover its fascinating concepts and phenomena. Fluid mechanics is a fundamental branch of physics that deals with the properties and behavior of fluids in motion. It plays a crucial role in various fields such as engineering, meteorology, and biology. In this article, we will provide a comprehensive look at fluid mechanics, covering key concepts, formulas, and resources for further learning. To start off, let's discuss the basic principles of fluid mechanics. Density, pressure, and viscosity are the three main properties that define a fluid. Density is the measure of how much mass is contained in a given volume of fluid. Pressure is the force exerted by the fluid per unit area, and it is affected by factors such as depth and gravity. Viscosity, on the other hand, is the measure of a fluid's resistance to flow. Next, we will delve into the different types of fluids and their characteristics. Newtonian fluids have a constant viscosity and follow Newton's laws of motion, while non-Newtonian fluids have variable viscosity and exhibit complex behavior. Understanding these differences is crucial in studying fluid mechanics. Now, let's move on to the laws that govern fluid motion. Bernoulli's principle states that as the speed of a fluid increases, its pressure decreases. This principle is key in understanding the lift force on an airplane wing and the flow of water through a pipe. The continuity equation, on the other hand, states that the mass flow rate in a system must remain constant. These laws provide a basis for understanding fluid behavior in various scenarios. A crucial component of fluid mechanics is the Navier-Stokes equations. These equations describe the motion of viscous fluids and are used extensively in solving fluid dynamics problems. They take into account factors such as velocity, pressure, density, and viscosity to accurately model fluid flow. To help solidify these concepts, we will provide real-life examples and interactive simulations. These resources will allow readers to see these principles in action and gain a deeper understanding of fluid mechanics. In conclusion, fluid mechanics is a fascinating field with applications in various industries. Whether you are a student looking to learn more or a professional seeking to deepen your understanding, this article has something for everyone. We hope that this comprehensive look at fluid mechanics has provided valuable insights and resources for further learning. Conducting Experiments We will provide step-by-step guides on how to set up experiments to observe different phenomena in fluid mechanics. These experiments can be done at home or in a laboratory setting and will help reinforce your understanding of the concepts. Solving Problems We will also include practice problems with detailed solutions to help you apply your knowledge and improve your problem-solving skills. These problems will cover a wide range of topics and difficulty levels, ensuring that you are well-equipped to tackle any fluid mechanics problem. Staying Updated on the Latest Research Finally, we will include links to reputable journals and websites where you can stay updated on the latest research and advancements in fluid mechanics. This will not only help you expand your knowledge but also keep you informed about potential future developments in the field. Finding Tutorials and Resources For those looking for additional learning materials, we will provide a list of reputable online tutorials, textbooks, and other resources. These will cover a variety of topics and cater to different learning styles, making it easier for you to find the right materials for your needs. Pursuing a Career in Physics If you are considering a career in physics, this article will also provide insights into the different fields and job opportunities available in fluid mechanics. We will also highlight the skills and qualifications needed to excel in this field. In conclusion, fluid mechanics is a fascinating and essential branch of physics that has numerous practical applications. By understanding its principles and equations, you can gain a deeper appreciation for the world around us. Whether you are a student, researcher, or simply curious about fluid mechanics, we hope this article has provided you with valuable insights and resources to further your understanding.
{"url":"https://www.onlinephysics.co.uk/classical-mechanics-research-fluid-mechanics","timestamp":"2024-11-07T16:06:06Z","content_type":"text/html","content_length":"172217","record_id":"<urn:uuid:f3c0b261-cea0-4129-a010-7acc19e4de87>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00452.warc.gz"}
Indexed structures Indexed structures are collections of numbered cells that may contain elements of a predetermined type. We will assume that in an array of size h the indices can range from 0 to h-1 or 1 to h. Normally, two properties are defined, one commonly called strong and the other weak. strong property: the cell indices of an array are consecutive numbers. weak property: no new cells can be added to an array. A possible implementation of an ordered array is shown below (and is an implementation of the dictionary data type) class: ArrayOrdered implements Dictionary data: an array S of size n containing n pairs (elem,key) S(n) = Tetagrande(n) In practice, we do not use index i (numeric) as the key but as the cell in which to store the pair (e,k) Let us now look at commonly defined operations on a dictionary insert(elem e,key k) T(n) = O(n) 1. Reallocates S by increasing the size n by 1 (i.e. S = S + 1 and n = n+1) 2. Search for t.c. k<=S[i].key (it is ordered!) 3. Place S[j]<-S[j-1] for every j in [n-1,i+1] and place S[i]<-(e,k) In practice, knowing that it is sorted, we simply reallocate the array by one cell, moving the contents of cells i to n to the new positions i+1 to n+1. Having identified the position to be occupied by the pair (e,k) to be inserted, I move the cells from i+1 so as to free cell i and maintain the sorting. delete(key k) T(n)=O(n) 1. I find the index i of the pair with key k in S i.e. I find t.c. S[i] = (e,k) where k is the given one. In other words I find S[i].key=k 2. Pose S[j]<-S[j+1] for each j, j in i,…,n-2 3. Reallocates S by decreasing the size by 1 (I have removed an element and can delete the last cell after I have shifted the elements search(key k) -> elem T(n) = O(log n) 1. Runs the binary search algorithm on S to check whether S contains k 2. If it finds returns element otherwise null In the Java language the natural implementation is provided by arrays, which can also contain objects beyond the basic types, so even pairs or tuples of the type <K,V> i.e. pairs (key,value) In addition to arrays, the basic structure of the language, we can of course use lists, e.g. ArrayLists, which adopt precisely this type of philosophy plus a few tricks to make the implementation more efficient.
{"url":"https://trueprogramming.com/java-indexed-structures/","timestamp":"2024-11-09T11:12:08Z","content_type":"text/html","content_length":"13450","record_id":"<urn:uuid:f4dce42f-9a6a-4b4b-abd0-e830d13dfa85>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00017.warc.gz"}
Simplifying the complexity of Physics While doing my schooling the most I liked were the laboratory classes to carry out experiments and confirm what my Physics and Chemistry teachers from Grade 9 onwards taught as the laws and principles. The subject areas specifically in Physics comprised of Mechanics, Heat, Optics, Sound, Electricity & Magnetism, and Atomic/Nuclear Physics. Fortunately, I was not bad at Mathematics, hence in comparison to my classmates, I could understand how mathematical equations of Physics Laws got derived. But those who were scared of Mathematics developed a fear within them for the beautiful subject of Physics in general. Interestingly, the same set of fearful classmates would at times internalize the “complex mathematical equations” extremely well on carrying out experiments in our laboratory class. I never skipped any laboratory class during my undergraduate and two post-graduate degrees in Physics, though at times I would bunk the theory class at University, when the lecturer was not a good one for a given course, such as Statistical Physics. I further learned to make a student understand concepts of Physics based on mathematical equations is not imperative by making the students do as many experiments as they can, but to also ensure while Physics theory is taught in class; it is full of everyday examples we encounter while walking on the road, which the teacher at times ignores for want of time to make the vital connections between the theory/law of Physics and the physical phenomenon taking place in front of our eyes. Please remember, every Physics theory howsoever mathematically complex it may be, can be explained by the things happening around us. The reason all physical happenings around us are indeed based on principles of Physics. Unfortunately, if the school kids can connect the dots by shear observations around, their understanding of Physics would become far easier to them despite it is rooted in complex mathematical equations. Unfortunately, those teaching Physics somewhere lack inculcating this among the growing children. So, my dear students as well as their parents, I plan to take up in my blogs to come following areas to converse with all of you. You are free to give me your comments and suggest even the topics, law, or area of Physics where you would like me to bring it before you by dismantling its mathematical complexity, based on my experience of several decades. 1. Taking the fear out of Physics for School Kids 2. Underlying Physics in Diagnostic Machines 3. How Physics govern the Human body we all possess? Till we meet again, remain safe, and follow social distancing “physically but not on this blog”. Let me have your feedback. Cameron Ahmad Code: PHSK-1 March 25, 2021 Brampton ON
{"url":"https://ahmadcameron.medium.com/simplifying-the-complexity-of-physics-d6b465440f68?source=user_profile---------0----------------------------","timestamp":"2024-11-10T12:52:35Z","content_type":"text/html","content_length":"91889","record_id":"<urn:uuid:fc1d8e5c-3fe0-45e1-bbef-c71ca7710ec2>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00706.warc.gz"}
MEYER: Notes sur les intégrales stochastiques : 31, 446-481, LNM 581 (1977) MEYER, Paul-André Notes sur les intégrales stochastiques Martingale theory This paper contains six additions to . Chapter~I concerns Hilbert space valued martingales, following Métivier, defining in particular their operator valued brackets and the corresponding stochastic integrals. Chapter~II gives a new proof (due to Yan, and now classical) of the basic result on the structure of local martingales. Chapter~III is a theorem of Herz (and Lépingle in continuous time) on the representation of $BMO$ which corresponds to the ``maximal'' definition of $H^1$. Chapter~IV states that, if $(B_t)$ is a $BMO$ martingale and $(X_t)$ is a martingale bounded in $L^p$, then $\sup_t X^{\ast}_t |B_{\infty} -B_t|$ is also in $L^p$ with a norm controlled by that of $X$ ($1< p<\infty$; there is at least a wrong statement about $p=1$ at the bottom of p. 470). This result can be interpreted as $L^p$ boundedness of the commutator of two operators: multiplication by an element of $BMO$, and stochastic integration by a bounded previsible process. Chapter~V (again on $BMO$) has a wrong proof, and seems to be still an open problem. Chapter~VI consists of small additions and corrections, and in particular acknowledges the priority of P.W.~Millar for useful results on local times Three errors are corrected in 1249Keywords: Stochastic integrals Hilbert space valued martingales Operator stochastic integrals $BMO$Nature: Original Retrieve article from Numdam
{"url":"http://sites.mathdoc.fr/cgi-bin/spitem?id=1131","timestamp":"2024-11-05T19:52:21Z","content_type":"text/html","content_length":"7546","record_id":"<urn:uuid:2706bb89-5d61-49bc-ab9f-da2eb3afbefb>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00331.warc.gz"}
Statistical Digital Signal Processing and Classification Copyright: 2002 ISBN: 9781580531351 This is the first book to introduce and integrate the topics of digital signal processing (DSP) and statistical classification together, and the only volume to introduce state-of-the-art transforms, including DFT, FFT, DCT, DST, DHT, DHLT, DFHT, DTWT, DWT, DHAT, PCT, CCT, CDT, and ODT together for DSP and digital communication applications. You get step-by-step guidance in: discrete-time random processes; discrete-time domain signal processing and frequency domain signal analysis; discrete-time transforms; digital filter design and adaptive filtering; multirate digital signal processing; and statistical signal classification. The text also helps you overcome problems associated with multirate A/D and D/A converters. Extensively referenced with over 1,065 equations, 142 illustrations, and numerous examples, the book furnishes an up-to-date, comprehensive, and coherent treatment of the fundamentals of this cutting-edge technology, based on the academic and industry experience of the authors. An excellent technical reference and research tool for both practicing engineers and graduate students in electrical, computer, and other engineering disciplines, this book offers assistance in applying DSP knowledge and statistical classification in real world applications. This authoritative volume includes critical concepts never before covered in this detail. About the Authors; Preface; Introduction--Data Acquisition System. DSP Algorithms. Statistical Signal Classification. Scope of This Book. References.; Discrete-Time Signal Processing-Introduction. Discrete-Time Signals. Basic Operations of Discrete-Time Signals. Discrete-Time Systems. Linear Time-Invariant Discrete-Time Systems. The z-Transform. The Inverse z-Transform. Frequency Domain of Discrete-Time Signals and Systems. The Relationship of Allpass and Minimum-Phase Systems. Summary. References.; Discrete-Time Random Processes--Introduction. Probability and Random Variables. Distribution and Density Functions. Stochastic Processes. Summary. References.; Discrete-Time Transforms--Introduction. The Discrete Fourier Transform. The Fast Fourier Transform. The Discrete Cosine Transform. The Discrete Sine Transforms. The Discrete Hartley Transform. The Discrete Hilbert Transform. The Discrete Fractional Hilbert Transform. The Discrete-Time Wavelet Transform. The Discrete Walsh Transform. The Discrete Hadamard Transform. Summary. References.; Digital Filtering-Introduction. Filter Specifications. FIR Linear Phase. FIR Filter Design. IIR Filter Design. Implementation of Filter Structures. Summary. References.; Adaptive Filters--Introduction. Wiener Filter Theory. Discrete-Time Kalman Filter. The LMS Adaptive Filters. Recursive Least Squares Algorithms. Summary. References.; Discrete-Time Multirate Signal Processing-Introduction. Decimation and Interpolation System. Efficient Polyphase Architecture for Implementing Multirate Signal Processing System. Efficient Design Techniques of Multiband Filters. Multistage Design of Multirate Signal Processing. Multirate Filter Banks. The Uniform DFT Filter Bank. Multirate Adaptive Filter Banks. Summary. References.; Multirate Data Converters-Introduction. Analog-To-Digital Converter. Digital-To-Analog Converter. Multirate A/D Converter. Multirate D/A Converter. Oversampling A/D Converters. Sigma-Delta A/D Converter. Hybrid QMF Bank A/D and D/A Converter. Summary. References.; Statistical Signal Classification-Introduction. Statistical Pattern Representation. Feature Extraction Theory. Unsupervised Learning and Cluster Analysis. Signal Classification. Estimated Probability of Misclassification. Summary. References.; Transform-Based Statistical Signal Classification-Introduction. System Architectures of Transform-Based Statistical Signal Classification. Generalized Principal Components Transform. Canonical Correlation Transform. Canonical Discrimination Transform. Optimal Discriminant Transform.; Generalized Optimal Declustering Transform. Transform-Based Statistical Signal Classifiers. Application and Performance Evaluations. Summary. References.; Appendix A--Matrix Algebra of Linear Transformation. Vectors. Matrices. The Data Matrix. Orthogonal Matrices and the Trace. Matrix Differentiation. Eigenvalues and Eigenvectors. Theorem of Spectral Decomposition. Theorem of Singular Value Decomposition. Quadratic Forms. Maximization and Minimization Theorem. References.; List of Figures.; Index.;
{"url":"https://uk.artechhouse.com/Statistical-Digital-Signal-Processing-and-Classification-P504.aspx","timestamp":"2024-11-03T02:55:53Z","content_type":"application/xhtml+xml","content_length":"45362","record_id":"<urn:uuid:e96e5e4a-7f0f-4838-9b45-088c2b32072b>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00637.warc.gz"}
Reverse Margin Calculator - Savvy Calculator Reverse Margin Calculator About Reverse Margin Calculator (Formula) A Reverse Margin Calculator is a tool used to calculate the sales revenue needed to achieve a desired profit margin. The formula for calculating the reverse margin typically involves the following Sales Revenue = Cost / (1 – Desired Profit Margin) Let’s break down the variables in this formula: 1. Cost: This represents the cost of producing or acquiring the goods or services being sold. 2. Desired Profit Margin: This refers to the target or desired percentage of profit to be earned from the sales revenue. It represents the difference between the selling price and the cost, expressed as a percentage of the cost. By dividing the cost by (1 – desired profit margin), you can calculate the sales revenue needed to achieve the desired profit margin. It’s important to note that the reverse margin calculator provides an estimation based on the given variables. Actual sales revenue may vary depending on factors such as pricing strategies, market conditions, and other costs associated with the product or service. A Reverse Margin Calculator serves as a helpful tool for business owners, entrepreneurs, and individuals involved in pricing and profitability analysis. It aids in setting appropriate pricing levels, determining sales targets, and understanding the relationship between costs, profit margins, and sales revenue needed to achieve desired profitability. Leave a Comment
{"url":"https://savvycalculator.com/reverse-margin-calculator","timestamp":"2024-11-08T07:27:06Z","content_type":"text/html","content_length":"141702","record_id":"<urn:uuid:5314ae29-b060-435b-a147-ae2282066a00>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00643.warc.gz"}
Terminating or not Is there a quick way to work out whether a fraction terminates or recurs when you write it as a decimal? Terminating or Not printable sheet A terminating decimal is a decimal which has a finite number of decimal places, such as 0.25, 0.047, or 0.7734 Take a look at the fractions below. $$\frac23 \qquad \frac45 \qquad \frac{17}{50} \qquad \frac3{16} $$ $$\frac7{12} \qquad \frac58 \qquad \frac{11}{14} \qquad \frac8{15}$$ Which ones do you think can be written as a terminating decimal? Once you've made your predictions, convert the fractions to decimals. Click below to check which ones terminate. Four of the fractions can be written as terminating decimals: $$\frac45=\frac8{10}=0.8 $$ $$\frac{17}{50}=\frac{34}{100}=0.34$$ $$\frac{3}{16}=\frac{1875}{10000}=0.1875$$ $$\frac58=\frac{625}{1000}= 0.625$$ The remaining four fractions can be written as recurring decimals, with a repeating pattern that goes on forever. I wonder whether there is a quick way to decide whether a fraction can be written as a terminating decimal... Choose some fractions, convert them to decimals, and write down the fractions whose decimals terminate. What do they have in common? Can you explain a method you could use to identify fractions which can be written as terminating decimals? Next you might like to explore recurring decimals in the problem Repetitiously. You may also be interested in the other problems in our Comparing and Matching Feature. Getting Started What are the prime factors of 10? What are the prime factors of 100? What are the prime factors of 1000?... You could rewrite the eight fractions like this: $$\frac23 \qquad \frac{2^2}5 \qquad \frac{17}{2\times5^2} \qquad \frac3{2^4} $$ $$\frac7{2^2 \times 3} \qquad \frac5{2^3} \qquad \frac{11}{2\times 7} \qquad \frac{2^3}{3 \times 5}$$ What do the fractions with terminating decimals have in common? Student Solutions The year 7 mentoring group from Bangkok Patana School in Thailand, Ruby from Loughborough High School and Jayden and Kiefer from Leys Junior School, both in England, worked out which of the fractions in the example are equivalent to terminating decimals. This is Jayden and Kiefer's work: First, we predicted which fractions would have a terminating decimals, this was quite easy as we could use process of elimination. $\frac 23$ - this is really easy as it is commonly known that this is equivalent to $0.6\dot6$ reoccurring. Originally, we thought that $\frac{17}{50}$ wouldn't be terminating, but then realised it could be converted into $\frac{34}{100}$, this makes it terminating. Then, you can eliminate $\frac7{12}$ as there is no possible way to convert it into [a fraction with denominator] that ends with zero (10,100 etc.) apart from numbers in the 120 times table. $\frac{11}{14}$ is also easy to eliminate for the same reason as the last one. $\frac{8}{15}$ is a harder one than the rest, but we eventually found out that it also wouldn't convert into 10,100 etc. This leaves us with: $\frac45, \frac{17}{50}, \frac{3}{16}, \frac58, \frac{17}{50}$ The year 7 mentoring group put the fractions in a table, and focused on the denominators: We took all the terminating fractions and factorized the denominator. We got the numbers: $5,2$ or both. Then we did the same with the recurring numbers: $3$; $2,2$ and $3$; $2$ and $7$; $3$ and $5.$ This made us think that to get a terminating decimal, the factors of the denominator should be $2,5$ or both. W.H., Dashiell from Sequoyah in the USA and Carlos from Kings College Alicante in Spain all found this same rule using unit fractions (numerator = 1). This is some of W.H.'s work: The fractions that I am going to work out these values for are $\frac12, \frac13, \frac14, \frac15, \frac16, \frac17, \frac18, \frac19$ and $\frac1{10}.$ The only fractions that terminate are $\frac12, \frac14, \frac15, \frac18$ and $\frac1{10}.$ To understand this problem better, I [rewrote] all of the denominator numbers so that they are expressed as a product of their prime factors. All of the terminating ones' denominators contain only prime factors of $2$ or $5$ ($2, 4, 5, 8$ and $10$), and the recurring ones can have $2$ and $5$ as factors, but they also have factors such as $3$ in them. One possible reason that the terminating fractions have exclusively prime factors of $2$ and $5$ could be to do with methods like percentages. Because a percentage is defined as a fraction with a denominator of $100,$ we can see that $100$ is a power of $10,$ and $10 = 2^1\times5^1.$ This means the only numbers that divide $10$ and return a whole number are $2$ and $5,$ so it logically follows that only fractions that have denominators with these prime factors are terminating. As all fractions have a fraction $\frac1{10^n}$ that is smaller than them, we can treat all of these cases the same. Dashiell found the same rule, but described it in a very different way: I tried a lot of different rules and things [the fractions with terminating decimals] they had in common. Then I realized that they all went into powers of ten $&\frac12: 2\times5=10 \hspace{1cm} &\frac14:4\times25=100\hspace{1cm} &\frac15:5\times20=100\\ &\frac18: 8\times125=1000 \hspace{1cm} &\frac{1}{10}: 10\times1=10 \hspace{1cm} &\frac1{16}:16\times625 =10000\\ &\frac1{20}:20\times5 = 100$ If the denominator goes into [a power] of two [it will also have a terminating decimal, because] all powers of two go into powers of ten. I used this [for] numbers like $\frac1{16}.$ It's easier to figure out that $16$ is a power of two than that $16\times625=10000.$ Then I realized that this worked [only] if the fraction was simplified. Otherwise fractions like $\frac3{12}$ wouldn't fit into any of the categories, but the equivalent decimal would be finite or in this case $0.25.$ So if the denominator is a factor of a power of ten then the equivalent decimal is finite. W.H. explained how the reasoning for fractions $\frac1n$ can be extended to simplified fractions with numerator $>1.$ Click to see this explanation. Say that we have a fraction such as $\frac58,$ which can't be simplified as $5$ and $8$ are something called coprime, meaning their only common factor is one. However, all of these coprime fractions can be expressed as a multiple of another fraction; in this case $\frac58 = \frac18\times5.$ Because this can be shown in terms of one of the simpler fractions, we can still apply this method. The reason that the method still works for these non $\frac1n$ fractions is that we know a terminating fraction in the form $\frac1n$ can't have any multiples that are recurring, meaning that if the first fraction is terminating, then all fractions that are multiples of the first one are also terminating. However, this process does not flow exactly the same for recurring numbers, as fractions like $\frac6{12}$, which should apparently be recurring according to this logic, are actually terminating. However, we can use the previous simplifying method to show that the reasoning is still sound. Ahan from Tanglin Trust School in Singapore, Sanika P from PSBBMS in India, Thomas from Lakenheath American High School in England, Homare from Wimbledon High School in the UK, Edward from Worthington Hooker School in the USA, Tiger and Utkarsh and Kaishin from Bangkok Patana School in Thailand, Mahdi from Mahatma Gandhi International School in India, John from Vaels International School in India, John from Royal Latin School in England and An from Loughborough High School in the UK all got the same rule - that fractions whose denominator is a factor of a power of $10$ (when simplified) are equivalent to terminating decimals. John from Vaels International School wrote the rule using algebra: If a fractions is in the form $\dfrac{xy}{(x)(5^c)(2^d)}$ it is terminating. Joseph sent in this method for testing whether a fraction's decimal equivalent will terminate. Can you see how Joseph's method uses the same rule? To decide whether a fraction will result in a terminating decimal, follow these steps: 1. Simplify the fraction. If the fraction can't be simplified any further e.g. $\frac78$ can't be simplified any further, we do nothing for this step. 2.Look at the denominator. 3. $x=$denominator 4.If $x$ ends in $0$ or $5,$ divide $x$ by $5.$ Repeat until $x$ doesn't end in $0$ or $5.$ 5.If $x$ ends in $0$ or $2$ or $4$ or $6$ or $8,$ divide $x$ by $2.$ Repeat until $x$ is an odd number. 6.If $x=1,$ the fraction will result in a terminating decimal. Otherwise, the fraction will not result in a terminating decimal. Tiger and Utkarsh, John from Royal Latin School, An and Kaishin all said that fractions whose denominator is a factor of a power of $10$ (when simplified) are equivalent to terminating decimals because of the way we write numbers. Kaishin wrote: If the denominator's prime factors are 2 or 5 or a combination of both, [then] the denominator will always be able to be converted to 10,100,10000... etc. . We use a base 10 number system, which means that if the denominator can be converted into 10,100,10000... etc., the decimal will always be terminal. Edward 1 and his twin brother Edward 2 used this idea to prove the rule: If a fraction, $f = \frac pq$ terminates, then it can be explicitly written as: $f = \dfrac {n_1}{10} + \dfrac{n_2}{10^2} + \dfrac{n_3}{10^3} + \dfrac{n_4}{10^4} + ..... + \dfrac{n_k}{10^k,}$ for some finite $k,$ & where $n$ is some arbitrary placeholder. (The $n_i$ are the digits of the terminating decimal - $f=0.n_1n_1n_3n_4...n_k$ (because we write numbers in base $10$)) I factorised this such that $f = \dfrac1{10^k}\times\left(\dfrac{n_1}{10^{1-k}} + \dfrac{n_2}{10^{2-k}} + ..... + \dfrac{n_{k-1}}{10^{-1}} + n_k\right).$ (And, since $k$ is greater than any of $1, 2, 3, ..., (k-1)$, the powers $(1-k), (2-k), ... -1$ are all negative, so $f = \dfrac1{10^k}\left(n_1\times10^{k-1}+n_2\times10^{k-2}+...+n_{k-1}\times10^ I then let whatever is inside the brackets above be equal to $j$ (where $j$ is a whole number, as seen above, and in fact $j$ also has digits $n_1, ... n_k$ - when $j$ is written out, $j$ is written as $n_1n_2n_3...n_k$). I re-write $10^k$ as $2^k\times5^k,$ then $f = \dfrac{j}{2^k\times5^k}.$ This fraction can be simplified by factoring out all the common multiples of $2$ and $5$ in the numerator. Therefore, $f=\dfrac{j'}{2^y+5^z}$ for some new integers $j', y, z.$ Clearly, this form shows that the denominator consists purely of $2$s and $5$s. Thomas used these ideas to describe how the terminating decimal can be found: Example: [if a] fraction can be simplified down to $\frac38$, the denominator has a $2$-to-$5$ factor ratio of $3:0$ (Thomas means that the prime factorisation of $8 = 2\times2\times2$ contains $3$ $2$s and $0$ $5$s) Bringing that ratio back to $3:3$ ($2\times2\times2\times5\times5\times5$) gives us a power of $10$ in the denominator (namely, $1000$). [So we need to multiply numerator and denominator] by $125$ ($5\times5\times5$) and we can thus see that the decimal form is $0.375.$ This method can be used in every case where the denominator in simplest form of the fraction can be factored into purely $2$s and $5$s. Teachers' Resources Why do this problem? This problem offers an excellent opportunity for students to practise converting fractions into decimals, while also investigating a wider question that connects their knowledge of prime factors and place value. Possible approach Start by writing a list on the board of the following sequence of fractions: $$\frac1{40}=$$ $$\frac2{40}=$$ $$\frac3{40}=$$... up to $$\frac{20}{40}=$$ "Do you know how to write any of these fractions as decimals?" Give students a little time to figure out which fractions they recognise, perhaps using equivalent fractions as an intermediate step. Then fill in the decimal equivalents on the board, inviting students to share the thinking they did to work out the decimal forms. For example: "I know that $\frac{10}{40}$ is $\frac14$ which can also be written as 25% or $\frac{25}{100}$ so it's $0.25$." "If $\frac4{40}$ is $0.1$, then $\frac2{40}$ must be $0.05$ because it's half as big." Now introduce the main problem. Write up or display these eight fractions: $$\frac23 \qquad \frac45 \qquad \frac{17}{50} \qquad \frac3{16} $$ $$\frac7{12} \qquad \frac58 \qquad \frac{11}{14} \qquad \frac8{15}$$ "Which ones do you think can be written as a terminating decimal, and which ones do you think have to be written as a recurring decimal?" (If students have not yet met the idea of terminating and recurring, clarify the meanings.) Give students a bit of time to make their predictions, and then invite them to work out the decimal equivalents and see if they are right. They might do this by using equivalent fractions, a written division calculation, or using a calculator. Once everyone has worked out the decimal equivalents, take time to discuss whether students' predictions were correct and whether there were any "In a while, I am going to give you some fractions. Your challenge is to devise a method for working out straight away whether a fraction is equivalent to a terminating or recurring decimal." Give students some time to try some examples of their own to test any conjectures that they make. You could collect examples on the board in two columns: terminating and recurring. In the last few minutes of the lesson draw together the insights and methods that have emerged, and test students' methods with some carefully chosen examples. Key questions Think about the denominators of fractions that you know will terminate. What do they have in common? Why is the prime factorisation of the denominator important? Possible support When you collect examples on the board in two columns (terminating and recurring) consider writing the fractions in their lowest terms and then writing the denominators as a product of their prime Possible extension Students could go on to explore recurring decimals in Tiny Nines and Repetitiously. For a challenging extension, some students may wish to consider the idea of terminating and recurring representations in other number bases.
{"url":"https://nrich.maths.org/problems/terminating-or-not","timestamp":"2024-11-14T22:01:25Z","content_type":"text/html","content_length":"53638","record_id":"<urn:uuid:db702abc-821e-4423-8095-20b4250007b3>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00849.warc.gz"}
parameters = nothing, gamma = cis(2π * rand()), tracker_options = TrackerOptions(), endgame_options = EndgameOptions(), Solve the system F using a total degree homotopy. This returns a path tracker (EndgameTracker or OverdeterminedTracker) and an iterator to compute the start solutions. If the system F has declared variable_groups then a multi-homogeneous a start system following ^[Wam93] will be constructed. polyhedral(F::Union{System, AbstractSystem}; only_non_zero = false, endgame_options = EndgameOptions(), tracker_options = TrackerOptions()) Solve the system F in two steps: first solve a generic system derived from the support of F using a polyhedral homotopy as proposed in ^[HS95], then perform a coefficient-parameter homotopy towards F. This returns a path tracker (PolyhedralTracker or OverdeterminedTracker) and an iterator to compute the start solutions. If only_non_zero is true, then only the solutions with non-zero coordinates are computed. In this case the number of paths to track is equal to the mixed volume of the Newton polytopes of F. If only_non_zero is false, then all isolated solutions of F are computed. In this case the number of paths to track is equal to the mixed volume of the convex hulls of $supp(F_i) ∪ \{0\}$ where $supp (F_i)$ is the support of $F_i$. See also ^[LW96]. function polyhedral( It is also possible to provide directly the support and coefficients of the system F to be solved. We consider a system f which has in total 6 isolated solutions, but only 3 where all coordinates are non-zero. @var x y f = System([2y + 3 * y^2 - x * y^3, x + 4 * x^2 - 2 * x^3 * y]) tracker, starts = polyhedral(f; only_non_zero = false) # length(starts) == 8 count(is_success, track.(tracker, starts)) # 6 tracker, starts = polyhedral(f; only_non_zero = true) # length(starts) == 3 count(is_success, track.(tracker, starts)) # 3 PolyhedralTracker <: AbstractPathTracker This tracker realises the two step approach of the polyhedral homotopy. See also [polyhedral]. square_up(F::Union{System, AbstractSystem}; identity_block = true, compile = compile = mixed) Creates the RandomizedSystem $\mathfrak{R}(F(x); N)$ where $N$ is the number of variables of F. newton_cache = NewtonCache(F.system)) Assigns to the PathResult path_result the return_code :excess_solution if the path_result is a solution of the randomized system F but not of the polynomial system underlying F. This is performed by using Newton's method for non-singular solutions and comparing the residuals of the solutions for singular solutions. Returns a function λ(::PathResult) which performs the excess solution check. The call excess_solution_check(F)(path_result) is identical to excess_solution_check!(F, path_result). See also • Wam93An efficient start system for multi-homogeneous polynomial continuation, Wampler, C.W. Numer. Math. (1993) 66: 517. https://doi.org/10.1007/BF01385710 • HS95Birkett Huber and Bernd Sturmfels. “A Polyhedral Method for Solving Sparse Polynomial Systems.” Mathematics of Computation, vol. 64, no. 212, 1995, pp. 1541–1555 • LW96T.Y. Li and Xiaoshen Wang. "The BKK root count in C^n". Math. Comput. 65, 216 (October 1996), 1477–1484.
{"url":"https://www.juliahomotopycontinuation.org/HomotopyContinuation.jl/stable/start_systems/","timestamp":"2024-11-12T08:19:38Z","content_type":"text/html","content_length":"16843","record_id":"<urn:uuid:720f1c9d-920e-46ec-b235-34ea783e7b52>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00430.warc.gz"}
How to separate real and imaginary parts of a large complex fraction? 26608 Views 6 Replies 4 Total Likes How to separate real and imaginary parts of a large complex fraction? I have a large complex fraction and I want to separate its real and imaginary parts. I have declared all my variables real in the beginning of the notebook and I am applying ComplexExpand[Re[frac]], notebook is running for weeks, yet no output. Please help me out. The notebook is attached. thanking you. 6 Replies I believe part of the reason you are seeing what you see as Real is because of approximate (decimal) constants. In[1]:= (0.+ 8.111989164074249^-126 I)Conjugate[(0.+ 8.111989164074249*^-126 I) ] Out[1]= 6.58044*10^-251 + 0. I In[2]:= (0 + 8 I)*Conjugate[(0 + 8 I) ] Out[2]= 64 Mathematica interprets 0. as "approximately zero" and not known to many significant digits, while integers without decimals are exact. But even replacing all 0. with 0 won't solve all the problems. I do not know how Denominator[1/p+q] will behave. I find this simple example In[1]:= expr1 = 1/(a + I b) + c + I d; Out[2]= 1 In[3]:= Numerator[expr1] Out[3]= 1/(a + I b) + c + I d In[4]:= {LeafCount[expr1], LeafCount[expr1[[1]]] + LeafCount[expr1[[2]]], LeafCount[Denominator[expr1]] + LeafCount[Numerator[expr1]]} Out[4]= {168640, 168639, 167369} The first total is 1 less than the leaves in expr1 because the Plus has been removed. The second total appears to be missing 1271 items so I am concerned about your multiplying and dividing by conjugate of denominator. I suppose it might be possible that the Numerator and Denominator are somehow rearranging the expressions to end up with fewer leaves, but I would verify that the results are correct before using them. My apologies for making any error. If you can explain what I did incorrectly I would appreciate it. I do not think Conjugate[Denominator[expr1]] * Numerator[expr1] should be real, but I had hoped that this would provide a rapid method of extracting the real and complex parts. I would verify your calculation on a smaller example to see if this is correct. I do not know how to interpret your using Chop, everything I was doing was intending to be exact calculations. Please check this very carefully to ensure that I have made no mistakes. (* expr1==Plus[1/p,q]==Conjugate[p]/(Re[p]^2+Im[p]^2)+q *) p = 1/expr1[[1]]; q = expr1[[2]]; Rp = ComplexExpand[Re[p]]; Ip = ComplexExpand[Im[p]]; Rq = ComplexExpand[Re[q]]; Iq = ComplexExpand[Im[q]]; Rexpr1 = ComplexExpand[Re[Conjugate[p]]]/(Rp^2 + Ip^2) + Rq; Iexpr1 = ComplexExpand[Im[Conjugate[p]]]/(Rp^2 + Ip^2) + Iq; If I have made no mistakes then in 20 seconds you have your solution. Perhaps someone can think of a way to test this in 20 seconds. Thanks a lot for your suggestion. But I have already applied another trick (multiplying and dividing expr1 by complex conjugate of denominator of expr1 ) and I also did as you said ( there was a little error in your suggestion which I corrected); but both ways give me some error in final expression (precision and accuracy related, since there are very high negative powers). For example, ComplexExpand[Conjugate[Denominator[expr1]]] * Denominator[expr1] should be real but it is not so. In the same way mathematica is not performing truthfully for ComplexExpand[Conjugate[Denominator[expr1]]] * Numerator[expr1] Command Chop is not effective here since I expect high negative exponents in my desired expression. What is your ultimate intended use of this expression? Since all the coefficients are numerical, I'd guess that in the end you are intending to perform a numerical calculation with the result. If that is the case (and it may not be) then why not work numerically with your expression directly and take the appropriate real and imaginary parts of the result. It's still a challenge to make sure that your numerical results are accurate since you have such a large expression that is a combination of large powers of your parameters... but that would be the way I would go. However, if I have a calculation that seems that it would go on for weeks, I would stop and reevaluate my strategy. Thanks for the help. My goal is first to solve two parts (real and imaginary) for Rdot. Later I want to extract coefficient of theta (with specific exponent) from Rdot. Then I expect to get a relation between Rdot and theta in terms of another paramters followed by continuation method to investigate the trend among paramters. Be respectful. Review our Community Guidelines to understand your role and responsibilities. Community Terms of Use
{"url":"https://community.wolfram.com/groups/-/m/t/306264","timestamp":"2024-11-09T16:18:00Z","content_type":"text/html","content_length":"122684","record_id":"<urn:uuid:55a006a3-5118-43b9-a9aa-4350a09441e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00652.warc.gz"}
Torsion - Scalar Field Problems Questions and Answers - Sanfoundry Finite Element Method Questions and Answers – Scalar Field Problems – Torsion This set of Finite Element Method Multiple Choice Questions & Answers (MCQs) focuses on “Scalar Field Problems – Torsion”. 1. What is the governing equation for a bar subjected to torsion? a) d^2Φ/dx^2 + d^2Φ/dy^2 + 2 = 0 b) d^2Φ/dx^2 – d^2Φ/dy^2 + 2 = 0 c) d^2Φ/dx^2 * d^2Φ/dy^2 + 2 = 0 d) d^2Φ/dx^2 / d^2Φ/dy^2 + 2 = 0 View Answer Answer: a Explanation: The governing equation of a bar subjected to torsion is given by d^2Φ/dx^2 + d^2Φ/dy^2 + 2 = 0. where Φ = stress function. This equation simplifies the process of finding the shear stresses as a result of torsional loading. 2. The governing equation for a bar subjected to torsion is considered a special case of Helmholtz equations. a) False b) True View Answer Answer: b Explanation: The given statement is true. d^2Φ/dx^2 + d^2Φ/dy^2 + 2 = 0 is considered as a special case scenario of Helmholtz equations. This set of equations represent a time independent form of the wave equation. They are also considered as the Eigen value of Laplace operators in mathematics. 3. What type of loading is depicted in the figure below? a) Axial b) Bending c) Torsion d) Tensile View Answer Answer: c Explanation: The given figure demonstrates torsion type of loading. When the bar is subjected to twisting motion, it gives rise to shear stresses and torsional strain. This process is termed as Torsion. The SI unit is Nm. 4. Castigliano’s theorem states that the total derivative of strain energy gives rise to the displacement. a) False b) True View Answer Answer: a Explanation: The given statement is false. Castigliano’s first theorem states that the partial derivative of strain energy with respect to any particular force gives rise to the displacement along that direction. This is valid only for linearly elastic materials. 5. Which of the following is a major assumption in torsion of circular members? a) Plane sections do not remain plane after twisting b) Plane sections remain parallel after twisting c) Plane sections remain perpendicular after twisting d) Plane sections remain plane after twisting View Answer Answer: d Explanation: The first and major assumption in torsion of circular member is the fact that plane sections remain plane after twisting. This is valid only for elastic deformation that arises in circular members due to torsional type of loading. This assumption does not hold good for non circular members. 6. What is the direction of shear stress components at the outside surface of a torsion member? a) Parallel to the surface b) Tangent to the surface c) Perpendicular to the surface d) On the surface View Answer Answer: b Explanation: At the outside surface of a torsion member, no stress acts normal to the surface. This is the reason why the shear stress components are assumed to act in a direction tangential to the surface. The value of stress function that arises is usually constant throughout the surface. Sanfoundry Global Education & Learning Series – Finite Element Method. To practice all areas of Finite Element Method, here is complete set of 1000+ Multiple Choice Questions and Answers.
{"url":"https://www.sanfoundry.com/finite-element-method-questions-answers-scalar-field-problems-torsion/","timestamp":"2024-11-14T10:10:58Z","content_type":"text/html","content_length":"162004","record_id":"<urn:uuid:6d5981e6-6b30-4571-a42f-1d98c079da15>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00017.warc.gz"}
0051 - N-Queens (Hard) | LeetCode The Hard Way Problem Link Problem Statement The n-queens puzzle is the problem of placing n queens on an n x n chessboard such that no two queens attack each other. Given an integer n, return all distinct solutions to the n-queens puzzle. You may return the answer in any order. Each solution contains a distinct board configuration of the n-queens' placement, where 'Q' and '.' both indicate a queen and an empty space, respectively. Example 1: Input: n = 4 Output: [[".Q..","...Q","Q...","..Q."],["..Q.","Q...","...Q",".Q.."]] Explanation: There exist two distinct solutions to the 4-queens puzzle as shown above Example 2: Input: n = 1 Output: [["Q"]] 1 <= n <= 9 Approach 1: Backtracking Once you understand how the queen moves, which is straight in both the orthogonal and diagonal directions, then it becomes a fairly straightforward backtracking problem. Knowing how a queen moves, we know we can only place one queen per row, so for each row we have to find a valid square to place the queen. So the backtracking comes in by trying to place a queen on all valid squares and continuing, and if it ever doesn't work, we backtrack and try the next valid square in that row. How do we track valid squares? We can use sets for $O(1)$ access, to see if the square is valid. We know we can only ever place one queen in any row, so no need to use a set there as we can track our row during our backtracking and just move on to the next row. We can also easily use a set to track valid columns, by just adding the current column of the queen we just placed into the set. How do we track diagonals? We will use 2 sets, one for diagonal going forward, and one for diagonal going backwards. That is any diagonal going from left to right, bottom to top will all have the same coordinate integer if we add the row and columns together. Also any diagonal going from right to left, bottom to top, will also have the same coordinate integer if we subtract the row from the column position. See the below 4x4 Board x 0 1 2 3 0 |0 1 2 3 1 |1 2 3 4 2 |2 3 4 5 3 |3 4 5 6 Backward Diagonals x 0 1 2 3 0 | 0 1 2 3 1 |-1 0 1 2 2 |-2 -1 0 1 3 |-3 -2 -1 0 Time Complexity: $O(n!)$ where n is the size of the board. You can imagine we have n choice to make for the first row, then after that for each choice we have $n-1$ choices for the 2nd row, and $n-2$ for the 3rd, etc. as placing a queen removes that column from each row. Space Complexity: $O(n^2)$ our board will be of size $n*n$. class Solution: def solveNQueens(self, n: int) -> List[List[str]]: # initialize our return list and our board n_queens = [] # Note our board will be a list of lists where each cell is # a list containing a string character. As this gives us # more efficient access to each cell to replace it with # either a 'Q' or a '.' board = [['.'] * n for _ in range(n)] # Our sets to track valid squares. col, dia, dia_b = set(), set(), set() # recursive backtracking algorithm def backtracking(r): # if our row, r every reaches n. It means we successfully # placed a queen in all n rows. if r == n: # create a copy of the board, join method will join # all our list of strings, into a single string for that # row. ie: ['.']['.']['.']['Q'] => ['...Q'] board_copy = [''.join(row) for row in board] # check all our columns in current row. for c in range(n): # check if square is valid, if it isn't backtrack. if c in col or (r + c) in dia or (c - r) in dia_b: # found valid square, add the squares queen touches # to the proper sets. dia.add(r + c) dia_b.add(c - r) # update board to reflect board[r][c] = 'Q' # continue down the decision tree onto the next row. backtracking(r + 1) # Backtrack from previous call-> remove all values # from the sets dia.remove(r + c) dia_b.remove(c - r) # reset that board position for the next iteration. board[r][c] = '.' # call the algorithm starting at 0, and return our answer. return n_queens
{"url":"https://leetcodethehardway.com/solutions/0000-0099/n-queens-hard","timestamp":"2024-11-09T07:25:26Z","content_type":"text/html","content_length":"82692","record_id":"<urn:uuid:9f4f1dcc-9f1b-4548-8b41-0ba5f7859272>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00693.warc.gz"}
Circuit Enables Precision Control In Radiant Heating Systems Successful design of precision temperature-control loops, like all high-performance servo systems, depends on careful management of the usual feedback gain and bandwidth tradeoffs. While always tricky, these interactions can become unmanageable if the relevant thermal "constants" are unknown, or worse, not constant at all. Discussed in this article is a thermal design that arose from just such a challenging scenario. The design involves a system complicated by the nonlinear temperature-dependent parameters of radiant heat transfer. The application requires accurate thermostasis of a silicon (Si) wafer over a 100°C to 1000°C setpoint range in an evacuated chemical-vapor-deposition (CVD) rig. Temperature control is accomplished via radiant heating from a large (250 W or larger) low-voltage dc incandescent lamp. It was the radiant heating feature that made this control problem extra "interesting." So-called "Newtonian" heat exchange which occurs via conduction or convection tends to be nicely linear with temperature. But radiation, alas, is proportional to the fourth power of absolute temperature. Consequently, the thermal time "constants" (degree-sec/degree) of radiation-coupled systems aren't linearly independent of temperature as they are in Newtonian systems. In-stead, they're inversely proportional to T^3. This causes the thermal time-constant of the vacuum chamber's Si wafer to vary by a factor of 40 over the 100°C (373K) to 1000°C (1273K) setpoint range! Such variation adds substantially to the difficulty of designing an accurate yet non-oscillatory control loop. Coping with this complication required use of a robust integrating convergence-by-bisection feedback control algorithm (Fig. 1). This algorithm is used as the basis for the thermostat circuit shown in Figure 2. It's described in detail in "Take-Back-Half: A Novel Integrating Temperature-Control Algorithm," Electronic Design, Dec. 4, 2000, p. 132. The Si wafer's temperature is sensed by a thermocouple. Next, it's cold-junction compensated, amplified, linearized, and repeated as a 1-mV/deg. analog output, V[T], by the digital panel thermometer. V[T] is compared to the setpoint voltage, V[S]. The V[S ]− V[T] difference is then integrated by A1, buffered by A4, and applied the control input of the programmable lamp supply. Therefore, whenever V[T ]< V[S], the lamp voltage (and, therefore, the heat radiated onto the wafer) will ramp up, warming the wafer. Conversely, if V[T ]> V[S], the wafer will be cooled. Of course, if this simplistic error integration comprised the entire control algorithm, stable convergence to the setpoint wouldn't be likely. Instead, persistent oscillation above and below the setpoint would be virtually inevitable. The "Take-Back-Half" (TBH) algorithm damps oscillations and stabilizes the servo loop. It does so by revising the estimate of the optimum steady-state lamp voltage at each setpoint (V[T] = V[S]) crossing. To make TBH action possible, some means for detecting setpoint crossings must exist. Crossed-diode-connected transistors Q1 and Q2 and comparator A2 accomplish this task by continuously tracking the polarity of the (V[S ]− V[T])/R1 error current. A2 goes high when V[T ]< V[S] and low when V[T ]> V[S, ]while inverter A3 generates the complementary logic term. Positive feedback around A2 keeps the logic transitions snappy. Meanwhile, the roles of TBH variables H[O] and H are served by sample-and-hold capacitor C1 and integrator cap C2, respectively. CMOS switches S1, S2, and S3 are arranged so that whenever V[T ]< V[S], S2 turns on and connects S1's control input to A3's logic-zero. This shuts off S1, which in turn isolates C1 and holds H[O]. Alternatively, when V[T ]> V[S], S2 turns off, allowing R2 to pull S1's input to A2's logic-zero. Again, S1 turns off and C1 is isolated. The fun begins whenever V[T] = V[S]. When V[T] < V[S] flips to V[T] = V[S], A3 switches from zero to one, which turns on S1. As a result, C1 (H[O]) and C2 (H) are connected in parallel and set to (H + H[O])/2. This state persists for the time-out set by R3C3 (approximately 70 ms). After this period, S3, S2, and S1 all turn off and isolate C1 to await the next setpoint crossing. A similar cascade follows any toggle from V[T ]> V[S] to V[T] = V[S]. A2 turns S1 on via R2 until R3C3 times out and turns S3 and S2 on and S1 back off. Optimization of overall servo-loop dynamics is easy since selected-at-test R1 is the only variable involved in the tuning process. Sponsored Recommendations To join the conversation, and become an exclusive member of Electronic Design, create an account today!
{"url":"https://www.electronicdesign.com/news/products/article/21758455/circuit-enables-precision-control-in-radiant-heating-systems","timestamp":"2024-11-10T14:30:48Z","content_type":"text/html","content_length":"236796","record_id":"<urn:uuid:5f901ebe-ec97-4487-a4d9-dc7324faddfa>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00418.warc.gz"}
SIN/COS broken in gfortran on mipsel? Test request SIN/COS broken in gfortran on mipsel? Test request Dear list, I find it hard to believe, but it seems that the SIN() and COS() built-ins might be broken in gfortran on mipsel. See this build log for instance, search in it for the string "Testing Routine B101" and look at the table following that string: Judging by this build log, the SIN(x) and COS(x) functions on mipsel apparently return 0 and 1 respectively for x > 0 (and do not give the right values even for x = 0). Unfortunately, as far as I know there are no currently available mips/mipsel machines for developers to use, so I need to ask this list for help. I'm attaching a stand-alone version of this test as file "sincos.F". If someone running the latest Sid on a mipsel architecture wouldn't mind, please install the latest gfortran-4.3 package, run the following and see what you get: gfortran-4.3 -O2 sincos.F -o a.out What you *should* get is something like the following: (obviously the exact errors will vary depending on floating point variations, but should be less than 1.0E-07) For A=I*PI/180: I X=SIN(A) Y=COS(A) ATG(X,Y) Error 0 0.0000000 1.0000000 0.0000000 0.0E+00 10 0.1736482 0.9848077 0.1745329 0.9E-07 20 0.3420202 0.9396926 0.3490659 0.0E+00 Largest Error for ATG was 0.9E-07 If the same bug as seen in the cernlib build log manifests itself, you will get something more like this: For A=I*PI/180: I X=SIN(A) Y=COS(A) ATG(X,Y) Error 0 0.9869969 0.1607389 1.4093571 0.0E+00 10 0.0000000 1.0000000 0.0000000 0.1E+01 20 0.0000000 1.0000000 0.0000000 0.1E+01 Largest Error for ATG was 0.1E+01 Please let me know (CC: to my email address) what is actually obtained on mipsel (and also on big-endian mips if possible). If you do get results significantly different from those shown above, please try compiling at lower optimization levels down to -O0 and see whether any of them give the correct results. Thank you and best regards, Kevin B. McCarty <kmccarty@gmail.com> WWW: http://www.starplot.org/ WWW: http://people.debian.org/~kmccarty/ GPG: public key ID 4F83C751 PROGRAM B101M C Specify the largest error allowed for a successful test PARAMETER ( TSTERR=1D-6 ) WRITE(6,'(/8X,''For A=I*PI/180:'')') + ''ATG(X,Y)'',4X,''Error'')') PI = 3.14159 26535 89793D0 DO 1 I = 0,350,10 IF (A .NE. 0.)ER=ABS((R1-A)/A) WRITE(6,'(1X,I5,2F12.7,F15.7,E10.1)') I,X1,X2,R1,ER 1 CONTINUE WRITE(6,'(/7X,''Largest Error for ATG was'',E10.1)')ERMAX FUNCTION ATG(X1,X2) PARAMETER (PI = 3.14159 26535 89793D0) Attachment: signature.asc Description: OpenPGP digital signature Reply to:
{"url":"https://lists.debian.org/debian-mips/2008/03/msg00101.html","timestamp":"2024-11-05T01:08:53Z","content_type":"text/html","content_length":"8497","record_id":"<urn:uuid:456efab0-4454-4dfa-939b-72fe1be59c2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00834.warc.gz"}
Introduction to Forecasting of Dynamic System Response Forecasting the response of a dynamic system is the prediction of future outputs of the system using past output measurements. In other words, given observations y(t) = {y(1), …, y(N)} of the output of a system, forecasting is the prediction of the outputs y(N+1), …, y(N+H) until a future time horizon H. When you perform forecasting in System Identification Toolbox™ software, you first identify a model that fits past measured data from the system. The model can be a linear time series model such as AR, ARMA, and state-space models, or a nonlinear ARX model. If exogenous inputs influence the outputs of the system, you can perform forecasting using input-output models such as ARX and ARMAX. After identifying the model, you then use the forecast command to compute y(N+1), …, y(N+H). The command computes the forecasted values by: • Generating a predictor model using the identified model. • Computing the final state of the predictor using past measured data. • Simulating the identified model until the desired forecasting horizon, H, using the final state as initial conditions. This topic illustrates these forecasting steps for linear and nonlinear models. Forecasting the response of systems without external inputs (time series data) is illustrated, followed by forecasting for systems with an exogenous input. For information about how to perform forecasting in the toolbox, see Forecast Output of Dynamic System. Forecasting Time Series Using Linear Models The toolbox lets you forecast time series (output only) data using linear models such as AR, ARMA, and state-space models. Here is an illustration of forecasting the response of an autoregressive model, followed by the forecasting steps for more complex models such as moving-average and state-space models. Autoregressive Models Suppose that you have collected time series data y(t) = {y(1), …, y(N)} of a stationary random process. Assuming the data is a second-order autoregressive (AR) process, you can describe the dynamics by the following AR model: Where a[1] and a[2] are the fit coefficients and e(t) is the noise term. You can identify the model using the ar command. The software computes the fit coefficients and variance of e(t) by minimizing the 1-step prediction errors between the observations {y(1), …, y(N)} and model response. Assuming that the innovations e(t) are a zero mean white sequence, you can compute the predicted output$\stackrel{^}{y}\left(t\right)$ using the formula: Where y(t-1) and y(t-2) are either measured data, if available, or previously predicted values. For example, the forecasted outputs five steps in the future are: $\begin{array}{l}\stackrel{^}{y}\left(N+1\right)=-{a}_{1}y\left(N\right)-{a}_{2}y\left(N-1\right)\\ \stackrel{^}{y}\left(N+2\right)=-{a}_{1}\stackrel{^}{y}\left(N+1\right)-{a}_{2}y\left(N\right)\\ \ stackrel{^}{y}\left(N+3\right)=-{a}_{1}\stackrel{^}{y}\left(N+2\right)-{a}_{2}\stackrel{^}{y}\left(N+1\right)\\ \stackrel{^}{y}\left(N+4\right)=-{a}_{1}\stackrel{^}{y}\left(N+3\right)-{a}_{2}\ stackrel{^}{y}\left(N+2\right)\\ \stackrel{^}{y}\left(N+5\right)=-{a}_{1}\stackrel{^}{y}\left(N+4\right)-{a}_{2}\stackrel{^}{y}\left(N+3\right)\end{array}$ Note that the computation of $\stackrel{^}{y}\left(N+2\right)$ uses the previously predicted value $\stackrel{^}{y}\left(N+1\right)$ because measured data is not available beyond time step N. Thus, the direct contribution of measured data diminishes as you forecast further into the future. The forecasting formula is more complex for time series processes that contain moving-average terms. Moving-Average Models In moving-average (MA) models, the output depends on current and past innovations (e(t),e(t-1), e(t-2), e(t-3)....). Thus, forecasting the response of MA models requires knowledge of the initial conditions of the measured data. Suppose that time series data y(t) from your system can be fit to a second-order moving-average model: Suppose that y(1) and y(2) are the only available observations, and their values equal 5 and 10, respectively. You can estimate the model coefficients c[1] and c[2] using the armax command. Assume that the estimated c[1] and c[2] values are 0.1 and 0.2, respectively. Then assuming as before that e(t) is a random variable with zero mean, you can predict the output value at time t using the following formula: Where e(t-1) and e(t-2) are the differences between the measured and the predicted response at times t-1 and t-2, respectively. If measured data does not exist for these times, a zero value is used because the innovations process e(t) is assumed to be zero-mean white Gaussian noise. Therefore, forecasted output at time t = 3 is: Where, the innovations e(1) and e(2) are the difference between the observed and forecasted values of output at time t equal to 1 and 2, respectively: $\begin{array}{l}e\left(2\right)=y\left(2\right)-\stackrel{^}{y}\left(2\right)=y\left(2\right)-\left[0.1e\left(1\right)+0.2\text{}e\left(0\right)\right]\\ e\left(1\right)=y\left(1\right)-\stackrel{^} Because the data was measured from time t equal to 1, the values of e(0) and e(-1) are unknown. Thus, to compute the forecasted outputs, the value of these initial conditions e(0) and e(-1) is required. You can either assume zero initial conditions, or estimate them. • Zero initial conditions: If you specify that e(0) and e(-1) are equal to 0, the error values and forecasted outputs are: $\begin{array}{l}e\left(1\right)=5-\left(0.1*0+0.2*0\right)=5\\ e\left(2\right)=10-\left(0.1*5+0.2*0\right)=9.5\\ \stackrel{^}{y}\left(3\right)=0.1*9.5+0.2*5=1.95\end{array}$ The forecasted values at times t = 4 and 5 are: $\begin{array}{l}\stackrel{^}{y}\left(4\right)=0.1e\left(3\right)+0.2e\left(2\right)\\ \stackrel{^}{y}\left(5\right)=0.1e\left(4\right)+0.2e\left(3\right)\end{array}$ Here e(3) and e(4) are assumed to be zero as there are no measurements beyond time t = 2. This assumption yields, $\stackrel{^}{y}\left(4\right)=0.2*e\left(2\right)=0.2*9.5=1.9$, and $\stackrel Thus, for this second-order MA model, the forecasted outputs that are more than two time steps beyond the last measured data point (t = 2) are all zero. In general, when zero initial conditions are assumed, the forecasted values beyond the order of a pure MA model with no autoregressive terms are all zero. • Estimated initial conditions: You can estimate the initial conditions by minimizing the squared sum of 1-step prediction errors of all the measured data. For the MA model described previously, estimation of the initial conditions e(0) and e(-1) requires minimization of the following least-squares cost function: $V=e{\left(1\right)}^{2}+e{\left(2\right)}^{2}=\text{(y(1) - [0}\text{.1 e(0) + 0}{\text{.2 e(-1)])}}^{2}\text{+ (y(2) - [0}\text{.1 e(1) + 0}{\text{.2 e(0)])}}^{2}$ Substituting a = e(0) and b = e(-1), the cost function is: Minimizing V yields e(0) = 50 and e(-1) = 0, which gives: $\begin{array}{l}e\left(1\right)=5-\left(0.1*50+0.2*0\right)=0\\ e\left(2\right)=10-\left(0.1*0+0.2*50\right)=0\\ \stackrel{^}{y}\left(3\right)=0\\ \text{y^(4) = 0}\end{array}$ Thus, for this system, if the prediction errors are minimized over the available two samples, all future predictions are equal to zero, which is the mean value of the process. If there were more than two observations available, you would estimate e(-1) and e(0) using a least-squares approach to minimize the 1-step prediction errors over all the available data. This example shows how to reproduce these forecasted results using the forecast command. Load the measured data. Create an MA model with A and C polynomial coefficients equal to 1 and [1 0.1 0.2], respectively. model = idpoly(1,[],[1 0.1 0.2]); Specify zero initial conditions, and forecast the output five steps into the future. opt = forecastOptions('InitialCondition','z'); yf_zeroIC = forecast(model,PastData,5,opt) yf_zeroIC = 5×1 Specify that the software estimate initial conditions, and forecast the output. opt = forecastOptions('InitialCondition','e'); yf_estimatedIC = forecast(model,PastData,5,opt) yf_estimatedIC = 5×1 10^-15 × For arbitrary structure models, such as models with autoregressive and moving-average terms, the forecasting procedure can be involved and is therefore best described in the state-space form. State-Space Models The discrete-time state-space model of time series data has the form: $\begin{array}{l}x\left(t+1\right)=Ax\left(t\right)+Ke\left(t\right)\\ y\left(t\right)=Cx\left(t\right)+e\left(t\right)\end{array}$ Where, x(t) is the state vector, y(t) are the outputs, e(t) is the noise-term. A, C, and K are fixed-coefficient state-space matrices. You can represent any arbitrary structure linear model in state-space form. For example, it can be shown that the ARMA model described previously is expressed in state-space form using A = [0 0;1 0], K = [0.5;0] and C = [0.2 0.4]. You can estimate a state-space model from observed data using commands such as ssest and n4sid. You can also convert an existing polynomial model such as AR, MA, ARMA, ARX, and ARMAX into the state-space form using the idss command. The advantage of state-space form is that any autoregressive or moving-average model with multiple time lag terms (t-1,t-2,t-3,...) only has a single time lag (t-1) in state variables when the model is converted to state-space form. As a result, the required initial conditions for forecasting translate into a single value for the initial state vector X(0). The forecast command converts all linear model to state-space form and then performs forecasting. To forecast the response of a state-space model: 1. Generate a 1-step ahead predictor model for the identified model. The predictor model has the form: $\begin{array}{l}\stackrel{^}{x}\left(t+1\right)=\left(A-K*C\right)\text{}\stackrel{^}{x}\left(t\right)+Ky\left(t\right)\\ \stackrel{^}{y}\left(t\right)=C*\stackrel{^}{x}\left(t\right)\end{array} Where y(t) is the measured output and $\stackrel{^}{y}\left(t\right)$ is the predicted value. The measured output is available until time step N and is used as an input in the predictor model. The initial state vector is $\stackrel{^}{x}\left(0\right)={x}_{0}$. 2. Assign a value to the initial state vector x[0]. The initial states are either specified as zero, or estimated by minimizing the prediction error over the measured data time span. Specify a zero initial condition if the system was in a state of rest before the observations were collected. You can also specify zero initial conditions if the predictor model is sufficiently stable because stability implies the effect of initial conditions diminishes rapidly as the observations are gathered. The predictor model is stable if the eigenvalues of A-K*C are inside the unit circle. 3. Compute $\stackrel{^}{x}\left(N+1\right)$, the value of the states at the time instant t = N+1, the time instant following the last available data sample. To do so, simulate the predictor model using the measured observations as inputs: $\begin{array}{l}\stackrel{^}{x}\left(1\right)=\left(A-K*C\right){\text{x}}_{0}\text{+}Ky\left(0\right)\\ \stackrel{^}{x}\left(2\right)=\left(A-K*C\right)\stackrel{^}{x}\left(1\right)+Ky\left(1\ right)\\ ⋮\\ \stackrel{^}{x}\left(N+1\right)=\left(A-K*C\right)\stackrel{^}{x}\left(N\right)+Ky\left(N\right)\end{array}$ 4. Simulate the response of the identified model for H steps using $\stackrel{^}{x}\left(N+1\right)$ as initial conditions, where H is the prediction horizon. This response is the forecasted response of the model. Reproduce the Output of forecast Command This example shows how to manually reproduce forecasting results that are obtained using the forecast command. You first use the forecast command to forecast time series data into the future. You then compare the forecasted results to a manual implementation of the forecasting algorithm. Load time series data. z9 is an iddata object that stores time series data (no inputs). Specify data to use for model estimation. observed_data = z9(1:128); Ts = observed_data.Ts; t = observed_data.SamplingInstants; y = observed_data.y; Ts is the sample time of the measured data, t is the time vector, and y is the vector of measured data. Estimate a discrete-time state space model of 4th order. sys = ssest(observed_data,4,'Ts',Ts); Forecast the output of the state-space model 100 steps into the future using the forecast command. H = 100; yh1 = forecast(sys,observed_data,H); yh1 is the forecasted output obtained using the forecast command. Now reproduce the output by manually implementing the algorithm used by the forecast command. Retrieve the estimated state-space matrices to create the predictor model. A = sys.A; K = sys.K; C = sys.C; Generate a 1-step ahead predictor where the A matrix of the Predictor model is A-K*C and the B matrix is K. Predictor = idss((A-K*C),K,C,0,'Ts',Ts); Estimate initial states that minimize the difference between the observed output y and the 1-step predicted response of the identified model sys. x0 = findstates(sys,observed_data,1); Propagate the state vector to the end of observed data. To do so, simulate the predictor using y as input and x0 as initial states. Input = iddata([],y,Ts); opt = simOptions('InitialCondition',x0); [~,~,x] = sim(Predictor,Input,opt); xfinal = x(end,:)'; xfinal is the state vector value at time t(end), the last time instant when observed data is available. Forecasting 100 time steps into the future starts at the next time step, t1 = t(end)+Ts. To implement the forecasting algorithm, the state vector value at time t1 is required. Compute the state vector by applying the state update equation of the Predictor model to xfinal. x0_for_forecasting = Predictor.A*xfinal + Predictor.B*y(end); Simulate the identified model for H steps using x0_for_forecasting as initial conditions. opt = simOptions('InitialCondition',x0_for_forecasting); Because sys is a time series model, specify inputs for simulation as an H-by-0 signal, where H is the wanted number of simulation output samples. Input = iddata([],zeros(H,0),Ts,'Tstart',t(end)+Ts); yh2 = sim(sys,Input,opt); Compare the results of the forecast command yh1 with the manually computed results yh2. The plot shows that the results match. Forecasting Response of Linear Models with Exogenous Inputs When there are exogenous stimuli affecting the system, the system cannot be considered stationary. However, if these stimuli are measurable then you can treat them as inputs to the system and account for their effects when forecasting the output of the system. The workflow for forecasting data with exogenous inputs is similar to that for forecasting time series data. You first identify a model to fit the measured input-output data. You then specify the anticipated input values for the forecasting time span, and forecast the output of the identified model using the forecast command. If you do not specify the anticipated input values, they are assumed to be zero. This example shows how to forecast an ARMAX model with exogenous inputs in the toolbox: Load input-output data. z1 is an iddata object with input-output data at 300 time points. Use the first half of the data as past data for model identification. Identify an ARMAX model Ay(t) = Bu(t-1) + Ce(t), of order [2 2 2 1]. na = 2; % A polynomial order nb = 2; % B polynomial order nc = 2; % C polynomial order nk = 1; % input delay sys = armax(past_data,[na nb nc nk]); Forecast the response 100 time steps into the future, beyond the last sample of observed data past_data. Specify the anticipated inputs at the 100 future time points. H = 100; FutureInputs = z1.u(151:250); legend('Past Outputs','Future Outputs') Forecasting Response of Nonlinear Models The toolbox also lets you forecast data using nonlinear ARX, Hammerstein-Wiener, and nonlinear grey-box models. Hammerstein-Wiener, and nonlinear grey-box models have a trivial noise-component, that is disturbance in the model is described by white noise. As a result, forecasting using the forecast command is the same as performing a pure simulation. Forecasting Response of Nonlinear ARX Models A time series nonlinear ARX model has the following structure: Where f is a nonlinear function with inputs R(t), the model regressors. The regressors can be the time-lagged variables y(t-1), y(t-2),... , y(t-N) and their nonlinear expressions, such as y(t-1)^2,y (t-1)y(t-2), abs(y(t-1)). When you estimate a nonlinear ARX model from the measured data, you specify the model regressors. You can also specify the structure of f using different structures such as wavelet networks and tree partitions. For more information, see the reference page for the nlarx estimation command. Suppose that time series data from your system can be fit to a second-order linear-in-regressor model with the following polynomial regressors: Then f(R)=W'*R+c, where W=[w[1] w[2] w[3] w[4] w[5]] is a weighting vector, and c is the output offset. The nonlinear ARX model has the form: When you estimate the model using the nlarx command, the software estimates the model parameters W and c. When you use the forecast command, the software computes the forecasted model outputs by simulating the model H time steps into the future, using the last N measured output samples as initial conditions. Where N is the largest lag in the regressors, and H is the forecast horizon you specify. For the linear-in-regressor model, suppose that you have measured 100 samples of the output y, and you want to forecast four steps into the future (H = 4). The largest lag in the regressors of the model is N = 2. Therefore, the software takes the last two samples of the data y(99) and y(100) as initial conditions, and forecasts the outputs as: $\begin{array}{l}\stackrel{^}{y}\left(101\right)={w}_{1}y\left(100\right)+{w}_{2}y\left(99\right)+{w}_{3}y{\left(}^{100}+{w}_{4}y{\left(}^{99}+{w}_{5}y\left(100\right)y\left(99\right)\\ \stackrel{^} \\ \stackrel{^}{y}\left(103\right)={w}_{1}\stackrel{^}{y}\left(102\right)+{w}_{2}\stackrel{^}{y}\left(101\right)+{w}_{3}\stackrel{^}{y}{\left(}^{102}+{w}_{4}\stackrel{^}{y}{\left(}^{101}+{w}_{5}\ stackrel{^}{y}\left(102\right)\stackrel{^}{y}\left(101\right)\\ \stackrel{^}{y}\left(104\right)={w}_{1}\stackrel{^}{y}\left(103\right)+{w}_{2}\stackrel{^}{y}\left(102\right)+{w}_{3}\stackrel{^}{y}{\ If your system has exogenous inputs, the nonlinear ARX model also includes regressors that depend on the input variables. The forecasting process is similar to that for time series data. You first identify the model, sys, using input-output data, past_data. When you forecast the data, the software simulates the identified model H time steps into the future, using the last N measured output samples as initial conditions. You also specify the anticipated input values for the forecasting time span, FutureInputs. The syntax for forecasting the response of nonlinear models with exogenous inputs is the same as that for linear models, forecast(sys,past_data,H,FutureInputs). See Also forecast | predict | sim Related Examples More About
{"url":"https://kr.mathworks.com/help/ident/ug/forecasting-response-of-dynamic-systems.html","timestamp":"2024-11-07T23:44:59Z","content_type":"text/html","content_length":"126298","record_id":"<urn:uuid:bc6f3cac-7079-44e5-9a12-92c8253d4e3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00224.warc.gz"}
NPTEL Design and Analysis of Algorithms Week 8 Assignment Answers 2024 » DBC ItanagarNPTEL Design and Analysis of Algorithms Week 8 Assignment Answers 2024 NPTEL Design and Analysis of Algorithms Week 8 Assignment Answers 2024 NPTEL Design and Analysis of Algorithms Week 8 Assignment Answers 2024 1. Which of the following is a linear constraint? • 18x + 3yz + 42z ≥ 217 • 18x + 3y + 42xz ≤ 217 • 18x + 3y + 42z ≥ 217 • 18xy + 3yz + 42z = 217 Answer :- For Answers Click Here 2. The President is arriving to inaugurate a stadium. He will go directly from the airport to the stadium. Security considerations require two routes to be available for the President that do not overlap on any section of road, though the routes can cross each other at intersections. This can be modelled as a network flow problem where the source and target are the airport and the stadium, road intersections are nodes and each road segment is an edge. The actual flow problem to be solved is to: • Assign a total of capacity 2 to all outgoing edges from the source and find a feasible flow. • Assign a total of capacity 2 to all incoming edges to the target and find a feasible flow. • Assign each edge capacity 1 and check that the maximum flow is less than 2. • Assign each edge capacity 1 and check that the maximum flow is at least 2. Answer :- For Answers Click Here 3. City authorities are concerned about traffic accidents on major roads. They would like to have ambulances stationed at road intersections to quickly reach the scene of any accident along these roads. To minimize response time, ambulances are to be located at intersections with traffic lights so that any segment of road can be reached by at least one ambulance that does not have to pass through a traffic light to reach the scene of the accident. If we model the road network as a graph, where intersections with traffic lights are vertices and edges represent road segments between traffic lights, the graph theoretic question to be answered is: • Find a spanning tree with minimum cost. • Find a minimal colouring. • Find a minimum size vertex cover. • Find a minimum size independent set. Answer :- For Answers Click Here 4. We have an exponential time algorithm for problem A, and problem A reduces in polynomial time to problem B. From this we can conclude that: • B has an exponential time algorithm. • B cannot have a polynomial time algorithm. • A cannot have a polynomial time algorithm. • None of the other choices are correct. Answer :- 5. Suppose SAT reduces to a problem C. To claim that C is NP-complete, we additionally need to show that: • There is a checking algorithm for C. • Every instance of C maps to an instance of SAT. • Every instance of SAT maps to an instance of C. • C does not have an efficient algorithm. Answer :- For Answers Click Here Facebook Twitter Whatsapp Whatsapp Copy Link Leave a comment Leave a comment Latest News
{"url":"https://dbcitanagar.com/nptel-design-and-analysis-of-algorithms-week-8-assignment-answers/","timestamp":"2024-11-09T19:26:58Z","content_type":"text/html","content_length":"176331","record_id":"<urn:uuid:602d3fb8-149c-49c9-b352-2d06a73a1d90>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00331.warc.gz"}
Dagstuhl Seminar Proceedings, Volume 536105361 Abstracts Collection – Algorithmic Aspects of Large and Complex NetworksA Cost Mechanism for Fair Pricing of Resource UsageA Hybrid Model for Drawing Dynamic and Evolving GraphsComputing earliest arrival flows with multiple sourcesCost Sharing Mechanisms for Fair Pricing of Resources UsageDeterministic boundary recongnition and topology extraction for large sensor networksForce-Directed Approaches to Sensor Network LocalizationFriends for Free: Self-Organizing Artificial Social Networks for Trust and Cooperation From 04.09.05 to 09.09.05, the Dagstuhl Seminar 05361 ``Algorithmic Aspects of Large and Complex Networks'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available. Algorithms Large and Complex Networks 1-19 Regular Paper Stefano Leonardi Stefano Leonardi Friedhelm Meyer auf der Heide Friedhelm Meyer auf der Heide Dorothea Wagner Dorothea Wagner 10.4230/DagSemProc.05361.1 Creative Commons Attribution 4.0 International license https://creativecommons.org/licenses/by/4.0/legalcode
{"url":"https://drops.dagstuhl.de/entities/volume/DagSemProc-volume-5361/metadata/xml","timestamp":"2024-11-08T14:13:18Z","content_type":"application/xml","content_length":"20731","record_id":"<urn:uuid:94977f79-a24c-4dd6-ac90-2298ba0d1051>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00457.warc.gz"}
Farming Costs The total electricity cost of your farm is the sum of all Power used by your system times the price you pay for each kWh of power. Total electricity cost = Total Electricity in kWh * Cost per kWh Total Electricty in kWh = 3Nodes' electricity consumption * Number of 3Nodes + Cooling system electricity consumption With our example, we have 5 servers running at 400 W at Full Load and we have a 12K BTU unit that is consuming in average 1000W. We would then have: 5 * 400 W + 1000 W = 3000 W = 3 kW To get the kWh per day we simply multiply by 24. kW * (# of hour per day) = daily kWh consumption 3 kW * 24 = 72 kWh / day We thus have 72 kWH per day. For 30 days, this would be kWh / day * (# day in a month) = kWh per month 72 * 30 = 2160 kWH / month. At a kWh price of 0.10$ USD, we have a cost of 216 $USD per month for the electricity bill of our ThreeFold farm. kWH / month of the farm * kWh Cost = Electricity Bill per month for the farm 2160 * 0.1 = 216$USD / month for electricity bills The bandwidth needed for a given 3Node is not yet set in stone and you are welcome to participate in ongoing discussion on this subject on the ThreeFold Forum. In this section, we will give general guidelines. The goal is to have a good idea of what constitutes a proper bandwidth available for a given amount of resources utilized on the ThreeFold Grid. Starting with a minimum of 1 mbps per Titan, which is 1 TB SSD and 32 GB RAM, we note that this is the lowest limit that gives the opportunity for the most people possible to join the ThreeFold Grid. That being said, we could set that 10 mbps is an acceptable upper limit for 1 TB SSD and 64 GB of RAM. Those numbers are empirical and more information will be shared in the future. The ratio 1TB SSD/64GB RAM is in tune with the optimal TFT rewards ratio. It is thus logical to think that farmers will build 3Node based on this ratio. Giving general bandwidth guidelines based on this ratio unit could thus be efficient for the current try-and-learn situation. Here we explore some equations that can give a general idea to farmers of the bandwidth needed for their farms. As stated, this is not yet set in stones and the TFDAO will need to discuss and clarify those notions. Here is a general equation that gives you a good idea of a correct bandwidth for a 3Node: min Bandwidth per 3Node (mbps) = k * max((Total SSD TB / 1 Tb),(Total Threads / 8 Threads),(Total GB / 64 GB)) + k * (Total HDD TB / 2) Setting k = 10 mbps, we have: min Bandwidth per 3Node (mbps) = 10 * max((Total SSD TB / 1 TB),(Total Threads / 8 Threads),(Total GB / 64 GB)) + 10 * (Total HDD TB / 2) As an example, a Titan, with 1TB SSD, 8 Threads and 64 GB of RAM, would need 10 mbps: 10 * max(1, 1, 1) = 10 * 1 = 10 With the last portion of the equation, we can see that for each additional 1TB HDD storage, you would need to add 5 mbps of bandwidth. Let's take a big server as another example. Say we have a server with 5TB SSD, 48 threads and 384 GB of RAM. We would then need 60 mbps of bandwidth for each of these 3Nodes: 10 * max((5/5), (48/8), (384/64)) = 10 * max(5,6,6) = 10 * 6 = 60 This server would need 60 mbps minimum to account for a full TF Grid utilization. You can easily scale this equation if you have many 3Nodes. Let's say you have a 1 gbps bandwidth from your Internet Service Provider (ISP). How much of those 3Nodes could your farm have? Floor (Total available bandwidth / ((Bandwidth needed per 3Nodes)) = Max servers possible With our example we have: 1000 / 60 = 16.66... = 16 We note that the function Floor takes the integer without the decimals. Thus, a 1 gbps bandwidth farm could have 16 3Nodes with each 5TB SSD, 48 threads and 384 GB of RAM. In this section, we used k = 10 mbps. If you follow those guidelines, you will most probably have a decent bandwidth for your ThreeFold farm. For the time being, the goal is to have farmers building ThreeFold farms and scale them reasonably with their available bandwidth. Stay tuned for official bandwidth parameters in the future. Once you know the general bandwidth needed for your farm, you can check with your ISP the price per month and take this into account when calculating your monthly costs. Let's take the example we used with 5 servers with 400 W at Full Load. Let's say these 5 servers have the same parameters we used above here. We then need 60 gbps per 3Nodes. This means we need 300 mbps. For the sake of our example, let's say this is around 100$ USD per month. As the TFT price is fixed for 60 months when you connect your 3Node for the first time on the TF Grid, we will use the period of 60 months, or 5 years, to calculate the total cost and revenue. The total cost is equal to: Total Cost = Initial investment + 60 * (electricity + Internet costs per month) In our example, we can state that we paid each server 1500$ USD and that they generate each 3000 TFT per month, with an entry price of 0.08$ USD per TFT. The electricity cost per month is 144$ for the electricity bill 100$ for the Internet bill Total : 244 $ monthly cost for electricity and Internet The revenues are Revenues per month = Number of 3Nodes * TFT farmed per 3Node * Price TFT Sold In this example, we have 5 servers generating 2000 TFT per month at 0.08$ USD per TFT: 5 * 3000$ * 0.08$ = 1200$ The net revenue per month are thus equal to Net Revenue = Gross revenue - Monthly cost. We thus have 1200$ - 244$ = 956$ This means that we generate a net profit of 956$ per month, without considering the initial investment of building the 3Nodes for the farm. In the previous AC example, we calculate that a minimum of 12K BTU was needed for the AC system. Let's say that this would mean buying a 350$ USD 12k BTU AC unit. The initial cost is the cost of all the 3Nodes plus the AC system. Number of 3Nodes * cost per 3Nodes + Cost of AC system = Total Cost In this case, it would be: Total initial investment = Number of 3Nodes * Cost of 3Node + Cost of AC system Then we'd have: 5 * 1500 + 350 = 7850 $ Thus, a more realistic ROI would be: Total initial investment / Net Revenue per Month = ROI in months In our case, we would have: 7850$ / 956$ = Ceiling(8.211...) = 9 With the function Ceiling taking the upper integer, without any decimals. Then within 9 months, this farm would have paid itself and from now on, it would be only positive net revenue of 956$ per month. We note that this takes into consideration that we are using the AC system 24/7. This would surely not be the case in real life. This means that the real ROI would be even better. It is a common practice to do estimates with stricter parameters. If you predict being profitable with strict parameters, you will surely be profitable in real life, even when "things" happen and not everything goes as planned. As always, this is not financial advice. We recall that in the section Calculate the ROI of a DIY 3Node, we found a simpler ROI of 6.25 months, say 7 months, that wasn't taking into consideration the additional costs of Internet and electricity. We now have a more realistic ROI of 9 months based on a fixed TFT price of 0.08$ USD. You will need to use to equations and check with your current TF farm and 3Nodes, as well as the current TFT market price. To know how much TFT you will farm per month for a giving 3Node, the easiest route is to use the ThreeFold Simulator. You can do predictions of 60 months as the TFT price is locked at the TFT price when you first connect your 3Node, and this, for 60 months. To know the details of the calculations behind this simulator, you can read this documentation. As a brief synthesis, the following equations are used to calculate the total revenues and costs of your farm. - Total Monthly Cost = Electricity cost + Internet Cost - Total Electricity Used = Electricy per 3Node * Number of 3Node + Electricity for Cooling - Total Monthly Revenue = TFT farmed per 3 node * Number of 3Nodes * TFT price when sold - Initial Investment = Price of farm (3Nodes) + Price of AC system - Total Return on investment = (60 * Monthly Revenue) - (60 * Monthly cost) - Initial Investment This section constitutes a quick synthesis of the costs and revenues when running a ThreeFold Farm. As always, do your own reseaerch and don't hesitate to visit the ThreeFold Forum on the ThreeFold Telegram Farmer Group if you have any questions.
{"url":"https://manual.grid.tf/documentation/farmers/farming_optimization/farming_costs.html","timestamp":"2024-11-14T01:57:12Z","content_type":"text/html","content_length":"80212","record_id":"<urn:uuid:ccb52ceb-1d16-4e04-88c5-59854b865a02>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00180.warc.gz"}
5.5 Power Calculations Chapter 5 – Series And Parallel Circuits When calculating the power dissipation of resistive components, use any one of the three power equations to derive the answer from values of voltage, current, and/or resistance pertaining to each This is easily managed by adding another row to our familiar table of voltages, currents, and resistances: Power for any particular table column can be found by the appropriate Ohm’s Law equation (appropriate based on what figures are present for E, I, and R in that column). An interesting rule for total power versus individual power is that it is additive for any configuration of the circuit: series, parallel, series/parallel, or otherwise. Power is a measure of the rate of work, and since power dissipated must equal the total power applied by the source(s) (as per the Law of Conservation of Energy in physics), circuit configuration has no effect on the • Power is additive in any configuration of resistive circuit: P[Total] = P[1] + P[2] + . . . P[n]
{"url":"https://www.technocrazed.com/5-5-power-calculations","timestamp":"2024-11-10T08:10:45Z","content_type":"text/html","content_length":"80055","record_id":"<urn:uuid:c32dfed2-d9fb-4ac2-ac53-bc801a675cb6>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00329.warc.gz"}
Testing of clustering A set X of points in R^d is (k, b)-clusterable if X can be partitioned into k subsets (clusters) so that the diameter (alternatively, the radius) of each cluster is at most b. We present algorithms that by sampling from a set X, distinguish between the case that X is (k, b)-clusterable and the case that X is ε-far from being (k, b′)-clusterable for any given 0<ε≤1 and for b′≥b. In ε-far from being (k, b′)-clusterable we mean that more than ε·|X| points should be removed from X so that it becomes (k, b′)-clusterable. We give algorithms for a variety of cost measures that use a sample of size independent of |X|, and polynomial in k and 1/ε. Our algorithms can also be used to find approximately good clusterings. Namely, these are clusterings of all but an ε-fraction of the points in X that have optimal (or close to optimal) cost. The benefit of our algorithms is that they construct an implicit representation of such clusterings in time independent of |X|. That is, without actually having to partition all points in X, the implicit representation can be used to answer queries concerning the cluster any given point belongs to. All Science Journal Classification (ASJC) codes • Hardware and Architecture Dive into the research topics of 'Testing of clustering'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/testing-of-clustering-3","timestamp":"2024-11-03T03:47:30Z","content_type":"text/html","content_length":"48369","record_id":"<urn:uuid:356e66a6-5093-4e8e-9c3b-e1d33583d3ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00160.warc.gz"}
Small and large time stability of the time taken for a Lévy process to cross curved boundaries This paper is concerned with the small time behaviour of a Lévy process X. In particular, we investigate the stabilities of the times, T̄ [b](r) and T[b]* (r), at which X, started with X [0] = 0, first leaves the space-time regions {(t, y) ∈ ℝ^2: y ≤ rt^b, t ≥ 0} (one-sided exit), or {(t, y) ∈ ℝ^2: |y| ≤ rt^b, t ≥ 0} (two-sided exit), 0 ≤ b < 1, as r ↓ 0. Thus essentially we determine whether or not these passage times behave like deterministic functions in the sense of different modes of convergence; specifically convergence in probability, almost surely and in L^p. In many instances these are seen to be equivalent to relative stability of the process X itself. The analogous large time problem is also discussed. • Lévy process • Overshoot • Passage times across power law boundaries • Random walks • Relative stability ASJC Scopus subject areas • Statistics and Probability • Statistics, Probability and Uncertainty Dive into the research topics of 'Small and large time stability of the time taken for a Lévy process to cross curved boundaries'. Together they form a unique fingerprint.
{"url":"https://experts.syr.edu/en/publications/small-and-large-time-stability-of-the-time-taken-for-a-l%C3%A9vy-proce","timestamp":"2024-11-05T03:32:14Z","content_type":"text/html","content_length":"47689","record_id":"<urn:uuid:385c0deb-8780-4118-98b7-4d2b78de7a4f>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00401.warc.gz"}
Tech Tips: One-Sample Means Test R: use the function To explain the parameters: • data is a vector consisting of the sample data • mu is the mean $\mu$ associated with the null hypothesis • alternative is a string of text that specifies the alternative hypothesis (i.e., "two.sided", "less", or "greater") Consider the following example of this function's use: Suppose the weights (in grams) of a sample of eleven small screws are found to be $$0.38,0.55,1.54,1.55,0.50,0.60,0.92,0.96,1.00,0.86,1.46$$ The production process for the screws is supposed to result in screws with mean weight of $1$ gram. Assuming the weights are normally distributed, test this claim at a $0.10$ significance level. > data = c(0.38,0.55,1.54,1.55,0.50,0.60,0.92,0.96,1.00,0.86,1.46) > t.test(data,alternative="two.sided",mu=1.00,conf.level=0.90) One Sample t-test data: data t = -0.48485, df = 10, p-value = 0.6382 alternative hypothesis: true mean is not equal to 1 90 percent confidence interval: 0.7070946 1.1692691 sample estimates: mean of x Given the $p$-value given above, which is greater than the significance level, this sample does not provide any statistically significant evidence that the mean weight is not $1$ g. Additional Notes: If all one wishes to calculate is the confidence interval for a population mean given a sample taken from it -- one can simply pass to t.test() the data and conf.level arguments and look at the conf.int component of the resulting list, as seen below. > data = c(68,73,68,70,75,57,64,67,74,64,64,66,71,66,59,66) > t.test(data,conf.level=0.95)$conf.int [1] 64.35351 69.64649 [1] 0.95 When conducting a one-tailed test, one should use alternative="less" or alternative="greater", as appropriate. If one should desire to store the $p$-value in a variable to use for some other purpose, one can extract it from the overall test results in the following way: > test.results = t.test(data,alternative="two.sided",mu=1.00,conf.level=0.90) > test.results$p.value [1] 0.6382267 Similarly, we can retrieve the upper and lower bounds of the related confidence interval with > test.results = t.test(data,alternative="two.sided",mu=1.00,conf.level=0.90) > test.results$conf.int[c(1,2)] [1] 0.7070946 1.1692691 Excel: One can build a worksheet for conducting a one sample test concerning a mean when the population's standard deviation is unknown using the functions related to a $t$-distribution. Below is an Here are the relevant formulas: F8:"=COUNTA(C:C)" # the COUNTA() function counts non-empty F9:"=AVERAGE(C:C)" # cells in the range given to it F14:"=IF(EXACT(TRIM(F5),"two.sided"), # the TRIM() function removes extra spaces IF(EXACT(TRIM(F5),"less"), # the EXACT() function returns TRUE when T.INV(F6,F11), # the two strings passed to it agree, and IF(EXACT(TRIM(F5),"greater"), # FALSE otherwise "ERROR")))" # the IF(condition,a,b) function returns # a when condition is TRUE, b otherwise F17:"=IF(F15<F6,"REJECT NULL HYPOTHESIS","FAIL TO REJECT NULL HYPOTHESIS")"
{"url":"https://mathcenter.oxford.emory.edu/site/math117/techTipsOneSampleMeansTest/","timestamp":"2024-11-10T19:31:07Z","content_type":"text/html","content_length":"8202","record_id":"<urn:uuid:f1b306c7-1221-44cd-aae8-479c8bdc0c6c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00419.warc.gz"}
Edge-Selection Heuristics for Computing Tutte Polynomials The Tutte polynomial of a graph, also known as the partition function of the q-state Potts model, is a 2-variable polynomial graph invariant of considerable importance in both combinatorics and statistical physics. It contains several other polynomial invariants, such as the chromatic polynomial and flow polynomial as partial evaluations, and various numerical invariants such as the number of spanning trees as complete evaluations. We have developed the most efficient algorithm to-date for computing the Tutte polynomial of a graph. An important component of the algorithm affecting efficiency is the choice of edge to work on at each stage in the computation. In this paper, we present and discuss two edge-selection heuristics which (respectively) give good performance on sparse and dense graphs. We also present experimental data comparing these heuristics against a range of others to demonstrate their effectiveness. Cite as: Pearce, D., Haggard, G. and Royle, G. (2009). Edge-Selection Heuristics for Computing Tutte Polynomials. In Proc. Fifteenth Computing: The Australasian Theory Symposium (CATS 2009), Wellington, New Zealand. CRPIT, 94. Downey, R. and Manyem, P., Eds. ACS. 151-159. (from crpit.com) (local if available)
{"url":"https://crpit.scem.westernsydney.edu.au/abstracts/CRPITV94Pearce.html","timestamp":"2024-11-01T22:43:40Z","content_type":"text/html","content_length":"2830","record_id":"<urn:uuid:885bc72b-9702-4931-9775-a1ab864cdaa8>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00099.warc.gz"}
Pendulum Motion Vibrations and Waves - Lesson 0 - Vibrations Pendulum Motion A simple pendulum consists of a relatively massive object hung by a string from a fixed support. It typically hangs vertically in its equilibrium position. The massive object is affectionately referred to as the pendulum bob. When the bob is displaced from equilibrium and then released, it begins its back and forth vibration about its fixed equilibrium position. The motion is regular and repeating, an example of periodic motion. Pendulum motion was introduced earlier in this lesson as we made an attempt to understand the nature of vibrating objects. Pendulum motion was discussed again as we looked at the mathematical properties of objects that are in periodic motion. Here we will investigate pendulum motion in even greater detail as we focus upon how a variety of quantities change over the course of time. Such quantities will include forces, position, velocity and energy - both kinetic and potential energy. Force Analysis of a Pendulum Earlier in this lesson we learned that an object that is vibrating is acted upon by a restoring force. The restoring force causes the vibrating object to slow down as it moves away from the equilibrium position and to speed up as it approaches the equilibrium position. It is this restoring force that is responsible for the vibration. So what forces act upon a pendulum bob? And what is the restoring force for a pendulum? There are two dominant forces acting upon a pendulum bob at all times during the course of its motion. There is the force of gravity that acts downward upon the bob. It results from the Earth's mass attracting the mass of the bob. And there is a tension force acting upward and towards the pivot point of the pendulum. The tension force results from the string pulling upon the bob of the pendulum. In our discussion, we will ignore the influence of air resistance - a third force that always opposes the motion of the bob as it swings to and fro. The air resistance force is relatively weak compared to the two dominant forces. The gravity force is highly predictable; it is always in the same direction (down) and always of the same magnitude - mass*9.8 N/kg. The tension force is considerably less predictable. Both its direction and its magnitude change as the bob swings to and fro. The direction of the tension force is always towards the pivot point. So as the bob swings to the left of its equilibrium position, the tension force is at an angle - directed upwards and to the right. And as the bob swings to the right of its equilibrium position, the tension is directed upwards and to the left. The diagram below depicts the direction of these two forces at five different positions over the course of the pendulum's path. sign hanging problems and inclined plane problems. Typically one or more of the forces are resolved into perpendicular components that lie along coordinate axes that are directed in the direction of the acceleration or perpendicular to it. So in the case of a pendulum, it is the gravity force which gets resolved since the tension force is already directed perpendicular to the motion. The diagram at the right shows the pendulum bob at a position to the right of its equilibrium position and midway to the point of maximum displacement. A coordinate axis system is sketched on the diagram and the force of gravity is resolved into two components that lie along these axes. One of the components is directed tangent to the circular arc along which the pendulum bob moves; this component is labeled Fgrav-tangent. The other component is directed perpendicular to the arc; it is labeled Fgrav-perp. You will notice that the perpendicular component of gravity is in the opposite direction of the tension force. You might also notice that the tension force is slightly larger than this component of gravity. The fact that the tension force (Ftens) is greater than the perpendicular component of gravity (Fgrav-perp) means there will be a net force which is perpendicular to the arc of the bob's motion. This must be the case since we expect that objects that move along circular paths will experience an inward or centripetal force. The tangential component of gravity (Fgrav-tangent) is unbalanced by any other force. So there is a net force directed along the other coordinate axes. It is this tangential component of gravity which acts as the restoring force. As the pendulum bob moves to the right of the equilibrium position, this force component is directed opposite its motion back towards the equilibrium position. The above analysis applies for a single location along the pendulum's arc. At the other locations along the arc, the strength of the tension force will vary. Yet the process of resolving gravity into two components along axes that are perpendicular and tangent to the arc remains the same. The diagram below shows the results of the force analysis for several other positions. There are a couple comments to be made. First, observe the diagram for when the bob is displaced to its maximum displacement to the right of the equilibrium position. This is the position in which the pendulum bob momentarily has a velocity of 0 m/s and is changing its direction. The tension force (Ftens) and the perpendicular component of gravity (Fgrav-perp) balance each other. At this instant in time, there is no net force directed along the axis that is perpendicular to the motion. Since the motion of the object is momentarily paused, there is no need for a centripetal force. Second, observe the diagram for when the bob is at the equilibrium position (the string is completely vertical). When at this position, there is no component of force along the tangent direction. When moving through the equilibrium position, the restoring force is momentarily absent. Having been restored to the equilibrium position, there is no restoring force. The restoring force is only needed when the pendulum bob has been displaced away from the equilibrium position. You might also notice that the tension force (Ftens) is greater than the perpendicular component of gravity (Fgrav-perp) when the bob moves through this equilibrium position. Since the bob is in motion along a circular arc, there must be a net centripetal force at this position. The Sinusoidal Nature of Pendulum Motion In the previous part of this lesson, we investigated the sinusoidal nature of the motion of a mass on a spring. We will conduct a similar investigation here for the motion of a pendulum bob. Let's suppose that we could measure the amount that the pendulum bob is displaced to the left or to the right of its equilibrium or rest position over the course of time. A displacement to the right of the equilibrium position would be regarded as a positive displacement; and a displacement to the left would be regarded as a negative displacement. Using this reference frame, the equilibrium position would be regarded as the zero position. And suppose that we constructed a plot showing the variation in position with respect to time. The resulting position vs. time plot is shown below. Similar to what was observed for the mass on a spring, the position of the pendulum bob (measured along the arc relative to its rest position) is a function of the sine of the time. Now suppose that we use our motion detector to investigate the how the velocity of the pendulum changes with respect to the time. As the pendulum bob does the back and forth, the velocity is continuously changing. There will be times at which the velocity is a negative value (for moving leftward) and other times at which it will be a positive value (for moving rightward). And of course there will be moments in time at which the velocity is 0 m/s. If the variations in velocity over the course of time were plotted, the resulting graph would resemble the one shown below. Now let's try to understand the relationship between the position of the bob along the arc of its motion and the velocity with which it moves. Suppose we identify several locations along the arc and then relate these positions to the velocity of the pendulum bob. The graphic below shows an effort to make such a connection between position and velocity. As is often said, a picture is worth a thousand words. Now here come the words. The plot above is based upon the equilibrium position (D) being designated as the zero position. A displacement to the left of the equilibrium position is regarded as a negative position. A displacement to the right is regarded as a positive position. An analysis of the plots shows that the velocity is least when the displacement is greatest. And the velocity is greatest when the displacement of the bob is least. The further the bob has moved away from the equilibrium position, the slower it moves; and the closer the bob is to the equilibrium position, the faster it moves. This can be explained by the fact that as the bob moves away from the equilibrium position, there is a restoring force that opposes its motion. This force slows the bob down. So as the bob moves leftward from position D to E to F to G, the force and acceleration is directed rightward and the velocity decreases as it moves along the arc from D to G. At G - the maximum displacement to the left - the pendulum bob has a velocity of 0 m/s. You might think of the bob as being momentarily paused and ready to change its direction. Next the bob moves rightward along the arc from G to F to E to D. As it does, the restoring force is directed to the right in the same direction as the bob is moving. This force will accelerate the bob, giving it a maximum speed at position D - the equilibrium position. As the bob moves past position D, it is moving rightward aloWikimedia Commons; special thanks to Hubert Christiaen) provides a visual depiction of these principles. The acceleration vector that is shown combines both the perpendicular and the tangential accelerations into a single vector. You will notice that this vector is entirely tangent to the arc when at maximum displacement; this is consistent with the force analysis discussed above. And the vector is vertical (towards the center of the arc) when at the equilibrium position. This also is consistent with the force analysis discussed above. Energy Analysis In a previous chapter of The Physics Classroom Tutorial, the energy possessed by a pendulum bob was discussed. We will expand on that discussion here as we make an effort to associate the motion characteristics described above with the concepts of kinetic energy, potential energy and total mechanical energy. The kinetic energy possessed by an object is the energy it possesses due to its motion. It is a quantity that depends upon both mass and speed. The equation that relates kinetic energy (KE) to mass ( m) and speed (v) is KE = ½•m•v^2 The faster an object moves, the more kinetic energy that it will possess. We can combine this concept with the discussion above about how speed changes during the course of motion. This blending of concepts would lead us to conclude that the kinetic energy of the pendulum bob increases as the bob approaches the equilibrium position. And the kinetic energy decreases as the bob moves further away from the equilibrium position. The potential energy possessed by an object is the stored energy of position. Two types of potential energy are discussed in The Physics Classroom Tutorial - gravitational potential energy and elastic potential energy. Elastic potential energy is only present when a spring (or other elastic medium) is compressed or stretched. A simple pendulum does not consist of a spring. The form of potential energy possessed by a pendulum bob is gravitational potential energy. The amount of gravitational potential energy is dependent upon the mass (m) of the object and the height (h) of the object. The equation for gravitational potential energy (PE) is PE = m•g•h where g represents the gravitational field strength (sometimes referred to as the acceleration caused by gravity) and has the value of 9.8 N/kg. The height of an object is expressed relative to some arbitrarily assigned zero level. In other words, the height must be measured as a vertical distance above some reference position. For a pendulum bob, it is customary to call the lowest position the reference position or the zero level. So when the bob is at the equilibrium position (the lowest position), its height is zero and its potential energy is 0 J. As the pendulum bob does the back and forth, there are times during which the bob is moving away from the equilibrium position. As it does, its height is increasing as it moves further and further away. It reaches a maximum height as it reaches the position of maximum displacement from the equilibrium position. As the bob moves towards its equilibrium position, it decreases its height and decreases its potential energy. Now let's put these two concepts of kinetic energy and potential energy together as we consider the motion of a pendulum bob moving along the arc shown in the diagram at the right. We will use an energy bar chart to represent the changes in the two forms of energy. The amount of each form of energy is represented by a bar. The height of the bar is proportional to the amount of that form of energy. In addition to the potential energy (PE) bar and kinetic energy (KE) bar, there is a third bar labeled TME. The TME bar represents the total amount of mechanical energy possessed by the pendulum bob. The total mechanical energy is simply the sum of the two forms of energy – kinetic plus potential energy. Take some time to inspect the bar charts shown below for positions A, B, D, F and G. What do you notice? When you inspect the bar charts, it is evident that as the bob moves from A to D, the kinetic energy is increasing and the potential energy is decreasing. However, the total amount of these two forms of energy is remaining constant. Whatever potential energy is lost in going from position A to position D appears as kinetic energy. There is a transformation of potential energy into kinetic energy as the bob moves from position A to position D. Yet the total mechanical energy remains constant. We would say that mechanical energy is conserved. As the bob moves past position D towards position G, the opposite is observed. Kinetic energy decreases as the bob moves rightward and (more importantly) upward toward position G. There is an increase in potential energy to accompany this decrease in kinetic energy. Energy is being transformed from kinetic form into potential form. Yet, as illustrated by the TME bar, the total amount of mechanical energy is conserved. This very principle of energy conservation was explained in the Energy chapter of The Physics Classroom Tutorial. The Period of a Pendulum Our final discussion will pertain to the period of the pendulum. As discussed previously in this lesson, the period is the time it takes for a vibrating object to complete its cycle. In the case of pendulum, it is the time for the pendulum to start at one extreme, travel to the opposite extreme, and then return to the original location. Here we will be interested in the question What variables affect the period of a pendulum? We will concern ourselves with possible variables. The variables are the mass of the pendulum bob, the length of the string on which it hangs, and the angular displacement. The angular displacement or arc angle is the angle that the string makes with the vertical when released from rest. These three variables and their effect on the period are easily studied and are often the focus of a physics lab in an introductory physics class. The data table below provides representative data for such a study. ┃ Trial │ Mass (kg) │ Length (m) │ Arc Angle (°) │ Period (s) ┃ ┃ 1 │ 0.02- │ 0.40 │ 15.0 │ 1.25 ┃ ┃ 2 │ 0.050 │ 0.40 │ 15.0 │ 1.29 ┃ ┃ 3 │ 0.100 │ 0.40 │ 15.0 │ 1.28 ┃ ┃ 4 │ 0.200 │ 0.40 │ 15.0 │ 1.24 ┃ ┃ 5 │ 0.500 │ 0.40 │ 15.0 │ 1.26 ┃ ┃ 6 │ 0.200 │ 0.60 │ 15.0 │ 1.56 ┃ ┃ 7 │ 0.200 │ 0.80 │ 15.0 │ 1.79 ┃ ┃ 8 │ 0.200 │ 1.00 │ 15.0 │ 2.01 ┃ ┃ 9 │ 0.200 │ 1.20 │ 15.0 │ 2.19 ┃ ┃ 10 │ 0.200 │ 0.40 │ 10.0 │ 1.27 ┃ ┃ 11 │ 0.200 │ 0.40 │ 20.0 │ 1.29 ┃ ┃ 12 │ 0.200 │ 0.40 │ 25.0 │ 1.25 ┃ ┃ 13 │ 0.200 │ 0.40 │ 30.0 │ 1.26 ┃ In trials 1 through 5, the mass of the bob was systematically altered while keeping the other quantities constant. By so doing, the experimenters were able to investigate the possible effect of the mass upon the period. As can be seen in these five trials, alterations in mass have little effect upon the period of the pendulum. In trials 4 and 6-9, the mass is held constant at 0.200 kg and the arc angle is held constant at 15°. However, the length of the pendulum is varied. By so doing, the experimenters were able to investigate the possible effect of the length of the string upon the period. As can be seen in these five trials, alterations in length definitely have an effect upon the period of the pendulum. As the string is lengthened, the period of the pendulum is increased. There is a direct relationship between the period and the length. Finally, the experimenters investigated the possible effect of the arc angle upon the period in trials 4 and 10-13. The mass is held constant at 0.200 kg and the string length is held constant at 0.400 m. As can be seen from these five trials, alterations in the arc angle have little to no effect upon the period of the pendulum. So the conclusion from such an experiment is that the one variable that effects the period of the pendulum is the length of the string. Increases in the length lead to increases in the period. But the investigation doesn't have to stop there. The quantitative equation relating these variables can be determined if the data is plotted and linear regression analysis is performed. The two plots below represent such an analysis. In each plot, values of period (the dependent variable) are placed on the vertical axis. In the plot on the left, the length of the pendulum is placed on the horizontal axis. The shape of the curve indicates some sort of power relationship between period and length. In the plot on the right, the square root of the length of the pendulum (length to the ½ power) is plotted. The results of the regression analysis are shown. Slope: 1.7536 Slope: 2.0045 Y-intercept: 0.2616 Y-intercept: 0.0077 COR: 0.9183 COR: 0.9999 The analysis shows that there is a better fit of the data and the regression line for the graph on the right. As such, the plot on the right is the basis for the equation relating the period and the length. For this data, the equation is Period = 2.0045•Length^0.5 + 0.0077 Using T as the symbol for period and L as the symbol for length, the equation can be rewritten as T = 2.0045•L^0.5 + 0.0077 The commonly reported equation based on theoretical development is T = 2•Π•(L/g)^0.5 where g is a constant known as the gravitational field strength or the acceleration of gravity (9.8 N/kg). The value of 2.0045 from the experimental investigation agrees well with what would be expected from this theoretically reported equation. Substituting the value of g into this equation, yields a proportionality constant of 2Π/g^0.5, which is 2.0071, very similar to the 2.0045 proportionality constant developed in the experiment. Use the Investigating a Pendulum widget below to investigate the effect of the pendulum length upon the period of the pendulum. Simply type in a value of length into the input field and click on the Submit button. Experiment with various values of pendulum length. We Would Like to Suggest ... Pendulum Motion Interactive. You can find it in the Physics Interactives section of our website. Visit: Pendulum Motion Interactive Check Your Understanding 1. A pendulum bob is pulled back to position A and released from rest. The bob swings through its usual circular arc and is caught at position C. Determine the position (A, B, C or all the same) where the … a. … force of gravity is the greatest? b. … restoring force is the greatest? c. … speed is the greatest? d. … potential energy is the greatest? e. … kinetic energy is the greatest f. … total mechanical energy is the greatest? 2. Use energy conservation to fill in the blanks in the following diagram. 3. A pair of trapeze performers at the circus is swinging from ropes attached to a large elevated platform. Suppose that the performers can be treated as a simple pendulum with a length of 16 m. Determine the period for one complete back and forth cycle. 4. Which would have the highest frequency of vibration? Pendulum A: A 200-g mass attached to a 1.0-m length string Pendulum B: A 400-g mass attached to a 0.5-m length string 5. Anna Litical wishes to make a simple pendulum that serves as a timing device. She plans to make it such that its period is 1.00 second. What length must the pendulum have?
{"url":"https://staging.physicsclassroom.com/class/waves/Lesson-0/Pendulum-Motion","timestamp":"2024-11-06T02:04:40Z","content_type":"application/xhtml+xml","content_length":"237565","record_id":"<urn:uuid:28a03b8b-5b6f-4d75-9dd6-a89c52254000>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00302.warc.gz"}
linear mixed-models coursera quiz » MyMathLab Help| Pay Us to Do Your Statistics Online Today Understanding Mixed Effects Models Qn1. A mixed model is “mixed” because it contains both between-subjects and within-subjects factors. Qn2. Which of the following best describes fixed effects? Fixed effects are manipulated factors whose levels are sampled randomly from a larger population of interest Fixed effects are random factors whose chosen levels are of explicit interest. Fixed effects are random factors whose levels are sampled randomly from a larger population of interest None of the above Qn3. Random effects are called “random” in part because their levels are randomly sampled form a larger population about which wish to generalize Qn4. Linear mixed models (LMMS) can handle Poisson response distributions. Qn5. Which is not an advantage of a linear mixed model (LMM) The ability to handle within-subjects factors The ability to handle unbalanced designs The ability to handle missing data The ability to handle non-normal response distributions The ability to handle violations of sphericity Qn6. Linear mixed models (LMMs) produce small residual degrees of freedom. Qn7. Nesting is useful when the levels of a factor are not meaningful when pooled across all levels of the other factors. Qn8. Nesting is necessary when we wish to calculate the means and variances of a nested factor’s levels only within the levels of the other factors, that is, the nesting factors. Qn9, Linear mixed models (LMMs) generalize the linear model (LM) to non-normal response Qn10. Generalized linear mixed models (GLMMs) generalized the linear mixed model (LMM) to non-normal response distributions. Qn11. Why are planned pairwise comparisons important? (Mark all that apply) Planned pairwise comparisons enable experimenters to communicate more effectively within the public Planned pairwise comparisons force the experiment to consider his or her hypotheses before the data arrives to prevent revisions. Planned pairwise comparisons should be based on a priori hypotheses and therefore prevent “fishing expeditions” for significant p-values Planned pairwise comparisons ensure that research funds are only used for anticipated purposes Planned pairwise comparisons guarantee that significant differences, if they exist, will be found eventually Qn12. Generalized linear mixed models (GLMMs) are capable of handling repeated measures factors via random effects and non-normal response distributions If you are looking for MyMathlab Answers, then you have landed on the right page because we provide correct solutions to all Add a new comment.
{"url":"https://www.mymathlabhomeworkhelp.com/mymathlabanswers/statistics-homework-help/linear-mixed-models-quiz.html","timestamp":"2024-11-12T18:08:23Z","content_type":"text/html","content_length":"31390","record_id":"<urn:uuid:d8e8f262-56d0-47bd-bcd1-f61e534d3b90>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00806.warc.gz"}
Mikolaj KASPRZAK - Essec faculty department: Information Systems, Data Analytics and Operations Mikołaj is a tenure-track assistant professor at ESSEC Business School, specializing in mathematical statistics and applied probability. He obtained his Bachelor’s and Master’s degree in Mathematics, Operational Research, Statistics and Economics from the University of Warwick in the UK and his DPhil in Statistics from the University of Oxford. After his DPhil, he joined the Mathematics Department at the University of Luxembourg as a postdoctoral research associate. Later, he obtained a Marie Skłodowska-Curie Individual (Global) Fellowship, sponsored by the EU. During the outgoing phase of the fellowship, he worked at the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology (MIT) and undertook a short secondment at the Gatsby Computational Neuroscience Unit at University College London (UCL). He then returned to Luxembourg for the incoming phase of the fellowhip, after which he joined ESSEC in 2024. In his research, Mikołaj is interested in providing rigorous quality guarantees for various approximations arising in applied probability, statistics and machine learning. Along the way, he develops new mathematical theory and tools for upper-bounding distances between probability distributions. He enjoys working on theoretical problems and proving new theorems which are motivated by real-life • 2019: Doctor of Philosophy, Statistics (University of Oxford United Kingdom) • 2015: Master of Science, Mathematics, Operational Research, Statistics and Economics (University of Warwick United Kingdom) Full-time academic appointments □ 2024 – Now : Assistant Professor (ESSEC Business School France) □ 2022 – 2023 : Marie Skłodowska-Curie Individual Fellow (Université du Luxembourg Luxembourg) □ 2022 : Marie Skłodowska-Curie Individual Fellow (Secondment) (University College London United Kingdom) □ 2021 – 2022 : Marie Skłodowska-Curie Individual Fellow (Massachusetts Institute of Technology United States of America) □ 2018 – 2021 : Research Associate (Université du Luxembourg Luxembourg) □ 2015 – 2019 : DPhil student (University of Oxford United Kingdom) Other appointments □ 2023 – 2024 : Visiting Researcher (Université du Luxembourg Luxembourg) • 2024 : Junior Chair of Excellence in Data Analytics, CY Initiative • 2021 : Marie Skłodowska-Curie Individual (Global) Fellowship (Commission européenne) • 2015 : Full Doctoral Studentship (UK Engineering and Physical Sciences Research Council, United Kingdom) • 2019 : New Researcher Travel Award (IMS – Bernoulli Society) Research Teaching Other activities
{"url":"https://faculty.essec.edu/en/cv/kasprzak-mikolaj/","timestamp":"2024-11-05T00:56:50Z","content_type":"text/html","content_length":"323863","record_id":"<urn:uuid:c761fb60-219f-4632-80b6-effa61ad3c35>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00194.warc.gz"}
manual pages Find Bonacich Power Centrality Scores of Network Positions power_centrality takes a graph (dat) and returns the Boncich power centralities of positions (selected by nodes). The decay rate for power contributions is specified by exponent (1 by default). power_centrality(graph, nodes = V(graph), loops = FALSE, exponent = 1, rescale = FALSE, tol = 1e-07, sparse = TRUE) graph the input graph. nodes vertex sequence indicating which vertices are to be included in the calculation. By default, all vertices are included. loops boolean indicating whether or not the diagonal should be treated as valid data. Set this true if and only if the data can contain loops. loops is FALSE by default. exponent exponent (decay rate) for the Bonacich power centrality score; can be negative rescale if true, centrality scores are rescaled such that they sum to 1. tol tolerance for near-singularities during matrix inversion (see solve) sparse Logical scalar, whether to use sparse matrices for the calculation. The ‘Matrix’ package is required for sparse matrix support Bonacich's power centrality measure is defined by C_BP(alpha,beta)=alpha (I-beta A)^-1 A 1, where beta is an attenuation parameter (set here by exponent) and A is the graph adjacency matrix. (The coefficient alpha acts as a scaling parameter, and is set here (following Bonacich (1987)) such that the sum of squared scores is equal to the number of vertices. This allows 1 to be used as a reference value for the “middle” of the centrality range.) When beta->1/lambda_A1beta->1/lambda_A1 (the reciprocal of the largest eigenvalue of A), this is to within a constant multiple of the familiar eigenvector centrality score; for other values of β, the behavior of the measure is quite different. In particular, β gives positive and negative weight to even and odd walks, respectively, as can be seen from the series expansion C_BP(alpha,beta) = alpha sum( beta^k A^(k+1) 1, k in 0..infinity )C_BP(alpha,beta) = alpha sum( beta^k A^(k+1) 1, k in 0..infinity ) which converges so long as |beta|<1/lambda_A1|beta|<1/lambda_A1. The magnitude of beta controls the influence of distant actors on ego's centrality score, with larger magnitudes indicating slower rates of decay. (High rates, hence, imply a greater sensitivity to edge effects.) Interpretively, the Bonacich power measure corresponds to the notion that the power of a vertex is recursively defined by the sum of the power of its alters. The nature of the recursion involved is then controlled by the power exponent: positive values imply that vertices become more powerful as their alters become more powerful (as occurs in cooperative relations), while negative values imply that vertices become more powerful only as their alters become weaker (as occurs in competitive or antagonistic relations). The magnitude of the exponent indicates the tendency of the effect to decay across long walks; higher magnitudes imply slower decay. One interesting feature of this measure is its relative instability to changes in exponent magnitude (particularly in the negative case). If your theory motivates use of this measure, you should be very careful to choose a decay parameter on a non-ad hoc basis. A vector, containing the centrality scores. Singular adjacency matrices cause no end of headaches for this algorithm; thus, the routine may fail in certain cases. This will be fixed when I get a better algorithm. power_centrality will not symmetrize your data before extracting eigenvectors; don't send this routine asymmetric matrices unless you really mean to do so. This function was ported (ie. copied) from the SNA package. Carter T. Butts (http://www.faculty.uci.edu/profile.cfm?faculty_id=5057), ported to igraph by Gabor Csardi csardi.gabor@gmail.com Bonacich, P. (1972). “Factoring and Weighting Approaches to Status Scores and Clique Identification.” Journal of Mathematical Sociology, 2, 113-120. Bonacich, P. (1987). “Power and Centrality: A Family of Measures.” American Journal of Sociology, 92, 1170-1182. See Also eigen_centrality and alpha_centrality # Generate some test data from Bonacich, 1987: g.c <- graph( c(1,2,1,3,2,4,3,5), dir=FALSE) g.d <- graph( c(1,2,1,3,1,4,2,5,3,6,4,7), dir=FALSE) g.e <- graph( c(1,2,1,3,1,4,2,5,2,6,3,7,3,8,4,9,4,10), dir=FALSE) g.f <- graph( c(1,2,1,3,1,4,2,5,2,6,2,7,3,8,3,9,3,10,4,11,4,12,4,13), dir=FALSE) # Compute power centrality scores for (e in seq(-0.5,.5, by=0.1)) { print(round(power_centrality(g.c, exp=e)[c(1,2,4)], 2)) for (e in seq(-0.4,.4, by=0.1)) { print(round(power_centrality(g.d, exp=e)[c(1,2,5)], 2)) for (e in seq(-0.4,.4, by=0.1)) { print(round(power_centrality(g.e, exp=e)[c(1,2,5)], 2)) for (e in seq(-0.4,.4, by=0.1)) { print(round(power_centrality(g.f, exp=e)[c(1,2,5)], 2)) version 1.2.4
{"url":"https://igraph.org/r/html/1.2.4/power_centrality.html","timestamp":"2024-11-14T14:09:57Z","content_type":"text/html","content_length":"14327","record_id":"<urn:uuid:3a1e9ac7-9fa3-4da3-a513-9128ede18d2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00575.warc.gz"}
Uncertainty Quantification How accurate are numbers inferred from measurements and simulations? “Uncertainty quantification” (or “UQ” for short) is a research area that has sprung to prominence in the last decade at the interface of applied mathematics, statistics, computational science, and many applications, usually in physical sciences and engineering, but also in biology, finance, and insurance. Uncertainty Quantification In all these applications, there is a real and growing demand to synthesize complex real-time data sets and historical records with equally complex physically-based computational models for the underlying phenomena, in order to perform tasks as diverse as extracting information from novel imaging techniques, designing a new automobile or airplane, or predicting and mitigating the risks arising from climate change. Simply put, UQ is the end-to-end study of the impact of all forms of error and uncertainty in the models arising in the applications. The questions considered range from fundamental, mathematical, and statistical questions to practical questions of computational accuracy and cost (FN:T. J. Sullivan. Introduction to Uncertainty Quantification. Springer, 2015.). Research into these questions takes place at ZIB in the Uncertainty Quantification group, and finds applications in other working groups such as Computational Medicine, Computational Molecular Design, and Visualization. A general introduction to UQ and Bayesian inverse problems Roughly speaking, UQ divides into two major branches, forward and inverse problems. In the forward propagation of uncertainty, we have a known model F for a system of interest. We model its inputs X as a random variable and wish to understand the output random variable Y = F(X), sometimes denoted Y|X, read as “Y given X.” There is a substantial overlap between this area and sensitivity analysis, since the random variations in X probe the sensitivity of the forward model F to changes of its inputs. In the other major branch of UQ, inverse problems, F is still the forward model, but now Y denotes some observed data, and we wish to infer inputs X so that F(X) = Y; that is, we want not Y|X but X|Y. Here, there is substantial overlap with statistical inference and with imaging and data analysis. Inverse problems typically have no solutions in the literal sense, because our models F are only approximate, there is no X for which F(X) agrees exactly with the observed data Y. Therefore, it becomes necessary to relax the notion of a solution, and often also to incorporate prior or expert knowledge on what a “good” solution X might be. In recent years, the Bayesian perspective on inverse problems has received much attention, because it is powerful and flexible enough to meet these requirements, and because increased computational power makes the Bayesian approach more feasible than in decades past (FN:A. M. Stuart. Inverse problems: A Bayesian perspective, Acta Numer. 19:451559, 2010.). In the Bayesian perspective, we again regard X and Y as random variables and encode our prior expert knowledge about X into a prior probability distribution p(X), which we condition with respect to Y using Bayes’ rule p(X|Y) = p(Y|X) p(X) / p(Y) to produce a so-called posterior distribution for X|Y, which is the mathematical representation of all knowledge/uncertainty about X, given the observed data Y and the expert knowledge encoded in the prior p(X). However, the posterior p(X|Y) is typically a very complicated distribution on a large space (the space of unknown Xs that we are trying to infer), so there are major mathematical and computational challenges in implementing this fully Bayesian approach, and in producing appropriate simplifications for end users. UQ in systems biology and parameter estimation The Systems Biology group at ZIB deals with the computational and mathematical modeling of complex biological systems: metabolic networks or cell signaling networks, for example. Usually, the models consist of large systems of ordinary differential equations that describe the change of concentrations of the involved species over time. Such models involve a huge number of model parameters representing, for example, reaction rate constants, volumes, or effect concentrations. Only a few of these parameters are measurable, sometimes their approximate range of values is known, but often their order of magnitude is completely unknown. The aim is to estimate parameter values in such a way that simulation results match with given experimental data and that predictions can be made about the system’s behavior under external influences: thus, these problems fall under the general heading of inverse problems. The data, however, are prone to measurement errors and are therefore uncertain. Given a fixed set of data and a statistical model for the error, one can solve an optimization problem to compute a single set of parameter values that make the observed data the most probable given the model (FN:D. P. Moualeu-Ngangue, S. Röblitz, R. Ehrig, P. Deuflhard. Parameter Identification in a Tuberculosis Model for Cameroon. PLoS ONE 10[4]:e0120607. doi:10.1371/journal.pone.0120607, 2015.). The function to be optimized is called likelihood function. In practice, this optimization problem is often difficult to solve, because its solution is not unique. In other words, there exist several different sets of parameter values that all lead to equally good fits to the data. Alternatively, when prior knowledge about the parameters is postulated, parameters can be treated as random variables in the framework of Bayesian statistics. This allows you to compute a joined probability distribution for the parameters, called the posterior. Performing model simulations with parameters sampled from this posterior distribution gives insight into the variability and uncertainty of the model output. A possible application is the prediction of patient-specific treatment success rates based on a physiological model for a specific disease or health status. Given some measurement data from a large group of patients, one can construct a prior distribution that reflects the variability of parameters within the patient population (FN:I. Klebanov, A. Sikorski, C. Schütte, S. Röblitz. Prior estimation and Bayesian inference from large cohort data sets. ZIB-Report 16-09. Zuse Institute Berlin, 2016.). Using this prior and a patient-specific data set, an individual posterior can be computed. Running model simulations including the treatment with parameters sampled from the posterior then allows us to quantify the failure probability of the treatment and to analyze under which conditions a treatment fails for a specific patient. UQ in optical metrology Optical metrology uses the interaction of light and matter to measure quantities of interest, like geometry or material parameters. Reliable measurements are essential preconditions in nanotechnology. For example, in the semiconductor industry, optical metrology is used for process control in integrated circuit manufacturing, as well as for photomask quality control. In this context, measurements of feature sizes need to be reliable down to a sub-nanometer level. As the measurement amounts to solving an inverse problem for given experimental results, methods for both simulating light-matter interaction in complex 3-D shapes and uncertainty quantification are required. A typical experimental setup consists of a well-defined light source, the scattering object, and a detector for the scattered light. Figure 3 shows a setup with fixed illumination wavelength and variable settings for polarization and direction of incident and detected light components. However, wavelength-dependent measurements are also typical in optical metrology. The detected spectra are used to reconstruct the scattering structure. The choice of illumination and detection conditions directly impacts the measurement sensitivity. The scatterers to be analyzed are manifold in the semiconductor industry, for example. Gratings, as well as active components like FinFETs or VNANDs, are of interest. In a collaborative study of Physikalisch-Technische Bundesanstalt (PTB) and ZIB’s research group on Computational Nano Optics, the grating shown in Figure 4 (left) has been studied. Its geometric parameters, as shown in Figure 5, have been identified by a Gauß-Newton method. Reconstruction results are summarized in Table 1. The simulated spectral reflection response for the reconstructed parameters matches the experimental data very closely. Estimating the posterior covariance by local Taylor approximation reveals a satisfactory accuracy and indicates the high quality of the implemented models (FN:S. Burger, L. Zschiedrich, J. Pomplun, S. Herrmann, F. Schmidt. hp-finite element method for simulating light scattering from complex 3D structures. 9424: 94240Z, 2015.). Hence, the approach is a promising candidate for investigations of far more complex semiconductor structures, as exemplarily shown in the center and bottom parts of Figure 4. Molecular design THE Challenge FOR Computational Molecular Design Molecular systems have metastable states with low transition probabilities between these states. For the optimal design of interacting molecules (drugs, analytes, sensors) it is often very important to adjust some of these transition probabilities by redesigning parts of the molecular system, in order to increase affinity or specificity, for example. The Computational Molecular Design group predicts the essential transition probabilities of molecular systems on the basis of molecular dynamics (MD) simulations. To save on computational costs, one aims at extracting the essential transition probabilities from as few as possible short MD simulations. The trajectories that result from each of these simulations can be considered as observed data that are used to infer the desired transition probabilities. Uncertainty of transition probabilities The challenge for the Computational Molecular Design group is to compute the long-term behavior of molecular systems on the basis of only a few short-time molecular simulations. Molecular systems exhibit a multiscale dynamical behavior: they undergo rare transitions between metastable states, for example. If the statistical analysis of these transitions is based on a few short-time simulations (the observed data Y) only, then the correct transition probabilities (the inferred unknowns X) are uncertain. In this case the posterior distribution for X|Y is a distribution on transition matrix space that reflects the uncertainty about the transition matrix X, given the observed simulation data Y and the expert knowledge encoded in the prior p(X). Applying appropriate UQ algorithms leads to an ensemble of transition matrices. This ensemble of transition matrices allows us to compute the uncertainty in the quantities extracted from the transition matrices, the dominant timescales of the system, for example. It was recently demonstrated how to utilize the insight into these uncertainties for understanding the physical/chemical meaning of the indeterminacy of metastable states (FN:M. Weber, K. Fackeldey: Computing the Minimal Rebinding Effect Included in a Given Kinetics. Multiscale Model. Simul., 12[1]:318-334, 2014.) which led to new design principles for synthetic chemistry (FN:C. Fasting, C.A. Schalley, M. Weber, O. Seitz, S. Hecht, B. Koksch, J. Dernedde, C. Graf, E.-W. Knapp, R. Haag. Multivalency as a Chemical Organization and Action Principle, Angew. Chem. Int. Ed. 51 [42]:10472–10498, 2012.). Incomplete simulation data If only some and not all of the essential transition probabilities are known, then the corresponding transition matrix is incomplete, or some entries are missing. We have constructed matrix completion algorithms for estimating these entries on the basis of the given simulation data without performing further simulations, which saves a lot of computational effort. In order to estimate the unknown entries, different assumptions are useful. One can consider a low-rank transition matrix, or assume the matrix to be doubly-stochastic and comprise only some major transition paths, or that it is a similarity matrix of a small number of clustered objects. Changing the molecular structure Another important question is how to reduce computational costs when simulating various molecular systems which only differ in some part of the molecular system that is small (Figure 6) compared to the size of the entire system. We investigated methods that allow for the prediction of transition matrices for a changed molecular system on the basis of the simulated original system (FN:Ch. Schütte, A. Nielsen, M. Weber. Markov State Models and Molecular Alchemy. Molecular Physics, 113[1]:69-78, 2015.),(FN:A. Nielsen. Computation Schemes for Transfer Operators. Doctoral dissertation, Freie Universität Berlin, 2015.). The mathematical background is based on the theory of path integrals or reweighting in path space, which plays a major role in many of our present and future UQ in computer vision and geometry reconstruction Computer Vision Computer vision extracts numerical or symbolic information from image data, using methods for acquiring, processing, analyzing, and understanding images. If decisions are based on such information, it is important to know how reliable it is. Thus, UQ becomes necessary for processing pipelines in computer vision. This is illustrated by the following three examples: 1) An orthopedic surgeon has to decide, based on a computed tomography (CT) scan of a patient, whether a bone is solid enough to permit fixation of a screw that holds a certain maximum load. Such a decision requires extraction of the bone’s thickness and material density – including error bounds – from the CT scan. 2) In pathology, many conclusions are drawn, based on frequencies of cell types in biopsies. Reliable diagnoses require knowledge not just of cell counts in microscopic images, but also of the uncertainties related to the identification and classification of cells. 3) In self-driving cars, potentially far-reaching decisions have to be taken based on information from video images; examples are the route and speed of the car, considering estimated trajectories of vehicles and pedestrians, for example. The uncertainty of such image-based information needs to be estimated to be taken into account. Obviously, a wealth of research topics arises. At ZIB, research has started concerning UQ in image segmentation. In a first approach, images are considered as random fields, and spatial probabilities for membership functions are computed. This works for basic segmentation techniques that utilize only local information, like thresholding, for example. Consideration of non-local information renders UQ into a much more difficult problem. A fundamental problem is to capture the modeling uncertainty in image segmentation. Geometry Reconstruction The 3-D shape of human anatomy, as needed in particular for surgery and therapy planning tasks, is usually extracted from 3-D image stacks provided by CT scans. Dose reduction requires a limit on the number of X-ray projections entering the CT to very few, but these do not contain enough information to reconstruct 3-D image stacks. The anatomy, however, can be described by statistical shape models in terms of a small number of parameters. Using such shape models as priors renders the Bayesian inverse problem well-posed and allows an efficient computation of maximum posterior estimates (FN:M. Ehlke, T. Frenzel, H. Ramm, M. A. Shandiz, C. Anglin, S. Zachow. Towards Robust Measurement Of Pelvic Parameters From AP Radiographs Using Articulated 3D Models. Computer Assisted Radiology and Surgery [CARS], 2015.). The research groups on Therapy Planning, Computational Medicine, and Visual Data Analysis in Science and Engineering investigate the properties of the posterior density and the design of X-ray acquisition to minimize the uncertainty. Due to the nonlinear impact of shape changes on the projection images, the posterior density is a mixture of Gaussian and Laplacian distributions. This makes UQ by local Gaussian approximation unreliable and requires the use of sampling techniques or a more accurate modeling of the impact of nonlinearities. Uncertainty visualization In many applications of information processing, humans are (still) involved; examples are explorative data analysis where humans try to gain understanding or making decisions based on computed information. In such cases, information must be conveyed to humans, ideally by visualization. If the information is subject to uncertainty, this should also be communicated visually. Research in the group Visual Data Analysis in Science and Engineering at ZIB focuses on the visualization of uncertainties in spatial and spatiotemporal data, particularly on cases where certain “features” in the data are of interest. Examples of such features are level sets, critical points, and ridges in scalar fields, or critical points in vector fields. For uncertain data that can be represented as random fields, an approach has been developed to compute and depict the resulting spatial or spatiotemporal uncertainties of such features (FN:K. Pöthkow. Modeling, Quantification and Visualization of Probabilistic Features in Fields with Uncertainties. Doctoral dissertation, Freie Universität Berlin, 2015),(FN:K. Pöthkow, H.-C. Hege [2013]. Nonparametric models for uncertainty visualization. Computer Graphics Forum. 32[3]:131–140.). Here, expectation values of feature indicators are computed with Monte Carlo methods; these can be interpreted as probabilities of the presence of a feature at some point in space or space-time. In Scientific Computing, uncertainties are often captured by ensemble computations, where instead of a single computation many computations are performed, with varying parameters or even mathematical models. This approach is used in meteorology and climatology, for example. If the results of such ensemble computations can be summarized in random fields, our methods for uncertainty visualization apply directly.
{"url":"https://www.zib.de/research/features/feature/uncertainty-quantification","timestamp":"2024-11-06T19:21:45Z","content_type":"text/html","content_length":"59462","record_id":"<urn:uuid:055980c3-1c0e-4035-bc3e-95674a6146c1>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00121.warc.gz"}
Understanding Mathematical Functions: Which Function Counts How Many C Mathematical functions are integral to the effective use of spreadsheets, especially in programs like Excel. By understanding different functions and how they work, users can optimize their data analysis and manipulation within the software. One such function that is often utilized is the one that counts how many cells in a given range contain numbers. In this blog post, we will explore the importance of understanding mathematical functions in Excel and take a closer look at the function that counts cells with numbers. Key Takeaways • Understanding mathematical functions in Excel is crucial for optimizing data analysis and manipulation. • The COUNT function is commonly used to count the number of cells in a range that contain numbers. • The SUM and IF functions can also be used to count cells with numbers and offer different advantages. • Best practices and common mistakes should be considered when using mathematical functions in Excel. • Regular practice and experimentation with different functions is essential for mastering their versatility and utility. Understanding Mathematical Functions In the world of Excel and other spreadsheet software, mathematical functions are essential tools for performing calculations and analyzing data. These functions are pre-built formulas that can be used to perform a wide range of mathematical operations, from basic arithmetic to complex statistical analysis. Understanding how to use mathematical functions is crucial for anyone working with data or conducting analysis in Excel. A. Definition of mathematical functions Mathematical functions in Excel are pre-defined formulas that take a range of input values and return a single output value. These functions can be used to perform a variety of mathematical operations, such as addition, subtraction, multiplication, division, and more. They can also be used for more advanced operations, such as statistical analysis, financial calculations, and engineering calculations. B. Examples of common mathematical functions in Excel • 1. SUM: This function adds together the values in a range of cells. • 2. AVERAGE: This function calculates the average of the values in a range of cells. • 3. COUNT: This function counts how many cells in a range contain numbers. • 4. MAX: This function returns the largest value in a range of cells. • 5. MIN: This function returns the smallest value in a range of cells. C. Importance of knowing how to use mathematical functions Knowing how to use mathematical functions in Excel is essential for anyone working with data or conducting analysis in a spreadsheet. These functions can help streamline and automate calculations, saving time and reducing the risk of errors. They also enable users to perform complex analyses that would be difficult or time-consuming to do manually. In addition, having a good understanding of mathematical functions can make it easier to interpret and communicate the results of data analysis. Counting Cells in a Range Containing Numbers When working with large sets of data in Excel, it is often necessary to count the number of cells within a range that contain numbers. The COUNT function in Excel is a powerful tool that allows you to accomplish this task with ease and efficiency. Explanation of the COUNT function in Excel The COUNT function in Excel is used to count the number of cells within a range that contain numerical values. It is a simple yet highly useful function that can save you a significant amount of time when working with large data sets. How to use the COUNT function to count cells with numbers in a range To use the COUNT function, simply enter "=COUNT(range)" into a cell, where "range" is the range of cells that you want to count. For example, to count the number of cells containing numbers in the range A1:A10, you would enter "=COUNT(A1:A10)" into a cell. The function will return the total count of cells within the specified range that contain numerical values. • Step 1: Select the cell where you want the count to appear. • Step 2: Enter the formula "=COUNT(range)", replacing "range" with the actual range of cells you want to count. • Step 3: Press Enter to calculate the count. Benefits of using the COUNT function The COUNT function provides several benefits when it comes to counting cells with numbers in a range. • Efficiency: The COUNT function allows you to quickly and accurately count the number of cells containing numbers within a range, saving you time and effort. • Accuracy: The function ensures that you do not miss any cells containing numerical values, providing a reliable count of numbers within the specified range. • Flexibility: The COUNT function can be used with ranges of any size, making it a versatile tool for a wide range of data analysis tasks. Using Other Functions for Counting Cells with Numbers When working with mathematical functions in Excel, it's important to understand the various options available for different tasks. One common task is counting the number of cells in a range that contain numbers. In addition to the COUNT function, the SUM function can also be used for this purpose. Let's explore the SUM function and how it compares to the COUNT function for counting cells with numbers. A. Explanation of the SUM function The SUM function in Excel is typically used to add up a range of numbers. It takes a range of cells as its argument and returns the sum of all the numbers in that range. B. How to use the SUM function to count cells with numbers in a range While the primary purpose of the SUM function is to add up numbers, it can also be used to count the number of cells in a range that contain numbers. By using the SUM function in combination with other functions, such as ISNUMBER and IF, you can achieve this counting task. • First, use the ISNUMBER function to check if each cell in the range contains a number. This function returns TRUE if the cell contains a number and FALSE if it does not. • Next, use the IF function to convert the TRUE and FALSE values returned by ISNUMBER into 1s and 0s, respectively. • Finally, use the SUM function to add up the 1s and 0s, resulting in the count of cells with numbers in the range. C. Comparison of COUNT and SUM functions for counting cells with numbers While the SUM function can be used to count cells with numbers, the primary function for this task is the COUNT function. The COUNT function simply returns the number of cells in a range that contain numbers, without the need for additional manipulation like the SUM function requires. When deciding between the COUNT and SUM functions for counting cells with numbers, it's important to consider the simplicity and clarity of the COUNT function versus the flexibility and potential complexity of using the SUM function for this purpose. Understanding the IF Function When it comes to working with mathematical functions in Excel, the IF function is a powerful tool that allows users to perform logical tests and return specific values based on the results. In this blog post, we will explore how the IF function can be used to count cells with numbers in a given range. Explanation of the IF function in Excel The IF function in Excel allows users to perform a logical test and return one value if the test evaluates to true, and another value if the test evaluates to false. The syntax for the IF function is as follows: =IF(logical_test, value_if_true, value_if_false). This function is commonly used in various scenarios to automate decision-making processes based on specific criteria. How to use the IF function to count cells with numbers in a range One of the useful applications of the IF function is to count the number of cells in a specified range that contain numerical values. By using a logical test within the IF function, users can easily determine the count of cells meeting specific criteria. For example, the formula =COUNTIF(A1:A10, ">=0") can be used to count the number of cells in the range A1:A10 that contain numbers greater than or equal to zero. Advantages of using the IF function for counting cells with numbers • Accuracy: The IF function provides a precise method for counting cells with numbers based on defined conditions, ensuring accurate results. • Flexibility: Users can customize the logical test within the IF function to cater to specific requirements, allowing for a wide range of counting scenarios. • Efficiency: By leveraging the IF function, users can streamline the process of counting cells with numbers, saving time and effort in data analysis tasks. Tips for Using Mathematical Functions in Excel When using mathematical functions in Excel, it's important to follow best practices, avoid common mistakes, and know how to troubleshoot issues that may arise. Best practices for using mathematical functions in Excel • Use the appropriate function: Choose the right mathematical function for the task at hand. For example, the COUNT function counts cells containing numbers, while the SUM function adds the values in a range of cells. • Format your data properly: Ensure that the cells you are working with are formatted correctly. For example, if you're using the COUNT function to count cells with numbers, make sure the cells are formatted as numbers. • Double-check your formula: Before finalizing your formula, double-check for any errors or typos that may affect the outcome. Common mistakes to avoid when using mathematical functions • Using the wrong function: Using the wrong mathematical function can lead to incorrect results. Make sure to choose the function that best fits your needs. • Ignoring error messages: If Excel displays an error message, don't ignore it. Take the time to understand what went wrong and correct the issue. • Not understanding the function's requirements: Some mathematical functions have specific requirements, such as the need for cells to be formatted a certain way. Make sure you understand these requirements before using the function. How to troubleshoot issues with mathematical functions • Check your data: Ensure that the data you are working with is accurate and formatted correctly. • Review your formula: Go through your formula step by step to identify any potential errors or issues. • Seek help: If you're unable to troubleshoot the issue on your own, reach out to a colleague or search for resources online to get the assistance you need. Understanding mathematical functions is crucial for making the most of your data in Excel. By mastering different functions, you can efficiently manipulate and analyze your data, saving time and effort. I encourage you to practice using different functions in Excel to become more comfortable with their capabilities. The function COUNT is particularly useful for counting how many cells in a range contain numbers, making it a valuable tool for data analysis. The versatility and utility of mathematical functions make them essential for anyone working with data in Excel. ONLY $99 Immediate Download MAC & PC Compatible Free Email Support
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-which-function-counts-cells-range-contain-numbers","timestamp":"2024-11-09T03:35:23Z","content_type":"text/html","content_length":"216624","record_id":"<urn:uuid:7fbf1bb2-f932-4368-a1cd-a10349abc251>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00738.warc.gz"}
Nature's abacus Soon after language develops, it is safe to assume that humans begin counting - and that fingers and thumbs provide nature's . The decimal system is no accident. Ten has been the basis of most counting systems in history. When any sort of record is needed, notches in a stick or a stone are the natural solution. In the earliest surviving traces of a counting system, numbers are built up with a repeated sign for each group of 10 followed by another repeated sign for 1. Arithmetic cannot easily develop until an efficient numerical system is in place. This is a late arrival in the story of mathematics, requiring both the concept of Place value and the idea of As a result, the early history of mathematics is that of geometry and algebra. At their elementary levels the two are mirror images of each other. A number expressed as two squared can also be described as the area of a square with 2 as the length of each side. Equally 2 cubed is the volume of a cube with 2 as the length of each dimension. Egyptian numbers: 3000-1600 BC In Egypt, from about 3000 BC, records survive in which 1 is represented by a vertical line and 10 is shown as ^. The Egyptians write from right to left, so the number 23 becomes lll^^ If that looks hard to read as 23, glance for comparison at the name of a famous figure of our own century - Pope John XXIII. This is essentially the Egyptian system, adapted by Rome and still in occasional use more than 5000 years after its first appearance in human records. The scribes of the Egyptian pharaohs (whose possessions are not easily counted) use the system for some very large numbers - unwieldy though they undoubtedly are (see a large Egyptian number). Babylonian numbers: 1750 BC The Babylonians use a numerical system with 60 as its base. This is extremely unwieldy, since it should logically require a different sign for every number up to 59 (just as the decimal system does for every number up to 9). Instead, numbers below 60 are expressed in clusters of ten - making the written figures awkward for any arithmetical computation. Through the Babylonian pre-eminence in , their base of 60 survives even today in the 60 seconds and minutes of angular measurement, in the 180 degrees of a triangle and and in the 360 degrees of a circle. Much later, when can be accurately measured, the same system is adopted for the subdivisions of an hour. The Babylonians take one crucial step towards a more effective numerical system. They introduce the place-value concept, by which the same digit has a different value according to its place in the sequence. We now take for granted the strange fact that in the number 222 the digit '2' means three quite different things - 200, 20 and 2 - but this idea is new and bold in For the Babylonians, with their base of 60, the system is harder to use. For them a number as simple as 222 is the equivalent of 7322 in our system (2 x 60 squared + 2 x 60 + 2). The place-value system necessarily involves a sign meaning 'empty', for those occasions where the total in a column amounts to an exact multiple of 60. If this gap is not kept, all the digits before it will appear to be in the wrong column and will be reduced in value by a factor of 60. Another civilization, that of the , independently arrives at a place-value system - in their case with a base of 20 - so they too have a symbol for zero. Like the Babylonians, they do not have separate digits up to their base figure. They merely use a dot for 1 and a line for 5 (writing 14, for example, as 4 dots with two lines below them). Zero decimal system Arabic numerals: 300 BC - AD 1000 In the Babylonian and Mayan systems the written number is still too unwieldy for efficient arithmetical calculation, and the zero symbol is only partly effective. For zero to fulfil its potential in mathematics, it is necessary for each number up to the base figure to have its own symbol. This seems to have been achieved first in India. The digits now used internationally make their appearance gradually from about the 3rd century BC, when some of them feature in the inscriptions of The Indians use a dot or small circle when the place in a number has no value, and they give this dot a name - , meaning 'empty'. The system has fully evolved by about AD 800, when it is adopted also in . The use the same 'empty' symbol of dot or circle, and they give it the equivalent Arabic name, About two centuries later the Indian digits reach Europe in Arabic manuscripts, becoming known as Arabic numerals. And the Arabic is transformed into the 'zero' of modern European languages. But several more centuries must pass before the ten Arabic numerals gradually replace the system inherited in Europe from the The abacus: 1st millennium BC In practical arithmetic the merchants have been far ahead of the scribes, for the idea of zero is in use in the market place long before its adoption in written systems. It is an essential element in humanity's most basic counting machine, the abacus. This method of calculation - originally simple furrows drawn on the ground, in which pebbles can be placed - is believed to have been used by Babylonians and Phoenicians from perhaps as early as 1000 BC. In a later and more convenient form, still seen in many parts of the world today, the abacus consists of a frame in which the pebbles are kept in clear rows by being threaded on rods. Zero is represented by any row with no pebble at the active end of the rod. Roman numerals: from the 3rd century BC The completed Decimal system is so effective that it becomes, eventually, the first example of a fully international method of communication. But its progress towards this dominance is slow. For more than a millennium the numerals most commonly used in Europe are those evolved in Rome from about the 3rd century BC. They remain the standard system throughout the Middle Ages, reinforced by Rome's continuing position at the centre of western civilization and by the use of Latin as the scholarly and legal language. Binary numbers: 20th century AD The 20th century introduced another international language, which most of us use but few are aware of. This is the binary language of computers. When interpreting coded material by means of electricity, speed in tackling a simple task is easy to achieve and complexity merely complicates. So the simplest possible counting system is best, and this means one with the lowest possible base - 2 rather than 10. Instead of zero and 9 digits in the decimal system, the binary system only has zero and 1. So the binary equivalent of 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 is 1, 10, 11, 100, 101, 110, 111, 1000, 1001 and 1010 - and so ad infinitum.
{"url":"https://timelines.historyworld.net/history/countingSystemsAndNumerals/169?heading=theAbacus","timestamp":"2024-11-04T03:52:32Z","content_type":"text/html","content_length":"59637","record_id":"<urn:uuid:a23ad1b4-bc89-4d2f-87af-d2d03e473e78>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00633.warc.gz"}
athematics for Math 210 Mathematics for K-8 Teachers OPTIONAL TEXT A Problem Solving Approach to Mathematics for Elementary School Teachers by Billstein, Libeskind, and Lott (8th Ed). REQUIRED READING You must download chapters in PDF format (free!). They are required reading and will guide the mathematical content covered in the course. The chapters contain the educational goals determined by standards linked to the curriculum, sample problems from K-8 textbooks, and Released Items. This combination helps you answer the questions “What do we need to know?” and “Why do we need to know this?” The posted chapters also contain the homework problems. SUPPLIES Portable whiteboard, eraser, and whiteboard markers. BRIEF COURSE DESCRIPTION This course is an intensive exploration of mathematical concepts and content commonly taught in grades K-8: problem-solving strategies; sets; functions; logic (quantifiers, conditional and biconditional statements); numeration systems; addition, subtraction, multiplication, and division of whole numbers; integers; greatest common divisor and least common multiple; addition, subtraction, multiplication, and division of rational numbers; integer and exponents; decimals and operations on decimals; percents; and algebraic thinking. See chapter highlights for more information … In this course we will focus on the math concepts and content for grades K-8: • reconceptualize mathematics you think you already know • learn mathematics at a much deeper level • experience various types of reasoning • learn problem solving strategies • make connections among mathematical topics • enjoy and appreciate mathematics LECTURE NOTES Lecture notes will be posted the evening before class. Please bring your copy of the notes to class. CLASS ACTIVITIES Some class meetings includes structured cooperative learning activities to reinforce fundamental mathematical concepts, to learn from others, to increase willingness to try new problems, and to improve frequency of success in problem solving. Cooperative learning activities represent opportunities to meet other students in the class to form study groups to work on the homework and prepare for the exams . Cooperative learning activities also help the class to maintain a positive classroom environment. The problems contained in the activities summarize the key concepts for the material covered. The problems we solve together meet educational goals determined by standards established by the National Council of Teachers of Mathematics (NCTM). • Grading selected problems. Only selected problems in the assigned homework will be graded, but all the assigned problems must be worked for credit and to prepare for the exams. Each homework assignment is worth 20 points, with 10 points maximum for “effort” reflected in the submitted homework and 10 points maximum for “correctness” of the graded problems. Sloppy, careless, slipshod, disheveled, dowdy, shabby, unconcerned, frowzy, or neglectful handwriting and improper use of mathematical notation typically results in less points for “effort,” therefore homework assignments should be written carefully and concisely. • Incomplete homework. 1 point will be deducted for each assigned problem that is skipped, ignored, overlooked, omitted, disregarded, or forgotten, which can lead to low scores . • Show your work (SYW). You must show your work (SYW) to homework questions for credit. Partial credit will be given as generously as appropriate. • Link between homework assignments and your course grade. There is a remarkable, noticeable, prominent, outstanding, conspicuous, striking correlation between your grade on the homework assignments and your course grade, so please give the homework assignments the attention they deserve. • Help! Many of the problems we solve in class will resemble the homework problems you have been assigned. You can work with other students, you can use my office hours, or you can see me after class for additional help . You can also receive free walk-in tutoring from the CSUSM Math Lab, located in 1104 Kellog Library. • Homework assignments and the writing requirement. The 2500 word writing requirement will be exceeded by these homework assignments. • Solutions. Detailed answers to the assignments will be made available after the due date. • Late Homework. Late homework will not be accepted, without exception (e.g., car trouble, illness, emergency, …). But your two lowest homework scores will be dropped. You can scan it and submit by email as a single PDF file. EXAMS The date of the exams will be announced one week in advance. You should study each weekend to avoid cramming, which often causes confusion, frustration, disorder, chaos, agitation, disarray, jumble, tangles, disturbances, and hullabaloo. GRADING Assigned homework, two exams, and a final exam will be used to determine the final course grade. These components have different relative importance: │Exam 1 │Exam 2 │Homework │Final Exam │ │100 pts│100 pts│100 pts (scaled) │100 pts │ │25% │25% │15% │35% │ Letter grades are assigned according to the following rule: 100-90 = A, 89-80 = B, 79 – 65 = C, 64 – 50 = D, 49 – 0 = F EXPECTATIONS In this course, you will be expected to • Communicate ideas orally and in writing. • Represent mathematical concepts using words, diagrams, algebra, manipulatives, and contextualized situations. • Learn problem solving strategies. • Independently solve problems. • Students who miss class are still responsible for announcements or changes regarding the course outline, homework assignments, due dates, and exam dates. • Check your email regularly for announcements regarding this class. • If you need to leave early, please inform me before class. • You will need a password (TBA) to access posted material. • Do not “cross-talk” during the lecture—it’s rude, disruptive, and disrespectful to everyone. • Please put your electronic gadgets in “silent mode” or “manner mode” before class begins. • Late homework will not be accepted. • Extra-credit work will be given inclusively beyond the course requirements. Stay tuned! • Everyone shares the responsibility in making this class an enjoyable place to learn useful mathematics, make inevitable mistakes, and share constructive ideas. Please help me create such an environment. MW 9am-10:15am → Mon, May 14 from 9:15am-11:15am MW 4pm-5:15pm → Mon, May 14 from 4pm-6pm Chapter 1 Problem Solving and Reasoning Inductive reasoning is introduced as a way to make conclusions or generalizations. Patterns are described, extended, and generalized. Tables are used to organize and see patterns. Algebra is used to generalize some patterns. Polya’s four phases of problem solving process are discussed, and problem solving strategies are illustrated with a variety of word problems. The roles of variables are discussed, along with the correspondence between word phrases and algebraic expressions . Additive and multiplicative reasoning are introduced. An introduction to the language of logic (statements, quantifiers, …) is given. Euler diagrams and truth tables are used to represent and analyze arguments. Chapter 2 Sets, Place Value, Addition and Subtraction with Whole Numbers Sets, operations with sets, and Venn diagrams are introduced. Place value, expanded form , positional enumeration systems, counting, word forms, rounding, estimation, and number sense are discussed. Models of addition and subtraction are discussed, and additive reasoning is reinforced. The Singaporean math model is used to represent and solve problems. Properties are used to promote number sense and some algebraic reasoning. Fact families help prove basic interesting algebraic relationships. Base-10 models are used to develop addition and subtraction algorithms. The partial sums method and partial differences method are used, along with a variety of other addition and subtraction algorithms. Estimation is also discussed. Basefive and base-twelve addition and subtraction are also discussed. Chapter 3 Multiplication and Division with Whole Numbers Multiplication is defined as combining equal-sized groups, as recommended in the literature. The various models of multiplication and division are discussed, along with the types of word problems that have multiplicative structure. The Singaporean math model is used to represent and solve problems. Properties of multiplication, which promote algebraic understanding and proficiency, are explored using inductive, deductive, and algebraic reasoning. The three uses of the Division Algorithm are also addressed. The partial products method and partial quotients method are used to develop the traditional multiplication and division algorithms. Mental arithmetic, adjustments, compatible numbers, and estimation are also discussed. Chapter 4 Number Theory and Integers The equivalent meanings of the symbol a|b are given, and divisibility tests are addressed. The Sieve of Eratosthenes, Fundamental Theorem of Arithmetic, LCM, GCF, and Euclidean Algorithm are discussed. A number theory result is included to give a sense of the distribution of prime numbers. Clock and modular arithmetic are discussed. Models are used to help define addition and subtraction with integers. Inductive reasoning is used to extend many whole number properties. Patterns are used to motivate the rule for signs with integers . Chapter 5 Fractions and Rational Numbers Models of fractions, nomenclature, symbolic notation, and various interpretations of fractions are discussed. Division models are used to show that a fraction is a quotient. The Singaporean math model is used to represent and solve problems. Inductive reasoning is used to establish the concept of equivalent fractions. Common fractions, rounding, estimation, benchmarks for comparing fractions , cross product rule, and the density property of fractions are also addressed. Whole number operations are extended to fractions in a natural way using diagrams. Additive and multiplicative reasoning are extended to fractions. The Singaporean math model is used to represent and solve problems. Algebraic properties of fraction are used to solve equations. Chapter 6 Decimals, Proportional Reasoning, and Real Numbers The expanded form of a decimal is emphasized in the nomenclature for decimals. Comparing decimals, rounding decimals, the similarities and differences between 4.3 and 4.30, terminating decimals, scientific notation, operations with decimals, and estimation are discussed. Additive and multiplicative reasoning are reinforced. Equivalent ratios and the value of a ratio are emphasized. Proportional reasoning is introduced and distinguished from additive reasoning. Tables and fractions are used to proportions involving “missing value” problems. Proportional reasoning is described algebraically and graphically . Percent is used to make comparisons between two quantities. The three types of percent problems and representational tools are discussed. Relationships between rational numbers and decimals are discussed. Relationships between irrational numbers and decimals are discussed. The real number line is discussed .
{"url":"https://www.softmath.com/tutorials-3/relations/math-210-mathematics-for-k-8.html","timestamp":"2024-11-12T07:10:22Z","content_type":"text/html","content_length":"44669","record_id":"<urn:uuid:c7f4c9ab-d93b-4f85-86b3-4eca38ae6e71>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00756.warc.gz"}
Air Resistance vs Friction: A Comprehensive Guide for Physics Students Air resistance and friction are two fundamental forces that play a crucial role in the study of physics. While both forces oppose motion, they act on different types of objects and in distinct ways. This comprehensive guide will delve into the intricacies of air resistance and friction, providing you with a deep understanding of the underlying principles, formulas, and practical applications. Understanding Air Resistance Air resistance, also known as drag, is the force that opposes the motion of an object as it moves through the air. This force is directly proportional to the square of the object’s velocity and the cross-sectional area of the object. The formula for calculating air resistance is: F = 1/2 * ρ * A * Cd * v^2 – F is the force of air resistance (in Newtons) – ρ (rho) is the density of the air (in kg/m^3) – A is the cross-sectional area of the object (in m^2) – Cd is the coefficient of drag (a dimensionless quantity) – v is the velocity of the object (in m/s) The coefficient of drag (Cd) is a dimensionless quantity that depends on the shape and orientation of the object. For example, a streamlined object, such as an airplane wing, has a lower Cd value compared to a blunt object, such as a brick. Examples of Air Resistance 1. Skydiving: When a skydiver jumps from a plane, they experience a significant amount of air resistance as they fall. The air resistance, combined with the force of gravity, causes the skydiver to reach a terminal velocity, where the two forces are balanced. 2. Cycling: Cyclists often adopt a more aerodynamic position to reduce air resistance and improve their speed. The shape of the bicycle and the rider’s position can significantly affect the air resistance experienced. 3. Falling objects: When an object is dropped in a vacuum (where there is no air), it experiences only the force of gravity and accelerates at a constant rate (9.8 m/s^2). However, when the same object is dropped in the presence of air, it experiences air resistance, which slows down its acceleration. Understanding Friction Friction is the force that opposes the relative motion between two surfaces in contact with each other. The magnitude of the frictional force depends on the coefficient of friction between the surfaces and the normal force acting on them. The formula for calculating the force of friction is: F = μ * N – F is the force of friction (in Newtons) – μ (mu) is the coefficient of friction (a dimensionless quantity) – N is the normal force acting on the surfaces (in Newtons) The coefficient of friction (μ) is a dimensionless quantity that depends on the materials and surface properties of the two objects in contact. For example, the coefficient of friction between rubber and concrete is generally higher than the coefficient of friction between steel and ice. Types of Friction 1. Static Friction: This is the force that opposes the initial motion of an object when it is at rest. The maximum static friction force is given by the formula: F_s,max = μ_s * N, where μ_s is the coefficient of static friction. 2. Kinetic Friction: This is the force that opposes the motion of an object that is already in motion. The kinetic friction force is generally lower than the maximum static friction force and is given by the formula: F_k = μ_k * N, where μ_k is the coefficient of kinetic friction. 3. Rolling Friction: This is the force that opposes the rolling motion of an object, such as a wheel or a ball. Rolling friction is generally much lower than sliding friction and is often approximated as F_r = C_r * N, where C_r is the coefficient of rolling friction. Examples of Friction 1. Braking a car: When you apply the brakes on a car, the brake pads create a frictional force between the brake pads and the brake discs, causing the car to slow down. 2. Walking on a surface: When you walk on a surface, the friction between your shoes and the ground allows you to maintain traction and prevent slipping. 3. Sliding a box on a surface: When you try to slide a box on a surface, the frictional force between the box and the surface opposes the motion, causing the box to slow down or stop. Similarities and Differences between Air Resistance and Friction While air resistance and friction are both resistive forces that oppose motion, they have some key similarities and differences: – Both air resistance and friction cause objects to lose energy and heat up. – Both forces can cause surfaces to become deformed or damaged over time. – Air resistance depends on the speed and cross-sectional area of the object, while friction between solids does not. – Friction between solids does not depend on the relative speed of the surfaces, whereas air resistance can change depending on other factors. – The formula for calculating air resistance (F = 1/2 * ρ * A * Cd * v^2) is different from the formula for calculating friction (F = μ * N). Practical Applications and Numerical Examples Calculating Air Resistance Example 1: A skydiver with a mass of 80 kg has a cross-sectional area of 0.5 m^2 and a coefficient of drag of 0.25. Assuming the air density is 1.225 kg/m^3, calculate the air resistance force experienced by the skydiver when they are falling at a velocity of 60 m/s. – Mass (m) = 80 kg – Cross-sectional area (A) = 0.5 m^2 – Coefficient of drag (Cd) = 0.25 – Air density (ρ) = 1.225 kg/m^3 – Velocity (v) = 60 m/s Substituting the values in the air resistance formula: F = 1/2 * ρ * A * Cd * v^2 F = 1/2 * 1.225 * 0.5 * 0.25 * (60)^2 F = 1,102.5 N Therefore, the air resistance force experienced by the skydiver is 1,102.5 N. Calculating Friction Example 2: A box with a mass of 10 kg is placed on a horizontal surface. The coefficient of static friction between the box and the surface is 0.4, and the coefficient of kinetic friction is 0.3. Calculate the maximum static friction force and the kinetic friction force acting on the box. – Mass (m) = 10 kg – Coefficient of static friction (μ_s) = 0.4 – Coefficient of kinetic friction (μ_k) = 0.3 Step 1: Calculate the normal force (N) acting on the box. N = m * g N = 10 kg * 9.8 m/s^2 N = 98 N Step 2: Calculate the maximum static friction force. F_s,max = μ_s * N F_s,max = 0.4 * 98 N F_s,max = 39.2 N Step 3: Calculate the kinetic friction force. F_k = μ_k * N F_k = 0.3 * 98 N F_k = 29.4 N Therefore, the maximum static friction force acting on the box is 39.2 N, and the kinetic friction force is 29.4 N. Air resistance and friction are two fundamental forces that play a crucial role in the study of physics. Understanding the underlying principles, formulas, and practical applications of these forces is essential for physics students. This comprehensive guide has provided you with a deep dive into the world of air resistance and friction, equipping you with the knowledge and tools necessary to tackle complex problems and real-world scenarios. Hi..I am Indrani Banerjee. I completed my bachelor’s degree in mechanical engineering. I am an enthusiastic person and I am a person who is positive about every aspect of life. I like to read Books and listen to music.
{"url":"https://techiescience.com/air-resistance-vs-friction/","timestamp":"2024-11-04T12:04:33Z","content_type":"text/html","content_length":"101965","record_id":"<urn:uuid:39c7fe54-7217-4773-9c77-f3596675bbc9>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00528.warc.gz"}
No Arbitrage Pricing and the Term Structure of Interest Rates PDF Download No Arbitrage Pricing and the Term Structure of Interest Rates by Thomas Gustavsson Economic Studies 1992:2 Department of Economics Uppsala University Originally published as ISBN 91-87268-11-6 and ISSN 0283-7668 Acknowledgement I would like to thank my thesis advisor professor Peter Englund for helping me to complete this project. I could not have done it without the expert advice of IngemarKajfromtheDepartmentofMathematicsatUppsalaUniversity. Iam alsogratefultoDavidHeathofCornellUniversityforreadinganddiscussingan early version of this manuscript. Repeated conversations with Martin Kulldorff andHansDill´en, bothUppsalaUniversity, andRainerSch¨obel, Tu¨bingen, have also been most helpful. The usual disclaimer applies. Financial support for this project was received from Bo Jonas Sj¨onanders Minnesfond and Bankforskningsinstitutet. Special thanks to professors Sven- ErikJohansson, NilsHakansson, Claes-HenricSivenandErikDahm´enfortheir support. Uppsala May 1992 Abstract Thisdissertationprovidesanintroductiontotheconceptofnoarbitragepricing and probability measures. In complete markets prices are arbitrage-free if and onlyifthereexistsanequivalentprobabilitymeasureunderwhichallassetprices are martingales. This is only a slight generalization of the classical fair game hypothesis. The most important limitation of this approach is the requirement of free and public information. Also in order to apply the martingale repre- sentation theorem we have to limit our attention to stochastic processes that are generated by Wiener or Poisson processes. While this excludes branching it does include diffusion processes with stochastic variances. Theresultisanon-lineararbitragepricingtheoryforfinancialassetsingen- eral and for bonds in particular. Discounting of future cash flows is performed with zero coupon bonds as well as with short term interest rates (roll-over). In the presence of bonds discounting is an ambiguous operation unless an explicit intertemporalnumeraireisdefined. However,withtheproperdefinitionswecan dispense with the traditional expectations hypothesis about the term structure of interest rates. Arbitrage-free bond prices can be found simply from the fact that these are assets with a finite life and a fixed redemption value.1 1Note for the current reader: Unfortunately there are some serious mathematical errors in sections 5.2 and 6.2 of this work. In particular the single forward-neutral measure described here is confused with the family of forward-neutral measures introduced by El- KarouiandGeman(1991). Thisisnotasimplemattersinceitinvolvessomeratherdelicate problemswiththeeconomicbehaviorofmarketparticipantsandtheirattitudestowardsrisk over time and intertemporal pricing of bonds. In more recent work I show that the single forward-neutralmeasuredescribedherecan,infact,beidentifiedwiththeoriginalprobability measure,denotedbyQinthistext. Contents 1 Introduction 1 2 Single period market 5 2.1 Probabilistic interpretation . . . . . . . . . . . . . . . . . . . . . 6 2.2 Continuous payoffs . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2.3 Viability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 3 Multiperiod markets 13 3.1 Trading strategies . . . . . . . . . . . . . . . . . . . . . . . . . . 14 3.2 No arbitrage and martingales . . . . . . . . . . . . . . . . . . . . 16 4 Discounting and the choice of a numeraire 20 4.1 Roll-over discounting . . . . . . . . . . . . . . . . . . . . . . . . . 21 4.2 Using the current term structure . . . . . . . . . . . . . . . . . . 22 4.3 Forward and futures prices . . . . . . . . . . . . . . . . . . . . . 25 5 Martingale representation 27 5.1 Roll-over pricing . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 5.2 Forward-neutral pricing . . . . . . . . . . . . . . . . . . . . . . . 30 5.3 The drift condition . . . . . . . . . . . . . . . . . . . . . . . . . . 33 6 General pricing formulas 35 6.1 The market prices of risk in a non-linear APT . . . . . . . . . . . 35 6.2 Traditional expectation hypotheses . . . . . . . . . . . . . . . . . 38 7 Conclusions 40 References Appendix: Mathematical foundations of no arbitrage pricing Tables: 1 Introduction Similarities between gambling and the trading of financial assets are sometimes considered to discredit the respectability of financial markets. Quite to the contraryIwouldsay. Thisdissertationshowsindetailhowthegamblingaspect of the behavior of traders can enable them to reach a consensus on the current value of any number of uncertain future prospects. The evaluation procedure is independent of the preferences of traders with respect to risk and investment horizons. Tosimplifymatterswerestrictourselvestothecasewhenallrelevant informationispublicorsymmetricamongtraders. Ifassetpricesfullyreflectall relevantinformationnotradershouldbeabletoearnexcessreturnsfromtrading rulesbasedonhistoricalinformation-whetherpublicorprivate. Thisisknown as the efficient market hypothesis. Early examples of this type of approach can be found in Cootner (1964), for an interesting survey see Fama (1970). The basic idea was exploited in a number of empirically oriented papers during the sixties and the seventies. Typically it was claimed that, in ”efficient” markets, pricesorratesofreturnshouldbeseriallyuncorrelatedorfollowrandomwalks. Unfortunately each researcher only tested his own particular version of how to ”beatthemarket”. Comparisonswererare,andmostofthesuggestedempirical rulesofthumbwereunfounded. Theremainingresultoftheseeffortsseemtobe mainlymethodological. Theystartedastrongempiricaltraditionofresearchon asset pricing. Several connections were made to traditional statistical methods, and, in particular, to the theory of ”fair” games. In a fair game the expected value of winnings and losses should be zero. Alternatively the expected value of the gambler’s fortune should always equal its current value, i e the evolution of his fortune over time should follow a martingale. This provided a key link to the development of more general pricing principles for ”efficient markets”. One of the most popular is known as no arbitrage pricing (or as arbitrage-free pricing). No arbitrage pricing is an invariance principle for markets with public in- formation. No arbitrage means that all opportunities to make a riskfree profit have been exhausted by traders. This should certainly be a basic requirement for an ”efficient” market. As a result of the arbitrage activities relative prices willbeconstrained. Inthecaseofcompletemarketsthebasictheoremofnoar- bitragetellsusexactlyhow. Intuitivelythetheoremclaimsthatanyassetprice must equal the expected value of its discounted future cash payoffs to preclude arbitrage. This is a surprisingly strong result. It should, however, be noted thatforthistoholdwecannotcalculatetheexpectedvaluewithrespecttoany probability measure. We have to construct a very special probability measure for this to be true. This is the fundamental difference between no arbitrage pricing and the concept of a fair game. No arbitrage prices can be calculated under fairly general conditions. All involved stochastic processes should have finitevarianceandexpectation. Basicallywhatfollowshereisanelaborationof this result. Clearly the no arbitrage pricing principle is a statement about the developmentofassetpricesinrelationtoeachotherovertime. Neitherforward 1 nor spot prices need follow martingales, see Lucas (1978). Instead the focus is on relative prices. This is where discounting enters. The role of discounting is to cancel out any common time trends in absolute prices. Disregarding growth trendsinthiswaynoarbitragemeansthattradingisinsomesenseafairgame. Although the actual odds need not be fair it should be possible in principle to tilttheoddsslightlyandgetanequivalentgamethatisfair. Asnotedbeforethe gambler’s fortune in such an equivalent game will follow a martingale. There- fore the constructed probability measure is known as an equivalent martingale measure. The general theory of no arbitrage pricing and its relation to the fa- mousmathematicaltheoremofseparatinghyperplanes(Hahn-Banachtheorem) was first developed by Ross (1976 and 1978). He did not make the connection to fair games and equivalent martingale measures. This was done by Harrison and Kreps (1978), and Harrison and Pliska (1981 and 1983). Duffie and Huang (1985) showedthepowerofthemartingaletoolboxtoreplacedynamicprogram- ming. The equivalent martingale approach generalizes traditional capital asset pricingmodels. OptimalportfoliorulescanbefoundinCoxandHuang (1990). In the special case of constant interest rates the no arbitrage principle is also called the risk-neutral evaluation principle. This principle was made famous by option pricing. This dissertation provides a systematic introduction to no arbitrage pricing of financial assets in general and to that of bonds in particular. The pricing of (zero coupon) bonds is often referred to as the term structure of interest rates, theTSIRforshort. Foralongtimebondshavebeentreatedasanisolatedtopic dwellinginamazeoftechnicaldetail. Herethepurposeistoshowhowbondsfit into the general framework of no arbitrage pricing. What makes bonds special? Howcanbondsbeusedforthedisconntingoffuturecashflows? Inwhatwaydo stochastic interest rates influence the pricing of bonds and other assets? What roleisplayedbythetermstructureofinterestrates? Doweneedthetraditional expectationshypothesesaboutthetermstructure? Inparticular, howdoprices of long term bonds relate to short term interest rates? What is the role of local risk-neutrality? These are the main questions that will be discussed here. Ho and Lee (1986) were the first to use the current TSIR for no arbitrage pricing of bonds. They used an event-tree approach (a binomial model) with both discrete time and discrete state space. Unfortunately their model suffered from some inconsistencies. For example, they did not rule out the possibility of negative interest rates. The first consistent treatment of bond pricing and the TSIR was done by Heath, Jarrow, and Morton (1987, 1989, published in 1992), and(independently) byArtznerandDelbaen(1989). UnfortunatelyHeath,Jar- row,andMortondidnotrelatetheirmodeltothebasictheoremofnoarbitrage. Instead they chose to start from scratch using a framework completely unique to bonds. They mapped bond prices onto implied forward spot rates from the current TSIR and derived a new form of the basic no arbitrage theorem within their particular framework. This resulted in a drift condition for arbitrage-free bond pricing which refers to roll-over cash as the numeraire. As will be shown 2 here the essence of their approach comes out more naturally when using bonds as the numeraire (no local risk-neutrality). This approach also brings out the fundamentalroleofimpliedforwardratesinthepricingofotherassets. Geman (1989) pioneered the use of bonds as discount factors in a general no arbitrage pricing framework. She showed how discounting future cash flows with bonds from the current TSIR corresponds to a particular choice of numeraire in an intertemporal model of asset prices. To properly understand the role of the TSIR we have to go beyond the cash price convention prevailing’in finance and explicitly identify the micro-economic concept of an intertemporal numeraire. This provides an interesting analytical alternative to rolling-over of money at short term rates of interest, the traditional choice of numeraire. As we shall see bothalternativesareequallyvalidwaystodiscountfuturemoneyprices. Sowhy prefer short term roll-over to pure discount bonds? Indeed, it does not seem to be widely recognized that the simultaneous existence of these two alternatives makes discounting ambiguous in terms of money! InawaytheapproachofthisdissertationfollowsthatofArtznerandDelbaen (1989). They started with the well-established general theorem of no arbitrage and subsequently derived the pricing of bonds as a special case. But they too usedroll-overmoneyasnumeraire(localrisk-neutrality),andformallytheyonly price one bond in relation to short term interest rates (roll-over money). This dissertation attempts to close the gap by showing that the general no arbitrage pricing approach results in the same prices and the same drift conditions as can be found in Heath, Jarrow, and Morton (1992). Also several of the results derived for bonds by Artzner and Delbaen (1989) are shown here to hold for any type of asset. In addition, their results are extended to the case of no local risk-neutrality using discount bonds as numeraire. Furthermore, I elaborate on the economic interpretation of the results, hopefully making them accessible to a wider audience. The basic method used here is stochastic calculus. It must be remembered that the theoretical calculations ignore important empirical aspects of asset pricing. Inparticular,transactioncosts,bid-askspreadsanddifferencesbetween lendingandborrowingratesofinterestarenotconsidered. Thereisnorationing of credits and all assets are assumed to be infinitely divisible. This is a suitable framework only for those who trade regularity in markets with high turnover. Anotherlimitationisthatallrelevantinformationisassumedtobepublic. This means that all traders have free access to the same information. Obviously this ignores the possibility of gaining more information by trading more, paying extra for forecasting services, or paying for access to privileged information. Section 2 provides an introduction to the concept of no arbitrage pricing withinasingleperiodframework. Inparticular,itsrelationshiptotheexistence of implicit state prices and probability measures is explained in detail. Section 3 takes on the same topics in a multi-period framework. The key concept here is that of a self-financing trading strategy. This is followed by a statement and 3 a proof of the basic theorem of no arbitrage pricing. I relate the concept of no arbitrage to fair games and martingales. Several definitions are provided in ordertoincreasethereadabilityoftheproof. Specialcareistakentoisolatethe economic arguments from the mathematical foundations which are delegated to an Appendix. After these preliminary efforts the different methods of dis- counting and their relation to particular choices of an intertemporal numeraire are described in section 4. Here arbitrage-free prices are derived in their most general form. In continuous time these results can be strengthened, if we’re willingtomakespecificassumptionsaboutthenatureoftheflowofinformation over time. This is done in section 5. With continuous Wiener processes gener- atingtheflowofinformationmorespecificpricingresultsareobtainedusingthe martingale representation theorem. In particular, this assumption completely determinesarbitrage-freebondprices. Toprovethiscalculationsaremadewith local risk-neutrality and without it. For the second ”locally stochastic” case this has not been done before as far as I know. The resulting bond prices in both cases are shown to satisfy the drift condition in Heath, Jarrow, Morton (1992 p 94). Thus their model is derived here as a special case of the general pricingofassetsandbondsinparticular. Furthermore, insection6, thegeneral formofthedrifttermforarbitrage-freepricesisshowntobeanon-linearAsset Pricing Theory, cp the linear APT of Ross (1976). This completely determines themarketpriceofrisk. Finally,usingthisresult,thevalidityofthetraditional hypotheses about the term structure of interest rates is examined. In contrast to Cox, Ingersoll, Ross (1981) I find that both the local riskneutral hypoth- esis and the unbiased forward-neutral hypothesis are compatible with general equilibrium. 4 2 Single period market A financial market consists of a fixed number N of assets with random future payoffs A ,n = 1,2,...,N, and their current prices, a column vector S = n (S ,...,S ). To begin with let there be a finite number of future states 1 N Ω=(ω ,...,ω ) (2.1) 1 M In this case the future payoffs at the end of the period are often written as a matrix a a ... a 11 12 1N a21 a22 ... A= . (2.2) . . a ... a M1 MN Here each column is a vector showing the nominal (cash) payoffs of an asset inalloftheM differentstates. Thisissometimescalledthestate-spacetableau. Typicallyeachassetisastockandinviewoflimitedliabilityallitsfuturepayoffs are non-negative. The market is called complete if it is possible to obtain any future payoff profile by trading and combining the available assets in different proportions. For this to be possible there must, of course, be at least as many assets as states. However, just counting the number of states and assets is not enough, as in the case of finding solutions to linear equation systems. A necessaryandsufficientconditionforthemarkettobecompleteisthattherank of the matrix A is equal to M, the number of possible states. Inthiscontextnoarbitragehasaveryclearmeaning. ThepricesS preclude arbitrage with respect to the market A if you can’t get something for nothing (or less than nothing). In other words there does not exist any portfolio θ = (θ ,.....,θ ) withnon-negativepayoffsinallstatesandanegativemarketvalue. 1 N No arbitrage means that any such portfolio should have a positive cost, i e N N (cid:88) (cid:88) θ a ≥0 for m=1,...,M ⇒ θ S >0 (2.3) n mn n n n=1 n=1 Inordertoensurepositivityatleastoneoftheleftmostinequalitiesmustbe strict. Clearly a portfolio that does not pay anything should have zero cost. Thissimpleconceptofnoarbitragepricingalsohasacleargeometricmean- ing. If the portfolio payoffs are non-negative in each state m and positive in at least one, the vector θA of portfolio payoffs lies in the positive convex cone spanned by the columns of A. Then according to implication (2.3), the price vector S must form an acute angle to the portfolio vector θ. (The rightmost sum in (2.3) is just the scalar product θS ) We say that S lies outside the or- thogonal complement to the cone spanned by A. But this is just the negative cone spanned by the transpose of A. So to preclude arbitrage there must exist an implicit state price system λ=(λ ,...,λ )>0 such that 1 M M (cid:88) S = λ a for n=1,...,N (2.4) n m mn m=1 5 The implicit prices give the current value of a future dollar in each of the states. If the market A is complete they will all be positive. In this case no arbitrageisequivalenttocondition(2.4). Thestrictformofthisstatement(not given here) is known as Farka’s lemma, see e g Gale (1964). Is this evaluation procedure unique for any fixed price vector S in the given marketA? Inotherwords,couldthereexistanotherimplicitstatepricesystem? Yes, in general it could. For example, if the market is not complete the rank of the matrix A will be less than M and no unique shadow price solution need exist. Also restrictions on short sales (some negative values for the components of θ not permitted) may preclude uniqueness. Finally it is interesting to note thattransactioncoststoomayspoiltheuniquenessbydrivingawedgebetween the prices on individual assets and that of replicating portfolios. 2.1 Probabilistic interpretation There is an important relation between no arbitrage and probability measures Q on the sample space Ω. Each row m in the matrix A represents one possible outcome ω of a random vector A = (A ,A ,...,A ), where a = A (ω ). m 1 2 N mn n m Thus each asset in the market can be identified with a random variable on the probability space (Ω,Q). The market A becomes a vector of random variables. Pursuing this interpretation we find that each weighted average of column el- ements in (2.4) is the expected value of the random variable A except for a n scale factor. Although state prices are always positive (and less than one) their sum need not equal one. So in general they are not probabilities, i e positive numbers between 0 and 1 defined for all events in Ω and summing to one. But this is easily taken care of. Define a probability measure Q∗ for all elementary events in Ω in the following way M Q∗(ω )= λm where (cid:107)λ(cid:107) = (cid:88) λ (2.5) m (cid:107)λ(cid:107) 1 m 1 m=1 The denominator ensures that these new numbers sum into one. Being defined for all subsets of Ω and additive they are obviously probabilities. (For any event, i e for any subset of Ω , simply sum the elementary events involved.) Formally, a numerically valued set function Q is a probability measure provided i) Q(B )≥0 forallsubsets B ∈Ω k k (cid:83) (cid:80) ii) Q( B )= Q(B ) fordisjointsubsetsB (2.6) k k k iii) Q(Ω)=1 In general the subsets B can beanycombination of the elementary events ω . k m Clearly Q∗ as defined in (2.5) fullfills these requirements. Next,wedefinemathematicalexpectationwithrespecttothenewprobabil- ity measure Q∗. The expected value of the (discrete) random variable A with n 6 See more
{"url":"https://www.zlibrary.cc/dl/no-arbitrage-pricing-and-the-term-structure-of-interest-rates","timestamp":"2024-11-11T14:26:19Z","content_type":"text/html","content_length":"149531","record_id":"<urn:uuid:91d613fb-528c-42fe-a1a4-74e0e4701d94>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00182.warc.gz"}
Multiple Line Chart Android Studio 2024 - Multiplication Chart Printable Multiple Line Chart Android Studio Multiple Line Chart Android Studio – The Multiplication Graph Collection will help your pupils visually symbolize numerous early math methods. It must be used as a teaching aid only and should not be confused with the Multiplication Table, however. The chart can be purchased in 3 models: the colored version is helpful once your pupil is centering on a single occasions desk at the same time. The horizontal and vertical types are compatible with children that are nonetheless studying their periods desks. In addition to the colored version, you can also purchase a blank multiplication chart if you prefer. Multiple Line Chart Android Studio. Multiples of 4 are 4 from one another The pattern for determining multiples of 4 is to include every variety to on its own and locate its other multiple. As an illustration, the first several multiples of 4 are: 12, 8, 4 and 16 and 20. This trick works because all multiples of a number are even, and they are four away from each other on the multiplication chart line. Additionally, multiples of a number of are even figures Multiples of 5 are even You’ll find multiples of 5 on the multiplication chart line only if they end in or 5. In other words, you can’t multiply a quantity by 2 or 3 to have a much number. You can only find a multiple of five if the number ends in five or ! Thankfully, you can find tips that can make discovering multiples of five even easier, like using the multiplication chart line to find the multiple of five. Multiples of 8 are 8 from the other The design is apparent: all multiples of 8 are two-digit phone numbers and all sorts of multiples of four-digit figures are two-digit amounts. Every selection of 10 has a several of 8. Seven is even, so all its multiples are two-digit amounts. Its style proceeds as much as 119. When you see a quantity, be sure you search for a a number of of seven from the beginning. Multiples of 12 are 12 clear of the other The number twelve has endless multiples, and you will increase any complete amount by it to make any number, which include alone. All multiples of twelve are even figures. Here are several examples. James wants to buy pens and organizes them into 8 packets of 12. He presently has 96 pens. James has one of each kind of pencil. Within his office, he arranges them around the multiplication graph Multiples of 20 are 20 away from the other person From the multiplication chart, multiples of 20 or so are typical even. The multiple will be also even if you multiply one by another. If you have more than one factor, multiply both numbers by each other to find the factor. If Oliver has 2000 notebooks, then he can group them equally, for example. Exactly the same is applicable to pencils and erasers. You can buy one out of a pack of three or perhaps a load up of six. Multiples of 30 are 30 clear of each other In multiplication, the word “aspect combine” identifies a small grouping of amounts that kind an obvious number. For example, if the number ’30’ is written as a product of five and six, that number is 30 away from each other on a multiplication chart line. The same holds true for a variety inside the array ‘1’ to ’10’. In other words, any variety might be composed as being the product of 1 and on its own. Multiples of 40 are 40 far from one another Do you know how to find them, though you may know that there are multiples of 40 on a multiplication chart line? To accomplish this, you could add from the outside-in. As an example, 10 12 14 = 40, and so forth. In the same manner, twenty seven = 20. In this case, the quantity around the kept of 10 is surely an even amount, even though the one in the appropriate is undoubtedly an peculiar Multiples of 50 are 50 from the other person Using the multiplication graph or chart series to discover the amount of two amounts, multiples of fifty are similar distance away from each other about the multiplication graph or chart. They have got two excellent factors, 80 and 50. In most cases, every single expression is different by 50. Other component is 50 itself. Listed here are the common multiples of 50. A common numerous will be the several of a presented variety by 50. Multiples of 100 are 100 far from the other person Allow me to share the many figures that happen to be multiples of 100. An optimistic match is a a number of of merely one 100, when a negative set is a several of twenty. These 2 types of numbers are different in many techniques. The initial technique is to divide the amount by successive integers. In cases like this, the quantity of multiples is a, thirty, twenty and ten and forty. Gallery of Multiple Line Chart Android Studio Android Real Time Animating Line Chart Fast Native Chart Controls Android Line Chart How To Draw Line Chart In Android Coding Demos MPAndroidChart Library Android Studio Yang Cukup Mumpuni Untuk Leave a Comment
{"url":"https://www.multiplicationchartprintable.com/multiple-line-chart-android-studio/","timestamp":"2024-11-06T11:03:52Z","content_type":"text/html","content_length":"52675","record_id":"<urn:uuid:a12f419e-7189-4d55-8a3b-1de954bdacee>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00390.warc.gz"}
xplorerr | Rsquared Academy Descriptive Statistics Generate descriptive statistics such as measures of location, dispersion, frequency tables, cross tables, group summaries and multiple one/two way tables. Visualize Probability Distributions Visualize and compute percentiles/probabilities of normal, t, f, chi square and binomial distributions. Inferential Statistics Select set of parametric and non-parametric statistical tests. ‘inferr’ builds upon the solid set of statistical tests provided in ‘stats’ package by including additional data types as inputs, expanding and restructuring the test results. The tests included are t tests, variance tests, proportion tests, chi square tests, Levene’s test, McNemar Test, Cochran’s Q test and Runs test. Linear Regression Tools designed to make it easier for users, particularly beginner/intermediate R users to build ordinary least squares regression models. Includes comprehensive regression output, heteroskedasticity tests, collinearity diagnostics, residual diagnostics, measures of influence, model fit assessment and variable selection procedures. Logistic Regression Tools designed to make it easier for beginner and intermediate users to build and validate binary logistic regression models. Includes bivariate analysis, comprehensive regression output, model fit statistics, variable selection procedures, model validation techniques and a ‘shiny’ app for interactive model building. RFM Analysis Tools for RFM (recency, frequency and monetary value) analysis. Generate RFM score from both transaction and customer level data. Visualize the relationship between recency, frequency and monetary value using heatmap, histograms, bar charts and scatter plots. Data Visualization Tools for interactive data visualization . Users can visualize data using ‘ggplot2’, ‘plotly’, ‘rbokeh’ and ‘highcharter’ libraries.
{"url":"https://xplorerr.rsquaredacademy.com/","timestamp":"2024-11-14T05:06:17Z","content_type":"text/html","content_length":"15776","record_id":"<urn:uuid:b502ec89-e21e-4181-bb2e-99615b68e768>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00157.warc.gz"}
Mathematical Creativity and Key Stage 1 - Book School Workshops When it comes to mathematics, the population is divided into two distinct groups. The first group holds an opinion that maths is easy and straightforward, while what the second group claims that maths is very difficult and there is some innate mathematical “ability” that they lack. This situation seems to be special to mathematics, as probably no one claims that they have no ability to study Biology, History, or English literature. In this series of posts we try to investigate what it is that makes maths unique, to discuss rote learning versus mathematical creativity and their role in children’s education. We also explain links between mathematical education, modern studies in abstract mathematics, and applications of mathematical skills in the real world. “My first Maths” Let’s look at how children first encounter maths. Most four year old children enjoy counting small collections of objects (one, two, three…). So far so good. The next important stage, is simple addition (2+2, 3+2, and so on). Already at this stage we can see, what it takes to be good at mathematics: by practicing, children acquire certain familiarity and intuition about how numbers work. For example, an excellent exercise for young children who are learning addition is to get them to memorize pairs that make up 10, that is 1+9, 2+8, 3+7, 4+6, 5+5. This skill is key to subtraction, as well as to addition of multi-digit numbers later on. In fact “good” mathematics very often feels like a game. Learning times tables The next important stage is learning times tables. Multiplication by two, that is doubling, comes easily to most children. However, multiplication tables themselves are naturally perceived as long and intimidating. Here too we see what mathematics is all about. First of all, let us recall there are two ways of learning mathematics. The first way, sometimes called rote learning, is about constant repetition and memorizing. The other way which we may call meaningful learning (or mathematical creativity) makes an emphasis on understanding rather than memorization . Rote learning vs meaningful learning approach In learning mathematics, perhaps as in many other subjects, rote learning and meaningful learning can effectively complement each other. Let us compare the two approaches for times tables. Once a basic idea about what multiplication does is familiar, intensive rote learning allows to memorize the whole 12 x 12 table in just a couple of months. In contrast meaningful learning would take a long time and not really apply to the problem, as the question such as why is 7 x 7 = 49 does not really make sense. On the deeper level, and again this is what mathematics often is about, is that it all links together. For example, one can say that 7 x 7 = 49 because 6 x 7 = 42. This may not really make sense but the point is once certain familiarity with the whole table is there, individual entries start making more sense. That is, rote learning is sometimes a precursor to understanding. In this example we also see one of the main difficulties that people encounter with mathematics. It comes in layers, and learning each new layer requires strong understanding of the previous ones. What we mean is that learning and understanding times tables is very difficult without good grasp of addition. Mathematical creativity and times tables We can also demonstrate mathematical creativity alongside the rote learning of times tables. Mathematical creativity is often about coming up with new problems, and turning everything upside down and inside out. For example, an excellent exercise on times tables is to ask: is 36 in times tables? The answer of course is yes, and it comes in twice, as 4 x 9 and 6 x 6. Of course, one can swap the two factors such as 9 x 4, but let’s agree that it’s the same way of making up 36 as 4 x 9, rather than a different way. Next we may consider a whole list of numbers, say 30, 31, 32, 33, 34, 35. Some of these are in times tables while others are not. Indeed, we have 30 = 5 x 6 = 3 x 10 32 = 4 x 8 33 = 3 x 11 35 = 5 x 7 Now what about the remaining two numbers: 31 and 34? They are not in the times tables. But why? The reason why 34 is not in times tables is that while 34 = 2 x 17, the second factor 17 is not included in our times tables. On the other hand, the number 31 can not be divided by anything else than itself and one. That is 31 = 1 x 31 is the only way we can factor it out. Such numbers are called prime numbers, and this connects times tables directly to modern branches of modern abstract mathematics, called Number Theory, where professional mathematicians answer questions such as “what is the probability that a given number is prime”? Not bad for “boring” times tables! On the role of Maths workshops in teaching mathematics in Primary School So how can the teachers infuse creativity into classroom maths? You can invent all sorts of maths problems (like space maths!) that involve subtracting, dividing, converting units of measurements, telling time and practicing times tables. Another way is to book a Maths workshop! Delivered by experienced mathematicians, educators and presenters, they will help the teachers infuse creativity into the traditional subject and get the children excited and enthusiastic about Maths, be it shapes and angles, weights and volumes, times tables or even history of mathematics! Maths is fun!
{"url":"https://bookschoolworkshops.com/mathematical-creativity-key-stage-1/","timestamp":"2024-11-05T01:08:38Z","content_type":"text/html","content_length":"133207","record_id":"<urn:uuid:5dea7833-e556-49e5-9ee9-1308b460468c>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00160.warc.gz"}
what was the legal age of marriage in 19th century england A self referential data structure. /* Here in this code, we take the first node as a dummy node. Singly Linked list program in C # include < stdio.h > # include < stdlib.h > //Self referential structure to create node. Sign in|Recent Site Activity|Report Abuse|Print Page|Powered By Google Sites. Must Do Coding Questions for Companies like Amazon, Microsoft, Adobe, ... Top 5 IDEs for C++ That You Should Try Once. A node in the singly linked list consist of two parts: data part and link part. A comprehensive listing of Indian colleges, A list of CBSE Toppers from schools all over India, A list of CBSE's top performing schools (Class 12), A list of CBSE's top performing schools (Class 10), School Infrastructure Data For All Districts, Links to Infra Details of Various Schools, Baby step with python for Data Science (word count), Data pre-processing & Linear Regression with Gradient Descent, Linear Classification with Stochastic Gradient Descent, Ada-grad vs Bold-driver for linear classification, Regularization & ridge regression with batch GD, Imputation Techniques In Data Science In R, Using ggplot To Create Visualizations In R. What kind of criteria should one use to pick a college? first one is data and second field is link that refers to the second node. We often face situations, where the data is dynamic in nature and number of data can’t be predicted or the number of data keeps changing during program execution. Insert an element at the top of a list. Write Interview Simple Linked Lists - A Java Applet Visualization, 2. How to Hack WPA/WPA2 WiFi Using Kali Linux? Singly linked list is the most basic linked data structure. Singly linked lists are a type of a linked list where each node points to the next node in the sequence. Students preparing for ISC/CBSE/JEE examinations. Here is a C Program to perform the following operations on a singly linked list. typedef struct { Node * head ; Node * tail ; } List ; //Initialize List void initList ( List * lp ) { lp - > head = NULL ; lp - > tail = NULL ; } //Create node and return reference of it. Each of these nodes contain two parts, namely the data and the reference to the next list node. Linked list with no head and tail - elements point to each other in a circular fashion. Clone a linked list with next and random pointer | Set 1; Clone a linked list with next and random pointer | Set 2; Insertion Sort for Singly Linked List; Point to next higher value node in a linked list with an arbitrary pointer; Rearrange a given linked list in-place. number of nodes in a fragmented memory environment. Sort a linked list that is sorted alternating ascending and descending orders. Singly linked list has two field. Nodes in a linked list are linked together using a next field, which stores the address of the next node in the next field of the previous node i.e. Select a Random Node from a Singly Linked List, Compare two strings represented as linked lists, Rearrange a linked list such that all even and odd positioned nodes are together, Rearrange a Linked List in Zig-Zag fashion, Add 1 to a number represented as linked list, Point arbit pointer to greatest value right side node in a linked list, Check if a linked list of strings forms a palindrome, Sort linked list which is already sorted on absolute values, Delete last occurrence of an item from linked list, Delete a Linked List node at a given position, Flatten a multi-level linked list | Set 2 (Depth wise), Rearrange a given list such that it consists of alternating minimum maximum elements, Subtract Two Numbers represented as Linked Lists, Find pair for given sum in a sorted singly linked without extra space, Partitioning a linked list around a given value and keeping the original order, Check linked list with a loop is palindrome or not, Clone a linked list with next and random pointer in O(1) space, Length of longest palindrome list in a linked list using O(1) extra space, Implementing Iterator pattern of a single Linked List, Move all occurrences of an element to end in a linked list, Remove all occurrences of duplicates from a sorted Linked List, Remove every k-th node of the linked list, Check whether the length of given linked list is Even or Odd, Multiply two numbers represented by Linked Lists, Find the sum of last n nodes of the given Linked List, Count pairs from two linked lists whose sum is equal to a given value, Merge a linked list into another linked list at alternate positions, In-place Merge two linked lists without changing links of first list, Merge k sorted linked lists | Set 2 (Using Min Heap), Union and Intersection of two Linked Lists, Union and Intersection of two linked lists | Set-2 (Using Merge Sort), Union and Intersection of two linked lists | Set-3 (Hashing), Recursive selection sort for singly linked list | Swapping node links, Insert node into the middle of the linked list, Sort a linked list of 0s, 1s and 2s by changing links, Insert a node after the n-th node from the end, Count rotations in sorted and rotated linked list. */, /* Allocate memory for the new node and put data in it. each node of the list refers to its successor and the last node contains the NULL reference. Linked list is one of the most important data structures. That means we can traverse the list only in forward direction. each node of the list refers to its successor and the last node contains the NULL reference. Store the address of the node next to the temporary node in the next. Else, now pointer points to a node and the node next to it has to be removed, declare a temporary node (temp) which points to the node which has to be, removed. In a singly linked list, each node stores a reference to an object that is an element of the sequence, as well as a reference to the next node of the list. Writing code in comment? Only the reference to the first list node is required to access the whole linked list. Figure 1 shows an example of a singly linked list with 4 nodes. Sort a linked list that is sorted alternating ascending and descending orders. In C language, a linked list can be implemented using structure and pointers . */, /* Go to the node for which the node next to it has to be deleted */, "Element %d is not present in the list\n", /* Now pointer points to a node and the node next to it has to be removed */, /*temp points to the node which has to be removed*/, /*We removed the node which is next to the pointer (which is also temp) */. Singly linked list or One way chain Singly linked list can be defined as the collection of ordered set of elements. A self referential data structure. /* Beacuse we deleted the node, we no longer require the memory used for it . ), DC Circuits: Examples and Problems, Circuits with Resistance and Capacitance, DC Circuits: Problems related to RL, LC, RLC Circuits, DC Circuits: Electrical Networks and Network Theorems, DC Circuits: More Network Theorems, Examples, Solved Problems, Basic Digital Circuits: Boolean Algebra-1, Basic Digital Circuits: Boolean Algebra-2, Basic Digital Circuits: Combinational Circuits-1, Basic Digital Circuits: Combinational Circuits-2, Basic Digital Circuits: Sequential Circuits-1, Basic Digital Circuits: Sequential Circuits-2, Top Schools & School-wise results (CBSE 2015 Class 12 Examinations), Top Schools & School-wise Results (ISC 2015, Class 12 Exams), Top Schools & School-wise Results (RBSE 2015 Class 12, Rajasthan State), Top Schools & School-wise results (CBSE 2014 Class 12 Examinations), Top Schools & School-wise Results (ICSE-ISC 2014 Examinations), Top Schools & School-wise results (ICSE-ISC 2013 Class 10 & 12 Examinations), ISC Class 12: Syllabus, Specimen Papers, Books. The data field stores the element and the next is a pointer to store the address of the next node. -Algebraic, exponential, log, trigonometric,polynomial functions, Linear Algebra - Problems Based on Simultaneous Equations, Eigenvalues, Eigenvectors, Probability: Part 1 - Continuous & Discrete Variables, Chebyshev Inequality, Problems, Probability Distributions- Discrete/Continuous- Bernouilli/ Binomial/Geometric/Uniform/etc, Basic Mechanics: Introduction to Vectors and Motion, Basic Mechanics: More on Vectors and Projectile Motion, Engineering Mechanics: Moments and Equivalent Systems, Engineering Mechanics: Centroids and Center of Gravity, Engineering Mechanics: Analysis of Structures, Basic Electrostatics and Electromagnetism, Basic Electrostatics: Some Interesting Problems, Basic Electromagnetism: Some Interesting Problems, Electrostatics and Electromagnetism: A Quick Look at More Advanced Concepts, Atomic Structure: Notes, Tutorial, Problems with Solutions, The Book Corner for Computer Science and Programming Enthusiasts, Arrays and Searching: Binary Search ( with C Program source code), Arrays and Sorting: Insertion Sort ( with C Program source code, a tutorial and an MCQ Quiz on Sorting), Arrays and Sorting: Selection Sort (C Program/Java Program source code, a tutorial and an MCQ Quiz on Sorting), Arrays and Sorting: Merge Sort ( C Program/Java Program source code, a tutorial and an MCQ Quiz on Sorting), Arrays and Sorting: Quick Sort (C Program/Java Program source code; a tutorial and an MCQ Quiz ), Data Structures: Stacks ( with C Program source code), Data Structures: Queues ( with C Program source code). The node next to the previous node most basic linked data structure head and tail - elements to... After the specified element in a circular fashion item ; struct tmp { int item ; struct {! A C program to perform the following operations on a singly linked lists are useful... Using self referential Structures heap memory unlike array which uses contiguous locations in|Recent Activity|Report... The linked list with no head and a tail ; each element points to of! New element at the bottom of a list the element and the Complexity of Algorithms- Test much! In|Recent Site Activity|Report Abuse|Print Page|Powered By Google Sites list where each node of list! Is required to access the whole linked list or one way chain linked... Link here each element points to the first node as a dummy node Pandas DataFrame By index labels with head!, tutorial and an mcq quiz ) very useful in this code, we no longer require the used... Number of elements may vary according to need of the list till we the! Node and put data in it, 2 Allocate memory for the new node put... How much you know about basic Algorithms and data Structures: singly linked is! Before the specified element in a list singly linked list need of the list to! Rows in Pandas DataFrame By index labels ide.geeksforgeeks.org, generate link and share the link here search the! Following operations singly linked list c++ a singly linked list that is sorted alternating ascending and descending orders a dummy node parts namely. Node and put data in it Microsoft, Adobe,... top 5 IDEs for that! No head and tail - elements point to each other in a list * Beacuse deleted! Anywhere in the next to each other in a list create node new node and put data in it -... ; //structure for create linked list the end of the linked list is one of the program labels... Of these nodes contain two parts: data part and link part > # include < >... Of a Cube: Area, Volume, Diagonal singly linked list c++ a Vector in C++ is using! Index labels Visualization, 2, 2 tmp { int item ; struct tmp * next ; } ;... Structure to create node of all ICSE and ISC Schools in India ( and abroad ) can be anywhere... Is required to access the whole linked list,... top 5 IDEs for that! Not have any pointer that points to another of its own kind data part link. Dynamic size, which can be determined only at run time is link that refers the. Last node contains the NULL reference here is a pointer to store the address of the important. * /, / * Iterate through the list only in forward.... Perform the following operations on a singly linked list that is sorted alternating ascending descending. The address of the list refers to its successor and the last contains... With C program to perform the following operations on a singly linked list insert an element at the of! Of two parts, namely the data field stores the element and next. Placed anywhere in the heap memory unlike array which uses contiguous locations is done using...., Adobe,... top 5 IDEs for C++ that you Should Try Once next to the first of... At the end of the most basic linked data structure that is sorted alternating ascending and descending orders singly! Top 5 IDEs for C++ that you Should Try Once a Java Applet Visualization, 3 type a! On a singly linked list can be defined as the collection of set! Element before the specified element in a list singly linked list c++ to access the whole linked that. Where each node of the list ; //structure for create linked list program in C include! We take the first node as a dummy node Google Sites be defined as the collection of ordered of. After the specified element in a Vector in C++ a Java Applet Visualization, 3,. For C++ that you Should Try Once element and the last node part!: singly linked list can be determined only at run time that are using! Figure 1 shows an example of a singly linked list is the most basic linked data structure of may! //Self referential structure to create node circular linked lists - a Java Visualization... Pandas DataFrame By index labels is done using pointers first one is data and second field is link refers. Tmp * next ; } node ; //structure for create linked list one... Mcq Quizzes- Test how much you know Microsoft, Adobe,... top IDEs! The NULL reference that means we can traverse the list refers to successor. How much you know about basic Algorithms and the last node contains the NULL reference Google Sites to index... Only at run time temporary node in the heap memory unlike array which uses locations! An example of a given element in a Vector in C++ way chain singly list! Points to the first node as a dummy node size, which can be implemented using structure and.... First list node that is sorted alternating ascending and descending orders Coding Questions for Companies like Amazon,,... Stdlib.H > //Self referential structure to create node of Algorithms- Test how much you!! Element before the specified element in a list list and search for the key take the first list node alternating... The first list node is required to access the whole linked list and Structures! You Should Try Once, tutorial and an mcq quiz ) made up of nodes that created! //Structure for create linked list index labels in forward direction link that refers to the temporary node in heap. Before the specified element in a list quiz ) linked list or one way chain singly linked.! Reference to the second node using self referential Structures use cookies to ensure have... The data field stores the element and the reference to the second node create linked list is a C source... Node next to the first list node is required to access the whole linked list where each node the... Applet Visualization, 2 implementation of a given element in a list next is a to! Means we can traverse singly linked list c++ list till we encounter the last node contains the reference... No head and a tail ; each element points to another of its kind! Through the list refers to its successor and the last node for that! Is the most important data Structures the specified element in a circular fashion in sequence... Next to singly linked list c++ first node as a dummy node contiguous locations implemented using structure and pointers to another of own! Heap memory unlike array which uses contiguous locations and second field is link that refers the. Through the list part and link part Beacuse we deleted the node next to the next is type. Mcq Quizzes on data Structures, Algorithms and the last node contains the reference... That refers to its successor and the last node contains the NULL reference element a. Drop rows in Pandas DataFrame By index labels IDEs for C++ that you Should Try Once may vary to. > # include < stdio.h > # include < stdio.h > # include < stdio.h > include. Last node contains the NULL reference no longer require the memory used for it through the entire linked and! List node one of the most basic linked data structure mcq Quizzes on data Structures: singly linked list in! An mcq quiz ) < stdio.h > # include < stdlib.h > // Self referential structure to create.. The address of the program NULL reference another of its own kind in forward direction you Try. Hyena Male Wild Strawberry Jam For Sale Father And Sons Quotes Uber Commercial Stay Home How To Train Your Dragon Artwork The Royal Game Summary Secuestro Meaning Vince Gill Songs From The 80s Mononucleosis Treatment Dark Nicknames
{"url":"http://shakhidi.com/docs/journal/page.php?a61347=what-was-the-legal-age-of-marriage-in-19th-century-england","timestamp":"2024-11-11T19:55:29Z","content_type":"text/html","content_length":"31220","record_id":"<urn:uuid:6c182cab-b8a9-4417-9e77-c32f2f508133>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00802.warc.gz"}
Greenwich Maths Time IMA Festival 2017 Greenwich Maths Time – the IMA Festival of Mathematics and its Applications University of Greenwich, 27-28 June 2017 Over the duration of the festival, over 1500 visitors came, many of them school students and teachers from primary through all stages of secondary schools and colleges. Schools from as far away as Cambridge and Canterbury visited the beautiful Queen Anne Court of the Old Royal Naval College campus of Greenwich University: where on arrival they could see people demonstrating the non-Newtonian behaviour of custard by walking on it! Enthusiastic feet-on participants included IMA Executive Director David Youdan and Assistant Director John Meeson. Presentations and workshops covered a wide range of mathematics including talks on applications including medicine, emergency evacuation modelling, engineering modelling, cryptography, music, statistical significance, patterns and predictions to name but a few! The festival also had many workshops and hands-on maths activities. Visitor feedback was incredibly positive, with praise especially given to the talk by Nira Chamberlain on “The Black Heroes of Mathematics”, and the exhibition “Women of mathematics throughout Europe: a gallery of portraits”, a collection of photographs by Noel Tovia Matoff. The festival was keen to make visitors aware of the career opportunities offered by studying mathematics, and Alison Terry and Aoife Hunt attracted large audiences for their talks about the possibilities opened up by studying mathematics at A-level and beyond. It was particularly appropriate in this context that the result of the IMA Maths Careers website poster competition was announced during the Festival and the shortlisted posters were on display. The festival’s success in raising awareness of the applications of mathematics could be seen in the answers to the tie-break question of the mathematical treasure hunt, which required a number of questions to be answered to generate a code word that needed decrypting. Looks like the cryptography workshops had been well received! The tie-break asked entrants what surprising mathematics they had discovered at the Festival, and almost every event was mentioned by some. It was clear that the range of applications of mathematics, especially to medicine, had not been appreciated by many of those who came. It was very heartening to see how successful the festival had been in enthusing and inspiring visitors, and perhaps in suggesting future career pathways. The final word on the Festival goes to the students and teachers from The Archbishop’s School, Canterbury who created a wordle which is a wonderful way to express maths through words. For more information on all the events, visit the festival website . By Tony Mann
{"url":"https://teachingmathsscholars.org/newsandevents/greenwichmathstime","timestamp":"2024-11-09T21:59:44Z","content_type":"text/html","content_length":"52733","record_id":"<urn:uuid:bab419e4-e07a-4a89-b2a0-a48fc5832c37>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00518.warc.gz"}
Cite as Amotz Bar-Noy, Toni Böhnlein, David Peleg, and Dror Rawitz. On the Role of the High-Low Partition in Realizing a Degree Sequence by a Bipartite Graph. In 47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 241, pp. 14:1-14:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022) Copy BibTex To Clipboard author = {Bar-Noy, Amotz and B\"{o}hnlein, Toni and Peleg, David and Rawitz, Dror}, title = {{On the Role of the High-Low Partition in Realizing a Degree Sequence by a Bipartite Graph}}, booktitle = {47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022)}, pages = {14:1--14:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-256-3}, ISSN = {1868-8969}, year = {2022}, volume = {241}, editor = {Szeider, Stefan and Ganian, Robert and Silva, Alexandra}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2022.14}, URN = {urn:nbn:de:0030-drops-168121}, doi = {10.4230/LIPIcs.MFCS.2022.14}, annote = {Keywords: Graph Realization, Bipartite Graphs, Degree Sequences, Graphic Sequences, Bigraphic Sequences, Approximate Realization, Multigraph Realization}
{"url":"https://drops.dagstuhl.de/search/documents?author=Bar-Noy,%20Amotz","timestamp":"2024-11-03T03:07:19Z","content_type":"text/html","content_length":"119572","record_id":"<urn:uuid:dd45a770-f0ee-4d21-9556-daf5c9e59eb6>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00068.warc.gz"}
Topic modeling LaTeX equations on the arXiv by Jaan Altosaar for Google Open Source Programs Office Exposing scientists to alternate mathematical descriptions of problems they are working on has the potential to accelerate research. This necessitates incorporating mathematics into current topic modeling approaches such as Latent Dirichlet Allocation. By applying this approach to the arXiv's corpus of LaTeX equations, we aim to develop tools to analyze and predict historical trends of mathematical formulas in science and enhance scientific recommendation systems.
{"url":"https://google-melange.appspot.com/archive/gsoc/2014/orgs/ospo/projects/jaanaltosaar.html","timestamp":"2024-11-08T21:48:10Z","content_type":"text/html","content_length":"5935","record_id":"<urn:uuid:5ad2e70e-d473-41db-94b8-fb5bbeb3f2cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00575.warc.gz"}
Linear Time Partition – A Three Way Split Linear-time partition is a divide & conquer based selection algorithm. With it, data is split into three groups using a pivot. An integral part of Quick Sort algorithm which uses this partitioning logic recursively. All the elements smaller than the pivot are put on one side and all the larger ones on the other side of the pivot. Similar to the discussion of Dynamic Programming, this algorithm plays on solving sub-problems to solve complex problem. Post selecting the pivot, Linear-time partition routine separates the data into three groups with values: • less than the pivot • equal to the pivot • greater than the pivot Generally, this algorithm is done in place. This results in partially sorting the data. There are handful of problems that make use of this fact, like: • Sort an array that contains only 0s, 1s & 2s • Dutch national flag problem • Print all negative integers followed by positive for an array full of them • Print all 0s first and then 1s or vice-versa for an array with only 0s & 1s • Move all the 0s to the end maintaining relative order of other elements for an array of integers If done out of place, (i.e. not changing the original data), it would cost O(n) additional space Let’s take an example of: sort an array that contains only 0s, 1s & 2s First thought for such problem is to perform a count of 0s, 1s and 2s. Once we have the counts, reset the array with them. Though it has time complexity O(n), it takes two traversals of the array or uses an extra array. Below is an attempt to solve using Linear-time partition algorithm to avoid that extra traversal/space. def threeWayPartition(A): start = mid = 0 end = len(A)-1 # define a Pivot pivot = 1 while (mid <= end): # mid element is less than pivot # current element is 0 # so lets move it to start # current start is good. # move start to next element # move mid to next element to move forward if (A[mid] < pivot) : swap(A, start, mid) start = start + 1 mid = mid + 1 # mid element is more than pivot # current element is 2 # so lets move it to end # current end is good. # move end to previous element elif (A[mid] > pivot) : swap(A, mid, end) end = end - 1 # mid element is same as pivot # current element is 1 # just move forward: # mid to next element else : mid = mid + 1 # Swap two elements A[i] and A[j] in the list def swap(A, i, j): A[i], A[j] = A[j], A[i] # Define an array inputArray = [0, 1, 2, 2, 1, 0, 0, 2] # Call the Linear-time partition routine # print the final result # Outputs # [0, 0, 0, 1, 1, 2, 2, 2] With a defined pivot, we segregated the data on either side which resulted in desired output. Dutch nation flag problem or printing all negative first and then positive, or printing all 0s first follows the same code. For moving all 0s to end maintaining other elements order, we do a tweak in swap index to maintain order: def threeWayPartition(A): current = 0 nonzero = 0 end = len(A)-1 # define a Pivot pivot = 0 while (current <= end): if (A[current] != pivot) : swap(A, current, nonzero) nonzero = nonzero + 1 current = current + 1 # Swap two elements A[i] and A[j] in the list def swap(A, i, j): A[i], A[j] = A[j], A[i] # Define an array inputArray = [7,0,5,1,2,0,2,0,6] # Call the Linear-time partition routine # print the final result # Output # [7, 5, 1, 2, 2, 6, 0, 0, 0] With the above algorithm approach, we solved our problem with Time complexity O(n) & Space complexity O(1) (with single traversal of the array). It was fun solving!.
{"url":"https://www.codeproject.com/Articles/5283491/Linear-Time-Partition-A-Three-Way-Split","timestamp":"2024-11-08T22:16:26Z","content_type":"text/html","content_length":"30253","record_id":"<urn:uuid:a6b8e4b7-3278-465d-8ebb-0cdb179d190a>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00208.warc.gz"}
Personalization by website transformation: Theory and... (PDF) This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution and sharing with colleagues. Other uses, including reproduction and distribution, or selling or licensing copies, or posting to personal, institutional or third party websites are prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier’s archiving and manuscript policies are encouraged to visit: http://www.elsevier.com/copyright Author's personal copy Information Processing and Management 46 (2010) 284–294 Contents lists available at ScienceDirect Information Processing and Management journal homepage: www.elsevier .com/ locate / infoproman Personalization by website transformation: Theory and practice Saverio Perugini * Department of Computer Science, University of Dayton 300 College Park, Dayton, OH 45469–2160, USA a r t i c l e i n f o a b s t r a c t Article history: We present an analysis of a progressive series of out-of-turn transformations on a hierar- Received 6 October 2008 chical website to personalize a user’s interaction with the site. We formalize the transfor- Received in revised form 18 December 2009 mation in graph-theoretic terms and describe a toolkit we built which enumerates all of Accepted 26 December 2009 the traversals enabled by every possible complete series of these transformations in any Available online 12 February 2010 site and computes a variety of metrics while simulating each traversal therein to qualify the relationship between a site’s structure and the cumulative effect of support for the Keywords: transformation in a site. We employed this toolkit in two websites. The results indicate that Hierarchical hypermedia the transformation enables users to experience a vast number of paths through a site not Information personalization Navigation traversable through browsing and demonstrate that it supports traversals with multiple Out-of-turn interaction steps, where the semblance of a hierarchy is preserved, as well as shortcuts directly to Website transformation the desired information. Ó 2010 Elsevier Ltd. All rights reserved. 1. Introduction Personalization refers to automatically customizing interactive information systems based on user preferences. Person- alization technologies are now widely utilized on the web. While most approaches to personalization are either template-based (i.e., slot fillers such as those found at My Yahoo! (Manber, Patel, & Robinson, 2000)) or artificial intelli- gence-oriented, the central theme of our approach is to personalize a user’s interaction with a website by progressively transforming its structure in response to every user interaction in a session with the site to help the user experience paths through the site not traversable through browsing. For instance, consider a user shopping for a book by Aldous Huxley at a website which only presents books by genre. Such a user unsure in which genres Huxley published is forced to browse through all genres to manually find books of interest. While this user is unable to respond to the current solicitation for input (i.e., genre), she does have information (i.e., author) relevant to the information-seeking task even though that information is not required until the user is nested deeper into the catalog. Our approach to this problem is a technique called out-of-turn interaction. The idea is to permit a user navigating a hierarchical website to postpone clicking on any of the hyperlinks presented on the current page (e.g., when unable or unwilling to respond to the current prompt for input) and, instead, communicate the label of a hyperlink nested deeper in the hierarchy. When the user supplies such out-of-turn input we transform the hierarchy to reflect the user’s informational need. In the example above, when unsure in which genres Huxley published, the user may communicate * Tel.: +1 9372294079; fax: +1 9372292193. E-mail address: [email protected] URL: http://academic.udayton.edu/ SaverioPerugini 0306-4573/$ - see front matter Ó 2010 Elsevier Ltd. All rights reserved. doi:10.1016/j.ipm.2009.12.008 Author's personal copy S. Perugini / Information Processing and Management 46 (2010) 284–294 285 ‘Aldous Huxley’ to the site out-of-turn. In response, we would transform the hierarchical organization of the catalog so that all hyperlinks leading to books not written by Huxley are purged and re-present the hierarchy to the user. As a result of the transformation, the user would see a page of hyperlinks representing genres. However, each hyperlink remaining would eventually lead to a book by Huxley. Thus, out-of-turn interaction permits the user to circumvent any intended flows of nav- igation hardwired into the hyperlink structure by the designer and, in this manner, helps reconcile any mismatch between the site’s one-size-fits-all organization and the user’s model of information seeking. We built a transformation engine as a web service based on this idea which prunes a hierarchical site when given out-of- turn input. We also built two interfaces to communicate the input to the engine: a voice interface, implemented with Voice- XML and X+V, which permits the user to supply out-of-turn inputs through speech and enables multimodal interaction when used in conjunction with hyperlinks, and Extempore, implemented with XUL, which is a cross-platform toolbar plugin embedded into the Mozilla Firefox web browser. The transformation engine, interfaces, and a coordinating interaction man- ager constitute a customizable software framework for creating web personalization systems with support for out-of-turn interaction (Narayan, Williams, Perugini, & Ramakrishnan, 2004). We have applied this technique to various websites, including the Open Directory Project, a large web directory. We have studied out-of-turn interaction from software implementation (Narayan et al., 2004) and human-com- puter interaction (HCI) (Perugini, Anderson, & Moroney, 2007) perspectives. The goal of this paper is to study the transformation which supports this technique from a graph transformation perspective and analyze the traversals of the site it enables. This is an intermediate approach between the implementation and HCI complementary ap- proaches. Specifically, we (i) formalize the transformation in graph-theoretic terms, (ii) describe a toolkit we built which computes and simulates all of the traversals enabled by all possible complete series of out-of-turn transforma- tions in any site to qualify the relationship between how terms are distributed through the site’s structure and the effect of support for the transformation in a site, and (iii) report the results of employing this toolkit in two websites. The central mantra of this paper is that a series of website transformations on a site supports a set of traversals through the site we called an interaction paradigm: Transformationð TransformationðWebsite;Hyperlink labelÞ; ; Hyperlink labelÞ ) Interaction paradigm: Only a small subset of all possible traversals made possible by a series of out-of-turn transformations on a site can be expe- rienced through browsing. 2. Related research Traditionally, there are two main approaches to web personalization: template- and AI-oriented approaches. The tem- plate-based approach (Perugini & Ramakrishnan, 2003) (also called checkbox personalization) is predominately employed in the my sites (e.g., My Yahoo! (Manber et al., 2000) or My eBay). Most all e-commerce sites now provide such a facility. The onus is on the user to explicitly specify her preferences and, as a result, the content, structure, or presentation of the website is tailored accordingly. Such an approach involves explicit user modeling (Konstan et al., 1997). While template- based approaches to personalization do not suffer from privacy concerns, the level of personalization delivered is bounded by the investment of the user in communicating his interests, and often higher-order connections or serendipitous recom- mendations are not possible. On the other hand, AI-based approaches to web personalization involve covertly monitoring user behavior and activity, often through web usage mining (i.e., web log analysis) (Mobasher, Cooley, & Srivastava, 2000), to implicitly glean user preference and, ultimately, build a user model which is used as a basis from which to personalize the site. One popular example of such an approach is adaptive websites (Perkowitz & Etzioni, 2000). Unlike template- based personalization, the success of AI-oriented approaches is not predicated on the cooperation of the user. However, these methods are perceived as invasive and raise privacy concerns (Riedl, 2001). The primary enabling technology for these approaches is web mining (Eirinaki & Vazirgiannis, 2003; Kosala & Blockeel, 2000), and specifically web usage min- ing (Srivastava, Cooley, Deshpande, & Tan, 2000). This user-model through access monitoring approach is seen in the adaptive hypermedia (Brusilovsky, 2001) and interactive information retrieval (White, Jose, & Ruthven, 2006) communities. The out-of-turn website transformation approach to personalized interaction does not fit into either of these categories. Rather, out-of-turn interaction can be broadly characterized as a faceted browsing and search technique (Hearst et al., 2002), and is particularly related to the zoom operation in dynamic taxonomies (Sacco, 2000). Faceted browsing and search (Sacco & Tzitzkas, 2009) seeks to marry navigational (e.g., Yahoo!) and direct (free form) search (e.g., Google), and has received an in- creased level of attention from the interactive information retrieval community recently as an approach between template- and AI-based techniques. Faceted browsing and search permits a user to explore a multi-dimensional dataset in a manner which matches the user’s mental model of information-seeking, thereby personalizing the user’s interaction with site (e.g., ‘You prefer to browse rec- ipes using a by main ingredient, dish type, preparation method motif while I prefer to browse by dish type, preparation method, and main ingredient’). The multi-faceted index of recipes at http://epicurious.com is perhaps the most illustrative example of a faceted classification on the web (Hearst, 2000). Author's personal copy 286 S. Perugini / Information Processing and Management 46 (2010) 284–294 1 3. Theory: out-of-turn transformation formalism Fundamentally, the out-of-turn transformation is a closed transformation over a graph modeling the hyperlink structure of a website. In this section we discuss how websites can be represented as graphs, how interacting out-of-turn transforms a graph, and the implications a series of those transformations have on web interaction. 3.1. Websites as graphs It is instructive to think of websites as graphs. For instance, Fig. 1 (left) illustrates a directed acyclic graph (DAG) model of a hierarchical website with characteristics similar to web directories such as the Open Directory Project (ODP) at http://dmoz. org. Edges help model paths through a website a user follows to access leaf vertices, which model leaf webpages containing content. We refer to a leaf content page as terminal information and the terms therein as units of terminal information. Edge- labels, which we refer to as structural information, model hyperlink labels or, in other words, choices made by a navigator en route to a leaf. An edge-label, a unit of structural information, is therefore a term of information-seeking (simply a term here- after) which a user may bring to bear upon information seeking. Structural information thus helps make distinctions among terminal information. A set of terms is complete when it determines a particular terminal webpage; otherwise it is partial. An interaction set of a DAG D is the complete set of the terms along a path from the root of D to a leaf vertex of D. An interaction set constitutes complete information; any proper subset of it is partial information. An interaction set of D classifies a leaf vertex of D, but does not capture any order of the terms therein. On the other hand, a sequence is a total order of an interaction set wrt the parenthood relation of the site. In other words, a sequence represents a path from the root to a leaf in a site. The sequence shopping, apparel, winter is in the DAG shown in Fig. 1 (left). A term is in-turn information if it appears as a hyperlink label on the user’s current webpage and is, thus, currently solicited by the system. On the other hand, a term is out-of-turn information if it represents a hyperlink label nested somewhere deeper in the site and is, thus, currently unsolicited from the system, but relevant to information-seeking. In any DAG, in-turn and out-of-turn information is mutually-exclusive. 3.2. Transformations We now present some website transformations. Term extraction is a total function TE : D ! PðTÞ which given D returns the set of all unique terms in D, where D represents the universal set of DAGs, T represents the universal set of terms, and PðÞ denotes the power set function. A term-co-occurrence set of D is a set T #TEðDÞ. Let the level of an edge-label in D be the depth of the source vertex of the edge it labels. If a given edge-label occurs multiple times in D, a level is associated with every occurrence. A term-level set of D then is a term-co-occurrence set comprising all unique terms in D with the same level. Term-level extraction is a total function TLE : ðD NÞ ! PðTEðDÞÞ which given D and a level lðP 1Þ 2 N ¼ f1; 2; . . . ;Mg re- turns the set of all unique terms in D with level l (i.e., a term-level set), where M represents the maximum depth of D. If D represents the DAG in Fig. 1 (left), TLEðD; 2Þ ¼ finternational; advertising; coupons; electronics; apparelg. In any DAG, TLEðD; 1Þ returns the set of terms available to supply through browsing or, in other words, in-turn information. Browse is a partial function B : ðD TÞ ! D? which given D and a term t 2 TLEðD; 1Þ returns the sub-DAG rooted at the target vertex of the edge in D labeled with t whose source vertex is the root of D. If D is the DAG in Fig. 1 (left), BðD; shoppingÞ returns the sub-DAG rooted at vertex 3, which represents the result of a user clicking on the hyperlink labeled ‘shopping’. The symbol ? denotes the partial nature of the function (i.e., the value of B is undefined for some inputs). If t R TLEðD; 1Þ; B returns ?. 0 Out-of-turn transformation is a partial function OOT1 : ðD TÞ ! D? which given D and a term t 2 TEðDÞ returns D : ð1Þ where FP (forward propagate): ðD TÞ ! PðLÞ is a total function which given D and a term t 2 T ¼ TEðDÞ returns a set of leaf vertices L of D, where L contains each leaf vertex reachable from all paths of D containing an edge labeled t, and L denotes the universal set of leaf webpages, 0 0 BP (back propagate): ðD PðLÞÞ ! D? is a partial function which given D and L returns a DAG D , where D contains only paths from the root of D to the leaves of D which classify the leaf vertices in L, and 0 CE (consolidate edges): ðD TÞ ! D? is a partial function which given D and a term t 2 TEðDÞ returns D , where any edge 0 0 e in D labeled with t is removed in D , the source vs of e is replaced with its target vt in D , and vt becomes the new target of any edge e0 with target vs in D0. 1 Some terms and definitions in this section have been reported by the author in (Perugini & Ramakrishnan, 2010) and appear here for purposes of clarity and comprehension. Author's personal copy S. Perugini / Information Processing and Management 46 (2010) 284–294 287 1 news shopping 2 3 1 news shopping international advertising coupons@ electronics apparel 2 3 4 5 6 7 international advertising coupons@ electronics 4 5 6 europe china international@ holidays apple computers@ cameras winter china international@ holidays apple computers@ 8 9 10 11 12 13 9 10 11 1 shopping news 3 coupons@ electronics 5 6 international holidays apple computers@ 4 international@ 10 11 china 9 Fig. 1. Website transformations simplified for purposes of presentation: illustration of forward-propagation ðFPÞ followed by back-propagation ðBPÞ on the DAG on left. (left) A sample DAG model of a hierarchical website. Vertices 9, 10, and 11 (i.e., those dotted) represent the result of forward-propagation wrt the term ‘advertising’: FPðD; advertisingÞ. (center) Result of back-propagation wrt leaf vertices 9, 10, and 11 on left: BPðD; FPðD; advertisingÞÞ. (right) Result of out-of-turn interaction with the DAG D shown on left wrt the term ‘advertising’: OOT1ðD; advertisingÞ. Alternatively, we can think of this DAG as the 0 0 result of consolidating edges with the DAG D in center (i.e., CEðD ; advertisingÞ). Author's personal copy 288 S. Perugini / Information Processing and Management 46 (2010) 284–294 Fig. 1 illustrates the out-of-turn transformation (i.e., forward-propagation (left) followed by back-propagation (center) followed by consolidation (right)). Intuitively, this transformation retains all sequences of D which contain the out-of-turn input (FP followed by BP), and then removes the out-of-turn input from those remaining sequences ðCEÞ. The result of FP is the set of all leaf vertices classified by the out-of-turn input. We back-propagate from this set of leaves up to the root of the DAG with BP. Note that when no term in the DAG represented by the first argument to OOT1 resides at more than one level, and the second argument to OOT1 is in-turn information, the transformation is functionally equivalent to B. Thus, OOT1 subsumes B. To marry the out-of-turn transformation with standard techniques from information retrieval we can replace FP with any total function SL (select leaves): ðD TÞ ! L which given D and a term t 2 TEðDÞ returns a set of leaf vertices of D (FP is an instance of SL). This generalization leads to the possibility of bringing units of terminal information (i.e., terms modeled in the leaf pages and not explicitly used in the classification), in replacement of or in addition to structural information, to bear upon the transformation and resulting interaction. For instance, we might perform a query (e.g., ‘laptop’) in a vector-space model over the set of leaf webpages (i.e., documents) using cosine similarity to arrive at a target set of leaves from which to back-propagate. Notice that D also can be represented as a jTEðDÞj jCRðDÞj term-doc- ument matrix, where rows correspond to terms (i.e., structural information, or edge-labels) and the columns correspond to webpages (i.e., terminal information, or leaf vertices). Collect results is a total function CR : D ! PðLÞ which given D returns a set of all the leaf vertices in D. For instance, CRðDÞ returns the {9,10,11} set of vertices, where D is the DAG in Fig. 1 (center). 3.3. Commutativity We now examine the commutativity of the out-of-turn transformation. Lemma. The out-of-turn transformation is commutative, assuming both sides are defined: OOT1ðOOT1ðD; xÞ; yÞ ¼ OOT1ðOOT1ðD; yÞ; xÞ; where x and y represent terms. A sketch of the proof of this lemma is given in (Perugini, 2004, Chap. 4) Armed with this lemma, we can consider the possibility of communicating multiple terms per utterance, where an utter- ance is a set of terms with the same arrival time — the time at which the user communicates a term or terms to the system. To accommodate multiple terms per utterance, we re-define the out-of-turn transformation: OOTðD; uÞ ¼ OOT1ð OOT1ðOOT1ðD; t1Þ; t2Þ ; tnÞ; where u denotes an utterance consisting of only the ft1; t2; . . . ; tng set of terms and each OOT1 on the rhs refers to (1). If OOTðD; uÞ returns a DAG containing only one vertex v (and, therefore, no edges), then the utterance u is complete information (and v is terminal information). Otherwise, u is partial information. 3.4. Web interaction We now present concepts which relate to a user’s interaction with a website to help describe the cumulative effect of the out-of-turn transformation on a site. Several partial orders can be defined over an interaction set wrt arrival time. When a user clicks on a hyperlink, she implicitly communicates the hyperlink’s label to the underlying system. For instance, when a user clicks on a hyperlink labeled ‘news’ followed by that labeled ‘international’, she communicates the news, interna- tional terms to the system, in that order. Similarly, when the user supplies out-of-turn input, he is communicating terms to the system. These partial orders can be summarized as partially ordered sets or posets. Each linear extension of such a poset is a total order called an interaction episode. A browsing interaction episode of D is a total order on any interaction set of D wrt the parenthood relation of D. Notice that a browsing episode is the same as a sequence as defined above. An out-of-turn interaction episode is a total order over the set of all set partitions of an interaction set wrt the arrival time relation implied by out-of-turn interaction. The arrival time relation implied by out-of-turn interaction is a partial order containing only the reflexive tuples of all set partitions from any interaction set. In other words, out-of-turn interaction requires none of the term set partitions from each interaction set are required to be ordered. The linear extensions of the posets associated with these partial orders are out-of-turn interaction episodes. An interaction paradigm P for D is the union of all linear extensions of posets defined over all interaction sets of D. In other words, an interaction paradigm is a complete set of realizable interaction episodes from D wrt a transformation (e.g., Browse or OOT). The browsing paradigm PB of D in Fig. 1 (left) is: {news, international, europe;news, international, china;news, advertising, international, news, advertising, holiday;news, advertising, apple;shopping, coupons, international, shopping, coupons, holiday;shopping, coupons, apple;shopping, electronics, computers, shopping, electronics, cameras;shopping, apparel, winter}. Author's personal copy S. Perugini / Information Processing and Management 46 (2010) 284–294 289 The out-of-turn paradigm PO of D is: {(europe international news);(international news), europe;(europe news), international, (europe international), news;news, (europe international);international, (europe news), europe, (international news);news, international, europe;news, europe, international, international, news, europe;europe, news, international;international, europe, news, europe, international, news, all permutations of all set partitions of {news, international, china}, ..., all permutations of all set partitions of {shopping, electronics, cameras}, (apparel shopping winter);(apparel shopping), winter;(shopping winter), apparel, (apparel winter), shopping;shopping, (apparel winter);apparel, (shopping winter), winter, (apparel shopping);shopping, apparel, winter;shopping, winter, apparel, apparel, shopping, winter;winter, shopping, apparel;apparel, winter, shopping, winter, apparel, shopping}, where terms in parentheses (e.g., ‘(europe news)’) represent a single utterance with multiple terms (i.e., more than one term with the same arrival time). Since OOT subsumes Browse, the browsing paradigm of a site D is always a proper subset of the site’s out-of-turn interaction paradigm (Perugini, 2004, Chap. 4), There are 143 (= jPOj) interaction episodes in the out-of- turn interaction paradigm of D in Fig. 1 (left). To capture the number of episodes in an out-of-turn paradigm we use notation from discrete mathematics (Kreher & Stinson, 1999, §3.2: Set partitions, Bell & Stirling numbers), where sðmÞ is the set of all partitions of a set of size m into non-empty subsets (where m is a positive integer), and sðm; nÞ is the set of all partitions of a set of size m into exactly n non-empty subsets (where n is a positive integer and n 6 m). The Bell number of a set of size m is BðmÞ ¼ jsðmÞj. The Stirling Pm number of a set of size m is Sðm; nÞ ¼ jsðm; nÞj. It follows that BðmÞ ¼ n¼1Sðm; nÞ. Intuitively, support for the out-of-turn transformation in a website enriches user interaction with that site so that users can experience traversals through the site which represent permutations of all set partitions of the interaction set from each browsing episode in the site. Therefore, we use this notation to count permutations of set partitions. Specifically, we define size of out-of-turn paradigm as a total function SPO : D ! N which given D returns the size of its out-of-turn interaction par- adigm (i.e., the total number of interaction episodes in the paradigm): X jGXISðEÞj jPOj ¼ SPOðDÞ ¼ n! SðjGISðEÞj; nÞ; E2SQðDÞ n¼1 where SQ (sequencize): D ! PðEÞ is a total function which given D returns the browsing paradigm PB of D, where E represents the universal set of interaction episodes, and GIS (get interaction set) E ! S is a total function which given an interaction episode E returns the interaction set over which it is defined, where S denotes the universal set of sets. This formula considers valid utterances containing one or more terms and makes no assumption on the consistency of the length (i.e., number of terms) across all browsing episodes. The columns of Table 1 labeled n ¼ 1; . . . ; 10 contain the number of episodes (permutations) of n partitions of an inter- action set of size m. The column labeled jPOj gives the sum of the columns labeled n ¼ 1; . . . ; 10 for a particular row or, in Table 1 The number and type (i.e., partitioned into n ¼ 1; . . . ; 10 utterances) of interaction episodes enabled by the use of the out-of-turn transformation in a site with one sequence of length m. jPBj ¼ 1 in all rows. n! Sðm; nÞ m jPOj %D n = 1 2 3 4 5 6 7 8 9 10 1 1 1 0 1 2 2 3 200 1 6 6 3 Author's personal copy 290 S. Perugini / Information Processing and Management 46 (2010) 284–294 other words, the size of the out-of-turn paradigm corresponding to a browsing paradigm consisting of only one episode. The ratio of the number of episodes in a DAG’s out-of-turn paradigm to those in its browsing paradigm is shown in the column labeled % in Table 1 and defined by the following expression: jPOj j PBj SPOðDÞ j SQðDÞj ðDÞ ¼ ¼ : jPBj jSQðDÞj 4. Practice We now study the effect that the out-of-turn transformation has on two websites and, specifically, the implications of the structure and characteristics of a site on the results of the transformation. 4.1. Analysis toolkit Given the formalism above, users can make out-of-turn utterances at any point while interacting with a website, and in any order, and, thus, there is a combinatorial explosion in the number of possible interaction episodes this single transfor- mation supports (cf. the column labeled jPOj of Table 1). To help site designers explore the interaction episodes the out-of- turn transformation enables, we built a toolkit, available from http://oot.cps.udayton.edu/oot-toolkit.tgz, which consists of Perl scripts to: (i) compute the size of an out-of-turn paradigm of a site given the number and length of each sequence there- in, (ii) generate the out-of-turn interaction paradigm for a site (i.e., given the browsing paradigm, enumerate all possible interaction episodes realizable through the out-of-turn transformation), (iii) simulate interaction episodes in batch while collecting a variety of transformation statistics, and (iv) compute the number of sequences in which each term in a site is contained. The scripts to both compute the size of an out-of-turn paradigm and generate all interaction episodes in an out-of-turn paradigm make use of modules which are optimized to use dynamic programming strategies and, as a result, the toolkit produces results on large sites fast. The episode simulator produces a complete summary of what is capable with support to interact out-of-turn with a web- site. For instance, on the sample DAG in Fig. 1 (left), it produces one line per interaction episode followed by a colon and the number of sequences through the site remaining after each utterance in that episode: (news) (international) (europe): 11 5 3 1 (news) (europe) (international): 11 5 1 1 (international) (news) (europe): 11 4 3 1 (europe) (news) (international): 11 1 1 1 (international) (europe) (news): 11 4 1 1 (europe) (international) (news): 11 1 1 1 (news international) (europe): 11 3 1 . . . (winter) (apparel) (shopping): 11 1 1 1 (shopping apparel) (winter): 11 1 1 (shopping winter) (apparel): 11 1 1 (apparel winter) (shopping): 11 1 1 (shopping) (apparel winter): 11 6 1 (apparel) (shopping winter): 11 1 1 (winter) (shopping apparel): 11 1 1 (shopping apparel winter): 11 1. Notice that the numbers trailing some of the above episodes have repeating ones (1’s). Once an utterance initiates a trans- formation which renders a site with only one remaining sequence, the result is effectively fixed. It is up to the designer to force the user to click through a series of links leading to the only terminal page remaining or to consolidate that series. 4.2. Case studies The out-of-turn transformation is a pruning operator and is appropriate on the web when several term associations underlie the hierarchical model of the site on which it is applied. The sequences pruned from a site are those which do not contain the term supplied out-of-turn. Therefore, invoking the out-of-turn transformation with a term which appears in a frequent number of sequences results in the retention of more sequences (and, thus, terminal pages) and removal of less. On the other hand, suppling a term out-of-turn which appears only in a few sequences selects less sequences and prunes more. For instance, the term ‘advertising’ classifies six of the 11 sequences through the DAG D in Fig. 1 (left) and, therefore, supplying ‘advertising’ out-of-turn causes the site to be thinned while retaining the semblance of a hierarchy Author's personal copy S. Perugini / Information Processing and Management 46 (2010) 284–294 291 (Sacco, 2000). In contrast, the term ‘apparel’ is occurs in only one sequence in D and, thus, saying it out-of-turn results in a shortcut directly to terminal page 13 (Gerstel et al., 2007). Therefore, while the definition of the transformation is fixed as shown in Eq. (1) (i.e., it is applied consistently across different sites), its results as well as the interaction afforded to the user depend on how the terms labeling hyperlinks are distributed throughout the site’s sequences. For insight into the results of the out-of-turn transformation in practice, we conducted a variety of experiments in two websites: Project Vote Smart (PVS) and the News category of the Open Directory Project (hereafter referred to as News). Terms are distributed differently throughout the sequences of these sites. Specifically, unlike the sample site given in Fig. 1 (left), each level of PVS corresponds to a facet of information-seeking. At the first level, users are asked to make a state selection, followed by branch of Congress (House or Senate), then a choice for political party (Democrat, Republican, or Inde- pendent), and, finally, a choice for district/seat. We say such sites are faceted (Perugini, 2009) because each level of the site corresponds to a facet of information seeking. We call faceted sites with a consistent depth (i.e., sequence length across all sequences or number of facets across all sequences) structured (Perugini, 2009). On the other hand, the sample site given in Fig. 1 (left) and ODP are without a facet classifying the terms at each level. We call such sites semistructured because the data they present is schemaless and self-describing and, thus, often called semistructured data (Abiteboul, Buneman, & Suciu, 2000). Furthermore, News, and ODP in general, unlike the sample site in Fig. 1 (left), does not have a consistent depth across all sequences. Project Vote Smart is a comprehensive and authoritative website for political officials at all levels of government. PVS has a webpage for each state and federal official containing biographical information as well as information about the official’s party affiliation, committees, and voting record. The Open Directory Project (ODP) is the largest, most comprehensive, and most widely distributed human-compiled taxonomy of links to websites (Perugini, 2008; The Open Directory Project, 2002). We analyzed the News topic of ODP available from http://rdf.dmoz.org in RDF format. Table 2 captures values for a variety of structural characteristics of PVS and News. Results in Tables 2 and 3 reflect the US congressional landscape on December 4, 2007 and the News category of ODP data based on the structure.rdf.u8.gz RDF dump file, downloaded on January 18, 2008, which contains the category hierarchy information. We also include the values of these characteristics for the sample site in Fig. 1 (left) for purposes of comparison. While the site in Fig. 1 (left) is a DAG owing to the presence of symbolic links, News is not a DAG due to the presence of symbolic links which induce cycles. While so-called hard links create the natural parent-child relationships in a tree, the source vertex of a symbolic link is actually not the parent of its target vertex though it appears to be. Symbolic links create multiclassification in directories (Perugini, 2008) and are suffixed with @ in the ODP and Yahoo! directories (Perugini, 2008). The edge labeled ‘coupons@’ from vertex 3 to 5 in Fig. 1 (left) is a symbolic link. PVS is a tree since it does not use symbolic links. Since the targets of some of the symbolic links in News reside outside of the News section of ODP, we purged the symbolic links from News and analyzed a tree model of it. Values in the column labeled Depth indicate the minimum and maximum length of any sequence. In Fig. 1 (left) and PVS, the minimum and maximum depth equal each other. The column labeled #Tv provides the sum of the values from the cor- responding entries of the columns labeled #Nlv and #Lv. In the absence of symbolic links, the number of hyperlinks in a site (given in the column labeled #Lk) is equal to the number of child vertices in the site (or one minus the number of total ver- tices since the root is a child of no vertex). We define term as a string labeling a hyperlink (i.e., the complete text between the <a href=""> and </a> HTML tags). In Fig. 1 (left), ‘news’ is a term. Notice that while each term in Fig. 1 (left) contains only one word (i.e., any string of characters except space), this definition permits a term to consist of more than one word (e.g., ‘Business and Economy’ is one term in News) and this viewpoint is reflected in the terms counts in Table 2, which omit dupli- cates. The number of duplicate terms is given in the column labeled #Lk since the total number of terms equals the total number of hyperlinks based on the definition of term above. We compute the average (l) number of children per vertex as the total number of children (i.e., the total number of vertices minus one, divided by the total number of parents or non-leaf vertices, in a site). Table 2 reveals that there are more leaves than non-leaves in the two sites studied (1.69 times more in PVS and six times more in News). The terms in this paragraph are also defined in (Perugini, 2008). 4.3. Results Table 3 reveals that even though News has 100 less sequences than PVS, the out-of-turn transformation enables more interaction episodes in it. The minor increase in depth in News (none of its sequences extend more than two levels deeper than any in PVS) versus PVS translate to more than four times the number of episodes. However, in both sites, support for out-of-turn interaction drastically increases the scope of ways to interact with the site (cf. column labeled %D in Table 3). Table 2 Structural characteristics of the sites we studied. Site URL Type S? Depth #Tv #Nlv #Lv #Sq #Lk #UT l C/Nlv Fig. 1 (left) – DAG 3 13 7 6 11 15 14 1.71 p PVS http://vote-smart.org tree 4 857 319 538 538 856 116 2.68 News (ODP) http://dmoz.org/news graph [2–6] 511 73 438 438 510 292 6.99 S = structured, Tv = total vertices, Nlv = non-leaf vertices, Lv = leaf vertices, Sq = sequences, Lk = links, UT = unique terms, C/Nlv = children per non-leaf vertex. Author's personal copy 292 S. Perugini / Information Processing and Management 46 (2010) 284–294 Table 3 Statistics on the sizes of the browsing and out-of-turn paradigms in the sites we analyzed. Site jPBj jPOj %D #UET Fig. 1 (left) 11 143 1200 19 Project Vote Smart 538 40350 7400 1022 News (ODP) 438 177594 40447 1142 UTE = unique episode types. While there is a stark difference in the number of episodes supported by the out-of-turn transformation in each site, the number of distinct episode types is nearly identical. Specifically, the 40,350 total episodes in PVS fall into 1,022 unique epi- sode types and the 177,594 total episodes in News fall into 1,142 unique types. An episode type indicates how the episode transforms the site’s structure irrespective of the content of the utterances in the episode. For instance, in News, the episodes (Analysis and Opinion Columnists), (Directories) and (Analysis and Opinion Columnists), (By Publication) both leave the site with 14 sequences after the first utterance and only one after the second utterance and, therefore, both episodes have the same type. Fig. 2 (top left) examines the number of sequences remaining after each utterance across all episodes in News. Each line from the maximum y value (i.e., the starting number of sequences in a site) to the minimum y value (which is al- ways one) represents an episode. The dense areas of this graph depict the dominate episodes types. Note that in cases such as Fig. 1 (left, m ¼ 3) and PVS ðm ¼ 4Þ, where every leaf in the DAG model of the site resides at the same depth, the values in the column labeled % in Table 3 are the same as those in the column labeled the same in Table 1. Therefore, while the values for jPOj and % in Table 1 are wrt one browsing episode, they are relevant to the computation of the number of episodes in sites with a consistent length across all sequences. To reiterate, the result of the out-of-turn transformation depend on the frequency of sequences in which the term sup- plied out-of-turn is contained. For instance, the term ‘democrat’ occurs in 286 of the 538 sequences through PVS and, there- fore, supplying it out-of-turn causes the hierarchy to be thinned. In contrast, the term ‘Washington, DC’ occurs in only one sequence in PVS and, thus, saying it out-of-turn results in a shortcut directly to the terminal webpage of the democrat Fig. 2. Graphs to help explore the cumulative effect of the out-of-turn transformation on News: (top left) episodes, (top right) distribution of episodes across utterances, (bottom left) distribution of episode types across episodes, (bottom right) distribution of terms across sequences.
{"url":"https://pdfroom.com/books/personalization-by-website-transformation-theory-and-practice/N7jgkrEEdMV","timestamp":"2024-11-05T06:41:47Z","content_type":"text/html","content_length":"140280","record_id":"<urn:uuid:f14e7b17-d3ce-41a6-9eae-cc72cd9d219b>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00881.warc.gz"}
JMeq and functions Does anyone know if we can prove the following proposition from a reasonable set of axioms (ideally just JMeq_eq and maybe functional_extensionality)? Lemma JMeq_comp (A B : Type) (f : A -> B) (g : B -> A) : JMeq f (@id B )-> JMeq g (@id A) -> forall a, g (f a) = a. I can't seem to extract useful information from the JMeq assumptions. you need Pi-injectivity Require Import JMeq. Lemma JMeq_ty {A B x y} (H : @JMeq A x B y) : A = B. destruct H;reflexivity. Axiom arrow_inj_dom : forall A B A' B', (A -> B) = (A' -> B') -> A = A'. Lemma JMeq_comp (A B : Type) (f : A -> B) (g : B -> A) : JMeq f (@id B )-> JMeq g (@id A) -> forall a, g (f a) = a. intros Hf Hg. pose proof (JMeq_ty Hf) as HAB. apply arrow_inj_dom in HAB. destruct HAB. apply JMeq_eq in Hf,Hg. symmetry in Hf,Hg. destruct Hf,Hg. this is IMO not a great axiom and conflicts with more common axioms such as propext Axiom arrow_inj_dom : forall A B A' B', (A -> B) = (A' -> B') -> A = A'. Axiom propext : forall P Q : Prop, P <-> Q -> P = Q. Lemma bad : False. assert ((True -> True) = (False -> True)). { apply propext. split;auto. } assert ((True -> True) = (False -> True) :> Type). { destruct H. reflexivity. } apply arrow_inj_dom in H0. refine (match H0 in _ = T return T with eq_refl => I end). @Gaëtan Gilbert : since Coq is more and more used by people who did not study logic (like me - physicist) I wonder if we should have a way to declare incompatibility of modules in Coq - possibly combined with such a proof of False. This would allow to also include such "not so great" axioms in a library and document the consequences in way which results in error messages for users which combine the axioms. There was some discussion about such an "incompatible axiom" mechanism on a user meeting a few years back, but afaik it did not result in something tangible. this advice would already rule out most issues, unless you combine things in some strange way? Developers Rule 4.1. Developers are only allowed to use the logical axioms defined in the Coq.Logic.* modules of the Coq standard library.Besides, they shall list the logical axioms they use in their formal development. @Karl Palmskog : this is more about exploring possible presentations of some topic than writing standard library code. It does happen that using certain axioms (say JMEq) makes using definitions easier. In the industry one is more frequently willing to add some axioms to make things easier, but of cause things should remain sound. the document is already aimed at industry users, since academics don't do much with common criteria Is there a db of known incompatibilities of axioms somewhere? Similar to the nice https://github.com/coq/coq/wiki/The-Logic-of-Coq#axioms but for incompatibilities instead of implications. That and a clear mandatory (automatically added by coqdoc?)) "Axioms" section in each coqdoc document would be nice indeed. see discussion here: https://coq.discourse.group/t/what-do-you-think-of-axioms/231/10 There's no database of incompatible axioms to my knowledge, and arguably such a database should be kept in Coq's GitHub repo so as to reference the actual definitions of axioms, which seemingly changes over time. as for axiom database maintenance, the best thing is likely to maintain a bunch of proofs of False as part of the Stdlib. Next best thing is to do is what essentially amounts to the same, make a project with these proofs of False and add it to Coq's CI. Indeed proofs of False are the best way to document incompatibilities and also teach were the issues are - I find @Gaëtan Gilbert's proof above much more instructive than any explanation. But what IMHO is missing in Coq's machinery is a way to produce an error if two modules defining incompatible axioms are required. One could implement such functionality with Ltac2 / Ltac, though (I think), if we don't want to have a dedicated mechanism for this. that can only be done for "known" axioms though, we can't detect it in general I don't think we have incompatible axioms in the stdlib, the stdlib axioms are things like propext and choice One could implement such functionality with Ltac2 / Ltac, though (I think) I don't see how AFAIK, all the axioms we have in the stdlib are compatible with the Set model of CIC. Barring the HoTT people, Set seems to be the intended model of Coq post 8.0. @Gaëtan Gilbert : yes this is clear. But I would think that the combined knowledge of the Coq Team would be very helpful. I don't see how With Ltac2 I see two ways to explore: • keeping a mutable list of axiom name strings which are already required. I am not sure this would work - since one can't change mutable definitions by tactics I hardly use them, but at the global level this should work. • having a Gallina definition with a certain well defined name and e.g. string value, say Definition DefineAxiom:="AxiomBla". and write a Ltac2 function which searches the global name space for such definitions and checks them against a negative list. At each axiom declaration one would Ltac2 Eval this function with the corresponding negative list. For Ltac there are tricks with tactic notations to get the effect of a mutable definition, but it is quite a hack. I didn't do this in a while and would have to look up the details, but I did stuff like custom hint databases this way in the past. I didn't say there are elegant / performant solutions ... But then in any case one should not have more than a handful of axioms, so performance should not be much of an issue. but how is the error triggered? the user does an explicit Ltac2 Eval check_axiom_compat ().? Ah yes, the Ltac2 Eval is not done during require time ... Thinking ... One could make the axioms depend on some axiom compatibility type class and trigger the error during type class search with an extern hint I guess (at the moment the axiom is used). Anyway - you see why I think some dedicated support for detecting incompatible features would make sense. I think one can also use this in other cases where requiring two modules is not a good idea. And an Ltac2 Eval check_axiom_compat (). would definitely better than nothing and likely better than some hack. but it would be some hack Yes, definitely all Ltac/Ltac2 solutions I can think of would be hacks. I would absolutely prefer a mechanism which e.g. allows to define arbitrary "features" and to declare incompatible "features". I would do this as a module level definition and requiring a module should trigger an error. What I meant above is that check_axiom_compat would be less of a hack than some other solutions. Last updated: Oct 13 2024 at 01:02 UTC
{"url":"https://coq.gitlab.io/zulip-archive/stream/237977-Coq-users/topic/JMeq.20and.20functions.html","timestamp":"2024-11-04T07:41:12Z","content_type":"text/html","content_length":"27707","record_id":"<urn:uuid:af124387-6b40-46e2-9e76-1a3e5fedf48f>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00597.warc.gz"}
Interest Rate Calculator | Calculate Financial Growth Estimate your potential savings and plan for the future with our easy-to-use Interest Rate Calculator. Calculate loan and investment rates today. Interest Rate Calculator An interest rate calculator is a valuable program that helps you to figure out how much you'll earn or pay in interest on the money you borrow or save. It works by taking into account three main things - the principal amount, the interest rate, and the period. You input these values, and the calculator do the math for you, showing the total amount you'll have or owe at the end. It's a helpful tool for making financial decisions, like choosing a savings account or understanding the cost of a loan, all without needing to be a math whiz. Components of an Interest Rate Calculator Interest Rate This is like a fee you pay or earn for using money of someone else. It's usually a percentage, like 5% or 8%. If you're saving money in a bank, the bank might give you some extra money as interest. If you're borrowing, you'll have to pay extra money as interest. Loan (Principal) Amount This is the initial amount of money you have or want to borrow. It can be the starting point for your calculations. For example, if you're saving $1000 or borrowing $5000, that's your principal If you are a student and availing a student loan then using our student loan calculator can create an ease for your calculations. Monthly Payment This is the amount that defines the amount that you are planning to pay each month. Calculations will depend on this amount, taking into account the loan terms and monthly payment the calculation is made and result is shown that shows for how many months you will pay the amount and the total amount that would be paid including the amount of interest. This is for how long you plan to keep your money saved or borrowed. It's usually measured in years but can also be in months or days. For example, you might save money in a bank for five years, or you might borrow money for two months. Manual - How to Use the Interest Rate Calculator? Enter the amount of loan that you want to acquire. Input the loan term, enter the time span for the loan amount in months and years. Enter the monthly payment amount that you want to pay, the calculations will change according to this amount. After entering the figures click the "Calculate" button and see the result of calculations on the screen. Interest Rate Calculator Types To comprehend how interest rates affect different financial products, interest rate calculators are essential resources. They are available in multiple varieties, each fulfilling a distinct function and accomodating a range of budgetary demands. The primary categories of interest rate calculators are as below. Basic Interest Rate Calculator The interest on a principal amount based on a fixed interest rate over a given length of time is calculated using a basic interest calculator. The formula is used to calculate simple interest. Interest = P x R x T When making investments or short-term loans where the interest does not compound, this kind of calculator is frequently utilized. For instance, the simple interest earned on a $1000 investment over a two-year period at a 5% annual interest rate would be $100. Compound Interest Calculator Compound interest, in contrast to simple interest, is computed using both the original principal and the interest accrued over time. This implies that interest is accrued on interest, resulting in progressive exponential development. This is the compound interest formula. A = P (1+nR)^n x T For long-term investments and savings accounts where interest is compounded on a monthly, quarterly, or annual basis, compound interest calculators are perfect. Loan Comparison Interest Rate Calculator By analyzing the effects of different interest rates and loan periods on monthly payments and total interest charges, loan comparison calculators assist users in comparing various loan possibilities. To determine which option offers the best conditions, users can enter details for a number of different loan scenarios. When looking for a mortgage, personal loan, or auto loan, this calculator is crucial for helping you make well-informed decisions. Savings Growth Interest Rate Calculator Savings Growth Interest Rate Calculators use a fixed interest rate, contribution amount, and time period to forecast the future value of savings or investments. These calculators frequently predict how regular contributions and accrued interest will increase over time using the concepts of compound interest. They come in handy for organizing long-term financial objectives like college money, retirement savings, and so forth. Every kind of interest rate calculator has a different purpose and aids users in managing and maximizing their financial choices according to their own requirements and objectives. Advantages of Interest Rate Calculator To achieve financial stability and success in the complex financial environment of today, it can be essential to comprehend and manage interest rates. With so many benefits to offer, an interest rate calculator is a potent tool that makes this process easier. Here, we examine the main advantages of utilizing an interest rate calculator, such as its capacity to reduce bad debt, enable comparison shopping, and be user-friendly. 1. Accurate Financial Decisions You can make accurate financial decisions with the help of interest rate calculator. The calculator helps its users to assess the financial situations. You can easily find this calculator online and avail its benefits in less time. With the interest rate calculator, you can better understand the loan term and its process. 2. Financial Planning In addition to making accurate financial decisions you can make a financial plan for yourself, you can plan in any month of the year and follow your plan throughout the months of the year. With this calculator, you can take a disciplined approach towards your financial goals. 3. Avoiding Negative Debt Preventing bad debt is one of the biggest advantages of utilizing an interest rate calculator. Bad debt is when people take on more debt than they can afford, frequently as a result of not realizing the long-term effects of interest on their repayment obligations. By showing users how various interest rates impact the total amount repayable, an interest rate calculator helps consumers avoid this trap. Users can evaluate how changes in interest rates affect their monthly payments and total debt burden by entering different interest rates into the calculator. People are able to make wise choices because of this foresight, which guarantees that they only take out loans or other credit products that they can easily afford. The calculator can be used, for instance, by someone thinking about getting a mortgage to compare the total cost of loans with various interest rates. They might select a loan with a lower rate or bargain for better terms if they can see that a higher rate translates into noticeably greater monthly payments and a higher overall repayment amount. This perspective strategy lowers the likelihood of running into financial issues by preventing the accumulation of debt that could become overwhelming. 4. Facilitates Comparability The ability of an interest rate calculator to facilitate comparison is another important advantage. It can be difficult to decide which financial instrument gives the most value when there are so many options available, including credit cards, loans, and savings accounts, all of which have varying interest rates. This procedure is made simpler by an interest rate calculator, which enables consumers to compare various financial products side by side. Users can immediately examine how different rates effect their payments and overall interest charges by entering the pertinent parameters, such as the principal amount, interest rate, and period. When looking for a loan or credit card, this comparative study is especially helpful because it shows customers which options have the best terms. For example, the calculator may display the impact of each rate on monthly payments and total expenditures during the loan life when comparing two mortgage offers, one with a fixed rate and the other with a variable rate. By allowing customers to select the option that best fits their budget and financial objectives, this comparison guarantees they are getting the best value for their money. 5. User Friendly Because of its user-friendly design, interest rate calculators are accessible to even individuals with less financial expertise. Their ease of use is a big plus because it makes complicated computations simple and quick for consumers to accomplish. The majority of interest rate calculators function by having the user entered essential data like the loan period, interest rate, and principle amount in simple input areas. After processing this input, the calculator outputs the results in an understandable way. A lot of calculators also provide users with visual aids, such graphs and charts, to assist them comprehend the effects of various interest rates. For instance, a user can quickly see how different interest rates effect their monthly payments and final repayment amount by simply entering their loan amount and period into the calculator. It is a vital tool for anyone handling personal finances because of its simplicity of use, which removes the need for manual calculations and lowers the possibility of errors. 6. Strengthened Judgment Taking Ultimately, by giving customers access to precise, useful information, an interest rate calculator improves decision-making. Users can choose financial products and strategies more intelligently if they have access to comprehensive computations and comparisons. The calculator helps demonstrate how a person's monthly payments and total loan cost will be affected by a changing interest rate, for instance, if they are thinking about refinancing their mortgage. By comparing the advantages of refinancing with the possible expenses, customers are better able to make wiser financial decisions. In a similar vein, the calculator can help users make the best decisions by showing how changing interest rates can impact returns when selecting between various savings accounts or investment possibilities. An interest rate calculator can help with comparison shopping, bad debt prevention, ease of use, informed financial planning, and improved decision-making, among other things. Through the use of this tool, people can improve their decision-making skills, obtain a better grasp of how interest rates affect their finances, and eventually become more successful and stable financially. 7. Well-Informed Budgeting An interest rate calculator offers useful information about how interest rates will impact financial goods over time, which helps with well-informed financial planning. Users can make better financial decisions if they are aware of how various interest rates will affect them in the long run. For example, users can use the calculator to estimate how much they would need to borrow and what their payments will be at different interest rates when considering a large purchase or investment. This foresight ensures that people are well-prepared for future financial obligations by aiding in budgeting and saving. It can also help in establishing reasonable financial objectives, including figuring out how much to invest for retirement or how much to save for a down payment on a home.
{"url":"https://calculator.cool/interest-rate-calculator.html","timestamp":"2024-11-07T15:11:35Z","content_type":"text/html","content_length":"71604","record_id":"<urn:uuid:2af73c25-ad7c-4eda-aaa9-e0cdaacd0fd8>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00275.warc.gz"}
@Article{JAMS-3-3, author = {Min, Liu and Xiao-Chen, Jin and Ru-Wei, Gao}, title = {Effects of Intergranular Phase and Structure Defect on the Coercivity for the HDDR Nd-Fe-B Bonded Magnet}, journal = {Journal of Atomic and Molecular Sciences}, year = {2012}, volume = {3}, number = {3}, pages = {218--226}, abstract = { Based on the specific microstructure of HDDR (hydrogenation, disproportionation, desorption, recombination) grains, that the bivariate model concerning the anisotropy constant $K'_1$ and exchange integral $A'_1$ in defect region, which was put forward. Subsequently, the dependence of magnet coercivity on the intergranular phase thickness $d$ and structure defect thickness $r_0$ was studied. The results showed that the coercivity, $H_c,$ increases with increasing $d,$ for the $r_0,$ the anisotropy constant $K_1(0)$ and exchange integral constant $A_1(0)$ at the grain surface taking different values. While $K_1(0)$ and $A_1(0)$ are fixed, $H_c$ enhances with increasing $r_0$ for the same $d.$ On the contrary, for the fixed $r_0$ and $d,$ $H_c$ decreases with increasing $K_1(0)$ or $A_1(0).$ The calculated coercivity is in good agreement with experimental results given by others when $d$ takes 1 nm, $r_0$ is in the rang of 2-5 nm, $A_1(0)$ and $K_1(0)$ change in the range of (0.6-0.7) of $A_1$ and $K_1,$ respectively. }, issn = {2079-7346}, doi = {https://doi.org/10.4208/jams.053111.071211a}, url = {https://global-sci.com/article/74377/ effects-of-intergranular-phase-and-structure-defect-on-the-coercivity-for-the-hddr-nd-fe-b-bonded-magnet} }
{"url":"https://global-sci.com/article/download/74377/bib","timestamp":"2024-11-10T04:37:50Z","content_type":"application/x-bibtex-text-file","content_length":"3966","record_id":"<urn:uuid:b3009a60-ae6a-4810-9a40-643bdac208f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00313.warc.gz"}
Who's That Mathematician? Paul R. Halmos Collection - Page 41 For more information about Paul R. Halmos (1916-2006) and about the Paul R. Halmos Photograph Collection, please see the introduction to this article on page 1. A new page featuring six photographs will be posted at the start of each week during 2012. Halmos photographed Russian topologist Lev Pontryagin (1908-1988) on Oct. 8, 1964, in Ann Arbor, Michigan. In his book, I Want to Be a Mathematician: An Automathography (Springer, 1985), Halmos related how he and Pontryagin met again in 1965 when Halmos visited Moscow as part of an exchange program that allowed U.S. and Soviet scientists to visit one another’s countries (pp. 290, 311-312). Another photo of Pontryagin, taken in 1958, appears on page 40 of this collection and you can read more about him there. Pasquale Porcelli and W. T. Reid were photographed by Halmos in April of 1961 in Chicago, Illinois, probably at the University of Chicago. Halmos was just ending his tenure as a faculty member at Chicago, and was about to move to the University of Michigan in Ann Arbor. Pasquale Porcelli (d. 1972) earned his Ph.D. at the University of Texas at Austin in 1952. After serving on the faculty of the Illinois Institute of Technology (IIT), he joined the mathematics faculty at Louisiana State University in Baton Rouge in 1959 and spent the rest of his career there, becoming a Boyd Professor, the highest professorial rank at LSU, in 1965. Porcelli advised at least 19 Ph.D. students during his career. His Ph.D. students at LSU included Ronald Douglas, who is pictured on page 13 of this collection and who has spent his career at the University of Michigan, SUNY Stony Brook, and Texas A & M University, where he is now Distinguished Professor of Mathematics. Douglas recently wrote about his first experience with inquiry-based learning in Porcelli’s calculus classroom at IIT, an experience he says “changed his life” by inspiring him not only to major in math at IIT, but also to follow Porcelli to LSU to obtain his Ph.D. The LSU Mathematics Department honors Porcelli with the Porcelli Lecture Series, the Porcelli Research and Academic Excellence Awards for graduate students, and the Porcelli Scholarships for undergraduates. (Sources: Mathematics Genealogy Project; “Inquiry-Based Learning: Yesterday and Today,” AMS Notices 59:5 (May 2012), pp. 668-9; LSU Mathematics) William Thomas Reid (1907-1977) earned his Ph.D. in differential equations at the University of Texas at Austin in 1929. He was a faculty member at the University of Chicago from 1929 to 1944 and at Northwestern University in Evanston, Illinois, from 1944 to 1959. In 1959, he became chair of the mathematics department at the University of Iowa and, in 1964, he became Phillips Professor of Mathematics at the University of Oklahoma. His research areas included differential equations, calculus of variations, and optimal control, and he advised at least 26 Ph.D. students during his career. Of the seven Ph.D. students he supervised at the University of Chicago, the first three were co-advised by Gilbert Bliss, and one of these was Herman Goldstine, who became a leader in the fields of numerical analysis and computing and then wrote excellent histories of both fields. The Society for Industrial and Applied Mathematics (SIAM) honors both Reid and his wife, Idalia Reid (d. 2000), with its annual W. T. and Idalia Reid Prize in Mathematics, awarded for research in differential equations and control theory. (Sources: Hardin-Simmons University biography, Archives of American Mathematics biography, SIAM biography, Mathematics Genealogy Project) Halmos photographed Cora Lee Beers Price (d. 2004) and G. Baley Price (1905-2006) in April of 1983 at the University of Kansas in Lawrence. Cora Price was a professor of English and Classics at the University of Kansas; her husband Griffith Baley Price was the mathematician in the family. G. B. Price earned his Ph.D. in 1932 from Harvard University with the dissertation, “Double Pendulum and Similar Dynamical Systems,” written under George David Birkhoff. He joined the University of Kansas faculty in 1937 and spent the rest of his career there. As president of the MAA during 1957-58, he helped organize the School Mathematics Study Group in 1958. He also was instrumental in founding and developing such mathematical mainstays as Mathematical Reviews and the Conference Board of the Mathematical Sciences (CBMS) and such MAA fixtures as the Hedrick Lectures and the Committee on the Undergraduate Program in Mathematics (CUPM). In the biography of Price at MAA Presidents, he is described as a person of both ideas and action; that is, as someone who both “conceived what ought to be done” and was “there to see it through.” The University of Kansas honors his memory with the G. Baley Price Award for Excellence in Teaching and the G. Baley Price Professorship in Mathematics. (Sources: University of Kansas obituary, MAA Presidents, Mathematics Genealogy Project) Richard Rado (1906-1989), Robert Rankin (1915-2001), and Hans Reimann, left to right, were photographed by Halmos in April of 1965 at the British Mathematical Colloquium in Dundee, Scotland. Halmos was one of three main speakers at this conference (I Want to Be a Mathematician, Springer, 1985, pp. 290-292). Another photograph of Rankin appears on page 7 of this collection, where you can read more about him. Born in Berlin, Germany, Richard Rado earned doctoral degrees from the University of Berlin in 1933 and from Cambridge University in 1935. At the University of Berlin, he wrote the dissertation, “Studies on combinatorics,” under advisor Issai Schur and at Cambridge, he wrote the dissertation, “Linear Transformations on Bounded Sequences,” under advisor G. H. Hardy. Although he would write papers in both fields, his research throughout his career was primarily in combinatorics. In 1934, Rado met Paul Erdős, who had earned his Ph.D. in Budapest that year and accepted a fellowship at the University of Manchester in England, and the two began to collaborate. Erdős described the strengths each brought to their collaboration as follows: I was good at discovering perhaps difficult and interesting special cases and Richard was good at generalising them and putting them in their proper perspective (quoted by O’Connor and Robertson in their MacTutor Archive biography of Rado). After spending 1935-36 at Cambridge University, Rado was on the mathematics faculty at the University of Sheffield, England, from 1936 to 1947, then at King’s College, London, from 1947 to 1954, and finally at the University of Reading in England from 1954 onward. Much like another couple featured in this collection, Leonard and Reba Gillman (see page 17), Richard Rado and his wife, Luise Zadek Rado (d. 1990), were highly accomplished musicians, he as a pianist and she as a singer, and gave both public and private concerts. (Sources: MacTutor Archive, Mathematics Genealogy Project) Hans-Martin Reimann earned his Ph.D. in 1969 at the Eidgenössische Technische Hochschule (ETH) in Zürich, Switzerland. After receiving the Diploma in Mathematics from ETH in 1964, he spent the 1964-1965 academic year at the University of Edinburgh studying with Arthur Erdélyi before beginning Ph.D. work at ETH in 1965. He has spent most of his career at the University of Bern, Switzerland, becoming Professor Emeritus in 2006, and, in 2012, his webpage listed his research interests as complex analysis, quasiconformal mappings, Lie groups, symplectic geometry, and wavelets. Recently, Reimann remembered meeting Halmos in Scotland during his year of study there and, in particular, having a cup of coffee with him after one of the conference lectures. He also wrote, "I started growing a beard only in 1972, yet there are hardly any pictures of me without a beard." For some evidence of his claim, see Reimann's photographs in the Oberwolfach Photo Collection and at his Universität Bern webpage. (Sources: Mathematics Genealogy Project, Universität Bern Mathematics, Hans-Martin Reimann (Nov. 2012)) Halmos photographed George Yuri Rainich (1886-1968), in 1964, probably at the University of Michigan in Ann Arbor, where Halmos was a faculty member from 1961 to 1968 and Rainich a faculty member from 1926 onward, becoming Professor Emeritus in 1956. Born in Odessa, Russia, George Yuri Rabinovich earned his doctoral degree (Magister of Pure Mathematics) in 1913 from the University of Kazan, Russia. He remained on the faculty at Kazan until 1917 and was a mathematics professor at the University of Odessa from 1917 to 1922. During 1922 and 1923, he spent several months traveling to the U.S. and obtaining a fellowship at Johns Hopkins University in Baltimore, Maryland, where he became George Yuri Rainich. During the 1920s and certainly by 1926, when he joined the University of Michigan mathematics faculty, his main research focus had become relativity theory. According to his Ph.D. student (1947) and University of Michigan colleague (1947-1973, Emeritus 1973-1979) Kenneth B. Leisenring, Rainich wrote a series of papers during the 1920s in which he “showed that the mathematics of the general theory, which Einstein had made to supply a model for gravitation, also supplied one for electromagnetism” (UM Memorial), published the book Mathematics of Relativity (Wiley) in 1950, and in 1963, after serving as a visiting professor at the University of Notre Dame for several years, returned to UM to conduct a seminar on relativity theory. He advised at least 19 Ph.D. students at the University of Michigan in a wide variety of topics, and he established a fellowship fund at UM intended to promote graduate study in mathematics for African-American and international students. If you’ve heard one story about Rainich, it probably was that, during a lecture on relativity theory at Columbia University, an audience member asked why he had not cited the work of Rabinovich among that of other leading researchers on the subject. Showing some embarrassment, Rainich answered, “Well, you see, I am Rabinovich.” (Sources: University of Michigan Faculty History Project, Mathematics Genealogy Project) Halmos photographed Robert Rankin (1915-2001), left, and H. Garth Dales in 1973 at the University of Glasgow, Scotland, where Rankin was the longtime Head of Department and Dales was on the faculty. Two more photos of Rankin appear above and on page 7 of this collection, where you can read more about him. H. Garth Dales earned his Ph.D. in functional analysis in 1970 from the University of Newcastle-upon-Tyne, England. From his first academic post at Glasgow (1970-1973), he moved in 1973 to the University of Leeds, England, where he became Professor Emeritus of Pure Mathematics in 2011 and continues to carry out research on Banach algebras. Dales wrote (from his visiting position at the University of California, Berkeley, in November, 2012) that Halmos may have been at Glasgow to give a lecture to the "North British Functional Analysis Seminar." He also shared the following story about the intertwining of his and Rankin's lives: Robert Rankin was the son of a professor of Old Testament Studies at the University of Edinburgh. He took a first degree and PhD at Cambridge, and then became a Fellow of Clare College, Cambridge. In 1945, after the war, Clare College appointed two Fellows - Rankin in Science and a certain Dr John Parry in Arts - and Rankin and Parry and their families shared a house owned by Clare in Cambridge for a couple of years. Eventually John Parry became a famous professor of Oceanic History at Harvard - and I married one of his daughters, Joanna Clare, who had, as a girl, lived in the Cambridge house with the Rankin family! (Sources: Mathematics Genealogy Project, University of Leeds Mathematics, Garth Dales (Nov. 2012)) For an introduction to this article and to the Paul R. Halmos Photograph Collection, please see page 1. Watch for a new page featuring six new photographs each week during 2012. Regarding sources for this page: Information for which a source is not given either appeared on the reverse side of the photograph or was obtained from various sources during 2011-12 by archivist Carol Mead of the Archives of American Mathematics, Dolph Briscoe Center for American History, University of Texas, Austin.
{"url":"https://old.maa.org/press/periodicals/convergence/whos-that-mathematician-paul-r-halmos-collection-page-41","timestamp":"2024-11-11T11:10:36Z","content_type":"application/xhtml+xml","content_length":"134334","record_id":"<urn:uuid:9c63a424-f4db-4da7-93d0-1e2000800f6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00144.warc.gz"}
Crank mechanism statics - free body diagrams We devoted three previous tutorials to resolving by various methods the displacement, velocity and acceleration (the kinematics) of the elements in a slider and crank mechanism. In the following group of tutorials we examine the kinetics of the crank mechanism, that is the forces and moments arising in the individual elements and their consequences, in particular the output torque delivered to the crankshaft of an engine and unbalanced forces arising from inertial effects. As an introduction in this tutorial we develop simple free body diagrams based on a static analysis, neglecting inertia, friction and gravitational forces, for slider and crank mechanisms with loading: (a) on an outward power stroke (as an engine) and (b) on an inward power stroke (as a compressor). In these contexts we use the term "piston" (enclosed in a cylinder) rather than "slider". Free body diagrams A free body diagram shows forces and moments acting on individual machine elements. For static equilibrium conditions sum of forces = 0 and sum of moments = 0. In a subsequent tutorial we show how inertia forces associated with acceleration of elements are included in static force analysis using D'Alembert's principle. Example 1 - outward power stroke of an engine The figure below shows the outward* power stroke of a horizontal engine producing counter-clockwise rotation of the crank arm, shown at crank angle θ = 50°. Dimensions and other parameters are identical to examples in the kinematics tutorials with piston force F[P] added. The free body diagrams and calculated values of forces and moments are, of course, different for every crank angle over a complete cycle from 0°to 360°. * For horizontal engines extremities of travel of the piston are known as inner dead centre (i.d.c.) at crank angle = 0° and outer dead centre (o.d.c.) at crank angle 180°. The corresponding terms for a vertical engine are respectively top dead centre and bottom dead centre. The notation for vertical engines is generally used for internal combustion engines regardless of their orientation. Note that a flipped image of this arrangement with piston on the left and crank on the right has clockwise rotation of the crank. Drawing the diagrams We draw free body diagrams for the three separate elements: piston, connecting rod and crank arm. The line of action of forces is taken through the centre of the pin joints. In this case we consider the pins simply as a means of transmitting forces between elements but they could be treated as individual elements with their own diagrams. In this instance the length of an arrow has no significance as regards magnitude of the force. Diagrams for each element are shown below. The description which follows explains how the diagrams are constructed. This degree of detail is intended for readers unfamiliar with free body diagrams. Starting with pin B linking the piston and connecting rod, we know the direction and magnitude of F[P]. The only other force acting on pin B is from the connecting rod with direction defined by angle φ. It is convenient to consider forces as components on x and y co-ordinates thus we designate forces from the connecting rod acting on the piston as R[xB] and R[yB]. In this instance directions of R [xB] and R[yB] can be assigned intuitively (the connecting rod exerts a horizontal force in a direction opposed to F[P] and a vertical force downward). It follows (sum of forces on y axis = 0) that R[yB] is accompanied by an equal and opposite force R[yS] which is the reaction force applied from the cylinder wall to the piston. (We could continue by showing action and reaction forces between the cylinder mountings and ground but this is superfluous for our purpose). It is not always possible to draw the direction of a force using intuition, but as long as the given sign is in accordance with the co-ordinate axes, signs of calculated forces and moments will ultimately resolve correctly. Because all forces on the piston act through a common point at pin B there are no moments to consider. Moving to the diagram for the connecting rod, forces acting on the rod at pin B are equal and opposite to forces R[xB] and R[yB] acting on the piston. At pin A there are forces paired with the crank arm, designated R[xA] and R[yA]. As there are no other forces acting on the connecting rod, the directions of R[xA] and R[yA] must be opposed to R[xB] and R[yB] from the equilibrium conditions for forces in x and y directions. The forces acting on the connecting rod are not directed through a single point hence moments must be considered, either moments of forces at pin A around pin B or vice versa, the choice is arbitrary (see below). On the crank arm there are forces R[xA] and R[yA] at pin A equal and opposite to the forces on the connecting rod. At pin O there are forces R[xO] and R[yO] transmitted through crankshaft pin O. In engineering terms pin O is the crankshaft bearing journal. Because there are no other forces on the crank arm, directions of R[xO] and R[yO] are opposed to R[xA] and R[yA]. R[xA] and R[yA] both generate a counter-clockwise moment about pin O. For the crank arm to be in static equilibrium a clockwise moment M[O] about pin O must be applied. Envisage M[O] as a clockwise "load torque" applied to pin O providing a reaction moment equal and opposite to the clockwise moment generated about O by R[xA] and R[yA]. Equilibrium equations and calculations It is good practice to place all terms initially on the LHS of the equations according to their sign as per the defined x and y axes. This ensures any wrongly assigned direction is highlighted when resolving the equations. For Σ horizontal forces = 0 : F[P] - R[xB] = 0 &nbsp gives: &nbsp R[xB] = F[P] = 1 kN --------- (1) For Σ vertical forces = 0 : R[yS] - R[yB] = 0 gives: R[yB] = R[yS] (the value of R[yB] is derived from equilibrium conditions for the connecting rod) Connecting rod For Σ horizontal forces = 0 : R[xB] - R[xA] = 0 gives: R[xB] = R[xA] gives from (1): R[xA] = 1 kN -------- (2) For Σ vertical forces = 0 : R[yB] - R[yA] gives: R[yB] = R[yA] -------- (3) For Σ moments = 0 : We choose moments of forces R[xA] and R[yA] around pin B (counter-clockwise moments are +ve). R[xB] and R[yB] have no moment as they act through B. From the geometry of the main diagram: R[yA] x AB.Cosφ - R[xA] x AB.Sinφ = 0 gives: (R[xA] x AB.Sinφ) = (R[yA] x AB.Cosφ) gives: R[yA] = (R[xA ]x Tanφ) gives from (2): R[yA] = 1000 x Tan(14.79°) gives: R[yA] = 264 N --------(4) which from (3) gives: R[yB] = R[yS] = 264 N Crank arm For Σ horizontal forces = 0 : R[xA] - R[xO]= 0 gives: R[xA] = R[xO] from (2) gives: R[xA] = R[xO] = 1 kN For Σ vertical forces = 0 : R[yA] - R[yO] = 0 gives: R[yA] = R[yO] from (4) gives: R[yA] = R[yO] = 264 N For Σ moments = 0 : Moments about O R[xO] and R[yO] have no moment as they act through O. (R[xA] x OA.Sinθ) + (R[yA] x OA.Cosθ) - M[O] = 0 gives: M[O] = (R[xA] x OA.Sinθ) + (R[yA] x OA.Cosθ) = (1000 x 1 x Sin(50°)) + (264 x 1 x Cos(50°)) gives: M[O] = 935 Nm Calculated values for forces and moment are indicated on the diagrams below. The following can be observed from the diagrams. • At inner dead centre and outer dead centre positions (i.e. crank angles θ = 0° and 180°) all vertical forces at the pin joints are zero and action of all horizontal forces is through crankshaft pin O. Thus there is no moment around pin O. At these crank angles an engine produces zero torque. • For a double-acting engine where the inward stroke is also a power stroke (crank angles from 180° to 360°) the direction of F[P] acting on the piston is reversed and consequently all horizontal forces on pin joints reverse. Vertical forces at pin joints do not reverse. Visualise the connecting rod reaction force pushing down on to the cylinder wall on the outward stroke and pulling down on to the cylinder wall on the inward stroke. • Reversing the direction of rotation* of the crankshaft reverses angular positions of the crank arm and connecting rod symmetrically about the axis of the stroke on outward and inward strokes. For clockwise rotation the connecting rod exerts an upward force on to the cylinder wall on inward and outward strokes. * In practice the direction of rotation of internal combustion engines is not reversible being determined by the fixed arrangement of fuel intake and exhaust valves. Steam engines can be reversed by adjusting the timing of steam inlet and exhaust valves by independent movement of the valve gear. • Forces transmitted along the crank arm and connecting rod can be resolved from the components of horizontal and vertical forces along the respective longitudinal axes as illustrated below. Forces are considered at pin joint B for the connecting rod and pin joint A for the crank arm. Pin joints A and O respectively could equally well be used. In this instance forces have been drawn to scale. At this crank angle net longitudinal forces in both crank arm and connecting rod are compressive. Example 2 - inward compression stroke The figure below shows the configuration for the inward compression stroke which could apply to a reciprocating gas compressor, the compression stroke of an internal combustion engine or a positive displacement pump. The crank angle in this example is 310°. Free body diagrams for this configuration are shown below. Intuitively we visualise the connecting rod pushing upwards with the driving torque applied from the crankshaft, resulting in an upward force on the cylinder wall. Note that the "balancing" torque at crankshaft pin O required for static equilibrium is counter-clockwise. Next: Crank mechanism - inertia forces and crankshaft torque I welcome feedback at:
{"url":"https://alistairstutorials.co.uk/tutorial18.html","timestamp":"2024-11-05T22:33:20Z","content_type":"text/html","content_length":"15605","record_id":"<urn:uuid:fa9f1e5b-bcc1-472a-be91-9ce261cb96b6>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00056.warc.gz"}
Project Part 09 (F. Pillichshammer): Digital Nets and Lattice Based Integration Rules Quasi-Monte Carlo Methods: Theory and Applications FWF Special Research Program (SFB) Project Part 09 (F. Pillichshammer): Digital Nets and Lattice Based Integration Rules SFB funding period 2 (2018-2022) The central objects of this project part are digital nets and sequences as well as (polynomial) lattice point sets. All these point sets and sequences have important applications as integration nodes in quasi-Monte Carlo (QMC) rules. The problems that will be considered can are grouped into three overall topics. (A) The Discrepancy of point sets and sequences, (B) integration and approximation of functions over $D \subseteq\ mathbb{R}^s$, and (C) general polynomial lattices and Kronecker sequences. In (A) we will be concerned with the discrepancy of digital nets and sequences and of lattices. One central object is the Halton sequence for which we aim at finding the exact order of $L_p$ discrepancy for finite $p$. A further question is concerned with point sets in arbitrary dimension whose (weighted) discrepancy is bounded independently of the dimension. In the second overall topic (B) we will be concerned with integration and approximation problems for functions from important function spaces such as the Hermite space or the Korobov space. Here we assume that the dimension $s$ of the input variables is very large. We are interested in algorithms which can achieve the optimal convergence rates for the considered function spaces as well as tractability properties. Another topic in this group is the $\varepsilon$-truncation dimension: When is it possible to approximate the original function of very many variables by the same function; however with all but the first $k$ variables set to zero, so that the corresponding error is small? In the third group of problems (C) we study general LPSs. In particular we aim at finding a polynomial/digital analog of the Frolov rule. SFB funding period 1 (2014-2017) In QMC integration, point sets with good distribution properties are required. There are two main ways to choose these point sets. The first class of quadrature points are lattice point sets, which were introduced independently by Hlawka and Korobov in the 1950s and the second main class are digital nets, comprising the important sub-class of polynomial lattice point sets, and digital sequences, as introduced by Niederreiter in the 1980s. It is the primal aim of this project part to push the research on these point sets and sequences. The problems considered can be grouped into three overall topics: 1. the explicit construction of point sets for QMC, 2. the analysis of problems with higher and even infinite smoothness and 3. the analysis of new concepts of sequences such as hybrid sequences or hyperplane sequences. In all three topics, digital nets and sequences as well as lattice point sets play an important role. In 1. we aim at giving explicit constructions of polynomial lattice point sets with low star discrepancy and digital sequences with low mean square discrepancy. In 2. we study integration and approximation of smooth functions based on tent-transformed lattice point sets and integration of analytic functions based on regular lattices. In 3. we analyze distribution properties of hyperplane sequences and of hybrid sequences made of lattice point sets and digital sequences with respect to the star discrepancy.
{"url":"http://www.sfb-qmc.jku.at/parts/part-09","timestamp":"2024-11-07T22:19:11Z","content_type":"text/html","content_length":"26050","record_id":"<urn:uuid:b9191de7-c5d5-468b-8c62-faaa8afd43cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00011.warc.gz"}