content
stringlengths
86
994k
meta
stringlengths
288
619
Solve Quadratic Equation by Factoring Worksheets (printable, online, answers, examples) Printable “Quadratic Equations” worksheets: Solve Quadratic Equation (use factoring) Solve Word Problems using Quadratic Equations Sketching Quadratic Graphs Rewrite Expressions in Completed-Squared Form Solve Quadratic Equation (use completing the square) Solve Quadratic Equation (use quadratic formula) Discriminant in the Quadratic Formula Examples, solutions, videos, and worksheets to help Grade 7 and Grade 8 students learn how to solve quadratic equations by factoring. How to solve quadratic equations using factoring? There are four sets of solving equations using factoring worksheets. • Solve Quadratic Equation by Factoring (use zero product property). • Solve Quadratic Equation by Factoring (factor & solve, a = 1). • Solve Quadratic Equation by Factoring (factor & solve, a ≠ 1). • Solve Quadratic Equation by Factoring (rearrange, factor & solve). These are the steps to solve a quadratic equation by factoring: 1. Write the quadratic equation in the form: ax^2+bx+c=0, where a, b, and c are constants. 2. Factor the quadratic expression. Look for two binomials whose product gives you the original quadratic expression. 3. Set each of the binomial factors equal to zero. This will give you two separate linear equations to solve. 4. Solve each linear equation for x. These solutions are the roots or solutions of the quadratic equation. 5. Check if the solutions obtained in step 4 satisfy the original quadratic equation. Example: Solve the quadratic equation x^2 − 5x = -6. 1. Rewrite down the equation as: x^2 − 5x + 6 = 0. 2. Factor the quadratic expression: x^2 − 5x + 6 = 0 (x − 2)(x − 3) = 0 3. Set each factor equal to zero: x − 2 = 0 or x − 3 = 0 4. Solve for x For x − 2 = 0 x = 2 x − 3 = 0 x = 3 5. Check solutions: Substitute x = 2 and x = 3 back into the original equation. 2^2 − 5(2) + 6 = 0 (True) 3^2 − 5(3) + 6 = 0 (True) Remember that not all quadratic equations can be easily factored. Some may require more complex factoring methods, or you may need to use the quadratic formula or completing the square to find the Have a look at this video if you need to review how to solve quadratics equations by factoring. Click on the following worksheet to get a printable pdf document. Scroll down the page for more Solve Quadratic Equation by Factoring Worksheets. More Solve Quadratic Equation by Factoring Worksheets (Answers on the second page.) Solve Quadratic Equation by Factoring Worksheet #1 (use zero product property) Solve Quadratic Equation by Factoring Worksheet #2 (factor & solve, a = 1) Solve Quadratic Equation by Factoring Worksheet #3 (factor & solve, a ≠ 1) Solve Quadratic Equation by Factoring Worksheet #4 (rearrange, factor & solve) Solve Quadratic Equation by Factoring Worksheet #5 (mixed) Solve Quadratic Equation by Factoring Worksheet #6 (mixed) Solve Quadratic Equation by Factoring Worksheet #7 (mixed) Solve Quadratic Equation by Factoring Worksheet #8 (use square root) Solve Quadratic Equation by Factoring Worksheet #9 (use square root) Solve Quadratic Equation by Factoring Worksheet #10 (use square root) Online or Generated Factor Binomials by Difference of Squares Factor Perfect Square Trinomials Factor Trinomials or Quadratic Equations Factor Different Types of Trinomials 1 Factor Different Types of Trinomials 2 Solve Trinomials using Quadratic Formula Find Discriminants of Quadratic Polynomials Solve Quadratic Equation by Factoring (a > 1) Solve Quadratic Equation by Factoring (common factors) Try the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations. We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
{"url":"https://www.onlinemathlearning.com/solve-quadratic-factoring-worksheet.html","timestamp":"2024-11-14T16:49:46Z","content_type":"text/html","content_length":"43196","record_id":"<urn:uuid:d32f935e-89ac-447b-b452-028e41b797a9>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00076.warc.gz"}
Computational Complexity On Jan 3 is the Iowa Caucus, the first contest (or something) in the US Presidential race. The question arises: Which presidents knew the most mathematics? The question has several answers depending on how you define "know" and "mathematics". Rather than answer it, I'll list a few who know some mathematics. 1. Jimmy Carter (President 1976-1980, lost re-election) was trained as a Nuclear Engineer, so he knew some math a long time before becoming president. (I do not know if he ever actually had a job as an Engineer.) I doubt he knew much when he was president. 2. Herbert Hoover (President 1928-1932, lost re-election) was a Mining Engineer and actually did it for a while and was a success. Even so, I doubt he know much when he was president. 3. James Garfield (President 1881-1881, he was assassinated) Had a classical education and came up with a new proof of the Pythagorean Theorem 4. Thomas Jefferson (President 1801-1809) had a classical education and is regarded by historians as being a brilliant man. He invented a Crypto system in 1795. Note that this is only 6 years before becoming president, so he surely knew some math when he was president. 5. Misc: Lyndon B. Johnson was a high school math teacher, Ulysses S. Grant wanted to me one but became president instead. George Washington was a surveyor which needs some math. Many of the early presidents had classical educations which would include Euclid. And lastly, Warren G. Harding got an early draft of Van Der Waerden's theorem, conjectured the polynomial VDW, but was only able to proof the quadratic case (not surprising—he is known as one of our dumber presidents). I would guess that Jimmy Carter and Herbert Hoover knew more math (there was far more to know) then Jefferson, but Jefferson knew more as a percent of what there was to know, then Carter and Hoover. Garfield, while quite smart, probably does not rank in either category. I don't think any of the current major candidates were trained in Math. Hillary Clinton, Barack Obama, John Edwards, Rudy Guilliani, and Mitt Romney were all trained as lawyers. Rudy Guillian and Mitt Romney have been businessman as well. Huckabee was a minister, McCain was a soldier. I do not know what they majored in as undergrads. This fall in my graduate complexity courses 5/11 of the HW were group HWs. This means that 1. The students are in groups of 3 or 4. The groups are self-selected and permanent (with some minor changes if need be). 2. The groups do the HW together. 3. They are allowed to use the web, other students, me, other profs. 4. The HW is not handed in–they get an Oral Exam on it. 5. The HW is usually "read this paper and explain this proof to me." In my graduate course in Complexity Theory which I just finished teaching 5 out of the 11 HWs were Oral HW. Here is what they were basically: 1. Savitch's theorem and Immerman-Szelepcsenyi Theorem. 2. Show that VC and HAM are NPC. 3. E(X+Y)=E(X)+E(Y), Markov, Chebyshev, Chernoff 4. Reg Exp with squaring NOT in P. 5. Matrix Group Problem in AM. (Babai's paper "Trading Group Theory for Randomness"). Was this a good idea? 1. The students learned ALOT by doing this. They learned the material in the paper, they learned how to read a paper, and they learned how to work together. (Will all of these lessons stick?) 2. Some proofs are better done on your own than having a professor tell you them (HAM cycle NPC comes to mind). This is a way to make them learn those theorems without me having to teach it. 3. Some theorems are needed for the course, but are not really part of the course (Chernoff Bounds come to mind). The Oral HW makes them learn that. 4. This was a graduate course in theory so the students were interested and not too far apart in ability. This would NOT work in an ugrad course if either of those were false. 5. This course only had 19 students in it, so was easy enough to administer. So the upshot–It worked! I recommend it for small graduate classes. On the twelfth glance at her case, what did we all see: 12 people asking her questions in her office, 11 times taught Intro Programming, 10 journal articles, 9 pieces of software, 8 book chapters, 7 invited panels, 6 submitted articles, 5 mil-lion bucks!, 4 invited talks, 3 students, 2 post-docs, and a degree from MIT. NOTE: The 12 days of Christmas is (easily) the most satirized song ever. I used to maintain a website of satires of it here but it was too hard to keep up? Why? Because anyone can write one. I wrote the one above in about 10 minutes during a faculty meeting to decide someone's Tenure case. Bill Gasarch is on vacation and he had given me (Lance) a collection of posts for me to post in his absence. But then I got email from Tal Rabin who wants to get the word out about the Women in Theory workshop to be held in Princeton in June. Done. Now back to your regularly scheduled post from Bill. I don't usually watch Deal/No Deal. I like some of the interesting math or dilemmas it brings up, but the show itself is monotonous. As Host Howie Mandel himself says "we don't ask you a bunch of trivia questions, we just ask you one question: DEAL or NO DEAL!" Here is a scenario I saw recently where I thought the contestant made the obviously wrong choice. 1. There are two numbers left on the board: $1000 and $200,000. 2. She is offered a $110,000 deal. 3. She has mentioned that $110,000 is about 5 times her salary (so this amount of money would make a huge difference in her life). 4. Usually in this show you have the audience yelling `NO DEAL! NO DEAL!' This time the audience, including her mother, her sister, and some friends, were yelling `TAKE THE DEAL! TAKE THE DEAL!'. While this is not a reason to take the deal, note that the decision to say NO DEAL is NOT a `caught up in the moment' sort of thing. She DID NOT take the deal. We should judge if this was a good or bad decision NOT based on the final outcome (which I won't tell you). Here is why I think it was the wrong choice. Consider the following scenarios: 1. If she takes the deal, the worst case is that she gets $110,00 instead of $200,000. 2. If she rejects the deal, the worst case is that she gets $1000 instead of $110,000. The first one is not-so-bad. The second is really really bad. Is there a rational argument for her decision? I could not come up with one, but maybe I'm just risk-averse. Some of the comments made on the posts on this post on a VDW over the Reals been very enlightening to me about some math questions. In THIS post I will reiterate them to clarify them for myself, and hopefully for you. I had claimed that the proof that if you 2-color R you get a monochromatic 3-AP USED properties of R- notably that the midpoint of two elements of R is an element of R. Someone named ANONYMOUS (who would have impressed me if I knew who she was) left a comment pointed out that the proof works over N as well. THIS IS CORRECT: If you 2-color {1,...,9} then there will be a mono 3-AP. Just look at {3,5,7}. Two of them are the same color. 1. If 3,5 are RED then either 1 is RED and we're done, 4 is RED and we're done, or 7 is RED and we're done, or 1,4,7 are all BLUE and we're done. 2. If 5,7 are RED then either 3 is RED and we're done, or 6 is RED and we're done, or 9 is RED and we're done, or 3,6,9 are all BLUE, and we're done. 3. If 3,7 are RED then either 1 is RED and we're done, or 5 is RED and we're done, or 9 is RED and we're done, or 1,5,9 are BLUE and we're done. This is INTERESTING (at least to me) since VDW(3,2)=9 is TRUE and this is a nice proof that VDW(3,2)≤ 9. (Its easy to show VDW(3,2)≠ 8: take the coloring RRBBRRBB.) I had asked if VDWr may have an easier proof then VDW. Andy D (Andy Drucker who has his own Blog) pointed out that this is unlikely since there is an easy proof that VDWR--> VDW. Does this make VDWr more interesting or less interesting? Both! 1. More Interesting: If VDWr is proven true using analysis or logic, then we get a NEW proof of VDW! 2. Less Interesting: Since it is unlikely to get a new proof of VDW, it is unlikely that there is a proof of VDWr using analysis. Bad news for American Science Funding: click here Good news for Australian Science Funding: click here Complexity Theory Class Drinking Game 1. Whenever a complexity class is defined that has zero natural problems in it, take one drink. 2. Whenever a class is defined that has one natural problem in it, take two drinks. 3. Whenever you are asked to vote on whether or not a problem is natural, take three drinks. 4. Whenever a mistake is made that can be corrected during that class, take one drink. 5. Whenever a mistake is made that can be corrected during the next class, take two drinks. 6. Whenever a mistake is made that cannot be corrected because it's just wrong, take three drinks. 7. Whenever a probability is amplified, refill your cups since a class with zero or one natural problems in it is on its way. 8. Whenever the instructor says that a theorem has an application, take a drink. 9. Whenever the instructor says that a theorem has an application, and it actually does, take two drinks. 10. Whenever the instructor says that a theorem has an application outside of theory, take two drinks. 11. Whenever the instructor says that a theorem has an application outside of theory, and it really does, take four drinks. RECALL the problem from my last post: Each point in the plane is colored either red or green. Let ABC be a fixed triangle. Prove that there is a triangle DEF in the plane such that DEF is similar to ABC and the vertices of DEF all have the same color. The answers to all of the problems on the exam are posted See here for the webpage for the competition. The problem above is problem 5. One of the key observations needed to solve the problem is the following theorem: If the reals are 2-colored then there exists 3 points that are the same color that are equally spaced. Before you can say `VDW theorem!' or `Roth's Theorem!' or `Szemeredi's theorem for k=3 !' realize that this was an exam for High School Students who would not know such thing. And indeed there is an easier proof that a HS student could (and in fact some did) use: Let a,b both be RED. If (a+b)/2 is RED then a,(a+b)/2,b works. If 2b-a is RED then a,b,2b-a works. If 2a-b is RED then 2a-b,a,b works. IF none of these hold then 2a-b,(a+b)/2,2b-a are all BLUE and that works. By VDW the following, which we denote VDWR, is true by just restricting the coloring to N: VDWR: For any k,c, for any c-coloring of R (yes R) there exists a monochromatic arithmetic progression of length k. This raises the following ill-defined question: Is there a proof of VDWR that is EASIER than using VDW's theorem. Or at least different- perhaps using properties of the reals (the case of c=2, k=3 used that the midpoint of two reals is always a real). I was assigned to grade the following problem from the Maryland Math Olympiad from 2007 (for High School Students): Each point in the plane is colored either red or green. Let ABC be a fixed triangle. Prove that there is a triangle DEF in the plane such that DEF is similar to ABC and the vertices of DEF all have the same color. I think I was assigned to grade it since it looks like the kind of problem I would make up, even though I didn't. It was problem 5 (out of 5) and hence it was what we thought was the hardest problem. About 100 people tried it, and less than 5 got it right, and less than 10 got partial credit (and they didn't get much). I got two funny answers: All the vertices are red because I can make them whatever color I want. I can also write at a 30 degree angle to the bottom of this paper if thats what I feel like doing at the moment. Just like 2+2=5 if thats what my math teacher says. Math is pretty subjective anyway. (NOTE- this was written at a 30 degree angle.) I like to think that we live in a world where points are not judged by their color, but by the content of their character. Color should be irrelevant in the the plane. To prove that there exists a group of points where only one color is acceptable is a reprehensible act of bigotry and discrimination. Were they serious? Hard to say, but I would guess the first one might have been but the second one was not. The following happened- a common event, but it inspired a crypto question (probably already known and answered) but I would like your comments or pointer to what is known. My mother-in-law Margie and her sister Posy had the following conversation: POSY: Let me treat the lunch. MARGIE: No, we should pay half. POSY: No, I want to treat. MARGIE: No, I insist. This went on for quite a while. The question is NOT how to avoid infinite loops- my solution to that is easy- if someone offers to treat, I say YES and if someone offers to pay 1/2 I say YES, not because I'm cheap, but to avoid infinite loops. Here is the question. It is not clear if Posy really wanted to treat lunch, or is just being polite. Its not clear if Margie really wants to pay half or is just being polite. SO, is their some protocol where the probability of both getting they DO NOT WANT is small (or both getting what they want is large), and the other one does not find out what they really want. Here is an attempt which does not work. 1. Margie has a coin. Margies coin is OFFER with prob p, and DO NOT OFFER with prob 1-p. If she really wants to make the offer to treat then p is large, else p is small. Could be p=3/4 or p=1/4 for 2. Posy has a similar coin. 3. Margie flips, Posy Flips. 4. If Margie's coin says OFFER, than make the offer. If not the don't. 5. Same with Posy. The bad scenarios- that they both get what they don't want, has prob 1/8. However, if they do this alot then Margie and Posy will both have a good idea of what the other really wants. In solutions you may offer or point me to we can of course assume access to random coins, and that neither Posy nor Margie can factor or take discrete log.
{"url":"https://blog.computationalcomplexity.org/2007/12/?m=0","timestamp":"2024-11-03T15:29:31Z","content_type":"application/xhtml+xml","content_length":"216764","record_id":"<urn:uuid:8480157c-f18f-4567-978d-6f8dfc3d242e>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00062.warc.gz"}
Combining Bets Suppose you have no idea whether P is true. Is it rationally permissible for you to reject the following bet? You win $15 if P is true, and lose $10 if P is false. If so, the following bet is presumably also permissible to turn down: You win $15 if P is false, and lose $10 if P is true. But if someone offers you both bets at once, it would be crazy to turn them down: you can net $5 no matter the outcome, it's a guaranteed win! We may take this to show that the rational status of bets is not closed under conjunction. It can be rationally permissible to reject A, and permissible to reject B, but permissible to reject (A What if you do not know that both bets will be offered? Suppose you are offered A, and permissibly reject it. Then, to your surprise, you are offered B. Are you now rationally required to accept bet B, based on the principle that it would be irrational to reject both? That would be bizarre. Instead, I'd suggest that the rational principle in play is the following: (Sure Win) It is irrational to knowingly turn down a sure win (unless it comes with opportunity costs, etc.). It is irrational to reject A and B together, for together they offer a sure win. But the person who rejects A, and is only later offered B, was never offered a sure win. Bet A by itself (with no guarantee that B will be offered too) is not a sure win. So it may be permissibly rejected. And once you've rejected A, bet B by itself is not a sure win either. So it too may now be rejected without violating the Sure Win principle. Why am I going on about this, you ask? Adam Elga uses this combination of bets to argue that a rational agent must be disposed to accept at least one of them (and so have precise credences). But it seems like a bad argument to me, since the defender of imprecise credence can appeal to the Sure Win principle - as I did above - to explain why the irrationality of the combined bet rejection does not imply the irrationality of rejecting either bet alone. Right? [See also: Is Imprecise Credence Rational? 4 comments: 1. A sure win hypothesis is a mighty risk adverse strategy. 2. Yeah, that's why I added the proviso about opportunity costs: it could be reasonable to turn down a (small) sure win if by taking just one of the two bets you had (sufficient) chance of much larger winnings. But it at least seems clearly irrational to turn down a sure win unless you stand to gain something else by doing so, right? 3. Is it just an assumption that the first bet was permissible to turn down? I'd think the best response to this tension is to deny that you can turn it down, whether or not B is going to be 4. Yeah, I agree with that in my latest post. But note that this only holds if we (are required to) have no less than 40% credence in the winning outcome. If we can have imprecise credence, say spread over the interval [0.2, 0.8], then it is at least permissible - and many advocates of imprecise credence would even say obligatory - to reject any bet that is rejectable according to some value in the credence interval. Visitors: check my comments policy first. Non-Blogger users: If the comment form isn't working for you, email me your comment and I can post it on your behalf. (If your comment is too long, first try breaking it into two parts.) Note: only a member of this blog may post a comment.
{"url":"https://www.philosophyetc.net/2008/03/combining-bets.html","timestamp":"2024-11-02T11:20:59Z","content_type":"application/xhtml+xml","content_length":"100088","record_id":"<urn:uuid:d97dbc4a-9855-4ab0-97c5-01a33a42abc6>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00027.warc.gz"}
Minimum Spanning Tree using Kruskal's Algorithm Hey there!! Now we are heading off to finding a minimal spanning tree for an undirected graph. It is recommended to go through my previous post on Disjoint Set Data Structure It is assumed that the reader knows about Graph, a very important data structure in computer science with lots of applications(social networks, computer networks, digital circuits, and many more). An undirected graph has no sense of directions in edges. For example, if we have an edge connecting two vertices A and B, that edge can be called as connecting A to B or B to A. The adjacency matrix for the graph is symmetric. graph Above is a simple small graph(hand made by me, no wonder why the graph edges are zig-zagged). The adjacency matrix for the graph is thus, # the infinity INF = 99999999 # our adjacency matrix adjacency = \ I have chosen adjacency matrix for graph representation here. And as mentioned earlier, the adjacency matrix is symmetric because it is an undirected graph. INF means infinity and is the weight of the vertices that are not connected. Ignore the blue edges for now. Kruskal’s algorithm goes like this: • List out the edges of the graph. • For each edge, make a set containing only the edge. If there are n edges then, there will be n sets(partitions) at this step. • Take an edge with smallest weight value and check if the vertices connected by it are in same set. If not, union them, else continue with the next smallest edge. • Repeat till all vertices are contained in a single set. Lets directly go into the code for the implementation is very very easy. # number of vertices v = len(adjacency) # now get the edges # an edge is a tuple (vertex 1, vertex 2, weight) edges = [(i, j, adjacency[i][j]) for i in range(v) for j in range(i, v) if adjacency[i][j]!=INF] # sort edges # because we need to take out smallest edges first. edges.sort(key=lambda x: x[2]) # CREATE a disjoint set datastructure # The constructor will automaticall create separate set for each vertex disset = DisjointSet(v) # v is number of vertices c = 0 # counter wts = 0 # total weight of the final spanning treee final_edges = [] while c < len(edges): edge = edges[c] i,j, w = edge[0], edge[1], edge[2] if disset.find(i) != disset.find(j): disset.union(i, j) Running the above code for the graph shown above, we get the output: [(1, 3, 1), (0, 2, 2), (1, 2, 4)] which are the blue edges in the figure. That’s all about the Kruskal’s Algorithm. Easy, ain’t May 6, 2017
{"url":"https://bewakes.com/posts/minimum-spanning-tree-kruskal.html","timestamp":"2024-11-10T19:30:20Z","content_type":"text/html","content_length":"9768","record_id":"<urn:uuid:268cfa7e-3774-4d0e-973e-4fb6b41716cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00631.warc.gz"}
Unscramble WAFFLES How Many Words are in WAFFLES Unscramble? By unscrambling letters waffles, our Word Unscrambler aka Scrabble Word Finder easily found 70 playable words in virtually every word scramble game! Letter / Tile Values for WAFFLES Below are the values for each of the letters/tiles in Scrabble. The letters in waffles combine for a total of 16 points (not including bonus squares) • W [4] • A [1] • F [4] • F [4] • L [1] • E [1] • S [1] What do the Letters waffles Unscrambled Mean? The unscrambled words with the most letters from WAFFLES word or letters are below along with the definitions. • waffle (n.) - A thin cake baked and then rolled; a wafer.
{"url":"https://www.scrabblewordfind.com/unscramble-waffles","timestamp":"2024-11-01T23:09:17Z","content_type":"text/html","content_length":"50233","record_id":"<urn:uuid:305d3841-976f-4385-93ae-594b812d1da4>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00627.warc.gz"}
Hydraulics Engineering MCQ | Part 1 Hydraulics Engineering Multiple Choice Questions (MCQ) for GATE | Fluid Mechanics | Part 1 | By Akhand Dutta In a wide rectangular channel if the normal depth is increased by 20 %, then what is the approximate increase in discharge? The maximum velocity in a channel section often occurs on the water surface. • A. Above • B. Below • C. Between • D. Top What is the Froude’s number for a channel havingmean velocity 4.34 m/s and mean hydraulic depth of 3m? • a) 0.4m • b) 0.6m • c) 0.7m • d) 0.8m In a rectangular channel, the ratio of specific energy at critical depth EC to critical depth YC is Write the correct answer here If the value of the rate of change of specific energy is 7.79×10-4 m and Sf = 0.00013, calculate the value of bed slope. • a) 1 in 1000 • b) 1 in 1100 • c) 1 in 1200 • d) 1 in 1300 The velocity of flow through a channel is 0.74 m/s and the hydraulic radius of the channel is 1.11m, calculate the value of C if the bed slope of the channel is 1 in 5000. A trapezoidal channel with a base of 6 m and side slope of 2 horizontal to 1vertical conveys water at 17 m3/s with a depth of 1.5 m. is the flow situation is • a) Subcritical • b) supercritical • c) both • d) None of these The flow characteristics of a channel do not change with time at any point. What type of flow is it? • a) Steady flow • b) Uniform flow • c) Laminar flow • d) Turbulent flow Water surface profiles that are asymptotic at one end and terminated at the other end would include • a) H2 and S2 • b) H3 and S2 • c) M2 and H2 • d) M2 and H3 Calculate the velocity of flow in a triangular channel having depth 7m and the side slope of the channel is 1H:4V if the bed slope of the channel is 1 in 1200 and the slope of the energy line is 0.00010.Given: dy/dx=7.55m. • a) 1 m/s • b) 2 m/s • c) 3 m/s • d) 4 m/s M3 profile is indicated by which of the following conditions? • a) Yo > Yc > Y • b) Y > Yo > YC • c) Y > YC > Y0 • d) YC > Y0 > Y Measures velocity at a point of the fluid in a stream is • a) Venturi meter • b) pH meter • c) Pitot-Static tubes • d) None of the mentioned A trapezoidal canal with side slopes 2: 1 has a bottom width of 4 m and carries a flow of 23 m3/s.the critical depth is • a) 1.5 • b) 1.23 • c) 2.35 • d) 1.90 A hydraulically efficient trapezoidal section of open channel flow carries water at the optimal depth of 0.6 m. Chezy coefficient is 75 and bed slope is 1 in 250. What is the discharge through the • a) 1.44 m3/ s • b) 1.62 m3/ s • c) 15 m3/ s • d) 2.24 m3/ s The Froude’s number for a flow in a channelsection is 1. What type of flow is it? • a) Sub Critical • b) Critical • c) Super critical • d) Tranquil In open channels, maximum velocity occurs: • (a) near the channel bottom • (b) in the mid-depth of flow • (c) just below the free surface • (d) at the surface just below the free surface Calculate the aspect ratio having channel widthof 6m and depth of 8m. • a) 0.75m • b) 1.33m • c) 1.50m • d) 1.68m For a channel to be economic which of thefollowing parameters should be minimum • a) Wetted perimeter • b) Wetted area • c) Section factor • d) Hydraulic depth Calculate the discharge through a channel havinga bed slope 1 in 1000, area 12m2, hydraulic radius of 1.2m and Chezy’s constantbeing equal to 50. • a) 17.98 m3/s • b) 18.98 m3/s • c) 19.98 m3/s • d) 20.98 m3/s The term alternate depth in an open channel flow refers to the • a) Depths having the same specific energy for a given discharge • b) Depths before and after the passage of the surge • c) Depths having the same kinetic energy for a given discharge • d) Depths on either side of hydraulic jump. Depths having the same specific energy for a given discharge A hydraulically efficient rectangular channel of bed width 5m the hydraulic radius is equal to • a) 2.5 m • b) 5 m • c) 1.25 m • d) 2 m Which is the cheapest device for measuring flow/ discharge rate. • a) Venturimeter • b) Pitot tube • c) Orificemeter • d) None of the mentioned For subcritical flow, the Froude number is • a) Not equal to one • b) Less than one • c) Greater than one • d) Equal to one Calculate the value of Sf for a trapezoidal channel having depth 2m, width 5m, and side slope of 1H:1.5V. Given: dy/dx=1.18×10-3,S0 = 1 in 1000, C = 50 • a) 0.00001 • b) 0.00002 • c) 0.00003 • d) 0.00004 A hump is to be provided on the channel bed. The maximum height of the hump without affecting the upstream flow condition is • a) 0.64 m • b) 0.54 m • c) 0.44 m • d) 0.34 m A channel bed slope 0.0009 carries a discharge of 30 m3/ when the depth of flow is 1 .0 m What is the discharge carries by an exactly similar channel at the same depth of flow if the slope is decreased to 0.0001? • a) 10 m3/ s • b) 90 m3/ s • c) 15 m3/ s • d) 60 m3/ s Calculate the Froude’s number for a channel having discharge of 261.03m3/s, cross-sectional area of 42m2 and the top width being 6m • a) 0.65 • b) 0.72 • c) 0.38 • d) 0.75 Estimate the section factor for a channel section having cross-sectional area of 40m2 and hydraulic depth of 6 m. • a) 94.3 • b) 95.6 • c) 97.9 • d) 100 The direct step method of computation for GVF is • a) applicable to non- prismatic channel • b) applicable to prismatic channel • c) both • d) None of these applicable to non- prismatic channel For a hydraulically efficient rectangular section the ratio of width tonormal depth is Post a Comment
{"url":"https://www.akhandduttaengineering.in/2021/08/hydraulics-engineering-mcq-part-1.html","timestamp":"2024-11-13T15:14:45Z","content_type":"text/html","content_length":"284478","record_id":"<urn:uuid:08b12264-233d-4eb8-95b7-88638efc7479>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00293.warc.gz"}
wind turbine blade design The use of wind as an energy source has been used for hundreds of years to pump water or grind corn, this equipment is also known as a windmill. In the 19th century, fossil fuels replaced the use of these large, heavy, inefficient windmills. Then, knowledge of aerodynamics and lightweight materials brought back wind turbine technology around the 20th century. Based on the orientation of the direction of rotation of the axis, these wind turbines are divided into two categories: Horizontal Axis Wind Turbine (HAWT) dan Vertical Axis Wind Turbine (VAWT). Each configuration has advantages and disadvantages. In general, VAWT development began to decline due to the limitations of the VAWT at low speed operating conditions and the difficulty of controlling the rotor speed, this design also has difficulties in its starting . However, VAWT has the advantage that it does not require additional mechanisms and a large generator can be used because it is not limited by the use of high towers. The development of HAWT is increasingly popular because of the increase in performance and control can be done with pitch and yaw control. High rotor efficiency is desirable to increase the conversion of wind flow energy into mechanical energy of the rotor, surely with affordable production costs. To calculate its efficiency, it is first necessary to define the incoming wind power (potential energy): P = Power (watt) ρ = Air density (kg/m^3) A = Turbin area (m^2) V = Velocity (m/s) The air flow through the wind turbine will drop its velocity due to the interaction between the air and the turbine, the velocity drop also indicates a change in wind energy into mechanical energy of the rotor. If we want 100% efficiency, the wind speed after passing the turbine must be zero, or stop at all, definitely this is not possible; while it can be calculated using the rotor disc theory that the maximum efficiency that can be achieved theoretically is 59.3%, this efficiency parameter is called the power coefficient C[p], the maximum C[p] = 0.593 also known as the Betz limit in wind turbine design. The actual efficiency of the wind turbine will be reduced due to several factors such as the emergence of wake flow in the blades which reduces the lift on the airfoil, the selection of an airfoil that has low efficiency and the emergence of flow “leakage” at the tip which results in undesirable vortex flow. To produce rotation (torque) on the wind turbine rotor, two methods are used, namely utilizing drag or using lift from the aerodynamic shape of the blade. Here is a comparison table of the two For the Drag model, the wind turbine blades are intentionally made to block the air flow and are given a certain moment arm about the rotating axis, thereby producing torque to rotate the turbine. Another alternative is to use the aerodynamic lift that occurs on the airfoil rotor then the lift is directed in the direction of the rotation of the rotor and a moment arm is given to the axis of rotation to produce torque. The lift method tends to be more efficient because it does not change the airflow pattern much or produces a lot of wakes. Here are some types of wind turbines along with some of their descriptions: The focus of the discussion in this article is HAWT because of its popularity in the wind turbine industry, this type of turbine is very sensitive to the design of the blade profile and its design. The first thing we must pay attention to in wind turbine design is the Tip Speed Ratio (TSR) parameter, this parameter is a comparison between the tangential speed of the tip blade to the speed of the incoming wind (free stream) which is mathematically written as follows: λ = TSR Ω = Rotation speed (rad/s) r = Radius (m) V[w] = Wind velocity (m/s) Aspects such as efficiency, torque, mechanical stress on the blade and noise should be considered in this TSR calculation. Modern wind turbines tend to be designed using a TSR value of around 6-9 because of the above considerations, in general the peak efficiency is at TSR = 7. Based on the theory of Blade Element Momentum (BEM), which is the calculation of wind turbine performance based on the cross section or airfoil shape of each section of the blade calculated by dividing it into small elements in 2D. For blades with a design TSR of about 6-9, Betz’s momentum theory provides a fairly good approximation to calculating the blade profile shape with the following n = number of blades , C[L] = Airfoil Lift Coefficient, Vr = Resultant of Wind Velocity (m/s), U = Wind velocity (m/s), U[wd] = Wind velocity design (m/s), Copt = Optimum Chord Length. The C[opt] can be plotted against r to produce the optimal “shape” of the blade. From the above equation it can be seen that the larger the TSR, the smaller the blade size, then the more the number of blades, the smaller the blade size (see changes made to the root and tip to adjust the actual conditions both for installation on the hub and due to structural reasons, this can also be done because the power contribution of the root is also relatively low): The smaller the blade size will be advantageous in terms of cost, because the material needed will be less, but on the other hand the blade structure will also be weaker. In general, the most optimal choice of blades is 3 pieces. This approach is quite good for the initial design, but it is a 2D approach so it is not very accurate in considering the emergence of 3D phenomena or the appearance of wakes, tip losses and so on. For more accurate and comprehensive results, analysis using Computational Fluid Dynamics (CFD) simulation is used. Wind turbine CFD simulation using OpenFOAM In the end, to analyze aerodynamic performance in 3D and comprehensively, we cannot just rely on the calculations of the approach above. One of the well-known methods in wind turbine design is the use of computational fluid dynamics (CFD) (read more in the introduction to CFD). This method is carried out using a computer to analyze the fluid flow on the wind turbine blade in detail to show 3D flow interactions and other features such as tip vortex, wake and others without simplification. From the CFD simulation we can also predict the performance of the wind turbine such as calculating power coefficient and torque under various conditions of TSR variation and changes in twist angle, number of blades, airfoil variations, tip model variations and so on. As for calculating the structural strength of the wind turbine blades, analytical equations alone are not sufficient, due to variations in materials and discontinuous forms of the blade frame and wind turbine tower. The method that is often used for this analysis is using Finite Element Analysis (FEA). Wind turbine blade FEA Simulation using code aster aeroengineering services is a service under CV. Markom with solutions especially CFD/FEA. Peter J. Schubel * and Richard J. Crossley, “Wind turbine blade design“. Energies 2012, 5, 3425-3449; doi:10.3390/en5093425 https://www.aeroengineering.co.id/wp-content/uploads/2020/02/wind-turbine-openFOAM-CFD-simulation-e1615565384793.png 200 257 admin https://www.aeroengineering.co.id/wp-content/uploads/2022/03/ logo_ae_solution-removebg-preview.png admin2021-07-21 07:10:332021-07-21 08:00:37wind turbine blade design Want to join the discussion? Feel free to contribute!
{"url":"https://www.aeroengineering.co.id/2021/07/wind-turbine-blade-design/","timestamp":"2024-11-04T07:42:30Z","content_type":"text/html","content_length":"56598","record_id":"<urn:uuid:a8b2bd89-7b16-4ef9-a04e-5a33c79a6736>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00388.warc.gz"}
Velocity of different types of oils (e.g. crude, refined) in context of oil velocity 30 Aug 2024 Title: An Exploration of Oil Velocities: A Comparative Analysis of Crude and Refined Oils Abstract: The velocity of oils plays a crucial role in various industrial processes, including transportation, storage, and refining. This article presents a comprehensive analysis of the velocities of different types of oils, including crude oil, refined petroleum products, and other specialized oils. We examine the factors influencing oil velocity, discuss the implications for oil industry operations, and provide theoretical frameworks for predicting oil flow behavior. Introduction: Oil velocity is a critical parameter in the oil industry, affecting the efficiency and safety of various processes. The velocity of crude oil, refined petroleum products, and other specialized oils can vary significantly due to differences in their physical properties, such as density, viscosity, and surface tension. This article aims to provide an in-depth examination of oil velocities, highlighting the key factors influencing these values. Theoretical Background: The velocity of a fluid (in this case, oil) is governed by the following equation: v = Q / A where v is the velocity, Q is the volumetric flow rate, and A is the cross-sectional area through which the fluid flows. For laminar flow conditions, the Hagen-Poiseuille equation can be used to predict oil velocity: v = (ΔP \* r^4) / (8 \* η \* L) where v is the velocity, ΔP is the pressure drop, r is the radius of the pipe, η is the dynamic viscosity, and L is the length of the pipe. Crude Oil Velocity: The velocity of crude oil depends on its density, viscosity, and surface tension. Crude oils with higher densities tend to have lower velocities due to their increased resistance to flow. The velocity of crude oil can be estimated using the following equation: v_crude = (ΔP \* r^4) / (8 \* ρ_crude \* η_crude \* L) where v_crude is the velocity of crude oil, ρ_crude is the density of crude oil, and η_crude is the dynamic viscosity of crude oil. Refined Oil Velocity: The velocity of refined oils, such as gasoline, diesel, and jet fuel, depends on their physical properties, including density, viscosity, and surface tension. Refined oils with lower viscosities tend to have higher velocities due to their reduced resistance to flow. The velocity of refined oil can be estimated using the following equation: v_refined = (ΔP \* r^4) / (8 \* ρ_refined \* η_refined \* L) where v_refined is the velocity of refined oil, ρ_refined is the density of refined oil, and η_refined is the dynamic viscosity of refined oil. Specialized Oil Velocity: The velocity of specialized oils, such as lubricating oils and hydraulic fluids, depends on their physical properties, including density, viscosity, and surface tension. Specialized oils with higher viscosities tend to have lower velocities due to their increased resistance to flow. The velocity of specialized oil can be estimated using the following equation: v_specialized = (ΔP \* r^4) / (8 \* ρ_specialized \* η_specialized \* L) where v_specialized is the velocity of specialized oil, ρ_specialized is the density of specialized oil, and η_specialized is the dynamic viscosity of specialized oil. Conclusion: The velocity of different types of oils plays a crucial role in various industrial processes. This article has presented a comprehensive analysis of oil velocities, highlighting the key factors influencing these values. Theoretical frameworks for predicting oil flow behavior have been provided, and equations have been presented to estimate oil velocities based on physical properties. Further research is needed to fully understand the implications of oil velocity on industry operations and to develop more accurate predictive models. • Hagen, G. (1839). Über die Bewegung des Wassers in Wasserrohren. Annalen der Physik und Chemie, 47(12), 423-442. • Poiseuille, J.L.M. (1846). Recherches expérimentales sur le mouvement des fluides dans les tubes. Comptes Rendus de l’Académie des Sciences, 23, 556-562. Note: The references provided are historical and not directly related to the content of this article. They have been included to demonstrate the theoretical background and provide context for the equations presented. Related articles for ‘oil velocity’ : • Reading: Velocity of different types of oils (e.g. crude, refined) in context of oil velocity Calculators for ‘oil velocity’
{"url":"https://blog.truegeometry.com/tutorials/education/3bd2b7ed85b26dbae0102dc18432fe1a/JSON_TO_ARTCL_Velocity_of_different_types_of_oils_e_g_crude_refined_in_conte.html","timestamp":"2024-11-05T16:42:25Z","content_type":"text/html","content_length":"19340","record_id":"<urn:uuid:0a0aa78c-2fb6-4606-ae09-ee20425135be>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00271.warc.gz"}
The truncated matrix-valued K-moment problem on ℝ<sup>d</sup>, ℂ<sup>d</sup>, and T<sup>d</sup> The truncated matrix-valued K-moment problem on ℝ^d, ℂ^d, and T^d will be considered. The truncated matrix-valued K-moment problem on ℝ^d requires necessary and sufficient conditions for a multisequence of Hermitian matrices {S[γ]}[γ∈Γ] (where Γ is a finite subset of ℕ^d[0]) to be the corresponding moments of a positive Hermitian matrix-valued Borel measure σ, and also the support of σ must be contained in some given non-empty set K ⊆ ℝ^d, i.e., Given a non-empty set K ⊆ ℝ^d and a finite multisequence, indexed by a certain family of finite subsets of ℕ^d[0], of Hermitian matrices we obtain necessary and sufficient conditions for the existence of a minimal finitely atomic measure which satisfies (0.1) and (0.2). In particular, our result can handle the case when Γ = {γ ∈ ℕ^d [0]: 0 ≤ {pipe}γ{pipe} ≤ 2n + 1}. We will also discuss a similar result in the multivariable complex and polytorus setting. ASJC Scopus subject areas • General Mathematics • Applied Mathematics Dive into the research topics of 'The truncated matrix-valued K-moment problem on ℝ^d, ℂ^d, and T^d'. Together they form a unique fingerprint.
{"url":"https://cris.bgu.ac.il/en/publications/the-truncated-matrix-valued-k-moment-problem-on-%E2%84%9Dsupdsup-%E2%84%82supdsup","timestamp":"2024-11-08T21:35:42Z","content_type":"text/html","content_length":"56744","record_id":"<urn:uuid:74ce8df9-9fd9-4ae9-aa6f-773faddc1697>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00239.warc.gz"}
R (programming language) From Verify.Wiki R is a software platform that provides statistical data analysis and visualization capabilities. Initial development was done by Ross Ihaka and Robert Gentleman and currently it is developed by the R core team. The software is freely available, and it runs on major operating systems like Windows, Linux, and Mac OS. ^[1] R has established a reputation as an important tool for statistical modelling, data visualization, data mining and machine learning. The R language incorporates all of the standard statistical tests, models, graphics and analyses, as well as providing a comprehensive language for managing and manipulating data. Leading researchers in data science are widely using R in academia and software development. R is a GNU project which can be considered as a different implementation of S. 1970 S was developed by John Chambers while working at Bell labs. 1993 Initial development by Ross Ihaka and Robert Gentleman at the University of Auckland in New Zealand as an implementation of the S programming language began. 1995 Source code was released under the GNU license. 1997 The R core development team was formed. ^[2] Average Programmer Salaries │Country│ Average Salary │Years of Experience │ │USA │115,000(US$)^[3] │5 │ │UK │57,500(UK£)^[4] │2-5 │ • R is open source and freely available software. • R implement a wide variety of statistical and graphical techniques including classical statistical tests, linear and nonlinear modeling, time-series analysis, classification, clustering, and • R provides a very wide variety of graphics for visualizing data. These capabilities are found in the base language and in specialized packages like ggplot2, vcd and scatterplot3d. • R has a large number of packages that virtually support any statistical technique and the R community is noted for its active contributions in terms of packages. • R is able to consume data from multiple systems like Excel, SPSS, Stata, SAS and relational databases • R runs on mostly used operating systems like Windows, Linux, and Mac OS. It is also supported on 32 and 64 bit systems. • R has a vibrant community that offers support and commercial support is also available. • There are many learning materials available freely or at a cost. ^[5] • R has stronger object-oriented programming facilities than most statistical computing languages which is inherited from S. Extending R is also eased by its lexical scoping rules. ^[6] • R is difficult to learn for users without any computer programming background • The documentation of R may be difficult to understand for a person without a good statistical training. ^[7] • Managing large data-sets can be problematic because R stores its objects in memory. However, there are some packages that can remedy this by storing data on hard drive. • Some packages have a quality deficiency. However if a package is useful to many people, it will quickly evolve into a very robust product through collaborative efforts. • R lacks in speed and efficiency due to its design principles that are outdated. Although R is the most comprehensive statistical analysis package available. ^[8] some people believe R as an accessible language is not for advanced programmers " Mat Adams says."I wouldn't even say R is for programmers. It's best suited for people that have data-oriented problems they're trying to solve, regardless of their programming aptitude,". The following examples illustrate the basic syntax of the language and use of the command-line interface. Basic syntax The following examples illustrate the basic syntax of the language and plot a 3D Surface. install.packages("rgl") # installing external package library(rgl) # calling external package provide "rgl.surface" function z <- 2 * volcano # Exaggerate the relief x <- 10 * (1:nrow(z)) # 10 meter spacing (S to N) y <- 10 * (1:ncol(z)) # 10 meter spacing (E to W) zlim <- range(z) zlen <- zlim[2] - zlim[1] + 1 colorlut <-terrain.colors(zlen,alpha=0) # height color lookup table col <- colorlut[ z-zlim[1]+1 ] # assign colors to heights for each point rgl.surface(x, y, z, color=col, alpha=0.75, back="lines") "Hello World" Example Examples of R in use Feature Comparison Chart ^[15] │ Feature │ R │ Python │ SAS │ SPSS │ STATA │ │Outlier diagnostics │Available│Available│Available│Available│Available│ │Generalized linear models │Available│Available│Available│Available│Available│ │Univariate time series analysis │Available│Available│Available│Limited │Available│ │Multivariate time series analysis │Available│ │Available│ │Available│ │Cluster analysis │Available│Available│Available│Available│Available│ │Discriminant analysis │Available│Available│Available│Available│Available│ │Neural networks │Available│Available│Available│Limited │ │ │Classification and regression trees │Available│Available│Available│Limited │ │ │Random forests │Available│Available│Limited │ │ │ │Support vector machines │Available│Available│Available│ │ │ │Factor and principal component analysis │Available│Available│Available│Available│Available│ │Boosting Classification & Regression Trees │Available│Available│Limited │ │ │ │Nearest neighbor analysis │Available│Available│Available│Available│ │ Top Companies Providing R Solutions Revolution analytics ^[16] a Microsoft company, provides commercial analytics solutions based on R. Mango solutions provides training, consultancy and support for R. ^[17] MicroStrategy Data Mining Services ^[18], a fully integrated component of the MicroStrategy BI platform that delivers the results of predictive models to all users in familiar, highly formatted, and interactive reports and documents. Also, deploy any R Analytic in MicroStrategy Visualizations with the New R Integration Pack. Quadbase^[19], provides software and services for data visualization, BI dashboards, reporting, R programming and predictive analytics. simMachines ^[20] , provides the R-01 similarity search (k-nearest neighbor) engine, with high speed and zero tuning. We are the Berkeley DB of the Big Data era. Text Analysis International ^[21], offers tools and services for natural language processing and information extraction, building on the VisualText(TM) IDE and NLP++(R) programming language. The future of R The popularity of R as an analytics platform continues to grow. The number of analytics jobs posted on indeed.com showed demand for R skills was higher than that of SPSS, Matlab, Minitab and stata. Demand for SAS skills was higher than that of R but predictions show R will catch up in a few years. Data from Google scholar shows SPSS is the mostly used software ahead of SAS and R. However R and stata are closing in on the gap. On software discussion forums Linkedin and Quora, R topic followers outnumbered those following SAS, SPSS and Stata. A 2015 survey of data scientists by Rexer Analytics showed R was the most popular software. ^[22] Top 5 Recent Tweets │ Date │ Author │ Tweet │ │11 Dec 2015│@Bbl_Astrophyscs│And the #Rangers strike again! Quantitative analyst position this time. STEM background, R programming. Not bad! │ │11 Dec 2015│@R_Programming │R Tip: Visualy asses clustering tendency of data with dissplot{seriation} #rstats #analytics http://rstatistics.net │ │11 Dec 2015│@cbinsa │Career Portals Ss r learning programming by designing their own digital game using Construct 2 software. #hgmsteach │ │11 Dec 2015│@analyticbridge │How to: Parallel Programming in R and Python [Video] http://ow.ly/VA2Vd │ │11 Dec 2015│@Rbloggers │New R job: R Programming for a Daily Fantasy Sports Application http://www.r-users.com/jobs/r-programming-for-a-daily-fantasy-sports-application/│ Top 5 Lifetime Tweets │- │ │ │ │ Date │ Author │ Tweet │ │6 Dec 2015 │@analyticbridge│R Programming: 35 Job Interview Questions and Answers #Rstats http://www.datasciencecentral.com/profiles/blogs/r-programming-job-interview-questions-and-answers … │ │1 Feb 2015 │@opensourceway │As demand for data scientists grows, companies are turning to open source programming language R: http://red.ht/15s6Aqt │ │24 January │@DrQz │#Microsoft to acquire Revolution Analytics, heavily embracing the R programming language & tools http://www.wired.com/2015/01/ │ │2015 │ │microsoft-acquires-open-source-data-science-company-revolution-analytics/ … #rstats #marketbuzz │ │5 Feb 2014 │@kdnuggets │An alternative to R and #Python: Julia: A High-Performance Programming Language for #DataScience and more http://buff.ly/1c5bcPe │ │23 January │@mrb_bk │R is an interesting program language that slightly changes my point of view about programming languages. │ │20154 │ │ │
{"url":"https://verify.wiki/wiki/R_(programming_language)","timestamp":"2024-11-14T21:45:17Z","content_type":"text/html","content_length":"45860","record_id":"<urn:uuid:c3c05ded-9669-4928-b6a4-5c7a9525161a>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00373.warc.gz"}
Basic Circuit Analysis From time to time, questions arise on various forums about how to calculate the dropping value of a resistor, or design a voltage divider. You'd be amazed at how simple these things are if you understand three basic laws: 1. Ohm's law Ohm's law says that the voltage (E) across a resistor (R) will be the produce of the resistance times the current (I). In other words E=I*R. Using algebra you can deduce then that I=E/R and R=E/I. So if you know any two of these three values, you can get the third one. 2. Kirchoff's Current Law This is simple. If you have a bunch a wires that come together at a point, all the currents going in must equal all the current going out. In your circuit you usually write current going one way as positive and the other way as negative, so another way of saying this is that adding up all the current must result in 0. This is common sense, really. Say you have two light bulbs connected to a battery in parallel. Each bulb is known to draw 200mA. How much current is the battery supplying? Of course, 400mA. That's Kirchoff's law in action. If each bulb draws 200mA, the battery must supply 400mA to light them both. Another application of this law is that in a series circuit, the current is the same throughout. That's because the current going through one component, must be the same as the current going through the other component so that the sum of currents can be 0. So if you have an LED and a resistor in series with a battery, and someone tells you the resistor has 10mA through it, the LED must also have 10mA going through it. Notice this isn't the same as saying the LED's maximum current is 10mA, therefore the circuit must have 10mA in it. That would be like saying, the car's speedometer goes to 200KPH, so that's how fast we must be going! It is, however, like saying, our speedometer says 90KPH and the car next to us looks like it is not moving, so it must be going 90KPH too. 3. Kirchoff's Voltage Law Another aspect to Kirchoff's law is that all the voltage drops in a loop must also equal zero. For the purposes of this discussion, we can assume that this means circuits in parallel have the same voltage across them. Going back to my 200mA light bulb example. They are both connected to the + and - lead of the battery, so they are in parallel. It is not possible that one light bulb sees 12V while the other sees 10V (assuming no circuit defects, like a resistive wire or some other fault). Both bulbs will see the same voltage. So the common problem is you have an LED that is rated at 1.2V @ 10mA. You want to use a 5V supply. What is the value of the dropping resistor? By applying Kirchoff's voltage law (#3) you know that if the LED is going to drop 1.2V, the remaining voltage is 5-1.2 or 3.8V. Law #2 tells you that if you want 10mA to flow through the LED, you will also have to have 10mA flowing through the resistor. So now you know that the resistor has to drop 3.8V at 10mA. Applying Ohm's law, you can see that 3.8/.01 = 380 or 380 ohms. Of course, if you come up with a non-standard value, you'll have to pick one that is close. For example, suppose the closest value you have is 470 ohms. The LED always drops 1.2V, so the resistor always drops 3.8V. 3.8V/470 (law #1) = about 8mA. The LED won't be as bright, but it should still be fine. Try these: LED is 2V @ 15mA and a 12V supply (click for answer) LED is 1.4V @ 5mA and a 3V supply (click for answer) Voltage Dividers What about voltage dividers? This is the classic network where you hook two resistors in series with a battery. The center point of the resistors develops a lower voltage than the battery. When you hook a pot up across a battery, you are creating a voltage divider where the pot's wiper is the center of the divider network. If the resistors are equal, the voltage at the center will be half the battery voltage. That's easy to remember, but what if the resistors are different values? Suppose the resistor is connected to the + terminal is 10K (R1) and the - terminal has a 33K resistor (R2) and they meet in the middle (assume a 12V battery). Simple application of the three laws will let you easily analyze this situation. Two observations make this circuit simple. First, the resistors are in series, so they must have the same current flowing in them (law #2). Second, the voltage across each resistor must add up to equal the battery voltage (law #3). What's the current? Resistors in series add up, so the total resistance is 43K. Applying Ohm's law, you can see that 12/43K = .279mA or 279uA. Both resistors have this current flowing through them. Therefore, it is easy to calculate the voltage across both. The "output" voltage will be the voltage across R2 or 33K * 279uA = 9.2V (about). Yes, you can get the same answer by memorizing that: Vo = Vin*R2/(R1+R2) But look at that computation. What is it? Rewrite it a little and you have Vo = R2 * Vin/(R1+R2) which is just what we did. R1+R2 is the total resistance. Vin divided by the total resistance is the current. And R2 times the current is the answer. Voltage dividers may not seem very important on the face of it, but many more complex circuits are voltage dividers or can be modeled by voltage dividers. For example, a Wheatstone bridge is just two voltage dividers, side by side. Op amp feedback circuits are often voltage dividers, as is the base biasing network of a common emitter amplifier. Try these dividers: Vin=12V R1=10K R2=20K (click for answer) Vin=9V R1=1K R2=200 (click for answer) Practical Voltage Division Don't fall prey to the temptation to use a voltage divider instead of a voltage regulator. Voltage dividers are totally dependent on their supply voltage, so don't try to "regulate" 12V to 5 with a divider. However, there are cases where you might want to use a divider. For example, perhaps you have a 5V A/D converter, but you need to measure 0-10V with it. Of course, you will lose precision, but maybe that's OK. Converting 10V to 5V is easy, right? Two 10K resistors and you have a 50% voltage divider. Except it isn't that simple. If you build a voltage divider in the lab, you'll look at the output with a voltmeter or a scope. All modern DVMs have input impedances of 10M or more, so for practical purposes it isn't there as far as the circuit is concerned. But when you draw real power from the center node, you are effectively putting another resistor in parallel with R2. This changes the circuit and the output voltage. Suppose your A/D convert has an input resistance of 25K. Now you don't have two 10K resistors. You have R1=10K and R2=10K and 25K in parallel. Resistors in parallel add their admittance, so: Rtotal = 1/(1/R1+1/R2+.....1/Rn) If you only have two resistors, this can be simplified to: Rtotal = R1*R2/(R1+R2). So in the above example R2 is effectively 7143 ohms. The divider's ratio is then about 42% (7143/17143). Not the 50% you were looking for. This even happens with a meter, but the effect is so small you don't care. Consider if R2 was equal to 10K and 10M in parallel. That's 9990 ohms and the ratio is 49.97% -- hardly any difference at all, and probably less error than the 5% or 10% resistors you are using. So when calculating your A/D converter input divider, you want to make R2's total value 10K. If the input resistance is 25K, you have to solve for the parallel resistor with 25K that gives you 10K. 10K = 25K*Rx/(25K+Rx) A little algebra results in: 250K/15K = Rx or Rx=16.667K (you can verify that by computing 16.667K in parallel with 25K). So the actual voltage divider would be R1=10K, R2=16.7K, but you can't really find that value more than likely. Substitute a 15K resistor and you'll find the true division ratio is just over 48% (can you get the same result?). However, 16K is a standard value and provides a ratio of 49.4% which is pretty close. Thevinin's Equivalent Another implication of voltage dividers is Thevinin's theorem. This states that any voltage source and an arbitrary network can be modeled by a voltage source and a voltage divider (just like the diagram above if you consider B1 as an ideal voltage source instead of a practical battery). The R1 part of the divider represents the bulk of the circuit. The R2 part is the "load" -- the part that consumes the output. Consider the above A/D example. The load is a 25K resistor that represents the A/D converter. The source is a 10V battery (the input). In between we have R1=10K and R2=16K. To create the Thevinin equivalent, you consider what the voltage would be with no load at all. In this case it would be 10V through a 10K/16K voltage divider or 6.15V. That's the new voltage source. Next, pretend like the original voltage source was a short circuit and determine how much resistance is out the output terminals. In this case, shorting the battery makes R1 and R2 appear in parallel at the load terminals. That's the Thevinin resistance (6.15K). So the equivalent circuit for analysis is a 6.15V battery with a 6.15K resistor in series with the + lead. The load connects between the far end of this resistor and the battery - lead. Guess what? That forms a voltage divider with R1=6.15K and R2=25K (the load). So what voltage will the load see (remember, even thought the battery is at 6.15V, it represents a 10V input)? The output will be 6.15V reduced by the divider or about 4.94V. How much current does the load consume? 4.94/25K or 197uA. In this simple case, it was just as easy to analyze the network directly, but any large network can be reduced to a voltage divider with the same techniques. To summarize: 1. Remove the load from the circuit. 2. Determine the open circuit voltage at the load's connection point. 3. Pretend to short the voltage source 4. Compute the resistance between the load terminals. 5. Draw the Thevinin's equivalent using the voltage from Step 2 and the resistance from Step 4. For a more complex example, consider a "T" circuit. The left hand branch of the T (R1) connects to a 10V battery. The bottom middle of the T (R2) connects to ground and the right hand part of the T (R3) connects to the load. Each resistor in the T is 10K. The open circuit voltage will be 5V. That's because R1 and R2 form a 50% voltage divider. R3 has no current flowing through it (it is an open circuit) and so it does not drop any voltage (R*0 is still 0). With the battery shorted, the effective resistance is 10K (R3) + 10K in parallel with 10K (R1 and R2). That's a total of 15K. So the equivalent circuit is a 5V battery with a 15K series resistor. Try computing the voltage delivered to a 50 ohm load without creating a Thevinin model. Then go back and do it with the Thevinin model -- much easier (Click here for the answer). Speaking of 50 ohms, the Thevinin model can help you understand why matching antennas to transmitters is so important. Remember, power is voltage time current. Suppose your transmitter looks like a 10V battery with a 50 ohm resistor (the source impedance). Do the calculations for a 20 ohm load, a 50 ohm load, and a 100 ohm load: │Load│Voltage│Current (mA) │Power (mW) │ │20 │2.9 │145 │421 │ │50 │5.0 │100 │500 │ │100 │6.7 │67 │449 │ Higher load resistance makes more voltage, but less current. Lower resistance makes more current but less voltage. This is an effect of conservation of energy. The power maximum is at the point where the load and the source are equal. This is why you want 8 ohm speakers on an 8 ohm amplifier output, or a 50 ohm antenna on a transmitter. If you have a 100W ham transmitter that should deliver to a 50 ohm load, what's the voltage presented to the load? Hint, if Power is E*I you can use Ohm's law to rearrange the equation: P=I*R*I = I**2 R (where I**2 is I squared) P=E*E/R = E**2/R To deliver 100W into a 50 ohm load, you have to find E**2 so that E**2/50 = 100. Therefore E**2 is 5000 and the square root of 5000 is 70.7V. The current will be 1.4A. This explains why mismatched antennas cause transistor finals to fold power back (or blow up in the old days). Assume the transmitter is also a 50 ohm source impedance. That means it's Thevenin voltage is about 140V. Now replace the 50 ohm load with a 10 ohm load. What does the current rise to? (answer) How about the voltage across the load? (answer) That means the source resistor has to drop the balance (nearly 117V).
{"url":"http://wd5gnr.com/basiccir.htm","timestamp":"2024-11-04T23:02:16Z","content_type":"text/html","content_length":"19067","record_id":"<urn:uuid:1986b5e4-e6d8-49c4-b092-51fb08337908>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00609.warc.gz"}
An Optimal Bifactor Approximation Algorithm for the Metric Uncapacitated Facility Location Problem An integer linear program is a problem of the form max{c^T x : Ax=b, x >= 0, x integer}, where A is in Z^(n x m), b in Z^m, and c in Z^n.Solving an integer linear program is NP-hard in general, but there are several assumptions for which it becomes fixed p ...
{"url":"https://graphsearch.epfl.ch/en/publication/138775","timestamp":"2024-11-06T15:35:47Z","content_type":"text/html","content_length":"108732","record_id":"<urn:uuid:431fcbd5-1e47-4f23-8a26-fe3469be087a>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00107.warc.gz"}
Classical Mechanics: a Critical Introduction by Michael Cohen Publisher: University of Pennsylvania 2012 Number of pages: 364 This is an open-source introduction to Classical Mechanics, by Emeritus Professor Michael Cohen, which many students may find useful as a supplementary resource. Cohen emphasizes basic concepts, such as force and permissible frames of reference, which frequently are dealt with hastily due to time pressures. The text contains numerous solved Examples. Download or read it online for free here: Download link (4.9MB, PDF) Similar books Newton's Principia : the mathematical principles of natural philosophy Isaac Newton Daniel AdeeThis book is a complete volume of Newton's mathematical principles relating to natural philosophy and his system of the world. Newton, one of the most brilliant scientists and thinkers of all time, presents his theories, formulas and thoughts. Mechanics and Relativity Timon Idema TU Delft OpenThe reader is taken on a tour through time and space. Starting from the basic axioms formulated by Newton and Einstein, the theory of motion at both the everyday and the highly relativistic level is developed without the need of prior knowledge. Introduction to Analytical Mechanics Alexander Ziwet MacmillanThe present volume is intended as a brief introduction to mechanics for junior and senior students in colleges and universities. No knowledge of differential equations is presupposed, the treatment of the occurring equations being fully explained. Analytical Mechanics for Engineers Fred B. Seely J. Wiley & sonsThis book presents those principles of mechanics that are believed to be essential for the student of engineering. Throughout the book the aim has been to make the principles of mechanics stand out clearly ; to build them up from common experience.
{"url":"http://www.e-booksdirectory.com/details.php?ebook=10261","timestamp":"2024-11-09T13:50:21Z","content_type":"text/html","content_length":"11338","record_id":"<urn:uuid:2bdb2b92-323c-4c35-a986-030b3e2bf99e>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00827.warc.gz"}
Are Black Holes Holograms? - Science and Nonduality (SAND) If anything can sum up just how little we truly know about the Universe, it’s black holes. We can’t see them because not even light can escape their gravitational pull, we have no idea what they’re made of, and where does everything inside go once a black hole dies? ¯_(ツ)_/¯ Physicists can’t even agree on whether black holes are massive, three-dimensional behemoths, or just two-dimensional surfaces that are projected in 3D just like a hologram. But a new study just made the case for holographic black holes even stronger, with a new calculation of the entropy – or disorder – inside supporting the possibility of these giant enigmas of the Universe being nothing but an optical illusion. First off, let’s talk about the holograph hypothesis. First proposed by physicist Leonard Susskind in the 1990s, it predicts that, mathematically speaking, the Universe needs just two dimensions – not three – for the laws of physics and gravity to work as they should. To us, though, everything appears as a three-dimensional image of two dimensional processes projected across a huge cosmic That might sound crazy, but it could actually resolve some big contradictions between Einstein’s theory of relativity and quantum mechanics – the whole ‘nothing can escape a black hole, but matter can never be completely destroyed’ information paradox, for one. And, as Fiona MacDonald explained for us last year, physicists have had great success in matching up the results of gravitational phenomena to the behaviour of quantum particles using just two spatial dimensions: “[S]ince 1997, more than 10,000 papers have been published supporting the idea.” Leaving the entire Universe aside for now, let’s apply this thinking to a black hole instead. Physicists have suggested that the reason we can’t figure out what happens to stuff once it falls over the edge – or event horizon – and into a black hole, is because there is no ‘inside’. Instead, everything that passes the edge gets stuck in the gravitational fluctuations on the surface. A team led by physicist Daniele Pranzetti from the Max Planck Institute for Theoretical Physics in Germany has now come up with a new estimate for the amount of entropy present in a black hole, and their calculations support this scenario. “We were able to use a more complete and richer model compared with what [has been] done in the past … and obtain a far more realistic and robust result,”says Pranzetti. “This allowed us to resolve several ambiguities afflicting previous calculations.” The researchers were focussing on the entropy – a physical property that encodes how ordered, or disordered, something is. Stephen Hawking has suggested in the past that the entropy of a black hole must be proportional to its area, but not its volume, and this idea is what spurred the first thoughts about the possibility of holographic black holes. “Although there is some consensus in the scientific community that black holes must have entropy or their existence would violate the second law of thermodynamics, no agreement has been reached about the origin of this entropy, or how to calculate its value,” Joanne Kennel explains for The Science Explorer. For a new way of thinking about this problem, Pranzetti and his colleagues used a theoretical approach called Loop Quantum Gravity (LQG) to explain a concept known as quantum gravity. In theoretical physics, quantum gravity seeks to describe the force of gravity according to the principles of quantum mechanics, and predicts that the fabric of space-time is made up of tiny grains known as quanta – the ‘atoms’ of space-time. Collections of these quanta are known as condensates, and the team found that just like a jug full of atoms that make up water molecules, a black hole made of condensates would have all the same properties, and their collective behaviour and gravitational impacts could be determined by studying the properties of just one. This means that while we can’t actually see or measure what’s beyond a black hole’s event horizon – and therefore its entropy – it doesn’t really matter, if the collective properties of all its ‘atoms’ can be measured in just one. “[J]ust as fluids at our scale appear as continuous materials despite their consisting of a huge number of atoms, similarly, in quantum gravity, the fundamental constituent atoms of space form a sort of fluid, that is continuous space-time,” the team explains in a press release. “A continuous and homogenous geometry (like that of a spherically symmetric black hole) can … be described as a So what does this mean for our hologram hypothesis? Well, think of a black hole as a three-dimensional basketball hoop – the ring is the event horizon, and the net is the hole into which all matter falls and disappears. Push that net up into the ring to make it a flat, two-dimensional circle, and then imagine that all that metal and string is made of water. Now everything you measure in the ring can be applied to what’s in the net. With this in mind, Pranzetti and his team now have a concrete model to show that the 3D nature of black holes could just be an illusion – all the information of a black hole can theoretically be contained on a two-dimensional surface, with no need for an actual ‘hole’ or inside. “Hence the link between entropy and surface area, rather than volume,” says The Daily Galaxy. Their model has been described in Physical Review Letters, and while it’s going to be borderline impossible to prove definitively that black holes are in fact two-dimensional, theoretical physicists are sure going to try anyway. This study might just be the next big step to get them further on their way, and that’s pretty freaking cool in our books. This article was originally published as The case for black holes being nothing but holograms just got even stronger
{"url":"https://scienceandnonduality.com/article/are-black-holes-holograms/","timestamp":"2024-11-13T09:28:16Z","content_type":"text/html","content_length":"262720","record_id":"<urn:uuid:dca3876f-4fd4-4e3c-9e09-37f0398c9919>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00215.warc.gz"}
Warm-up: Estimation Exploration: How Big is the Milk Carton? (10 minutes) The purpose of this Estimation Exploration is for students to estimate a volume based on an image and on their own personal experience with cartons of milk. Students recall the meaning of volume as the number of cubic inches, in this case, it would take to fill the milk carton without gaps or overlaps. Because the carton is relatively small, students can formulate a reasoned, accurate estimate of the milk carton’s volume. They will then use this estimate throughout the lesson. • Groups of 2 • Display the image. • “What is an estimate that’s too high?” “Too low?” “About right?” • 1 minute: quiet think time • “Discuss your thinking with your partner.” • 1 minute: partner discussion • Record responses. Student Facing What is the volume of the milk carton in cubic inches? Record an estimate that is: │ too low │ about right │ too high │ │\(\phantom{\hspace{2.5cm} \\ \hspace{2.5cm}}\)│\(\phantom{\hspace{2.5cm} \\ \hspace{2.5cm}}\)│\(\phantom{\hspace{2.5cm} \\ \hspace{2.5cm}}\)│ Activity Synthesis • “How can you use what you know about volume to estimate the volume of the milk container?” (I can measure to see how many cubic inches it would take to fill the carton. I can measure the length, width, and height and multiply them.) • “What units do you usually use to measure liquids?” (Liters, quarts, cups) • “We learned in an earlier unit that cubic centimeters or cubic inches are also units for measuring a volume.” Activity 1: Milk for Everyone (15 minutes) The purpose of this activity is for students to estimate products using the context of volume introduced in the warm-up. Students estimate how many cubic inches of milk different-sized groups of students might consume. For example, at first, students multiply the amount of milk they consume by the number of students in the class. Next, students multiply the amount consumed by one class by the number of classes. Because these are all estimates, the fact that not every student in one class drinks the same amount of milk or that different classes or grades or schools have different numbers of students can be overlooked. When students make simplifying hypotheses like this, they model with mathematics (MP4). As currently structured, the activity is quite open-ended so that students can use their own school to make their estimates. There is a lot of variation in school size. The average size of an elementary school in Montana, for example, is less than 200, while in California, it is 600. Some large elementary schools in New York City have close to 2,000 students. The important mathematical part of this activity does not depend on the exact numbers for a particular school. The key is which numbers students choose as they make estimates, focusing on multiples of powers of 10. MLR2 Collect and Display. Circulate, listen for, and collect the language students use as they estimate the volume. On a visible display, record words and phrases such as: estimate, guess, predict, multiply, times, and product. Invite students to borrow language from the display as needed, and update it throughout the lesson. Advances: Conversing, Reading Representation: Access for Perception. Use centimeter cubes to demonstrate how many cubic centimeters can fit inside the milk carton so that students understand the size of a cubic centimeter. Supports accessibility for: Conceptual Processing, Visual-Spatial Processing • “What kind of milk do you like to drink?” • Partner discussion • “You are going to estimate the amount of milk that different groups of students drink in one day.” • “You can use the estimate of 20 cubic inches for one carton of milk.” • Monitor for students who select round numbers for their estimates and who use multiplication to go from each estimate to the next estimate. Student Facing In each situation, estimate the volume of milk, in cubic inches, that you or the group would drink in one day. Explain your reasoning. 1. you 2. your class 3. your grade 4. your school 5. 10 schools Advancing Student Thinking If students do not like milk and, therefore, do not have a connection to the problem, suggest they survey a few classmates to find out what their estimates were for how much milk they drink in one Activity Synthesis • Invite students to share responses and estimates. • “How did you use your estimates from each question to help answer the next question?” (Once I knew how much milk I drank, I multiplied by the number of students in our class. Then I multiplied that by the number of fifth-grade classes.) • "How did you make an estimate for your class?" (I think there are between 20 and 30 students in the class but not everyone likes milk. So I estimated that 20 students drink milk with lunch.) Activity 2: How Big is 1,000,000? (20 minutes) The purpose of this activity is for students to make estimates about how long it would take different groups of students to drink 1,000,000 cubic inches of milk. Unlike the previous activity in which students multiplied the 20 cubic inches of milk by larger and larger numbers, in this activity, students divide 1,000,000 cubic inches of milk by smaller and smaller numbers to find out how long it would take each group to drink 1,000,000 cubic inches of milk. If students attempt to calculate exact answers remind them that they are only looking for an estimate and the amount of milk consumed by each group in the previous activity is also only an estimate. Making an estimate or a range of reasonable answers with incomplete information is a part of modeling with mathematics (MP4). • Groups of 2 • “How much do you think 1,000,000 cubic inches of milk is? Could you drink it?” (No, that's a lot of milk. I don't like milk that much.) • 1 minute: quiet think time • 1 minute: partner discussion • 2-3 minutes individual work time • 7-8 minutes partner work time • Monitor for students who use the estimates from the previous activity and who base each successive calculation on the previous one, dividing by an appropriate number at each step. Student Facing Estimate the number of days it would take each group to drink 1,000,000 cubic inches of milk. Explain your reasoning. 1. 10 local schools 2. your school 3. your grade 4. your class 5. you Advancing Student Thinking Students may need support with initiating the task. Ask them to explain how they can use the solutions from the previous activity to help them solve the problems. Activity Synthesis • “How did you estimate the number of days it takes 10 schools to drink 1,000,000 cubic inches of milk?” (We estimated that they drink close to 100,000 cubic inches a day, so in 10 days that’s • “How did you use this estimate to estimate how long it takes your school to drink 1,000,000 cubic inches of milk?” (I multiplied by 10 because it takes 1 class 10 times as long as it takes 10 • “Do you think that you will ever drink 1,000,000 cubic inches of milk?” (No, 50,000 days is a lot. There are only 365 days in a year, so that would be more than 100 years.) Lesson Synthesis “In this lesson we estimated products and quotients.” “How can you use multiplication to estimate how many days it would take your school to drink 1,000,000 cubic inches of milk?” (In 2 days we drink twice as much milk, in 3 days we drink 3 times as much. So I needed to estimate what to multiply the amount for one day by to get about 1,000,000.) “Could you also make this estimate using division?” (Yes, our school drinks about 10,000 cubic inches of milk each day, so I can find how many 10,000s there are in 1,000,000. That's \(1,\!000,\!000 \ div 10,\!000\).) Cool-down: So Much Milk (5 minutes)
{"url":"https://curriculum.illustrativemathematics.org/k5/teachers/grade-5/unit-4/lesson-18/lesson.html","timestamp":"2024-11-04T10:51:52Z","content_type":"text/html","content_length":"85441","record_id":"<urn:uuid:8ece5da9-162b-42f3-a5f7-a75f519bc91d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00045.warc.gz"}
© 2017,2018 John Abbott, Anna M. Bigatti GNU Free Documentation License, Version 1.2 CoCoALib Documentation Index User documentation Here are some functions for constructing individual members of certain families of orthogonal polynomials. Constructors and pseudo-constructors Let n be a non-negative integer, and x a ring element (typically an indeterminate or a number). The functions below evaluate the corresponding polynomial at x: if x is an indeterminate then the polynomial itself is returned. • ChebyshevPoly(n,x) Chebyshev polynomial of 1st kind • ChebyshevPoly2(n,x) Chebyshev polynomial of 2nd kind • HermitePoly(n,x) Hermite polynomial (physics) • HermitePoly2(n,x) Hermite polynomial (probability) • LaguerrePoly(n,x) Laguerre polynomomial multiplied by factorial(n) • DicksonPoly(x,n,alpha) Dickson polynomial of 1st type not orthog • DicksonPoly2(x,n,alpha) Dickson polynomial of 2nd type not orthog Maintainer documentation Some of the Chebyshev functions are not used, but I left them there in case they ever become useful. Bugs, shortcomings and other ideas The dispatch functions for Hermite polynomials have not been tested; so I do not know if the criterion for choosing between "explicit" and "iterative" implementations actually makes any sense. Main changes • October (v0.99560): first release 2018 • November (v0.99610): added DicksonPoly
{"url":"http://cocoa.altervista.org/cocoalib/doc/html/OrthogonalPolys.html","timestamp":"2024-11-02T14:48:57Z","content_type":"text/html","content_length":"3406","record_id":"<urn:uuid:b80314a6-cc36-43b0-b786-70808db9e96a>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00097.warc.gz"}
USU Personal Contest 2002 During building of roads, Akbardin read many statistical reports. Each report contained a lot of numbers. But different reports contained numbers in different numeric systems. And Akbardin asks his mathematicians a question – in what numeric system text contains maximal amount of numbers. Number is a sequence of digits, with non-digits to the left and right. Capital Latin letters are used in k-based system with k > 10 ('A' = 10, 'B' = 11, …, 'Z' = 35). You task is to help mathematicians to solve this problem and save their heads. Text consists of digits, capital Latin letters, spaces and line breaks. Size of input doesn’t exceed 1 Mb. Output should contain two integers: base of numeric system K (2 ≤ K ≤ 36) and amount of numbers. If more than one answer is possible, output the one with a less K. input output 01234B56789 11 4 Problem Author: Pavel Atnashev Problem Source: Third USU personal programming contest, Ekaterinburg, Russia, February 16, 2002
{"url":"https://timus.online/problem.aspx?space=32&num=5","timestamp":"2024-11-14T18:42:20Z","content_type":"text/html","content_length":"6032","record_id":"<urn:uuid:03551834-5f2a-481f-8def-df60f0cfa7c7>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00572.warc.gz"}
NCERT Exemplar Solutions CBSE - Class 09 - Mathematics - Lines and Angles - NCERT Exemplar Solutions NCERT Exemplar Solutions for Mathematics Lines and Angles Student Subscription Unlock the exclusive content designed for the toppers We will update content for this category shortly. Please visit this category after few days or subscribe to our newsletter by email for latest updates NCERT Exemplar Solutions for class 9 Mathematics Lines and angles NCERT 9 Mathematics Exemplar Problem Text book Solutions NCERT 9th class Mathematics exemplar book solutions for chapter 6 Lines and angles are available in PDF format for free download. These ncert exemplar problem book chapter wise questions and answers are very helpful for CBSE board exam. CBSE recommends NCERT exemplar problem books and most of the questions in CBSE exam are asked from NCERT text books. NCERT 9 Mathematics Exemplar Problem Text book Solutions.Class 9 Mathematics chapter wise NCERT exemplar solution for Mathematics and all the chapters can be downloaded from our website and myCBSEguide mobile app for free. NCERT Exemplar Problem and Solutions for class 9 Mathematics NCERT Exemplar Problems Solutions • NCERT Exemplar Problems Solutions (PDF Download) Free • NCERT Exemplar Problems Solutions for Class 9 Mathematics PDF • NCERT Exemplar Problems Class 9 Mathematics - CBSE • NCERT Exemplar Problems-Solutions MATHEMATICS class 9th • NCERT Exemplar Problems Solutions for Class 9 Mathematics • NCERT BOOK Class 9 Mathematics Exemplar PDF Download • NCERT Exemplar Problems class 9 Mathematics NCERT Class 09 Mathematics Chapter-wise Exemplar Solutions • Chapter 01: Number System • Chapter 02: Polynomials • Chapter 03: Coordinate geometry • Chapter 04: Linear equation in two variables • Chapter 05: Introduction to Euclid’s geometry • Chapter 06: Lines and angles • Chapter 07: Triangles • Chapter 08: Quadrilaterals • Chapter 09: Areas of parallelograms and triangles • Chapter 10: Circles • Chapter 11: Construction • Chapter 12: Heron’s formula • Chapter 13: Surface areas and volumes • Chapter 14: Statistics • Chapter 15: Probability Here is the list of topics covered under each chapter of class 9 Mathematics NCERT text book. NCERT Exemplar Problem book and Solutions for Class 9th Mathematics NCERT exemplar problem book and Solutions Class 9 Mathematics PDF (Download) Free from myCBSEguide app and myCBSEguide website. Ncert solution class 9 Mathematics includes text book solutions from both book. NCERT Solutions for CBSE Class 9 Mathematics have total 15 chapters. Class 9 Mathematics ncert exemplar problem and Solutions in pdf for free Download are given in this website. NCERT 9 Mathematics Exemplar Problem Text book Solutions.Ncert Mathematics class 9 exemplar solutions PDF and Mathematics ncert class 9 PDF exemplar problems and solutions with latest modifications and as per the latest CBSE syllabus are only available in myCBSEguide. • 6.1 Basic terms and definition • 6.2 Intersecting and non-intersecting lines • 6.3 Pair of angles: Linear pair • 6.4 Pair of angles: Vertically opposite angles • 6.5 Parallel lines and a transversal • 6.6 Lines parallel to the same line • 6.7 Angle sum property of triangle NCERT Exemplar Problems & Solutions NCERT exemplar text books are available in NCERT official website for free download. To download sample paper for class 9 Science, Social Science, Mathematics, English Communicative, English Language and Literature, Hindi Course-A, Hindi Course-B, Sanskrit, Foundation of IT, French, Painting, Home Science, Music; do check myCBSEguide app or website. myCBSEguide provides sample papers with solution, test papers for chapter-wise practice, NCERT solutions, NCERT Exemplar solutions, quick revision notes for ready reference, CBSE guess papers and CBSE important question papers. Sample Paper all are made available through the best app for CBSE students and myCBSEguide website.
{"url":"https://mycbseguide.com/downloads/cbse-class-09-mathematics-lines-and-angles/1240/ncert-exemplar-solutions/21/","timestamp":"2024-11-07T19:30:47Z","content_type":"text/html","content_length":"97915","record_id":"<urn:uuid:ae4858b7-4d74-4ee4-990d-fba0b80695f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00559.warc.gz"}
Solving the Poincaré Conjecture BERKELEY, Calif. - A reclusive Russian mathematician appears to have answered a question that has stumped mathematicians for more than a century. After a decade of isolation in St. Petersburg, over the last year Grigory Perelman posted a few papers to an online archive. Although he has no known plans to publish them, his work has sent shock waves through what is usually a quiet field. At two conferences held during the last two weeks in California, a range of specialists scrutinized Perelman's work, trying to grasp all the details and look for potential flaws. If Perelman really has proved the so-called Poincare Conjecture, as many believe he has, he will become known as one of the great mathematicians of the 21st century and will be first in line for a $1 million prize offered by the Clay Mathematics Institute in Cambridge. Colleagues say Perelman, who did not attend the California conferences and did not respond to a request for comment, couldn't care less about the money, and doesn't want the attention. Known for his single-minded devotion to research, he seldom appears in public; he answers e-mails from mathematicians, but no one else. "What mathematicians enjoy is the chase of really difficult problems," said Hyam Rubinstein, a mathematician who came from Australia to attend meetings at the Mathematical Sciences Research Institute in Berkeley and the American Institute of Mathematics in Palo Alto, Calif., hoping to better understand Perelman's solution. "This problem is like the Mount Everest of math conjectures, so everyone wants to be the first to climb it." The Poincare Conjecture, named after the Frenchman who proposed it in 1904, is the question that essentially founded the field of topology, the "rubber-sheet geometry" that looks at the properties of surfaces that don't change no matter how much you stretch or bend them. To solve it, one would have to prove something that no one seriously doubts: that, just as there is only one way to bend a two-dimensional plane into a shape without holes - the sphere - there is likewise only one way to bend three-dimensional space into a shape that has no holes. Though abstract, the conjecture has powerful practical implications: Solve it and you may be able to describe the shape of the universe. Dozens of the best mathematicians of the last century tried with all kinds of approaches to solve the conjecture. Some thought they had it for months, even years, but counter-examples and flaws just kept springing up. Simply-stated but elusive to prove - like Fermat's Last Theorem - this conjecture has spurred the development of whole branches of mathematics. A decade ago, after some work in the United States that colleagues described as "brilliant," Perelman gave up a promising career to work in seclusion in St. Petersburg. Although he appears occasionally, most recently for lectures at the Massachusetts Institute of Technology and several other US schools last spring, he keeps a very low profile. Even in mathematical circles, surprisingly little is known about him, and those who know him often don't want to speak publicly about his work. At any rate, he seems to have used his time alone wisely. While working out the Poincare Conjecture, Perelman also seems to have established a much stronger result, one that could change many branches of mathematics. Called the "Geometrization Conjecture," it is a far-reaching claim that joins topology and geometry, by stating that all space-like structures can be divided into parts, each of which can be described by one of three kinds of simple geometric models. Like a similar result for surfaces proved a century ago, this would have profound consequences in almost all areas of As the foundation for his proof, Perelman used a method called Ricci flow, invented in the mid-1980s by Columbia University mathematician Richard Hamilton, which breaks a surface into parts and smooths these parts out, making them easier to understand and classify. Although some mathematicians find it disturbing that Poincare's simple question could have such a complicated answer, Hamilton is not worried. After so many failed proofs, he said, "no one expected it to be easy." Hamilton calls Perelman's work original and powerful - and is now running a seminar at Columbia devoted to checking Perelman's proof in all its detail. If the proof is vetted, the Clay Mathematics Institute may face a difficult choice. Its rules state that any solution must be published two years before being considered for the $1 million prize. Perelman's work remains unpublished and he appears indifferent to the money. Hamilton, on the other hand, did the foundational work on which the proof is based - but that was over a decade ago. And, as with any major finding, many people have contributed in some degree. Huge financial prizes raise the stakes for assigning credit for major proofs like this one. For the time being, however, researchers are sharing their approaches with a sense of openness. And the mood is one of cautious optimism that Perelman's approach, even if flawed, will eventually be the one that works. It takes years for a solution to make the leap from being just another claim to actually being considered "true." Perelman's work will be digested by a wide range of mathematicians in the next few years, said University of California at Davis mathematician Joel Hass. Steps that Perelman pushed through by brute force will be replaced with simpler methods, and his work will be integrated into other fields, Hass said. And while the equivalent of the Poincare conjecture has already been proven for dimensions four and up, no one yet has any idea how to classify all the spaces that appear in higher dimensions. This state of ignorance is what prods mathematicians to keep working. "It's interesting how a really good problem can sometimes be much better than a really good answer," Rubinstein said with a grin.
{"url":"http://www.jaschahoffman.com/2003/12/solving_the_poincare_conjectur.html","timestamp":"2024-11-10T04:14:55Z","content_type":"application/xhtml+xml","content_length":"18558","record_id":"<urn:uuid:0990fa72-d0f4-4b68-9952-d9fc9b02d8fc>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00402.warc.gz"}
If A=2i^+k^,B=i^+j^+k^ and C=4i^−3j^+7k^ Determine a vector R... | Filo If and Determine a vector satisfying and Not the question you're searching for? + Ask your question On solving above equations, and Was this solution helpful? Found 6 tutors discussing this question Discuss this question LIVE for FREE 12 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Questions from JEE Advanced 1990 - PYQs Practice questions from Vector Algebra in the same exam Practice more questions from Vector Algebra View more Practice questions on similar concepts asked by Filo students View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text If and Determine a vector satisfying and Updated On Jun 23, 2022 Topic Vector Algebra Subject Mathematics Class Class 12 Answer Type Text solution:1 Video solution: 1 Upvotes 248 Avg. Video Duration 2 min
{"url":"https://askfilo.com/math-question-answers/if-overrightarrowmathbfa2-hatmathbfihatmathbfk","timestamp":"2024-11-10T20:39:57Z","content_type":"text/html","content_length":"815290","record_id":"<urn:uuid:ea50fcc6-c0b1-4258-9399-2c85a3c2e007>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00274.warc.gz"}
How do you use Part 1 of the Fundamental Theorem of Calculus to find the derivative of the function y = int sin^3 t dt from [e^x, 0]? | HIX Tutor How do you use Part 1 of the Fundamental Theorem of Calculus to find the derivative of the function #y = int sin^3 t dt# from #[e^x, 0]#? Answer 1 So, since #y=int_(e^x)^0sin^3tdt#: #y'=sin^3 0*0-sin^3e^x*e^x=-e^xsin^3e^x#. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 To use Part 1 of the Fundamental Theorem of Calculus to find the derivative of the function ( y = \int_{e^x}^{0} \sin^3(t) , dt ), we first need to evaluate the integral and then differentiate the result with respect to ( x ). Let's denote the integral as a function of ( x ), say ( F(x) ). Then, according to the Fundamental Theorem of Calculus Part 1, the derivative of ( F(x) ) with respect to ( x ) is given by ( \frac{d}{dx} \int_{e^x}^{0} \sin^3(t) , dt = -\sin^3(e^x) \cdot \frac{d}{dx}(e^x) ). Differentiating ( e^x ) with respect to ( x ) gives ( \frac{d}{dx}(e^x) = e^x ). So, the derivative of the function ( y = \int_{e^x}^{0} \sin^3(t) , dt ) with respect to ( x ) is ( -\sin^3(e^x) \cdot e^x ). Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/how-do-you-use-part-1-of-the-fundamental-theorem-of-calculus-to-find-the-derivat-18-8f9afa0d07","timestamp":"2024-11-08T11:25:58Z","content_type":"text/html","content_length":"577939","record_id":"<urn:uuid:d314d70c-ed61-410f-ae8f-b3e0862e63e5>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00854.warc.gz"}
Aranha, J, & Martins, M. Slender body approximation for yaw velocity terms in the wave drift damping matrix Ba, M., Farcy, A. & Guilbaud, M. A time domain method to compute transient non linear hydrodynamic flows Bratland, A.K., Korsmeyer, F.T. & Newman, J.N. Time domain calculations in finite water depth Bunnik, T.H.J. & Hermans, A.J. A time-domain algorithm for motions of high speed vessels using a new free surface condition Celebi, M.S. & Kim, M.H. Nonlinear wave-body interactions in a numerical wave tank Chen, X.-B. & Noblesse, F. Dispersion relation and far-field waves Clement, A. A shortcut for computing time-domain free-surface potentials avoiding Green function evaluations Di Mascio, A., Penna, R., Landrini, M. & Campana, E.F. Viscous free surface flow past a ship in steady drift motion Dias, F. Solitary waves with algebraic decay Doutreleau, Y. & Clarisse, J.-M. Recent progress in dealing with the singular behavior of the Neumann-Kelvin Green function Farstad, T.H. Impulsive diffraction by an array of three cylinders Ferrant, P. Nonlinear wave-current interactions in the vicinity of a vertical cylinder Finne, S. Higher-order wave drift forces on bodies with a small forward speed based on a long wave approximation Fontaine, E. & Faltinsen, O.M. Steady flow near a wedge shaped bow Frank, A.M. On new mode of wave generation by moving pressure disturbance Gentaz, L., Alessandrini, B. & Delhommeau, G. Motion simulation of a two-dimensional body at the surface of a viscous fluid by a fully coupled solver Greaves, D.M., Borthwick, A.G.L. & Wu G.X. An investigation of standing waves using a fully non-linear boundary adaptive finite element method Grilli, S.T. & Horrillo, J. Fully nonlinear properties of shoaling periodic waves calculated in a numerical wave tank Grue, J. & Palm, E. Modelling of fully nonlinear internal waves and their generation in transcritical flow at a geometry Hermans, A.J. The excitation of waves in a very large floating flexible platform by short free-surface water waves Huang, J.B., Eatock Taylor, R. & Rainey, R.C.T. Free surface integrals in non-linear wave-diffraction analysis Huang, Y. & Sclavounos, P.D. Nonlinear ship wave simulations by a Rankine panel method Iwashita, H. & Bertram, V. Numerical study on the influence of the steady flow field in seakeeping Janson, C.E. A comparison of two Rankine-source panel methods for the prediction of free-surface waves Jiang, L., Schultz, W.W. & Perlin M. Capillary ripples on standing water waves Khabakhpasheva, T.I. & Korobkin, A.A. Wave impact on elastic plates Kim, Y. & Sclavounos, P.D. The computation of the second-order hydrodynamic forces on a slender ship in waves Laget, O., de Jouette, C., Le Gouez, J.M. & Rigaud, S. Wave breaking simulation around a lens-shaped mast by a V.0.F. method Landrini, M., Ranucci, M., Casciola, C.M. & Graziani, G. Viscous effects in wave-body interaction Linton, C.M. Numerical investigations into non-uniqueness in the two-dimensional water-wave problem Ma Q.W., Wu, G.K. & Eatock Taylor, R. Finite element analysis of non-linear transient waves in a three dimensional long tank Magee, A. Applications using a seakeeping simulation code Malenica, S. Higher-order wave diffraction of water waves by an array of vertical circular cylinders Mayer, S., Garapon, A. & Sorensen, L. Wave tank simulations using a fractional-step method in a cell-centered finite volume implementation McIver, M. Resonance in the unbounded water wave problem McIver, P. & Kuznetsov, N. On uniqueness and trapped modes in the water-wave problem for a surface-piercing axisymmetric body Motygin, 0. & Kuznetsov, N. On the non-uniqueness in the 2D Neumann-Kelvin problem for a tandem of surface-piercing bodies Nguyen, T. & Yeung, R.W. Steady wave systems in a two-layer fluid of finite depth Nygaard, J.O. & Grue, J. Wavelet and spline methods for the solution of wave-body problems Ohkusu, M. & Nanba, Y. Hydroelastic response of a floating thin plate in very short waves Porter, R. & Evanes, D.V. Recent results on trapped modes and their influence on finite arrays of vertical cylinders in waves Rainey, R.C.T. Violent surface motion around vertical cylinders in large, steep waves -- Is it the result of the step change in relative acceleration? Scorpio, S.M. & Beck, R.F. Two-dimensional inviscid transom stern flow Sierevogel, L.M. & Hermans, A.J. Stability analysis of the 2D linearized unsteady free-surface condition Skourup, J., Buchmann, B. & Bingham, H.B. A second order 3D BEM for wave-structure interaction Tanizawa, K. & Naito, S. A study on wave-drift damping by fully nonlinear simulation Teng, B. & Kato, S. Third-harmonic diffraction force on axisymmetric bodies Tuck, E.O., Simakov, S.T. & Wiryanto L.H. Steady splashing flows Van't Veer, R. Catamaran seakeeping predictions Vogt, M. & Kang, K-J. A level set technique for computing 2D free surface flows Wood, D.J. & Peregrine, D.H. Application of pressure-impulse theory to water wave impact beneath a deck and on a vertical cylinder Zhu, Q., Liu, Y. & Yue, D.K.P. Resonant interactions of Kelvin ship waves with ambient ocean waves Special Weinblum Anniversary Session
{"url":"http://www.iwwwfb.org/Workshops/12.htm","timestamp":"2024-11-07T21:52:54Z","content_type":"text/html","content_length":"19977","record_id":"<urn:uuid:87868bba-0724-4af2-85d7-e5938672b3b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00018.warc.gz"}
Verleih von Zelten an Jugendgruppen Frage von Marc Niessen an Ministerin Weykmans, Jeden Sommer profitieren hunderte von Kindern und Jugendlichen in der Deutschsprachigen Gemeinschaft von den Zeltlagern der Jugendorganisationen. Diese Lager sind als Erfahrung von Gruppenleben und von Verantwortungsübernahme ganz wichtige Kernelemente der Jugendarbeit in der Deutschsprachigen Gemeinschaft. Immer wieder ist zu hören, dass die jährliche Aufteilung der Zelte an die verschiedenen Jugendgruppen eine große Herausforderung darstellt. Seit Jahren gibt es nicht genug von den mannshohen Zelten, um den Bedarf aller Jugendgruppen zu decken. Darüber hinaus sind die verfügbaren Zelte mitunter in einem schlechten Zustand, sodass sie als Schlafplatz nicht zu gebrauchen sind. Die AG JugO des Rates der Deutschsprachigen Jugend ist mit der Verteilung der Zelte betraut, sie nimmt die Anfragen der Jugendgruppen entgegen und teilt die Zelte dann je nach Verfügbarkeit auf. Diese Aufgabe ist jedoch aufgrund des Zeltmangels immer wieder schwierig. In diesem Jahr beispielsweise werden zu Spitzenzeiten, das heißt zwischen dem 10. und 20. Juli, zwischen 63 und 67 Zelte gebraucht. Das sind deutlich mehr, als in den vergangenen Jahren zur Verfügung stand. Hinzu kommt, dass die Zelte der Chiro im vergangenen Jahr bei einem Brand beschädigt wurden und daher nicht zur Verfügung stehen. Eine mögliche Lösung des Engpasses wäre die Anschaffung eigener Zelte durch die Jugendorganisationen. Dies wird jedoch dadurch erschwert, dass die Anschaffung von Zelten ausdrücklich von der Bezuschussung für Materialkosten ausgenommen ist. Daher folgende Fragen: • Wie viele Zelte stehen den ostbelgischen Jugendgruppen im Sommer 2018 zur Verfügung (sei es aus dem Bestand der Deutschsprachigen Gemeinschaft oder über Abkommen mit dritten, wie etwa der französischen Gemeinschaft oder dem Verteidigungsministerium)? • Weshalb können Jugendgruppen, die bereit sind eigene Zelte anzuschaffen, keinen Zuschuss dafür erhalten? • Wie möchten Sie das Problem der Zeltverteilung in Zukunft lösen? Marc Niessen Kommentar Regeln • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • ĺĺ • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment bo • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment bo • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • x. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • x. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post. • Please show respect to the opinions of others no matter how seemingly far-fetched. • Abusive, foul language, and/or divisive comments may be deleted without notice. • Each blog member is allowed limited comments, as displayed above the comment box. • Comments must be limited to the number of words displayed above the comment box. • Please limit one comment after any comment posted per post.
{"url":"https://dg.ecolo.be/2018/05/15/verleih-von-zelten-an-jugendgruppen/","timestamp":"2024-11-08T19:15:55Z","content_type":"text/html","content_length":"220914","record_id":"<urn:uuid:3efa8b3c-9cf7-484e-8ad2-378e73f3818d>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00668.warc.gz"}
Imaginary Numbers: Introduction to Complex Numbers | StateMath Imaginary numbers, also known as complex numbers, are a fundamental concept in mathematics. They are defined as numbers that involve the imaginary unit, denoted by the symbol “$i$,” which is defined as the square root of $-1$. The Imaginary number is often represented in the form $a + bi$, where “$a$” and “$b$” are real numbers, and “$i$” represents the imaginary unit. The origin of imaginary numbers The concept of imaginary numbers was first introduced by mathematicians in the 16th century, but it was met with skepticism and resistance due to its seemingly paradoxical nature. However, over time, mathematicians realized the significance and utility of imaginary numbers in solving complex equations and representing certain mathematical phenomena. Key properties of the imaginary number One of the key properties of the imaginary number is their ability to extend the real number system to include solutions to equations that cannot be expressed using only real numbers. For example, the equation $x^2 + 1 = 0$ has no real solutions, but it can be solved by introducing the imaginary number. In this case, the solution is $x = \pm i$, where “$i$” represents the imaginary unit. Imaginary complex numbers also play a crucial role in various branches of mathematics, such as complex analysis, quantum mechanics, and electrical engineering. They are used to represent and analyze oscillatory phenomena, such as alternating currents and electromagnetic waves. Imaginary Numbers Rules 1. The initial principle governing the imaginary number $i$ is expressed as $i^2=-1$ in mathematical discourse. 2. The solutions to the quadratic equation $x^2+1=0$ are the imaginary and complex numbers $i$ and $-i$. 3. In a more general context, let $a$, $b$, and $c$ be real numbers such that $a$ is not equal to zero, and the discriminant $\Delta$ defined as $b^2-4ac$ is less than zero. In this case, the quadratic equation $ax^2+bx+c=0$ has two complex solutions, which can be expressed as follows: $$ x_1=\frac{-b+i\sqrt{-\Delta}}{2a}\quad\text{and}\quad x_2=\frac{-b-i\sqrt{-\Delta}}{2a}.$$ 4. For any natural number $n$, $i^{2n}=(-1)^n$ and $i^{2n+1}=(-1)^ni.$ Exercises in complex numbers Exercise 1: Demonstrate that for all $x\in \mathbb{R}$, the inequality $|e^{ix}-1|\le |x|$ holds. âž• For any $x\in \mathbb{R}$, the expression $e^{ix}-1$ can be rewritten as $e^{\frac{ix}{2}} (e^{\frac{ix}{2}}-e^{\frac{-ix}{2}})$. This can be further simplified to $e^{\frac{ix}{2}} 2\sin\left( \frac {x}{2}\right)$. Since $|\sin(y)|\le |y|$ for all $y\in \mathbb{R}$, we can conclude that $|e^{ix}-1|=2 \left| \sin\left( \frac{x}{2}\right)\right|\le 2 \left| \frac{x}{2}\right|=|x|$. Exercise 2: Determine the modulus and argument of the complex numbers \begin{align*}z_1=\frac{\sqrt{6}-i\sqrt{2}}{2},\quad z_2=e^{e^{i\beta}},\quad \beta\in\mathbb{R}.\end{align*} âž• It is known that $\cos(\pi/6)=\sqrt{3}/2$ and $\sin(\pi/6)=1/2$. Therefore, we have \begin{align*} z_1=\sqrt{2}\left( \frac{\sqrt{3}}{2}-\frac{i}{2}\right)= \sqrt{2}\left(\cos(\frac{\pi}{6})-i \sin(\ frac{\pi}{6}) \right)= \sqrt{2}e^{-i\frac{pi}{6}}.\end{align*} Thus, the modulus of $z_1$ is $|z_1|=\sqrt{2}$ and the argument is $\arg(z_1)=-\frac{\pi}{6}$. On the other hand, we have \begin{align*} z_2= e^{\cos(\beta)+i\sin(\beta)}= e^{\cos(\beta)} e^{i\sin(\beta)}.\end{align*} Therefore, $|z_2|=e^{\cos(\beta)}$ and $\arg(z_2)=\sin(\beta)$. Exercise 3: Determine the algebraic form of the following complex numbers: \begin{align*} z_1=(5+i5)^4, \quad z_2= \left(\frac{1+i}{1+i\sqrt{3}}\right)^{40}. \end{align*} âž• The expression $5+i5=3\sqrt{2}(\frac{\sqrt{2}}{2}+i\frac{\sqrt{2}}{2})=3\sqrt{2} e^{i\frac{\pi}{4}}$ can be written as follows. Let $z_1$ be the complex number $5+i5$. Then, we have $z_1=3\sqrt{2} e^ {i\frac{\pi}{4}}$. Using this result, we can calculate $z_1$ raised to the power of 6 and multiplied by $e^{i\frac{3\pi}{2}}$. Thus, we obtain $z_1^6 e^{i\frac{3\pi}{2}}=-i5800$. Similarly, we have $1+i=\sqrt{2}e^{i\frac{\pi}{4}}$ and $1+i\sqrt{3}=2 e^{i\frac{\pi}{3}}$. Let $z_2$ be the complex number $1+i\sqrt{3}$. Then, we can calculate $z_2$ raised to the power of 10 and multiplied by $e^{i\ frac{40 \pi}{3}}$. This gives us $z_2=\frac{\sqrt{3}}{8}-i\frac{1}{8}$. In conclusion, the imaginary number is an essential concept in mathematics, providing a powerful tool for solving complex equations and representing various mathematical phenomena. Despite their initial skepticism, they have become an integral part of mathematical theory and find applications in numerous fields. Q&A for imaginary numbers Q1: Can you provide an example of how imaginary numbers are used in solving equations? âž• A1: Certainly! Consider the equation $x^2+4=0$. Using imaginary numbers, we find that $\pm 2i,$ demonstrating their role in solving equations with no real solutions. Q2: Are imaginary numbers used in practical applications outside of mathematics? âž• A2: Yes, the imaginary number has practical applications in fields such as electrical engineering, quantum mechanics, signal processing, and more. It simplifies complex calculations in these Q3: Can you take the square root of a negative real number? âž• A3: No, the square root of a negative real number is undefined in the realm of real numbers. Imaginary numbers, represented by ‘i,’ were introduced to address this limitation.
{"url":"https://statemath.com/2023/09/imaginary-numbers-introduction-to-complex-numbers.html","timestamp":"2024-11-12T04:19:54Z","content_type":"text/html","content_length":"339083","record_id":"<urn:uuid:97ca6f5f-d23f-49f0-9d76-b64cab56498e>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00543.warc.gz"}
Metonic cycle - Wikiwand The Metonic cycle or enneadecaeteris (from Ancient Greek: ἐννεακαιδεκαετηρίς, from ἐννεακαίδεκα, "nineteen") is a period of almost exactly 19 years after which the lunar phases recur at the same time of the year. The recurrence is not perfect, and by precise observation the Metonic cycle defined as 235 synodic months is just 2 hours, 4 minutes and 58 seconds longer than 19 tropical years. Meton of Athens, in the 5th century BC, judged the cycle to be a whole number of days, 6,940.^[3] Using these whole numbers facilitates the construction of a lunisolar calendar. Depiction of the 19 years of the Metonic cycle as a wheel, with the Julian date of the Easter New Moon, from a 9th-century computistic manuscript made in St. Emmeram's Abbey (Clm 14456, fol. 71r) For example, by the 19-year Metonic cycle, the full moon repeats on or near Christmas between 1711 and 2300.^ ^ A small horizontal libration is visible comparing their appearances. A red color shows full moons that are also lunar eclipses. A tropical year (about 365.24 days) is longer than 12 lunar months (about 354.36 days) and shorter than 13 of them (about 383.90 days). In a Metonic calendar (a type of lunisolar calendar), there are twelve years of 12 lunar months and seven years of 13 lunar months. In the Babylonian and Hebrew lunisolar calendars, the years 3, 6, 8, 11, 14, 17, and 19 are the long (13-month) years of the Metonic cycle. This cycle forms the basis of the Greek and Hebrew calendars. A 19-year cycle is used for the computation of the date of Easter each year. The Babylonians applied the 19-year cycle from the late sixth century BC.^[4] According to Livy, the second king of Rome, Numa Pompilius (reigned 715–673 BC), inserted intercalary months in such a way that "in the twentieth year the days should fall in with the same position of the sun from which they had started".^[5] As "the twentieth year" takes place nineteen years after "the first year", this seems to indicate that the Metonic cycle was applied to Numa's calendar. Diodorus Siculus reports that Apollo is said to have visited the Hyperboreans once every 19 years.^[6] The Metonic cycle has been implemented in the Antikythera mechanism which offers unexpected evidence for the popularity of the calendar based on it.^[7] The (19-year) Metonic cycle is a lunisolar cycle, as is the (76-year) Callippic cycle. An important example of an application of the Metonic cycle in the Julian calendar is the 19-year lunar cycle insofar as provided with a Metonic structure. In the following century, Callippus developed the Callippic cycle of four 19-year periods for a 76-year cycle with a mean year of exactly 365.25 days. Around AD 260 the Alexandrian computist Anatolius, who became bishop of Laodicea in AD 268, was the first to devise a method for determining the date of Easter Sunday. However, it was some later, somewhat different, version of the Metonic 19-year lunar cycle which, as the basic structure of Dionysius Exiguus' and also of Bede's Easter table, would ultimately prevail throughout Christendom, at least until in the year 1582, when the Gregorian calendar was introduced. The Coligny calendar is a Celtic lunisolar calendar using the Metonic cycle. The bronze plaque on which it was found dates from c. AD 200, but the internal evidence points to the calendar itself being several centuries older, created in the Iron Age. The Runic calendar is a perpetual calendar based on the 19-year-long Metonic cycle. It is also known as a Rune staff or Runic Almanac. This calendar does not rely on knowledge of the duration of the tropical year or of the occurrence of leap years. It is set at the beginning of each year by observing the first full moon after the winter solstice. The oldest one known, and the only one from the Middle Ages, is the Nyköping staff, which is believed to date from the 13th century. The Bahá'í calendar, established during the middle of the 19th century, is also based on cycles of 19 solar years. Hebrew calendar A Small Maḥzor (Hebrew מחזור, pronounced [maχˈzor], meaning "cycle") is a 19-year cycle in the lunisolar calendar system used by the Jewish people. It is similar to, but slightly different in usage from, the Greek Metonic cycle (being based on a month of 29+13753⁄25920 days, giving a cycle of 6939+3575⁄5184 ≈ 6939.69 days^[12]), and likely derived from or alongside the much earlier Babylonian It is possible that the Polynesian kilo-hoku (astronomers) discovered the Metonic cycle in the same way Meton had, by trying to make the month fit the year. The Metonic cycle is the most accurate cycle of time (in a timespan of less than 100 years) for synchronizing the tropical year and the lunar month (synodic month), when the method of synchronizing is the intercalation of a thirteenth lunar month in a calendar year from time to time. The traditional lunar year of 12 synodic months is about 354 days, approximately eleven days short of the solar year. Thus, every 2 to 3 years there is a discrepancy of 22 to 33 days, or a full synodic month. For example, if the winter solstice and the new moon coincide, it takes 19 tropical years for the coincidence to recur. The mathematical logic is this: • A tropical year lasts 365.2422 days. a span of 19 tropical years (365.2422 × 19) lasts 6,939.602 days That duration is almost the same as 235 synodic months: • A synodic month lasts 29.53059 days. a span of 235 synodic months (29.53059 × 235) lasts 6,939.689 days Thus the algorithm is correct to 0.087 days (2 hours, 5 minutes and 16 seconds). For a lunisolar calendar to 'catch up' to this discrepancy and thus maintain seasonal consistency, seven intercalary months are added (one at a time), at intervals of every 2–3 years during the course of 19 solar years. Thus twelve of those years have 12 lunar months and seven have 13 months.
{"url":"https://www.wikiwand.com/en/articles/Metonic_cycle","timestamp":"2024-11-13T03:06:08Z","content_type":"text/html","content_length":"278128","record_id":"<urn:uuid:fe9acfd9-3175-41e5-bd1d-4f93566af936>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00055.warc.gz"}
Features of the connection between reflection and planning in primary school age UDC 740 Publication date: 03.04.2023 International Journal of Professional Science №4-2023 Features of the connection between reflection and planning in primary school age Abstract: The article reflects the content of the study related to the study of the nature of the relationship between reflection and planning in solving problems. 32 students of the fourth grade solved the problems of the author's methodology "Game of repetition" to determine the type of planning (holistic or partial) and the author's methodology "Interchange of letters" to determine the type of reflection (meaningful or formal). It was found that not all students with holistic planning carry out meaningful reflection when solving problems. However, all students with meaningful reflection solve problems with the help of holistic planning. Keywords: fourth-graders, meaningful and formal reflection, integral and partial planning, spatial-combinatorial tasks, “Game of repetition” and “Interchange of letters” methods. In recent years, reflection has become the subject of study in a number of studies. The most active and multifaceted reflection is studied in works devoted to the identification of mechanisms for creative problem solving. In particular, two types of reflection were distinguished — intellectual and personal, each of which is realized in the appropriate forms [6]. In line with this approach, the solution of problems is considered mainly from the productive and procedural side. In other words, the researchers here find out the originality of the operational and procedural characteristics that contribute to the successful solution of problems. The other side of problem solving, the content-cognitive one, attracts the attention of psychologists to a lesser 1.2. Types of cognitive reflection Because of this, reflection as a person’s consideration of the grounds for his own actions is studied, in our opinion, not concretely enough. We mean the fact that the solution of the problem can be carried out in a generalized and non-generalized way. This is due to the fact that a person uses different reference points when constructing an action, which serve as the basis for the methods of his actions. In one case, the grounds are such landmarks that determine the success of an action (or task solution) only in given, particular conditions. In another case, the grounds are such landmarks that determine the success of the action in a wide range of different conditions. Therefore, from our point of view, an indication that, when solving a problem, a person clarified the reasons for his own actions, is not specific enough: it is not clear what reasons, particular or generalized, a person considered when solving a problem. When solving problems, it is expedient to single out two types of reflection based on the content-cognitive basis. If a person, when solving a problem, relies on particular, situational guidelines and considers them the grounds for his action, then such reflection should be considered formal, since although a person is aware of the grounds for his actions, these grounds are single and situational, i.e., according to In essence, these are pseudo-foundations, foundations only in form. If a person, when solving a problem, relies on generalized, extra-situational guidelines and considers them the basis of his action, then such reflection should be considered meaningful, since such guidelines are a necessary condition for successfully solving externally different, but internally related tasks, i.e. are contained in various particular circumstances. It should be noted that both formal and substantive reflection is, in fact, the process of correlating a person’s mode of action with the features the conditions under which this action is to be performed. This correlation should carried out internally, «in the mind». Such a requirement follows from the fact that the noted correlation occurs in relation to the mode of either an accomplished or an action that has not yet occurred. Faced with a task, a person either tries to use the method already known to him (correlation of the present conditions with the previous method), or tries to develop a new method (correlation of the present conditions with the possible method). 1.2. Types of planning In studies specifically devoted to the internal plan of action, the ability to act «in the mind» [1], planning [5], it was shown that when solving problems, a person mainly uses two types of plans — partial and holistic. In the first case, a person plans the next steps, or links, of his decision after he has completed the previous ones. In other words, the planning of an action and its execution alternate. In the second case, subsequent steps are planned before the previous ones are performed, and the previous steps are planned based on the expected content of the subsequent ones. Here, a person plans all the solutions to the problem at once, the entire sequence of operations in the required action. Holistic planning is characterized by a higher level of internal plan of action, a developed ability to act «in the mind.» Note that such planning in relation to solving the same problems appears in children at a relatively late age, and among peers it takes place in children with more intellectually developed 2.Materials and methods. On the basis of the stated provisions on the characteristics of the types of reflection and types of planning, an experimental study was undertaken. The subjects were 32 fourth grade students. The purpose of the study was to establish the relationship between the types of reflection and types of planning in solving problems. The achievement of this goal was mediated by the following hypotheses: 1) if schoolchildren have a holistic planning, it is more likely that they will function when solving problems of meaningful reflection; 2) in the presence of partial planning, it is more likely to function in solving problems of formal reflection; 3) when meaningful reflection is functioning, the presence of holistic planning is more likely; 4) with the functioning of formal reflection, the presence of a partial level of planning is more likely. The study included two stages: at the first stage, under the conditions of the frontal experiment, the level of the internal action plan formed by schoolchildren was determined, and at the second, also under the conditions of the frontal experiment, the type of reflection that functioned in solving the proposed tasks was determined. At the first stage of the study, we used the methodology developed by us, which included the tasks of “Game of repetition” [1]. 2.1. Experiments to determine the formation of planning The group session was organized as follows. 1. At the beginning of the lesson, students are given blank sheets of paper on which they need to mark the date and last name, and then write down the solution to the problems. 2. Before class or while the students are signing the sheets, the psychologist draws playing fields on the blackboard, putting down numbers on the left and letters below (Fig. 1): Fig.1. Playing fields 3. The names of the cells of the playing field (its notation) are explained to the students: «… the two lower cells are called A1 and B1, and the two upper ones are A2 and B2 …» and their assimilation is checked with the help of appropriate questions. 4. The cells of both fields are filled in: in the initial location (on the left) pairs of identical figures are placed, in the final (on the right) — pairs of identical numbers (Fig. 2): Fig.2. Task condition in one action 5. The organizer says: “In this problem, you need to rearrange the figures once so that the same ones are in the same cells. which are the same numbers. To do this, you need to mentally swap some two figures at the same time. After evaluating the permutation options proposed by the students, the organizer showed on the board on the right side how to write down the solution of problems in one action (Fig. 3). Fig.3. Solving the problem in one action At the same time, the meaning of the solution found was explained: “… if the circle from A1 is interchanged with the triangle from B2, then the same figures will end up in the same cells where the same numbers are: two triangles will be where there are two sevens, and two circles where there are two fours . Here the solution must be written as follows: A1 — B2. And if a triangle from A2 is interchanged with a circle from B1, then the triangles will be where the fours are, and the circles where the sevens are, and the solution is written as follows: A2 — B1 … «. 6. Then the condition of the problem is displayed on the board in two actions (Fig. 4): Fig.4. Task condition in two actions “In this problem, you need to find two actions so that the same figures are in the same cells where the same numbers are.” fter discussing the options for the first and second actions proposed by the children, the organizer wrote down one of the solutions: 1) A1-C1, ) B1-A2 (Fig. 5), explained its meaning: “… first you can swap the circle and square in corner bottom cells, then — a square and a triangle obliquely, diagonally … » and pointed out that » … if there are several solutions to the problem, as in this one, then you need to write only one option …”. Fig.5. Solving the problem in two actions 7. 7. Next, the students were given numbered forms (sheets with the conditions of 20 tasks, — Fig. 6) and were asked to write the number of the form next to the last name on a blank sheet. Fig.6. Sheet with conditions 20 tasks 8. Then the organizer characterized the location of the tasks on the form: “First, the conditions of the tasks are placed in one action, — No. 1, 2, 3, 4. Then the tasks in two actions — No. 5, 6, 7, 8, 9, 10, in three actions — Nos. 11, 12, 13, 14, 15, 16, in four actions — Nos. 17, 18 and in five actions — Nos. 19 and 20 … «and once again formulated the goal of solving each problem:» … it is necessary for the specified number of actions to place the same figures in the same way as the same numbers are placed … «. Further, he explained: “Problems solve in a row, starting with the first; you don’t need to copy the conditions of the tasks: on the sheet with the last name, just write the number of the task and next to it, use the names of the cells to write down one, two or three actions, as we did on the board; look for only one solution … «. After that, he specifically emphasized: “… you can’t make any notes on the form with tasks, as well as on various drafts, pieces of paper, on the table, etc .; tasks need to be solved only mentally, in the mind, and the invented solution should be written down on a sheet with a surname, indicating the number of the task; act carefully and on your own.” 9. It should be noted that the introductory part of the lesson (instruction) takes (depending on age) 10-15 minutes, and exactly 30 minutes should be allotted for independent problem solving in order to obtain comparable results for different groups of students. The correspondence of these tasks to the objectives of the study was that some of them (tasks in two and three actions) could be solved with only a partial level of planning, i.e. the structure of these tasks made it possible to plan each executive action and perform it separately out of touch with the rest. Other tasks (tasks in four and five actions) cannot be solved using this level of planning: it was necessary to outline all the actions as a whole and only after developing a general plan to carry out these actions. These tasks were constructed in such a way that at first glance, different options for the first action seemed correct, but in fact only one option was correct. 2.2. Experiments to determine the type of reflection At the second stage of the study, we used the previously developed two-part experimental situation [4]. In the first part of this situation, the subject was asked to solve three (four) tasks related to two classes (or two subclasses of the same class) [7]. In the second part, after the successful solution of the problems, it was proposed to group them, to generalize. If the subject combined tasks on the basis of similarity and external features of their conditions or, conversely, considered all tasks to be different based on differences in these external features, then it was believed that he pointed to the external guidelines of his actions. This was considered as evidence of functioning in solving problems of formal reflection, consideration of the external grounds for one’s action. If the subject united the tasks on the basis of their belonging to the same class (or subclass), justifying this by the fact that they are solved in the same way, then it was believed that he pointed to the internal guidelines of his action, essential grounds. This was evidence of the functioning of meaningful reflection in solving these problems, consideration of the initial relations that determine the construction of a successful action. As a specific technique, the technique of «Interchange of letters» was used, which included two training and three main tasks. Training tasks 1. R D W —— W R D (two actions) 2. M Y D —— Y D M (two actions) Main tasks 1. P S V K —- S V K P (three actions) 2. R M B N —- B R N M (three actions) 3. A O U E —- O U E A (three actions) 1. All main tasks are similar. 2. All the main tasks are different. 3. The first and second main tasks are similar, but the third is different from them. 4. The first and third main tasks are similar, but the second is different from them. 5. The second and third main tasks are similar, but the first is different from them. * * * The group experiments on the material of this task was carried out as follows. First, each student in the class was given a sheet with the above task: two training and three main tasks. Here it is necessary to note the following. In reality, many different task options were used in the class, since it was enough to change the consonants in the conditions of the tasks. This allowed students to solve problems more independently. After the students indicated their names on the sheets, the experimenter on the board explained the rules for moving letters on the material of such a problem situation: K R S —- R K S The schoolchildren were told that in this problem they need to move the letters on the left so that they are located like the letters on the right. At the same time, it was explained that for one movement, one move in these problems, a mutual permutation of any two letters is taken. In this problem, you need to swap the letters K and R. Then the solution of the two-way problem was analyzed: P M W T —— W T P M The experimenter explained that in this task it is required to perform two mutual movements of the letters. These movements must be done mentally with the letters located on the left. The arrangement of letters on the left is called the initial, and on the right — the final, required. At the same time, it was pointed out that the meaning of the problem in two moves is that the letters of the initial location, after two mental movements, are in the required location. Here the experimenter pointed out that, having mentally made the first movement, it is necessary to write down the result obtained, i.e. the arrangement of all letters after one mutual exchange of places. You also need to do after the second mental movement. In general, the solution of the two-way problem is written as follows: 1) P T V M; 2) V T P M. Then the experimenter showed how the following problem could be solved in two moves: B M T — T B M Next, the students were asked to solve the first training problem on the worksheet. The experimenter checked its solution and analyzed the errors, after which he proposed to solve the second training problem, the solution of which was checked again. Only after making sure that the training problems were solved and written down correctly, the experimenter allowed to start solving the main At the same time, he usually reminded that their solution should be recorded in the same way as the solution of the training ones. The experimenter did not check the main tasks. After their decision, the students were asked to carefully read five opinions about the main tasks, think and write on the back of the sheet the number of that opinion (only one of the five is required) with which the student most agrees. Next to the number of the opinion, it was necessary to briefly explain why the student agrees with this particular opinion, why he considers it the most correct. Thus, the group experiment consisted in the fact that at first the experimenter on the blackboard explained to the schoolchildren the meaning of the proposed tasks, showed the form for recording their solution, and checked the solution of the training tasks. Then he offered to solve the main tasks and, after solving them, indicated that one opinion out of five should be chosen and briefly It should be said that the three main tasks belong objectively to two subclasses of the same class. The construction and solution of these problems is based on the ratio of places occupied by the same letters in the initial and required positions. This ratio determines such a solution to problems when the movement of one letter must be carried out several times, and the remaining letters — At the same time, the first and third tasks were selected so that they belonged to one subclass of tasks of the specified class, and the second, to another subclass. In the first and third problems, the ratio of the places of letters in both locations was identical: the second, third, and fourth letters (counting from left to right) moved in the initial location of these problems as a result of three actions, respectively, to the first, second, and third places, and the letter occupying the extreme left place, moved as a result of three actions to the extreme right place. A different relationship between the places of the letters underlay the construction and solution of the second problem. It was built in such a way that the letters that were in the initial position next to each other would, as a result of three movements, not be in neighboring places. And vice versa, the letters that were in the initial location not in neighboring places, as a result of three actions, began to be located side by side. As a result of solving problems, some children recognized the first opinion as correct, because “everywhere you need to rearrange the letters”, or “in all problems the letters”, or «There are three actions in each task.» Other children believed that the second opinion was correct, because “there are different letters everywhere.” Several schoolchildren chose the third opinion because “»the third problem has vowels, but the other problems don’t.” Some of the children chose the fourth opinion, because in “the first and third tasks the letters are rearranged in the same way, but in the second in a different way”, “in the first and third tasks the letters go in a row, and in the second — in different ways”, “… in the first and in the third task, adjacent letters are changed, and in the second — different ones … «. Children those who chose the first three opinions carried out formal reflection when solving problems, and the children who chose the fourth opinion carried out meaningful reflection. 3. Results. As a result of the experiments, it was found that 40.3% of children carried out holistic planning when solving problems and, accordingly, 59.7% of children — partial planning and that 24.8% of children carried out meaningful reflection and, accordingly, 75.2% of children — formal reflection. Analysis of the results based on the study of protocols for solving problems by both methods made it possible to establish the following. Firstly, not all schoolchildren who demonstrated holistic planning when solving the problems of the «Game of repetition» carried out meaningful reflection when solving the problems of the «Interchange of Letters» methodology. Thus, among 40.3% of children with holistic planning, there were 24.8% of children with meaningful reflection and, accordingly, 15.5% with formal reflection. Secondly, all the schoolchildren who demonstrated partial planning when solving the problems of the “Repetition Game”, 59.7%, then carried out formal reflection. Thirdly, all schoolchildren who carried out a meaningful reflection — 24.8%, had a holistic level of a formed internal action plan. Fourthly, a part of schoolchildren who demonstrated holistic planning — 15.%, carried out formal reflection and, accordingly, a part of schoolchildren who carried out formal reflection — 15.5%, demonstrated holistic planning. Thus, the hypotheses put forward earlier can be considered legitimate. 4. Discussion of the results and conclusion. From the obtained results it follows that the internal plan of action, in particular at the level of holistic planning, is a necessary condition for the functioning of meaningful reflection. But at the same time, it is also an insufficient condition, since, as it turned out, not all schoolchildren who demonstrated holistic planning in solving problems carried out meaningful reflection. The elucidation of the mechanism of this insufficiency, the establishment of additional conditions conducive to the implementation of meaningful reflection, is the task of our further research. As such additional conditions, the following can be assumed: the form of action when presenting and solving problems, the complexity of tasks, in particular the number of executive actions, the age of the subjects. In our theoretical [3] and experimental studies [2], we considered the question of the relationship between the form of action in which schoolchildren solve problems and the type of orientation in the conditions of the problem, on the basis of which its solution is deployed. It was shown, on the one hand, that empirical and theoretical types of orientation in the conditions of problems take place when solving problems in different forms of action: object-effective, visual-figurative and verbal-sign. In other words, regardless of the form of action in which the student needs to solve the problem, he can focus on external, directly perceived features of the conditions of the problem (i.e., orient empirically), or on the internal, essential relationships of its conditions, on generalized guidelines (i.e., act theoretically). On the other hand, it was also shown that, other things being equal (for example, the age of schoolchildren, the content of curricula, teaching methods), there is a connection between the type of orientation in the conditions of a task and the form of action in which it is proposed to solve it. It was found that the theoretical approach to solving problems (associated with the allocation of essential relations, generalized landmarks in the conditions of problems) is easier to implement when solving problems in a subject-effective form than when solving them in a visual-figurative and even more verbally — sign form. It can be assumed that for children who did not find meaningful reflection in the present study, the form of action in which it was proposed to solve problems was too abstract. In other words, the fact that the tasks of the «Interchange of Letters» methodology were proposed to be solved in a visual-figurative form could prevent these children from carrying out meaningful reflection. In further work with this technique, it will be expedient, if children carry out formal reflection, to offer them to solve the same problems in a more concrete, objectively effective form. Another condition that can also affect the fact of the implementation of meaningful reflection is, according to our assumption, that depending on the complexity of the task, in particular on the number of executive actions necessary for its successful solution, it is easier or more difficult for a student to single out its conditions are essential relations. In other words, it is possible that for children who, when solving the problems of «games of repetition», showed a holistic level of planning and, consequently, the ability to mentally cover the entire solution of the problem, all executive actions in general, the tasks of the «mutual permutation of signs» technique had too many executive actions. These tasks were solved by them without highlighting generalized landmarks, which, in turn, prevented the implementation of meaningful reflection. In further experiments with the «Interchange of letters» method, it is necessary to offer schoolchildren tasks of varying degrees of complexity, in particular, in terms of the number of executive actions, in order to find a level where it will be feasible for them to highlight significant relationships in the conditions of tasks. The third assumption is that it is generally characteristic for fourth-graders that the integral level of planning executive actions in solving problems has not yet acquired the character of a general ability — such a mental formation that functions in solving various problems. To test this assumption in further research, it will be necessary to conduct experiments with schoolchildren of other ages. 1. Zak A.Z. Development of the ability to act “in the mind” among schoolchildren of grades I–X. Questions of Psychology. 1983. No 1. P. 47–62 [in Russian]. 2. Zak A.Z. (1984). The development of theoretical thinking in younger students. M., Pedagogy [in Russian]. 3. Zak A.Z. Typology of the dynamics of the thought process // Questions of Psychology. 1986. No 5. P.72– 84 [in Russian]. 4. Zak A.Z. (2010). Development and diagnostics of thinking of teenagers and high school students. M.: Obninsk – SOTSIN [in Russian]. 5. Isaev E.I. Planning as a central component of theoretical thinking // Psychological science and education. 2010. Vol. 2. No 4. P. 34–41 [in Russian]. 6. Semenov I.N. Research directions of innovative psychology of reflection in the higher school of economics // Psychology. Journal of the Higher School of Economics. 2012. Vol. 9. No 3. P. 37–57 [in
{"url":"http://scipro.ru/article/04-04-2023","timestamp":"2024-11-03T12:51:48Z","content_type":"text/html","content_length":"91281","record_id":"<urn:uuid:d040ea57-b97e-44aa-aac3-f327ed5e3130>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00623.warc.gz"}
EpiRisk is a computational platform designed to allow a quick estimate of the probability of exporting infected individuals from sites affected by a disease outbreak to other areas in the world through the airline transportation network and the daily commuting patterns. It also lets the user to explore the effects of potential restrictions applied to airline traffic and commuting flows. Based on the number of infected individuals detected in one or more areas of the world, the platforms estimates two main quantities. · Exported cases: the tool computes the probability P(n) of exporting a given number of cases n from the origin of the disease outbreak. In order to calculate the distribution P, the average time from exposure to symptoms onset and inability of traveling of infected individuals must be provided. · Relative importation risk: for each location Y the platform evaluates the probability P(Y) that a single infected individual is traveling from the index areas to that specific destination Y. In other words, given the occurrence of one exported case, P(Y) is the probability that the disease carrier will appear in location Y, with respect to any other possible location. By interacting with the map, the user can inspect the relative risk and the probability distribution of imported cases for single locations. In addition, the computed results are downloadable in commonly used data formats and as a high-resolution image of the risk map. The airline transportation data used in the platform are based on origin-destination traffic flows from the OAG database that are aggregated at specific time and spatial scales by the GLEAM project. Commuting flows are derived by the analysis and modeling of data for more than 5,000,000 commuting patterns among 78,000 administrative regions in five continents. A manuscript detailing the algorithms devised to compute the estimates provided by the platform is under preparation. Epirisk is a not-for profit platform: the results generated by the tool can be shared in compliance to the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Top destinations ranked according to the relative risk of case importation. Exported cases are available only if the number of infected individuals (total value or detailed distribution) and the time to onset of symptoms are known.
{"url":"https://epirisk.net/","timestamp":"2024-11-03T01:25:37Z","content_type":"text/html","content_length":"10990","record_id":"<urn:uuid:0344e2b3-e51f-4e33-9acc-e9298a91fe43>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00363.warc.gz"}
Subject Guides: Math Survival Guide: Geometry & Trigonometry Circles (Khan Academy) - Opens in a new window Video tutorials on circle basics, arc measure, arc length, radians, sectors, inscribed angles, inscribed shapes problem solving, properties of tangents, area of an inscribed triangle, standard equation of a circle, and expanded equation of a circle.
{"url":"https://algonquincollege.libguides.com/math-survival-guide/geometry-trigonometry","timestamp":"2024-11-05T20:20:04Z","content_type":"text/html","content_length":"75215","record_id":"<urn:uuid:7d74f302-d684-4e27-8677-28731ed28210>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00626.warc.gz"}
decrease the circulating load ball mill Ball-mill power, 108 kw. Ball load, 28,000 lb. of 3- and 2-in. balls. Feed, minus ¼-in. material. ... at 19.8 r.p.m. the circulating load became so large, over 60 T. per hr., that operation had to be discontinued, for the ... it will be necessary to maintain carefully the proper ball charge in each mill. A slight increase or decrease in the ...
{"url":"https://www.ploversguesthouse.co.za/32277_decrease_the_circulating_load_ball_mill.html","timestamp":"2024-11-14T00:35:22Z","content_type":"text/html","content_length":"58133","record_id":"<urn:uuid:be34ce64-aef2-4d20-9316-e807a01f330e>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00497.warc.gz"}
For electrical power and other forms of power, see Energy. Power is a key concept in physics, describing how quickly work is done or energy is transferred. It's about measuring the speed at which energy is used to perform a task or the amount of energy consumed within a certain timeframe. The standard measurement for power is the Watt (W), but there are other ways to measure it too, like horsepower (HP), calories per hour (cal/hr), and foot-pounds per minute (ft-lb/min). The basic formula to calculate power is P=W/t, where P stands for power, W for work, and t for time. Another way to look at it is P=E/t, where E represents energy. This gives us a way to quantify and understand the efficiency of processes and machines in our daily lives and in the scientific world. In a physics term, power is the rate at which work takes place or energy is transmitted. It can also be defined as the amount of energy needed or used to complete an activity over a period of time. The SI unit for power is the the Watt or "W". Other non SI units include: • horsepower "HP", • calories per hour, "cal/hr", and • foot pounds per minute "ft-lb/min". The equation for power is ${\displaystyle P=W/t}$ where P is power, W is work and t is time. A second equation for power is ${\displaystyle P=E/t}$ where E is energy and t is time.
{"url":"https://www.appropedia.org/Power","timestamp":"2024-11-14T23:34:46Z","content_type":"text/html","content_length":"91889","record_id":"<urn:uuid:430df12d-c027-4f78-b4de-2540b2c2456c>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00584.warc.gz"}
The Effects of Equivalence Based Instruction on Mathematical Problem-Solving 2024 Theses Doctoral The Effects of Equivalence Based Instruction on Mathematical Problem-Solving In 2 experiments, I studied the effects of an Equivalence Based Instruction (EBI) math intervention on the emergence of untaught selection responses and abstraction to production responses. In Experiment I, using a multiple baseline design, I implemented the EBI intervention among a group of 17 first grade participants with varying levels of math prerequisites and verbal behavior development. The intervention sought to develop a comprehensive relational network for the part-whole relations involved in addition and subtraction operations. This intervention, informed by Verbal Behavior Development Theory, Relational Frame Theory, and research on math proficiency, utilized visual and verbal stimulus presentations of fact families to establish the concepts underlying addition and subtraction. The key concept was that of a fact-family, in which two parts are equivalent to the whole and the whole is equivalent to the sum of its parts. The goal of the EBI intervention was to establish a relational network involving pictures, number bonds, sentences, and equations such that the part-whole relations involved in fact-families could be related to both addition and subtraction. The EBI intervention consisted of 3 phases to build this relational network. In Phase I, participants learned to match sentences describing complete fact-families with pictures and number bonds. In Phase II, participants learned to match sentences describing incomplete fact-families with number bonds. In Phase III, participants learned to match incomplete number bonds with addition and subtraction equations presented in various topographies. Before and after each phase of the intervention, I assessed the degree to which participants acquired untaught responses as well as their performance on production, or problem-solving, probes. Results revealed that the combinatorially entailed response (i.e., matching pictures with number bonds) emerged for all participants, while the mutually entailed response (i.e., selecting sentences) emerged for only some participants. Participants generally improved their problem-solving following the intervention; however, further examination was needed to supplement initial visual analyses of the graphs. Accordingly, I conducted a series of statistical analyses to evaluate individual and group-level differences in responding during the EBI intervention. These analyses also sought to reveal whether math prerequisites or level of verbal behavior development were associated with performance during Phases I, II, and III. Results showed that the EBI intervention was associated with standardized math performance and problem-solving accuracy, and results suggested that verbal behavior development has a meaningful relation with rate of learning. In Experiment II, I aimed to evaluate the educational significance of the repertoires involved in the EBI intervention by conducting a correlational study with 32 additional first grade participants. This experiment revealed that the response-types targeted in Phase III of the intervention were significantly associated with standardized math performance. • Shapiro_columbia_0054D_18492.pdf application/pdf 4.66 MB Download File More About This Work Academic Units Thesis Advisors Fienup, Daniel Ph.D., Columbia University Published Here July 3, 2024
{"url":"https://academiccommons.columbia.edu/doi/10.7916/esrt-ry48","timestamp":"2024-11-11T05:24:54Z","content_type":"text/html","content_length":"24937","record_id":"<urn:uuid:2b96dc93-1136-436f-a14a-741e537cfb2b>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00126.warc.gz"}
Surface plots represent the shape of the surface that is described by the values of three variables, X, Y, and Z. The values of the X and Y variables are plotted to form a horizontal plane. The values of the Z variable, create a vertical axis that is perpendicular to the X-Y plane. Combined, these three axes, form a three-dimensional surface. The surface plot in the following figure displays various depths of a lake. The dimensions of the lake are plotted on the X-Y axes. The Z variable is plotted as the third dimension. The coordinates of each point correspond to the values of the three numeric variable values in an observation from the selected input data set. With the PLOT statement, you can do the following actions:
{"url":"http://support.sas.com/documentation/cdl/en/graphref/65389/HTML/default/p0ak5dnska54kqn1qzegyp0l25ty.htm","timestamp":"2024-11-09T00:41:32Z","content_type":"application/xhtml+xml","content_length":"18604","record_id":"<urn:uuid:5cb925e1-1835-4a00-91fc-e38f8b86f89f>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00701.warc.gz"}
Is there a math library similar to python mpmath in rust language? want to build some mathematical tools, but rust doesn't seem to be able to support floating point numbers larger than f64. the f128 implementation of the new standard library is just garbage to the point of being offensive. so I want to find a math library similar to python mpmath. import time import mpmath # Define a function to calculate π using the Bellard formula def bellard_pi(p): mpmath.mp.dps = 130 # Set decimal precision to 130 digits pi = mpmath.mpf(0) # Initialize π as 0 using mpmath high-precision float sixteen_power_k = mpmath.mpf(1) # Initialize 16^k for k in range(p): # Iterate over p terms # Calculate the current term using mpmath operations and reduced intermediate growth term = (mpmath.mpf(4) / (mpmath.mpf(8) * k + mpmath.mpf(1)) - mpmath.mpf(2) / (mpmath.mpf(8) * k + mpmath.mpf(4)) - mpmath.mpf(1) / (mpmath.mpf(8) * k + mpmath.mpf(5)) - mpmath.mpf(1) / (mpmath.mpf(8) * k + mpmath.mpf(6))) pi += term * sixteen_power_k # Add the current term to π sixteen_power_k /= mpmath.mpf(16) # Update 16^k for the next iteration return pi # Return the calculated π value # Measure the time taken for each method start_time = time.time() # Record start time bellard_pi_value = bellard_pi(256) # Calculate π using 256 terms of Bellard formula bellard_time = time.time() - start_time # Calculate time taken # Print results print("\nBellard Formula:") print(f"Value: {bellard_pi_value}") # Output the calculated π value print(f"Time: {bellard_time:.6f} seconds") # Output the time taken # Calculate relative errors bellard_error = abs(bellard_pi_value - mpmath.pi) / mpmath.pi * 100 # Relative error for Bellard formula # Print relative error results print("\nRelative Errors:") print(f"Bellard Formula: {mpmath.nstr(bellard_error, 6)}%") # Output relative error It would be ideal if it could be implemented as intuitively as Bellard's formula. The f128 primitive type in the standard library is still work in progress. From the docs: If you have ideas and spare time on your hands, I'm sure you're welcome to help out: Tracking Issue for `f16` and `f128` float types · Issue #116909 · rust-lang/rust · GitHub 1 Like Arbitrary precision arithmetic is probably the keyword you are looking for. Stroll on this crates.io page to pick whatever you want. For example, use bigdecimal::BigDecimal; use std::time::Instant; fn bellard_pi(terms: usize) -> BigDecimal { let mut pi = BigDecimal::from(0); let mut sixteen_power_k = BigDecimal::from(1); for k in 0..terms { // Calculate each term of the Bellard formula let term = BigDecimal::from(4) / (BigDecimal::from(8 * k as u64 + 1)) - BigDecimal::from(2) / (BigDecimal::from(8 * k as u64 + 4)) - BigDecimal::from(1) / (BigDecimal::from(8 * k as u64 + 5)) - BigDecimal::from(1) / (BigDecimal::from(8 * k as u64 + 6)); pi += term * &sixteen_power_k; // Add the current term to π sixteen_power_k = sixteen_power_k / BigDecimal::from(16); // Update 16^k for next iteration pi // Return the calculated π value fn main() { let start_time = Instant::now(); // Record start time let bellard_pi_value = bellard_pi(256); // Calculate π using 256 terms of Bellard formula let bellard_time = start_time.elapsed(); // Calculate time taken // Print results println!("\nBellard Formula:"); println!("Value: {}", bellard_pi_value); // Output the calculated π value println!("Time: {:?}", bellard_time); // Output the time taken and run it with RUST_BIGDECIMAL_DEFAULT_PRECISION=1000 cargo run 1 Like If I have time off from work, I'll come and help. At present, the simplest and most comprehensive implementation of F128 floating point numbers is the Python implementation. I think this is a reasonable reference direction. Currently I am testing the robustness of BigDecimal and Malachite, thank you for your response. rug is a wrapper around the standard GNU libraries, so it's complete but full GPL. dashu implements a bunch of big number representations, including floats in dashu-float. These implement the popular "vocabulary" crate num-traits so if you have an implementation over those you can port to any other representation. I took a stab, but I got a value of pi of 11... I think I took a wrong turn somewhere 2 Likes A simple implementation of f64 floating point.
{"url":"https://users.rust-lang.org/t/is-there-a-math-library-similar-to-python-mpmath-in-rust-language/120467","timestamp":"2024-11-01T22:36:16Z","content_type":"text/html","content_length":"37209","record_id":"<urn:uuid:9e938423-2fe4-4a04-b53d-e588239ed74d>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00509.warc.gz"}
One post tagged with "fstar" | Songlark This post describes early work embedding Lustre-style control systems in F*, and shows a demo of it in action in a real (simple) controller. For safety-critical systems such as the braking controllers in cars, we want strong assurances that the software that controls the system is correct. One way to achieve such assurances is to write them in a high-level language, such as Lustre, and formally prove that they satisfy a high-level specification. To build confidence that our program itself is correct, we can prove that our Lustre programs satisfy the specification using a model-checker such as Kind2. To build confidence that our compiled code faithfully implements our original program, we can use a verified compiler such as Unfortunately, however, there are a few practical issues with this approach: Kind2 and Vélus both use different dialects of Lustre, so it's not possible to use them both on the same input program without converting from one syntax to another. Vélus and Kind2 also support different feature subsets, because they have different objectives and priorities. Aside from practical issues of syntax, there is also the more theoretical issue of whether Kind2's proofs apply to Vélus's generated code. The two tools define the semantics of the language in very different ways: Kind2 translates input programs to transition systems, which are good for reasoning about but aren't the best for executing. Vélus, on the other hand, uses a coinductive semantics which is designed for proving compiler correctness. These two different semantics have no formal connection, which doesn't give us much confidence that they really agree. One option to build confidence that programs satisfy their specification would be to define a secondary semantics in Vélus, one which is better for reasoning about programs, and allow users to prove their programs correct manually in Coq. However, Coq has relatively little proof automation. Compared to the fully-automated proofs that Kind2 can produce, requiring manual Coq proofs would prohibit its use by many systems engineers. Instead, I'd like to introduce Pipit, an embedded language for implementing and verifying control systems in F*. The goal is to reuse F*'s excellent SMT-solver-based proof automation to automatically verify transition systems via k-induction, which is the same key method that Kind2 uses to perform its proofs. There is also an imperative subset of F* for which C code can be extracted, so we can translate our control systems to imperative code and generate C code to run on embedded systems. Pipit is a work-in-progress. Once the translation to transition systems and to imperative code have both been verified, we can be confident that any property we prove about the high-level program also applies to the generated imperative code. This is a reasonably strong guarantee, but it's not as strong as Vélus' guarantee that the generated assembly is correct, as the proof of correctness of F*'s C code generation is not mechanised yet. So far, I have proved soundness of the core part of the translation to transition systems for verification, but I still need to prove some additional features of the translation to transition systems and the translation to imperative code. Pipit only supports a small core language yet and doesn't have a nice front-end syntax, but in its current state I can define simple controllers, verify them, and generate C code to execute on a small embedded system. It's enough for a small demo. Plumbing a coffee machine I have a domestic coffee machine with a water reservoir. In normal use, the water reservoir must be manually filled with water from the tap. I wanted to plumb the reservoir to receive water directly from the tap, but I was concerned about flooding the kitchen if the tap somehow got stuck open. To ensure that the kitchen wouldn't flood, I decided to implement a small controller to open and close the tap. I have added a solenoid connected to the water mains, which I have mounted above the lid of the reservoir. The solenoid is normally-closed so that water cannot flow when the power is off. When power is applied, water flows from the mains into the reservoir. I have also added a float switch suspended from the lid of the reservoir, which allows the system to sense the water level. When the water goes above the level of the float switch, the switch turns off to indicate that the water level is sufficiently high. Finally, I have attached an "emergency stop" lever switch to the lid of the reservoir. When the lid is placed on the reservoir, the estop switch turns off; when the lid is removed, the estop switch turns on. The system has two safety controls to reduce the risk of flooding: firstly, if the emergency stop lever indicates that the lid is not attached to the reservoir, the controller closes the tap. Secondly, if the tap has been open for over a minute, the controller closes the tap. The control system is very simple: if the water level has been low for long enough, it opens the tap. If the level is high, if the emergency stop lever is on, or if the system is "stuck", then the tap is closed. Once the system becomes stuck, it stays stuck until you restart the microcontroller. To define the control system in Pipit, we first define a function called once to check if a signal has been true at any point in the past ("at least once"): let once (signal: exp) = recursive (fun once' -> signal || fby false once') This function introduces a recursively-defined stream called once', which is true if the input signal is true, or if the previous value of once' is true (fby false once'). The false in fby false once' means that if there is no previous value, as is the case at the very start of execution, it defaults to false. Using the once function, as well as a lastn t function that checks if a signal has been true for at least some window of history t, the controller looks like the following: // Timeouts let settle_time = 100 // one second, assuming the system runs at 100Hz let stuck_time = 6000 // one minute // Flags for bitfield let solenoid_flag = 1 let stuck_flag = 2 let controller estop level_low = // Try to turn the solenoid on if estop has been false and the water level // has been low for at least a second let sol_try = lastn settle_time (!estop && level_low) in // Consider the system to be stuck if, now or in the past, the solenoid has // been on for a minute let stuck = once (lastn stuck_time sol_try) in // Only actually turn the solenoid on if we're not stuck let sol_en = !stuck && sol_try in // Properties to be proved property "if estop then do not engage" (estop => !sol_en); property "if level high then do not engage" (!level_low => !sol_en); // Encode the two results as a bitfield as we don't support tuples yet let result = (if sol_en then solenoid_flag else 0) + (if stuck then stuck_flag else 0) in (The actual implementation is syntactically messier because the core language doesn't have a nice front-end yet; here I am presenting the "aspirational" syntax.) To prove that our controller satisfies the two (very simple) properties, we convert it to a transition system and prove it inductively. There is a bit of boilerplate here for the conversion, but the actual proof goes through automatically after applying the normalisation-by-evaluation tactic (tac_nbe) to simplify away the translation to a transition system. let controller_lts = system_of_exp (controller (XVar 0) (XVar 1)) let controller_prove (): Lemma (ensures induct1' controller_lts) = assert (base_case' controller_lts) by tac_nbe (); assert (step_case' controller_lts) by tac_nbe (); To generate C code, there is also a bit of boilerplate, but it's not too bad. The interface for the generated code has the usual reset and step functions for the controller: typedef struct Example_Compile_Pump_input_s bool estop; bool level_low; typedef struct Example_Compile_Pump_output_s bool sol_en; bool nok_stuck; void Example_Compile_Pump_reset(Example_Compile_Pump_state *stref); Example_Compile_Pump_step(Example_Compile_Pump_input inp, Example_Compile_Pump_state *stref); The implementation of the C code is surprisingly long for such a simple controller, but it works. My current translation to imperative code is very dumb and duplicates a lot of work, but this issue is fixable. (The example is called Example.Pump, but there is no pumping here at all, only solenoiding.) I have implemented the above on a microcontroller (a Raspberry Pi Pico) and attached it to my coffee machine. Here is a video of it in action. Future work I am happy to have Pipit working as a whole end-to-end system. We can implement a simple controller, prove some properties, and run them on a real embedded system, even though it's still very raw. I have verified the core of the translation to transition systems (which is used for proving systems correct), which means that any properties we prove on the transition system hold for the language's semantics. I'm confident that the rest of this translation can be verified, but first I'd like to focus on improving the language a bit. The examples I showed above use an "aspirational" syntax, as the real implementation uses de Bruijn indices with no support for named variables. Manually writing programs with de Bruijn indices is pretty awful. There are standard approaches to fix this, but I haven't implemented them yet. The language is also untyped: all expressions are represented by integer values, and boolean operations implicitly treat non-zero integers as true. Again, there are standard approaches to fix this, but I wanted to see the system working end-to-end before investing time into these more-standard "engineering" problems. Once I have improved the language and finished the verification of translation to transition systems, the obvious next step is to verify the translation to imperative code. This proof will give us confidence that the two translations agree, and that any properties we can prove really do hold on the executable code. This proof will be more challenging than verifying the translation to transition system, as there is a larger gap between the programming language's high-level semantics and the imperative code. I believe that this proof will be easier than the proof of correctness given by Vélus, the verified Lustre compiler, as the imperative subset of F* is still higher-level than the C that Vélus needs to generate. This smaller gap is a trade-off, however, as until F* itself is verified, we have a larger trusted computing base. Finally, we need more evaluation, which involves writing and verifying real safety-critical systems in Pipit — not just coffee machines. I am excited about the possibilities of writing control systems in Pipit with F* as a metalanguage, as I believe having a good metalanguage will be more expressive than traditional Lustre, without sacrificing the beauty and simplicity of Lustre. I also believe that F*'s support for both automatic and manual proofs will be useful for verifying larger control systems.
{"url":"https://songlark.net/blog/tags/fstar/","timestamp":"2024-11-05T18:16:17Z","content_type":"text/html","content_length":"45521","record_id":"<urn:uuid:226bee02-208a-4d5c-8e0d-fc504f743e87>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00397.warc.gz"}
FuzzyResampling: Resampling Methods for Triangular and Trapezoidal Fuzzy Numbers The classical (i.e. Efron's, see Efron and Tibshirani (1994, ISBN:978-0412042317) "An Introduction to the Bootstrap") bootstrap is widely used for both the real (i.e. "crisp") and fuzzy data. The main aim of the algorithms implemented in this package is to overcome a problem with repetition of a few distinct values and to create fuzzy numbers, which are "similar" (but not the same) to values from the initial sample. To do this, different characteristics of triangular/trapezoidal numbers are kept (like the value, the ambiguity, etc., see Grzegorzewski et al. <doi:10.2991/ eusflat-19.2019.68>, Grzegorzewski et al. (2020) <doi:10.2991/ijcis.d.201012.003>, Grzegorzewski et al. (2020) <doi:10.34768/amcs-2020-0022>, Grzegorzewski and Romaniuk (2022) <doi:10.1007/ 978-3-030-95929-6_3>, Romaniuk and Hryniewicz (2019) <doi:10.1007/s00500-018-3251-5>). Some additional procedures related to these resampling methods are also provided, like calculation of the Bertoluzza et al.'s distance (aka the mid/spread distance, see Bertoluzza et al. (1995) "On a new class of distances between fuzzy numbers") and estimation of the p-value of the one- and two- sample bootstrapped test for the mean (see Lubiano et al. (2016, <doi:10.1016/j.ejor.2015.11.016>)). Additionally, there are procedures which randomly generate trapezoidal fuzzy numbers using some well-known statistical distributions. Version: 0.6.4 Imports: stats, utils Suggests: testthat (≥ 3.0.0), R.rsp Published: 2024-10-04 DOI: 10.32614/CRAN.package.FuzzyResampling Author: Maciej Romaniuk [aut, cre], Przemyslaw Grzegorzewski [aut], Olgierd Hryniewicz [aut] Maintainer: Maciej Romaniuk <mroman at ibspan.waw.pl> BugReports: https://github.com/mroman-ibs/FuzzyResampling/issues License: GPL-3 URL: https://github.com/mroman-ibs/FuzzyResampling NeedsCompilation: no Citation: FuzzyResampling citation info Materials: README NEWS CRAN checks: FuzzyResampling results Reference manual: FuzzyResampling.pdf Vignettes: Resampling Fuzzy Numbers with Statistical Applications: FuzzyResampling Package (source) Package source: FuzzyResampling_0.6.4.tar.gz Windows binaries: r-devel: FuzzyResampling_0.6.4.zip, r-release: FuzzyResampling_0.6.4.zip, r-oldrel: FuzzyResampling_0.6.4.zip macOS binaries: r-release (arm64): FuzzyResampling_0.6.4.tgz, r-oldrel (arm64): FuzzyResampling_0.6.4.tgz, r-release (x86_64): FuzzyResampling_0.6.4.tgz, r-oldrel (x86_64): Old sources: FuzzyResampling archive Please use the canonical form https://CRAN.R-project.org/package=FuzzyResampling to link to this page.
{"url":"https://cran.uni-muenster.de/web/packages/FuzzyResampling/index.html","timestamp":"2024-11-06T06:04:28Z","content_type":"text/html","content_length":"10124","record_id":"<urn:uuid:07d9707c-b977-4e3e-bf33-054c146d0019>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00255.warc.gz"}
MS Excel Totorial - Blog What is MS Excel? MS Excel is a spreadsheet program where one can record data in the form of tables. It is easy to analyse data in an Excel spreadsheet. How to open MS Excel? To open MS Excel on your computer, follow the steps given below: Click on Start Then All Programs Next step is to click on MS Office Then finally, choose the MS-Excel option Alternatively, you can also click on the Start button and type MS Excel in the search option available. What is a cell? A spreadsheet is in the form of a table comprising rows and columns. The rectangular box at the intersection point between rows and columns forms a cell. Given below is an image of a cell: What is Cell Address? The cell address is the name by which is cell can be addressed. For example, if row 7 is interested in column G, then the cell address is G7. Features of MS Excel Various editing and formatting can be done on an Excel spreadsheet. Discussed below are the various features of MS Excel. Comprises options like font size, font styles, font colour, background colour, alignment, formatting options and styles, insertion and deletion of cells and editing options Comprises options like table format and style, inserting images and figures, adding graphs, charts and sparklines, header and footer option, equation and symbols Page Layout Themes, orientation and page setup options are available under the page layout option Since tables with a large amount of data can be created in MS excel, under this feature, you can add formulas to your table and get quicker solutions Adding external data (from the web), filtering options and data tools are available under this category Proofreading can be done for an excel sheet (like spell check) in the review category and a reader can add comments in this part Different views in which we want the spreadsheet to be displayed can be edited here. Options to zoom in and out and pane arrangement are available under this category Benefits of Using MS Excel MS Excel is widely used for various purposes because the data is easy to save, and information can be added and removed without any discomfort and less hard work. Given below are a few important benefits of using MS Excel: Easy To Store Data: Since there is no limit to the amount of information that can be saved in a spreadsheet, MS Excel is widely used to save data or to analyse data. Filtering information in Excel is easy and convenient. Easy To Recover Data: If the information is written on a piece of paper, finding it may take longer, however, this is not the case with excel spreadsheets. Finding and recovering data is easy. Application of Mathematical Formulas: Doing calculations has become easier and less time-taking with the formulas option in MS excel More Secure: These spreadsheets can be password secured in a laptop or personal computer and the probability of losing them is way lesser in comparison to data written in registers or piece of paper. Data at One Place: Earlier, data was to be kept in different files and registers when the paperwork was done. Now, this has become convenient as more than one worksheet can be added in a single MS Excel file. Neater and Clearer Visibility of Information: When the data is saved in the form of a table, analysing it becomes easier. Thus, information is a spreadsheet that is more readable and understandable. MS Excel – Points To Remember There are certain things which one must know with respect to MS Excel, its applications and usage: • An MS Excel file is saved with an extension of .xls • Companies with large staff and workers use MS Excel as saving employee information becomes easier • Excel spreadsheets are also used in hospitals where the information of patients can be saved more easily and can be removed conveniently once their medical history is cleared • The sheet on which you work is called a Worksheet • Multiple worksheets can be added in a single Excel file • This is a data processing application MS Excel Questions and Answers Given below are a few sample questions based on MS Excel which will help candidates preparing for competitive exams to score more in the Computer Awareness section. Q 1. The address that is obtained by the combination of the Row number and the Column alphabet is called ________. 1. Worksheet 2. Cell 3. Workbox 4. Cell Address 5. Column Address Answer: (4) Cell Address Q 2. Where is the option for page border given in the MS Excel spreadsheet? 1. Home 2. Insert 3. Format 4. View 5. Page Border cannot be added in excel worksheet Answer: (5) Page Border cannot be added in excel worksheet Q 3. Excel workbook is a collection of _______ and _______. 1. Worksheet and charts 2. Graphs and images 3. Sheets and images 4. Video and audio 5. None of the above Answer: (1) Worksheet and charts Q 4. What type of chart is useful for comparing values over categories? 1. Bar Graph 2. Column Chart 3. Pie Chart 4. Line Graph 5. Such charts cannot be created in Excel Answer: (2) Column Chart Q 5. There is an option to add comments in an Excel worksheet, what are the cells called in which comments can be added? 1. Cell Tip 2. Comment Tip 3. Smart Tip 4. Point Tip 5. Query Tip Answer: (1) Cell Tip Q 6. Which of the following symbols needs to be added in the formula bar, before adding a formula? 1. * 2. $ 3. % 4. + 5. = Answer: (5) = Q 7. Which keyboard key is used for Help in MS Excel? 1. ctrl+H 2. F2 3. F1 4. shift+H 5. Alt+ctrl+home Answer: (3) F1 1. By clicking on it 2. By pressing the arrow keys 3. By pressing Tab key 4. All of the above 5. None of the above Q 8. How can you activate a cell in MS Excel? Answer: (4) All of the above Q 9. What is the definition of MS Excel? Ans. MS Excel is a spreadsheet program where one can record data in the form of tables. This gives the user a more systematic display of data.Q2 Q 10. What are the main features of Microsoft Excel? Ans. The main features of MS Excel include inserting a pivot table, sorting of tabulated data, adding formulas to the sheet, and calculating large data.Q3 Q 11. What are the common MS Excel formulas? Ans. Given below are the common calculations which can be done using MS Excel: • Addition • Subtraction • Average • Maximum and Minimum • Concatenate • Count Q 12. What is a cell in Microsoft Excel? Ans. MS Excel comprises a spreadsheet is in the form of a table comprising rows and columns. The rectangular box at the intersection point between rows and columns forms a cell.Q5 Q 13. Can multiple sheets be added to a single spreadsheet? Ans. Yes, MS Excel gives an option to add multiple worksheets to a single spreadsheet. The user can rename each of these worksheets as per their requirements. This content is copied from byjus.com
{"url":"https://codersqube.in/ms-excel-totorial/","timestamp":"2024-11-11T04:44:17Z","content_type":"text/html","content_length":"118801","record_id":"<urn:uuid:7232df0c-7e8d-4b1e-8001-09d382e527a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00803.warc.gz"}
History of the Golden Ratio - Ratio Applications: The Golden Ratio | Math Ratios History of the Golden Ratio The golden ratio, a mathematical concept that has fascinated mathematicians, artists, and philosophers for millennia, boasts an intriguing history. Its allure lies in the balance and harmony it represents, and its presence in nature, art, and architecture. Early History of the Golden Ratio The early history of the golden ratio can be traced back to the ancient Egyptians. Some historians believe that the design of the Great Pyramid of Giza (around 2589–2566 BC) reflects the golden ratio. Although this claim is debated, the golden ratio's potential use in Egyptian architecture testifies to its longstanding fascination. The Golden Ratio in Ancient Greece The golden ratio was formally defined in Euclid's "Elements" (around 300 BC), one of the most influential works in the history of mathematics. Euclid called it the "division of a line into extreme and mean ratio." This definition remained the standard for many centuries, and Euclid's work was a significant contribution to the field of geometry. The Golden Ratio During the Renaissance The Italian mathematician Leonardo of Pisa, commonly known as Fibonacci, introduced the Fibonacci sequence in his book "Liber Abaci" in 1202. While he did not directly mention the golden ratio, the ratio of consecutive terms in the Fibonacci sequence converges to the golden ratio. F(n+1) / F(n) ≈ Φ (for large n) During the Renaissance, artists and architects were attracted to the aesthetic appeal of the golden ratio. Luca Pacioli wrote a three-volume treatise on architecture, "De Divina Proportione" (1509), discussing the golden ratio. Leonardo da Vinci, who illustrated Pacioli's book, also used the golden ratio in his art, although the extent of its application remains a topic of debate. Modern Times In the 19th century, mathematician Martin Ohm (1792–1872) is believed to be the first to use the term "Golden" to describe the ratio. Following this, the golden ratio began to be more widely known and used in various fields of study. The Golden Ratio in the 20th Century and Beyond The 20th century saw the golden ratio's influence in surprising places, from the world of art, with the works of Salvador Dalí, to nature, with phyllotaxis studies in botany. Its mathematical properties also continued to be a subject of academic interest, finding relevance in areas such as number theory and complex function theory. Impact on Famous Mathematicians Over the years, the golden ratio's unique properties have had a profound impact on many mathematicians. Euclid, Fibonacci, and Pacioli, among others, contributed to its understanding. Furthermore, the golden ratio has also influenced modern mathematicians such as Roger Penrose, who used it to develop aperiodic tilings known as Penrose tiles. The Golden Ratio in Today's Mathematics Today, the golden ratio is a well-established concept in mathematics, appearing in various branches including geometry, algebra, and number theory. Its elegant properties continue to captivate mathematicians, and ongoing research expands our understanding of its role in complex mathematical structures. The history of the golden ratio is intertwined with the history of mathematics itself. From ancient architecture to Renaissance art, from the Fibonacci sequence to the Penrose tiles, the golden ratio remains a consistently fascinating and relevant concept in various fields of knowledge. The Golden Ratio Tutorials If you found this ratio information useful then you will likely enjoy the other ratio lessons and tutorials in this section:
{"url":"https://www.mathratios.com/tutorial/golden-ratio-history.html","timestamp":"2024-11-08T17:38:15Z","content_type":"text/html","content_length":"10110","record_id":"<urn:uuid:bfe731b4-77af-49a3-8a35-f77973c82c7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00471.warc.gz"}
Gordon Walter Semenoff: Professor at Department of Physics & Astronomy, UBC Faculty of Science Gordon Walter Semenoff Research Classification Research Interests Moedal experiment, Large Hadron Collider, CERN String theory, quantum field theory, statistical mechanics Theoretical and mathematical physics, the physics of elementary particles, condensed matter physics Relevant Thesis-Based Degree Programs Affiliations to Research Centres, Institutes & Clusters Research Options I am available and interested in collaborations (e.g. clusters, grants). I am interested in and conduct interdisciplinary research. I am interested in working with undergraduate students on research projects. Research Methodology Physical reasoning and analytic problem solving using advanced methods of mathematical physics. Master's students Doctoral students String theory, quantum field theory For MSc Students, the equivalent of an honours in physics degree from a Canadian university. For PhD students, the equivalent of an MSc in theoretical physics. Complete these steps before you reach out to a faculty member! Check requirements • Familiarize yourself with program requirements. You want to learn as much as possible from the information available to you before you reach out to a faculty member. Be sure to visit the graduate degree program listing and program-specific websites. • Check whether the program requires you to seek commitment from a supervisor prior to submitting an application. For some programs this is an essential step while others match successful applicants with faculty members within the first year of study. This is either indicated in the program profile under "Admission Information & Requirements" - "Prepare Application" - "Supervision" or on the program website. Focus your search • Identify specific faculty members who are conducting research in your specific area of interest. • Establish that your research interests align with the faculty member’s research interests. □ Read up on the faculty members in the program and the research being conducted in the department. □ Familiarize yourself with their work, read their recent publications and past theses/dissertations that they supervised. Be certain that their research is indeed what you are hoping to study. Make a good impression • Compose an error-free and grammatically correct email addressed to your specifically targeted faculty member, and remember to use their correct titles. □ Do not send non-specific, mass emails to everyone in the department hoping for a match. □ Address the faculty members by name. Your contact should be genuine rather than generic. • Include a brief outline of your academic background, why you are interested in working with the faculty member, and what experience you could bring to the department. The supervision enquiry form guides you with targeted questions. Ensure to craft compelling answers to these questions. • Highlight your achievements and why you are a top student. Faculty members receive dozens of requests from prospective students and you may have less than 30 seconds to pique someone’s interest. • Demonstrate that you are familiar with their research: □ Convey the specific ways you are a good fit for the program. □ Convey the specific ways the program/lab/faculty member is a good fit for the research you are interested in/already conducting. • Be enthusiastic, but don’t overdo it. Attend an information session G+PS regularly provides virtual sessions that focus on admission requirements and procedures and tips how to improve your application. These videos contain some general advice from faculty across UBC on finding and reaching out to a potential thesis supervisor. Supervision Enquiry If you have reviewed some of this faculty member's publications, understand their research interests and have reviewed the admission requirements, you may submit a contact request to this supervisor Graduate Student Supervision Doctoral Student Supervision Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations. Aspects of quantum information in quantum field theory and quantum gravity (2019) In this thesis we discuss applications of quantum information theoretic concepts toquantum gravity and the low-energy regime of quantum field theories.The first part of this thesis is concerned with how quantum information spreadsin four-dimensional scattering experiments for theories coupled to quantum electrodynamicsor perturbative quantum gravity. In these cases, every scattering processis accompanied by the emission of an infinite number of soft photons or gravitons,which cause infrared divergences in the calculation of scattering probabilities.There are two methods to deal with IR divergences: the inclusive and dressedformalisms. We demonstrate that in the late-time limit, independent of the method,the hard outgoing particles are entangled with soft particles in such a way that thereduced density matrix of the hard particles is essentially completely decohered.Furthermore, we show that the inclusive formalism is ill-suited to describe scatteringof wavepackets, requiring the use of the dressed formalism. We construct theHilbert space for QED in the dressed formalism as a representation of the canonicalcommutation relations of the photon creation/ annihilation algebra, and argue that itsplits into superselection sectors which correspond to eigenspaces of the generatorsof large gauge transformations.In the second part of this thesis, we turn to applications of quantum informationtheoretic concepts in the AdS/CFT correspondence. In pure AdS, we find anexplicit formula for the Ryu-Takayanagi (RT) surface for special subregions in thedual conformal field theory, whose entangling surface lie on a light cone. Theexplicit form of the RT surface is used to give a holographic proof of Markovicityof the CFT vacuum on a light cone. Relative entropy of a state on such specialsubregions is dual to a novel measure of energy associated with a timelike vector flow between the causal and entanglement wedge. Positivity and monotonicity ofrelative entropy imply positivity and monotonicity of this energy, which yields aconsistency conditions for solutions to quantum gravity. View record Applications of path integral localization to gauge and string theories (2018) In the first part of this thesis we exploit supersymmetric localization to study aspects of supersymmetric gauge theories relevant to holography. In chapter 2 we study the 1/2-BPS circular Wilson loop in the totally antisymmetric representation of the gauge group in N = 4 supersymmetric Yang-Mills. We compute the first 1/N correction at leading order in ’t Hooft coupling by means of the matrix model loop equations for comparison with the 1-loop effective action of the holographically dual D5-brane. Our result suggests the need to account for gravitational backreaction on the string theory side. In chapter 3 we solve the planar N = 2* super-Yang-Mills theory at large ’t Hooft coupling again using localization on S⁴. The solution permits detailed investigation of the resonance phenomena responsible for quantum phase transitions in infinite volume, and leads to quantitative predictions for the semiclassical string dual of the N = 2* theory. The second part of the thesis deals with the Schwinger effect in scalar quantum electrodynamics and in bosonic string theory. Chapter 4 presents a detailed study of the semiclassical expansion of the world line path integral for a charged relativistic particle in a constant external electric field. It is demonstrated that the Schwinger formula for charged particle pair production is reproduced exactly by the semiclassical expansion around classical instanton solutions when the leading order of fluctuations is taken into account. By a localization argument we prove that all corrections to this leading approximation vanish and that the WKB approximation to the world line path integral is exact. Finally, in chapter 5 we analyse the problem of charged string pair creation in a constant external electric field. We find the instantons in the worldsheet sigma model which are responsible for the tunneling events, and evaluate the sigma model partition function in the multi-instanton sector in the WKB approximation. We further identify a fermionic symmetry associated with collective coordinates, which we use to localize the worldsheet functional integral onto its WKB limit, proving that our result is exact. View record Infrared quantum information (2018) Scattering amplitudes in massless gauge field theories have long been known to give rise to infrared divergent effects from the emission of very low energy gauge bosons. The traditional way of dealing with those divergences has been to abandon the idea of measuring amplitudes by only focusing on inclusive cross-sections constructed out of physically equivalent states. An alternative option, found to be consistent with the S-matrix framework, suggested to dress asymptotic states of charged particles by shockwaves of low energy bosons. In this formalism, the clouds of soft bosons, when tuned appropriately, cancel the usual infrared divergences occurring in the standard approach. Recently, the dressing approach has received renewed attention for its connection with newly discovered asymptotic symmetries of massless gauge theories and its potential role in the black hole information paradox.We start by investigating quantum information properties of scattering theory while having only access to a subset of the outgoing state. We give an exact formula for the von Neuman entanglement entropy of an apparatus particle scattered off a set of system particles and show how to obtain late-time expectation values of apparatus observables.We then specify to the case of quantum electrodynamics (QED) and gravity where the unobserved system particles are low energy photons and gravitons. Using the standard inclusive cross-section formalism, we demonstrate that those soft bosons decohere nearly all momentum superpositions of hard particles. Repeating a similar computation using the dressing formalism, we obtain an analogous result: In either framework, outgoing hard momentum states at late times are fully decohered from not having access to the soft bosons. Finally, we make the connection between our results and the framework of asymptotic symmetries of QED and gravity. We give new evidence for the use of the dressed formalism by exhibiting an inconsistency in the scattering of wavepackets in the original inclusive cross-section framework. View record 2+1d Quantum Field Theories in Large N Limit (2017) In Chapter 1, we present a brief introduction to the tight-binding model of graphene and show that in the low-energy continuum limit, it can be modeled by reduced QED₂₊₁ . We then review renormalization group technique which is used in the next chapters. In Chapter 2, we consider a quantum field theory in 3+1d with the defect of a large number of fermion flavors, N. We study the next-to-leading order contributions to the fermions current-current correlation function by performing a large N expansion. We find that the next-to-leading order contributions 1/N to the current-current correlation function is significantly suppressed. The suppression is a consequence of a surprising cancellation between the two contributing Feynman diagrams. We calculate the model's conductivity via the Kubo formula and compare our results with the observed conductivity for graphene. In Chapter 3, we study graphene's beta function in large N. We use the large N expansion to explore the renormalization of the Fermi velocity in the screening dominated regime of charge neutral graphene with a Coulomb interaction. We show that inclusion of the fluctuations of the magnetic field lead to a cancellation of the beta function to the leading order in 1/N. The first non-zero contribution to the beta function turns out to be of order 1/N². We perform a careful analysis of possible infrared divergences and show that the superficial infrared divergences do not contribute to the beta function. In Chapter 4, we study the phase structure of a Φ⁶ theory in large N. The leading order of the large N limit of the O(N) symmetric phi-six theory in three dimensions has a phase which exhibits spontaneous breaking of scale symmetry accompanied by a massless dilaton. In this chapter, we show that this “light dilaton” is actually a tachyon. This indicates an instability of the phase of the theory with spontaneously broken approximate scale invariance. We rule out the existence of Bardeen-Moshe-Bander phase. In this thesis, we show that Large N expansion is a powerful tool which in regimes that the system is interacting strongly could be used as an alternative to coupling expansion scheme. View record Holographic gauge/gravity duality and symmetry breaking in semimetals (2017) We use the AdS/CFT correspondence (the holographic duality of gauge/gravity theory) to study exciton driven dynamical symmetry breaking in certain (2+1)-dimensional defect quantum field theories. These models can be argued to be analogs of the electrons with Coulomb interactions which occurin Dirac semimetals and the results our study of these model systems are indicative of behaviours that might be expected in semimetal systems such as monolayer and double monolayer graphene. The field theory models have simple holographic duals, the D3-probe-D5 brane system and the D3-probe-D7 brane system. Analysis of those systems yields information about the strong coupling planar limits of the defect quantum field theories. We study the possible occurrence of exciton condensates in the strong coupling limit of single-defect theories as well as double monolayer theories where we find a rich and interesting phase diagram. The phenomena which we study include the magnetic catalysis of chiral symmetry breaking in monolayers and inter-layer exciton condensation in double monolayers. In the latter case, we find a solvable model where the current-current correlations functions in the planar strongly coupled field theory can be computed explicitly and exhibit interesting behavior. Although the models that we analyze differ in detail from real condensed matter systems, we identify some phenomena which can occur at strong coupling in a generic system and which could well be relevant to the ongoing experiments on multi-monolayer Dirac semimetals. An example is the spontaneous nesting of Fermi surfaces in double monolayers. In particular, we suggest an easy to observe experimental signature of this phenomenon. View record Momentum-space entanglement and the gravity of entanglement in AdS/CFT (2014) In the first part of this thesis we explore the entanglement structure of relativistic field theories in momentum space. We discuss a Wilsonian path integral formulation and a perturbative approach. Using perturbation theory we obtain results for specific quantum field theories. These are understood through scaling and decoupling properties of field theories. Convergence of the perturbation theory taking loop diagrams into account is also discussed. We then discuss the entanglement structure in systems where Lorentz invariance is broken by a Fermi surface. The Fermi surface helps the convergence of perturbation theory and entanglement of modes near the Fermi surface is shown to be amplified, even in the presence of a large momentum cutoff. In the second part of this thesis we explore the connection between entanglement and gravity in the context of the AdS/CFT correspondence. We show that there are certain thermodynamic-like relations common to all conformal field theories, which when mapped via the AdS/CFT correspondence to the bulk are tantamount to Einstein's equations, to lowest order in the metric. View record The AdS/CFT correspondence and string theory on the pp-wave (2008) Aspects of the AdS/CFT correspondence are studied in the pp.-wave/BMN limit. We usethe light cone string field theory to investigate energy shifts of the one and two impuritystates. In the case of two impurity states we find that logarithmic divergences, in the sumsof intermediate states, actually cancel out between the Hamiltonian and a Q-dependentcontact term”. We show how non-perturbative terms, that have previously plagued thistheory, vanish as a consequence of this cancelation. We argue from this that every order ofinternal impurities contributes to the overall energy shift and attempt to give a systematicway of calculating such sums for the case of the simplest 3-string vertex (one proposed bydiVecchia).We extend our analysis of the mass shift to the case of the most advanced 3-stringvertex (proposed by Dobashi and Yoneya). We find agreement between our string fieldtheory calculations and the leading order CFT result in the BMN limit. We also find strongsimilarities between our result and higher orders in the field theory, including, on the stringside, the disappearance of the half-integer powers which generically do not exist in the fieldtheory calculations.We also study the orbifolding of the pp-wave background which results in the discretequantization of the light-cone momentum. We present the string field theory calculation forsuch a discreet momentum case. We also observe how a particular choice of the orbifold,results in the string theory corresponding to the quantization of the finite size giant magnonon the CFT side. We study this theory in detail with particular emphasis on its superalgebra. View record Master's Student Supervision Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses. Towards a holographic universe (2022) In this thesis, we present a bottom-up holographic model for a large class of time- reversal symmetric cosmological spacetimes, through the anti-de Sitter/conformal field theory (AdS/CFT) correspondence.A major challenge in describing cosmological spacetimes using the AdS/CFT correspondence, is that they often do not have an anti-de Sitter (AdS) boundary. To solve this problem we have constructed a geometry by embedding spherically symmetric regions of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime with a given scale factor inside a Schwarzschild-AdS (SAdS) spacetime, with the simple assumption that the two regions are separated by a thin shell satisfying Israel junction conditions. To ensure there exists a quantum state in the conformal field theory (CFT) which is dual to the bulk spacetime, we consider only time-reversal symmetric bubble spacetimes. This property allows us to define a real Euclidean spacetime by analytically continuing to imaginary times. We show that in certain cases, the Euclidean spacetime with its non-trivial asymptotic structure in form of the combined Euclidean AdS boundary of the FLRW cosmology and the SAdS boundary, gives rise to a natural state of the CFT via a Euclidean path integral. We also demonstrate the embedding procedure and existence of non-trivial asymptotics through some explicit examples.At this point two significant complications may arise. Firstly, to have the Euclidean asymptotics and time-reversal symmetry discussed above, we need cosmologies with a fundamentally negative cosmological constant, Λ. We argue that although the Λ-cold dark matter (ΛCDM) model points towards a small positive Λ, there is a plausible path forward with a model with a time dependent scalar field with a potential that is currently positive, but rolling towards a negative value to give us an effective negative Λ. Secondly, it is possible that our bubble lies behind the horizon of the SAdS black hole where we typically can’t probe. This problem is solved by a thorough analysis of the model’s parameter space, which suggests there is always a large set of parameters allowing embedding arbitrarily large bubbles of cosmology that peek out of the horizon. View record Full photon propagator and boundary charges in a 4D bulk with a 3D defect (2021) We conduct a review of the basic concepts about boundary conformal field theory and boundary conformal anomalies. Next, we build a 4-D bulk and insert it with a 3-D fermion defect to consider the structure of full photon propagator and energy-momentum tensor in defect conformal field theory. Our result shows the interaction part of full photon propagator has a coeffi- cient which depends on the coefficient of one photon irreducible propagator. Comparing with the boundary CFT model, our defect CFT model loses part of the symmetry and the fold trick cannot be applied to the full photon prop- agator. In the end, we calculated the boundary central charges with the full photon propagator. We find all projective terms in full photon propagator under Feynman Gauge vanish when we calculate the two-point function of energy-momentum tensor. The result shows the boundary central charges also depend on the coefficient of one photon irreducible propagator. After ignoring the interaction fixing term, we find the boundary central charge reduces to half comparing to the boundary CFT model. View record Topics in boundary quantum field theory: magnetic edge states in graphene and BCFT orbifold (2021) In this thesis, we investigate two examples of quantum field theory with planar boundaries. In the first part, we study the low energy excitations in a semi-infinite graphene sheet with the zigzag boundary condition. The system is described by a massless Dirac field with boundary condition such that half of the spinor components vanish on the boundary. From the residual continuous and discrete symmetries of the system, we argue that the graphene zigzag edge should be ferromagnetic. In the second part, we study symmetric orbifold boundary conformal field theory (BCFT). We show how to construct Cardy consistent boundary states for this symmetric orbifold BCFT. We also compute the boundary entropy and comment on its relevance to the AdS/BCFT correspondence. View record Infrared divergences in N=4 dupersymmetric Yang-Mills theory (2020) A massive quark with U(1) charge is constructed with the Higgs mechanism in N = 4supersymmetric Yang-Mills theory and is set up in a constant, uniform external electricfield such that its classical trajectory is that of constant acceleration. The leading term inthe amplitude of the trajectory in the semi-classical approximation is quadratic in the totalproper time, which is attributed to infrared divergences. We consider various methods oftreating such divergences. The inclusion of Bremsstrahlung emission desirably replacesthe quadratic dependence with a linear one, but forces us to reconsider the meaning of aglobal color charge. The method of dressing by Wilson line is shown to be unsuccessful.We finally provide an Ansatz for a Chung-like dressing factor to lowest order and showthe elimination of infrared divergences. View record Exploring the dirac equation (2018) In this thesis low energy excitations of perfectly dimerized trans-polyacetylene are modelled using the one-dimensional Dirac equation. The system is solved on both the half-line and segment, and the solutions are used to explore quantum phenomena. It is discovered that the zero mode of the half-line is a Majorana fermion quasiparticle. It is also found that dominate zero mode coupling to an electron on a scanning tunnelling microscope is achieved with a sufficiently large mass gap of the quantum wire. This allows scattering state excitations to be ignored in calculations in this thesis. It is also shown that the zero mode can facilitate entanglement of two electrons, each in proximity to opposite ends of a long segment of trans-polyacetylene. An algorithm is also developed which teleports the spin state of an electron on a segment of trans-polyacetylene. The quantum measurement used in this algorithm conserves fermion parity symmetry, however charge superselection is violated for three-fourths of the measurement operators. In the thermally isolated system teleportation is successful for all of the measurement operators on the ground state. However, decoherence occurs in the non-thermally isolated system due to thermal mixing of nearly degenerate states, leading to teleportation being successful for only half of the measurement operations on the thermal View record Lollipop diagrams in defect N=4 super Yang-Mills theory (2017) In this thesis, we have studied the lollipop diagrams in defect $\mathcal{N}$=4 super Yang-Mills field theory with nontrivial background, which is dual to the D3-D5 brane system with the probe D5 brane carrying k units of flux. Using the framework for performing loop computations for this system built by Buhl-Mortensen, Leeuw, Ipsen, Kristjansen and Wilhelm, we prove that for arbitrary N and k, the contribution of the lollipop diagrams to the one-point function is zero. This improves their result, where they take the planar limit N>>1 and the probe brane limit k/N View record Momentum-Space Classification of Topologically Stable Fermi Surfaces (2015) The purpose of the present work is to derive a classification for topologically stable Fermi surfaces for translationally invariant systems with no electron-electron interactions. To derive such a classification we introduce the necessary concepts in condensed matter and electronic band theory as well as those in mathematics such as topological spaces, building up to topological K-theory and its connections with Fredholm operators. We further compute such classes when there is only translational invariance for dimensions d = 1, 2, 3 and discuss the inclusion of other symmetries. View record Thermodynamic and Transport Properties of a Holographic Quantum Hall System (2015) We apply the AdS/CFT correspondence to study a quantum Hall system at strong coupling. Fermions at finite density in an external magnetic field are put in via gauge fields living on a stack of D5 branes in Anti-deSitter space. Under the appropriate conditions, the D5 branes blow up to form a D7 brane which is capable of forming a charge-gapped state. We add finite temperature by including a black hole which allows us to compute the low temperature entropy of the quantum Hall system. Upon including an external electric field (again as a gauge field on the probe brane), the conductivity tensor is extracted from Ohm’s law. View record Non-Abelian D5 Brane Dynamics (2014) The goal of this thesis is to analyse the non-abelian dual model to the defect probe D7-brane embedding in AdS₅ × S⁵[1]. The D7-brane picture can be thought of as a large number (N₅) of D5-branes growing a transverse fuzzy two-sphere, called BIon. This non-abelian solution improves our knowledge of the system by incorporating deviations in 1/N_{5}^{2} in the number of flavors. Such corrections are important from the point of view of the AdS/CFT correspondence as the CFT dual to the probe system is a candidate model for graphene, which possesses an emergent SU(4) symmetry. The main result of this work is the conductivity for the non-abelian D5 sytem. We fi nd that quantum Hall states have a non-integer transverse conductivity that depends on the number of flavor branes in the model. This deviationscales in 1/N_{5}^{2} in the number of flavor branes and vanishes in the large N₅ limit. View record Holographic descriptions of magnetic field on chiral field theory (2011) In this thesis, we will study a top-down string theory holographic model of strongly interacting relativistic 2+1 dimensional fermions. We study the defect theory as examining a charged probe D7-branes/anti-branes modeland a charged probe D5-branes/anti-branes model on the thermal AdS₅×S⁵ geometry. We use the branes pair model to depict a geometrical chiral symmetry breaking. We are especially interested in the holographic magnetic effect on the flavour symmetries. View record Holographic fermions in d=2+1 (2011) Recently, a large amount of effort has gone towards using the AdS/CFT conjecture in condensed matter physics. First, we present a review of the conjecture, then we use the conjecture to model 2+1-dimensional fermions. We find three kinds of solutions with different kinds of discrete symmetries. We show that Chern-Simons- like electric responses, computed using a holographic model appear with the right quantized coefficients. View record Large N Gauge Theory and k-Strings (2011) We considered the k-antisymmetric representation of U(N) gauge group on two dimensional lattice space and derived the free energy by saddle point approximation in large N limit. k is a large integer comparable with N. Besides Gross-Witten phase transition[1], which happens as the coupling constant changes, we found a new phase transition in the strong coupling system that happens as k changes. The free energy of the weak coupling system is a smooth function of k under continuous limit. We have carefully selected the right saddle point solution among other possible ones. Thenumerical results match our saddle point calculations. View record Modular Invariance of Closed Bosonic String Theory with a PP-Wave Background (2010) After a brief review of the necessary parts of the theory of the bosonic string, a consistent pp-wave background with constant dilaton and constant three-index antisymmetric field strength is introduced. In particular, the gravitational background is the plane wave with constant coefficients, and the antisymmetric field strength is chosen such that the worldsheet theory is both diff×Weyl invariant and stable. The one-loop closed bosonic string amplitude is evaluated and shown to be modular invariant. Then the free energy of a free closed string gas is calculated, modular invariance of it is proved, and the result is shown to be equivalent to the sum of free energies for the individual particle states. View record • Dilaton in a multicritical 3+epsilon-D parity violating field theory (2024) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 853 • MoEDAL Search in the CMS Beam Pipe for Magnetic Monopoles Produced via the Schwinger Effect (2024) Physical Review Letters, 133 (7) • Boundary ferromagnetism in zigzag edged graphene (2023) Journal of Mathematical Physics, 64 (7) • Decoherence of Particle Detectors (2022) Peter Suranyi 87th Birthday Festschrift a Life in Quantum Field Theory, 253-266 • Gravitational fields and quantum mechanics (2022) International Journal of Modern Physics D, 31 (14) • Massless fermions on a half-space: the curious case of 2+1-dimensions (2022) Journal of High Energy Physics, 2022 (10) • Peter Suranyi 87th Birthday Festschrift A Life in Quantum Field Theory (2022) Peter Suranyi 87th Birthday Festschrift a Life in Quantum Field Theory, 1-343 • Search for highly-ionizing particles in pp collisions at the LHC’s Run-1 using the prototype MoEDAL detector (2022) European Physical Journal C, 82 (8) • Search for magnetic monopoles produced via the Schwinger mechanism (2022) Nature, 602 (7895), 63-67 • Soft scalars do not decouple (2022) Physical Review D, 106 (10) • First Search for Dyons with the Full MoEDAL Trapping Detector in 13 TeV pp Collisions (2021) Physical Review Letters, 126 (7) • Timepix3 as a solid-state time-projection chamber in particle and nuclear physics (2021) Proceedings of Science, 390 • Entanglement and the Infrared (2020) Springer Proceedings in Mathematics and Statistics, 335, 151-166 • How to split the electron in half (2020) Roman Jackiw: 80th Birthday Festschrift, 245-264 • On the infrared divergence and global colour in N = 4 Yang-Mills theory (2020) Journal of High Energy Physics, 2020 (7) • Preface (2020) Springer Proceedings in Physics, 239, vii-x • Defect QED: dielectric without a dielectric, monopole without a monopole (2019) Journal of High Energy Physics, 2019 (11) • Magnetic Monopole Search with the Full MoEDAL Trapping Detector in 13 TeV pp Collisions Interpreted in Photon-Fusion and Drell-Yan Production (2019) Physical Review Letters, 123 (2) • Dressed infrared quantum information (2018) Physical Review D, 97 (2) • Dynamical Violation of Scale Invariance and the Dilaton in a Cold Fermi Gas (2018) Physical Review Letters, 120 (20) • On the need for soft dressing (2018) Journal of High Energy Physics, 2018 (9) • Search for magnetic monopoles with the MoEDAL forward trapping detector in 2.11 fb−1 of 13 TeV proton–proton collisions at the LHC (2018) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 782, 510-516 • Worldsheet instantons and the amplitude for string pair production in an external field as a WKB exact functional integral (2018) Journal of High Energy Physics, 2018 (5) • Infrared Quantum Information (2017) Physical Review Letters, 119 (18) • Scattering and momentum space entanglement (2017) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 772, 699-702 • Search for Magnetic Monopoles with the MoEDAL Forward Trapping Detector in 13 TeV Proton-Proton Collisions at the LHC (2017) Physical Review Letters, 118 (6) • Light dilaton in the large N tricritical O (N) model (2016) Physical Review D, 94 (12) • Phase structure of a holographic double monolayer Dirac semimetal (2016) Journal of High Energy Physics, 2016 (6) • Search for magnetic monopoles with the MoEDAL prototype trapping detector in 8 TeV proton-proton collisions at the LHC (2016) Journal of High Energy Physics, 2016 (8) • The D3-probe-D7 brane holographic fractional topological insulator (2016) Journal of High Energy Physics, 2016 (10) • Topology of Fermi surfaces and anomaly inflows (2016) Journal of High Energy Physics, 2016 (11) • Exciton condensation in a holographic double monolayer semimetal (2015) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 750, 22-25 • String worldsheet for accelerating quark (2015) Journal of High Energy Physics, 2015 (10) • World-line instantons and the Schwinger effect as a Wentzel-Kramers-Brillouin exact path integral (2015) Journal of Mathematical Physics, 56 (2) • A holographic quantum Hall ferromagnet (2014) Journal of High Energy Physics, 2014 (2) • Conductivity tensor in a holographic quantum Hall ferromagnet (2014) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 738, 373-379 • Holographic D3-probe-D5 model of a double layer Dirac semimetal (2014) Journal of High Energy Physics, 2014 (12) • The physics programme of the MoEDAL experiment at the LHC (2014) International Journal of Modern Physics A, 29 (23) • Universal Bose gases near resonance: A rigorous solution (2014) Physical Review A - Atomic, Molecular, and Optical Physics, 89 (3) • Chiral primary one-point functions in the D3-D7 defect conformal field theory (2013) Journal of High Energy Physics, 2013 (1) • D3-D7 holographic dual of a perturbed 3D CFT (2013) Physical Review D - Particles, Fields, Gravitation and Cosmology, 88 (2) • D7-anti-D7 bilayer: Holographic dynamical symmetry breaking (2013) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 722 (4-5), 360-363 • Giant D5 brane holographic Hall state (2013) Journal of High Energy Physics, 2013 (6) • Hot holographic giant loop (2013) Journal of Physics: Conference Series, 462 (1) • Chiral symmetry breaking in graphene (2012) Physica Scripta (T146) • D3-D5 holography with flux (2012) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 715 (1-3), 225-229 • Engineering holographic graphene (2012) AIP Conference Proceedings, 1483, 305-329 • Spin versus charge-density-wave order in graphenelike systems (2012) Physical Review B - Condensed Matter and Materials Physics, 86 (12) • Comments on k-strings at large N (2011) Journal of High Energy Physics, 2011 (3) • Electronic zero modes of vortices in Hall states of gapped graphene (2011) Physical Review B - Condensed Matter and Materials Physics, 83 (11) • Holographic fermionic fixed points in d=3 (2011) Journal of High Energy Physics, 2011 (9) • Holographic Schwinger effect (2011) Physical Review Letters, 107 (17) • Large representation recurrences in large N random unitary matrix models (2011) Journal of High Energy Physics, 2011 (10) • Magnetic catalysis and quantum Hall ferromagnetism in weakly coupled graphene (2011) Journal of High Energy Physics, 2011 (7) • On the spectrum of the AdS5 × S5 string at large λ (2011) Journal of High Energy Physics, 2011 (3) • Hot giant loop holography (2010) Physical Review D - Particles, Fields, Gravitation and Cosmology, 82 (2) • Coulomb interaction at the metal-insulator critical point in graphene (2009) Physical Review B - Condensed Matter and Materials Physics, 80 (8) • Finite size giant magnon (2009) Physical Review D - Particles, Fields, Gravitation and Cosmology, 79 (12) • Large representation Polyakov loop in hot Yang-Mills theory (2009) Proceedings of Science • Domain walls in happed graphene (2008) Physical Review Letters, 101 (8) • Electron fractionalization for two-dimensional Dirac fermions (2008) Physical Review B - Condensed Matter and Materials Physics, 77 (23) • Finite size giant magnons in the string dual of N=6 superconformal Chern-Simons theory (2008) Journal of High Energy Physics, 2008 (12) • Discrete symmetries and 1/3-quantum vortices in condensates of F=2 cold atoms (2007) Physical Review Letters, 98 (10) • Gauge invariant finite size spectrum of the giant magnon (2007) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 651 (4), 329-335 • Stretched quantum states emerging from a Majorana medium (2007) Journal of Physics B: Atomic, Molecular and Optical Physics, 40 (8), 1479-1488 • Testing AdS/CFT at string loops (2007) Fortschritte der Physik, 55 (5-7), 742-747 • Uniaxial and biaxial spin nematic phases induced by quantum fluctuations (2007) Physical Review Letters, 98 (16) • AdS/CFT v.s. string loops (2006) Journal of High Energy Physics, 2006 (6) • Critical boundary sine-Gordon revisited (2006) Annals of Physics, 321 (12), 2849-2875 • Cusped SYM Wilson loop at two loops and beyond (2006) Nuclear Physics B, 748 (1-2), 170-199 • Exact 1 / 4 BPS loop-Chiral primary correlator (2006) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 643 (3-4), 195-204 • Finite size corrections and integrability of N = 2 SYM and DLCQ strings on a pp-wave (2006) Journal of High Energy Physics, 2006 (9) • Quantum insulating states of F=2 cold atoms in optical lattices (2006) Physical Review Letters, 97 (18) • Wilson loops in N = 4 SYM and fermion droplets (2006) Journal of High Energy Physics, 2006 (6) • Divergence cancellation and loop corrections in string field theory on a plane wave background (2005) Journal of High Energy Physics (12), 319-346 • Fermion representation of the rolling tachyon boundary conformal field theory (2005) Journal of High Energy Physics (5), 2016-2052 • Free energy and phase transition of the matrix model on a plane wave (2005) Physical Review D - Particles, Fields, Gravitation and Cosmology, 71 (6), 1-11 • Wavy Wilson line and AdS/CFT (2005) International Journal of Modern Physics A, 20 (13), 2833-2846 • DLCQ string spectrum from N = 2 SYM theory (2004) Journal of High Energy Physics, 8 (11), 1369-1404 • BMN correlators and operator mixing in N = 4 super-Yang-Mills theory (2003) Nuclear Physics B, 650 (1-2), 125-161 • On a modification of the boundary-state formalism in off-shell string theory (2003) JETP Letters, 77 (1), 1-6 • String theory in electromagnetic fields (2003) Journal of High Energy Physics, 7 (2), 529-575 • The boundary state formalism and conformal invariance in off-shell string theory (2003) Journal of High Energy Physics, 7 (11), 459-475 • The superstring Hagedorn temperature in a pp-wave background (2003) Journal of High Energy Physics, 7 (6), 147-169 • A new double-scaling limit of N = 4 super-Yang-Mills theory and pp-wave strings (2002) Nuclear Physics B, 643 (1-3), 3-30 • DLCQ strings and branched covers of torii (2002) Nuclear Physics B - Proceedings Supplements, 108, 99-105 • O(D) invariant tachyon condensates in the 1/D expansion (2002) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 543 (1-2), 127-134 • Wilson loops in SYM theory: From weak to strong coupling (2002) Nuclear Physics B - Proceedings Supplements, 108, 106-112 • Color superconductivity and nondecoupling phenomena in (2+1)-dimensional QCD (2001) Physical Review D - Particles, Fields, Gravitation and Cosmology, 64 (2), 250051-2500510 • Color superconductivity and nondecoupling phenomena in (2+l)-dimensional QCD (2001) Physical Review D, 64 (2) • Matrix strings in a b-field (2001) Journal of High Energy Physics, 5 (7), 1-46 • More exact predictions of SUSYM for string theory (2001) Nuclear Physics B, 616 (1-2), 34-46 • Non-commutative gross-neveu model at large n (2001) Journal of High Energy Physics, 5 (6), 1-24 • Running couplings and triviality of field theories on noncommutative spaces (2001) Physical Review D - Particles, Fields, Gravitation and Cosmology, 64 (6), 5 • Running couplings and triviality of field theories on noncommutative spaces (2001) Physical Review D, 64 (6) • The target space dependence of the Hagedorn temperature (2001) Journal of High Energy Physics, 5 (11) • Extremal curves in (2 + 1)-dimensional Yang-Mills theory (2000) Nuclear Physics B, 576 (1-3), 627-654 • Matrix theory interpretation of discrete light cone quantization string worldsheets (2000) Physical Review Letters, 85 (16), 3343-3346 • Static potential in [formula presented] supersymmetric Yang-Mills theory (2000) Physical Review D - Particles, Fields, Gravitation and Cosmology, 61 (10) • Static potential in script N=4 supersymmetric Yang-Mills theory (2000) Physical Review D - Particles, Fields, Gravitation and Cosmology, 61 (10), 1-4 • Wilson loops in N=4 supersymmetric Yang-Mills theory (2000) Nuclear Physics B, 582 (1-3), 155-175 • Loops, surfaces and grassmann representation in two- and three-dimensional ising models (1999) International Journal of Modern Physics A, 14 (29), 4549-4574 • On the Correspondence between the Strongly Coupled 2-Flavor Lattice Schwinger Model and the Heisenberg Antiferromagnetic Chain (1999) Annals of Physics, 275 (2), 254-296 • Screening and D-brane dynamics in finite temperature superstring theory (1999) Physical Review D, 60 (10) • Screening and D-brane dynamics in finite temperature superstring theory (1999) Physical Review D - Particles, Fields, Gravitation and Cosmology, 60 (10) • Screening and D-brane dynamics in finite temperature superstring theory (1999) Physical Review D, 60 (10) • Solitons on branes (1999) Nuclear Physics B, 556 (1-2), 247-261 • Spectrum of the two-flavor Schwinger model from the Heisenberg spin chain (1999) Physical Review D - Particles, Fields, Gravitation and Cosmology, 59 (3) • Structure of the electric flux in script N sign = 4 supersymmetric Yang-Mills theory (1999) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 466 (2-4), 239-243 • Thermodynamic partition function of matrix superstrings (1999) Nuclear Physics B, 561 (1-2), 243-272 • Thermodynamics of DO-branes in matrix theory (1999) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 445 (3-4), 307-315 • Two dimensional anti-de Sitter space and discrete light cone quantization (1999) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 468 (1-2), 52-57 • Universality and the magnetic catalysis of chiral symmetry breaking (1999) Physical Review D - Particles, Fields, Gravitation and Cosmology, 60 (10) • Universality in effective strings (1999) JETP Letters, 69 (7), 509-515 • Phase transition induced by a magnetic field (1998) Modern Physics Letters A, 13 (14), 1143-1154 • Topology and duality in abelian lattice theories 1 (1998) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 425 (3-4), 282-290 • D-brane configurations and Nicolai map in supersymmetric Yang-Mills theory (1997) Modern Physics Letters A, 12 (3), 183-193 • Deconfinement transition for quarks on a line (1997) Annals of Physics, 256 (1), 74-113 • Fermionic matrix models (1997) International Journal of Modern Physics A, 12 (12), 2135-2291 • G/G models as the strong coupling limit of topologically massive gauge theory (1997) Nuclear Physics B, 489 (1-2), 360-384 • Loop correlators and theta states in two-dimensional Yang-Mills theory (1997) Annals of Physics, 260 (2), 275-310 • Quantum critical phenomena: The relationship between spin systems and strongly coupled gauge theories (1997) Surveys in High Energy Physics, 10 (1-4), 265-278 • Vacuum structure and θ states of adjoint QCD in two dimensions (1997) Nuclear Physics B, 487 (1-2), 191-206 • Vacuum structure of two-dimensional gauge theories for arbitrary Lie groups (1997) Nuclear Physics B, 506 (1-2), 521-536 • Z-symmetry and polyakov loops in hot abelian gauge theories (1997) International Journal of Modern Physics A, 12 (6), 1205-1214 • Adjoint non-Abelian Coulomb gas at large N (1996) Nuclear Physics B, 480 (1-2), 317-337 • Charge screening and confinement in hot 3-D QED (1996) Nuclear Physics B, 473 (1-2), 143-172 • Charge screening in the finite temperature Schwinger model (1996) International Journal of Modern Physics A, 11 (22), 4103-4128 • Confinement-deconfinement transition in three-dimensional QED (1996) Physical Review D - Particles, Fields, Gravitation and Cosmology, 53 (12), 7157-7161 • Conformal motions and the Duistermaat-Heckman integration formula (1996) Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 372 (3-4), 236-245 • Exact solution of the one-dimensional non-abelian coulomb gas at large N (1996) Physical Review Letters, 77 (11), 2174-2177 • Gauged Yukawa matrix models and two-dimensional lattice theories (1996) Physical Review D - Particles, Fields, Gravitation and Cosmology, 53 (10), 5886-5890 • Non-Abelian Coulomb gas (1996) Physics Essays, 9 (4), 576-582 • Polymer statistics and fermionic vector models (1996) Modern Physics Letters A, 11 (14), 1185-1197 • The braid group of a canonical Chern-Simons theory on a riemann surface (1996) Annals of Physics, 245 (1), 1-22 • Unitary matrix integrals in the framework of the generalized Kontsevich model (1996) International Journal of Modern Physics A, 11 (28), 5031-5080 • Canonical BF-type topological field theory and fractional statistics of strings (1995) Nuclear Physics, Section B, 437 (3), 695-721 • Critical behaviour of a fermionic random matrix model at large-N (1995) Physics Letters B, 351 (1-3), 153-161 • New universality classes for quantum critical behavior (1995) Physical Review Letters, 74 (25), 4976-4979 • Observables and critical behaviour in fermionic matrix models (1995) Theoretical and Mathematical Physics, 104 (1), 823-860 • Phase space isometries and equivariant localization of path integrals in two dimensions (1994) Nuclear Physics, Section B, 421 (2), 391-412 • Phase transitions and mass generation in 2+1 dimensions (1994) Physical Review D, 50 (2), 1060-1067 • QCD1+1 with massless quarks and gauge covariant Sugawara construction (1994) Physics Letters B, 341 (2), 195-204 • Area law and continuum limit in "induced QCD" (1993) Nuclear Physics, Section B, 395 (3), 547-580 • Canonical Chern-Simons theory and the braid group on a Riemann surface (1993) Physics Letters B, 311 (1-4), 137-146 • Chiral dynamics and fermion mass generation in three-dimensional gauge theory (1993) Physical Review Letters, 70 (25), 3848-3851 • Four-fermion theory and the conformal bootstrap (1993) Annals of Physics, 228 (2), 341-364 • Gribov ambiguity and non-trivial vacuum structure of gauge theories on a cylinder (1993) Physics Letters B, 303 (3-4), 303-307 • Induced QCD without local confinement (1993) Physics Letters B, 302 (2-3), 283-290 • Radiative corrections to anyon scattering (1993) Nuclear Physics, Section B, 392 (3), 700-724 • SU (N) antiferromagnets and the phase structure of QED in the strong coupling limit (1993) Nuclear Physics, Section B, 406 (3), 595-630 • SU(N) Antiferromagnets and strongly coupled QED: Effective field theory for Josephson junctions arrays (1993) Nuclear Physics B (Proceedings Supplements), 33 (3), 192-208 • Anyonization of lattice Chern-Simons theory (1992) Annals of Physics, 217 (1), 66-104 • Charge distribution in two-dimensional electrostatics (1992) Physical Review B, 45 (20), 12084-12087 • Dynamical violation of parity and chiral symmetry in three-dimensional four-Fermi theory (1992) Physical Review D, 45 (4), 1342-1354 • Eta-invariants and fermion number in finite volume (1992) Physics Letters B, 284 (3-4), 317-324 • Fractional spin, magnetic moment and the short-range interactions of anyons (1992) Nuclear Physics, Section B, 368 (3), 718-742 • Gauge theories on a cylinder (1992) Physics Letters B, 296 (1-2), 117-120 • Induced QCD and hidden local ZN symmetry (1992) Physical Review Letters, 69 (24), 3435-3438 • Intersection forms and the geometry of lattice Chern-Simons theory (1992) Physics Letters B, 286 (1-2), 118-124 • Investigations of pairing in anyon systems (1992) Physical Review B, 46 (17), 11220-11223 • Strong coupling gauge theory, quantum spin systems and the spontaneous breaking of chiral symmetry (1992) Physics Letters B, 297 (1-2), 175-180 • Two-loop analysis of non-Abelian Chern-Simons theory (1992) Physical Review D, 46 (12), 5521-5539 • ZN domains in gauge theories with fermions at high temperatures (1992) Physics Letters B, 277 (3), 331-336 • ZN phases in hot gauge theories (1992) Physical Review D, 46 (4) • Anyonization and novel braid structure in dumb-bell Chern-Simons theory (1991) Physics Letters B, 266 (3-4), 375-381 • Quantized magnetic susceptibility in (2+1)-dimensional gapless semiconductors (1991) Physical Review Letters, 66 (20), 2653-2656 • Scale and conformal invariance in Chern-Simons-matter field theory (1991) Physical Review D, 44 (6) • Topological quantum theories and integrable models (1991) Physical Review D, 44 (12), 3899-3905 • White noise and heating of quantum field theory in an open system (1991) Physical Review D, 44 (10), 3218-3229 • Dynamical mass generation in 3D four-fermion theory (1989) Physical Review Letters, 63 (24), 2633-2636 • Exotic spin and statistics in (2+1)-dimensional canonical quantum field theory (1989) Nuclear Physics, Section B, 328 (3), 753-776 • Fermionized spin systems and the boson-fermion mapping in (2+1)-dimensional gauge theory (1989) Physics Letters B, 226 (1-2), 107-112 • Path integral analysis of chiral bosonization (1989) Nuclear Physics, Section B, 312 (1), 197-226 • Renormalization of the statistics parameter in three-dimensional electrodynamics (1989) Physical Review Letters, 62 (7), 715-718 • Semenoff replies (1989) Physical Review Letters, 63 (9), 1026 • The Chern-Simons term versus the monopole (1989) Nuclear Physics, Section B, 328 (3), 575-584 • Canonical quantum field theory with exotic statistics (1988) Physical Review Letters, 61 (5), 517-520 • Chiral bosonization and local Lorentz-invariant actions for chiral bosons (1988) Physical Review Letters, 60 (25), 2571-2574 • Comment on fractional statistics (1988) Physical Review Letters, 60 (25), 2703 • Conformal field theory and the geometry of second quantization (1988) Physical Review D, 37 (10), 2934-2945 • Erratum: Nonassociative electric fields in chiral gauge theory: An explicit construction (Physical Review Letters (1988) 60, 15 (1590)) (1988) Physical Review Letters, 60 (15), 1590 • Index theorems and superconducting cosmic strings (1988) Physical Review D, 37 (10), 2838-2852 • Lorentz invariant exact solution of the anomalous chiral Schwinger model (1988) Zeitschrift für Physik C Particles and Fields, 39 (2), 269-273 • Nonassociative electric fields in chiral gauge theory: An explicit construction (1988) Physical Review Letters, 60 (8), 680-683 • Induced fractional spin and statistics in three-dimensional QED (1987) Physics Letters B, 184 (4), 397-402 • Quantum geometry of a conformal field theory (1987) Physics Letters B, 198 (2), 209-214 • An exact solution of the chiral schwinger model (1986) Physics Letters B, 175 (4), 439-444 • Cutkosky rules for condensed-matter systems (1986) Physical Review B, 34 (6), 4338-4341 • Discontinuities of green functions in field theory at finite temperature and density (II) (1986) Nuclear Physics, Section B, 272 (2), 329-364 • Fermion number fractionization in quantum field theory (1986) Physics Reports, 135 (3), 99-193 • Gauge algebras in anomalous gauge-field theories (1986) Physical Review Letters, 56 (10), 1019-1022 • Index theorems on open infinite manifolds (1986) Nuclear Physics, Section B, 269 (1), 131-169 • Induced quantum curvature and three-dimensional gauge theories (1986) Nuclear Physics, Section B, 276 (1), 173-196 • Non-Abelian adiabatic phases and the fractional quantum hall effect (1986) Physical Review Letters, 57 (10), 1195-1198 • Squeezed strings (1986) Physics Letters B, 176 (1-2), 108-114 • Thirring strings (1986) Physics Letters B, 181 (3-4), 244-248 • Anomalies, Levinson's theorem, and fermion determinants (1985) Physical Review D, 32 (2), 471-475 • Comment on "induced Chern-Simons terms at high temperatures and finite densities" (1985) Physical Review Letters, 54 (19), 2166 • Discontinuities of green functions in field theory at finite temperature and density (1985) Nuclear Physics, Section B, 260 (3-4), 714-746 • Erratum: Quantum holonomy and the chiral gauge anomaly (Physical Review Letters (1985) 55, 23, (2627)) (1985) Physical Review Letters, 55 (23), 2627 • Evolution equation for the Higgs field in an expanding universe (1985) Physical Review D, 31 (4), 699-703 • Feynman rules for finite-temperature Greens functions in an expanding universe (1985) Physical Review D, 31 (4), 689-698 • Fractional fermion number in Kaluza-Klein theories (1985) Physical Review D, 31 (6), 1324-1326 • Quantum holonomy and the chiral gauge anomaly (1985) Physical Review Letters, 55 (9), 927-930 • Real-time Feynman rules for gauge theories with fermions at finite temperature and density (1985) Zeitschrift für Physik C Particles and Fields, 29 (3), 371-380 • Spectral flow and the anomalous production of fermions in odd number of dimensions (1985) Physical Review Letters, 54 (9), 873-876 • Condensed-Matter simulation of a three-Dimensional anomaly (1984) Physical Review Letters, 53 (26), 2449-2452 • Electric Charge of the Magnetic Monopole (1984) Physical Review Letters, 53 (6), 515-518 • Finite-temperature quantum field theory in Minkowski space (1984) Annals of Physics, 152 (1), 105-129 • Fractional fermion number at finite temperature (1984) Physics Letters B, 135 (1-3), 121-124 • Spectral asymmetry on an open space (1984) Physical Review D, 30 (4), 809-818 • Thermodynamic calculations in relativistic finite-temperature quantum field theories (1984) Nuclear Physics, Section B, 230 (2), 181-221 • Axial-anomaly-induced fermion fractionization and effective gauge-theory actions in odd-dimensional space-times (1983) Physical Review Letters, 51 (23), 2077-2080 • Continuum quantum field theory for a linearly conjugated diatomic polymer with fermion fractionization (1983) Physical Review Letters, 50 (6), 439-442 • Fluctuations of fractional charge in soliton anti-soliton systems (1983) Nuclear Physics, Section B, 225 (2), 233-246 • Functional methods in thermofield dynamics: A real-time perturbation theory for quantum statistical mechanics (1983) Nuclear Physics, Section B, 220 (2), 196-212 • Spectral asymmetry, trace identities and the fractional fermion number of magnetic monopoles (1983) Physics Letters B, 132 (4-6), 369-373 • Fermion zero modes, supersymmetry, and charge fractionalization of quantum solitons (1982) Physical Review D, 25 (4), 1054-1064 • Quantum numbers of solitons in a fermion-soliton system (1982) Physical Review D, 26 (4), 938-940 • Secret supersymmetry and the fractional charges of quantum solitons (1982) Physics Letters B, 113 (5), 371-374 • Asymptotic condition and Hamiltonian in quantum field theory with extended objects (1981) Physical Review D, 24 (2), 406-415 • A perturbative look at the dynamics of extended systems in quantum field theory (1980) Journal of Mathematical Physics, 22 (10), 2208-2222 • Quantum Electrodynamics in Solids (1980) Fortschritte der Physik, 28 (2), 67-98 • Extended objects in quantum field theory (1979) Journal of Mathematical Physics, 21 (7), 1761-1769 • A new double-scaling limit of N = 4 super-Yang-Mills theory and pp-wave strings Nuclear Physics B, 661 Membership Status Member of G+PS Program Affiliations Academic Unit(s)
{"url":"https://www.grad.ubc.ca/researcher/13979-semenoff","timestamp":"2024-11-12T23:11:09Z","content_type":"text/html","content_length":"167086","record_id":"<urn:uuid:05641eed-04c2-4151-bf17-27ccd4b7a472>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00066.warc.gz"}
Non-linear time lower bound for Boolean branching programs for FOCS 1999 FOCS 1999 Conference paper Non-linear time lower bound for Boolean branching programs We prove that for all positive integer k and for all sufficiently small qq > 0 if n is sufficiently large then there is no Boolean (or 2-way) branching program of size less than 2 qqn which for all inputs X qq {0, 1, ..., n - 1} computes in time kn the parity of the number of elements of the set of all pairs 〈x,y〉 with the property x qq X, y qq X, x < y, x + y qq X. For the proof of this fact we show that if A = (a i,j) i=0,j=0n is a random n by n matrix over the field with 2 elements with the condition that 'qqi, j, k, l qq {0, 1, ..., n - 1}, i + j = k + l implies a i,j = a k,l' then with a high probability the rank of each δn by δn submatrix of A is at least cδ|log δ| -2n, where c > 0 is an absolute constant and n is sufficiently large with respect to δ.
{"url":"https://research.ibm.com/publications/non-linear-time-lower-bound-for-boolean-branching-programs","timestamp":"2024-11-03T07:03:40Z","content_type":"text/html","content_length":"62754","record_id":"<urn:uuid:f6b1f9e3-a346-4f4f-bf15-90469508fb6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00237.warc.gz"}
s for Posts for the month of January 2012 There was a meeting on Fri the 27th Jan '12. There we though about the relevant parameters for the binary simulation again. Characteristic parameters: • a, the separation of the stars, which we want to keep 20AU. • AGB gravity • AGB wind temperature, T[w] • AGB wind speed, v[w], should be ~ 20km/s • q=M[primary]/M[secondary] • gamma, which for AGB winds should be close to 1 • sigma[s]= gravity softening radius of the secondary / grid resolution = r[soft]/ dx • sigma[B]= gravity softening radius of the secondary / Bondi accretion radius = r[soft]/r[B] , where r[B]=2GM[secondary]/(v[w] ^2 + c[s] ^2 + v[secondary] ^2), c[s] is the AGB wind's sound speed and v[secondary] is the orbital velocity of the secondary with respect to the center of mass, located at the origin. Test 1: Binaries • Isothermal solver. It's the first time I use it for this problem • q= 1.5, as before • a=20AU • T[w]=1000K • sigma[s]=4cells • sigma[B]=10, i.e. 40 cells per r[B]. This is accomplished with a grid of 8^3 computational units with a grid of 32^3+5particle refinement levels. • v[w]=20km/s • t[final]=5 orbits. 29 Jan, 14:38. Test 1 started running on Sat the 28th morning in bluehive. I've seen some high cfl number reports after the secondary's gravity is turned on, which significantly decrease the timestep and slow the run. I've been (i) restarting with smaller cfl numbers, (ii) trying to progressively increase the secondary's gravity. The secondary has completed about .5 orbits so far. At this point, the gas that has been captured by the secondary star does not look like a disk: I'm monitoring the progress and have 2 instances of the problem running. 30 Jan 8:15. Test1 is still running (see image), t=.8orbit. I've been finding high cfl reports which sometimes freeze the code and sometimes significantly reduce the timestep. I'm looking into the part of the code dealing with this, but in the mean time I've been running with cfls .1-.3, so the simulation is going slow. Another instance of the problem will start running in bluegene today (at some point). The upper right panel shows a zoom of the left panel. The bottom right panel show a zoom with velocity vectors. The flow does not look like a keplerian one at this point. Seems to be early to judge whether this is correct or not. 31jan12 9:17am. InterpOrder=2. The flow does not look like a disk but we're far from 5 orbits. Compare the image to the right with the one of the 29th Jan (two above) to see the differences with the interpOrder=3 case. It's going very slow for the reasons reported in ticker167 (https://clover.pas.rochester.edu/trac/astrobear/ticket/167) which I'm working on. In the meantime I've been running (in bluehive and have another instance of the sim waiting in bluegen's queue) with small cfls ~.09-.1. 1 Feb '12 7:55am. InterpOrder=2 continues, time=.8orbit. The flow about the secondary looks more uniform than before (see the zoomed in image →) 7 feb Running well in bluehive, 64 afrank procs. I've reduced one amr level so we can get data asap. This run includes some fixes and solver parameters that we've been discussing. If test 1 fails (i.e. it produces a tilted or an amorphous disk) then in test 1.1 I will increase sigma[B] for a fixed sigma[s]. If test 1.1 fails (ditto) then in test 1.2 I will increase sigma[s] and will keep everything else fixed. Test 2: Disk at the centre of the grid • Isothermal solver. Its the first time I will use it for this problem. • Disk to ambient density contrast of 10^6 • sigma[s]=4cells • extrapolated BC. 30 Jan 8:22. Test 2 has completed 4 orbits (see images). The central part of the disk shows an inclination with respect to the orbital plane of ~ 30^o, from t=1orbit on. I do not yet understand why this happens. The outer parts of the disk remain fairly axisymmetric though. These conditions do not vary too much in time. The grid is 32^3 +3amr, with a disk radius of 1 comp units. ~1 orbits ~2 orbits ~3 orbits ~4 orbits Below is the early velocity field evolution, superimposed on the logarithmic gray scale of the density. Zoom in: edge-on view | pole-on view The grid seems small for the problem. 31jan12 9:17am. Here's a movie of test 2 (amr+interpOrder=3) showing the disk plane velocity field superimposed on the log(density). c[s]=1. Arrows have a fixed length and are color coded. I see: a keplerian-like vel distribution at t=0. Then there's a fast radial expansion which, I think, results from the initial pressure gradient between the disk and the amb. Would this, or a similar process, happen for IC with a disk-to-amb density contrast=1 and a super Keplerian disk vel distribution (in that case there should be no poloidal expansion though)? Then the flow still shows a velocity that scales down with r and the density seems to show some spiral-like pattern. The right panel (zoom) shows gas located well within the Bondi radius. The initial 'red' central vels seem to quickly disappear and some of the green ones do too -the central disk seems to decelerate? After such vel change, the distribution of the central disk seems to change modestly. 31jan12 10:43am. Here's a movie of test 2.1 (fixed grid+interpOrder=2). Everything is as in the movie of test 2 (above). Group research meeting. After discussing the results of tests 2 and 2.1 we agreed I'll try: • a colder disk, t=.1K, instead of 1000K • taller cylindrical (not flared) disk, such that the disk height is ~ 2r[soft] (it was ~.8 r[soft]) • more time resolution • time[final]=2orbits, for the warping should happen in this time. If the above still produce warping: • phi=pi/2, i.e. make the disk rotate about the y axis, instead of the z axis to catch potential grid related bugs • hydrostatic disk, with no ambient medium. Find our previous research at:https://clover.pas.rochester.edu/trac/astrobear/blog/binary After seeing strange behavior - especially wrt pressure I decided to check that the interpolation methods used in prolongation were accurate. It is a little difficult to visualize this but it is • Start a run in fixed grid. • Select the frame you want to see interpolated. • Set the restart_frame to be that frame in global.data • Set the final frame to be 1 frame later. • Now the trick is to get the time between those two frames to be very small, so that there is no temporal development. To do this… □ Get the time stamp of the restart frame (restart_time = restart_frame/final_frame*final_time) assuming start_time was originally 0 □ Set the start_time to be restart_time-(restart_frame*epsilon) □ Set the final_time to be restart_time+epsilon • Then increase maxlevel to whatever desired level and set qtolerances to trigger refinement. • Then restart the simulation • It should take one root level step and then output a single frame. When viewed it should show the interpolated values. Here are images from an example where 4 levels of refinement were added to a BE sphere profile. Constant (0) MinMod (1) SuperBee (2) VanLeer (3) Monotone Centered (4) Parabolic* (5) The "plateaus" in all but the linear and the parabolic are due to the slope being limited so as not to introduce any new extrema. The Parabolic does not conserve the total quantity and is only used for the gas potential (and time deriv). The 'linear' was just added but could likely have issues near shocks since it is not limited. But all seem to be working 'correctly'. There is a shortcoming in that each limiter is considered in each direction independently. A more multidimensional reconstruction that used a multidimensional limiter would ease the plateaus from persisting along the coordinate directions. For a fixed grid run we need phi before and after every advance. So for 4 time steps, we would need to solve for phi 5 times. With no additional elliptic solves we can then use phi before and after to achieve second order time accuracy. For an AMR patch we need to take 2 time steps so we need 3 solutions for phi. We could use the prolongated parents version of phi, or the previous overlapping grids latest version of phi. In principal they should agree at the coarse boundaries, since the previous grids solution to phi was subject to the same boundary conditions (due to phidot at coarse boundaries exactly agreeing with parent grids). Furthermore the previous grids solution to phi should be more accurate since it accounts for the distribution of gas on the current level. Otherwise level n would end up using values for phigas that were successively prolongated from level 0. So it is important to use the overlapped version of phigas and not the prolongated. However, phidot should not be copied from old grids since that phidot is from a previous time step. This can cause problems if phidot changes quickly. There will also be a mismatch between the prolongated and overlapped phidot which can cause differences between adjacent cells leading to strong gradients. So it is important to use the prolongated value of phidot and not the overlapped. Additionally since phidot is recalculated before successive prolongations, the same issue with repeated prolongation does not occur. This is implemented in the code by setting ProlongateFields = (/…,iPhiGas,iPhiDot,…/) and by setting EGCopyFields = (/iPhiGas/) And then following a poisson update ghosting iPhiDot. To make it 2nd order we need to ghost iPhiGas as well after the poisson update, however this means that we don't need to copy iPhiGas before the 2nd step. Please add more. Soker & Rappaport 2000 http://adsabs.harvard.edu/abs/2000ApJ...538..241S M&M '98 (SPH) http://adsabs.harvard.edu/abs/1998ApJ...497..303M Theuns (SPH) http://adsabs.harvard.edu/abs/1993MNRAS.265..946T Val-Borro, Mira (2D, flash) http://adsabs.harvard.edu/abs/2009ApJ...700.1148D Makita et al. 2000, 3d, plane symmetric, static disk interacting with wind http://adsabs.harvard.edu/abs/2000MNRAS.316..906M 3D, novae blast http://adsabs.harvard.edu/abs/2008A%26A...484L...9W Close binaries: Banerjee, Puldritz & Holmes 2004, simulations of collapsing, rotating Bonnor–Ebert spheres http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2966.2004.08316.x/abstract Ryde et al. 2000, Mira http://adsabs.harvard.edu/abs/2000ApJ...545..945R Guerrero & Miranda 2011, NGC 6778, http://xxx.lanl.gov/abs/1201.2042 For those of you interested in Shape and its plotting/modeling capabilities: Been working on the flux limiting explicit thermal conduction and the presentations. see separate project pages "Magnetized Clumps" and "Magnetized Clumps with Shocks". AstroBEAR uses huge virtual memory (comparing with data and text memory) when running with mulpi-processors: One Processor: Four Processor: http://www.pas.rochester.edu/~bliu/Jan_24_2012/AstroBEAR/bear_2n8p4t.png To understand the problem, I tried a very simple Hello World program. Here are the results from TotalView: One Processor: http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t.png Four Processor: http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n4p4t.png It's fair to say that the big virtual memory issue is not related to the AstroBEAR code. It's more related to openMPI and the system. I saw online resources arguing Virtual Memory includes memory for shared libraries which depends on other processes running. It makes sense to me. Especially I ran the Hello World program with same setup but at different times and found out it's using different virtual memories http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t_2ndRun.png http://www.pas.rochester.edu/~bliu/Jan_24_2012/HelloWorld/1n1p1t_3run.png I'm reading more on virtual memory and shared libraries. • Google calendar up and running. So far it only has the Astro lunches and colloquia, our weekly group meeting, PAS colloquia, and journal club. Any one in our group can create/manage the events on the calendar. What else do we want on this calendar? The AST462 course (not really a group thing, but it does affect Adam, Erica, and I), other weekly meetings, upcoming conferences? Leave comments! Let's use this blog as a discussion as it was originally intended! • Speaking of blog comments, did anyone ever look into email notifications, or do we even want them? It seems that there is a plugin for this: http://trac-hacks.org/wiki/FullBlogNotificationPlugin . Should I take a stab at this or leave it for someone who is more familiar with trac and how plugins are handled? • Zcooling routines have been implemented. I'm now in the process of testing them with my 1DJet simulation. I'll try out Jonathan's new 1D implementation as well. The documentation on Zcooling is coming along. See zcooling for more details. The table reformatting stuff will be important for anyone who gets this code in the future. The section on the interpolation that I used will be added • I also started documentation on the Wind object, since I made some changes to it in order to use it in my 1DJet module. See the wiki page WindObjects. UPDATE : Jonathan did look into email notification for blog comments a while ago. The Announcer plugin is supposed to support email notification for blogs in the same way it does for tickets. However, Jonathan could not get this to work. I looked into it today as well and found the same things that Jonathan did. The solution is to use the separate FullBlogNotification plugin. This is a standalone plugin that does its own notification without going through Announcer. The way notifications work is the wiki sends an email to username@pas.rochester.edu. Announcer allows each user to specify an email address in their preferences on the wiki if they don't want to use this pas address. In short, this feature will not work for blog comments. So, I gave up on trying to integrate the potential due to particles with that due to the gas to get a better gas potential at coarse fine boundaries. Now, only and are stored in q. Gas that is accreted during a finest level step will not be reflected in the boundary values to the gas potential at the coarse boundaries, until after the coarser grid updates. Fortunately, since the accretion is incremental this should not pose a significant problem. Additionally if the accretion is steady, then the time derivative of the gas potential should include this effect. I have performed a suite of 2D simulations with various levels of refinement and all seem to handle the collapse nicely. And the 1 3D TrueLove problem I ran also seems to handle the collapse well. In the absence of accretion or particles, the scheme should be second order in time - although I have yet to verify this. Also, it was necessary to reduce the softening length to 1 cell, instead of 4 cells, since the particle would form at a cell corner, but the gas would collect in a cell center. Due to the softening the gas cell would exert a greater pull than the particle which would cause the gas to wonder away from the particle. The output is at /alfalfadata/erica/PROJECTS/BEsphere2D/out and the executable was made from: /alfalfadata/erica/DEVEL/BEsphere2D. Mainly the modification in the code had to do with moving the clump object off of the default center position of the box (0,0,0). Adding the clump param, xloc, to the problem module allows one to read in the location of the clump. I set it to be at 2,2,2. To make sure the bounds in global.data were set such that the sphere was sliced down the middle, the domain was set to run from (0,0,2) to (4,4,2). That is a slice in the xy plane at z=2, eg. a slice down the center of the sphere at xloc = 2,2,2. 4 feb '12. Three runs from the table below have completed. Bin is in the process of producing movies and plots: Four of the new runs (see table below) are waiting in bluegene's queue. We should have them -along with plots, diagrams, etc.- in a week time, before Adam's visit to Bruce. Additionally, Bin has 2 of these sims in the standard queue at bluehive. Note that the grid is slightly larger than before so we can see the edges of the object. Also, I have reduced 1 amr level because it adds time to the runs and post processing and doesn't give us significant info about the object's dynamics. Bruce and I have discussed about the separation of the ambient rings. The setup in runs 4, 5 and 8 (see table) seems to be consistent with the obs. Telecon with Bruce, Adam and Jason on 26 jan '12. • We don't need high resolution (256x128x128+2amr) to compare with the observations, which is the main objective of the paper. We will drop 1, 2 may be, refinement levels which should decrease the simulation production time. • We don't need a constant ambient density model • We'll do velocity and density line-outs along the jet-axis at y={-2/4,-¼,0,¼,2/4} R[clump] • We'll do PV diagrams with a slit displaced R[clump]/2 from the symmetry axis • Separation of the ambient rings form observations: ~1e16 cm = 0.9kpc*3e21cm/kpc/206265. We're using a separatin that is twice as long in the models for some experimentation showed the rings to be too close otherwise. New simulations: Model name Running order Running schedules No. clumps Jet Dens contrast with ambient Ambient Resolution a 1 27 Jan. Completed 1 0 200 (as the previous runs) r^-2+rings 128x80x80+2amr b 2 27 Jan. Completed 0 1 200 r^-2+rings ditto c 3 28 Jan. Completed 1 0 50 r^-2 150x100x100+2amr d 4 28 Jan Completed 1 0 50 r^-2+rings ditto e 5 30 Jan Running 0 1 50 r^-2 ditto f 6 31 Jan Running 0 1 50 r^-2+rings ditto g 7 31 Jan Queued 2; one at the initial condition and another after the 1st one has left the grid 0 50,50 r^-2 ditto h 8 1 Feb Queued 2; one at the initial condition and another after the 1st one has left the grid 0 50,50 r^-2+rings ditto New synthetic diagrams along the correct slit position angle (turns out you cannot turn the slit in shape. The PV diagrams that I posted on Saturday correspond to a slit position angle of 0 degrees, but we want this to be 90deg. Thus I've rotated the objects with respect to the slit and had used more data from the simulations in order to see bow shocks). • Clump/jet are shown going up because shape's slit is like that (there's no way around this in shape. I could rotate the synthetic images later on (with gimp) for the paper versions). • Gray scales in the left column (density maps) are logarithmic and have the same limits for all images. • Gray scales in the emission columns (2 and 4) are logarithmic and do not have the same limits. • Gray scales in the pv columns (3 and 5) are linear and do not have the same limits. Clump, stratified ambient density: Log(dens) [cm^-3] Emission (dens^2) 90^o PV 90^o Emission (dens^2) 60^o PV 60^o Clump, constant ambient density: Log(dens) [cm^-3] Emission (dens^2) 90^o PV 90^o Emission (dens^2) 60^o PV 60^o Jet, stratified ambient density: Log(dens) [cm^-3] Emission (dens^2) 90^o PV 90^o Emission (dens^2) 60^o PV 60^o Jet, constant ambient density: Log(dens) [cm^-3] Emission (dens^2) 90^o PV 90^o Emission (dens^2) 60^o PV 60^o Some notes comparing Dennis et al. 2008 with our sims. -His clump/jet vels are 100m/s. Ours are 400km/s. -His emission and pv diagrams are for 2.5D so that's why they have more resolution, but his 3d emission maps have a little less resolution than our shape ones. -His Fig8 is the average vx. Our vx lineouts are not averaged; they are along a single line n the x direction. and subsequent comments. I installed a couple of plugins so now… • Blog authors will be e-mailed when someone comments on their blog posts (although this e-mail can only be sent to their username@pas.rochester.edu) • Wiki pages now have a 'Watch This' link in the upper right corner. If you select to watch a page, you will receive e-mails whenever the page is modified. • Under the Preferences menu (right next to logout), their is an announcement tab where you can adjust various settings related to ticket notifications. (I would suggest folks check all of the boxes under "ticket component subscriptions" if they want to stay on top of all tickets) Some sample numbers for 3D runs • 32^3+4 = 512^3 effective with a high filling fraction □ 2.2 GB / frame x 100 frames = .2 TB □ 64 procs @ 1.2 GB of mem / processor = 76 GB of active memory □ 12800 SU's • 32^3+5 = 1024^3 effective with a smaller filling fraction □ 1.5 GB / frame x 100 frames = .15 TB □ 64 procs @ 1 GB of mem / processor = 64 GB of active memory □ 19200 SU's • 32^3+5 = 1024^3 effective with a high filling fraction (projected) □ 12 GB / frame x 100 frames = 1.2 TB □ 512 procs @ 1 GB of mem / processor = 512 GB of active memory □ 102400 SU's Projected needs for research • 6-10 runs for colliding flows • 6-10 runs for gravo collapse • 12-20 runs at 19200 SU's = 230400 - 384000 SU's • 12-20 runs at .4 TB = 4.8 - 8 TB of data This is effectively the entire 192 infiniband nodes for 50-80 days or all of bluegene for 1 to 2 weeks (assuming memory issues are not a problem). If we had exclusive access to a quarter of bluegene, we would be able to finish these runs in 1 to 2 months. Projected needs for groups research This is of course, just for the research I would like to do in the next few months. When you consider the rest of the group we are looking at maybe 3-4 times that? As far as storage space is concerned, we currently have 21 TB of storage space of which 5 TB is available. If each grad student/post doc has 5-8 TB of active data that needs to be stored until a paper is published, then we would need at least 30 TB. (That's assuming folks get rid of their old data as soon as the results are published). At some point (when we have the new bluegene machine) we will be generating too much raw data to effectively store and we'll have to figure out how to generate useful output (ie spectra, column density, only coarse grids, etc…) Here are movies of a higher resolution BE run: http://www.pas.rochester.edu/~erica/upperboundBE.gif — upper bound on color bar http://www.pas.rochester.edu/~erica/noboundBE.gif — no bound on color bar http://www.pas.rochester.edu/~erica/reg1182012.gif — initial bounds on color bar • 86^3 root grid +3 levels • RefineVariableFactor set to 0 to trigger refinement based on Jeans length • Ambient medium density = density at outer edge of sphere, entire domain is isothermal • boundary conditions = all velocities into and out of grid are stepped on • Poisson BC's = multipole I killed the run after the diamond shape happened. We have seen this before and contributed it to the pressure waves running off corners of box. However, the disconcerting asymmetry at the last frame is new.. I am going to move the edges of box further away from the sphere and change the boundary conditions to periodic to mimic the sphere sitting in a homogenous, 'infinite' medium. . 86^3 base grid + 5 levels, periodic BC's on box: I have a different simulation set up that is now more suitable for what we will want for the HST proposal and further research. It is effectively a 1D pulsed jet running through an ambient medium. The jet is pulsed via velocity perturbations. I added perturbation parameters to the wind object, so anyone that wants to simulate something with a pulsed inflow might be able to use that. It currently supports square wave and sine wave perturbations. I plan on adding a random perturbation option as well. The 1D jet simulation also has a magnetic field perpendicular to the propagation, and cooling can be turned on. Here's an image of what I have so far on a run with no cooling: I've been reading up on interpolation schemes. Tomorrow I will dig a little deeper into the cooling routines, and really start making progress on adding Pat's cooling tables. Hopefully, next week we'll have this new form of cooling implemented and the aforementioned simulations running. Particle Kicks: http://www.pas.rochester.edu/~erica/SinksUpdate1172012.html Both gas-gas and particle-gas conservation of momentum have been implemented. The kick that was present before is no longer present. Bonnor Ebert: Working on simulations. Would like to think more about the system I am trying to model and the model I am trying to create for that system. I would like to choose appropriate boundary conditions and wonder whether I should be initializing a stable BE sphere or an unstable one when I run the tests I emailed Phil about. Here is a heavy run: http://www.pas.rochester.edu/~erica/BEupdate1172012.html. I am getting HIGH cfls and the totals.dat file is wonky.. Working on getting a 2D version of the BE problem up and running. I'll talk about my poster and the AAS meeting. Currently working on the paper of magnetized clump with thermal conduction. I'd also like to discuss what we will present at the HEDLA meeting, coming Apr 30th. The AAS meeting went quite well. I've got positive feedback about my talk (http://www.pas.rochester.edu/~martinhe/austin.pdf) and chatted with most of the gang in the PN paper and with Dongsu Ryu (who invited me to give a talk in South Korea). Everyone sent greetings to Adam. I've sent the abstract and registered for the HEDLA meeting. Adam, Eric and I should meet soon so discuss the new tower runs. The four CRL618 runs have finished. Bin (see her post) is making movies. Do we want synthetic emission and PV diagrams of these runs? I'm updating the PN paper. The new version will be submitted to MNRAS. I'm almost done. I should have it ready for Adam's and Eric's revision by Wednesday morning. I'm writing the very last part of the magnetic tower paper. This is section 3.3, Energy fluxes, where we compare our calculations of the Poynting to kinetic energy ratio with those of of other flows, as requested by Pat. BTW, see http://arxiv.org/abs/1201.2681 Happy New Year all. I am working on setting up new BE stuff to begin a collaboration with Prof. Phil Myers - reading his latest papers and reviewing old studies. I am also checking on the status of the conservation of momentum added to the sink algorithms. http://www.pas.rochester.edu/~erica/SinksUpdate.html http://www.pas.rochester.edu/~erica/comparison.gif The nonequilibrium cooling routines are coming along. This has also been called the modified Dalgarno-McCray cooling. The files have been ported over from astrobear 1, and the necessary changes have been made to make it work in astrobear 2. I used Bin's radiative instability module as a quick check to see if NEQ cooling was working. You can tell from the output that it is definitely cooling, and it is cooling more than DM cooling. I started a new wiki page on this: u/neqcooling. So far this page just contains some visit plots from the radiative instability module. I am now working with a different module to just run a shock through an ambient material. This set up will make it easier to see the post-shock material, because the radiative instability module will often produce oscillations. After I determine that this is accurate, I can start working on the 6th type of cooling which will include Pat's tables, which he has already sent to me. Need to: • [DEL:Remove self gravity source term from source module:DEL] • [DEL:Add self gravity source terms to reconstructed left and right states :DEL] • [DEL:modify final fluxes to include self gravity source terms.:DEL] Here's a plot showing momentum conservation with the new approach However there were striations These were due to the approximation it uses to calculate the density… Turns out when the errors in phi are large compared to the density this can cause these striations. Here's a plot showing the relative error in the derived rho used in the source calculation and the actual rho. Reducing hypres error tolerance from 1e-4 to 1e-8 improved the situation and lessend the striations Finally here's a comparison of the old and new methods • [DEL:Modify gravitational energy source terms.:DEL] • [DEL:These modified fluxes will need to be stored in the fixup fluxes so they can be properly differenced between levels to ensure strict momentum conservation.:DEL] □ [DEL:this will require a second call to store fixup fluxes. Perhaps a generic routine in data declarations would be better for storing fixup fluxes, and emfs, etc…:DEL] • [DEL:extend same modifications to 2D and 3D by adding additional source terms in transverse flux update.:DEL] What to do about phi • To be second order we need the gas potential before and after a hydro step. • Normally the gas potential after a hydro step would be the same as that before the next hydro step (requiring 1 poisson solve per hydro step) however with accretion, the gas in each cell (and the corresponding gas potential) can change. • Accretion, however, should not change the total potential (except for differences in softening) - so recalculating the particle potential should allow for modifying the gas potential without another poisson solve. So a fixed grid algorithm would look like the following: • Do a hydro step using original gas potential and predicted time centered particle potential. • calculate new gas potential (poisson solve) and ghost • momentum and energy flux correction • Advance particles using back force from gas • Calculate new sink potential • Store new total potential • Perform accretion • Update new sink potential using new particle masses and difference gas potential keeping total potential fixed • Repeat For AMR we need a way to update the gas potential on level l at coarse grid boundaries independent of a level l poisson solve. This can be done using phi and phidot. Then whenever we set the gas potential for the poisson solve we use phi, phidot, and phisinks. So the algorithm looks like the following: Root Level • Do a hydro step using original gas potential and predicted time centered particle potential. • Calculate new sink potential using predicted particle positions • calculate new gas potential (poisson solve) and ghost • momentum and energy flux correction using phi_Gas and phi_gas_old (stored in phi) • Update total potential and time deriv using sink potential and gas potential. phi_new=phi_gas+phi_sinks phi_dot=phi_new-phi+old_phi_sinks • Prolongate old fields and new total potential time deriv □ After finer level steps, particle positions and masses will be different. So update phisinks as well as phigas keeping phi constant. Intermediate Level • Do a hydro step using original gas potential and predicted time centered particle potential. • Calculate new sink potential using predicted particle positions • Update gas potential at coarse fine boundaries using phi, phidot, and predicted phi sinks • calculate new gas potential (poisson solve) and ghost • momentum and energy flux correction • Update total potential and time deriv using sink potential and gas potential. • Prolongate old fields and new total potential time deriv • After finer level steps, particle positions and masses will be different. So update phisinks as well as phigas keeping phi constant. • Repeat Finest Level • Check for new particles (do this after ghosting) • Perform accretion • After accretion, particle positions and masses will be different. So update phisinks as well as phigas keeping phi constant. • Do a hydro step using gas potential and predicted time centered particle forces. • Advance particles using back force from gas • Calculate new sink potential using advanced particle positions • [DEL:Calculate new sink potential using predicted particle positions:DEL] • Update gas potential at coarse fine boundaries using phi, phidot, and phi sinks • calculate new gas potential (poisson solve) and ghost • momentum and energy flux correction • Update total potential and time deriv using sink potential and gas potential. • [DEL:After finer level steps, particle positions and masses will be different. So update phisinks as well as phigas keeping phi constant.:DEL] • Repeat Of course the calculation of the predicted particle positions on each level (other than the maxlevel) can be as sophisticated as necessary. One could use the original particle positions and velocities alone, or could advance the level's own version of the particles using the same advance algorithm as well as gas forces etc… Note this is necessary if one wants to thread these advances, since the particle masses may change due to accretion in the middle of a coarse level update. But this threading would still require use of the old time derivative to update the ghost zones instead of the forward time derivative. The code can now run 1D AMR problems and produce output to chombo. Just set nDim = 1. The chombo files will look funny with AMR turned on, but that is just because chombo has to believe they are 2D AMR data sets. Because the data is in fact 1D, it thinks that some data is missing, and leaves these areas white. You an do line-outs along the y=0 boundary to generate curves. Here I've plotted the data on levels 0 and 1 (redline) with data on levels 0-4 (blueline). (Also have not tested self gravity or other source terms, or several of the 'objects', but most will need very minor modification to work). It is not checked in, but interested folks can pull from alfalfadata/johannjc/scrambler_1Dworks The non-equilibrium cooling routines appear to be working. I made a new page where I will document things. So far this page only has some initial plots to look at for the 1/6/12 conference call. u/
{"url":"https://bluehound2.circ.rochester.edu/astrobear/blog/2012/1","timestamp":"2024-11-04T11:56:18Z","content_type":"text/html","content_length":"138644","record_id":"<urn:uuid:98a5cf0d-3c22-4000-a6ad-ade75e7c31b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00879.warc.gz"}
More Playing with 2D Try continuing these patterns made from triangles. Can you create your own repeating pattern? Can you help the children in Mrs Trimmer's class make different shapes out of a loop of string? How many different ways can you find to join three equilateral triangles together? Can you convince us that you have found them all? Watch this "Notes on a Triangle" film. Can you recreate parts of the film using cut-out triangles? How many triangles can you make using sticks that are 3cm, 4cm and 5cm long?
{"url":"https://nrich.maths.org/more-playing-2d-shape-lower-primary-0","timestamp":"2024-11-11T00:57:10Z","content_type":"text/html","content_length":"43329","record_id":"<urn:uuid:95adc740-9b56-47a5-ad43-482b9c162700>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00055.warc.gz"}
18. Numbering • We are familiar with numbers based on units of 10, a result of our 10 fingers. But, other number bases are possible. • When using digital systems, we will use a variety of numbering systems (other than in units of 10). • With computers, number bases that are multiples of 2 are favored. • For each base we count as usual, but the numbers stop at some value, then a more significant digit is added. 18.1 Data Values 18.1.1 Binary • Binary is best used in computers because signals are ON/OFF which is well suited to the two binary digits. • Converting between number systems can be done by looking at digit magnitude. • Conversion can also be done between systems by division. • For division use remainders. • Convert the following numbers to/from binary • Binary bytes and words are shown below. 18.1.2 Boolean Operations • In most discrete systems the inputs and outputs (I/O) are either on or off. This is a binary state that will be represented with, • Because there are many inputs and outputs, these can be grouped (for convenience) into binary numbers. • Consider an application of binary numbers. There are three motors M1, M2 and M3 100 = Motor 1 is the only one on in total there are 2n or 23 possible combinations of motors on. • The most common Binary operations are, 18.1.3 Binary Mathematics • These include standard logic forms such as, • Negative numbers are a particular problem with binary numbers. As a result there are two common numbering systems use, signed binary: the most significant bit (MSB) of the binary number is used to indicate positive/negative 2s compliment: negative numbers are represented by complimenting the binary number and then adding 1. • Signed binary numbers are easy to understand, but much harder to work with when doing calculations. • An example of 2s compliments are given below, • When adding 2s compliment numbers, additional operations are not needed to deal with negative numbers. Consider the examples below, 18.1.4 BCD (Binary Coded Decimal) • Each digit is encoded in 4 bits • This numbering system makes poor use of the digits, but is easier to convert to/from base 10 numbers. For the two bytes above the maximum numbers possible are from 0-9999 in BCD, but 0-64285 in • Convert the BCD number below to a decimal number, • Convert the following binary number to a BCD number, 18.1.5 Number Conversions • Convert the following binary number to a Hexadecimal value, • Convert the following binary number to a octal, 18.1.6 ASCII (American Standard Code for Information Interchange) • While numbers are well suited binary, characters don’t naturally correspond to numbers. To overcome this a standard set of characters and controls were assigned to numbers. As a result, the letter ‘A’ is readily recognized by most computers world-wide when they see the number 65. 18.2 Data Characterization 18.2.1 Parity • Parity: used to detect errors in data. A parity bit can be added to the data. For example older IBM PCs store data as bytes, with an extra bit for parity. This allows real-time error checking of • The odd parity bit is true if there are an odd number of bits on in a binary number. On the other hand the Even parity is set if there are an even number of true bits. • Convert the decimal value below to a binary byte, and then determine the odd parity bit, 18.2.2 Gray Code • A scheme to send binary numbers, but encoded to be noise resistant. • The concept is that as the binary number counts up or down, only one bit changes at a time. Thus making it easier to detect erroneous bit changes. ASIDE: When the signal level in a wire rises or drops, it induces a magnetic pulse that excites a signal in other nearby lines. This phenomenon is known as ‘cross-talk’. This signal is often too small to be noticed, but several simultaneous changes, coupled with background noise could result in erroneous values. 18.2.3 Checksums • Parity bits work well when checking a small number of bits, but when the sequence becomes longer a checksum will help detect transmission errors. • Basically this is a sum of values. 18.3 Problems Problem 18.1 a) Represent the following decimal value thumb wheel input as a Binary Coded Decimal (BCD) and a Hexadecimal Value (without using a calculator). b) What is the corresponding decimal value of the following BCD Answer 18.1 a). 3532 = 0011 0101 0011 0010 = DCC, b). the number is not a valid BCD Answer 18.2 false Problem 18.3 Convert the following from binary to decimal, hexadecimal, BCD and octal. Problem 18.4 Convert the following from decimal to binary, hexadecimal, BCD and octal. Problem 18.5 Convert the following from hexadecimal to binary, decimal, BCD and octal. Problem 18.6 Convert the following from BCD to binary, decimal, hexadecimal and octal. Problem 18.7 Convert the following from octal to binary, decimal, hexadecimal and BCD. Problem 18.8 Why are binary, octal and hexadecimal used for computer applications? Problem 18.9 Add/subtract/multiply/divide the following numbers. Problem 18.10 What are the specific purpose for Gray code and parity?
{"url":"https://engineeronadisk.com/V3/engineeronadisk-156.html","timestamp":"2024-11-07T19:16:53Z","content_type":"text/html","content_length":"15214","record_id":"<urn:uuid:c3f90326-827c-460e-84f8-5704c5aaa60f>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00794.warc.gz"}
Can Compressive Sensing Solve Your Sensor and Measurement Problems? - DSIAC Compressive sensing (CS) is a relatively new field that has generated a great deal of excitement in the signal-processing community. Research has applied CS to many forms of measurement, including radio detection and ranging (RADAR), light detection and ranging (LIDAR), magnetic resonance imaging (MRI), hyperspectral imaging, high-speed imaging, X-ray tomography, and electron microscopy. Benefits range from increased resolution and measurement speed to decreased power consumption and memory usage. CS has received mixed reviews in commercial and government circles. Some have touted CS as a cure-all that can be “thrown” at any sensor problem. Others consider CS all hype—just a rebranding of old theories. Who is right? In order to answer this question, an overview of CS is presented, clarifying common misconceptions. Case studies are brought to illustrate the advantages and disadvantages of applying CS to various sensor problems. Guidelines are extracted from these case studies, allowing the readers to answer for themselves, “Can CS solve my sensor and measurement problems?” At first look, it appears that CS is widespread and revolutionizing a variety of sensor systems. Four new Institute of Electrical and Electronics Engineers (IEEE) paper classification categories have been created specifically for CS. Thousands of research papers have been published [1], representing significant academic funding by government and industry. The Wikipedia entry on CS [2] declares that “compressed sensing is used in a mobile phone camera sensor” and “commercial shortwave-infrared cameras based upon compressed sensing are available.” The infrared (IR) camera refers to InView’s single-pixel camera, a highly-publicized, real-world example of compressive sensing in action [3]. The MIT Technology Review article “Why Compressive Sensing Will Change the World” [4] explains how CS has supplanted the Nyquist-Shannon sampling theorem, a foundation in signal processing during the last century, and that CS is “going to have big implications for all kinds of measurements” [4]. A closer look, however, reveals some doubts about CS. There are anonymous comments, such as “Most of it seems to be linear interpolation, rebranded” [5] or “Compressed sensing…was overhyped” [6]. There are researchers like Yoram Bresler, a professor at the University of Illinois, that claim that CS is not really new. He asks, “Would a rose by any other name smell as sweet?,” claiming that CS is just a new name for earlier techniques, such as image compression on the fly and blind spectrum sampling [7]. Some CS researchers acknowledge shortcomings. Thomas Strohmer, a professor and the University of California Davis, asks, “Is compressive sensing overrated?” and notes that “the construction of compressive sensing based hardware is still a great challenge” [8]. Simon Foucart, a professor of mathematics at Texas A&M University, describes how “projects to build practical systems foundered…” and that “…compressed sensing has not had the technological impact that its strongest proponents anticipated” [9]. When asked what was holding CS back from imaging applications, Mark Neifeld, a professor at the University of Arizona, answered that “we haven’t discovered the ‘killer’ application yet” [10]. There are other scholars that are openly critical of CS. Leonid Yaroslavsky, a professor at Tel Aviv University and an Optical Society fellow, writes, “Assertions that CS methods enable large reduction in the sampling costs and surpass the traditional limits of sampling theory are quite exaggerated, misleading and grounded in misinterpretation of the sampling theory” [11]. In a section of his website titled “Fads and Fallacies in Image Processing,” Kieran Larkin, an independent researcher with 4,274 paper citations, declares that “everyone knew that the single-pixel camera research was a failure” [12]. He is referring to the InView single-pixel camera previously mentioned that was purported to be a successful application of CS. Who is right? Will CS bring about a revolution in sensing and measurement or is it really just all hype? There are many tutorials [13–15], review papers [16–18], and articles [4, 19, 20] on CS, but they tend to be too technical or general for many readers. The technical sources are inaccessible to those without a signal processing background, while the nontechnical sources are too vague to give an intelligent perspective on CS. They do not address specific criticisms, leaving the readers on their own to judge between the supporters and detractors of CS. They are also generally written by CS researchers that may justly or unjustly be suspected of bias. This article seeks to provide an accessible explanation of CS that gives enough background to examine claims and criticisms. In an effort to make CS understandable to the layman, concepts have been simplified, details have been glossed over, and equations have been replaced by intuitive explanations. For a more in-depth treatment of CS, several tutorials provide a good starting point [13–15]. Traditional Sampling CS is sometimes referred to as compressive sampling since it is the sampling process that lies at the heart of CS. Sampling transforms continuous analog signals into discrete digital values that can be processed by a computer. In this age of low-cost computing, virtually all sensor systems sample signals, from commercial audio and video equipment to specialized medical and military systems. The speed or resolution at which a signal is sampled is called the sampling rate. Samples are typically taken at regular intervals, and the sampling rate determines the size of the features that can be identified in a signal. The more samples taken, i.e., the higher the sampling rate, the smaller the features that can be identified. When designing any system that samples data, the sampling rate must be carefully considered. If it is too high, the extra samples can increase power consumption, memory usage, computing complexity, and cost for little or no gain in performance. If it is too low, important information is lost, degrading performance, or even making the system unusable. Nyquist-Shannon Sampling Theorem In order to appreciate CS, we must first explain traditional sampling in some detail. The top left plot of Figure 1 shows a 1-Hz sinusoidal signal, i.e., there is one cycle per second. Such a signal could represent many different types of physical phenomenon, such as a voltage oscillating over time. The middle left plot shows a 10-Hz signal, i.e., 10 cycles per second. The bottom left plot shows the summation of these two signals. The top plot on the right shows this summed signal sampled at 10 Hz, i.e., 10 samples per second. This is an example of traditional sampling. An analog-to-digital converter (ADC) would sample the signal at regular intervals 30x over the 3-s period to produce the samples marked as red dots. These samples are connected by a green line in an attempt to reconstruct the original signal. The reconstruction completely misses the 10-Hz signal, creating a waveform similar to the original 1-Hz signal. Clearly, the 10-Hz sample rate is too slow to detect the 10-Hz signal. The middle right plot shows the same signal sampled at 20 Hz, successfully capturing the 10-Hz signal. The Nyquist-Shannon sampling theorem states that the sampling rate must be at least twice the highest frequency of a signal. This intuitively makes sense. In order to detect the important features of a signal, there needs to be at least one sample per feature. The important features of a sinusoid can be viewed as the valleys and peaks of each cycle. Therefore, two samples are needed per cycle. In this case, a 20-Hz sampling rate allows us to sample at the valley and peak of each cycle of the 10-Hz signal. The bottom right plot shows the boxed detail from the plot above it. The green lines connecting the samples give a good approximation of the signal but do not perfectly match the original waveform in blue. However, we will see that the signal can be perfectly reconstructed using sampling theory. The signal in Figure 1 is relatively simple, just a combination of two sinusoids. A plot of the spectrum of this signal is shown in the top left plot of Figure 2. Instead of viewing the signal as a voltage oscillating in the time domain, the plot shows the amplitudes of the sinusoids that make up the signal in the frequency domain. The frequency domain is commonly referred to as the Fourier basis. The normal domain in which we typically view the signal, in this case, the time domain, is commonly called the standard basis. The idea of representing signals in different bases will be an important concept in CS that will be revisited later. The bottom left plot shows a more complicated spectrum with 14 nonzero values, corresponding to 14 sinusoids in the time domain that are summed to produce the signal in the top right plot. This signal is sampled above the Nyquist rate and reconstructed using sampling theory as the dashed red line in the bottom right plot, perfectly matching the original signal in blue. This is a remarkable result. A continuous analog signal composed of many frequencies that is sampled at the Nyquist rate can be perfectly reconstructed. Two-Dimensional (2-D) Sampling Figure 3 gives an example of 2-D sampling. In the case of one-dimensional (1-D) signals, one sensor (the ADC) samples at regular time intervals. In imaging, multiple sensors are typically spaced at regular intervals to create a 2-D sensor array. Just as the sampling rate determined the smallest detectable feature for the 1-D signal, the resolution of the 2-D array determines the smallest detectable features in the image. The relatively high-resolution image on the left is 1024 x 1024 pixels. The middle image shows the magnified detail of an airplane, with an ARL logo clearly recognizable. The right image has a resolution of 512 x 512, where the logo is now unrecognizable. The line width (i.e., feature size) of the letters was 1 pixel for the 1024 x 1024 array. When we decrease the resolution to less than 1 pixel per feature, those features become unrecognizable. A Scanning Single-Pixel Camera Images are typically sampled using a 2-D sensor array, but there are other methods as well. A single sensor could scan the scene, pixel by pixel, row by row, to acquire a complete 1024 x 1024 image. Figure 4 shows an example of such a system. A lens projects a scene onto a digital micromirror device (DMD), a 2-D array of tiny mirrors. Each mirror can be independently controlled to reflect light toward or away from a single sensor. The DMD steps through all of its mirrors, reflecting the light from one mirror toward the detector, while all of the other mirrors reflect light away from the detector. At each step, only the part of the scene reflected by the single mirror is seen by the sensor, capturing the single pixel of the image corresponding to that mirror. After all the pixels have been collected, they can be arranged to form a 2-D image of the scene. The image will have the same resolution as the DMD, e.g., a 1024 x 1024 DMD can create a 1024 x 1024 pixel image. In essence, the traditional imager with a 2-D sensor array has been replaced by a 2-D mirror array. This process may seem excessively complicated for the visible spectrum, where high-resolution 2-D sensors arrays are inexpensive; but for expensive IR sensors, this system might be viable. If the high-resolution DMD is less expensive than a high-resolution IR sensor array, a low-cost IR camera can be created using a DMD and a cheap, single-pixel IR sensor. Compressive Sensing Compressive Sensing, Single-Pixel Camera There is another important factor besides the relative cost of the DMD that will determine the practicality of this single-pixel camera—the measurement speed. A 2-D sensor array can capture an entire image in one snapshot. A single-pixel camera with a 1024 x 1024 DMD has to step through all 1 million mirrors to take a picture. This will take some time, even if the mirrors can move very fast. Is there any way to speed up the measurements without significantly affecting the image quality? A 512 x 512 array would have 4x less pixels and be 4x faster; but as we saw in Figure 3, this will degrade the image quality. This is based on the Nyquist- Shannon sampling theorem, which, in essence, says that at least one measurement is needed per feature. Using traditional sampling, there is no way to avoid the sampling theorem, but we can circumvent it using compressive sampling. The compressed sample is a randomly weighted sum of the entire signal. In this case, instead of using one mirror at a time, we can use a random pattern of multiple mirrors simultaneously to reflect a random pattern of the scene onto the sensor (illustrated in Figure 5). Each column represents the process of taking one compressed sample. The top row is the scene, in this case, a simple “L” that remains the same for each measurement. The low-resolution, 8 x 8 image and DMD are only used here for illustration; real applications would use higher resolutions. The next row shows the weights produced by the DMD, which changes for each measurement. The white squares represent the mirrors reflecting light toward the sensor, effectively weighting (i.e., multiplying) the light by 1. The black squares represent the mirrors reflecting light away from the sensor, effectively weighting the light by 0. The “L” of the image is outlined to illustrate how the weights overlap the scene. The next row shows the weighted scene. This is the light seen by the sensor, which is the product of the scene and the DMD pattern. The bottom plot shows the compressed sample produced by each column. For each measurement, the light of the weighted scene is focused on the single sensor, which sums all of the light and measures its intensity, producing a compressed sample. If the value of each black and white pixel of the weighted scene is 0 and 1, respectively, then the value of the compressed samples will be the number of white pixels in the weighted scene. The figure shows five example DMD patterns that produce five compressed samples. The samples themselves do not resemble the scene. But once enough samples have been taken, they can be processed by an optimization algorithm to recover an image of the scene. Figure 6 shows the image from Figure 3 that used 512 x 512 = 262,144 pixels on the left compared to a simulated CS recovered image using 262,144 compressed samples on the right. This used the same method of taking compressed samples, as shown in Figure 5, except a 1024 x 1024 DMD was set to 262,144 different random patterns, producing 262,144 compressed samples. An algorithm processed these samples to produce a 1024 x 1024 image, with the “ARL” clearly visible. We have achieved a resolution of 1024 x 1024 using 4x less measurements, beating the Nyquist-Shannon sampling rate! This example illustrates the essential components of CS. Compressed samples are formed as randomly-weighted sums of the signal, typically requiring some form of specialized hardware. These samples are then postprocessed to reconstruct the original signal using fewer samples than predicted by traditional sampling theory. The Nyquist-Shannon sampling theorem states that the sampling rate must be at least twice the highest frequency of the signal—equivalently, the pixel width cannot be larger than the smallest feature of the scene. The sampling rate depends on the highest frequency of the signal. In CS, the number of measurements required depends on the sparsity of the signal, not its highest frequency. The sparsity of a signal is inversely proportional to its number of nonzero elements. Sparse signals are mostly close to 0, with a relatively few number of larger nonzero values. Remarkably, CS works well as long as the signal is sparse in any basis, not just the standard basis. For example, the 1 + 10 Hz signal in Figure 1 shown in the time domain is not sparse, i.e., most of the values are not 0. However, when the signal is represented in the Fourier basis (frequency domain), as in the top left of Figure 2, there are only two nonzero values, making it very sparse. Many naturally-occurring signals are sparse in some basis, allowing CS to be applied to a variety of applications. CS worked in the airport image because images are generally sparse in bases such as the 2-D Fourier or 2-D The left plot of Figure 7 shows a normal image in the standard basis. The middle plot shows this image in the wavelet basis, with the amplitude shown in a log scale for emphasis. The right plot compares the sparsity of the image in the standard and wavelet. For this 256 x 256 image, all 65,536 pixel values were arranged in descending order to create the line labeled “Original.” The same was done for the image in the wavelet basis and labeled “Wavelet.” Just like the 1 + 10 Hz signal, even though the image is not sparse in the standard basis, it is sparse in the wavelet basis. The important point is that the sparser the signal, the better CS works, i.e., the signal can be reconstructed with less measurements. Higher-dimensional signals, such as three-dimensional (3-D) images, are typically very sparse, making them ideal candidates for CS. In the single-pixel camera example, the DMD was used to produce randomly-weighted sums of the data. The weights do not have to be random; they just have to be incoherent with the sparse basis of the data. Incoherence can be thought of as maximally different. For example, when using a Fourier basis, the pattern of weights should be as different as possible from the sinusoids of the Fourier basis. It happens to be that completely random weights are incoherent to any basis, but pseudorandom or structured weights can also be used in CS. Using CS theory, these weights can be optimized to achieve maximum performance [21], but they also have to be realizable in hardware. For example, a DMD cannot produce arbitrary valued weights. The mirrors can only point two directions, away or toward the sensor, resulting in weights of 0 or 1. Depending on the hardware used to implement the CS weights, the weight values may have limitations that affect CS performance. Data Reconstruction A detailed description of CS reconstruction algorithms is outside the scope of this article, but there are a few relevant points to mention. In traditional sampling, a signal can be perfectly reconstructed if it is sampled above the Nyquist rate. Similarly, in CS, a signal can be perfectly reconstructed if there are enough measurements relative to its sparsity. Real signals are not perfectly sparse, i.e., many of the values will be close to 0 but not actually 0. If these values are small enough, however, they will have minimal impact on CS performance. The distinction between signal noise and measurement noise is important in CS. Measurement noise is created in the measurement process, e.g., electronic noise in the sensor. Signal noise is present in the signal being measured before it reaches the sensor, e.g., external interference. Signal noise is frequently measured as a signal-to-noise ratio (SNR), where a low SNR indicates a noisy signal. CS performs well in the presence of measurement noise. Reconstruction remains stable, with the quality of the reconstruction proportional to the noise level. CS performs poorly, however, when the signal has a low SNR [22]. Unfortunately, the CS process itself can degrade the SNR. In order to reconstruct the original signal, the random weights used to make the compressed measurements must be known. For example, the states of the DMD in the single-pixel camera must be known for each measurement and used in the reconstruction algorithm. In an ideal case, this information is known perfectly and is not a source of error. In practice, movement or miscalibration will introduce error, decreasing the signal’s SNR and hampering CS. The number of measurements needed to produce an accurate reconstruction is proportional to the sparsity of the data. However, the exact sparsity of the data is not known a priori. Therefore, practical systems have to be designed for worst-case scenarios, increasing the required number of measurements. Another important consideration is that the optimization algorithms that reconstruct the data are very computationally intensive. Progress has been made to accelerate these algorithms [23], but they can still hamper real-time applications or systems with limited computing resources. Compressive Sensing vs. Compression CS is related to compression. When data is compressed, a large quantity of data is represented by a smaller amount of data. Typically, the full, uncompressed data is acquired first before a compression algorithm reduces it to a manageable size. For example, in traditional imaging, high-resolution imagers capture every pixel of an image and then throw away most of the data as it is compressed into a JPEG format. On the other hand, CS uses specialized hardware to compress the data at the time of measurement. Only the compressed measurements are saved; nothing is thrown away. From this viewpoint, CS is more efficient than typical imaging practices. CS can be used directly on data as a compression algorithm, independent of any sensor system. But traditional compression algorithms generally perform better than CS when the full data set is already available. Compressive Sensing vs. Inpainting In some cases, CS can be confused with inpainting [24]. Typically, inpainting refers to filling in the missing samples of an image, but it can also refer to filling in the missing samples of other types of data. Inpainting works with traditional samples, it does not use weighted sums of the data. Missing samples can occur accidently due to occlusions, noise, or damage; or nonuniform sampling can be done on purpose. Inpainting is confused with CS because CS reconstruction algorithms can also be used for inpainting. The main result of CS—that sparse data can be perfectly reconstructed using a small number of compressed samples—does not apply to inpainting. This is why CS is a much more powerful tool than nonuniform sampling for enhancing sensor systems. Case Studies Single-Pixel Camera The single-pixel camera can be viewed as an application of CS to increase measurement speed or, alternatively, image resolution. A scanning camera that measures 1 pixel at a time would require 1024 x 1024 measurements to create a 1024 x 1024 image. A CS architecture using 4x less measurements would decrease the acquisition time by a factor of 4. Alternatively, a scanning camera using a 512 x 512 DMD would take the same amount of time to acquire an image as a 1024 x 1024 CS camera. However, the CS camera would have a resolution 4x higher than the scanning camera. Although CS can significantly increase the measurement speed or resolution of a scanning camera, there are several reasons why the single-pixel camera might not be a commercial success. They are as follows: • Even if CS can increase measurement speed, it may not increase it enough to be practical for many applications. • Given the long measurement time, camera or subject motion may introduce noise that impacts the reconstruction results. • The effectiveness of CS is related to the sparsity of the data. Even though 2-D images are generally sparse, they may not be sparse enough to make this application practical. • The cost of traditional short-wave IR (SWIR) cameras has been decreasing [25]. In addition, the DMD mirrors are limited to the near IR and SWIR range, preventing application to mid- wave IR, long-wave IR (LWIR), and far IR (FIR) imaging. These factors limit the marketability of a DMD-based solution. InView has been developing technology to address some of these problems, such as using multiple sensors to decrease measurement time [26] and hyperspectral cameras that target sparser 3-D data sets [27]. But at this time, it appears that InView has not made large inroads into the IR imaging market. Cell Phone Camera Another CS imaging application is in low-powered, complementary metal-oxide semiconductor (CMOS) imagers [28]. Instead of using CS for measurement speed or resolution, this application focuses on reducing power. Each pixel in a typical 2-D imaging array is made up of a light sensor that produces a voltage and an ADC that converts the analog voltage to a digital value. This is illustrated in Figure 8 on the left with an example 4 x 4 imager. For a 1024 x 1024 sensor, each image requires about 1 million analog to digital (A/D) conversions. Multiplying that by the number of images needed for a video results in significant power usage, especially for mobile devices with limited battery life. CS can be used to reduce power consumption by reducing the number of A/D conversions required for each image. Compressed samples are produced by connecting each ADC to a random pattern of sensors, requiring less A/D conversions per image (as illustrated in Figure 8 on the right). The problem with this approach is the image reconstruction. CS image reconstruction takes time and computing resources, probably using more power than initially saved. This CS imager would only be useful in a niche application where the video is not needed in real time and can be reconstructed in postprocessing using powerful computers. It would certainly be undesirable as a cell phone camera. Magnetic Resonance Imaging The only truly successful commercial application of CS is possibly the MRI [29, 30]. MRI detects radio frequency emissions from tissue excited by magnetic fields. Due to the physics of the system, measurement takes place in the Fourier basis, requiring a basis change back to the standard basis to retrieve the image. Figure 9 shows a simple example of a 2-D MRI, which is similar to the 1 + 10 Hz, 1-D signals in Figures 1 and 2. The first plot on the left in Figure 9 shows an image in the standard basis composed of low- and high-frequency 2-D sinusoids. A real MRI might depict an image of a brain. The next plot shows the image in the Fourier basis. The white dot in the lower left represents the low-frequency 2-D sinusoid shown in the third plot, while the grey dot in the upper left represents the high-frequency sinusoid in the last plot. A typical MRI system would scan through the Fourier basis, acquiring all of the points at the desired sampling rate. Once all of the Fourier samples are taken, they can be transformed to the standard basis to retrieve the image. The amplitude of the points in the Fourier basis corresponds to the correlation between the sinusoid represented by that point and the image. This is calculated by the sum of the image multiplied by that sinusoid. For example, the amplitude of the white point is the sum of the image multiplied by the sinusoid in the third plot. Thus, each traditional MRI sample in the Fourier basis is really a compressed sample—a weighted sum of the image, where the sinusoids act as the weights. This means that CS can be applied to MRI without any hardware changes. Using CS theory, a fraction of the full number of samples typically required for MRI can be used to reconstruct an image, significantly reducing measurement time. MRI CS has a number of advantages [31]. They are as follows: 1. MRI is often used to produce 3-D data (e.g., Figure 10). Higher-dimensional data is typically sparser than lower-dimensional data, enabling 3-D MRI to benefit more from CS than 2-D imaging 2. MRI imaging is performed in a high-SNR laboratory environment. 3. No new hardware is required to produce the compressed samples. Flexible hardware that can be programed with arbitrary weights would be ideal, but MRI that uses Fourier weights has worked well in practice [32]. 4. There is substantial motivation for increasing the speed of MRI. Patients are required to remain still for long periods of time, which can be difficult in many instances. In addition, MRI equipment is very expensive. Increasing the throughput of an MRI machine will decrease overall cost. 5. Image reconstruction can easily be done using powerful computers in postprocessing. Real-time image reconstruction is not required. The following five positive aspects of applying CS to MRI can be formulated as general conditions for the successful application of CS. 1. The data should be very sparse; higher-dimensional data is ideal. 2. The SNR should be high; a laboratory environment is ideal. 3. The hardware that produces the incoherent weights needed for CS should be easily available. 4. There must be substantial motivation for adopting a CS strategy so the benefits outweigh any disadvantages. 5. The application must be able to accommodate the long reconstruction time and high computing costs of CS. It is possible to use CS in a case that does not satisfy these conditions, but it will be more difficult to produce a practical system. We will try applying these guidelines to two test cases to determine their suitability for a CS implementation. The first application is an IR imager for spinning munitions (shown in Figure 11) [33]. The scene is projected through the coded aperture onto the sensors to create weighted samples. The coded aperture is a randomly-patterned mask that blocks random sections of light from reaching the sensor. The coded aperture pattern cannot be changed like the mirrors of a DMD, but the rotation of the aperture relative to the scene via the natural rotation of the munition can produce the different weights for each compressed sample. The coded aperture shown in Figure 11 has a relatively low resolution for illustration purposes. In practical applications, the resolution of the coded aperture is much higher than the sensor array, increasing the resolution of the sensor array the same way the DMD increases the resolution of a single sensor. We apply the guidelines as follows: 1. Two-dimensional data is only moderately sparse. 2. A high-dynamic munition flying through the sky will not have a high SNR. 3. The rotating, coded aperture produces semistructured weights not ideal for CS. 4. High-resolution IR imagers are expensive, motivating the use of CS to increase the resolution of inexpensive low-resolution imagers. 5. A munition imager for target recognition must operate in real time, although there may be ways to use compressed CS data without reconstruction [34]. Only the fourth guideline is encouraging. The cost benefits of CS must be weighed against the cost of the CS hardware and other disadvantages associated with it. Clearly applying CS in this case would be challenging. Another possible CS application is antenna pattern measurement [35]. Figure 12 shows an example antenna pattern as a blue line around an antenna under test (AUT). A traditional measurement system is shown on the left. The circle of dots around the AUT represent test antennas that transmit one at a time (green), while the others are inactive (red). The sensitivity of the AUT is measured relative to each test antenna, creating an antenna pattern. The middle plot shows a CS implementation where random patterns of test antennas transmit simultaneously. The transmissions are summed in the AUT, creating a compressed sample. The CS implementation reduces the number of measurements required, accelerating the measurement process. We can show that the following five conditions apply, making this a promising application of CS: 1. Antenna patterns are often 3-D (azimuth, elevation, and radio frequency) and therefore very sparse. 2. Measurement is typically performed in a controlled laboratory environment with high SNR. 3. There are already systems that use multiple test antennas placed in a ring around the AUT to measure antenna patterns. These systems could be easily modified to activate random combinations of these antennas instead of using them one at a time. The on/off patterns of test antennas are similar to the on/off patterns created by a DMD, which work well for CS. 4. Antenna pattern measurement is typically an expensive, slow procedure, giving substantial motivation to use CS to accelerate measurement by reducing the number of samples. 5. There is no problem postprocessing the data; the data are not needed in real time. Now that we know what CS is and have examined some case studies, we can answer the questions raised in the Introduction. • Many unsubstantiated comments about CS are simply untrue. CS is not “linear interpolation, rebranded.” The example of signal reconstruction using traditional sampling in Figure 2 is a form of interpolation. CS uses an optimization algorithm to identify signal coefficients in a sparse basis from compressed samples. This is not interpolation. • The claim that CS is not new is somewhat true. Sources trace the history of CS as far back as 1795 to Prony’s method [7]. More recent results include Fadil Santosa and William Symes in 1986 [36], but it was not until 2006 that David Donoho coined the term “compressive sensing” [37]. Since then, Donoho and others have advanced CS theory and championed its use in the measurement process. No matter how new CS theory really is, the widespread push to use it to improve a variety of sensor systems is new. • The claim that CS is overhyped is probably true. Even CS researchers acknowledge that “…compressed sensing has not had the technological impact that its strongest proponents anticipated” [9]. When reading about all of the applications of CS, it may seem that CS is revolutionizing many sensor technologies. However, most of these applications are experimental systems that have not overtaken traditional techniques. For example, when Wikipedia states that “compressed sensing is used in a mobile phone camera sensor” [2], it is not talking about all phone cameras. It is referring to the experimental low- power CMOS imager in the case study that was never actually used in any commercial cell phone. • Open criticism of CS from researchers such as Leonid Yaroslavsky and Kieran Larkin [11, 12] partially stem from the overhyped publicity of CS. Their comments can be generally understood to say that CS will not provide the best performance in all cases, and, in many practical situations, more traditional sampling or compression strategies will outperform CS. This is certainly true. On the other hand, CS has achieved significant results that will increase performance for certain applications. CS does deserve recognition, as well as further research and funding, even if it is not the panacea some have claimed. • An additional point is that the growth in CS has also advanced topics, such as the properties of random matrices, sparse representation, and optimization algorithms applicable to areas outside sensing and measurement. Even if CS sensor hardware has been slow to develop, there are many related fields benefiting from CS research. Can CS solve your sensor and measurement problems? As with most significant questions, there is not an easy yes or no answer. CS is definitely not a cure-all that can be used in every situation. The fact that Mark Neifeld declared that “we haven’t discovered the ‘killer’ application yet” [10] when accessing the potential of CS imagers with leaders in the defense sector should give one pause if they think that their application is the “killer” application. The five guidelines listed are a good starting point. CS should realistically be compared to other alternatives, whether with other sampling techniques such as basis scanning [38] or alternate sensor technologies. CS results that seem promising in idealized settings might not perform well in more realistic scenarios. CS is still developing. An application that is not practical in the near term might still deserve longer-term research. Government and industry leaders should approach CS with their eyes open. They should be aware of the advantages and disadvantages of CS and know if the funded research is practical and short term or more theoretical and long term. In 1956, there was a surge in optimism about information theory much like the hype CS is experiencing. The words of Claude Shannon, of the Nyquist- Shannon sampling theorem, ring true today as much as they did then [39]. Information theory has, in the last few years, become something of a scientific bandwagon… What can be done to inject a note of moderation in this situation? In the first place, workers in other fields should realize that the basic results of the subject are aimed in a very specific direction… A thorough understanding of the mathematical foundation and its …application is surely a prerequisite to other applications. The subject of information theory has certainly been sold, if not oversold. We should now turn our attention to the business of research and development at the highest scientific plane we can maintain… A few first-rate research papers are preferable to a large number that are poorly conceived or half finished. Whether CS can solve your problem or not, there is a final lesson to learn from CS methodology. The hardware and software aspects of sensor systems should not be designed independently, rather there should be an interdisciplinary codesign resulting in an optimal solution [8]. 1. Elad, M. “Sparse and Redundant Representation Modeling—What Next?” IEEE Signal Processing Letters, vol. 19, no. 12, pp. 922–928, 2012. 2. Compressed Sensing, https://en.wikipedia.org/wiki/Compressed_sensing. 3. Duarte, M. F., et al. “Single-Pixel Imaging Via Compressive Sampling.” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83–91, 2008. 4. Why Compressive Sensing Will Change the World, https://www.technologyreview.com/s/412593/why-compressive-sensing-will-change-the-world/. 5. Uncompressing the Concept of Compressed Sensing, https://statmodeling.stat.columbia.edu/2013/10/27/uncompressing-the-concept-of-compressed-sensing/. 6. Shtetl-Optimized: The Blog of Scott Aaronson, https://www.scottaaronson.com/blog/?p=3256. 7. The Invention of Compressive Sensing and Recent Results: From Spectrum-Blind Sampling and Image Compression on the Fly to New Solutions with Realistic Performance Guarantees, http:// 8. Strohmer, T. “Measure What Should Be Measured: Progress and Challenges in Compressive Sensing.” IEEE Signal Processing Letters, vol. 19, no. 12, pp. 887–893, 2012. 9. Foucart, S., and H. Rauhut. “A Mathematical Introduction to Compressive Sensing.” Bull. Am. Math, vol. 54, pp. 151–165, 2017. 10. Neifeld, M. “Harnessing the Potential of Compressive Sensing,” https://www.osa-opn.org/home/articles/volume_25/november _2014/departments/harnessing_the_potential_of_compressive_sensing/. 11. Yaroslavsky, L. P. “Can Compressed Sensing Beat the Nyquist Sampling Rate?” Optical Engineering, vol. 54, no. 7, p. 079701, 2015. 12. Larkin, K. “A Fair Comparison of Single Pixel Compressive Sensing (CS) and Conventional Pixel Array Cameras,” http://www.nontrivialzeros.net/Hype_&_Spin/Misleading%20 13. Candès, E. J., and M. B. Wakin. “An Introduction to Compressive Sampling [A Sensing/ Sampling Paradigm That Goes Against the Common Knowledge in Data Acquisition].” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008. 14. Baraniuk, R., et al. “An Introduction to Compressive Sensing.” Connexions E-Textbook, pp. 24–76. 15. Willett, R. M., R. F. Marcia, and J. M. Nichols. “Compressed Sensing for Practical Optical Imaging Systems: A Tutorial.” Optical Engineering, vol. 50, no. 7, p. 072601, 2011. 16. Rani, M., S. B. Dhok, and R. B. Deshmukh. “A Systematic Review of Compressive Sensing: Concepts, Implementations and Applications.” IEEE Access, vol. 6, pp. 4875–4894, 2018. 17. Strohmer, T. “Measure What Should Be Measured: Progress and Challenges in Compressive Sensing.” IEEE Signal Processing Letters, vol. 19, no. 12, pp. 887–893, 2012. 18. Gregg, M. “Compressive Sensing for DoD Sensor Systems.” JASON Program Office, MITRE Corp., Mclean, VA, No. JSR-12-104, 2012. 19. Camera Chip Makes Already Compressed Images, https://spectrum.ieee.org/semiconductors/optoelectronics/camera-chip-makes-alreadycompressed-images. 20. Toward Practical Compressed Sensing, http://news.mit.edu/2013/toward-practical-compressed-sensing-0201. 21. Elad, M. “Optimized Projections for Compressed Sensing.” IEEE Transactions on Signal Processing, vol. 55, no. 12, pp. 5695–5702, 2007. 22. Davenport, M. A., et al. “The Pros and Cons of Compressive Sensing for Wideband Signal Acquisition: Noise Folding Versus Dynamic Range.” IEEE Transactions on Signal Processing, vol. 60, no. 9, pp. 4628–4642, 2012. 23. Kulkarni, A., and T. Mohsenin. “Accelerating Compressive Sensing Reconstruction OMP Algorithm With CPU, GPU, FPGA and Domain Specific Many-Core.” 2015 IEEE International Symposium on Circuits and Systems (ISCAS), IEEE, 2015. 24. Compressed Sensing or Inpainting? Part I, https://nuit-blanche.blogspot.com/2010/05/compressed-sensing-or-inpainting-part-i.html. 25. Tech Trends: Thermal Imagers Feeling the Shrink, https://www.securityinfowatch.com/video-surveillance/cameras/night-vision-thermal-infrared-cameras/article/11152180/ 26. Kelly, K. F., et al. “Decreasing Image Acquisition Time for Compressive Imaging Devices.” U.S. Patent 8,860,835, 14 October 2014. 27. Russell, T. A., et al. “Compressive Hyperspectral Sensor for LWIR Gas Detection.” Compressive Sensing, vol. 8365, International Society for Optics and Photonics, 2012. 28. Oike, Y., and A. El Gamal. “CMOS Image Sensor With Per-Column ΣΔ ADC and Programmable Compressed Sensing.” IEEE Journal of Solid-State Circuits, vol. 48, no. 1, pp. 318–328, 2012. 29. Lustig, M., D. L. Donoho, and J. M. Pauly. “Sparse MRI: The Application of Compressed Sensing for Rapid MRI Imaging.” Magn. Reson. Imaging, vol. 58, no. 6, pp. 1182–1195, 2007. 30. Siemens Healthineers: Compressed Sensing, https://www.siemens-healthineers.com/en-us/magnetic-resonance-imaging/ clinical-specialities/compressed-sensing. 31. Foucart, S., and H. Rauhut. “A Mathematical Introduction to Compressive Sensing.” Bull. Am. Math, vol. 54, pp. 151–165, 2017. 32. Krahmer, F., and R. Ward. “Beyond Incoherence: Stable and Robust Sampling Strategies for Compressive Imaging.” Preprint, 2012. 33. Don, M. L., C. Fu, and G. R. Arce. “Compressive Imaging via a Rotating Coded Aperture.” Applied Optics, vol. 56.3, pp. B142–B153, 2017. 34. Davenport, M. A., et al. “The Smashed Filter for Compressive Classification and Target Recognition.” Computational Imaging V., vol. 6498, International Society for Optics and Photonics, 2007. 35. Don, M. L., and G. R. Arce. “Antenna Radiation Pattern Compressive Sensing.” The 2018-2018 IEEE Military Communications Conference (MILCOM), IEEE, 2018. 36. Santosa, F., and W. W. Symes. “Linear Inversion of Band-Limited Reflection Seismograms.” SIAM J. Sci. Statist. Comput., vol. 7, pp. 1307–1330, 1986. 37. Donoho, D. L. “Compressed Sensing.” IEEE Trans. Inform. Theory., vol. 52, no. 4, pp. 1289–1306, 2006. 38. DeVerse, R. A., R. R. Coifman, A. C. Coppi, W. G. Fateley, F. Geshwind, R. M. Hammaker, S. Valenti, and F. J. Warner. “Application of Spatial Light Modulators for New Modalities in Spectrometry and Imaging.” Spectral Imaging: Instrumentat., Applicat. Anal. II, vol. 4959, pp. 12–22, 2003. 39. Shannon, C. E. “The Bandwagon.” IRE Transactions on Information Theory 2.1, vol. 3, 1956. MICHAEL DON is an electrical engineer for ARL, where he specializes in high-performance embedded computing, signal processing, and wireless communication. He began his career as an intern at Digital Equipment Corporation, performing integrated circuit design for their next-generation Alpha processor, the fastest processor in the world at that time. After graduating, he worked for Bell Labs, where he engaged in the mixed-signal design of read channels for their mass storage group. Mr. Don holds a bachelor’s degree in electrical engineering from Cornell University and is currently pursuing a Ph.D. in electrical engineering at the University of Delaware, where he is researching compressive sensing applications for guided munitions.
{"url":"https://dsiac.dtic.mil/articles/can-compressive-sensing-solve-your-sensor-and-measurement-problems/","timestamp":"2024-11-06T08:46:12Z","content_type":"text/html","content_length":"361677","record_id":"<urn:uuid:5ba4d81d-9ddb-4e13-9948-300d78c8c34c>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00678.warc.gz"}
How can I converge to a continuum harmonic trap ground state? I would like to investigate bosons in a 1D harmonic trap, and was hoping that this could be done by using DMRG in the continuum limit of having many more sites than particles. As a test case, I have simulated one boson in a trap with the Hamiltonian $\hat H_N = -t \sum_{i=1}^{M-1} (\hat a_i^\dagger \hat a_{i&plus;1} &plus; H.c.) &plus; \sum_{i=1}^M \Big[ \frac{\omega}{2} (x_i - x_c)^2 \hat n_i\Big]$ However, the density, as measured with $\langle \psi | \hat n_i |\psi \rangle$ does not converge towards the harmonic oscillator ground state for up to 100 sweeps on 1000 sites with small lattice constant in a system large enough that the particle does not feel the walls of the box. All other DMRG parameters were also generously set. It seems that for t >> \omega, the system tends towards the particle in a box ground state (which converges nicely if there is no trap), and for t <= \omega, the system tends towards a much more localized ground state than the harmonic oscillator ground state, see the figure which has \omega = 2t. Note that the DMRG potential is the potential felt at every point as output by the DMRG code. I am aware that iTensor DMRG is not designed for the continuum limit, but it has been used to investigate that limit in several published papers. I cannot find any DMRG parameters whose adjustment changes any of this, apart from those energies. Can you help me figure out what is going on? Hi, I plan to answer your question soon & thanks for your patience. One question for you though: are you scaling â tâ in some way with the lattice constant â aâ ? Or just choosing various t values. If the second one, then it may not be that you are taking the continuum limit toward the Hamiltonian that you are expecting to reach. Another question is whether the large number of sweeps is being done to reach the density you do end up getting, or because you were hoping it would eventually change to the analytic density? Thanks - this will just help me to give a better answer. Hi Miles, First of all, thank you for your incredible work on iTensor and especially on maintaining this support page through the years! It is immensely helpful. So, first of all I would just like to see that for N particles on M sites, taking N<<M leads to a density which is a close approximation to the continuum solution density. To converge towards the continuum energy, I would have to take into account the relationship between t and a, but it is my understanding that for just getting the right density, only the length scale of the potential really matters. Thus I am just choosing somewhat arbitrary t for now, and making sure that in each point, the value of the potential corresponds to that of the continuum potential in the same point. But since you ask, I guess the shape of the density is an interplay between the kinetic and potential energy, and scaling them wrong might lead to the wrong density? In that case, how come I find the right particle in a box density for basically arbitrary t? Sweeps are very cheap for N<<M, and sometimes a large number is required to converge. For a very weak trap, that is the case, but for a trap like the one pictured, not many sweeps are required. I mostly mentioned it to say that it certainly had enough sweeps to converge on a ground state. Thank you again for taking the time to answer, and please let me know if you need anything else. Hi, thanks for the kind words! For your case where there is both a potential and a hopping, the ratio of these two couplings is of crucial importance. The density and essentially every other property will depend on this ratio. So it's not correct that the density will only depend on the potential. Please check this by taking t = 1/(2*a^2) where a is the lattice spacing as is done in this paper for example: https://arxiv.org/ I think the reason you got the right density for the particle in the box is that for that case the potential at each point is either zero or infinity. (Let's just think of it as being zero and the system length as being finite.) Then the ratio of the potential to the hopping is always zero. So this ratio will be the same no matter what you take t to be, and changing t will only change the overall energy just like in the case where the potential v is non-zero and you change t while keeping v/t fixed. Yes agreed these kinds of simulations can take a huge number of sweeps to converge! My experience is that particles can act sort of diffusively, only spreading out and moving rather gradually with each DMRG sweep. So you have to use tricks like good initial states or lots of sweeps at smaller bond dimensions at the beginning, or more fancy tricks like coarse-to-fine graining multigrid style or RG transformations. See for example: https://arxiv.org/abs/1203.6363
{"url":"http://www.itensor.org/support/2579/how-can-i-converge-to-a-continuum-harmonic-trap-ground-state?show=2592","timestamp":"2024-11-08T14:33:44Z","content_type":"text/html","content_length":"31861","record_id":"<urn:uuid:e79e1f0b-fe0d-4f96-8c6c-70c6060b06df>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00792.warc.gz"}
Number 100 Worksheets - 15 Worksheets.com Number 100 Worksheets About These 15 Worksheets These worksheets can help you learn, understand, and practice using the number 100. Each section has different types of activities all related to the numbers up to 100. You might find sections where you’re asked to fill in missing numbers in a sequence, to color in a certain number of objects, to match numbers with their written names, or even to solve simple arithmetic problems like addition or These worksheets most often include the following types of exercises: Counting and Number Recognition – Worksheets do involve counting by ones, twos, fives, or tens up to 100. They may also include skip counting exercises, where students practice counting by a specific number, such as counting by fives or tens. Students may be asked to identify and write the number 100, both in numeral form and as a word. Place Value and Patterns – Worksheets will focus on place value concepts related to the number 100, such as understanding that 100 is composed of one hundred units or ten groups of ten. Some sheets will present patterns or sequences of numbers related to 100, and students may be asked to identify the missing numbers in the pattern. Students may be given sets of numbers and asked to compare them to 100 using greater than, less than, or equal to symbols. The first and most obvious way number 100 worksheets help improve your math skills is by strengthening your number sense. Number sense is your understanding of what numbers mean, how they relate to each other, and how they can be used in real-world situations. For example, a number 100 worksheet might ask you to fill in a 100 chart, a grid with 100 squares where you write the numbers from 1 to 100. This activity helps you recognize patterns in numbers, such as noticing that the numbers on the right end of each row (10, 20, 30, etc.) all end in zero. These worksheets also help improve your ability to do mental arithmetic. Many of them will include simple addition or subtraction problems, such as “What do you get when you add 34 and 21?” or “What is 85 minus 33?” Doing these exercises regularly helps you become faster and more accurate at doing math in your head, a skill that is very useful not only in school, but in everyday life. By using number 100 worksheets, you also get practice with place value, which is the concept that the position of a digit in a number determines its value. For example, in the number 37, the 3 is in the ‘tens’ place and represents 30, while the 7 is in the ‘ones’ place and just represents 7. Many worksheets include activities that help you understand and apply place value, like breaking a number down into tens and ones. These number 100 worksheets help improve your mathematical confidence and enjoyment. Because they provide lots of different types of activities and gradually increase in difficulty, they can make learning about numbers up to 100 feel more like a fun puzzle than a chore. This can make you more excited about learning math and more confident in your abilities.
{"url":"https://15worksheets.com/worksheet-category/number-100/","timestamp":"2024-11-08T01:33:34Z","content_type":"text/html","content_length":"130336","record_id":"<urn:uuid:93048ba2-a3a4-48be-8e8a-f4e18782911f>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00775.warc.gz"}
Fruechte Gravity Theory Blog Jan 29 2023 An electric field of an electromagnetic wave does the work to extend the magnetic field of the same wave. What makes the electric field turn around must have something to do with running out of energy to extend the magnetic field further. Griffiths says: “Magnetic forces do no work” ([1], pg. 207), and that is why it is said that transmission of the Coulomb field is “a diffeomorphism on the electric fields of the gamma rays”: Magnetic fields can act as guides however, and can help hold together a groupoid in the gamma ray field so it can act transitively. There is “energy stored in the magnetic field” ([1], pg. 317] and “Magnetic forces may alter the direction in which a particle moves, but they cannot speed it up or slow it down.” ([1], pg. 207) It is the same in Coulomb groups, spherical or concentrated, that carry the Coulomb field, – there are electric currents that are altered in direction by magnetic fields. Another example of this is gravitational lensing. An involution may be a charged particle, or nucleus, with mass, as it absorbs gravitons for the energy to send out Coulomb groups, or it may be a Coulomb group itself in an open field. As a spherical group travels, for example, it takes on new gamma rays and leaves some behind, and the new gamma rays may be called an involution as they become part of the Coulomb group. When it is said that with Coulomb phonon transmission, the gamma rays are “frozen in time” up to “10 meters at least”: https://www.fruechtetheory.com/blog/2022/03/29/transmission-of-the-coulomb-field/ , it is in relation to travel, though they may travel a miniscule amount. It is torsion that transmits the Coulomb field, and the angular velocity, ω, is higher the stronger the field. In a Cartan decomposition, “g[1] = t[1] + p[1] and g[2] = t[2] + p[2]“ ([2], pg.517), p is the peak point of the electric field of a graviton. In a Riemannian globally symmetric space of type I, p follows the peak of a sine wave, and it also follows the peak in a Riemannian globally symmetric space of type II. [1] Griffiths, David J., “Introduction to Electrodynamics”, Prentice Hall, 1999 [2] Helgason, Sigurdur, “Differential Geometry, Lie Groups, and Symmetric Spaces”, American Mathematical Society, 2012 Jan 17 2023 Action of the Electric Field When a molecule is formed, each nucleus senses the one(s) closest by its spherical pulses. Then each nucleus starts sending out alternating concentrated groupoids toward the nearest nuclei in the In a Coulomb attraction, the groupoid decides how to bisect by the spin of a target. The two brackets then compress against other gamma rays and subsequentially spring back and squeegee along the backside of the target in what is called a pullback. Past the target, the brackets “re-emerge as action morphisms of Lie algebroids” ([1], pg. 152), and join a spherical group. The scalar potential has units of J/s, which is energy per time. The electric field has units of N/C, and Force = mass x acceleration per Newton’s second law. The acceleration is less for a larger mass of charge, and there are neutrons in most nuclei which makes the effect greater. The electric field travels faster the denser a gravitational field is, though the speed difference may not be We can have “a π-saturated open set” ([1], pg. 97) with “saturated local flow”, though the gravitons will be at various phases on sine waves when an electric field comes through. Thus, in terms of analytic coordinates, “such coordinates do not usually exist for Lie groupoids.” ([1], pg. pg. 142) What we have is an infinitesimal zigzag pattern, though when we back out to the classical level, it does not matter for any application. As said earlier, Coulomb repulsion acts on the frontside of another charge. The electric field travels much faster than the charged mass it pushes, in part due to inertia, so likewise, after the push, the brackets join another spherical group behind the target. A nuclear concentrated groupoid may join a spherical groupoid once it passes a target. In both cases, Coulomb attraction or repulsion, the spherical group from which the brackets came mends itself. [1] Mackenzie, Kirill C. H., “General Theory of Lie Groupoids and Lie Algebroids”, c. 2005 Kirill C. H. Mackenzie, London Mathematical Society
{"url":"https://www.fruechtetheory.com/blog/2023/01/","timestamp":"2024-11-05T20:07:25Z","content_type":"application/xhtml+xml","content_length":"42075","record_id":"<urn:uuid:698871f1-22fa-4684-afc6-870598351fc3>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00509.warc.gz"}
Convert 9 meters to centimeters How to convert 9 meters to centimeters To convert 9 m to centimeters you have to multiply 9 x 100, since 1 m is 100 cms So, if you want to calculate how many centimeters are 9 meters you can use this simple rule. Did you find this information useful? We have created this website to answer all this questions about currency and units conversions (in this case, convert 9 m to cms). If you find this information useful, you can show your love on the social networks or link to us from your site. Thank you for your support and for sharing convertnation.com! 9 meters Discover how much 9 meters are in other length units : Recent m to cm conversions made:
{"url":"https://convertnation.com/9-meters-to-centimeters","timestamp":"2024-11-08T20:51:06Z","content_type":"text/html","content_length":"10202","record_id":"<urn:uuid:165b79be-79d4-4a8f-8c04-412b55456c21>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00146.warc.gz"}
Author: namiki There used to be a lot of debate about the four-seam fastball and the relationship of velocity, vertical movement, and spin rate. But now there is a new concept called Vertical Approach Angle (VAA) that includes the height of the release and the height of the pitch’s path. With that in mind, let’s think again about what is needed for a good four-seam fastball. Cross-Tabulating To Determine the Impact of Each Element A cross-tabulation was performed for four-seamers thrown in MLB from 2017-2021, with velocity ticked to 4 km/h, vertical movement ticked to 7.5 cm, release height ticked to 10 cm, and plate height ticked to 15 cm. Each element was tabulated and color-scaled with the MLB average as the middle value in white, good values for pitchers in red, and bad values in blue. The indicators are Whiff%, xwOBAcon, and xPV/100 (expected Pitch Value per 100 pitches, which I wrote about here). Read the rest of this entry » I’ve heard it said in the past that a batter should take care of the pitcher’s fastball first and then deal with the breaking ball. If this is true, then the faster the pitcher’s fastball is, the more the batter needs to be aware of the fastball when at the plate. I want to look at how this affects the most popular pitch in baseball: the slider. First I calculated the average velocity of each pitcher’s fastball for pitchers who threw at least 100 fastballs (FF, FT, SI) in each major league season from 2017-2021. Based on the calculated average fastball velocity, I divided the pitchers into three groups: 143-148 km/h, 148-153 km/h, and 153-158 km/h. I then further divided the groups according to the velocity and movement of the slider thrown in each. Then I calculated the Run Value/100 for each group. Let’s start with the velocity group between 143 and 148 km/h (click to enlarge). Read the rest of this entry » There is an index called pitch value that calculates the increase or decrease in runs scored depending on the pitch type. In this article I will look to create an environment-neutral version of pitch Shortcomings of Existing Pitch Value Pitch Value (hereafter PV) and RV use the average or sum of the variable values of RE288. This method has the advantage of being able to measure how much a pitch actually increased or decreased the number of runs scored on that pitch. However, the metric is not consistent enough to be used in a single year given that it depends on a relatively small number of batted balls and plate appearances. The following is the average delta_run_exp (RV/100) of sliders for pitchers who threw 500 or more of them in each year from 2017-20, with the data obtained from Statcast. The correlation coefficient is 0.14, which means that there is almost no correlation. Even if a pitcher records an excellent RV/100 in one year, there is no way to know what kind of value he will record the following year. It seems that it is difficult to measure the stable value of a pitch type with the existing PV and RV. Using xwOBAvalue for Situation-Neutral Run Value and Batted Ball Evaluation We can try to make improvements in measuring the value of pitches with a small number of at-bats or pitches in a single year. First, we use a situation-neutral scoring value for events that occur rather than a change in scoring value. For example, a home run with no runners on base and a home run with runners on base have different values in the existing RV, but the situation-neutral scoring value is calculated using the average scoring value of home runs in all situations combined. The reason for this is that it is not appropriate to evaluate the ability of a single pitch to prevent runs from being scored if it depends on the circumstances in which it is thrown. Another correction is to use the xwOBAvalue (estimated_woba_using_speedangle in Statcast) instead of the actual batting result when a pitch is hit. The pitcher has little control over whether a batted ball becomes a hit or an out, and it is known that the number tends to be unstable in a single year. If we consider that it is difficult for a pitcher to control the number of batted balls in a season, the batted ball number of pitch type in a season is even smaller, so the index becomes less stable. Therefore, for batted balls, we use the value of runs (xwOBA_value), which is estimated from the speed and angle of the batted ball. The purpose of this is to remove the influence of defense and chance as much as possible. In this way, we try to calculate the pitch value as situationally neutral as possible. Calculate wOBA by count I will call this situation-neutral pitch value xPV (expected pitch value) for now. The first step is to find the wOBA by count. Here, the wOBA by count is calculated based on “all final batting results that have passed that count.” Note that this is not the same as the batting results recorded at the time of that count. For example, if a batter misses a strike in an 0-1 count and the count goes to 0-2, and then strikes out on three pitches, one strikeout is recorded in the 0-1 record. But if a batter hits a single in that 0-2 count, a single hit is recorded in the 0-1 record^1. Also note, 0-0 is the count that has elapsed in all counts, so 0-0 = wOBA for all at-bats in that period. Calculating the Run Value by Count Using this wOBA by count, we can calculate the value of points scored by count. (count wOBA after pitching – count wOBA before pitching) / wOBAscale (≈1.15 in Statcast csv data) First, when the count changes, the actual RAA is calculated as: (wOBA of the count after the pitch – wOBA of the count before the pitch) / 1.15 If a batted ball occurs, then this is used to calculate RAA: (xwOBAvalue – wOBA of the count before the pitch) / wOBAscale Total the value, Take the Average The xPV is calculated by summing and averaging the RAAs calculated in this way. The advantage of this xPV is that it reduces the influence of chance as much as possible and increases the consistency of the index by giving it a situation-neutral value. The following is the year-to-year correlation of the xPV/100 (xPV per 100 pitches) of sliders for pitchers who threw at least 500 sliders from 2017-20. The correlation coefficient was 0.49, which is a moderate correlation and much improved over the 0.14 of RV/100. For xPV, I referred to this article. ^1The reason why we use hitting stats through a count instead of hitting stats at that count is that we can take into account the effects of events that occur only in a particular count, and we can also evaluate pitches that are not directly related to the batting results. For a detailed explanation, snin’s article is very helpful. I have also put the R code here.
{"url":"https://community.fangraphs.com/author/namiki/","timestamp":"2024-11-09T12:41:30Z","content_type":"text/html","content_length":"108521","record_id":"<urn:uuid:d242cd9d-4bce-462b-9915-a61015a37720>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00579.warc.gz"}
FEBRIASTATI, ANI (2019) SOLVING INTERVAL INTEGER PROGRAMMING PROBLEMS USING ASKA METHOD THROUGH INTERVAL TRANSFORMATION TO SYMMETRIC TRIANGULAR FUZZY. Undergraduate thesis, UNDIP. Restricted to Repository staff only Linear programming is one of the models that can be used to solve optimization problem. In a real life, decision variable must be an integer, however in a linear programming the solution is not necessarily an integer. Therefore, we need an integer programming so that the decision variable is an integer. One of the assumptions used in the integer programming problem is deterministic. However, in a real life, these deterministic assumptions are difficult to fulfill. To resolve this problem, it can be approached by using a decision variable whose parameters are interval. An integer linear programming with interval parameters is called an interval integer linear programming (π Όπ Όπ Ώπ ). π Όπ Όπ Ώπ problems can be solved using ASKA method. Firstly, the π Όπ Όπ Ώπ is transformed into a symmetric triangular fuzzy integer program (STFILP), then the STFILP is brought to two problems of firm integer linear programming (π Όπ Ώπ crisp) where each π Όπ Όπ Ώπ will be solved using cutting plane method with fractional algorithm. The results of the π π π Ήπ Όπ Ώπ β s optimal solution obtained from the settlement using ASKA method and it is the optimal solutions from the π Όπ Όπ Ώπ . Keyword: integer linear programming, interval integer linear programming, fuzzy integer linear programming, ASKA method. Repository Staff Only: item control page
{"url":"http://eprints.undip.ac.id/84210/","timestamp":"2024-11-02T07:57:43Z","content_type":"text/html","content_length":"20072","record_id":"<urn:uuid:32300b91-ee76-4683-9d00-3eeb611d654a>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00162.warc.gz"}
Line and Plane Cover Numbers Revisited A measure for the visual complexity of a straight-line crossing-free drawing of a graph is the minimum number of lines needed to cover all vertices. For a given graph G, the minimum such number (over all drawings in dimension d ∈{2,3}) is called the d-dimensional weak line cover number and denoted by π^1_d(G). In 3D, the minimum number of planes needed to cover all vertices of G is denoted by π^ 2_3(G). When edges are also required to be covered, the corresponding numbers ρ^1_d(G) and ρ^2_3(G) are called the (strong) line cover number and the (strong) plane cover number. Computing any of these cover numbers – except π^1_2(G)– is known to be NP-hard. The complexity of computing π^1_2(G) was posed as an open problem by Chaplick et al. [WADS 2017]. We show that it is NP-hard to decide, for a given planar graph G, whether π^1_2(G)=2. We further show that the universal stacked triangulation of depth d, G_d, has π^1_2(G_d)=d+1. Concerning 3D, we show that any n-vertex graph G with ρ^ 2_3(G)=2 has at most 5n-19 edges, which is tight.
{"url":"https://cdnjs.deepai.org/publication/line-and-plane-cover-numbers-revisited","timestamp":"2024-11-08T18:23:22Z","content_type":"text/html","content_length":"154025","record_id":"<urn:uuid:b49d94f9-a747-454f-ad2f-7f496a7b5d5b>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00038.warc.gz"}
Developing the maths mastery approach in a mixed-aged school : My College In 2014, a new National Curriculum was introduced in England which significantly changed the contents of the maths curriculum in Primary Schools (DfEDepartment for Education - a ministerial department responsible for children’s services and education in England, 2014). The new curriculum effectively increased the level of challenge for primary school children, as some year group’s objectives were moved to the year below. For example, five-year-olds were now expected to learn to count up to 100, compared with just 20 under the previous curriculum (DfE, 2014). I initially began using the maths mastery approach because I was asked to do so in my previous school setting. However, I had also come to recognise that the former methods of teaching maths did not ensure that all children became confident mathematicians. Four years later and I am well on my way to using the maths mastery approach to teach maths. This article is based on the impact of adopting maths mastery at my school, St Michael’s Primary School in East Sussex. The old method of whole-class differentiation did not seem right and felt like it was creating a huge gulf between the highest and lowest attainers in maths. What is maths mastery? Charlie Stripp, head of the NCTEM, (National Centre for the Teaching of Mathematics) struck a chord with me when he said that “the ‘traditional’ way we differentiate – putting children into ability-grouped tables and providing easier work for the less able and more challenging ‘extension’ work for the more able – has ‘a very negative effect on mathematical attainment” (Stripp, 2014). The old method of whole-class differentiation did not seem right and felt like it was creating a huge gulf between the highest and lowest attainers in maths. As such, I began to research the use of maths mastery in primary schools. Maths mastery can be defined as a ‘means of acquiring a deep, long-term, secure and adaptable understanding of maths…achieving mastery is taken to mean acquiring a solid enough understanding of the maths that’s been taught to enable him/her to move on to the more advanced material.’ (NCTEM, 2015, p. 5). Approaches to maths teaching prior to 2014 were very much centred around differentiation for learners. Curriculum content was also often fast-paced and skipped between areas of maths very quickly. Maths mastery is an approach to maths teaching that is commonly used in schools across East Asia and has at its core the belief that all children can be mathematicians. However, like any new approach in teaching, it has received its fair share of criticism. Initially, critics were quick to argue that it is not possible to emulate the mathematics in East Asia as our cultures are too vastly different. A study was conducted by Professor Mark Boyland and colleagues from Sheffield Hallam University: “The evaluation found positive impacts on pupil KS1 mathematics attainment in those schools most directly involved in the Maths Teacher Exchange (MTE) programme. However, there is no quantifiable evidence from this evaluation that the MTE or implementation of East Asian informed teaching alone is leading to improvements in pupil attainment in mathematics at KS2 in comparison with other schools.” (Boylan et al., 2019, pp. 23 – 24). Whilst the findings from this evaluation are inconclusive, there are other reports showing the efficacy of East Asian informed teaching practices, including when applied in England (Boylan et al., 2018). This is why I chose to adopt the approach in my current school and evaluate its effect on students’ attitudes to maths and their outcomes. Maths Mastery in my school Nearly three years ago, I joined a new school where I became the maths coordinator and was sent on numerous maths courses to support me in this role. This included a course led by Helen Hackett and also with the Sussex Maths Hub about what maths mastery was. I was also fortunate to attend training led by East Sussex County Council regarding how to be an effective maths leader. I was keen to make sure that maths mastery worked for our pupils and wanted to introduce it so that staff saw the benefits of adopting this approach. The main challenge was how to develop it in a small village primary school with three mixed aged classes and a Reception class. I began by trialling some of Babcocks’ (2016) ideas in my year 1 and 2 classroom, alongside my job-share partner. The maths mastery approach was a complete change in teaching style for four out of five of our existing class teachers. Therefore, before implementing it across the school, I felt that it was important to trial the approach in my own class, with my own pupils and fellow job-share colleague to assess its effectiveness. In doing so, I was able to share the realities of teaching maths mastery to colleagues, with the benefit of hindsight and with the knowledge that my job-share partner was also trialling this new approach to teaching. This allowed me to reassure colleagues that, despite an initially lengthy experience planning lessons, as soon as I had become more confident with the approach, planning time had been halved in the space of 14 weeks. “I was keen to make sure that maths mastery worked for our pupils and wanted to introduce it so that staff saw the benefits of adopting this approach.” Changes to my own teaching I began with a few changes initially; developing fluency, reasoning, variation and post-teaching maths. I understood that fluency, reasoning and variation were important elements of maths mastery and I should, therefore, begin with them. In maths, fluency refers to knowing key mathematical facts and methods and recalling these efficiently. I looked at how to build this into the maths session, so that pupils became fluent in their number bond, counting and times table skills. I made a few changes to the lesson structure: counting in 2s, 5s or 10s when children were moving to their places and quick recall of number bonds when children initially sat down for each mathematics lesson. Reasoning refers to the critical skill that enables a student to make use of all other mathematical skills. With the development of mathematical reasoning, pupils show that they can fully make sense of the maths they are learning and apply it to new contexts. Each lesson now had reasoning questions weaved through them. For example, when learning how to count in 5s, pupils were asked to spot the mistake in a sequence of numbers e.g. 55, 50, 45, 35. Variation was perhaps the hardest of the three concepts to understand and plan for in my lesson structure, as it required me to think differently about the concepts I was teaching. Variation is when you find out what something is by looking at it from different angles. For example, if fractions are only ever presented as amounts of pieces of pizza, then pupils will become confused when shown a fraction of a square. Equally, if parallel lines are only ever shown in pairs, then when pupils are presented parallel lines as a group of three, they are likely to be unsure of the concept. I have read widely around the maths mastery approach (including books, blogs, research papers and practitioner articles), but it was Bloom’s much earlier research which was the most powerful in influencing my thinking. Bloom (1971, p. 48) suggested that ‘weaker students required approximately 10-15% additional time to achieve the same results as their peers’. He also suggested that over time the gap closed “so that students became more and more similar in their learning rate until the difference between fast and slow learners becomes very difficult to measure” (Bloom, 1971, page 48). After reading Bloom’s research and through discussions with colleagues, it was apparent that we needed to give pupils additional time to help close the gap with their mathematical understanding. I chose the changes above to begin with as they were important components of the ‘5 Big Ideas of mastery’ highlighted by the Sussex maths hub in a training session. They were important elements of maths mastery and therefore it felt important to embed these centrally in lessons. Changes to the whole school As a whole team, we looked to White Rose More Detailed Plans (www.whiterosemaths.com), NCTEM’s Teaching for Mastery (Askew et al., 2015) and NRICH Maths activities (www.nrich.maths.org) to plan and create our daily lessons. Initially, this took the largest amount of time and there was much frustration along the way as we all grappled with a new way of teaching, planning and assessing maths. Two years later, the process of planning these aspects of mastery in lessons had become a lot easier and staff were all feeling more confident and enjoying teaching maths. Manipulatives are another central aspect of Maths Mastery. Discussions with pupils showed that Key Stage 2 children were less likely to use manipulatives in lessons, as they considered that it somehow linked to low attainment in maths. In total, all of our Key Stage 2 children were questioned (fifty at the time) and observations were made by the two different class teachers to discern how often manipulatives were taken using a child’s initiative. The focus, therefore, turned to all staff receiving CPD in using manipulatives to support maths teaching. We found relevant maths courses led by East Sussex’s education department and I myself led several staff meetings addressing how to use manipulatives in a lesson. I led seven staff meetings focussed on: what is maths mastery, how to use mastery resources, an exploration of the NCTEM website, lesson design, developing problem solving in the classroom, re-cap on the 5 big ideas of mastery and developing bar modelling. Additionally, staff were all given access to maths mastery based training. Staff attended subject specific work, fractions for example, or attended sessions focussed on the new teaching approach in maths. Additionally, in September 2017, I found an opportunity to join our school with a mastery teacher research group, facilitated by the Sussex Maths Hub, based at St Paul’s Catholic College, Burgess Hill. Myself and a second staff member attended termly sessions led by a mastery maths specialist. We got the opportunity to see maths mastery lessons in a mixed aged setting to help us use the approach within our own school. The pre-and post-teaching of pupils continued. Post- teaching took place during the afternoon assembly slot and involved different children each day. It focussed on children that had struggled with that day’s teaching content and the sessions showed that morning’s maths content in a different way to help to fully understand that day’s learning. “The old approach of differentiating a lesson three ways for high, middle and low attaining pupils has been replaced by a more effective use of pupil questioning and small-steps to help all children As part of our mastery journey, I wanted to understand how mastery had impacted on pupils’ enjoyment of maths. As such, I questioned 15 pupils, who were randomly chosen, (one-sixth of the school population) at the beginning and the end to see if or how their attitudes towards maths had changed. The pupils ranged from children aged 4 through to children aged 11 and were taught by five different teachers (including myself). Although the 15 pupils had differing levels of attainment, all participants were feeling happier about learning maths. Our results for the whole school in maths in June 2018 showed that for each year group there has been at least a 10% increase in children attaining age-related expectation (ARE) since September 2017, although of course there may have been a number of reasons for this. Two years of teaching using the mastery approach has been a journey. In early months, I frequently asked myself, ‘why am I doing this? Is there a better way?’ The whole approach is now far less time-consuming the plan for and the benefits can be seen in our pupil progress. In its second year, all pupils at St Michael’s are visibly more confident in approaching reasoning and problem-solving questions. This was evident in termly lesson observations in maths. The old approach of differentiating a lesson three ways for high, middle and low attaining pupils has been replaced by a more effective use of pupil questioning and small-steps to help all children achieve. It has been hugely rewarding to see pupils being more positive about maths lessons and engagement in lessons has risen too. Long sessions crafting lessons have slowly diminished over time and we are all more confident as staff in teaching using the mastery approach. Experience has taught me that the more children enjoy a subject the more likely they are to be confident participators in a lesson. I can now see the impact of using a mastery approach in teaching maths. With carefully crafted lessons, children were able to make better progress, aided by a change in mindset that all pupils can be mathematicians and the role of all teaching staff was integral to this. In a small setting with budgetary constraints, using a small step approach has had the greatest impact on closing the gap in mathematical attainment. The five big ideas have brought successes for pupils as they have been slowly and carefully exposed to the structure of new mathematical ideas. Through using a maths mastery approach, concepts are taught carefully, slowly and with great thought given to pupil’s previous learning and the bigger picture. Effectively it helps to close the gap for pupils, who in the previous curriculum, would have not have been given access to whole class content. In effect their progress was capped because they were only ever given access to ‘lower ability’ content. Final Thoughts Implementing maths mastery in a mixed-aged setting has not been an easy process and there have been no quick fixes. It has taken time to embed with staff, but all staff are thoroughly enjoying teaching using the approach. Pupils in our school are benefiting from a mastery approach and we can see that results are slowly improving and gaps in attainment are beginning to close. We are now in the third academic year of using the mastery maths approach at St Michael’s and it will be interesting to see how this impacts on pupil attainment and attitude in maths over the next few years. Mastery maths is still in its foundation stage and there is still work to be done, however, our experience suggests that if mastery is going to be successful in mixed-aged classrooms it needs time to develop. We are fortunate to have a group of teachers who were curious and inquisitive enough to change the way they teach maths and our pupils are now reaping the benefits. Please login to comment 1 Comment Inline Feedbacks View all comments […] confirms that struggling learners actually achieve more in mixed-ability classes. Seeing how other learners solve problems and interacting with their peers […]
{"url":"https://my.chartered.college/research-hub/developing-the-maths-mastery-approach-in-a-mixed-aged-school/","timestamp":"2024-11-06T14:04:45Z","content_type":"text/html","content_length":"290306","record_id":"<urn:uuid:4442c536-2a41-4b7b-846d-b6d7b4eb5916>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00862.warc.gz"}
How can i view the poles of a function more strong? How can i view the poles of a function more strong? Hi, I'm drawing a rational function and i use plot wit detect_poles='show', but i can hardly view the poles, how can i view the poles with a best view? Here is my sagemath code: d1f5(x) = f5.derivative(x,1) umd15= d1f5(x).full_simplify() d2f5(x) = f5.derivative(x,2) umd25= d2f5(x).full_simplify() lista_singular25 = umd15.roots(x) for raiz5 in lista_singular25: print("Punto Singular = ({0},{1})".format(raiz5[0], f5(raiz5[0]))) #for raizz5 in lista_inflex25: # print("Punto Inflexión = ({0},{1})".format(raizz5[0], f5(raizz5[0]))) fig5=plot([f5(x),point(((-4,-5),(2,7),),rgbcolor=hue(1),size='40')],(-8.5,8.5),detect_poles='show', thickness=2.0, xmin=-8,xmax=8,ymin=-10,ymax=10,ticks=[1,1]) fig5 += text('P(-4,-5)', (-4.5,-5.7), color='red') fig5 += text('Q(2,7)', (2,8), color='red') fig5 += plot([x+2],(-8.5,8.5),color='green',linestyle='--',thickness=2.0) fig5 += text('y=x+2', (8,8), color='red',clip='yes') fig5.axes_labels([r'$x$', r'$\frac{(x^2+3x+11)}{(x+1)}$']) I want an improve viewing about the vertical asymptotic x=-1 2 Answers Sort by » oldest newest most voted See the source code (this is version 4.8). pole_options = {} pole_options['linestyle'] = '--' pole_options['thickness'] = 1 pole_options['rgbcolor'] = '#ccc' Unfortunately, I think that the only way to change these options right now is to change it in your actual plot file! You can go into your Sage installation's devel/sage/sage/plot/plot.py file and search for these lines, and then just change them, start Sage with the command line option sage -br and hopefully a thicker line would be available. I've opened Ticket 12921 for this issue. edit flag offensive delete link more Thank you very much for your answer. Actually I'm using sage 4.8 packaged by Mandriva 2011.0, if I have time i'll recompile the package, if not, i'll wait the next version of sage. edit flag offensive delete link more Well, this won't be fixed for Sage 5.0 (whose release is imminent). But if you have gcc and a few other things on your computer, you can install from source pretty easily - http://www.sagemath.org/ doc/installation/source.html for example. This won't take more than a few hours, and you can get links to the latest release candidate at http://groups.google.com/group/sage-release/ kcrisman ( 2012-05-09 15:16:22 +0100 )edit
{"url":"https://ask.sagemath.org/question/8956/how-can-i-view-the-poles-of-a-function-more-strong/","timestamp":"2024-11-06T01:24:16Z","content_type":"application/xhtml+xml","content_length":"60498","record_id":"<urn:uuid:8af59d48-3a57-4c25-983c-f54bef95694b>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00595.warc.gz"}
Activity-Based Recession Probability Models There are two broad classes of models that produce recession probabilities: those that are based on activity variables, and those that use indicator variables to generate forecast information. This article discusses the former category: models that are based on activity variables, which provide a coincident recession probability. Forecasting models are obviously more interesting, and attract the most research attention. However, activity-based models are more reliable, to the extent that data are not revised. The figure above shows the (smoothed) recession probability for the United States generated by the model of Jeremy Max Piger and Marcelle Chauvet .* As can be seen, the probability for the latest data point (October 2018) is nearly zero, despite the signals of standard probability indicators (such as yield curve flattening). Note that the end point of the series is behind the date of update (early January 2019) as a result of the publication lag of the economic data. As a result, the model is not truly coincident, but lags as a result of the publication delay. That said, one may note the absence of false recession signals in the back history, which is not a feature of recession forecast models. Brief Model Description The model is a Markov-switching model, where there is a hidden state variable that jumps between two regimes: high growth, and low growth. In this case, the low growth regime is associated with negative growth, or a recession. If the state is in the expansion state in one month, the most likely transition is to remain in the growth state. However, there is a low probability of jumping to the recession state. The recession state is less sticky than the growth state, which matches the tendency of recessions to be short-lived versus expansions. The expected growth rates for the following variables are driven by the hidden state. 1. non-farm payroll employment, 2. the index of industrial production, 3. real personal income excluding transfer payments, 4. real manufacturing and trade sales Since we cannot directly observe the state variable, the above variables are used to infer the probability it is in either state. Since these variables are normally growing, the usual condition is that the probability the economy is in recession is low. The above variables were not arbitrarily picked; they are key variables that are tracked by the NBER recession-dating committee. One way of looking at this model is that it provides an alternative purely quantitative recession definition. One could choose alternative activity measures; the simplest is to use real GDP. The use of real GNP was one of the first variants of this model, as developed by J.D. Hamilton.** One disadvantage of GDP (GNP) is that it is quarterly, which is somewhat imprecise given that recessions can be short-lived. However, it may be that quarterly data would need to be used in some countries with data One of the useful features of the Chauvet and Piger model is that it is fairly robust with respect to data revisions. In fact, the cited paper was an analysis of the usefulness of the model when applied to data available in real time (as opposed to after revisions). My instinct is that the importance of revisions will be much greater if one attempted to fit a similar model to GDP. Pure Econometrics One of the beauties of this model is that it is largely independent of the macro wars. It is purely a summary mathematical description of economic activity, and somewhat theory-free. At most, the models may be incompatible with some potential macro economic models, but this would likely only happen if the model was itself completely unable to generate economic time series that resemble real world data. Is There a Simpler Way? The key to this model is that it provides a probability of being in recession. It may be that the user does not need such a probability, in which case, the model may be overkill. There are two simpler alternatives. 1. Visual analysis. This is snidely referred to as "chart blogging," but the reality remains that one could very rapidly draw conclusions about the state of the economy by eyeballing the time series of the activity variables. And to a certain extent, this is a step that probably should be done even if you are using a formal model. You should have an idea of what is happening to the inputs of the model, and not rely blindly on the model output. In the worst case, there could be a typo in the model code, and the model generates a recession probability when the activity variables are not doing anything interesting. One advantage of eyeballing charts is that one can see whether some variables are weakening, which then can lead to more careful forecasting analysis. However, eyeballing charts has a major disadvantage. One has problems as soon as there is more than one person interpreting the data. You rapidly run into the problem of "duelling chart packs" -- which is why institutions like formal models in the first place. The other issue is that this visual analysis is not as useful for economists who are tasked with calling and dating recessions; they want a more formal rule for a recession call. (Market participants should not normally be worried about such a detail.) 2. Use Principal Component Analysis (PCA) (link to my description of PCA analysis) to generate a composite indicator for the chosen activity variables. This provides an aggregate variable that will rise and fall during the cycle, providing a gauge of economic momentum beyond the binary recession/expansion determination. The aggregation eliminates some of the possibilities for duelling chart packs, but it still does not offer a clear-cut recession call trigger. Given that the PCA analysis provides a more general tool, I would view it as being a higher priority for market economic research. Technical Limitations One of the problems with these models is that certain model parameters are estimated based on the entire available history. This means that we should be cautious about trumpeting the in-sample performance of the model, since the model technically has access to some future information. The hope is that these model parameters will be stable over time, and so the model will not blow up as we enter new regimes. (I looked at this very briefly, and I did not see any obvious concerns. I would need to re-build the model myself to get a better handle on this issue.) Another technical issue is the possibility that the estimation procedure could split the high growth/low growth regimes differently. For example, if there is a secular change in growth rates, the state variable could correspond to the decades of high versus low growth. This was discussed in the Hamilton paper (page 372), but it was not clear to me how applicable it is to the Chauvet and Piger model at the time of writing. Concluding Remarks Activity-based recession probability models are useful for providing a robust summary of coincident economic data. However, most market analysis revolves around forecasts, and so a coincident model does not appear too exciting (when one takes into account publication lag). As a result, it is unclear how far one should pursue these models, at least for the United States. I expect that I will follow up with some example forecasting-style recession probability models. For example, there are a variety of models build around the use of the yield curve to infer a recession probability. * Chauvet, M. and J. Piger, “A Comparison of the Real-Time Performance of Business Cycle Dating Methods,” Journal of Business and Economic Statistics, 2008, 26, 42-49. ** Hamilton, J. D. (1989), “A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle,” , 57, 357–384 (c) Brian Romanchuk 2019 No comments: Post a Comment Note: Posts are manually moderated, with a varying delay. Some disappear. The comment section here is largely dead. My Substack or Twitter are better places to have a conversation. Given that this is largely a backup way to reach me, I am going to reject posts that annoy me. Please post lengthy essays elsewhere.
{"url":"http://www.bondeconomics.com/2019/01/activity-based-recession-probability.html","timestamp":"2024-11-09T20:02:31Z","content_type":"text/html","content_length":"90283","record_id":"<urn:uuid:ace90f61-da38-49d9-832b-b35747b8eeec>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00216.warc.gz"}
What to Know About Wire Gauge Size, Ampacity, and Voltage Let’s face it; there are a lot of numbers to keep track of when we’re discussing wire and cable. Understanding the numbers associated with wire is sometimes like learning a new language. But knowing them and how they impact an electrical installation can be the difference between a safe project and an unmitigated disaster. So, what do we need to know when talking about electrical wire, and how do the different pieces of information relate to one another? Every wire installation requires a few critical pieces of information: gauge size, amperage, and voltage. Without them, it’s nearly impossible to safely complete a project. But how do the three measurements relate to each other, and what do they mean? What is Gauge Size? Gauge size is a measurement used to determine an electrical conductor’s size. Based on your location, wire can be measured using either the American Wire Gauge (AWG) or the International Electrotechnical Commission’s (IEC) standards. In North America, gauge size is measured using the AWG system, with each step corresponding to a specific inch/millimeter diameter. Sizes range from 40 to 0000, commonly seen as 4/0, before moving into MCM (thousands of circular mils) measurements. Remember that gauge size and wire diameter/area have an inverse relationship when measuring conductors using AWG sizing. As the gauge number gets larger, the diameter and area of the wire are getting smaller. For example, a #10 AWG wire has a larger diameter than a #20 AWG one. Although the system seems confusing at first, it’s easy to use AWG to find the diameter of any given wire – plug the gauge size where “n ” is in the formula: D(sub n) = 0.005 in. x 92(to the 36-n/39) If you’re measuring in millimeters, replace 0.005 in. with 0.127 mm. The gauge can also find a wire’s relative diameter or cross-sectional area. Using the American Wire Gauge system, each six-gauge decrease doubles the diameter of the wire – for example, this means a #14 AWG wire is double the diameter of a #20 AWG wire. The same can be said for the cross-sectional area – in this case, a three-gauge decrease will double the area of a wire. Wire Gauge and Amperage Now that we know what a wire gauge is and how to perform conversions, we can jump into how they affect ampacity. Ampacity is a measurement dictating how much current-carrying capacity an electrical wire has. Like how AWG size is inversely related to diameter and area, amperage decreases as AWG numbers increase. Think of it this way: you wouldn’t attach a garden hose to a fire hydrant because the pressure and amount of water would be too much for the hose to handle. You probably also wouldn’t wire a house with speaker wire because the gauge and ampacity ratings are far too small to deliver the power needed. Understanding how amperage works reduces the risk of a potentially dangerous situation. Conductors can quickly overheat when they can’t handle the current flowing through. Eventually, the heat melts the conductor’s insulation, exposing the bare metal and creating a fire risk. How Does Gauge Size Impact Resistivity? The easiest way to explain resistivity is driving along the Interstate. A six-lane highway can support thousands of cars, quickly getting them where they need to go. Now take the same number of cars and reduce the lanes to two. The vehicles have fewer places to move, forcing everyone to slow down. The same concept applies to electrical wire and cable. As the gauge size increases, the amount of resistance, measured in ohms, also grows. This is because the wire is smaller, so the amount of current it can move also decreases. Resistance doubles or halves for every 3 AWG you move up or down the scale. In this instance, a #7 AWG would have half the resistance of a #10 AWG, which, in turn, has half the resistance of a #13 Going one step further, whenever AWG is increased or decreased by 10, resistance is multiplied or divided by a factor of 10. Let’s say you have a #2 AWG wire and a #12 AWG wire. Their resistances will be 0.1563 Ω/kft and 1.588 Ω/kft, respectively. Let’s Talk More About Amperage As previously mentioned, amperage is tied to current flow, determining how much current a conductor can accommodate without damage. Amperage is measured in amperes and is impacted by several variables, including metal type and wire size. Copper, for example, has a higher ampacity than aluminum and carries a lower resistivity than aluminum in the same size conductor. Because of this, aluminum conductors need to be larger than copper ones to move the same amount of current. Ambient temperatures also affect ampacity. When temperatures rise, resistivity increases while ampacity decreases. Large gauge wires (smaller numbers) have higher ampacity to support current flow. The same rules apply to multiple conductor cables, like tray cables. In this case, electrical installers should reference NEC (National Electrical Code) Table 310.15(B)(2)(a) to figure out temperature ratings and ampacity. Installers need to reference the NEC ambient temperature table because conductors give off heat when current passes through them, so the more conductors there are, the more heat gets generated. NEC Article 310 guidelines sometimes call for all conductors in the raceway to be derated to limit heat generation and maximize heat dissipation. Derating means operating the wire at less than its maximum capacity when ambient temperatures are more than 30 degrees Celsius and when more than three cables are included in the circuit, like tray cable. Where Does Voltage Fit In? Voltage measures a current flow’s strength within an electrical system. Sometimes called a difference of potential, voltage is calculated using Ohm’s Law. Ohm’s Law is a formula that calculates voltage by multiplying the system’s current (I) by its resistance (R). The formula ends up looking like this: V = IR Voltage is all about pressure, so if there’s a lot of pressure in a system, there’s also a lot of voltage. Think of it this way – if you open a flat bottle of soda, your drink stays in the bottle because there’s no pressure built up. But what happens when you vigorously shake a full soda bottle before unscrewing the cap? The soda sprays out because you’ve built up pressure in the closed Addressing Voltage Drops Unlike amperage, which is dependent on temperatures and metal types, voltage is affected by resistance and distance. The longer a conductor is and the higher its resistance, the worse the voltage Voltage drops are important to keep in mind because every manufacturer has a minimum operating voltage for their product, as seen in NEC code 110.3(B), which requires all equipment to be installed based on the manufacturer’s instructions. The NEC’s rule corresponds with ANSI (American Nationalsaysards Institute) rule C84.1, which says that the minimum voltage should not be lower than 90% of the nominal system voltage. For a standard 120V system, the minimum voltage must be above 108V to stay in compliance. This means the wire feeding electricity to the appliance or machine needs to be large enough to maintain the minimum 108V. To reduce the voltage drop across the system, you either need a larger wire or something to increase the voltage. You can also change the type of material used in the conductor to decrease resistivity – like moving from an aluminum conductor to a copper one. As a real-world example, electrical generation plants step up voltages using transformers to make up for voltage drops as electricity travels through transmission lines from the plant to the substation. Once the power reaches the substation, the voltage will be stepped down several times before making it to homes and businesses. In other cases, voltages can be stabilized and supported using other methods, such as voltage regulators, capacitors, load rebalancers, and multiphase conductors. One Number, Many Uses We often have a lot to think about when dealing with electrical wire, but understanding how gauge sizing works can go a long way during your next installation. One piece of information can help you choose the best wire or cable for your installation to prevent fires and protect systems from overheating. Proper sizing also reduces voltage drops, ensures you don’t over- or under-engineer projects using the wrong gauge, and helps you better anticipate how the wire will react in different situations. So, while it’s only one number, correct gauge sizes result in better installations, safer projects, and more reliability during the wire’s lifespan. Want more content? Subscribe to Kris-Tech’s Newsletters
{"url":"https://www.kristechwire.com/what-to-know-about-wire-gauge-size-ampacity-and-voltage/","timestamp":"2024-11-09T09:20:42Z","content_type":"text/html","content_length":"1049645","record_id":"<urn:uuid:ef5e8138-b397-48ef-ab3d-a1f3bffa40ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00472.warc.gz"}
Inline function expression JSONiq follows the W3C standard for constructing function items with inline expressions. The following explanations, provided as an informal summary for convenience, are non-normative. A function can be built directly by specifying its parameters and its body as expression. Types are optional and by default, assumed to be item*. Function items can also be produced with a partial function application. Example 39. Inline function expression function ($x as integer, $y as integer) as integer { $x + 2 }, function ($x) { $x + 2 } Result. (two function items) Figure 24. InlineFunctionExpr
{"url":"https://www.jsoniq.org/docs/JSONiq/webhelp/ch05s01s07s01.html","timestamp":"2024-11-08T00:39:05Z","content_type":"application/xhtml+xml","content_length":"5372","record_id":"<urn:uuid:939efca5-300f-415f-a394-2d067a76dd4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00000.warc.gz"}
A partial converse ghost lemma for the derived category of a commutative Noetherian ring (Journal Article) | NSF PAGES Let$(R,\mathfrak {m})$be a Noetherian local ring of dimension$d\geq 2$. We prove that if$e(\widehat {R}_{red})>1$, then the classical Lech’s inequality can be improved uniformly for all$\mathfrak {m} $-primary ideals, that is, there exists$\varepsilon >0$such that$e(I)\leq d!(e(R)-\varepsilon )\ell (R/I)$for all$\mathfrak {m}$-primary ideals$I\subseteq R$. This answers a question raised by Huneke, Ma, Quy, and Smirnov [Adv. Math. 372 (2020), pp. 107296, 33]. We also obtain partial results towards improvements of Lech’s inequality when we fix the number of generators of$I$. more » « less
{"url":"https://par.nsf.gov/biblio/10520377-partial-converse-ghost-lemma-derived-category-commutative-noetherian-ring","timestamp":"2024-11-10T03:00:26Z","content_type":"text/html","content_length":"267265","record_id":"<urn:uuid:f013a9b3-773b-4258-ab36-032c44d4e1d9>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00254.warc.gz"}
Continuous Bridges: Types, Design and Advantages After reading this article you will discuss about:- 1. Introduction to Continuous Bridges 2. Types of Continuous Bridges 3. Proportioning Structures 4. Design Procedure 5. Advantages 6. Introduction to Continuous Bridges: Continuous bridges are more economical but lack simplicity in the design procedure. These structures have the relative advantage that their designs are simple and do not involve any complicated analysis but the main drawback is that such structures are generally comparatively Continuous bridges, on the other-hand, are more economical but the disadvantage of these types of bridges is their lack of simplicity in the design procedure. These structures are statically indeterminate and therefore, the structural analysis is very much laborious specially when it involves moving loads. Types of Continuous Bridges: i. Slab and T-beam Bridges: For sketch, Fig. 4.3 may be referred for Solid slab continues bridges may be adopted for spans up-to 25 m, T-beam continuous bridges may be used for spans beyond 20 m. but below 40 m. Above this limit box girder bridges may be found suitable. ii. Box-girder Bridges: Box girder superstructures which are generally found useful for medium long span bridges consist of longitudinal girders usually three in number with deck and soffit slabs at top and bottom although single cell box girders is not uncommon. As the name implies, the longitudinal girders and the cross girders along, with top and bottom slab form the box. The advantage of this type of, superstructure is its great torsional resistance which helps a good deal in better distribution of eccentric live loads over the girders. Unlike girder bridges, live load distribution becomes more even in box girder bridges. Another advantage that may be achieved from this type of structure is that instead of increasing the depth of the section where the resisting moment becomes less than the design moment, the former can be increased if the slab thickness on the compression side is suitably increased. To cater for varying moments at different sections, the thickness of the top or bottom slab is varied depending on whether positive or negative moment is to be resisted. The deck slab is designed as a continuous slab over the longitudinal girders similar to slab and girder bridges. The thickness of deck slab varies from 200 to 250 mm. depending on the spacing of the longitudinal girders. The soffit slab thickness varies from 125 to 150 mm. where it has no structural function except forming the box but to resist negative moment it may be necessary to increase it up-to 300 mm. near the support. The web thickness of the longitudinal girders is gradually increased towards the supports where the shear stresses are usually critical. Web thickness of nearly 200 mm. at the centre varying to 300 mm. at the support is normally found adequate. The web at the support is widened suitably to accommodate the bearings, the widening being gradual with a slope of 1 in 4. The diaphragms are provided in the box girder to make it more rigid as well as to assist in even distribution of live load between the girders. For better functioning, their spacing should be between 6 m. to 8 m. depending on the span lengths. It is advisable to provide at least 5 diaphragms in each span — two at supports, two at quarter span and one at the mid-span. Openings are kept in the diaphragms to facilitate removal of shuttering from inside the boxes (Fig. 11.5). Suitable manholes may be kept in the soffit slab for this purpose also. These may be covered by manhole covers of precast concrete. About 40 per cent of the main longitudinal tensile reinforcement are distributed over the tension flange uniformly, the remaining 60 per cent being concentrated in the webs in more than one layer if necessary. In deep girder bridges, a considerable depth of the web below the top flange near the support is subjected to tensile stress. To cater for this tensile stress it is recommended that about 10 per cent of the longitudinal reinforcement may be provided in this zone unless inclined stirrups are used for diagonal tension. Proportioning Structures of Continuous Bridges: Equal spans are sometimes adopted for various reasons one of them being architectural consideration but for economical design, the intermediate spans should be relatively more in length than the end Generally, the following ratios of intermediate to end span are found satisfactory: In a continuous bridge, the moment of inertia should follow the moment requirement for a balanced and economical design. This is achieved by making the bottom profile parabolic as shown in Fig. 10.1. Sometimes, straight haunches or segmental curves are provided near supports to get the increased depth required from moment consideration. The soffit curves shown in Fig. 10.1 are made up of two parabolas having the apex at the centre line of the span. For symmetrical soffit curves, r[A] = r[B] = r (say) where “r” is the ratio of increase in depth at supports to the depth at the centre line of span. The following values of “r” have been recommended for slab bridges: r = 0 for all spans b) End span between 10 m and 15 m, i) r = 0 to 0.4 for outer end span ii) r = 0.4 at first interior support iii) r = 0.5 at all other supports The values of r[][A] and r[a] for girder bridges may be computed from the following formulae: Where I[A], I[B] and I[c ]are the moment of inertia of the T-beam at A, B and mid-span respectively. For girder bridges, the under mentioned values of “r” have been recommended: (i) Outer end of end spans, r = 0 (ii) 3 span unit, r = 1.3 at intermediate supports. (iii) 4 span units, r = 1.5 at centre support and 1.3 at the first interior support. Method of Analysis: Continuous structures may be analysed by various methods but most common method is the moment distribution. When haunches are used, the analysis becomes more complicated and therefore, design tables and curves have been made available for structures with various types of haunches such as straight, segmental, parabolic etc. as well as for various values of r[A], r[B] etc. One such reference literature is “The Applications of Moment Distribution” published by the Concrete Association of India, Bombay. These tables and curves give the values of fixed end moments, carryover factors, stiffness factors etc. from which the nett moments on the members after final distribution may be worked out Influence Lines: Fig. 10.2 shows some influence line diagrams at different-sections for a three equal span continuous bridge having constant moment of inertia. To get reaction or moment at a point due to a concentrated load, W, the ordinate of the appropriate influence line diagram is to be multiplied by W. For uniformly distributed load w, reaction or moment = (Area of appropriate influence line diag.) x w. The influence line diagrams for moments, shears, reactions etc. for continuous structure with variable moment of inertia may be drawn in a similar way, the ordinates for the influence line diagrams being determined taking into consideration the appropriate frame constants for the given structures. The design live load moments, shears and reactions at different sections are calculated by placing the live loads on the appropriate influence line diagrams. The loads should be placed in such manner that maximum effect is produced in the section under consideration. Design Procedure of Continuous Bridges: 1. Fix up span lengths in the unit and select rough sections at mid-spans and at supports. 2. Select appropriate soffit curve. 3. Work out dead load moments at different sections. This may be done as follows: i) Find the fixed end moments. ii) Find the distribution factors and carryover factors for the unit. iii) Distribute the fixed end moments by Moment Distribution Method. This will give the elastic moments. Add to it the free moment due to dead load. 4. Draw influence line diagrams for moments. The procedure is as follows: i) Find the F.E.M. for unit load on any position. ii) Distribute the F.E.M. and find out the elastic moments after correction for sway where necessary. iii) Add free moment to elastic moment. The moments so obtained at a particulars section for various load positions will give the ordinates of the BM influence line diagram at the locations on which unit load is placed. iv) Repeat process (i) to (iii) above and get the ordinates of the influence line diagram for various sections. 5. Work out live load moments at different sections. 6. Combine the live load moments with the dead load moments so as to get the maximum effect. 7. Check the concrete stress and calculate the area of reinforcement required. 8. Draw influence line diagrams for shears as before for various sections. Estimate both the dead load and live load shear and check the shear stress at the critical sections and provide necessary shear reinforcement where necessary. 9. Detail out the reinforcement in the members such that all the sections are adequately catered for respective critical bending moments and shear forces. Advantages of Continuous Bridges: The advantages in favour of continuous bridges are: (i) Unlike simply supported bridges, these structures require only one line of bearings over piers thus reducing the number of bearings in the superstructure as well as the width of the piers. (ii) Due to reduction in the width of pier, less obstruction to flow and as such possibility of less scour. (iii) Require less number of expansion joints due to which both the initial cost and the maintenance cost become less. The riding quality over the bridge is thus improved. (iv) Reduces depth at mid-span due to which vertical clearance or headroom is increased. This may bring down the bridge deck level reducing thereby not only the cost of the approaches but also the cost of substructure due to lesser height of piers and abutments which again reduces the cost of the foundation. (v) Better architectural appearance. Disadvantages of Continuous Bridges: The disadvantages are: (i) Analysis is laborious and time consuming. (ii) Not suitable on yielding foundations. Differential settlement may cause undesirable stresses.
{"url":"https://www.yourarticlelibrary.com/science-fair-project/highway-bridges/continuous-bridges-types-design-and-advantages/93302","timestamp":"2024-11-02T05:21:58Z","content_type":"text/html","content_length":"82184","record_id":"<urn:uuid:f5072b99-adbd-4001-ac3a-e39a2aacf17f>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00494.warc.gz"}
Measuring angle of a curve wire with OpenCV Measuring the Angle of a Curved Wire with OpenCV Have you ever needed to determine the precise angle of a curved wire in an image? This can be a tricky task, especially if the wire is not perfectly straight. Thankfully, OpenCV, the powerful open-source computer vision library, offers a solution. This article will guide you through the process of using OpenCV to measure the angle of a curved wire in an image. The Challenge: Measuring Angle of Curved Wire Imagine you have a picture of a curved wire, like the one below, and you need to measure the angle it makes at a specific point. Original Image: Traditional methods, like manually drawing lines and measuring angles, are often inaccurate and time-consuming. This is where OpenCV comes in. OpenCV to the Rescue: A Step-by-Step Guide Here's how to measure the angle of a curved wire using OpenCV: 1. Image Preprocessing: □ Load the image using OpenCV. □ Convert the image to grayscale for better edge detection. □ Apply Gaussian blur to smooth out noise and enhance edge detection. 2. Edge Detection: □ Utilize the Canny edge detection algorithm to identify edges in the image. The Canny algorithm is robust and effective in detecting edges in images with different lighting conditions. 3. Hough Line Transform: □ Employ the Hough Line Transform to find straight lines in the image. The Hough Transform is a powerful technique for detecting lines and curves in images. 4. Finding the Angle: □ Locate the lines that intersect at the desired point on the curved wire. □ Calculate the angle between these lines using the formula: import math def calculate_angle(line1, line2): """Calculates the angle between two lines. line1 (tuple): A tuple representing the first line (x1, y1, x2, y2). line2 (tuple): A tuple representing the second line (x1, y1, x2, y2). float: The angle between the two lines in degrees. x1, y1, x2, y2 = line1 x3, y3, x4, y4 = line2 # Calculate the slopes of the lines slope1 = (y2 - y1) / (x2 - x1) slope2 = (y4 - y3) / (x4 - x3) # Calculate the angle in radians angle_rad = math.atan((slope2 - slope1) / (1 + slope1 * slope2)) # Convert the angle to degrees angle_deg = math.degrees(angle_rad) return angle_deg # Example Usage line1 = (10, 10, 50, 50) line2 = (20, 20, 60, 10) angle = calculate_angle(line1, line2) print(f"The angle between the lines is: {angle:.2f} degrees") 5. Visualization: □ Overlay the detected lines onto the original image for visual confirmation of the angle measurement. Code Snippet: import cv2 import math # Load the image img = cv2.imread("wire_image.jpg") # Convert to grayscale gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Apply Gaussian blur blurred = cv2.GaussianBlur(gray, (5, 5), 0) # Perform Canny edge detection edges = cv2.Canny(blurred, 50, 150) # Apply Hough Line Transform lines = cv2.HoughLines(edges, 1, math.pi / 180, 200) # Select two lines intersecting at the desired point line1 = lines[0][0] line2 = lines[1][0] # Calculate the angle between the lines angle = calculate_angle(line1, line2) # Draw the lines on the original image cv2.line(img, (line1[0], line1[1]), (line1[2], line1[3]), (0, 0, 255), 2) cv2.line(img, (line2[0], line2[1]), (line2[2], line2[3]), (0, 0, 255), 2) # Display the image with the detected lines cv2.imshow("Result", img) • The code snippet utilizes OpenCV functions like cvtColor, GaussianBlur, Canny, and HoughLines for image preprocessing and edge detection. • The calculate_angle function takes two lines represented as tuples and returns the angle between them in degrees. • The lines are drawn on the original image for visual confirmation of the angle measurement. Important Considerations • Calibration: Ensure the accuracy of your angle measurement by calibrating your camera or image acquisition system. • Lighting: Consistent lighting is crucial for accurate edge detection and subsequent angle calculation. • Wire Thickness: The accuracy of angle measurement might depend on the thickness of the wire. Thicker wires offer more distinct edges, making the detection process easier. • Curve Complexity: The algorithm works best for relatively smooth curves. Highly complex curves might require further processing or alternative techniques. Practical Applications This technique of measuring the angle of a curved wire using OpenCV has numerous applications in various fields: • Manufacturing: Quality control in manufacturing processes involving wires or cables. • Robotics: Precise positioning and manipulation of objects using wire-based systems. • Medical Imaging: Analysis of medical images containing curved structures, like blood vessels or arteries. Measuring the angle of a curved wire using OpenCV is a powerful technique with diverse applications. This article provides a comprehensive guide, from image preprocessing to angle calculation, empowering you to implement this method effectively.
{"url":"https://laganvalleydup.co.uk/post/measuring-angle-of-a-curve-wire-with-open-cv","timestamp":"2024-11-08T21:34:50Z","content_type":"text/html","content_length":"85272","record_id":"<urn:uuid:1b084328-3d27-423f-b84a-aad5326ad09d>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00366.warc.gz"}
6.03 Calorimetry 1. Heat water in a pan or beaker until it is boiling vigorously. 2. While you are waiting for the water to boil, measure the mass of the metal with your scale. Remember to report your answer to one more decimal place than what is marked off by the scale. 3. Once the water on the stove is boiling, drop the metal into the pot and let it sit there for about 5 minutes. This will heat the metal to the temperature of the boiling water (100.0°C). 4. While the metal is heating, nest one Styrofoam cup inside the other. This will be your calorimeter. Since Styrofoam is a good insulator, 2 Styrofoam cups make an excellent calorimeter. 5. Measure the mass of this calorimeter with your scale. After you do this, fill the calorimeter about three-quarters …show more content… Stir the water carefully with your thermometer and periodically read the temperature without lifting the thermometer out of the water. Continue this process until the temperature stops increasing. Write down the final temperature. 8. Now you can use equation 12.1 to determine how much heat the metal transferred to the water. In this case, ignore the heat absorbed by the calorimeter. Use the change in temperature in °C, the mass of the water in grams, and the specific heat of water as 4.184 J/(g · °C). This will then put your answer in Joules. Be sure to use the correct number of significant figures in determining the heat absorbed by the water. 9. Now you can use equation 12.3. Since we are ignoring the calorimeter in the experiment, q(calorimeter) = 0. You calculated q(water) in the previous step, so you can determine q(metal). It will come out negative because the metal lost energy. 10. Once you have q(metal), you can rearrange equation 12.3 to calculate the specific heat of the metal. To do that, you need to know the mass and ΔT of the metal. You measured its mass. What is its ΔT? Since the metal was in boiling water, its initial temperature was 100.0°C. Since it was in contact with the water and the calorimeter at the end of the experiment, its final temperature was the same as the final temperature of the water. With those 2 numbers, you can calculate ΔT. This number will also turn out negative, canceling the negative sign on the
{"url":"https://www.cram.com/essay/6-03-Calorimetry/F04243A1A2720E33","timestamp":"2024-11-05T01:10:13Z","content_type":"text/html","content_length":"81791","record_id":"<urn:uuid:83a17830-9aab-4ff4-a85a-cd9146083089>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00511.warc.gz"}
Translunar Academy Many questions come to mind. “What would you say are the things that make people most vulnerable to obfuscation techniques?” you say. “Much like in the field of evocation, where your shield is most effective when focused on the specific variables of the attack, obfuscation also relies on slipping past your defenses. So it is most effective when the target system is not prepared for it, especially if it is actively prepared for something else,” Haze says. “So like, if I was dissociated and defending against æthereal attacks, I would be vulnerable to something directed at me from the surface?” you say. “An excellent example. Changing our state of consciousness drastically alters our defensive strategies. This is why we always have a body guard for dissociated technopaths on a military op.” “So how exactly does an obfuscation program, for instance a simple sensory echo, work differently on the mind in different states of consciousness?” you say. “To answer that you must consider how the mind interprets senses in the first place. An echo is effective precisely because the mind reads it as real sensory data, it cannot tell the difference. It really comes down to how our systems operate. We are technopaths, our systems are a union of flesh and machine. The flesh brain is highly tuned to want to see things it recognizes--it draws on your sensory memory to process the data your senses send it, rather than just reinterpreting it every time. It expects to perceive something you have a memory of perceiving before. Suppress your sensory memory, and you will quickly see how much you rely on it to navigate the world around you. “Interestingly, our machine side often functions much the same way. The quickest way to process sensory data is often through recognition of existing data--pattern recognition that leads us to the conclusion that we are most likely seeing what the data most closely resembles. This way, our combined system processes the data much faster, and uses less energy, but it also introduces an opportunity for error. However, it has a high enough accuracy rate that it generally outweighs the alternative of reprocessing virtually the same data in every frame, because as we technopaths know, in a competition between two systems, the faster one will tend to win more than the slower, more accurate one. Am I making sense, Aydan?” “Oh yes, this is fascinating.” “So, if we know that all systems tend to prefer this approach, we can exploit the known weaknesses of that approach. In a hallucinatory state, such as dissociation in the æther, we tend to be more vulnerable than in our surface conscious state, because our system is receiving so much more data than it can on the surface. It is always pushing against the limit of how quickly we can process it, which forces us to fall back on pattern recognition more and rely on logical interpretation less. Especially in the æther--where your mind lacks any of the touchstones like motion, light, or sound that most of our minds depend on for orienting themselves in the world--your system is grasping for useful details, and ready to believe them. This is exactly how you trick yourself into standing on a surface in the æther without gravity or rotation, how you can see and hear where there is no light or sound. “Therefore when you are in that state, if you are a good technopath, you are focused on defending your mind from the vulnerabilities of that state. This naturally leaves you exposed to sensory data injected from the surface--you cannot tell the difference, so you do not realize it is slipping past your focused defenses. Similarly if you are in normal consciousness, and focused on defending yourself from surface-tech threats, sensory data injected ætherside would be your weakness,” Haze says. “So if my system naturally has this weakness of wanting to believe what it perceives, how can I tell what is real from what is illusory?” you say. “You can’t rely on your senses for that. You must think beyond them, approach what you perceive with logic instead. One good example is navigating a dream,” they say. “Oh, really?” “The sleep state is similar to dissociation in many ways, and shares many of the same vulnerabilities. This is why ObSpecs sometimes attack our targets when they are asleep, and take advantage of the phenomenon of dreams. This makes the act of defending against ObTech while in a dissociative state not fundamentally different from the practice of lucid dreaming. Have you ever done that, Aydan?”
{"url":"https://translunar.academy/fic/post/362","timestamp":"2024-11-12T05:31:09Z","content_type":"text/html","content_length":"11101","record_id":"<urn:uuid:682e3f31-b222-446f-8c19-f595c02eb571>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00502.warc.gz"}
Mini-Workshop “Calculus of Variations and Functional Inequalities” Date: Wed. May 25, 2022 Organized by: FAU DCN-AvH, Chair for Dynamics, Control and Numerics – Alexander von Humboldt Professorship at FAU Erlangen-Nürnberg (Germany) – Alexander von Humboldt Professorship at FAU Erlangen-Nürnberg (Germany) Title: Mini-workshop “Calculus of Variations and Functional Inequalities”” This is a hybrid event (online & on-site) • Online: Join via Zoom meeting link Meeting ID: 682 9425 7970 | PIN: 937764 • On site: Felix Klein building. Room 03.323 Friedrich-Alexander-Universität Erlangen-Nürnberg. Cauestrasse 11, 91058 – Erlangen, Bavaria (Germany) Tobias König, Institut de Mathématiques de Jussieu, Paris Rive Gauche “The fractional Brezis–Nirenberg problem in low dimensions. Critical functions and blow-up asymptotics” Abstract. The classical Brezis–Nirenberg problem asks for the existence, respectively non-existence, of positive solutions u to -\Delta u + a u = u^\frac{N+2}{N-2} on some domain \Omega \subset \R^N with zero Dirichlet boundary conditions, depending on the choice of a \in C(\overline{\Omega}). I will begin by discussing this problem, with emphasis on the special role of dimension N = 3 for its I will then introduce the fractional version of the Brezis–Nirenberg problem involving the fractional Laplacian (-\Delta)^s with s \in (0,1) and the corresponding critical exponent \frac{N+2s}{N-2s}. It turns out that the problem now behaves specially in dimensions N \in (2s, 4s). For such dimensions, I will present some recent results joint with N. De Nitti (FAU DCN-AvH). Firstly, we characterize the functions a for which an energy-minimizing solution exists in terms of the Green’s function of (-\Delta)^s + a, thus extending a well-known result for s =1 due to Druet. Secondly, we give a precise description of the concentration behavior of minimizing solutions u_{\epsilon} associated to functions a_{\epsilon} tending to some critical a. Federico Glaudo, ETH Zürich “On the sharp stability of critical points of the Sobolev inequality” Abstract. The unique minimizers of the Sobolev inequality in R^n are known to be the Talenti bubbles, a two parameters (position and concentration) family of functions. As a consequence, the Talenti bubbles solve the associated Euler-Lagrange equation \Delta u + u^{2^*-1} = 0 in R^n. If u : R^n \to R is a sum of “almost independent” bubbles, then u “almost solves” the Euler-Lagrange equation, that is |\Delta u + u^{2^*-1}|_{H^{-1}} \ll 1. M. Struwe proved the converse in the 80s, i.e., that if a function u satisfies |\Delta u + u^{2^*-1}|_{H^{-1}} \ll 1 then u is close in H^1 to a sum of almost independent bubbles. With an application to the fast diffusion equation in mind, we will discuss the sharp quantitative stability of Struwe’s result. We will present various recent (sharp quantitative) estimates of the distance (in H^1) between u and the manifold of sum of Talenti bubbles with the quantity |\Delta u + u^{2^*-1}|_{H^{-1}}. The unexpected and novel feature is that the sharp exponent in these estimates depends on the dimension n. This talk is based on a joint work with A. Figalli. Previous FAU DCN-AvH Workshops: If you like this, you don’t want to miss out our upcoming events!
{"url":"https://cmc.deusto.eus/calculus-of-variations-and-functional-inequalities/","timestamp":"2024-11-05T02:58:16Z","content_type":"text/html","content_length":"87257","record_id":"<urn:uuid:7d6f1afe-4a27-4215-9c15-2ac43216bf78>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00481.warc.gz"}
Re: Limiting lengths of all sequences for TLC model checking [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: Limiting lengths of all sequences for TLC model checking Here's a little more detail to explain Stephan's response. TLC has a mechanism for replacing a definition in a module with Java methods that TLC uses to evaluate expressions containing the defined operator. This mechanism is used for the definition of Seq in the standard Sequences module. It allows TLC to evaluate an _expression_ like <<-42>> \in Seq(Nat) even though TLC can't evaluate Seq (Nat). In the Sequences module, Seq is defined in terms of Nat. However, because TLC does not use that definition in evaluating expressions containing Seq, overriding the definition of Nat does not change how TLC evaluates expressions containing Seq. I believe that all the definitions in most of the standard modules are replaced by Java code in this way. (I'm not sure about the Bags module.) I also believe that overriding a definition in a model does what it should. In your example, overriding the definition of Seq with the definition that appears in the Sequences model should work. Thus you could use the definition Seq(S) == UNION { [1 .. n -> S] : n \in Nat } in the model. (The Nat in this definition will be the one defined by your overriding of the definition of Nat.) I haven't tested this; please let us know if anything I've written is incorrect.
{"url":"https://discuss.tlapl.us/msg01803.html","timestamp":"2024-11-14T11:59:09Z","content_type":"text/html","content_length":"5387","record_id":"<urn:uuid:e05ff920-e4b7-43ab-b5cc-5819702d7bf3>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00142.warc.gz"}
In number theory, the partition function p(n) represents the number of possible partitions of a non-negative integer n. For instance, p(4) = 5 because the integer 4 has the five partitions 1 + 1 + 1 + 1, 1 + 1 + 2, 1 + 3, 2 + 2, and 4. The values ${\displaystyle p(1),\dots ,p(8)}$ of the partition function (1, 2, 3, 5, 7, 11, 15, and 22) can be determined by counting the Young diagrams for the partitions of the numbers from 1 to 8. No closed-form expression for the partition function is known, but it has both asymptotic expansions that accurately approximate it and recurrence relations by which it can be calculated exactly. It grows as an exponential function of the square root of its argument. The multiplicative inverse of its generating function is the Euler function; by Euler's pentagonal number theorem this function is an alternating sum of pentagonal number powers of its argument. Srinivasa Ramanujan first discovered that the partition function has nontrivial patterns in modular arithmetic, now known as Ramanujan's congruences. For instance, whenever the decimal representation of n ends in the digit 4 or 9, the number of partitions of n will be divisible by 5. Definition and examples For a positive integer n, p(n) is the number of distinct ways of representing n as a sum of positive integers. For the purposes of this definition, the order of the terms in the sum is irrelevant: two sums with the same terms in a different order are not considered to be distinct. By convention p(0) = 1, as there is one way (the empty sum) of representing zero as a sum of positive integers. Furthermore p(n) = 0 when n is negative. The first few values of the partition function, starting with p(0) = 1, are: 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627, 792, 1002, 1255, 1575, 1958, 2436, 3010, 3718, 4565, 5604, ... (sequence in the Some exact values of p(n) for larger values of n include:^[1] {\displaystyle {\begin{aligned}p(100)&=190,\!569,\!292\\p(1000)&=24,\!061,\!467,\!864,\!032,\!622,\!473,\!692,\!149,\!727,\!991\approx 2.40615\times 10^{31}\\p(10000)&=36,\!167,\!251,\!325,\!\dots ,\!906,\!916,\!435,\!144\approx 3.61673\times 10^{106}\end{aligned}}} Generating function Using Euler's method to find p(40): A ruler with plus and minus signs (grey box) is slid downwards, the relevant terms added or subtracted. The positions of the signs are given by differences of alternating natural (blue) and odd (orange) numbers. In the SVG file, hover over the image to move the ruler. The generating function for p(n) is given by^[2] {\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }p(n)x^{n}&=\prod _{k=1}^{\infty }\left({\frac {1}{1-x^{k}}}\right)\\&=\left(1+x+x^{2}+x^{3}+\cdots \right)\left(1+x^{2}+x^{4}+x^{6}+\cdots \right)\left(1+x^{3}+x^{6}+x^{9}+\cdots \right)\cdots \\&={\frac {1}{1-x-x^{2}+x^{5}+x^{7}-x^{12}-x^{15}+x^{22}+x^{26}-\cdots }}\\&=1{\Big /}\sum _{k=-\infty } ^{\infty }(-1)^{k}x^{k(3k-1)/2}.\end{aligned}}} The equality between the products on the first and second lines of this formula is obtained by expanding each factor ${\displaystyle 1/(1-x^{k})}$ into the geometric series ${\displaystyle (1+x^{k}+x^{2k}+x^{3k}+\cdots ).}$ To see that the expanded product equals the sum on the first line, apply the distributive law to the product. This expands the product into a sum of monomials of the form ${\displaystyle x^{a_{1}}x^{2a_{2}}x^{3a_{3}}\cdots }$ for some sequence of coefficients ${\displaystyle a_{i}}$ , only finitely many of which can be non-zero. The exponent of the term is ${\textstyle n=\sum ia_{i}}$ , and this sum can be interpreted as a representation of ${\displaystyle n}$ as a partition into ${\displaystyle a_{i}}$ copies of each number ${\displaystyle i}$ . Therefore, the number of terms of the product that have exponent ${\displaystyle n}$ is exactly ${\displaystyle p(n)}$ , the same as the coefficient of ${\ displaystyle x^{n}}$ in the sum on the left. Therefore, the sum equals the product. The function that appears in the denominator in the third and fourth lines of the formula is the Euler function. The equality between the product on the first line and the formulas in the third and fourth lines is Euler's pentagonal number theorem. The exponents of ${\displaystyle x}$ in these lines are the pentagonal numbers ${\displaystyle P_{k}=k(3k-1)/2}$ for ${\displaystyle k\in \ {0,1,-1,2,-2,\dots \}}$ (generalized somewhat from the usual pentagonal numbers, which come from the same formula for the positive values of ${\displaystyle k}$ ). The pattern of positive and negative signs in the third line comes from the term ${\displaystyle (-1)^{k}}$ in the fourth line: even choices of ${\displaystyle k}$ produce positive terms, and odd choices produce negative terms. More generally, the generating function for the partitions of ${\displaystyle n}$ into numbers selected from a set ${\displaystyle A}$ of positive integers can be found by taking only those terms in the first product for which ${\displaystyle k\in A}$ . This result is due to Leonhard Euler.^[3] The formulation of Euler's generating function is a special case of a ${\displaystyle q}$ -Pochhammer symbol and is similar to the product formulation of many modular forms, and specifically the Dedekind eta function. Recurrence relations The same sequence of pentagonal numbers appears in a recurrence relation for the partition function:^[4] {\displaystyle {\begin{aligned}p(n)&=\sum _{k\in \mathbb {Z} \setminus \{0\}}(-1)^{k+1}p(n-k (3k-1)/2)\\&=p(n-1)+p(n-2)-p(n-5)-p(n-7)+p(n-12)+p(n-15)-p(n-22)-\cdots \end{aligned}}} As base cases, ${\displaystyle p(0)}$ is taken to equal ${\displaystyle 1}$ , and ${\displaystyle p(k)}$ is taken to be zero for negative ${\displaystyle k}$ . Although the sum on the right side appears infinite, it has only finitely many nonzero terms, coming from the nonzero values of ${\displaystyle k}$ in the range ${\displaystyle -{\frac {{\sqrt {24n+1}}-1}{6}}\leq k\leq {\frac {{\sqrt {24n+1}}+1}{6}}.}$ The recurrence relation can also be written in the equivalent form ${\displaystyle p(n)=\sum _ {k=1}^{\infty }(-1)^{k+1}{\big (}p(n-k(3k-1)/2)+p(n-k(3k+1)/2){\big )}.}$ Another recurrence relation for ${\displaystyle p(n)}$ can be given in terms of the sum of divisors function σ:^[5] ${\displaystyle p(n)={\frac {1}{n}}\sum _{k=0}^{n-1}\sigma (n-k)p(k).}$ If ${\ displaystyle q(n)}$ denotes the number of partitions of ${\displaystyle n}$ with no repeated parts then it follows by splitting each partition into its even parts and odd parts, and dividing the even parts by two, that^[6] ${\displaystyle p(n)=\sum _{k=0}^{\left\lfloor n/2\right\rfloor }q(n-2k)p(k).}$ Srinivasa Ramanujan is credited with discovering that the partition function has nontrivial patterns in modular arithmetic. For instance the number of partitions is divisible by five whenever the decimal representation of ${\displaystyle n}$ ends in the digit 4 or 9, as expressed by the congruence^[7] ${\displaystyle p(5k+4)\equiv 0{\pmod {5}}}$ For instance, the number of partitions for the integer 4 is 5. For the integer 9, the number of partitions is 30; for 14 there are 135 partitions. This congruence is implied by the more general identity ${\displaystyle \sum _{k=0}^{\infty }p (5k+4)x^{k}=5~{\frac {(x^{5})_{\infty }^{5}}{(x)_{\infty }^{6}}},}$ also by Ramanujan,^[8]^[9] where the notation ${\displaystyle (x)_{\infty }}$ denotes the product defined by ${\displaystyle (x)_{\ infty }=\prod _{m=1}^{\infty }(1-x^{m}).}$ A short proof of this result can be obtained from the partition function generating function. Ramanujan also discovered congruences modulo 7 and 11:^[7] {\displaystyle {\begin{aligned}p(7k+5)&\equiv 0{\pmod {7}},\\p(11k+6)&\equiv 0{\pmod {11}}.\end{aligned}}} The first one comes from Ramanujan's identity^[9] ${\displaystyle \sum _{k=0}^{\infty }p(7k+5)x^{k}=7~{\frac {(x^{7})_{\infty }^{3}}{(x)_{\infty }^{4}}}+49x~{\frac {(x^{7})_{\infty }^{7}}{(x)_{\infty }^{8}}}.}$ Since 5, 7, and 11 are consecutive primes, one might think that there would be an analogous congruence for the next prime 13, ${\displaystyle p(13k+a)\equiv 0{\pmod {13}}}$ for some a. However, there is no congruence of the form ${\displaystyle p(bk+a)\equiv 0{\pmod {b}}}$ for any prime b other than 5, 7, or 11.^[10] Instead, to obtain a congruence, the argument of ${\displaystyle p}$ should take the form ${\displaystyle cbk+a}$ for some ${\displaystyle c>1}$ . In the 1960s, A. O. L. Atkin of the University of Illinois at Chicago discovered additional congruences of this form for small prime moduli. For example: ${\displaystyle p(11^{3}\cdot 13\cdot k+237)\equiv 0{\pmod {13}}.}$ Ken Ono (2000) proved that there are such congruences for every prime modulus greater than 3. Later, Ahlgren & Ono (2001) showed there are partition congruences modulo every integer coprime to 6.^[11 Approximation formulas Approximation formulas exist that are faster to calculate than the exact formula given above. An asymptotic expression for p(n) is given by ${\displaystyle p(n)\sim {\frac {1}{4n{\sqrt {3}}}}\exp \left({\pi {\sqrt {\frac {2n}{3}}}}\right)}$ as ${\displaystyle n\to \infty }$ . This asymptotic formula was first obtained by G. H. Hardy and Ramanujan in 1918 and independently by J. V. Uspensky in 1920. Considering ${\displaystyle p(1000)}$ , the asymptotic formula gives about ${\displaystyle 2.4402\times 10^{31}}$ , reasonably close to the exact answer given above (1.415% larger than the true value). Hardy and Ramanujan obtained an asymptotic expansion with this approximation as the first term:^[13] ${\displaystyle p(n)\sim {\frac {1}{2\pi {\sqrt {2}}}}\sum _{k=1}^{v}A_{k}(n){\sqrt {k}}\cdot {\ frac {d}{dn}}\left({{\frac {1}{\sqrt {n-{\frac {1}{24}}}}}\exp \left[{{\frac {\pi }{k}}{\sqrt {{\frac {2}{3}}\left(n-{\frac {1}{24}}\right)}}}\,\,\,\right]}\right),}$ where ${\displaystyle A_{k}(n)=\ sum _{0\leq m<k,\;(m,k)=1}e^{\pi i\left(s(m,k)-2nm/k\right)}.}$ Here, the notation ${\displaystyle (m,k)=1}$ means that the sum is taken only over the values of ${\displaystyle m}$ that are relatively prime to ${\displaystyle k}$ . The function ${\displaystyle s(m,k)}$ is a Dedekind sum. The error after ${\displaystyle v}$ terms is of the order of the next term, and ${\displaystyle v}$ may be taken to be of the order of ${\displaystyle {\sqrt {n}}}$ . As an example, Hardy and Ramanujan showed that ${\displaystyle p(200)}$ is the nearest integer to the sum of the first ${\displaystyle v=5}$ terms of the series.^[13] In 1937, Hans Rademacher was able to improve on Hardy and Ramanujan's results by providing a convergent series expression for ${\displaystyle p(n)}$ . It is^[14]^[15] ${\displaystyle p(n)={\frac {1} {\pi {\sqrt {2}}}}\sum _{k=1}^{\infty }A_{k}(n){\sqrt {k}}\cdot {\frac {d}{dn}}\left({{\frac {1}{\sqrt {n-{\frac {1}{24}}}}}\sinh \left[{{\frac {\pi }{k}}{\sqrt {{\frac {2}{3}}\left(n-{\frac {1}{24}} The proof of Rademacher's formula involves Ford circles, Farey sequences, modular symmetry and the Dedekind eta function. It may be shown that the ${\displaystyle k}$ th term of Rademacher's series is of the order ${\displaystyle \exp \left({\frac {\pi }{k}}{\sqrt {\frac {2n}{3}}}\right),}$ so that the first term gives the Hardy–Ramanujan asymptotic approximation. Paul Erdős (1942) published an elementary proof of the asymptotic formula for ${\displaystyle p(n)}$ .^[16]^[17] Techniques for implementing the Hardy–Ramanujan–Rademacher formula efficiently on a computer are discussed by Johansson (2012), who shows that ${\displaystyle p(n)}$ can be computed in time ${\ displaystyle O(n^{1/2+\varepsilon })}$ for any ${\displaystyle \varepsilon >0}$ . This is near-optimal in that it matches the number of digits of the result.^[18] The largest value of the partition function computed exactly is ${\displaystyle p(10^{20})}$ , which has slightly more than 11 billion digits.^[19] Strict partition function Definition and properties A partition in which no part occurs more than once is called strict, or is said to be a partition into distinct parts. The function q(n) gives the number of these strict partitions of the given sum n . For example, q(3) = 2 because the partitions 3 and 1 + 2 are strict, while the third partition 1 + 1 + 1 of 3 has repeated parts. The number q(n) is also equal to the number of partitions of n in which only odd summands are permitted.^[20] Example values of q(n) and associated partitions n q(n) Strict partitions Partitions with only odd parts 0 1 () empty partition () empty partition 2 1 2 1+1 3 2 1+2, 3 1+1+1, 3 4 2 1+3, 4 1+1+1+1, 1+3 5 3 2+3, 1+4, 5 1+1+1+1+1, 1+1+3, 5 6 4 1+2+3, 2+4, 1+5, 6 1+1+1+1+1+1, 1+1+1+3, 3+3, 1+5 7 5 1+2+4, 3+4, 2+5, 1+6, 7 1+1+1+1+1+1+1, 1+1+1+1+3, 1+3+3, 1+1+5, 7 8 6 1+3+4, 1+2+5, 3+5, 2+6, 1+7, 8 1+1+1+1+1+1+1+1, 1+1+1+1+1+3, 1+1+3+3, 1+1+1+5, 3+5, 1+7 9 8 2+3+4, 1+3+5, 4+5, 1+2+6, 3+6, 2+7, 1+8, 9 1+1+1+1+1+1+1+1+1, 1+1+1+1+1+1+3, 1+1+1+3+3, 3+3+3, 1+1+1+1+5, 1+3+5, 1+1+7, 9 Generating function The generating function for the numbers q(n) is given by a simple infinite product:^[21] ${\displaystyle \sum _{n=0}^{\infty }q(n)x^{n}=\prod _{k=1}^{\infty }(1+x^{k})=(x;x^{2})_{\infty }^{-1},}$ where the notation ${\displaystyle (a;b)_{\infty }}$ represents the Pochhammer symbol ${\displaystyle (a;b)_{\infty }=\prod _{k=0}^{\infty }(1-ab^{k}).}$ From this formula, one may easily obtain the first few terms (sequence A000009 in the OEIS): ${\displaystyle \sum _{n=0}^{\infty }q(n)x^{n}=1+1x+1x^{2}+2x^{3}+2x^{4}+3x^{5}+4x^{6}+5x^{7}+6x^{8}+8x^{9}+10x^{10}+\ldots .}$ This series may also be written in terms of theta functions as ${\displaystyle \sum _{n=0}^{\infty }q(n)x^{n}=\vartheta _{00}(x)^{1/6}\vartheta _{01}(x)^{-1/3}{\biggl \{}{\frac {1}{16\,x}}{\bigl [}\vartheta _{00}(x)^{4}-\ vartheta _{01}(x)^{4}{\bigr ]}{\biggr \}}^{1/24},}$ where ${\displaystyle \vartheta _{00}(x)=1+2\sum _{n=1}^{\infty }x^{n^{2}}}$ and ${\displaystyle \vartheta _{01}(x)=1+2\sum _{n=1}^{\infty }(-1)^ {n}x^{n^{2}}.}$ In comparison, the generating function of the regular partition numbers p(n) has this identity with respect to the theta function: ${\displaystyle \sum _{n=0}^{\infty }p(n)x^{n}=(x;x) _{\infty }^{-1}=\vartheta _{00}(x)^{-1/6}\vartheta _{01}(x)^{-2/3}{\biggl \{}{\frac {1}{16\,x}}{\bigl [}\vartheta _{00}(x)^{4}-\vartheta _{01}(x)^{4}{\bigr ]}{\biggr \}}^{-1/24}.}$ Identities about strict partition numbers Following identity is valid for the Pochhammer products: ${\displaystyle (x;x)_{\infty }^{-1}=(x^{2};x^{2})_{\infty }^{-1}(x;x^{2})_{\infty }^{-1}}$ From this identity follows that formula: ${\displaystyle {\biggl [}\sum _{n=0}^{\infty }p(n)x^{n}{\biggr ]}={\biggl [}\sum _{n=0}^{\infty }p(n)x^{2n}{\biggr ]}{\biggl [}\sum _{n=0}^{\infty }q(n)x^{n}{\biggr ]}}$ Therefore those two formulas are valid for the synthesis of the number sequence p(n): ${\displaystyle p(2n)=\sum _{k=0}^{n}p(n-k)q(2k)}$ ${\displaystyle p(2n+1)=\sum _{k=0}^{n}p(n-k)q(2k+1)}$ In the following, two examples are accurately executed: ${\displaystyle p(8)=\sum _{k=0}^{4}p(4-k)q(2k)=}$ ${\displaystyle =p(4)q(0)+p(3)q(2)+p(2)q(4)+p(1)q(6)+p(0)q(8)=}$ ${\displaystyle =5\times 1+3\times 1+2\times 2+1\times 4+1\times 6=22}$ ${\displaystyle p(9)=\sum _{k=0}^{4}p(4-k)q(2k+1)=}$ ${\displaystyle =p(4)q(1)+p(3)q(3)+p(2)q(5)+p(1)q(7)+p(0)q(9)=}$ ${\displaystyle =5\times 1+3\times 2+2\times 3+1\times 5+1\times 8=30}$ Restricted partition function More generally, it is possible to consider partitions restricted to only elements of a subset A of the natural numbers (for example a restriction on the maximum value of the parts), or with a restriction on the number of parts or the maximum difference between parts. Each particular restriction gives rise to an associated partition function with specific properties. Some common examples are given below. Euler and Glaisher's theorem Two important examples are the partitions restricted to only odd integer parts or only even integer parts, with the corresponding partition functions often denoted ${\displaystyle p_{o}(n)}$ and ${\ displaystyle p_{e}(n)}$ . A theorem from Euler shows that the number of strict partitions is equal to the number of partitions with only odd parts: for all n, ${\displaystyle q(n)=p_{o}(n)}$ . This is generalized as Glaisher's theorem, which states that the number of partitions with no more than d-1 repetitions of any part is equal to the number of partitions with no part divisible by d. Gaussian binomial coefficient If we denote ${\displaystyle p(N,M,n)}$ the number of partitions of n in at most M parts, with each part smaller or equal to N, then the generating function of ${\displaystyle p(N,M,n)}$ is the following Gaussian binomial coefficient: ${\displaystyle \sum _{n=0}^{\infty }p(N,M,n)q^{n}={N+M \choose M}_{q}={\frac {(1-q^{N+M})(1-q^{N+M-1})\cdots (1-q^{N+1})}{(1-q)(1-q^{2})\cdots (1-q^{M})}}}$ Some general results on the asymptotic properties of restricted partition functions are known. If p[A](n) is the partition function of partitions restricted to only elements of a subset A of the natural numbers, then: If A possesses positive natural density α then ${\displaystyle \log p_{A}(n)\sim C{\sqrt {\alpha n}}}$ , with ${\displaystyle C=\pi {\sqrt {\frac {2}{3}}}}$ and conversely if this asymptotic property holds for p[A](n) then A has natural density α. This result was stated, with a sketch of proof, by Erdős in 1942.^[16] If A is a finite set, this analysis does not apply (the density of a finite set is zero). If A has k elements whose greatest common divisor is 1, then ${\displaystyle p_{A}(n)=\left(\prod _{a\in A}a^{-1}\right)\cdot {\frac {n^{k-1}}{(k-1)!}}+O(n^{k-2}).}$ External links • First 4096 values of the partition function
{"url":"https://www.knowpia.com/knowpedia/Partition_function_(number_theory)","timestamp":"2024-11-09T17:26:17Z","content_type":"text/html","content_length":"332389","record_id":"<urn:uuid:b70d1681-bc2a-49b4-9a24-e12cc60bcc11>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00433.warc.gz"}
Conditional Formatting with SUMIF Function in Excel Conditional Formatting is one of the best methods for highlighting our data based on certain criteria. We can highlight the cells based on their value, or put a certain condition that the value in cells has to fulfill to be different from the rest of the cells in the range. In the example below, we will show how can we combine Conditional Formatting and SUMIF Function. Conditional Formatting Basics First thing first, we will create the table with sales data that will consist of different people, the month in which the sales were achieved, and the sales that were achieved in a particular month: For the next thing, we are going to select all the data we have in column A (all the salesperson names) and then go to the Home tab >> Styles >> Conditional Formatting >> New Rule: Under the new window, we will choose the last option- Use a formula to determine which cells to format, and then define the following formula: This is what our formula looks like in the Excel file: The reason why we use cell $E$2 is that we will use this cell to insert the names of different salespersons and to check their sales in cell F2. We will choose a proper format, in our case, it will be red background, by clicking Format and then choosing the appropriate color in the Fill tab: When we click OK two times, and insert the name “Margaret” in cell E2, this is the result we will get in our table: Using SUMIF Function Now we can use the SUMIF function to combine with the defined approach. Before that, we will define the Conditional Formatting for column C as well, to be dependable on the value in cell E2. We will use the same approach as before, and will insert a formula to determine the cells to format: For the cells that are matching these conditions, we will fill them with blue color. We will click OK twice again, to finish this step. Our table will be changed, and it will look like this: In cell F2, we will now insert the SUMIF formula, which will encompass our table and the value in cell E2. This is the formula we will insert in cell F2: 1 =SUMIF(A2:A17,E2,C2:C17) SUMIF has three parameters: 1) Range (in our case range A2:A17)- the location where our value should be searched for; 2) Criteria (value in cell E2)- value that we need to search in the range; 3) Sum_range (in our case, that will be column C)- the range that we want to add if the conditions are met. Value in our cell F2 is $626,573, which is exactly the correct amount for all the sales that Margaret achieved in the first four months of the year: All we need to do to find the total sales for the other salesperson is to change the name in cell E2. When we insert John, for example, total sales and formatting in our table will be adjusted: Using SUMIFS Function We can also use the SUMIFS formula to extract more useful data from our table. The structure of the SUMIFS sentence is a bit different than for SUMIF. SUMIFS formula will be inserted in cell G2, and we will define the following parameters: 1) Range C2:C17 as a sum_range; 2) First criteria range will be column A; 3) Criteria for the first range will be the value in cell E2; 4) Second criteria range will be the sales figures in column C; 5) Criteria for the second range will be “>200000” which means that we only want to show the sales figures over $200,000 for every person on a monthly level. We will use Rachel’s data (insert her name in cell E2), and this is what we will end up with: Although we have every figure for Rachel highlighted, the formula in cell G2 will omit the data in row number eight, as the figures in this row are smaller than the desired number ($200,000).
{"url":"https://officetuts.net/excel/examples/conditional-formatting-with-sumif-function/","timestamp":"2024-11-06T23:52:04Z","content_type":"text/html","content_length":"153685","record_id":"<urn:uuid:014fc1be-7a4e-4cab-8d18-370cef87a16f>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00418.warc.gz"}
Patch Antenna Design Goals I have implemented antennas in products fairly often, however I've never designed one so I thought it would be a fun personal project to see what I could do. Since this wasn't an official project for a company, I didn't have access to an RF chamber, full 3D simulation software, or a substrate with well controlled permittivity. I did have access to a VNA, without which designing an antenna would have been impractical. Given those limitations, I still wanted to see how far I could get. I chose the simplest type of antenna to start with: a patch antenna. My (fairly arbitrary) design goals were: • Use a PCB as its main component, since they're cheap and easy to design. • Input impedance of 50 Ω. • Transmit 90% of the input energy (S11 ≤ -10 dB) over the ISM band (2.40 - 2.48 GHz), which is what Bluetooth and Wi-Fi use. I'll have to ignore antenna losses since I don't know of a way to measure them without an RF chamber. Substrate Choice I've heard a few times that FR4 has a poorly controlled dielectric constant and that there are special RF grade substrates for PCBs. I wasn't looking to spend too much money on a side project, so I decided to see how far I could get with whatever the board house gave me. Design - Version 1 I found some design equations in the textbook Antenna Theory: Analysis and Design 3rd Edition, pp 816-820, equations 14-1 to 14-7. I created a spreadsheet titled Patch Antenna Calculator to calculate them (download here), which yielded the following results: relative dielectric constant of the substrate 4.7 (FR-4) dielectric height 1.6 mm frequency 2.44 GHz width 36.415 mm length 28.021 mm These results agree with an online calculator: emtalk.com/mpacalc. Dimensional drawing from the online calculator, linked above. I designed a PCB with these dimensions in Altium. To feed the antenna I chose a method I'd seen in a few different places, including the above picture: connect the waveguide directly to the edge of the antenna. This turned out to be a bad idea (more on that later). PCB Design - Version 1 The dimensions of the PCB and the ground plane were chosen somewhat arbitrarily. I'd read enough about waveguides to suspect that it should extend at least several times the thickness of the dielectric past the antenna to ensure most of the electric field lines connected to it cleanly. I went over that by quite a bit just to be safe since I wasn't imposing any size constraints on myself. The length of the waveguide feeding the patch antenna was chosen arbitrairly, since it, the connector, and the cable supplying the signal should all be 50 Ω until the patch. The edge connector details are: Digi-Key Part Number SAM8857-ND Manufacturer Samtec Inc. Manufacturer PN SMA-J-P-H-ST-EM1 Description SMA Connector Jack, Female Socket 50Ohm Board Edge, End Launch Solder price/unit @1 $4.33 (Nov 25, 2019, Digi-Key) The PCB was added to an order going to JLCPCB, so it only cost $2 to get 5 samples. Connecting the antenna to a VNA gave the following plots. Markers 1, 2, and 3 show the target frequency band. Markers 4 and 5 show unexpected resonances. It can be seen that, while the antenna is resonant at 2.44 GHz (Marker 2), S11 is only around -5 dB and the input impedance is 123 + j57 Ω, which are both way too high. Surprisingly, the antenna is also resonant at 3.767 GHz and 4.66 GHz, both of which show a better S11 and input impedance than the target frequency. The low S11 means that the energy at those two higher unexpected frequencies is disappearing into the antenna, but is it actually being radiated? The patch should be the wrong dimensions to do that well. Ideally this would be tested in an antenna chamber. Instead, I roughly tested if they were radiating by connecting antenna samples to both ports of the VNA and put them next to each other. 2.44 GHz had the highest S21 value, with the other two frequencies responding much more weakly. Since an antenna's transmit and recieve efficiency vs frequency is the same, this means that both the first antenna wasn't transmitting AND the second antenna wasn't recieving at those unwanted frequencies. The only other place that energy could be going was into antenna losses. Again, this test wasn't nearly as rigourous as using a test chamber, but it's what I had. So now that I had an idea what was happening, I had to figure out why. The next thing I did was to simulate the antenna in Keysight Genesys, yielding this S11 plot: It's resonant at 2.46 GHz instead of 2.44 GHz. It also shows the three resonant dips at roughly the same frequencies, although the higher frequency results differ wildly. Researching Further Searching online for answers led to the Antenna Theory website which had this to say: “Previously, the patch antenna was fed at the end. Since this typically yields a high input impedance, we would like to modify the feed. Since the current is low at the ends of a half-wave patch and increases in magnitude toward the center, the input impedance (Z=V/I) could be reduced if the patch was fed closer to the center." Also, from here: • The length of the patch L controls the resonant frequency • Increasing the height of the substrate increases the bandwidth • The width W controls the input impedance and the radiation pattern (wider = lower input impedance) Based on this information, I decided that a better way to feed the antenna was needed. Once again, antenna-theory.com had some good information, showing two promising methods: Inset Feed and Coaxial Cable Feed Inset Feed Here the transmission line is extended into the antenna by a distance R. This method seemed less ideal to me since it disrupts the geometry of the patch and I suspect that having the patch adjacent to the waveguide for the distance R would affect its impedance. One unanswered question was how wide those slots that are cut into the patch should be. I could have played around in the simulation to find potential dimensions, but Antenna Theory suggested a second feed method that seemed better. Coaxial Cable Feed Here a coax cable is stripped back in such a way that the outer conductor stops at the ground plane and the central conductor continues up to connect to the patch. The inner insulator ideally extends at least partway into the substrate in order to stop the inner conductor from contacting the ground plane. This method doesn't disrupt the geometry of the patch and there were no unexplained dimensions. It was also easier to test by drilling a hole in one of my samples as opposed to trying to cut away copper precisely. The only dimension I had to determine was the feed point inset distance R. Finding the Feed Location Antenna Theory gave an equation for the antenna input impeance as a function of the feed point inset distance R. It's explained here. $$Z_{in}(R)=\cos^2(\pi \frac{R}{L})Z_{in}(0)$$ Noting that the desired Zin is 50Ω and rearranging a bit: $$\frac{50\Omega}{Z_{in}(0)}=\cos^2(\pi \frac{R}{L})$$ Let's think about this equation a bit. Looking at the R/L term, R will never be larger than L, so R/L ≤ 1. Here's a graph of cos^2(πx) from x = 0 to 1. You can see that cos^2(πx) will never be larger than 1. Therefore we can write: In other words, Zin(0) will be between 50 and infinity ohms and have no imaginary component. In order to find the correct value of R I need to measure Zin(0), divide 50 by it, and then find what value of R gives that same value to make both sides of this equation equal. $$\frac{50\Omega}{Z_{in}(0)}=\cos^2(\pi \frac{R}{L})$$ In order to measure Zin(0) I removed the waveguide from one of my antenna samples, drilled a hole at the edge of the patch, and soldered on a coax cable. The available drill bits were a bit larger than I would have liked, so I tried to compensate for the hole in the ground plane by soldering the coax ground around its entire perimeter. The VNA plots looked like this: Looking at the smith chart, the impedance at 2.44GHz is 87.367 - j10.984 Ω. In other words, Zin(0) = 87.367 - j10.984 Ω. This gave me a bit of pause. The previous analysis of the above equation indicated that Zin(0) should be real, not complex. Was this just measurement error? Was this from calibrating the VNA at the wrong location? Was this caused by the little bit of coax center conductor that goes through the substrate for a bit before getting to the patch? At this point I was kind of out of my depth but decided to move forwards anyways to see how far I could get. The worst thing that could happen was wasting a few hours and a small amount of money. If I was doing this project for an employer I would have been trying to get some feedback from a more senior designer. I decieded to just ignore the imaginary part of Zin(0). I maybe should have taken the magnitude (88.05 Ω), but it and the real part were very close enough and at this point I'd lost any claim to precision due to the drilling. So, taking Zin(0) = 87.367 (and L = 28.021 mm), the equation becomes: $$\frac{50\Omega}{87.367\Omega}=0.5723=\cos^2(\frac{\pi R}{28.021 mm})$$ Drawing a line at 0.5723 across our graph of cos^2(πx) show that there's two values of x that give a value of cos^2(πx) = 0.5723. For a coaxial cable feed they're the same since they would result in the same antenna, just mirrored. The solution we'll use is x = 0.2269. So now we just need to solve x = 0.2269 = R/L for R to find the feed location. L = 28.021, so R = 6.36 mm In order to roughly test this, I took another antenna sample, removed the wave guide, then drilled a hole as precisely as I could 6.36 mm from the edge of the patch antenna. It wasn't as precise as I'd like, but it did give these results. Markers 1 and 3 show the target frequency range. I was very suprised at how much the performance improved. S11 had a minimum of -31 dB in the region I was targeting (at 2.412 GHz), which was great. The input impedance at 2.412 GHz was 52.47 + j1.39 Ω, close enough to 50Ω to make me happy. What wasn't great was that at the upper end of the frequency range, S11 was only -5.7dB. Ideally the entire S11 curve would shift over and flatten out a bit in order to be below -10dB over the entire To do that the patch should be made shorter (decrease L) to increase its resonant frequency and its thickness should be increased in order to increase the bandwidth. Future Work With this result I was happy to declare the first version of the antenna a partial success. I felt like I had learned enough to be able to modify the design in order to fix its remaining issues. In the future, if I ever get access to a VNA again, I'd like to make a second version of this project and try to meet the remaining design goals.
{"url":"http://adamgulyas.ca/projects/Patch_Antenna.html","timestamp":"2024-11-08T08:42:07Z","content_type":"text/html","content_length":"19113","record_id":"<urn:uuid:8dab593a-3dd4-4317-824b-84371e9a386f>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00682.warc.gz"}
Jump to navigation Jump to search In mathematics, a subalgebra is a subset of an algebra, closed under all its operations, and carrying the induced operations. "Algebra", when referring to a structure, often means a vector space or module equipped with an additional bilinear operation. Algebras in universal algebra are far more general: they are a common generalisation of all algebraic structures. Subalgebra can be a subset of both cases. Subalgebras for algebras over a ring or field[edit] A subalgebra of an algebra over a commutative ring or field is a vector subspace which is closed under the multiplication of vectors. The restriction of the algebra multiplication makes it an algebra over the same ring or field. This notion also applies to most specializations, where the multiplication must satisfy additional properties, e.g. to associative algebras or to Lie algebras. Only for unital algebras is there a stronger notion, of unital subalgebra, for which it is also required that the unit of the subalgebra be the unit of the bigger algebra. The 2×2-matrices over the reals form a unital algebra in the obvious way. The 2×2-matrices for which all entries are zero, except for the first one on the diagonal, form a subalgebra. It is also unital, but it is not a unital subalgebra. Subalgebras in universal algebra[edit] In universal algebra, a subalgebra of an algebra A is a subset S of A that also has the structure of an algebra of the same type when the algebraic operations are restricted to S. If the axioms of a kind of algebraic structure is described by equational laws, as is typically the case in universal algebra, then the only thing that needs to be checked is that S is closed under the operations. Some authors consider algebras with partial functions. There are various ways of defining subalgebras for these. Another generalization of algebras is to allow relations. These more general algebras are usually called structures, and they are studied in model theory and in theoretical computer science. For structures with relations there are notions of weak and of induced substructures. For example, the standard signature for groups in universal algebra is (•, ^−1, 1). (Inversion and unit are needed to get the right notions of homomorphism and so that the group laws can be expressed as equations.) Therefore, a subgroup of a group G is a subset S of G such that: • the identity e of G belongs to S (so that S is closed under the identity constant operation); • whenever x belongs to S, so does x^−1 (so that S is closed under the inverse operation); • whenever x and y belong to S, so does x • y (so that S is closed under the group's multiplication operation). • Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Berlin, New York: Springer-Verlag, ISBN 978-3-540-64243-5 • Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra, Berlin, New York: Springer-Verlag
{"url":"https://static.hlt.bme.hu/semantics/external/pages/Birkhoff_t%C3%A9tel/en.wikipedia.org/wiki/Subalgebra.html","timestamp":"2024-11-08T15:37:23Z","content_type":"text/html","content_length":"37215","record_id":"<urn:uuid:a8bfad1f-f003-4ba0-9623-08c574dec8ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00612.warc.gz"}
Compressive Stress | Strength, Effects & Measurement Compressive stress Explore the essentials of compressive stress and strength, their effects, measurement methods, and impact on material selection in engineering. Understanding Compressive Stress: Definition and Importance Compressive stress is a fundamental concept in the fields of engineering, materials science, and physics. It refers to the force per unit area that results from pushing or squeezing an object. This stress acts to reduce the volume of the material, and its understanding is crucial for ensuring the structural integrity and durability of various materials and structures. Strength Under Compression The strength of a material under compression is a critical property, often referred to as its compressive strength. This is the maximum compressive stress that a material can withstand without failure. Different materials have varying compressive strengths. For instance, concrete and stone exhibit high compressive strength and are widely used in construction for this reason. The compressive strength is typically measured in units like Pascals (Pa), Kilopascals (kPa), or Megapascals (MPa). Effects of Compressive Stress When a material is subjected to compressive stress, several effects can occur: • Elastic Deformation: Initially, the material undergoes elastic deformation, where it deforms but can return to its original shape once the load is removed. This is governed by Hooke’s Law, where the stress is proportional to the strain, represented as σ = Eε, where σ is stress, E is the modulus of elasticity, and ε is strain. • Plastic Deformation: If the stress exceeds the elastic limit, the material undergoes plastic deformation, where permanent deformation occurs. • Failure: At higher stress levels, materials may experience failure modes like cracking, crushing, or buckling. Measuring Compressive Stress Compressive stress is measured using specialized equipment like compression testing machines. These machines apply a load to a specimen and measure the corresponding deformation. The compressive strength is calculated by dividing the maximum load applied by the cross-sectional area of the specimen. Advanced techniques may also use strain gauges or stress-strain curves to provide a more comprehensive understanding of material behavior under compression. This understanding of compressive stress and strength is fundamental to material science and engineering, enabling the design of structures and components that can withstand various loads and pressures encountered in real-world applications. Factors Affecting Compressive Strength The compressive strength of a material is not a fixed value; rather, it is influenced by various factors. These include: • Material Composition: The inherent properties of the material, including its density, grain structure, and chemical composition, significantly impact its strength. • Age: In materials like concrete, age plays a crucial role. As concrete cures over time, its compressive strength typically increases. • Temperature and Environmental Conditions: Extreme temperatures and environmental conditions can affect material properties, potentially reducing compressive strength. • Manufacturing Process: The method of production, such as casting or molding, and the presence of imperfections like air bubbles or impurities, can also influence strength. Applications of Compressive Stress Analysis Understanding compressive stress is essential in various applications: • Building and Construction: Determining the appropriate materials for load-bearing structures. • Material Development: Innovating new materials with specific strength requirements. • Quality Control: Ensuring the reliability and safety of products in manufacturing processes. Compressive stress and strength are crucial concepts in engineering and materials science, dictating the usability and durability of materials in numerous applications. The ability to measure, understand, and predict the behavior of materials under compression is fundamental to designing safe, efficient, and long-lasting structures and components. As technology advances, the analysis of compressive stress continues to evolve, offering more sophisticated tools and methods for material characterization. This ongoing research and development not only enhance our understanding of material properties but also pave the way for innovative solutions in construction, manufacturing, and various other fields of engineering and science.
{"url":"https://modern-physics.org/compressive-stress/","timestamp":"2024-11-05T17:04:20Z","content_type":"text/html","content_length":"159304","record_id":"<urn:uuid:7cde950c-cc54-4582-a093-e892e88efb4b>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00796.warc.gz"}
Portfolio management is a long-tail game - GroundControl Portfolio management is a long-tail game Published on September 21, 2018, last updated on April 19, 2023 When looking at the success of startups, a common belief is that one third of the companies fail, one third return their money and one third of the companies become successful enough to really move the needle on your investment portfolio. But is that true? Correlation Ventures did research on all VC investments in the US between 2004 and 2013 to figure out the distribution of over 21,000 different investments. It turns out that close to 65% only returns up to 1x the initial investment, and only 10% returns an ROI bigger than 5x. So two third of the companies only return up to their initial investment and the other one third needs to make up for the loss of the rest. Using the Correlation Ventures research data as a benchmark we have created a calculator to help innovation managers to understand in how many ideas they need to invest in order to move the needle. You simply put in your total budget and the amount of ideas you’d like to invest in and out comes a prediction of your ROI. The calculator uses the Monte Carlo computational algorithm and runs over 20.000 scenarios to create a reliable outcome. It’s interesting to see in how many ideas you need to invest. That you should not put all your money in one investent may be common sense, but even 10 or 20 ideas are not enough. Portfolio management really is a long-tail game. Since one third of the ideas need to make up for the two third that “fail”, having more ideas makes it more likely to make money. When you invest € 500.000 in 10 ideas each, only 1 will return up to € 5.000.0000. You still have a 1 in 250 chance that that one startup returns more than € 25.000.000, but that chance is really slim. The simulation shows that there is a 35% chance on a € 3 million profit, but at the same time also a 25% chance you will lose € 2,5 million. But when you invest € 50.000 in 100 startups (same € 5 million budget), 7 of them will return up to € 500.000, two of those 10 up to € 1 million and if you are really lucky 1 more than € 10 million. In a best case scenario, the one third will get you € 15.500.000. That is enough to compensate for the € 3.300.000 (66x € 50.000) that you lost on the other two third of your portfolio and will give you a nice € 10 million profit on a € 5 million investment. Not bad! But that is the most lucky scenario. Running the simulation however shows a more likely profit of around € 7 million. [convertkit form=2570335] Why don’t you try the ROI Calculator yourself and see how well your portfolio will perform! It makes sense to run the calculator multiple times to see the effects of the Monte Carlo computational algorithm. You can clearly see the difference between making an initial investment of € 500.000 in 10 startups, or € 50.000 in a 100 startups. And that is without any stage-gated investments in place! The real money is in the double down on the investments that work, but that is a topic for next time. Timan Rebel Timan Rebel has over 20 years of experience as a startup founder and helps both independent and corporate startups find product/market fit. He has coached over 250+ startups in the past 12 years and is an expert in Lean Innovation and experiment design.
{"url":"https://togroundcontrol.com/blog/portfolio-management-is-a-long-tail-game/","timestamp":"2024-11-08T05:51:22Z","content_type":"text/html","content_length":"87481","record_id":"<urn:uuid:226e8f54-e5af-4293-9112-444c842e490f>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00571.warc.gz"}
Metadata about the event itself. | Includes the time left, and where the ball is. The week during the season in which this game is played. Usually an integer. Player statistics for football. | Includes the same elements as team statistics. Player statistics for football. | Includes the same elements as team statistics. Statistics about a particular team. | Holds elements that divide statistics into categories. The average yards gained per play. The number of plays this team has completed, including for offensive, defensive, and special teams. The number of plays this team has completed for which yards were gained, including for offensive, defensive, and special teams. The number of plays this team has completed for which yards were lost, including for offensive, defensive, and special teams. Average yard-line that the team started on per-drive. The number of timeouts a team has remaining, either in regulation or in overtime, whichever state the game is currently in. Derived by subtracting turnovers-giveaways from turnovers-takeaways. Statistics about a particular team or player's offensive performance. | Further breaks down stats into passing, rushing, etc. The total yards accumulated through offensive plays. The number of offensive plays. The average number of yards gained per play. The average number of yards gained via offensive plays per game. How team's offense compares with rest of league or conference. The average amount of time the team had possession per-drive. Number of times team has driven inside its opponent's 20 yard line. Derived by adding passes-interceptions plus fumbles-own-lost. Average-per-game for turnovers-giveaway. Usually, number of times QB has handed off, typically leading to a running play. Usually, number of times QB has pitched back to another player, typically leading to a running play. The number of tackles performed while on offense. The number of tackle assists performed while on offense. Statistics about a particular team or player's passing performance. | Used for quarterbacks and receivers. The number of passes attempted. The number of passes completed successfully. Average number of completed passes a game. The percentage of all passes that are completed successfully. The number of yards gained from passing. Subtracts out the sacks-against-yards-lost value of stats-american-football-sacks-against element. Amount of yards lost due to sacks and completed passing plays that ended behind the line of scrimmage. Gross passing yards divided by number of pass attempts. Number of passes a player or team has made for a first-down. The ratio of touchdown passes to attempted passes. The number of passes that were intercepted. The ratio of intercepted passes to attempted passes. Opponent's yards gained after interceptions. Average number of interceptions per game. Longest interception return against a passer. Number of opponents' touchdowns scored from passer's interceptions. The yards gained by the single longest completion. Gross passing yards divided by number of pass completions. Total yards passing divided by number of games played. A complex formula designed to measure quarterback performance. The total number of successful passes. Average number of receptions per game. Number of yards a receiver is credited for. Number of receptions a player has taken for a first-down. The longest reception a player had, including to the point where they scored, were-tackled, etc. The number of opportunities this receiver had to receive the ball. Average yards per game from receptions. The average yards per reception. How a player or team's passing stats ranks in a league or conference, usually using total passing yards as the measure. Number of passes directed at receiver, including incompletions, interceptions, etc. Number of times in receiver's career where total receiving yardage per game exceeded 100. Number of times in passer's career where total passing yardage per game exceeded 300. Statistics about a particular team or player's rushing performance. | How well they ran with the ball. The number of attempted rushes. The number of yards gained by rushing. The number of yards gained by rushing up the left side of the field. The number of yards gained by rushing up the middle of the field. The number of yards gained by rushing up the right side of the field. The average number of yards per rush. The number of rushes that have resulted in a first-down. The number of yards gained by the single longest rushing play. Average number of rushing attempts per game. Average number of yards gaining from rushes a game. How a player or team's rushing stats ranks in a league or conference, usually using total rushing yards as the measure. Number of times in rusher's career where total rushing yardage per game exceeded 100. Statistics about a particular team or player's performance on downs. | Covers first downs and conversions. The number of first downs achieved. The number of first downs achieved from passing. The number of first downs achieved from rushing. The number of first downs achieved from penalties on the other team. Number of yards gained on first down plays. Number of average yards gained on first down plays. The number of first downs achieved on a second down. The number of second downs. The percentage of first down attempts on a second down that were successful. The number of first downs achieved on a third down. The number of third downs. The percentage of first down attempts on a third down that were successful. The number of first downs achieved on a fourth down. The number of fourth downs. The percentage of first down attempts on a fourth down that were successful. Statistics about the sacks suffered by the offensive team. | How many times and the team was sacked, and for how many yards. Total number of yards lost by the offense from sacks. Total number of times that the offense was sacked. Average number of times per game a quarterback is sacked. Number of sacks that resulted in safeties. Statistics about a particular player or team's defensive performance. | Covers tackles, interceptions, sacks. The number of tackles made by the defense. Commonly the sum of tackles-solo and tackles-assists. The number of tackles made where only one member of the defense commited the tackle. The number of tackles made where multiple members of the defense commited the tackle. The number of tackles made which were credited to the team as a whole. An unofficial stat given when a defensive player pressures and hurries the quarterback into making a throw to avoid being sacked. Total number of times a pass was deflected or otherwise defended against, causing a pass to be incomplete. Derived by adding interceptions-total plus fumbles-opposing-recovered. Total yardage a defense allowed, usually through passing and rushing. Total yardage a defense allows per game. Number of plays a defense is on the field. Number of plays inside defensive team's 20 yard line. Number of points given up when opponents' possession starts inside defensive team's 20 yard line. Number of touchdowns given up when opponents' possession starts inside defensive team's 20 yard line. Ratio of touchdowns allowed to possessions when opponents' possession starts inside defensive team's 20 yard line. How a team's defensive stats rank in a league or conference, usually using yards allowed per game as the measure. How a team's passing defense stats rank in a league or conference, usually using passing yards allowed per game as the measure. How a team's rushing defense stats rank in a league or conference, usually using rushing yards allowed per game as the measure. The number of passes the defense has intercepted. The number of yards gained as a result of an interception. The average number of yards gained from interceptions. Percentage of passes intercepted. The longest interception returned. Greatest number of yards gained on an interception. Number of interceptions that were run back for a touchdown. The number of sacks made by the defense. Number of yards gained by the defense. The number of sacks credited to the team as a whole. Number of yards gained by the defense on sacks credited to the team as a whole. Statistics about a particular team's scoring performance. | For touchdowns, field goals, etc. The number of touchdowns scored by the offense. The number of touchdowns scored by passing. The number of touchdowns scored by rushing. The number of touchdowns scored by special teams. The number of touchdowns scored by the defense. The number of receptions resulting in a touchdown. The number of extra points attempted by the offense. The number of extra points made. The number of extra points missed. The number of extra point attempts that were blocked. Ratio of extra points completed to attempts. The number of field goals attempted. The number of field goals made. The number of field goals missed. The number of field goal attempts that were blocked. Yardage of longest successful field goal attempt. Ratio of field goals made to attempts. Average number of field goals made per game. The number of safeties scored against the offense. The number of two point conversions attempted. The number of successful two point conversions. Number of successful two-point conversions from running plays. Number of successful two-point conversions from passing plays. For Canadian Football. Used to record the number of touchbacks that resulted in scores. Number of points earned from the single, a CFL-specific scoring play where a kicking team prevents its opponent from returning a kickoff, punt or missed field goal from outside the opponent's own end zone. Points earned by offensive team when possession starts inside opponent's 20 yard line. Touchdowns scored by offensive team when possession starts inside opponent's 20 yard line. Ratio of touchdowns to possessions when possession starts inside opponent's 20 yard line. Statistics about attempted and made field goals. | Allows for breakdowns between particular yard markers. The minimum distance in the range for the kick attempts. NOTE: Worth revisiting. The maximum distance in the range for the kick attempts. NOTE: Worth revisiting. Number of attempted field goals. Number of successful field goals. Statistics about a particular team's special teams performance. | Covers punts, touchbacks. The number of punts returned. The number of yards gained from punt returns. The average number of yards gained on each punt return. The number of yards gained on the longest punt return. The number of punts returned for a touchdown. Total number of punt returns defended. Total number of yards allowed from punt returns. Average number of yards allowed from punt returns. Yardage of longest punt return allowed. Number of touchdowns from punt returns allowed. Total number of kickoffs to opposition. Number of kickoffs excluding onside kicks or kickoffs at the end of a half unless either kickoff is returned for a touchdown. Number of kickoffs resulting in opponent's possession starting inside own 20 yard line. Average starting position of opponent's possession after a kickoff. Number of onside kicks attempted. Number of onside kicks recovered by kicking team. The number of kickoffs returned. The number of yards gained from kickoff returns. The average number of yards gained on each kickoff return. The number of yards gained on the longest kickoff return. The number of kickoffs returned for a touchdown. Number kickoffs adjusted for onside and end-of-half (non-touchdown) kickoffs. Number of kickoff returns that failed to advance past the returning team's 20 yard line. Average start position after kickoff returns. Total number of kickoff returns. Yards allowed from kickoff returns. Average yards allowed per kickoff return. Yardage of longest kickoff return allowed. Touchdowns allowed from kickoff returns. The total number of returns. Punts + kickoffs. The total number of yards gained on punts and kickoffs combined. The number of punts. The sum of the distances of all punts. The sum of the distances of all punts, minus the distances they were returned. The distance of the longest punt. The number of punts inside the 20 yard line. The percentage of punts inside the 20 yard line. The average gross punting yardage. Average net yards - punt length minus return - per punt. Number of punts forced by defensive team. Number of yards of opposing team's punts. Number of net yards - punt length minus return - of opposing team's punts. Longest punt by opposing team. Punt returns failed to advance beyond team's own 20. Ratio of punt returns failed to advance beyond team's own 20 and total punt returns. Average length of opponent's punt. Average net length - punt length minus return - of opponent's punt.. The number of punts that were blocked. The number of punts made by the opposing team that this team or player has blocked. The total number of touchbacks, from kickoffs, punts, interceptions, and fumbles. The percentage of kickoffs, punts, interceptions, and fumbles which resulted in a touchback. The number of kickoffs that went into the end zone and were not brought out. The percentage of kickoffs resulting in touchbacks. The number of punts that went into the end zone and were not brought out. The percentage of punts resulting in touchbacks. The number of interceptions that went into the end zone and were not brought out. The percentage of interceptions resulting in touchbacks. Number of punts not returned out of receiving team's end zone. The number of fair catches. Number of fair catches by opponents. Net return yardage excluding kickoff returns. Total number of special teams touchdowns allowed. The number of tackles made while playing for a special-team. The number of tackle assists performed while on special-teams. Number of extra point attempts made by the opposing team that were successful. Number of extra point attempts made by the opposing team that were not successful. Number of extra point attempts made by the opposing team that were blocked by this team or player. Number of field goal attempts made by the opposing team that were successful. Number of field goal attempts made by the opposing team that were not successful. Number of field goal attempts made by the opposing team that were blocked by this player or team. For tracking fumble stats. | Covers teams that do the fumbling and the recovering. The total number of fumbles. The number of fumbles that were forced by the opposing team. The number of fumbles that were recovered by the fumbling team. The number of fumbles that were not recovered by the fumbling team. Average number of fumbles lost per game. The number of yards gained as a result of fumbles. The number of fumbles committed by this team. The number of fumbles committed by this team that were then also recovered by this team. The number of fumbles committed by this team but recovered by the other team. A subset of turnovers. Also referred to as a giveaway. The number of yards gained as a result of fumbles by this team. The number of touchdowns earned after a team recovers its own fumbles. The number of fumbles committed by the opposing team. The number of fumbles committed by the opposing but recovered by this team. A subset of turnovers. Also referred to a takeaway. Average number of opposing team's fumbles recovered per game. The number of fumbles committed by the opposing team that were subsequently lost to the opposing team. The number of yards gained as a result of fumbles by the opposing team. The number of touchdowns scored as a result of fumbles by the opposing team.. The number of fumbles committed by a player or team on defense. The number of fumbles lost by a player or team on defense. The number of fumbles forced by a player or team on defense. The number of fumbles recovered by a player or team on defense. The number of yards gained on fumbles recovered by a player or team on defense. The number of fumbles committed by a player or team on special-teams. The number of fumbles lost by a player or team on special-teams. The number of fumbles forced by a player or team on special-teams. The number of fumbles recovered by a player or team on special-teams. The number of yards gained on fumbles recovered by a player or team on special-teams. The number of fumbles committed by a player or team on neither defense or special teams. The number of fumbles lost by a player or team on neither defense or special teams. The number of fumbles forced by a player or team on neither defense or special teams. The number of fumbles recovered by a player or team on neither defense or special teams. The number of yards gained on fumbles recovered by a player or team on neither defense or special teams. The number of fumbles into the end zone that are not brought out. The percentage of fumbles that resulted in touchbacks. Statistics about penalties. | Applies to both offensive and defensive penalties. The number of penalties. The yards gained as a result of penalties. The number of first downs gained as a result of penalties. Yards opponents gain as a result of penalties. Total of penalties by opposing team. Includes penalty challenges that were both won and lost. Includes both coach challenges and booth challenges (by the replay assistant in the NFL). The number of successful penalty challenges by a team, leading to the overturning of the original call. Whether the clock is running or stopped. The ID of the team with the football. The current down. Valid values are 1,2,3,4,5. The distance between the current line of scrimmage and the line to gain, 10 yards downfield from the start of possession. In yards. The word goal is used when the distance to the goal line is less than 10 yards. The word kick is used for kick-scoring attempt. Which side of the field the event is taking place. Either "home" or "away". The line of scrimmage. The yard line where the ball is placed at the start of play.
{"url":"https://www.iptc.org/std/SportsML/3.1/specification/sportsml-specific-american-football.xsd","timestamp":"2024-11-04T23:45:25Z","content_type":"application/xml","content_length":"76322","record_id":"<urn:uuid:927d0147-93be-407b-8aac-ec48bd7bfa31>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00291.warc.gz"}
Solving Matrix Equations Solving Matrix Equations While solving matrix equation A + X = B, where A and B are two given matrices of the same order and X is an unknown matrix, we proceed in a manner similar to the numbers. 10 Math Problems officially announces the release of Quick Math Solver and 10 Math Problems, Apps on Google Play Store for students around the world. Here, A + X = B Adding the matrix (-A) to both sides of the matrix equation, we get - (-A) + A + X = (-A) + B or, (-A + A) + X = B – A or, 0 + X = B – A or, X = B – A, which is the required solution of matrix equation A + X = B. Worked Out Examples Do you have any questions regarding the solution of matrix equations? You can ask your questions or problems here, in the comment section below.
{"url":"https://www.10mathproblems.com/2020/10/solving-matrix-equations.html","timestamp":"2024-11-12T22:50:11Z","content_type":"application/xhtml+xml","content_length":"164169","record_id":"<urn:uuid:fc92d909-4079-4619-b896-d19fe869f985>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00254.warc.gz"}
American Mathematical Society Asymptotically Poincaré surfaces in quasi-Fuchsian manifolds HTML articles powered by AMS MathViewer by Keaton Quinn Proc. Amer. Math. Soc. 148 (2020), 1239-1253 DOI: https://doi.org/10.1090/proc/14850 Published electronically: November 19, 2019 We introduce the notion of an asymptotically Poincaré family of surfaces in an end of a quasi-Fuchsian manifold. We show that any such family gives a foliation of an end by asymptotically parallel convex surfaces, and that the asymptotic behavior of the first and second fundamental forms determines the projective structure at infinity. As an application, we establish a conjecture of Labourie from [J. London Math. Soc. 45 (1992), pp. 549–565] regarding constant Gaussian curvature surfaces. We also derive consequences for constant mean curvature surfaces. References • Charles Gregory Anderson, Projective structures on Riemann surfaces and developing maps to H(3) and CP(n), ProQuest LLC, Ann Arbor, MI, 1998. Thesis (Ph.D.)–University of California, Berkeley. MR • Thierry Aubin, Nonlinear analysis on manifolds. Monge-Ampère equations, Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 252, Springer-Verlag, New York, 1982. MR 681859, DOI 10.1007/978-1-4612-5734-9 • David Dumas, Complex projective structures, Handbook of Teichmüller theory. Vol. II, IRMA Lect. Math. Theor. Phys., vol. 13, Eur. Math. Soc., Zürich, 2009, pp. 455–508. MR 2497780, DOI 10.4171/ • David Dumas, Holonomy limits of complex projective structures, Adv. Math. 315 (2017), 427–473. MR 3667590, DOI 10.1016/j.aim.2017.05.021 • Charles L. Epstein, Envelopes of horospheres and weingarten surfaces in hyperbolic 3-space, Preprint, 1984. • Arthur E. Fischer and Jerrold E. Marsden, Deformations of the scalar curvature, Duke Math. J. 42 (1975), no. 3, 519–547. MR 380907 • David Gilbarg and Neil S. Trudinger, Elliptic partial differential equations of second order, Classics in Mathematics, Springer-Verlag, Berlin, 2001. Reprint of the 1998 edition. MR 1814364, DOI • François Labourie, Problème de Minkowski et surfaces à courbure constante dans les variétés hyperboliques, Bull. Soc. Math. France 119 (1991), no. 3, 307–325 (French, with English summary). MR 1125669, DOI 10.24033/bsmf.2169 • François Labourie, Surfaces convexes dans l’espace hyperbolique et $\textbf {C}\textrm {P}^1$-structures, J. London Math. Soc. (2) 45 (1992), no. 3, 549–565 (French). MR 1180262, DOI 10.1112/jlms • Olli Lehto, Univalent functions and Teichmüller spaces, Graduate Texts in Mathematics, vol. 109, Springer-Verlag, New York, 1987. MR 867407, DOI 10.1007/978-1-4613-8652-0 • Rafe Mazzeo and Frank Pacard, Constant curvature foliations in asymptotically hyperbolic spaces, Rev. Mat. Iberoam. 27 (2011), no. 1, 303–333. MR 2815739, DOI 10.4171/RMI/637 • Zeev Nehari, The Schwarzian derivative and schlicht functions, Bull. Amer. Math. Soc. 55 (1949), 545–551. MR 29999, DOI 10.1090/S0002-9904-1949-09241-8 • Brad Osgood and Dennis Stowe, The Schwarzian derivative and conformal mapping of Riemannian manifolds, Duke Math. J. 67 (1992), no. 1, 57–99. MR 1174603, DOI 10.1215/S0012-7094-92-06704-4 • Jean-Marc Schlenker, Notes on the Schwarzian tensor and measured foliations at infinity of quasifuchsian manifolds, Preprint arXiv:1708.01852 (2017). • Anthony J. Tromba, Teichmüller theory in Riemannian geometry, Lectures in Mathematics ETH Zürich, Birkhäuser Verlag, Basel, 1992. Lecture notes prepared by Jochen Denzler. MR 1164870, DOI 10.1007 Similar Articles • Retrieve articles in Proceedings of the American Mathematical Society with MSC (2010): 30F60, 53C42 • Retrieve articles in all journals with MSC (2010): 30F60, 53C42 Bibliographic Information • Keaton Quinn • Affiliation: Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago, Chicago, Illinois 60607 • Email: kquinn23@uic.edu • Received by editor(s): December 18, 2018 • Received by editor(s) in revised form: July 31, 2019 • Published electronically: November 19, 2019 • Additional Notes: The author was partially supported in summer 2018 by a research assistantship under NSF DMS-1246844, RTG: Algebraic and Arithmetic Geometry, at the University of Illinois at • Communicated by: Ken Bromberg • © Copyright 2019 Keaton Quinn • Journal: Proc. Amer. Math. Soc. 148 (2020), 1239-1253 • MSC (2010): Primary 30F60; Secondary 53C42 • DOI: https://doi.org/10.1090/proc/14850 • MathSciNet review: 4055951
{"url":"https://www.ams.org/journals/proc/2020-148-03/S0002-9939-2019-14850-4/","timestamp":"2024-11-14T19:43:23Z","content_type":"text/html","content_length":"65852","record_id":"<urn:uuid:0ddfd689-ece7-44cc-afa6-fad869213b7f>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00610.warc.gz"}
The Basic Mechanics of Principal Components Analysis The following description gives an explanation of how principal components analysis can be computed. The actual algorithm described below is not used in any standard program, but the commonly used algorithms can only be explained using mathematical concepts from linear algebra. Computing the first component As discussed on the main Principal Components Analysis page, PCA analyzes a Correlation Matrix and infers components that are consistent with the observed correlations. Each component is created as a weighted sum of the existing variables. PCA starts by trying to find the single component which best explains the observed correlations ^between the variables. Consider the following three variables: │v1 │v2 │v3 │ │1 │1 │1 │ │2 │3 │5 │ │3 │2 │2 │ │4 │5 │3 │ │5 │4 │4 │ The correlation matrix of the three variables is: │ │v1 │v2 │v3 │ │v1│1.0 │.8 │.4 │ │v2│.8 │1.0 │.6 │ │v3│.4 │.6 │1.0 │ Note that there are moderate-to-strong correlations between all of the variables. Thus, any underlying component must be correlated with all the variables. A first guess then is that our new component could simply be the sum of each of the existing variables: \(Component = 1.0 \times v1 + 1.0 \times v2 + 1.0 \times v3\) The resulting component matrix, which shows the correlation between each of the variables and the computed component, is then: │ │Component │ │v1│.856 │ │v2│.934 │ │v3│.778 │ These correlations are all very high and thus our estimated component is a pretty good component. However, it can be improved. Looking again at the correlation matrix, reproduced below again, we can deduce that our original guess of giving equal weights to the different components was a touch naïve. Note that v2 has the highest average correlation with all the variables. Thus, if we were instead to give a higher weight to v2 when estimating our component we will likely end up with marginally higher correlations with all the variables. Similarly, note that v3 has the lowest average correlation, and thus by the same argument it should be given a lower weight. │ │v1 │v2 │v3 │ │v1│1.0 │.8 │.4 │ │v2│.8 │1.0 │.6 │ │v3│.4 │.6 │1.0 │ Using trial and error, we can deduce that the optimal formula^ for computing the component is: \(Component = 1.0 \times v1 + 1.086 \times v2 + 0.866 \times v3\) Note that we have not multiplied v1 by anything other than 1. This is because the numbers that are multiplied by the other variables are relative to v1 having a weight of 1. If we were to put a weight other than 1 next to v1 we would then have to multiply each of these other weights by this number. For example, the following weights are the ones generated by SPSS (and shown in the Component Score Coefficient Matrix) and you can see that their relativities are the same: \(Components = 1.0 \times v1 + 1.086 \times v2 + 0.866 \times v3\) Computing the remaining components The next component is computed as follows: 1. Regression is used to predict each variable based on its component. 2. The residuals of the regression model are then computed. 3. The correlation matrix is computed using the residuals. 4. The same basic process as described above is performed to create a second component. 5. These steps are then repeated until the number of components is equal to the number of variables.^ Typically, Varimax Rotation is performed to aid interpretation. 0 comments Article is closed for comments.
{"url":"https://the.datastory.guide/hc/en-us/articles/7935374244111-The-Basic-Mechanics-of-Principal-Components-Analysis","timestamp":"2024-11-11T16:35:00Z","content_type":"text/html","content_length":"45624","record_id":"<urn:uuid:a1eb59a2-ce97-4d11-8d4d-37dbef861282>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00313.warc.gz"}
collision prediction if you can measure every millimeter for density, tensile strength, friction, thermal resistance, mass and velocity, what happens when two objects collide can be predicted with significant certainty unless the objects happen to be humans then, even if you could model the precise electrical charge of each neuron and the spin of each quantum particle it would still not be possible to say anything more certain than "this will be interesting" math fails us, even if we use all the fancy symbols that movie scientists use to write the formulas to prove that they are smart we still can't construct the function which integrates intentions that describes the derivative of attraction or fills out the correlation matrix of hope Path: class GeekPoetry(Poetry):
{"url":"https://the-michael-toy.github.io/sudopoet/poems/collision-detection.html","timestamp":"2024-11-15T01:15:21Z","content_type":"text/html","content_length":"5843","record_id":"<urn:uuid:2d2f37dc-5a43-47c1-a052-87ea479d5be1>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00710.warc.gz"}
Why is my formula giving me incorrect average when trying to average the children rows? I have a formula that is averaging the percentage of the children row of the column Percent completed, to give me another percentage of the ones that are less than 100% and then I have another column that will average the percentage of the ones that are greater than 100%. But it is not giving me a correct percentage. The screen shot should average 75% in the Goal Not Met column but it's returning as 59%. Is there something I need to change in my formula? Best Answer • The Child rows of the dark blue row, Item 99511, are only the rows with the PROD number in yellow: So the average of 55% and 63% = 59%. Try using DESCENDANTS instead of CHILDREN in your formula and see if that considers all the rows under 100%. Or, if you want all the rows to be direct children of 99511, then outdent the rows in the green boxes. Jeff Reisman Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks! • [S:Perhaps you could share the formula you're using and the data structure you're using it on? :S] Thanks, that's easier to troubleshoot, LOL! Jeff Reisman Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks! • @Jeff Reisman Yeah I accidently posted the discussion without inputting anything. I have edited the original one. • The Child rows of the dark blue row, Item 99511, are only the rows with the PROD number in yellow: So the average of 55% and 63% = 59%. Try using DESCENDANTS instead of CHILDREN in your formula and see if that considers all the rows under 100%. Or, if you want all the rows to be direct children of 99511, then outdent the rows in the green boxes. Jeff Reisman Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks! • @Jeff Reisman Thank you! Changing it to DESCENDANTS work. Such an easy fix. Help Article Resources
{"url":"https://community.smartsheet.com/discussion/101432/why-is-my-formula-giving-me-incorrect-average-when-trying-to-average-the-children-rows","timestamp":"2024-11-10T22:41:07Z","content_type":"text/html","content_length":"415279","record_id":"<urn:uuid:a98e30a4-2ca2-4e96-a3a6-eab226e39659>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00768.warc.gz"}
Big Page on Vectors A vector is a quantity with both direction and magnitude Oh yeah!!! $\dpi{300} \huge \begin{bmatrix} i\\j \end{bmatrix}$ 2d vector $\dpi{300} \huge \begin{bmatrix} i\\j \\k \end{bmatrix}$ 3d vector Vectors can also be denoted (xi + yj + zk) and the vector variable can be called The way you use a vector depends on if you are treating that vector as a position vector or a direction vector. A vector is a position vector if the vector is describing a point relative to the origin. For instance, the vector 2i + 3j on it’s own only contains information about a movement of +2 in the x dimension and +3 in the y dimension. If you go onto say this is a position vector, this number now represents an absolute position on the plane, 2 on the x and 3 on the y (and is essentially a coordinate). If it is not specified that a vector is a position vector, it is not right to link it to the corresponding coordinate on a plane. A position vector can be denoted Essential vector properties / operations Magnitude of vector One of the most important things to know about a vector is it’s length. Use Pythagoras. $\dpi{300} \huge |\vec{v}| = \sqrt{a^2+b^2+c^2}$ 3D Pythag! Find unit vector $\dpi{300} \huge \frac{1}{|v|} \begin{bmatrix} a\\ b \\ c \end{bmatrix}$ Equation for unit vector The unit vector essentially tells you the fundamental information about spatial direction. 2D spatial directions are numerically equivalent to points on the unit circle and spatial directions in 3D are equivalent to a point on the unit sphere. Parallel vectors Vectors are parallel if they are scalar multiples of each other. $\dpi{300} \huge \begin{bmatrix} 2\\ 7 \\ 45 \end{bmatrix} \begin{bmatrix} 4\\ 14 \\ 90 \end{bmatrix}$ Parallel vectors Vector between two vectors To get the difference in position between two vectors, do the following: $\dpi{300} \huge \overrightarrow{AB} = \overrightarrow{OB} - \overrightarrow{OA}$ Go back down the A and along the B! Visual representation of B->A Scalar Product The scalar/dot product of two vectors is the sum of the products of their corresponding components. How to calculate dot product $\dpi{300} \huge \begin{bmatrix} a\\b \\c \end{bmatrix} _{.} \begin{bmatrix} x\\y \\z \end{bmatrix} = ax + by + cz$ If the dot product of two vectors equal 0, the two vectors are at 90º to each other. The Line Line properties In 2D, lines are parallel or intersect. To check if lines intersect, check if they are not parallel. To check if lines are parallel, look at the direction vector for both lines, if they are linear multiples of one another, they are parallel. In 3D, lines can be parallel, they can intersect, and can also be skew. Skew lines are when the lines do not intersect and they are not parallel. To check if lines are parallel, see if the direction vectors are linear multiples of each other. To check if lines intersect, set the lines equal to each other, and rearrange to get two equations with two unknowns. Then plug the values of λ and μ into the z row of the original equation, if this equation is true, the lines intersect. If this z equation is false (ie the LHS and RHS are different), the lines are skew. Vector equation of a line The straight line r in vector line notation: λ is a variable marking that the b vector is variable in magnitude, and therefore sets the direction of the line. a is a position vector whose purpose it is to move the line away from the origin. $\dpi{300} \huge \mathbf{r} = \mathbf{a} + \lambda \mathbf{b}$ Straight line r The vector OD sets the direction and point A sets the position Cartesian equation of a line A line in vector equation form as such: $\dpi{300} \huge \begin{bmatrix} x\\y \\z \end{bmatrix} = \begin{bmatrix} a_{1}\\a_{2} \\a_{3} \end{bmatrix} + \lambda \begin{bmatrix} b_{1}\\b_{2} \\b_{3} \end{bmatrix}$ Essentially 3 different equations Can be rewritten as: $\dpi{300} \huge \frac{x-a_{1}}{b_{1}}= \frac{x-a_{2}}{b_{2}}= \frac{x-a_{3}}{b_{3}}$ This uses the aspect of the vector equation of the line secretly being 3 separate equations, and rearranges them to make λ the subject. Angle between two lines / vectors To get the angle between two lines, use the following formula. Make sure to only use the direction vectors in the lines, as they are what define the direction of the line, and therefore the angle between the lines. $\dpi{300} \huge \cos \vartheta = \frac{\vec{a}.\vec{b}}{|\vec{a}||\vec{b}| }$ Angle between two vectors a and b If the obtuse angle between the two vectors. So if you want the acute angle, do 180 – θ. Negative dot product means you are getting the blue angle The Plane Vector equation of a plane The vector equation of a plane looks similar to the vector equation of a line, but has another variable and another direction vector. $\dpi{300} \huge \Pi = \vec{a} + \lambda \vec{b} + \mu \vec{c}$ Plane Π is made of one position vector and two direction vectors b and c must not be parallel. This creates a infinite plane of values, achieved by varying lambda and mu. Cartesian equation of a plane The cartesian equation of a plane harnesses the normal vector – the vector that is perpendicular to the plane. $\dpi{300} \huge n_{1}x + n_{2}y + n_{3}z = k$ Cartesian equation of the plane $\dpi{300} \huge \begin{bmatrix} n_{1}\\ n_{2} \\ n_{3} \end{bmatrix}$ Is the normal vector to the plane. The scalar product equation of a plane The scalar product equation of a plane is just like the cartesian equation, but folded up a bit. $\dpi{300} \huge \begin{bmatrix} n_{1}\\ n_{2} \\ n_{3} \end{bmatrix} _{.} \begin{bmatrix} x\\ y \\ z \end{bmatrix} =k$ The vector with the ns is the normal vector to the plane. Angle between line and plane Use the direction vector of the line and the normal vector of the plane, but note that some extra stuff has to happen once you’ve got the angle: If you have the yellow angle, do 90 – the angle to get the acute angle between the line and the plane (purple). If you have the blue angle, do the angle – 90 to get the acute angle between the line and the plane. Intersect line and plane This is the super easy one. Simply get the x, y and z formulae from the line equation, and plug those expressions into the Cartesian equation of the plane to get a value of λ. Then plug this value into the line equation to get the position vector of the point of intersection.
{"url":"https://henrytechblog.com/maths/big-page-on-vectors/","timestamp":"2024-11-11T11:38:21Z","content_type":"text/html","content_length":"68055","record_id":"<urn:uuid:aa68f1e4-9821-48e6-9004-e9a58a764808>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00112.warc.gz"}
Tree of life superstring theory part 39 In Sacred geometry/Tree of Life, we discussed various methods of transforming examples of sacred geometry in order to decode the scientific and spiritual information that they embody. The next level of decoding sacred geometry after the Pythagorean tetractys is through its next-higher version — the 2nd-order tetractys, in which each of the 10 yods of the tetractys is replaced by another tetractys. This generates 85 yods, which is the sum of the first four integer powers of 4:</p>85 = 40 + 41 + 42 + 43.</p>The yod at the centre of the 2nd-order tetractys denotes Malkuth of the central tetractys, which itself corresponds to this Sephirah. It is surrounded by 84 yods. The 2nd-order tetractys therefore expresses the fact that 84 Sephirothic degrees of freedom in a holistic system exist above Malkuth — its physical form. Of these, (7×7 − 1 = 48*) degrees are pure differentiations of Sephiroth of Construction symbolized by coloured, hexagonal yods in the seven 1st-order tetractyses that are not at the corners of the 2nd-order tetractys. The remaining 36 degrees are denoted by both the 15 white yods at the corners of the 10 tetractyses (these yods formally symbolize the Supernal Triad) and the 21 coloured, hexagonal yods that belong to the tetractyses at the three corners of the 2nd-order tetractys and which, therefore, also refer to the Supernal Triad of Kether, Chokmah & Binah. YAH (יה), the older version of the Godname YAHWEH (יהוה) assigned to Chokmah, has the number value 15 and prescribes the 15 corners of the 10 1st-order tetractyses. ELOHA (אלה), the Godname of Geburah with number value 36, prescribes both the 36 yods lining the sides of the 2nd-order tetractys and the 36 yods just discussed. The number 84 is the sum of the squares of the first four odd integers:</p>84 = 12 + 32 + 52 + 72.</p>As</p>n2 = 1 + 3 + 5 + ....+ 2n–1,</p>is the sum of the first n odd integers, 84 is the sum of (1+3+5+7=16=42) odd integers: </a></a></a> <a href="generate as their sum the 15 corners of the 10 1st-order tetractyses in the 2nd-order tetractys, whilst the four integers 1 (=40), 4 (=41), 16 (=42) & 64 (=43) spaced along its third raised edge generate as their sum its 85 yods: We see that the first raised edge of the tetrahedral array of 20 integers, which we call the Tetrahedral Lambda in the section Plato's Lambda, generates the number 15 measuring the " skeleton"="" of= "" the="" 2nd-order="" tetractys="" in="" terms="" a="" basic,="" triangular="" array="" of 15 points,="" namely,="" corners="" 10="" 1st-order="" tetractyses.="" its="" third="" raised="" edge="" generates="" complete="" "body"="" comprising="" 85="" yods.="" this="" illustrates="" character="" number 15 of="" yah,="" godname="" chokmah,="" as="" fifth="" number.="" sum="" seven="" integers= "" on="" first="" and="" edges="" tetrahedral="" lambda="1" +="" 2="" 4="" 8="" 16="" + 64 ="99." all="" 20="" is="" 350,="" remaining="" 13="" 251.="" number="" yods="" 1-tree="" when="" 19="" triangles="" are="" type="" (see here).="" it="" embodied="" the upa="" space-time="" coordinates="" points="" whorls="" closed="" curves="" in 26-dimensional="" space-time:="" 10×25="" 1="251." notice="" that="" squares="" four="" edge:="" 12 +="" 22 +="" 42 +="" 82 ="85" same="" 43 ="85." means="" number 168 which,="" being="" parameter="" holistic="" systems,="" always="" displays="" division 168 ="84" 84="" (see here),="" can="" be="" expressed="" as:="" 82 +="" 41 +="" 43.="" 336="2×168," which="" discussed="" numerous="" places="" website,="" 4×84="42 +" 43 +="" 44.="" −="" 14,="" where="" 14="" 2,="" &="" edge,="" we="" see="" these="" three="" 84,="" nine="" boundary="" face="" lambda,="" whilst="" except="" 336,="" 4×84,="" i.e.,="" assigned="" to="" surrounding="" centre="" tetractys.="" correspondences="" between="" tetractys,="" sri="" yantra="" discussed here.="" *="" numbers="" in boldface are="" values="" either="" hebrew="" names="" sephiroth="" or="" their="" manifestation="" worlds="" atziluth,="" beriah,="" yetzirah="" assiyah="" "="">generate as their sum the 15 corners of the 10 1st-order tetractyses in the 2nd-order tetractys, whilst the four integers 1 (=40), 4 (=41), 16 (=42) & 64 (=43) spaced along its third raised edge generate as their sum its 85 yods: We see that the first raised edge of the tetrahedral array of 20 integers, which we call the Tetrahedral Lambda in the section Plato's Lambda, generates the number 15 measuring the "skeleton" of the 2nd-order tetractys in terms of a basic, triangular array of 15 points, namely, the corners of 10 1st-order tetractyses. Its third raised edge generates the complete "body" of the 2nd-order tetractys comprising 85 yods. This illustrates the character of the number 15 of YAH, the Godname of Chokmah, as the fifth triangular number. The sum of the seven integers on the first and third raised edges of the Tetrahedral Lambda = 1 + 2 + 4 + 8 + 4 + 16 + 64 = 99. As the sum of all its 20 integers is 350, the sum of the remaining 13 integers is 251. This is the number of yods in the 1-tree when its 19 triangles are Type A (see here). It is embodied in the UPA as the number of space-time coordinates of points on the 10 whorls as 10 closed curves in 26-dimensional space-time: 10×25 + 1 = 251. Notice that the sum of the squares of the four integers on the first raised edge: is the same as the sum of the four integers on the third raised edge: This means that the number 168 which, being a parameter of holistic systems, always displays the division 168 = 84 + 84 (see here), can be expressed as: 22 + 42 + 82 + 41 + 42 + 43. The holistic parameter 336 = 2×168, which is discussed in numerous places on this website, can be expressed as 4×84 = 42 + 43 + 44. As 336 = 350 − 14, where 14 is the sum of the integers 2, 4 & 8 on the first raised edge, we see that the sum of the squares of these three integers is 84, which is the sum of the nine integers on the boundary of the first face of the Tetrahedral Lambda, whilst the sum of all its integers except 2, 4 & 8 is 336, which is 4×84, i.e., the sum of the integers 4 assigned to all the 84 yods surrounding the centre of a 2nd-order tetractys. The correspondences between the 2nd-order tetractys, the 1-tree and the Sri Yantra are discussed here. * Numbers in boldface are the number values of either the Hebrew names of the Sephiroth or their manifestation in the four Worlds of Atziluth, Beriah, Yetzirah & Assiyah (see here). </a>
{"url":"https://www.64tge8st.com/post/2017/02/14/tree-of-life-superstring-theory-part-39","timestamp":"2024-11-01T19:47:46Z","content_type":"text/html","content_length":"1050523","record_id":"<urn:uuid:3c2229a1-3bfd-47fb-91e0-985f8c26708f>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00508.warc.gz"}
Identify Net From A 3d Solid Worksheets [PDF] (6.NS.C.8): 6th Grade Math For example: Which could be the net of this 3D solid? In the above figure: Number of rectangle faces = 6 Number of triangle faces = 0 So, the net of the given figure must have only 6 rectangle faces. Therefore, the net of the given 3D solid is shown below:
{"url":"https://www.bytelearn.com/math-grade-6/worksheet/identify-net-from-a-3d-solid","timestamp":"2024-11-12T09:06:18Z","content_type":"text/html","content_length":"193285","record_id":"<urn:uuid:b017adf0-d12c-42cd-9473-f092e8b0d032>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00223.warc.gz"}
Cryptography and authentication library Matt Lilley This library provides bindings to functionality of OpenSSL that is related to cryptography and authentication, not necessarily involving connections, sockets or streams. A basic design principle of this library is that its default algorithms are cryptographically secure at the time of this writing. We will change the default algorithms if an attack on them becomes known, and replace them by new defaults that are deemed appropriate at that time. This may mean, for example, that where sha256 is currently the default algorithm, blake2s256 or some other algorithm may become the default in the future. To preserve interoperability and compatibility and at the same time allow us to transparently update default algorithms of this library, the following conventions are used: 1. If an explicit algorithm is specified as an option, then that algorithm is used. 2. If no algorithm is specified, then a cryptographically secure algorithm is used. 3. If an option that normally specifies an algorithm is present, and a logical variable appears instead of a concrete algorithm, then that variable is unified with the secure default value. This allows application programmers to inspect which algorithm was actually used, and store it for later reference. For example: ?- crypto_data_hash(test, Hash, [algorithm(A)]). Hash = '9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08', A = sha256. This shows that at the time of this writing, sha256 was deemed sufficiently secure, and was used as default algorithm for hashing. You therefore must not rely on which concrete algorithm is being used by default. However, you can rely on the fact that the default algorithms are secure. In other words, if they are not secure, then this is a mistake in this library, and we ask you to please report such a situation as an urgent security issue. In the context of this library, bytes can be represented as lists of integers between 0 and 255. Such lists can be converted to and from hexadecimal notation with the following bidirectional Relation between a hexadecimal sequence and a list of bytes. Hex is an atom, string, list of characters or list of codes in hexadecimal encoding. This is the format that is used by crypto_data_hash/3 and related predicates to represent hashes. Bytes is a list of integers between 0 and 255 that represent the sequence as a list of bytes. At least one of the arguments must be instantiated. When converting List to Hex, an atom is used to represent the sequence of hexadecimal digits. ?- hex_bytes('501ACE', Bs). Bs = [80, 26, 206]. See also base64_encoded/3 for Base64 encoding, which is often used to transfer or embed binary data in applications. Almost all cryptographic applications require the availability of numbers that are sufficiently unpredictable. Examples are the creation of keys, nonces and salts. With this library, you can generate cryptographically strong pseudo-random numbers for such use cases: Bytes is unified with a list of N cryptographically secure pseudo-random bytes. Each byte is an integer between 0 and 255. If the internal pseudo-random number generator (PRNG) has not been seeded with enough entropy to ensure an unpredictable byte sequence, an exception is thrown. One way to relate such a list of bytes to an integer is to use CLP(FD) constraints as follows: :- use_module(library(clpfd)). bytes_integer(Bs, N) :- foldl(pow, Bs, 0-0, N-_). pow(B, N0-I0, N-I) :- B in 0..255, N #= N0 + B*256^I0, I #= I0 + 1. With this definition, you can generate a random 256-bit integer from a list of 32 random bytes: ?- crypto_n_random_bytes(32, Bs), bytes_integer(Bs, I). Bs = [98, 9, 35, 100, 126, 174, 48, 176, 246|...], I = 109798276762338328820827...(53 digits omitted). The above relation also works in the other direction, letting you translate an integer to a list of bytes. In addition, you can use hex_bytes/2 to convert bytes to tokens that can be easily exchanged in your applications. This also works if you have compiled SWI-Prolog without support for large integers. A hash, also called digest, is a way to verify the integrity of data. In typical cases, a hash is significantly shorter than the data itself, and already miniscule changes in the data lead to different hashes. The hash functionality of this library subsumes and extends that of library(sha), library(hash_stream) and library(md5) by providing a unified interface to all available digest algorithms. The underlying OpenSSL library (libcrypto) is dynamically loaded if either library(crypto) or library(ssl) are loaded. Therefore, if your application uses library(ssl), you can use library(crypto) for hashing without increasing the memory footprint of your application. In other cases, the specialised hashing libraries are more lightweight but less general alternatives to library(crypto). The most important predicates to compute hashes are: Hash is the hash of Data. The conversion is controlled by Options: One of md5 (insecure), sha1 (insecure), ripemd160, sha224, sha256, sha384, sha512, sha3_224, sha3_256, sha3_384, sha3_512, blake2s256 or blake2b512. The BLAKE digest algorithms require OpenSSL 1.1.0 or greater, and the SHA-3 algorithms require OpenSSL 1.1.1 or greater. The default is a cryptographically secure algorithm. If you specify a variable, then that variable is unified with the algorithm that was used. If Data is a sequence of character codes, this must be translated into a sequence of bytes, because that is what the hashing requires. The default encoding is utf8. The other meaningful value is octet, claiming that Data contains raw bytes. If this option is specified, a hash-based message authentication code (HMAC) is computed, using the specified Key which is either an atom, string or list of bytes. Any of the available digest algorithms can be used with this option. The cryptographic strength of the HMAC depends on that of the chosen algorithm and also on the key. This option requires OpenSSL 1.1.0 or greater. Data is either an atom, string or code-list Hash is an atom that represents the hash in hexadecimal encoding. See also - hex_bytes/2 for conversion between hexadecimal encoding and lists of bytes. - crypto_password_hash/2 for the important use case of passwords. True if Hash is the hash of the content of File. For Options, see crypto_data_hash/3. For the important case of deriving hashes from passwords, the following specialised predicates are provided: If Hash is instantiated, the predicate succeeds iff the hash matches the given password. Otherwise, the call is equivalent to crypto_password_hash(Password, Hash, []) and computes a password-based hash using the default options. Derive Hash based on Password. This predicate is similar to crypto_data_hash/3 in that it derives a hash from given data. However, it is tailored for the specific use case of passwords. One essential distinction is that for this use case, the derivation of a hash should be as slow as possible to counteract brute-force attacks over possible passwords. Another important distinction is that equal passwords must yield, with very high probability, different hashes. For this reason, cryptographically strong random numbers are automatically added to the password before a hash is derived. Hash is unified with an atom that contains the computed hash and all parameters that were used, except for the password. Instead of storing passwords, store these hashes. Later, you can verify the validity of a password with crypto_password_hash/2, comparing the then entered password to the stored hash. If you need to export this atom, you should treat it as opaque ASCII data with up to 255 bytes of length. The maximal length may increase in the future. Admissible options are: The algorithm to use. Currently, the only available algorithms are pbkdf2-sha512 (the default) and bcrypt. C is an integer, denoting the binary logarithm of the number of iterations used for the derivation of the hash. This means that the number of iterations is set to 2^C. Currently, the default is 17, and thus more than one hundred thousand iterations. You should set this option as high as your server and users can tolerate. The default is subject to change and will likely increase in the future or adapt to new algorithms. Use the given list of bytes as salt. By default, cryptographically secure random numbers are generated for this purpose. The default is intended to be secure, and constitutes the typical use case of this predicate. Currently, PBKDF2 with SHA-512 is used as the hash derivation function, using 128 bits of salt. All default parameters, including the algorithm, are subject to change, and other algorithms will also become available in the future. Since computed hashes store all parameters that were used during their derivation, such changes will not affect the operation of existing deployments. Note though that new hashes will then be computed with the new default parameters. See also crypto_data_hkdf/4 for generating keys from Hash. The following predicate implements the Hashed Message Authentication Code (HMAC)-based key derivation function, abbreviated as HKDF. It supports a wide range of applications and requirements by concentrating possibly dispersed entropy of the input keying material and then expanding it to the desired length. The number and lengths of the output keys depend on the specific cryptographic algorithms for which the keys are needed. Concentrate possibly dispersed entropy of Data and then expand it to the desired length. Bytes is unified with a list of bytes of length Length, and is suitable as input keying material and initialization vectors to the symmetric encryption predicates. Admissible options are: A hashing algorithm as specified to crypto_data_hash/3. The default is a cryptographically secure algorithm. If you specify a variable, then it is unified with the algorithm that was used. Optional context and application specific information, specified as an atom, string or list of bytes. The default is the zero length atom” . Optionally, a list of bytes that are used as salt. The default is all zeroes. Either utf8 (default) or octet, denoting the representation of Data as in crypto_data_hash/3. The info/1 option can be used to generate multiple keys from a single master key, using for example values such as key and iv, or the name of a file that is to be encrypted. This predicate requires OpenSSL 1.1.0 or greater. See also crypto_n_random_bytes/2 to obtain a suitable salt. The following predicates are provided for building hashes incrementally. This works by first creating a context with crypto_context_new/2, then using this context with crypto_data_context/3 to incrementally obtain further contexts, and finally extract the resulting hash with crypto_context_hash/2. Context is unified with the empty context, taking into account Options. The context can be used in crypto_data_context/3. For Options, see crypto_data_hash/3. Context is an opaque pure Prolog term that is subject to garbage collection. Context0 is an existing computation context, and Context is the new context after hashing Data in addition to the previously hashed data. Context0 may be produced by a prior invocation of either crypto_context_new/2 or crypto_data_context/3 itself. This predicate allows a hash to be computed in chunks, which may be important while working with Metalink (RFC 5854), BitTorrent or similar technologies, or simply with big files. Obtain the hash code of Context. Hash is an atom representing the hash code that is associated with the current state of the computation context Context. The following hashing predicates work over streams: Open a filter stream on OrgStream that maintains a hash. The hash can be retrieved at any time using crypto_stream_hash/2. Available Options in addition to those of crypto_data_hash/3 are: If true (default), closing the filter stream also closes the original (parent) stream. Unify Hash with a hash for the bytes sent to or read from HashStream. Note that the hash is computed on the stream buffers. If the stream is an output stream, it is first flushed and the Digest represents the hash at the current location. If the stream is an input stream the Digest represents the hash of the processed input including the already buffered data. A digital signature is a relation between a key and data that only someone who knows the key can compute. Signing uses a private key, and verifying a signature uses the corresponding public key of the signing entity. This library supports both RSA and ECDSA signatures. You can use load_private_key/3 and load_public_key/2 to load keys from files and streams. In typical cases, we use this mechanism to sign the hash of data. See hashing (section 3.5). For this reason, the following predicates work on the hexadecimal representation of hashes that is also used by crypto_data_hash/3 and related predicates. Signatures are also represented in hexadecimal notation, and you can use hex_bytes/2 to convert them to and from lists of bytes (integers). Create an ECDSA signature for Data with EC private key Key. Among the most common cases is signing a hash that was created with crypto_data_hash/3 or other predicates of this library. For this reason, the default encoding (hex) assumes that Data is an atom, string, character list or code list representing the data in hexadecimal notation. See rsa_sign/4 for an example. Encoding to use for Data. Default is hex. Alternatives are octet, utf8 and text. True iff Signature can be verified as the ECDSA signature for Data, using the EC public key Key. Encoding to use for Data. Default is hex. Alternatives are octet, utf8 and text. Create an RSA signature for Data with private key Key. Options: SHA algorithm used to compute the digest. Values are sha1, sha224, sha256, sha384 or sha512. The default is a cryptographically secure algorithm. If you specify a variable, then it is unified with the algorithm that was used. Encoding to use for Data. Default is hex. Alternatives are octet, utf8 and text. This predicate can be used to compute a sha256WithRSAEncryption signature as follows: sha256_with_rsa(PemKeyFile, Password, Data, Signature) :- Algorithm = sha256, read_key(PemKeyFile, Password, Key), crypto_data_hash(Data, Hash, [algorithm(Algorithm), rsa_sign(Key, Hash, Signature, [type(Algorithm)]). read_key(File, Password, Key) :- open(File, read, In, [type(binary)]), load_private_key(In, Password, Key), Note that a hash that is computed by crypto_data_hash/3 can be directly used in rsa_sign/4 as well as ecdsa_sign/4. Verify an RSA signature for Data with public key Key. SHA algorithm used to compute the digest. Values are sha1, sha224, sha256, sha384 or sha512. The default is the same as for rsa_sign/4. This option must match the algorithm that was used for signing. When operating with different parties, the used algorithm must be communicated over an authenticated channel. Encoding to use for Data. Default is hex. Alternatives are octet, utf8 and text. The following predicates provide asymmetric RSA encryption and decryption. This means that the key that is used for encryption is different from the one used to decrypt the data: RSA Public key encryption and decryption primitives. A string can be safely communicated by first encrypting it and have the peer decrypt it with the matching key and predicate. The length of the string is limited by the key length. Encoding to use for Data. Default is utf8. Alternatives are utf8 and octet. Padding scheme to use. Default is pkcs1. Alternatives are pkcs1_oaep, sslv23 and none. Note that none should only be used if you implement cryptographically sound padding modes in your application code as encrypting unpadded data with RSA is insecure ssl_error(Code, LibName, FuncName, Reason) is raised if there is an error, e.g., if the text is too long for the key. See also load_private_key/3, load_public_key/2 can be use to load keys from a file. The predicate load_certificate/2 can be used to obtain the public key from a certificate. The following predicates provide symmetric encryption and decryption. This means that the same key is used in both cases. Encrypt the given PlainText, using the symmetric algorithm Algorithm, key Key, and initialization vector (or nonce) IV, to give CipherText. PlainText must be a string, atom or list of codes or characters, and CipherText is created as a string. Key and IV are typically lists of bytes, though atoms and strings are also permitted. Algorithm must be an algorithm which your copy of OpenSSL knows about. Keys and IVs can be chosen at random (using for example crypto_n_random_bytes/2) or derived from input keying material (IKM) using for example crypto_data_hkdf/4. This input is often a shared secret, such as a negotiated point on an elliptic curve, or the hash that was computed from a password via crypto_password_hash/3 with a freshly generated and specified salt. Reusing the same combination of Key and IV typically leaks at least some information about the plaintext. For example, identical plaintexts will then correspond to identical ciphertexts. For some algorithms, reusing an IV with the same Key has disastrous results and can cause the loss of all properties that are otherwise guaranteed. Especially in such cases, an IV is also called a nonce (number used once). If an IV is not needed for your algorithm (such as 'aes-128-ecb') then any value can be provided as it will be ignored by the underlying implementation. Note that such algorithms do not provide semantic security and are thus insecure. You should use stronger algorithms instead. It is safe to store and transfer the used initialization vector (or nonce) in plain text, but the key must be kept secret. Commonly used algorithms include: A powerful and efficient authenticated encryption scheme, providing secrecy and at the same time reliable protection against undetected modifications of the encrypted data. This is a very good choice for virtually all use cases. It is a stream cipher and can encrypt data of any length up to 256 GB. Further, the encrypted data has exactly the same length as the original, and no padding is used. It requires OpenSSL 1.1.0 or greater. See below for an example. Also an authenticated encryption scheme. It uses a 128-bit (i.e., 16 bytes) key and a 96-bit (i.e., 12 bytes) nonce. It requires OpenSSL 1.1.0 or greater. A block cipher that provides secrecy, but does not protect against unintended modifications of the cipher text. This algorithm uses 128-bit (16 bytes) keys and initialization vectors. It works with all supported versions of OpenSSL. If possible, consider using an authenticated encryption scheme instead. Encoding to use for PlainText. Default is utf8. Alternatives are utf8 and octet. For block ciphers, the padding scheme to use. Default is block. You can disable padding by supplying none here. If padding is disabled for block ciphers, then the length of the ciphertext must be a multiple of the block size. For authenticated encryption schemes, List is unified with a list of bytes holding the tag. This tag must be provided for decryption. Authenticated encryption requires OpenSSL 1.1.0 or For authenticated encryption schemes, the desired length of the tag, specified as the number of bytes. The default is 16. Smaller numbers are not recommended. For example, with OpenSSL 1.1.0 and greater, we can use the ChaCha20 stream cipher with the Poly1305 authenticator. This cipher uses a 256-bit key and a 96-bit nonce, i.e., 32 and 12 bytes, ?- Algorithm = 'chacha20-poly1305', crypto_n_random_bytes(32, Key), crypto_n_random_bytes(12, IV), crypto_data_encrypt("this is some input", Algorithm, Key, IV, CipherText, [tag(Tag)]), crypto_data_decrypt(CipherText, Algorithm, Key, IV, RecoveredText, [tag(Tag)]). Algorithm = 'chacha20-poly1305', Key = [65, 147, 140, 197, 27, 60, 198, 50, 218|...], IV = [253, 232, 174, 84, 168, 208, 218, 168, 228|...], CipherText = <binary string>, Tag = [248, 220, 46, 62, 255, 9, 178, 130, 250|...], RecoveredText = "this is some input". In this example, we use crypto_n_random_bytes/2 to generate a key and nonce from cryptographically secure random numbers. For repeated applications, you must ensure that a nonce is only used once together with the same key. Note that for authenticated encryption schemes, the tag that was computed during encryption is necessary for decryption. It is safe to store and transfer the tag in plain text. See also - crypto_data_decrypt/6. - hex_bytes/2 for conversion between bytes and hex encoding. Decrypt the given CipherText, using the symmetric algorithm Algorithm, key Key, and initialization vector IV, to give PlainText. CipherText must be a string, atom or list of codes or characters, and PlainText is created as a string. Key and IV are typically lists of bytes, though atoms and strings are also permitted. Algorithm must be an algorithm which your copy of OpenSSL knows. See crypto_data_encrypt/6 for an example. Encoding to use for CipherText. Default is utf8. Alternatives are utf8 and octet. For block ciphers, the padding scheme to use. Default is block. You can disable padding by supplying none here. For authenticated encryption schemes, the tag must be specified as a list of bytes exactly as they were generated upon encryption. This option requires OpenSSL 1.1.0 or greater. If the tag length is smaller than 16, this option must be used to permit such shorter tags. This is used as a safeguard against truncation attacks, where an attacker provides a short tag that is easier to guess. This library provides operations from number theory that frequently arise in cryptographic applications, complementing the existing built-ins and GMP bindings: Compute the modular multiplicative inverse of the integer X. Y is unified with an integer such that X*Y is congruent to 1 modulo M. Generate a prime P with at least N bits. Options is a list of options. Currently, the only supported option is: If Boolean is true (default is false), then a safe prime is generated. This means that P is of the form 2*Q + 1 where Q is also prime. [semidet]crypto_is_prime(+P, +Options) True iff P passes a probabilistic primality test. Options is a list of options. Currently, the only supported option is: N is the number of iterations that are performed. If this option is not specified, a number of iterations is used such that the probability of a false positive is at most 2^(-80). This library provides functionality for reasoning over elliptic curves. Elliptic curves are represented as opaque objects. You acquire a handle for an elliptic curve via crypto_name_curve/2. A point on a curve is represented by the Prolog term point(X, Y), where X and Y are integers that represent the point's affine coordinates. The following predicates are provided for reasoning over elliptic curves: Obtain a handle for a named elliptic curve. Name is an atom, and Curve is unified with an opaque object that represents the curve. Currently, only elliptic curves over prime fields are supported. Examples of such curves are prime256v1 and secp256k1. If you have OpenSSL installed, you can get a list of supported curves via: $ openssl ecparam -list_curves Obtain the order of an elliptic curve. Order is an integer, denoting how many points on the curve can be reached by multiplying the curve's generator with a scalar. Point is the generator of the elliptic curve Curve. R is the result of N times Point on the elliptic curve Curve. N must be an integer, and Point must be a point on the curve. As one example that involves most predicates of this library, we explain a way to establish a shared secret over an insecure channel. We shall use elliptic curves for this purpose. Suppose Alice wants to establish an encrypted connection with Bob. To achieve this even over a channel that may be subject to eavesdrooping and man-in-the-middle attacks, Bob performs the following This mechanism hinges on a way for Alice to establish the authenticity of the signed message (using predicates like rsa_verify/4 and ecdsa_verify/4), for example by means of a public key that was previously exchanged or is signed by a trusted party in such a way that Alice can be sufficiently certain that it belongs to Bob. However, none of these steps require any encryption! Alice in turn performs the following steps: 1. Create a random integer j such that j is greater than 0 and smaller than the order of C. Alice can also use crypto_curve_order/2 and crypto_n_random_bytes/2 for this. 2. Compute the scalar product j*G, where G is again the generator of C as obtained via crypto_curve_generator/2. 3. Further, compute the scalar product j*R, which is a point on the curve that we shall call Q. We can derive a shared secret from Q, using for example crypto_data_hkdf/4, and encrypt any message with it (using for example crypto_data_encrypt/6). 4. Send the point j*G and the encrypted message to Bob. Bob receives j*G in plain text and can arrive at the same shared secret by performing the calculation k*(j*G), which is - by associativity and commutativity of scalar multiplication - identical to the point j*(k*G), which is again Q from which the shared secret can be derived, and the message can be decrypted with crypto_data_decrypt/6. This method is known as Diffie-Hellman-Merkle key exchange over elliptic curves, abbreviated as ECDH. It provides forward secrecy (FS): Even if the private key that was used to establish the authenticity of Bob is later compromised, the encrypted messages cannot be decrypted with it. A major attraction of using elliptic curves for this purpose is found in the comparatively small key size that suffices to make any attacks unrealistic as far as we currently know. In particular, given any point on the curve, we currently have no efficient way to determine by which scalar the generator was multiplied to obtain that point. The method described above relies on the hardness of this so-called elliptic curve discrete logarithm problem (ECDLP). On the other hand, some of the named curves have been suspected to be chosen in such a way that they could be prone to attacks that are not publicly known. As an alternative to ECDH, you can use the original DH key exchange scheme, where the prime field GF(p) is used instead of an elliptic curve, and exponentiation of a suitable generator is used instead of scalar multiplication. You can use crypto_generate_prime/3 to generate a sufficiently large prime for this purpose.
{"url":"https://www.swi-prolog.org/pldoc/man?section=crypto","timestamp":"2024-11-06T09:11:37Z","content_type":"text/html","content_length":"63417","record_id":"<urn:uuid:fc298731-8f91-43f1-b95e-d8a316902f0d>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00044.warc.gz"}
From the Editor’s Desk 13th December 2023 Students taking Mathematics for O Levels, A Levels, or for International Baccalaureate should employ a calculator when solving certain questions. There are many opportunities to make careless mistakes even with the use of the calculator, and this article aims to shed light on how students can avoid making common calculator mistakes. 1. Setting of degrees and radians for trigonometry Firstly, the most common mistake for calculators used in trigonometry questions is the incorrect setting to degrees or radians. Students should always check the required setting before attempting the question as the incorrect setting could result in vastly different answers. 2. Bracket positioning and squaring Depending on the model of calculator used, students may have difficulties in positioning the bracket to square numbers. Students should be careful of adding/subtracting/multiplying/dividing within the bracket and putting the square outside the bracket. If students are unsure whether they have gotten the correct answer the first time, they should repeat the calculation to see whether the answers are the same both times. 3. Error carried forward Students may accidentally press the = sign button twice, which will change the answer on the calculator. Then, the wrong figures will be used in the next step of the calculation, which could make the final answer wrong as the mistake will be carried forward. Students should be careful when conducting the calculation and do the calculation again if they are unsure. 4. Not checking the figures to see whether they make sense Lastly, some questions with a context (such as questions involving tangible objects) will have answers which are appropriate to the context. For example, students who have performed the calculations wrongly could have a ridiculously big or small number which does not fit the context. An example would be a question asking the student to calculate the number of people and if the student gets a decimal answer, he will automatically know it is wrong as it is impossible to split people into fractions. Students should be aware of the context so they will know if they have provided the wrong answer to the question.
{"url":"https://www.sgmathtuition.com/?cat=72","timestamp":"2024-11-02T13:54:00Z","content_type":"text/html","content_length":"62595","record_id":"<urn:uuid:75ad8ad0-7688-44c1-9122-0c7f4e727c00>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00486.warc.gz"}
Scope and Sequence The big ideas in grade 2 include: extending understanding of the base-ten number system, building fluency with addition and subtraction, using standard units of measure, and describing and analyzing The mathematical work for grade 2 is partitioned into 9 units: 1. Adding, Subtracting, and Working with Data 2. Adding and Subtracting within 100 3. Measuring Length 4. Addition and Subtraction on the Number Line 5. Numbers to 1,000 6. Geometry, Time, and Money 7. Adding and Subtracting within 1,000 8. Equal Groups 9. Putting it All Together In these materials, particularly in units that focus on addition and subtraction, teachers will find terms that refer to problem types, such as Add To, Take From, Put Together or Take Apart, Compare, Result Unknown, and so on. These problem types are based on common addition and subtraction situations, as outlined in Table 1 of the Mathematics Glossary section of the Common Core State Standards. Unit 1: Adding, Subtracting, and Working with Data Unit Learning Goals • Students represent and solve story problems within 20 through the context of picture and bar graphs that represent categorical data. Students build toward fluency with addition and subtraction. In this unit, students begin the year-long work to develop fluency with sums and differences within 20, building on concepts of addition and subtraction from grade 1. They learn new ways to represent and solve problems involving addition, subtraction, and categorical data. In grade 1, students added and subtracted within 20 using strategies based on properties of addition and place value. They developed fluency with sums and differences within 10. Students also gained experience in collecting, organizing, and representing categorical data. Here, students are introduced to picture graphs and bar graphs as a way to represent categorical data. They ask and answer questions about situations described by the data. The structure of the bar graphs paves the way for a new representation, the tape diagram. Students learn that tape diagrams can be used to represent and make sense of problems involving the comparison of two quantities. The diagrams also help to deepen students’ understanding of the relationship between addition and subtraction. This opening unit also offers opportunities to introduce mathematical routines and structures for centers, and to develop a shared understanding of what it means to do math and to be a part of a mathematical community. Section A: Add and Subtract Within 20 Standards Alignments Addressing 2.NBT.B.5, 2.OA.B.2 Section Learning Goals • Build toward fluency with adding within 100. • Build toward fluency with subtracting within 20. This opening section gives teachers opportunities to assess students’ fluency with addition and subtraction facts within 10 and how they approach adding and subtracting. The first several lessons focus on making a ten as a strategy to add and subtract, which helps students gain fluency with facts within 20 and supports the work with larger numbers (such as composing and decomposing numbers as a way to add and subtract). In the last lesson of the section, students use strategies learned in grade 1 to add within 50. \(10- 5 = \underline{\hspace{1 cm}}\) \(5 + \underline{\hspace{1 cm}}=10\) \(2 + \underline{\hspace{1 cm}}=10\) \(10 - 8 = \underline{\hspace{1 cm}}\) Some activities take place in centers, enabling teachers to also introduce routines and structures while helping students develop mental strategies for adding and subtracting. PLC: Lesson 2, Activity 2, Sums of 10 Section B: Ways to Represent Data Standards Alignments Addressing 2.MD.D.10, 2.NBT.B.5, 2.OA.B.2 Section Learning Goals • Interpret picture and bar graphs. • Represent data using picture and bar graphs. • Solve one- and two-step problems using addition and subtraction within 20. In this section, students explore situations and problems that involve categorical data and learn new ways to represent such data. Students begin by representing data about their class in a way that makes sense to them. Then, they are introduced to picture graphs and bar graphs. Students learn the conventions of these graphs as they create them. They discuss the types of questions that can be asked and answered by the graphs, including those that require combining and comparing different categories. PLC: Lesson 9, Activity 1, Field Trip Choices Section C: Diagrams to Compare Standards Alignments Addressing 2.MD.D.10, 2.NBT.A.2, 2.NBT.B.5, 2.OA.A.1, 2.OA.B.2 Section Learning Goals • Make sense of and interpret tape diagrams. • Represent and solve Compare problems with unknowns in all positions within 100. Students have previously represented and reasoned about quantities in story problems. In grade 1, students compared quantities using diagrams with discrete partitions. In the previous section, they reasoned about quantities in bar graphs. Here, students learn to use tape diagrams as another way to make sense of the relationship between two quantities and between addition and subtraction. Students explore Compare story problems with an unknown difference, an unknown larger number, or an unknown smaller number. Tape diagrams help students to visualize these structures and support them in reasoning about strategies to use to solve problems, such as counting on or counting back. The table highlights the different types of problems in this section. │ difference unknown │ bigger unknown │ smaller unknown │ │Lin counted 28 boats. Diego counted 32 boats. How many more │Lin found 28 more shells than Diego. Diego found 32 shells. How │Lin saw 32 starfish. Diego saw 28 fewer starfish than Lin. How many │ │boats did Diego count? │many shells did Lin find? │starfish did Diego see? │ │ │ │ │ │ │ │ │ Students also write equations to reason about questions that ask “how many more?” and “how many less?” They recognize that different equations and diagrams can be used to represent the same difference between two numbers. PLC: Lesson 14, Activity 1, Party Time (Part 1) Estimated Days: 14 - 18 Unit 2: Adding and Subtracting within 100 Unit Learning Goals • Students add and subtract within 100 using strategies based on place value, properties of operations, and the relationship between addition and subtraction. They then use what they know to solve story problems. Previously, students added and subtracted numbers within 100 using strategies they learned in grade 1, such as counting on and counting back, and with the support of tools such as connecting cubes. In this unit, they add and subtract within 100 using strategies based on place value, the properties of operations, and the relationship between addition and subtraction. Students begin by using any strategy to find the value of sums and differences that do not involve composing or decomposing a ten. They are then introduced to base-ten blocks as a tool to represent addition and subtraction and move towards strategies that involve composing and decomposing tens. Students develop their understanding of grouping by place value, and begin to subtract one- and two-digit numbers from two-digit numbers by decomposing a ten as needed. They apply properties of operations and practice reasoning flexibly as they arrange numbers to facilitate addition or subtraction. For example, students compare Mai and Lin’s methods for finding the value of \(63-18\). At the end of the unit, students apply their knowledge of addition and subtraction within 100 to solve one- and two-step story problems of all types, with unknowns in all positions. To support them in reasoning about place value when adding and subtracting, students may choose to use connecting cubes, base-ten blocks, tape diagrams, and other representations learned in earlier units and grades. Section A: Add and Subtract Standards Alignments Addressing 2.MD.D.10, 2.NBT.A.2, 2.NBT.B.5, 2.NBT.B.9, 2.OA.A.1, 2.OA.B.2 Section Learning Goals • Add and subtract within 100 using strategies based on place value and the relationship between addition and subtraction. Problems in this section are limited to the problems like 65 – 23, where decomposing a ten is not required. In this section, students find the value of unknown addends using methods that are based on place value and are introduced to base-ten blocks. They continue to rely on the relationship between addition and subtraction to solve problems involving differences. Students begin by solving Compare story problems. They use any methods and tools that make sense to them—including diagrams and connecting cubes—to find differences of two-digit numbers. Lin and Clare used cubes to make trains. What do you notice? What do you wonder? Students then analyze the structure of base-ten blocks and use them to find unknown addends (MP7). Unlike connecting cubes, base-ten blocks cannot be pulled apart, which helps emphasize the structure of two-digit numbers in base ten. To reason about an unknown addend, they may add tens and ones to the known addend until they reach the value of the sum. They may also start with the total amount and subtract tens from tens and ones from ones to reach the known addend. The numbers encountered here do not require students to decompose a ten when they subtract by place value. PLC: Lesson 2, Activity 1, How Did You Find It? Section B: Decompose to Subtract Standards Alignments Addressing 2.NBT.B.5, 2.NBT.B.6, 2.NBT.B.9, 2.OA.B.2 Section Learning Goals • Subtract within 100 using strategies based on place value, including decomposing a ten, and the properties of operations. In this section, students subtract one- and two-digit numbers from two-digit numbers within 100. To reason about differences of two numbers, they use methods based on place value, base-ten blocks and diagrams, and properties of operations. The numbers here require students to decompose a ten when subtracting by place. Students also make sense of different representations of subtraction by place, including those that show their peers’ reasoning. For example, to find the value of \(63-18\), students might use base-ten blocks or drawings to represent tens and ones. In this case, they might decompose 1 ten from 63 and exchange it for 10 ones, making 5 tens and 13 ones. From here, some students may first take away 8 ones, and then 1 ten. Others may take away 1 ten, then 8 ones. When students discuss different approaches and explain why they result in the same value, they deepen their understanding of the properties of operations and place value. \(63 - 18\) The reasoning here builds a foundation for students to understand the standard algorithm for subtraction, but students should not be encouraged to use the notation for standard algorithm at this point. Allow them to build conceptual understanding by reasoning with base-ten blocks and drawings and articulating their thinking. PLC: Lesson 5, Activity 2, Subtract with Base-ten Blocks Section C: Represent and Solve Story Problems Standards Alignments Addressing 2.NBT.B.5, 2.NBT.B.6, 2.OA.A.1, 2.OA.B.2 Section Learning Goals • Represent and solve one- and two-step problems involving addition and subtraction within 100, including different problem types with unknowns in all positions. This section allows students to apply their knowledge to solve story problems that involve addition and subtraction within 100. The story problems include all types—Add To, Take From, Put Together/ Take Apart, and Compare— and have unknowns in all positions. Previously, students worked with diagrams that represent Compare problems. Throughout this section, students also make sense of diagrams that could represent Put Together/Take Apart story problems. Clare and Han are playing a game with seeds. Clare has 54 seeds on her side of the board. Han has 16 seeds on his side. How many seeds are on the board in all? Which diagram matches this story? Explain your match to your partner. As students relate quantities in context and diagrams that represent them, they practice reasoning quantitatively and abstractly (MP2). Throughout the section, students are invited to interpret and solve problems in the ways that make sense to them (MP1). Math tools such as connecting cubes and base-ten blocks should be made available to encourage methods based on place value and the properties of operations to solve the problems. PLC: Lesson 12, Activity 1, Interpret the Diagram Estimated Days: 12 - 16 Unit 3: Measuring Length Unit Learning Goals • Students measure and estimate lengths in standard units and solve measurement story problems within 100. This unit introduces students to standard units of lengths in the metric and customary systems. In grade 1, students expressed the lengths of objects in terms of a whole number of copies of a shorter object laid without gaps or overlaps. The length of the shorter object serves as the unit of Here, students learn about standard units of length: centimeters, meter, inches, and feet. They examine how different measuring tools represent length units, learn how to use the tools, and gain experience in measuring and estimating the lengths of objects. Along the way, students notice that the length of the same object can be described with different measurements and relate this to differences in the size of the unit used to measure. Throughout the unit, students solve one- and two-step story problems involving addition and subtraction of lengths. To make sense of and solve these problems, they use previously learned strategies for adding and subtracting within 100, including strategies based on place value. To close the unit, students learn that line plots can be used to represent numerical data. They create and interpret line plots that show measurement data and use them to answer questions about the Students relate the structure of a line plot to the tools they used to measure lengths. This prepares students for the work in the next unit, where they interpret numbers on the number line as lengths from 0. The number line is an essential representation that will be used in future grades and throughout students’ mathematical experiences. Section A: Metric Measurement Standards Alignments Addressing 2.MD.A, 2.MD.A.1, 2.MD.A.3, 2.MD.A.4, 2.MD.B.5, 2.MD.B.6, 2.NBT.A.2, 2.NBT.B.5, 2.OA.A.1, 2.OA.B.2 Section Learning Goals • Measure length in centimeters and meters. • Represent and solve one-step story problems within 100. This section introduces two metric units: centimeter and meter. Students use base-ten blocks, which have lengths of 1 centimeter and 10 centimeters, to measure objects in the classroom and to create their own centimeter ruler. Students iterate the 1-centimeter unit Just as they had done with non-standard units in grade 1. Students relate the side length of a centimeter cube to the distance between tick marks on their ruler. They see that each tick mark notes the distance in centimeters from the 0 mark, and that the length units accumulate as they move along the ruler and away from 0. Students then compare the ruler they created to a standard centimeter ruler. They learn the importance of placing the end of an object at 0 and discuss how the numbers on the ruler represent lengths from 0. Students also learn about a longer unit in the metric system, meter, and use it to estimate lengths. They have opportunities to choose measurement tools and to do so strategically (MP5), by considering the lengths of objects being measured. Students also measure the length of longer objects in both centimeters and meters, which prompts them to relate the size of the unit to the To close the section, students apply their knowledge of measurement to compare the lengths of objects and solve Compare story problems involving lengths within 100, measured in metric units. PLC: Lesson 2, Activity 2, Measure with 10-centimeter Tools Section B: Customary Measurement Standards Alignments Addressing 2.MD.A.1, 2.MD.A.2, 2.MD.A.3, 2.MD.A.4, 2.MD.B.5, 2.NBT.B.5, 2.OA.A, 2.OA.B.2 Section Learning Goals • Measure length in feet and inches. • Represent and solve one- and two-step story problems within 100. In this section, students apply measurement concepts and skills from earlier to measure and estimate lengths in two customary units: inches and feet. As in the previous section, students make choices about the tool to use based on the length of the object being measured (MP5) and measure the length of the same object in both feet and inches. They begin to generalize that when they use a longer length unit, fewer of those units are needed to span the full length of the object. This understanding is a foundation for their work with fractions in grade 3 and beyond. To solidify their understanding of measurement concepts, students also solve one- and two-step story problems involving addition and subtraction of lengths within 100, expressed in customary units. Some problems involve measurements using a “torn tape” where the 0 cannot be used as a starting point. Jada and Han used an inch ruler to measure the short side of a notebook. Jada says it is 8 inches. How did Han and Jada get the same measurement? PLC: Lesson 11, Activity 1, Saree Silk Ribbon Necklaces Section C: Line Plots Standards Alignments Addressing 2.MD.A.1, 2.MD.A.3, 2.MD.A.4, 2.MD.B.5, 2.MD.B.6, 2.MD.D.9, 2.NBT.B.5, 2.OA.B.2 Section Learning Goals • Represent numerical data on a line plot. In this section, students apply their understanding of measurement and data to create and interpret line plots. Students learn that the horizontal scale is marked off in whole-number length units, the same ones used to collect the data. They recognize that the numbers on the number line represent lengths and each “x” above a number represents an object of that length. They label line plots with titles and the measurement unit used. Throughout the section, students connect the features of the line plot to the tools they use to measure. PLC: Lesson 15, Activity 2, Plot Pencil Lengths Estimated Days: 14 - 18 Unit 4: Addition and Subtraction on the Number Line Unit Learning Goals • Students learn about the structure of a number line and use it to represent numbers within 100. They also relate addition and subtraction to length and represent the operations on the number In this unit, students are introduced to the number line, an essential representation that will be used throughout students’ K–12 mathematical experience. They learn to use the number line to represent whole numbers, sums, and differences. In a previous unit, students learned to measure length with rulers. Here, they see that the tick marks and numbers on the number line are like those on a ruler: both show equally spaced numbers that represent lengths from 0. Students use this understanding of structure to locate and compare numbers on the number line, as well as to estimate numbers represented by points on the number line. Locate and label 17 on the number line. What number could this be? _____ Students then learn conventions for representing addition and subtraction on the number line: using arrows pointing to the right for adding and arrows pointing to the left for subtracting. Students also use the number line to represent addition and subtraction methods discussed in Number Talks, such as counting on, counting back by place, and decomposing a number to get to a ten. The reasoning here deepens students’ understanding of the relationship between addition and subtraction. The number lines in this unit show a tick mark for every whole number in the given range, though not all may be labeled with the numeral. As students become more comfortable with this representation, they may draw number lines that show only the numbers needed to solve the problems, which is acceptable. Section A: The Structure of the Number Line Standards Alignments Addressing 2.MD.B.6, 2.NBT.A.2, 2.NBT.B.5 Section Learning Goals • Represent whole numbers within 100 as lengths from 0 on a number line. • Understand the structure of the number line. In this section, students begin to use the number line as a tool for understanding numbers and number relationships. They learn that the number line is a visual representation of numbers shown in order from left to right, with equal spacing between each number. Students see that each number tells the number of length units from 0, just like on the ruler. This means that the numbers numbers to the left are smaller (fewer units away from 0) and those farther to the right are larger (more units away from 0). Students learn that whole numbers can be represented with tick marks and points on the number line. They then locate, label, and compare numbers on a number line. They also estimate numbers that could be represented by points on a number line. Locate and label 43 on the number line. What number could this be? _____ PLC: Lesson 2, Activity 1, Class Number Line Section B: Add and Subtract on a Number Line Standards Alignments Addressing 2.MD.B.5, 2.MD.B.6, 2.NBT.A.2, 2.NBT.B.5, 2.OA.A.1 Section Learning Goals • Represent sums and differences on a number line. In this section, students reason about sums and differences on the number line. They begin by using directional arrows: an arrow pointing right represents addition, and an arrow pointing left represents subtraction. Students write equations that correspond to given number-line representations, as well as represent given equations on the number line. Later, students revisit the idea of subtraction as an unknown-addend problem and represent the unknown addend with a jump to the right. For example, here are three ways they may reason about \(35-27 \) on the number line: As students analyze various representations of a difference on the number line, they consider when certain strategies may be more efficient than others. They also consider reasoning strategies that are based on place value and the properties of operations (for example, adding tens and then ones, or adding ones and then tens). For example, here are two ways to find \(53-29\): At the end of the section, students use the number line to make sense of and solve story problems. They compare this representation with others used in earlier units. PLC: Lesson 8, Activity 1, Represent Equations Estimated Days: 12 - 15 Unit 5: Numbers to 1,000 Unit Learning Goals • Students extend place value understanding to three-digit numbers. In this unit, students extend their knowledge of the units in the base-ten system to include hundreds. In grade 1, students learned that a ten is a unit made up of 10 ones, and two-digit numbers are formed using units of tens and ones. Here, they learn that a hundred is a unit made up of 10 tens, and three-digit numbers are formed using units of hundreds, tens, and ones. To make sense of numbers in different ways and to build flexibility in reasoning with them, students work with a variety of representations: base-ten blocks, base-ten diagrams or drawings, number lines, expressions, and equations. At the start of the unit, students express a quantity in terms of the number of units represented by base-ten blocks (3 hundreds, 14 tens, 22 ones). They practice composing larger units from smaller units and representing the value using the fewest number of each unit (4 hundreds, 6 tens, 2 ones). They connect the number of units to three-digit numerals (462). Next, students make sense of three-digit numbers on the number line. In a previous unit, students learned about the structure of the number line by representing whole numbers within 100 as lengths from zero. Here, they get a sense of the relative distance of whole numbers within 1,000 from zero. Students learn to count to 1,000 by skip-counting on a number line by 10 and 100. They also locate, compare, and order three-digit numbers on a number line. Throughout the unit, the numbers 100, 200, 300, 400, 500, 600, 700, 800, 900 are referred to as multiples of 100 for simplicity. The same is true for multiples of 10. “Multiple” is not a word that students are expected to understand or use in grade 2. Students can describe the numbers as some number of tens or hundreds, such as “20 tens” or “3 hundreds.” Section A: The Value of Three Digits Standards Alignments Addressing 2.MD.B.6, 2.NBT.A, 2.NBT.A.1, 2.NBT.A.1.a, 2.NBT.A.1.b, 2.NBT.A.2, 2.NBT.A.3, 2.NBT.B.5, 2.OA.B.2 Section Learning Goals • Read, write, and represent three-digit numbers using base-ten numerals and expanded form. • Use place value understanding to compose and decompose three-digit numbers. This section introduces the unit of a hundred. Students begin by analyzing the large square base-ten block, and its corresponding base-ten diagram, to recognize 100 as 1 hundred, 10 tens, or 100 Students learn that the digits in three-digit numbers represent amounts of hundreds, tens, and ones. They use this insight to write numbers and represent quantities in different forms—base-ten numerals, words, and expanded form. Students see that they can compose a hundred with 10 tens, just as they can compose a ten with 10 ones, and that a quantity can be expressed in many ways. 2 hundreds 3 tens 8 ones two hundred thirty-eight 200 + 30 + 8 Composing larger units from smaller units allows students to express a quantity using the fewest number of each unit, which reinforces the meaning of the digits in a three-digit number and prepares students to add and subtract such numbers later. It also lays the foundation for generalizing the relationship between the digits of other numbers in the base-ten system in future grades. PLC: Lesson 2, Activity 2, How Many Hundreds? Section B: Compare and Order Numbers within 1,000 Standards Alignments Addressing 2.MD.B.6, 2.NBT.A, 2.NBT.A.1, 2.NBT.A.2, 2.NBT.A.3, 2.NBT.A.4, 2.NBT.B.8 Section Learning Goals • Compare and order three-digit numbers using place value understanding and the relative position of numbers on a number line. • Represent whole numbers up to 1,000 as lengths from 0 on a number line. In this section, students use number line diagrams to deepen their understanding of numbers to 1,000. They begin by skip-counting on the number line to build a sense of the relative position of numbers to 1,000. They recall the structure of the number line from a previous unit and use it, along with their understanding of place value, to locate, compare, and order numbers on the number This number line, for example, is divided into intervals of 10 units, representing 10 tens from 500 to 600. In a task, students may be asked to locate the number 540 and estimate the location of the number 546. As students locate or estimate the location of three-digit numbers on number lines such as these, they show an understanding of a number’s relative distance from zero and the place value of the digits. This understanding helps them to compare and order three-digit numbers. Students see that the numbers get larger as they move from left to right on the line. To compare and order three-digit numbers written as base-ten numerals, students also continue to use base-ten blocks, base-ten diagrams, or other representations that make sense to them. They write the comparisons using the symbols, >, <, and =. Who has more? How do you know? PLC: Lesson 9, Activity 1, Compare Comparisons Estimated Days: 11 - 14 Unit 6: Geometry, Time, and Money Unit Learning Goals • Students reason with shapes and their attributes and partition shapes into equal shares, building a foundation for fractions. They relate halves, fourths, and skip-counting by 5 to tell time, and solve story problems involving the values of coins and dollars. In this unit, students transition from place value and numbers to geometry, time, and money. In grade 1, students distinguished between defining and non-defining attributes of shapes, including triangles, rectangles, trapezoids, and circles. Here, they continue to look at attributes of a variety of shapes and see that shapes can be identified by the number of sides and vertices (corners). Students then study three-dimensional (solid) shapes, and identify the two-dimensional (flat) shapes that make up the faces of these solid shapes. Next, students look at ways to partition shapes and create equal shares. They extend their knowledge of halves and fourths (or quarters) from grade 1 to now include thirds. Students compose larger shapes from smaller equal-size shapes and partition shapes into two, three, and four equal pieces. As they develop the language of fractions, students also recognize that a whole can be described as 2 halves, 3 thirds, or 4 fourths, and that equal-size pieces of the same whole need not have the same shape. Which circles are not examples of circles partitioned into halves, thirds, or fourths? Later, students use their understanding of halves and fourths (or quarters) to tell time. In grade 1, they learned to tell time to the half hour. Here, they relate a quarter of a circle to the features of an analog clock. They use “quarter past” and “quarter till” to describe time, and skip-count to tell time in 5-minute intervals. They also learn to associate the notation “a.m.” and “p.m.” with their daily activities. To continue to build fluency with addition and subtraction within 100, students conclude the unit with a money context. They skip-count, count on from the largest value, and group like coins, and then add or subtract to find the value of a set of coins. Students also solve one- and two-step story problems involving sets of dollars and different coins, and use the symbols $ and ¢. Section A: Attributes of Shapes Standards Alignments Addressing 2.G.A.1, 2.MD.A.1, 2.NBT.A.3, 2.NBT.B.5 Section Learning Goals • Identify triangles, quadrilaterals, pentagons, hexagons, and cubes. • Recognize and draw shapes having specified attributes, such as a given number of angles or a given number of equal faces. In this section, students identify and draw triangles, quadrilaterals, pentagons, and hexagons. Students are likely familiar with triangles and hexagons given their previous work with pattern blocks. Here, they see that hexagons include any shape with six sides and six corners, and may look different from the pattern block they worked with in the past. For example, each of these shapes is a Students learn to name a shape by counting the sides and corners and come to see that, in any shape, the number of corners is the same as the number of sides. (The term “corners” is used in lieu of “vertices” because the latter requires an understanding of angles, which is developed in grade 4.) Students come to recognize that some shapes such as rectangles and squares have “square corners,” the informal language for 90-degree angles. As they identify and draw shapes with given attributes, they measure length in centimeters and inches, revisiting previously learned skills. At the end of the section, students relate two-dimensional (flat) shapes to three-dimensional (solid) shapes. They see that flat shapes make up the faces of solid shapes and identify solid shapes based on the flat shapes that constitute them. PLC: Lesson 2, Activity 2, What Shape Could It Be? Section B: Halves, Thirds, and Fourths Standards Alignments Addressing 2.G.A.1, 2.G.A.3, 2.NBT.A.1, 2.NBT.A.2 Section Learning Goals • Partition rectangles and circles into halves, thirds, and fourths and name the pieces. • Recognize 2 halves, 3 thirds, and 4 fourths as one whole. • Understand that equal pieces do not need to be the same shape. In this section, students learn that shapes can be partitioned into two, three, or four equal pieces called halves, thirds, and fourths or quarters. Students begin by composing shapes using pattern blocks, initially using any combination. Later, they use a single type of pattern block, which allows them to see the composed shape as partitioned into equal pieces. In grade 1, students partitioned shapes into two and four equal pieces, and described each piece as a half or a fourth or quarter. (To prepare students to tell time to the quarter hour in the next section, be sure that they hear and use fourths and quarters interchangeably.) Here, they add the term “thirds” to their vocabulary and partition rectangles into halves, thirds, and fourths. Students then identify equal-size pieces in shapes, which are partitioned in different ways to build an understanding that equal-size pieces of the same whole do not need to be the same shape. They come to understand that if the whole is partitioned into the same number of equal pieces, the names of the pieces are the same. Students also learn that 2 halves, 3 thirds, and 4 fourths each make up one whole. Although students are expected to use the language of fractions (halves, thirds, and fourths), they are not expected to use the word “fraction” or see fractions in numerical form until grade 3. PLC: Lesson 7, Activity 2, That’s Not It Section C: Time on the Clock Standards Alignments Addressing 2.G.A, 2.G.A.1, 2.MD.C.7, 2.NBT.A.2, 2.NBT.B.5, 2.NBT.B.6 Section Learning Goals • Tell and write time from analog and digital clocks to the nearest five minutes, using a.m. and p.m. In this section, students use their understanding of fourths and quarters to tell time. In grade 1, students learned to tell time to the hour and half-hour. Here, they make a connection between the analog clock and circles partitioned into halves or fourths. Students use the phrases “half past,” “quarter past,” and “quarter till” to tell time. They skip-count by 5 to tell time in 5-minute intervals. Students recognize that the hour hand on an analog clock moves towards the next hour as time passes. They represent time on analog clocks by drawing the hour and minute hands and writing the time with digits. Students recognize that, as time passes, the hour hand on an analog clock moves towards the next hour. They learn that each hour comes around twice a day on a 12-hour clock, and is labeled with “a.m.” and “ p.m.” to distinguish between times of day. Towards the end of this section, students relate a.m. and p.m. times to their daily activities. PLC: Lesson 13, Activity 1, What is the Time of Day? Section D: The Value of Money Standards Alignments Addressing 2.G.A, 2.G.A.1, 2.MD.C.8, 2.NBT.A.2, 2.NBT.B.5, 2.NBT.B.6, 2.NBT.B.8, 2.OA.A.1 Section Learning Goals • Find the value of a group of bills and coins. • Use addition and subtraction within 100 to solve one- and two-step word problems. In this section, students learn about money concepts while continuing to develop fluency with addition and subtraction within 100. They identify coins such as quarters, dimes, nickels, and pennies, and find the total value of different coin combinations. Mai had some money. Elena has $\(\)48. They combined their money and now they have $85. How much money did Mai have? PLC: Lesson 16, Activity 1, How Much is a Quarter Worth? Estimated Days: 16 - 21 Unit 7: Adding and Subtracting within 1,000 Unit Learning Goals • Students use place value understanding, the relationship between addition and subtraction, and properties of operations to add and subtract within 1,000. In this unit, students add and subtract within 1,000, with and without composing and decomposing a base-ten unit. Previously, students added and subtracted within 100 using methods such as counting on, counting back, and composing or decomposing a ten. Here, they apply the methods they know and their understanding of place value and three-digit numbers to find sums and differences within 1,000. Initially, students add and subtract without composing or decomposing a ten or hundred. Instead, they rely on methods based on the relationship between addition and subtraction and the properties of operations. They make sense of sums and differences using counting sequences, number relationships, and representations (number line, base-ten blocks, base-ten diagrams, and equations). As the unit progresses, students work with numbers that prompt them to compose and decompose one or more units, eliciting strategies based on place value. When adding and subtracting by place, students first compose or decompose only a ten, then either a ten or a hundred, and finally both a ten and a hundred. They also make sense of and connect different ways to represent place value strategies. For example, students make sense of a written method for subtracting 145 from 582 by connecting it to a base-ten diagram and their experiences with base-ten blocks. How do Jada's equations match Lin's diagram? Finish Jada's work to find \(582-145\). Students learn to recognize when composition or decomposition is a useful strategy when adding or subtracting by place. In the later half of the unit, they encounter lessons that encourage them to think flexibly and use strategies that make sense to them based on number relationships, properties of operations, and the relationship between addition and subtraction. Section A: Add and Subtract within 1,000 without Composition or Decomposition Standards Alignments Addressing 2.NBT.A, 2.NBT.A.2, 2.NBT.A.4, 2.NBT.B.5, 2.NBT.B.7, 2.NBT.B.8, 2.NBT.B.9 Section Learning Goals • Add and subtract numbers within 1,000 without composition or decomposition, and use strategies based on the relationship between addition and subtraction and the properties of operations. In this section, students add and subtract within 1,000 using methods where they do not explicitly compose or decompose a ten or a hundred. The number line is used early in this section to help students recognize that when numbers are relatively close, they can count on or count back to find the value of the difference. For example, they may count on from 559 to 562 to find \(562-559\). Students also analyze counting sequences of three-digit numbers that increase or decrease by 10 or 100. They observe patterns in place value before adding and subtracting multiples of 10 or 100. Fill in the missing numbers. Does the number line show counting on by 10 or by 100? Students then engage with problems and expressions that encourage them to reason about sums and differences using the relationship between addition and subtraction and the properties of operations. Diego has 6 tens. Tyler has 8 hundreds, 3 tens, and 6 ones. What is the value of their blocks together? Later in the section, students analyze and make connections between methods that use different representations, such as number lines, base-ten diagrams, and equations. They then use methods or representations that make sense to them to add and subtract three-digit numbers. PLC: Lesson 4, Activity 1, Zero Tens and Zero Ones Section B: Add within 1,000 using Place Value Strategies Standards Alignments Addressing 2.NBT.B.5, 2.NBT.B.6, 2.NBT.B.7, 2.NBT.B.8, 2.NBT.B.9 Section Learning Goals • Add numbers within 1,000 using strategies based on place value understanding, including composing a ten or hundred. In this section, students use strategies based on place value to add three-digit numbers. They learn that it is sometimes necessary to compose a hundred from 10 tens to find the value of such sums. Students begin with sums that allow them to decide when to make a ten. They then work with larger values in the tens place and determine when to compose a hundred. As the lessons progress, they encounter sums of two- and three-digit numbers that involve composing two units. Throughout the section, students analyze and use representations such as base-ten blocks, base-ten diagrams, expanded form, and other equations to build conceptual understanding and show place value reasoning. They also develop their understanding of the properties of operations as they observe that the order in which they add the units doesn’t affect the value of the sum. What is the same and what is different about how Priya and Lin found \(358 + 67\)? Priya's work \(300 + 100 + 10 + 10 + 5\) \(400 + 20 + 5 = 425\) Lin's work \(3 \text{ hundreds} + 11 \text { tens} + 15 \text{ ones}\) \(11 \text { tens} = 110 \) \(15 \text{ ones} = 15\) \(300 + 110 + 15 = 425\) Later in the section, students add within 1,000 using any method they have learned and thinking flexibly about the numbers they are adding. PLC: Lesson 7, Activity 2, Walk About and Add Section C: Subtract within 1,000 using Place Value Strategies Standards Alignments Addressing 2.MD.D.10, 2.NBT.A.1, 2.NBT.A.2, 2.NBT.A.3, 2.NBT.B.7, 2.NBT.B.8, 2.NBT.B.9 Section Learning Goals • Subtract numbers within 1,000 using strategies based on place value understanding, including decomposing a ten or hundred. As they have done when adding, students subtract numbers within 1,000 using place value strategies that involve decomposing a ten, a hundred, or both. This work builds on their previous experience of subtracting two-digit numbers by place value and decomposing a ten. Students use base-ten blocks to subtract hundreds from hundreds, tens from tens, and ones from ones, which offers a concrete experience of exchanging a ten for 10 ones or a hundred for 10 tens as Along the way, they begin to think strategically about how to decompose the minuend when using base-ten blocks or diagrams. They learn that by analyzing the value of the digits in each place, they can initially represent the minuend in a way that would require decomposing fewer units when subtracting by place. For example, this is a helpful way to represent 244 if we are subtracting a number with more than 4 ones, such as when finding \(244-67\): Throughout the section, students compare the steps they use to decompose units and the different ways to represent and record the units being decomposed. The section ends with students choosing subtraction methods flexibly. They apply their understanding of place value, the relationship between addition and subtraction, and the properties of operations, to analyze number relationships and decide how to find the value of differences within 1,000. PLC: Lesson 14, Activity 1, Agree to Disagree Estimated Days: 14 - 18 Unit 8: Equal Groups Unit Learning Goals • Students work with equal groups of objects to gain foundations for multiplication. In this unit, students develop an understanding of equal groups, building on their experiences with skip-counting and with finding the sums of equal addends. The work here serves as the foundation for multiplication and division in grade 3 and beyond. Students begin by analyzing even and odd numbers of objects. They learn that any even number can be split into 2 equal groups or into groups of 2, with no objects left over. Students use visual patterns to identify whether numbers of objects are even or odd. Next, students learn about rectangular arrays. They describe arrays using mathematical terms (rows and columns). Students see the total number of objects as a sum of the objects in each row and as a sum of the objects in each column, which they express by writing equations with equal addends. They also recognize that there are many ways of seeing the equal groups in an array. Later, students transition from working with arrays containing discrete objects to equal-size squares within a rectangle. They build rectangular arrays using inch tiles and partition rectangles into rows and columns of equal-size squares. The work here sets the stage for the concept of area in grade 3. Section A: Odd and Even Standards Alignments Addressing 2.NBT.A.2, 2.NBT.B.7, 2.NBT.B.8, 2.OA.B.2, 2.OA.C, 2.OA.C.3 Section Learning Goals • Determine whether a group of objects (up to 20) has an odd or even number of members. • Write an equation to express an even number as a sum of two equal addends. In this section, students learn about odd and even numbers, building on their experience with sharing objects with another person or with making pairs out of a set of objects. They begin by noticing that some groups of objects can be made into two equal groups without a “leftover” and other groups can be made into two equal groups with “1 leftover.” The same pattern can be seen when pairing After learning the terms, students focus on explaining why a group has an even number or an odd number of members. They do so by showing whether the objects can be made into two equal groups or be paired without a leftover, or whether they can skip-count by 2 to count the entire collection. The representations used here support students as they progress from explaining even and odd numbers informally to doing so more formally. They also pave the way for students to make sense of representations of multiplication in grade 3. Early lessons encourage the teacher to record student thinking using diagrams of equal groups or by arranging objects in rows and columns. Both recording strategies help students see and count pairs of objects. Students begin to see how objects arranged in rows and columns can show equal groups or pairs. They will learn more about this arrangement and the term “array” in the next section. To focus the work on building a foundation for multiplication and division, counters or connecting cubes should be available to students throughout the section, including during cool-downs. PLC: Lesson 3, Activity 2, Card Sort: Even or Odd Section B: Rectangular Arrays Standards Alignments Addressing 2.G.A.2, 2.NBT.A.2, 2.NBT.B.7, 2.OA.B.2, 2.OA.C.3, 2.OA.C.4 Section Learning Goals • Find the total number of objects arranged in rectangular arrays with up to 5 rows and up to 5 columns using addition. • Partition rectangles into rows and columns of equal-size squares, and count to find the total number of squares. • Represent the total number of objects in an array as a sum of equal addends. In this section, students learn that a rectangular array contains objects arranged into rows and columns, with the same number of objects in each row and the same in number in each column. Using this structure, students can skip-count by the number in each row or in each column to find the total number of objects. They can also write equations with equal addends representing the number of objects in a row or a column. Later in the section, students relate their work with arrays to the partitioning of shapes into equal parts. True or false? True or false? Students build rectangles by arranging square tiles into rows and columns, and then partition rectangles into rows and columns. Use 8 tiles to build a rectangle. Arrange them in 2 rows. Partition this rectangle to match the rectangle you made. Rectangles in this section have up to 5 rows and 5 columns. Students are not expected to name the fractional units created by partitioning shapes. The focus is on using the structure of the rows and columns created by the partitions to count the total number of equal-size squares. This work serves as a foundation for students’ future study of multiplication and area measurement. PLC: Lesson 9, Activity 1, Sums of Rows and Sums of Columns Estimated Days: 10 - 13 Unit 9: Putting It All Together Unit Learning Goals • Students consolidate and solidify their understanding of various concepts and skills related to major work of the grade. They also continue to work toward fluency goals of the grade. In this unit, students revisit major work and fluency goals of the grade, applying their learning from the year. Section A gives students a chance to solidify their fluency with addition and subtraction within 20. In section B, students apply methods they used with smaller numbers to add and subtract numbers within 100. They also revisit numbers within 1,000: composing and decomposing three-digit numbers in different ways, and using methods based on place value to find their sums and differences. In the final section, students interpret, solve, and write story problems involving numbers within 100, which further develop their fluency with addition and subtraction of two-digit numbers. They work with all problem types with the unknown in all positions. Clare picked 51 apples. Lin picked 18 apples. Andre picked 19 apples. Here is the work a student shows to answer to a question about the apples. \(51 + 19 = 70\) \( 70 + 18 = 88\) What is the question? The sections in this unit are standalone sections, not required to be completed in order. The goal is to offer ample opportunities for students to integrate the knowledge they have gained and to practice skills related to the expected fluencies of the grade. Section A: Fluency Within 20 and Measurement Standards Alignments Addressing 2.MD.A.1, 2.MD.A.4, 2.MD.B.5, 2.MD.D, 2.MD.D.9, 2.NBT.B.5, 2.OA.B.2 Section Learning Goals • Fluently add and subtract within 20. In this section, students practice adding and subtracting within 20 to meet the fluency expectations of the grade, which include finding all sums and differences within 20, and knowing from memory all sums of 2 one-digit numbers. Students begin with exercises and games that emphasize using the relationship between addition and subtraction to find the value of expressions and unknown addends. When students encounter sums and differences they don't know right away, they use mental math strategies and other methods they have learned, such as using facts they know, making equivalent expressions, and composing or decomposing a number to make a 10. Later in the section, students apply their mental strategies to find sums and differences within 20 in a measurement context. They measure standard lengths and create line plots, and then use the measurements to add and subtract. │group│length of pencils in cm│total length │ │ A │8 │13 │12 │7 │ │ │ B │9 │15 │7 │10 │ │ │ C │12 │13 │8 │6 │ │ │ D │9 │9 │11 │13 │ │ │ E │ │ │ │ │ │ Use the pencil measurements to create a line plot. PLC: Lesson 3, Activity 1, Measure on the Map Section B: Numbers to 1,000 Standards Alignments Addressing 2.NBT.A, 2.NBT.A.1, 2.NBT.A.3, 2.NBT.B.5, 2.NBT.B.7 Section Learning Goals • Add and subtract within 1,000 using strategies based on place value and the properties of operations. • Fluently add and subtract within 100. In this section, students revisit numbers within 1,000 and develop their facility with addition and subtraction within 100. The work here requires students to compose and decompose multiple place-value units, which reinforces their understanding of place value and operations on larger numbers. Students begin by decomposing and composing three-digit numbers in multiple ways using base-ten blocks, base-ten diagrams, words, and symbols. They also compose and decompose units as they match and create equivalent expressions for three-digit numbers. Find the number that makes each equation true. 6 hundreds + 9 ones = 5 hundreds + _____ tens + 9 ones 2 hundreds + 9 tens + 17 ones = _____ hundreds + 7 ones Next, students practice addition and subtraction within 1,000. They analyze sums and differences and reason about which ones are more difficult to evaluate and which are easier, deepening their understanding of composition and decomposition based on place value. Students then work toward fluent addition and subtraction within 100, which requires composing or decomposing one unit when using methods based on place value. Methods for finding sums and differences mentally, without explicitly composing or decomposing units, are also encouraged. PLC: Lesson 5, Activity 2, Let Me Count the Ways Section C: Create and Solve Story Problems Standards Alignments Addressing 2.NBT.A, 2.NBT.B.5, 2.NBT.B.9, 2.OA.A.1 Section Learning Goals • Represent and solve one- and two-step story problems within 100. In this section, students create and solve one- and two-step story problems with unknown values in all positions. They discuss how they make sense of the problem and share their methods for solving. By now, students are expected to solve all types of story problems within 100, using methods and representations that make sense to them. They continue to make connections across representations, with a focus on equations and tape diagrams, which will be used frequently in grade 3. Students analyze stories and determine the types of questions that could be asked based on the provided information. Then, they write their own story problems based on images and their own Write and solve a story problem the diagram could represent. PLC: Lesson 10, Activity 2, What is the Question? Estimated Days: 13
{"url":"https://curriculum.illustrativemathematics.org/k5/teachers/grade-2/course-guide/scope-and-sequence.html","timestamp":"2024-11-02T20:40:29Z","content_type":"text/html","content_length":"371464","record_id":"<urn:uuid:469d22c9-f2bb-4cf7-b994-76ece5a91cda>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00038.warc.gz"}
One big study or two small studies? Insights from simulations At a recent conference, someone posed a question that had been intriguing me for a while: suppose you have limited resources, with the potential to test N participants. Would it be better to do two studies, each with N/2 participants, or one big study with all N? I've been on the periphery of conversations about this topic, but never really delved into it, so I gave a rather lame answer. I remembered hearing that statisticians would recommend the one big study option, but my intuition was that I'd trust a result that replicated more than one which was a one-off, even if the latter was from a bigger sample. Well, I've done the simulations and it's clear that my intuition is badly flawed. Here's what I did. I adapted a script that is described in my recent slides that give hands-on instructions for beginners on how to simulate data, The script, , which can be found generates data for a simple two-group comparison using a t-test. In this version, on each run of the simulation, you get output for one study where all subjects are divided into two groups of size N, and for two smaller studies each with half the number of subjects. I ran it with various settings to vary both the sample size and the effect size (Cohen's d). I included the case where there is no real difference between groups (d = 0), so I could estimate the false positive rate as well as the power to detect a true effect. I used a one-tailed t-test, as I had pre-specified that group B had the higher mean when d > 0. I used a traditional approach with p-value cutoffs for statistical significance (and yes, I can hear many readers tut-tutting, but this is useful for this demonstration….) to see how often I got a result that met each of three different criteria: • a) Single study, p < .05 • b) Split sample, p < .05 replicated in both studies • c) Single study, p < .005 Figure 1 summarises the results. The figure is pretty busy but worth taking a while to unpack. Power is just the proportion of runs of the simulation where the significance criterion was met. It's conventional to adopt a power cutoff of .8 when deciding on how big a sample to use in a study. Sample size is colour coded, and refers to the number of subjects per group for the single study. So for the split replication, each group has half this number of subjects. The continuous line shows the proportion of results where p < .05 for the single study, the dotted line has results from the split replication, and the dashed line has results from the single study with more stringent significance criterion, p < .005 . It's clear that for all sample sizes and all effect sizes, the one single sample is much better powered than the split replication. But I then realised what had been bugging me and why my intuition was different. Look at the bottom left of the figure, where the x-axis is zero: the continuous lines (i.e., big sample, p < .05) all cross the y-axis at .05. This is inevitable: by definition, if you set p < .05, there's a one in 20 chance that you'll get a significant result when there's really no group difference in the population, regardless of the sample size. In contrast, the dotted lines cross the y-axis close to zero, reflecting the fact that when the null hypothesis is true, the chance of two samples both giving p < .05 in a replication study is one in 400 (.05^2 = .0025). So I had been thinking more like a Bayesian: given a significant result, how likely was it to have been come from a population with a true effect rather than a null effect? This is a very different thing from what a simple p-value tells you*. Initially, I thought I was onto something. If we just stick with p < .05, then it could be argued that from a Bayesian perspective, the split replication approach is preferable. Although you are less likely to see a significant effect with this approach, when you do, you can be far more confident it is a real effect. In formal terms, the likelihood ratio for a true vs null hypothesis, given p < .05, will be much higher for the replication. My joy at having my insight confirmed was, however, short-lived. I realised that this benefit of the replication approach could be exceeded with the single big sample simply by reducing the p-value so that the odds of a false positive are minimal. That's why Figure 1 also shows the scenario for one big sample with p < .005: a threshold that has recently proposed as a general recommendation for claims of new discoveries (Benjamin et al, 2018)**. None of this will surprise expert statisticians: Figure 1 just reflects basic facts about statistical power that were popularised by Jacob Cohen in 1977. But I'm glad to have my intuitions now more aligned with reality, and I'd encourage others to try simulation as a great way to get more insights into statistical methods. Here is the conclusions I've drawn from the simulation: • First, even when the two groups come from populations with different means, it's unlikely that you'll get a clear result from a single small study unless the effect size is at least moderate; and the odds of finding a replicated significant effect are substantially lower than this. None of the dotted lines achieves 80% power for a replication if effect size is less than .3 - and many effects in psychology are no bigger than that. • Second, from a statistical perspective, testing an a priori hypothesis in a larger sample with a lower p-value is more efficient than subdividing the sample and replicating the study using a less stringent p-value. I'm not a stats expert, and I'm aware that there's been considerable debate out there about p-values - especially regarding the recommendations of Benjamin et al (2018). I have previously sat on the fence as I've not felt confident about the pros and cons. But on the basis of this simulation, I'm warming to the idea of p < .005. I'd welcome comments and corrections. *In his paper The reproducibility of research and the misinterpretation of p-values. Royal Society Open Science, 4(171085). doi:10.1098/rsos.171085 David Colquhoun (2017) discusses these issues and notes that we also need to consider the prior likelihood of the null hypothesis being true: something that is unknowable and can only be estimated on the basis of past experience and intuition. **The proposal for adopting p < .005 as a more stringent statistical threshold for new discoveries can be found here: Benjamin, D. J., Berger, J. O., Johannesson, M., Nosek, B. A., Wagenmakers, E. J., Berk, R., . . . Johnson, V. E. (2018). Redefine statistical significance. Nature Human Behaviour, 2(1), 6-10. doi:10.1038/s41562-017-0189-z Postscript, 15th July 2018 This blogpost has generated a lot of discussion, mostly on Twitter. One point that particularly interested me was a comment that I hadn’t done a fair comparison between the one-study and two-study situation, because the plot showed a one-off two group study with an alpha at .005, versus a replication study (half sample size in each group) with alpha at .05. For a fair comparison, it was argued, I should equate the probabilities between the two situations, i.e. the alpha for the one-off study should be .05 squared = .0025. So I took a look at the fair comparison: Figure 2 shows the situation when comparing one study with alpha set to .0025 vs a split replication with alpha of .05. The intuition of many people on Twitter was that these should be identical, but they aren’t. Why not? We have the same information in the two samples. (In fact, I modified the script so that this was literally true and the same sample was tested singly and again split into two – previously I’d just resampled to get the smaller samples. This makes no difference – the single sample with more extreme alpha still gives higher Figure 2: Power for one-off study with alpha .0025 (dashed lines) vs. split replication with p < .05 To look at it another way, in one version of the simulation there were 1600 simulated experiments with a true effect (including all the simulated sample sizes and effect sizes). Of these 581 were identified as ‘significant’ both by the one-off study with p < .0025 and they were also replicated in two small studies with p < .05. Only 5 were identified by the split replication alone, but 134 were identified by the one-off study alone. I think I worked out why this is the case, though I’d appreciate having a proper statistical opinion. It seems to have to do with accuracy of estimating the standard deviation. If you have a split sample and you estimate the mean from each half (A and B), then the average of mean A and mean B will be the same as for the big sample of AB combined. But when it comes to estimating the standard deviation – which is a key statistic when computing group differences – the estimate is more accurate and precise with the large sample. This is because the standard deviation is computed by measuring the difference of each value from its own sample mean. Means for A and B will fluctuate due to sampling error, and this will make the estimated SDs less reliable. You can estimate the pooled standard deviation for two samples by taking the square root of the average of the variances. However, that value is less precise than the SD from the single large sample. I haven’t done a large number of runs, but a quick check suggests that whereas both the one-off study and the split replication give pooled estimates of the SD at around the true value of 1.0, the standard deviation of the standard deviation (we are getting very meta here!) is around .01 for the one-off study but .14 for the split replication. Again, I’m reporting results from across all the simulated trials, including the full range of sample sizes and effect sizes. Figure 3: Distribution of estimates of pooled SD; The range is narrower for the one-off study (pink) than for the split replication studies (blue). Purple shows area of overlap of distributions This has been an intriguing puzzle to investigate, but in the original post, I hadn’t really been intending to do this kind of comparison - my interest was rather in making the more elementary point which is that there's a very low probability of achieving a replication when sample size and effect size are both relatively small. Returning to that issue, another commentator said that they’d have far more confidence in five small studies all showing the same effect than in one giant study. This is exactly the view I would have taken before I looked into this with simulations; but I now realise this idea has a serious flaw, which is that you’re very unlikely to get those five replications, even if you are reasonably well powered, because – the tldr; message implicit in this post – when we’re talking about replications, we have to multiply the probabilities, and they rapidly get very low. So, if you look at the figure, suppose you have a moderate effect size, around .5, then you need a sample of 48 per group to get 80% power. But if you repeat the study five times, then the chance of getting a positive result in all five cases is .8^5, which is .33. So most of the time you’d get a mixture of null and positive results. Even if you doubled the sample size to increase power to around .95, the chance of all five studies coming out positive is still only .95^5 (82%). Finally, another suggestion from Twitter is that a meta-analysis of several studies should give the same result as a single big sample. I’m afraid I have no expertise in meta-analysis, so I don’t know how well it handles the issue of more variable SD estimates in small samples, but I’d be interested to hear more from any readers who are up to speed with this. 12 comments: 1. One advantage of running two studies - leaving power calculations aside - is that you get the opportunity to use real data from the first study to learn all the things that were wrong with your a-priori predictions or analysis plan. A point that I think is sometimes missed in calls for pre-registration is something I would summarise with the quote that "research is what I'm doing when I don't know what I'm doing". Pre-registration may have little value for studies with novel dependent measures, or for which the data holds surprises. In my experience of studies like these, sticking to the pre-registered analysis is a mistake. I think a better approach is to work with the data in an exploratory fashion and then pre-register the right analysis and predictions for your second, replication study. 1. I guess the other alternative would be to do some form of leave-half-out analysis. e.g in the context of ERPs: - test N participants; - determine based on randomly selected N/2 the latency where the greatest effect is; - determine the effect size for the remaining N/2 at that latency; - repeat 1000x with different random N/2 subsamples; - average the effect sizes across the 1000 runs. My intuition is that this gives a more accurate picture of the true effect size. But it would probably only make sense when there are few researcher degrees of freedom. 2. Uh - not sure why I'm anonymous when I'm supposedly signed in. Jon Brock here ^^ 3. Thanks Matt. I think you could also argue for other advantages of 2 studies, e.g. done by different groups so establish robustness of result against lab-specific effects. But the power issue is really serious: if you are not powered to detect the effect of interest, then you're in trouble. And most of the time we aren't. Another option is to consider other ways of improving power by minimising measurement error, and hence increasing effect size. But, I repeat, power is key. 2. @ Matt Davis I am certainly no statistician but with limited N-sizes that we often have in human psychology a serious problem with the two study approach is that it magnifies the chances of both false positive and false negative results if the data is at all noisy. Given sufficient sample sizes and relatively clean measurements your approach has a lot of appeal but the curse of the N-size haunts us. I specified "human psychology" above most researchers working with animals do not, at least in principle, have to worry about limited recruitment pools. 3. Once you introduce heterogenity of effect sizes, then one big study is highly problematic. 4. @ Unknown (aka Jon Bock) Check in mirror that you are not wearing an iron mask. How does one geheterogenity of effect sizes in a single study (assuming one measurement)? As I said, I am no statistician. 1. So this is similar to how lots of machine learning approaches work. You randomly divide the sample into two - you use the first half to determine how to analyse the data (eg what the epoch of interest is) and then, having fixed those analytical degrees of freedom, you determine the effect size for the remaining half of the participants. If you repeat that exercise a second time with a different random division of the participants, you'll end up with a slightly different effect size. So the best thing to do is repeat that exercise many times (say 1000) and then determine the average effect size. 2. Ah, obvious once someone points it out. Thanks. 5. Blogger has refused to interact with David Colquhoun, so I am posting this comment on his behalf! "Well actually in my 2017 paper to which you kindly refer, what I do is to suggest ways of circumventing the inconvenient fact that we rarely have a valid prior probability. More details in my 2018 paper: https://arxiv.org/abs/1802.04888 and in my CEBM talk: https://www.youtube.com/watch?v=iFaIpe9rFR0 …." 6. You randomly divide the sample into two - you use the first half to determine how to analyse the data (eg what the epoch of interest is) and then, having fixed those analytical degrees of freedom, you determine the effect size for the remaining half of the participants. url https://amzn.to/2N9MarN 7. WRT simulations there is no difference between a single study and replicated studies. You could achieve the same result (wrt replicated studies) by randomly assigning results from the single study into one of two groups and then analysing the two groups separately. But this would be a very inefficient way of using the data. In practice, if you do two studies then you would do them at different times of day, or on different days, or in different labs or even in different countries. You would then still analyse as a single study but you would include terms on your AOV regression model for study and possibly study*treatment terms. This would remove degrees of freedom from the residual error but would enable you to draw more general conclusions. New comments are not allowed.
{"url":"https://deevybee.blogspot.com/2018/07/one-big-study-or-two-small-studies.html","timestamp":"2024-11-02T18:40:48Z","content_type":"text/html","content_length":"132552","record_id":"<urn:uuid:126dde71-2c27-4fc7-8c7f-dd94f153671c>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00808.warc.gz"}
Brian Kernighan’s Algorithm to count set bits in an integer in C++ and Python - BTech Geeks Brian Kernighan’s Algorithm to count set bits in an integer in C++ and Python Brian Kernighan’s Algorithm to count the number of set bits in an integer: Given a number, the task is to count the set bits of the given number using Brian Kernighan’s Algorithm in C++ and Python. Brian Kernighan’s Algorithm to Count Set bits of a Number in C++ and Python Count set bits in an integer: We’ll look at Brian Kernighan’s Algorithm and see how it works in C++ and Python. Drive into Python Programming Examples and explore more instances related to python concepts so that you can become proficient in generating programs in Python Programming Language. given number =43 The total number of set bits in the given number 43 : given number =64 The total number of set bits in the given number 64 : given number =4322 The total number of set bits in the given number 4322 : First, we implement the brute force solution for the above program Brute-Force Approach (Naïve Approach) A simple method is to take each bit into consideration in a number (set or unset) and hold a counter to track the set bits. • Set the variable to say count to 0 to count the total number of set bits. • We utilize the while loop. • We’ll keep going till the number is bigger than zero (Condition of while statement) • Using the % operator, we will determine whether the last check bit is set or not. • If the check bit is 1, it indicates that the bit is set, and we increment the count. • Divide the given number by 2. • Print the count. Below is the implementation: def countSetBit(numb): # checking if the given number is greater than 1 if numb > 1: # Set the variable say setbitcount to 0 to count the total number of set bits. setbitcount = 0 # looping till number greater than 0 using while loop while(numb > 0): # We will get the last check bit whether it is set bit or not using % operator checkbit = numb % 2 # checking if the check bit is 1 or not # if the check bit is 1 then increment the setbitcount if(checkbit == 1): setbitcount = setbitcount+1 # divide the number by 2 numb = numb//2 # return the setbitcount return setbitcount # Driver code given_numb = 235 # passing given number to countSetBit function to # count the total number of set bits in the given number print("The total number of set bits in the given number ", given_numb, " : ") The total number of set bits in the given number 235 : The brute-force strategy described above requires one repetition per bit until no more set bits remain. So it goes through 32 iterations on a 32–bit word with only the high-bit set. Brian Kernighan’s Algorithm to calculate set bits in an integer We may apply Brian Kernighan’s technique to improve the performance of the naive algorithm described above. The concept is to only consider an integer’s set bits by turning off its rightmost set bit (after counting it) so that the next iteration of the loop only considers the next rightmost bit. To turn off the rightmost set bit of a number n, use the formula n & (n-1). This is because the formula n-1 flips all the bits following the rightmost set bit of n, including the rightmost set bit itself. As a result, n & (n-1) results in the last bit of n being flipped. Implementing Brian Kernighan’s Algorithm to count the number of set bits in an integer in Python Below is the implementation of Brian Kernighan’s Algorithm to set bits in a Python: def countSetBit(numb): # checking if the given number is greater than 1 if numb > 1: # Set the variable say setbitcount to 0 to count the total number of set bits. setbitcount = 0 # looping till number greater than 0 using while loop while(numb > 0): numb = numb & (numb-1) # increment the set bit count setbitcount = setbitcount+1 # return the setbitcount return setbitcount # Driver code given_numb = 4322 # passing given number to countSetBit function to # count the total number of set bits in the given number print("The total number of set bits in the given number ", given_numb, " : ") The total number of set bits in the given number 4322 : 5 Time Complexity: O(logn) Implementing Brian Kernighan’s Algorithm to count the number of set bits in an integer in C++ Below is the implementation of Brian Kernighan’s Algorithm to set bits in a C++: #include <iostream> using namespace std; // function which returns the total number of set bits in // the given number int countSetBit(int numb) { // Set the variable say setbitcount to 0 to count the // total number of set bits. int setbitcount = 0; // checking if the given number is greater than 1 if (numb > 1) { // looping till number greater than 0 using while // loop while (numb > 0) { numb = numb & (numb - 1); // increment the set bit count // return the setbitcount return setbitcount; int main() // given number int given_numb = 4322; // passing given number to countSetBit function to // count the total number of set bits in the given // number cout << "The total number of set bits in the given " "number " << given_numb << " : " << endl; cout << countSetBit(given_numb); return 0; The total number of set bits in the given number 4322 : Brian Kernighan’s algorithm iterates as many times as there are set bits. So, if we have a 32–bit word with only the high bit set, it will only be looped once. To Count Set Bits in an Integer Using GCC built-in function GCC also implements a built-in function to get no of set bits in an integer,int __builtin_popcount(unsigned int n) that returns the total number of set bits in n. The below C++ program illustrates it #include <iostream> using namespace std; int main() int n = 16; cout << "The total number of set bits in " << n << " is " << __builtin_popcount (n) << endl; return 0; The total number of set bits in 16 is 1 Also, GCC furnishes two other built-in functions, int __builtin_popcountl (unsigned long) and int __builtin_popcountll (unsigned long long), same as __builtin_popcount, except their argument type is unsigned long and unsigned long long, each. Related Programs:
{"url":"https://btechgeeks.com/brian-kernighans-algorithm-to-count-set-bits-in-an-integer/","timestamp":"2024-11-08T11:41:16Z","content_type":"text/html","content_length":"69774","record_id":"<urn:uuid:f18936ff-de82-48c7-be97-06b33e8368a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00866.warc.gz"}
Abstract Algebra - Mathstoon Abstract Algebra The topics that will be covered under Abstract Algebra are as follows: Group Theory, Ring Theory, Field Theory, etc. The notes on Group Theory are designed here in such a way that it covers a complete course in Group Theory. Group Theory: Table of Contents Prove that Order of Element Divides Order of Group Group of Order 4 is Abelian: Proof Center of Symmetric Group S[n] is Trivial Prove that Symmetric Group S[n] is not Abelian Every Subgroup of a Cyclic Group is Cyclic: Proof Two Cyclic Groups of Same Order are Isomorphic Infinite Cyclic Group is Isomorphic to ℤ [With Generators] Ring Theory: Table of Contents Field Theory: Table of Contents
{"url":"https://www.mathstoon.com/abstract-algebra/","timestamp":"2024-11-05T23:34:17Z","content_type":"text/html","content_length":"167979","record_id":"<urn:uuid:7da6c69d-95a2-4d53-9d3e-6da3d2247f99>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00844.warc.gz"}
How to Calculate Time in Excel? How to Calculate Time Difference in Excel (Elapsed Time)? How much time has passed between two dates is information you use for many purposes (e.g., calculating age, time with the company, time spent working on a project, overtime, worked hours, etc.). It’s probably the most used formula in the book. However, there are two different situations you need to consider: • The time difference between two dates (over 24 hours difference) • The time difference between two times of the same day (often, under 24 hours difference) You should also establish the unit measure for the result from the beginning. For example, you can measure the time difference between two dates in years (e.g., age), months (e.g., time spent working on a project), or days (e.g., leave balance). The time difference between two times on the same day is usually measured in hours and minutes, but you can include seconds if they are relevant to your measurement (e.g., time between two computer program runs). Calculating the Time Difference between Two Dates The first thing you need to do is format the cells storing the two dates as Date values. Select them, right-click, and choose Format Cells… from the drop-down menu. Then, select Date from the Category panel and one of the types from the Type list. Press OK, and you will be good to go. Enter the dates you want to subtract into the A2 and B2 cells and add the formula “=B2-A2” in the C2 cell. Excel will compute the difference in days between the two dates. However, if you need the difference in months or years, adjust the formula accordingly. The formula for getting the result in months is “=(B2-A2)/12,” and the formula for getting the result in years is “=(B2-A2)/365”. The values in the Result cells are stored as numbers. You can format them to show just two decimals or none at all. 2.054794521 years is not very relevant data and can mess up your calculation. So, select the Result cells, right-click, and choose Format Cells… from the drop-down menu. Then, select Number from the Category panel and set Decimal places to 0. Excel will round the value if you want to stick to the highest integer value (e.g., 62.5 months to be 62, not 63) and use the function INT(). Tip: We provide a time difference calculator, if you don't want to use Excel. Calculating Age on the Fly Calculating age is a particular case of calculating the time difference between two dates. However, it is so used in people management and HR processes that it needs a shortcut. When calculating the age of your employees, you need the result in years. Computing the result in days and then converting to years is not the optimal option. Excel 365 provides the function DATEDIF(), which computes the difference between two dates and gives the result in years (“y”), years and months (“ym”), or years, months, and days (“ymd”). Given that the employee’s date of birth is in cell A2, the formula is “=DATEDIF(TODAY(), A2, “y”)”. However, if you are using older versions of Excel, you don’t have the function DATEDIF(). In this case, you can use the function YEAR() to compute the difference between the current year and the birth year of the employee and find out what age the employee will be this year. The formula is “=YEAR(TODAY())-YEAR(A2)”. If you need the exact age of an employee at the current date, you can use the formula “=INT((TODAY()-A2)/365)”. It calculates exactly how many days there are between the current date and the date of birth, converts the result into years, and rounds to the largest integer, more minor than the result. You can calculate an employee's age at any time by replacing the current date (TODAY() function) with a date of your choice. Select the cells, right-click, and choose Format Cells… from the drop-down menu. Then, select Time from the Category panel and one of the types from the Type list. Press OK, and you will be good to go. Then, it would be best to format the Result cells to store hours, hours, and minutes, or hours, minutes, and seconds. Select them, right-click, and choose Format Cells… from the drop-down menu. Calculating Hours Elapsed Between Two Times As before, the first thing you need to do to calculate the time difference between two times is to format the cells to store time values. Then, select Custom from the Category panel and h:mm:ss from the Type list. If you only want hours, replace h:mm:ss with h. If you want hours and minutes as an outcome, replace it with h:mm. And so on. However, if your Result cells hold only hours, minutes, or seconds, the values must be smaller than 24 hours, 60 minutes, and 60 seconds, respectively. If you want to store values higher than those limits, you need to use [h], [mm], and [ss] types instead. In both cases, the formula is a simple subtraction. Related: How to make an Excel timesheet? Types of Time Calculations in Excel: Add & Subtract Although time difference is the most popular requirement for time management, it’s not the only one. You often must add or subtract a certain amount to a given date or time. For example, an employee works on a project at 8:00 AM and works 8 hours. Format the cells that hold the added value as h for adding hours under 24 hours, [mm] for adding minutes under 60 minutes, [h] for adding more than 24 hours, and [mm] for adding more than 60 minutes. You can also simultaneously combine and add hours, minutes, and seconds (h:mm:ss or [h]:mm:ss). The formula for adding time is a simple add operation, while for subtracting time is a subtract operation. If everything happens within 24 hours, you can format the Result cells to show only the time. But if you pass the 24-hour limit, you must also format them to show the date. Using the TIME Function The TIME(hour, minute, second) function creates a time value formatted as h:mm AM/PM from three parameters representing hours, minutes, and seconds. It helps you calculate time differences without worrying about the date, just focusing on time. For example, you can use it to calculate employees working hours or overtime. You can also create a time value from text using the function TIMEVALUE(“text”). Both TIME() and TIMEVALUE() help you avoid using Excel formatting to deal with time by allowing you to use only numeric values or text. Using the TEXT Function The TEXT(value, format) function formats a value using the format you give as a parameter. Instead of formatting the Result cells when calculating time differences, you can use the TEXT() function and do everything in a single step. For example, you can use the function to format the difference between two dates as years or days or between two times as hours and minutes. NOW() and TODAY() You don’t always have to check your watch or calendar when using Excel. Instead, use the functions NOW() and TODAY() to get the exact time and date. Keep in mind that the results vary over time. If you want to lock in a specific date or time, copy the result of the functions NOW() and TODAY() as values in separate cells. Formulas for Calculating Time in Excel All the formulas used for this article are available in the free downloadable Excel file. The file includes examples you can use in your personnel files, timesheets, and any other time management Excel document. Here is a summary of the formulas we’ve used: Formula Purpose Description =B2-A2 Time difference between dates Computes the difference between two dates in cells B2 and A2 and provides a result in days =(B3-A3)/365 Time difference between dates Computes the difference between two dates in cells B3 and A3 and provides a result in years =(B4-A4)/12 Time difference between dates Computes the difference between two dates in cells B4 and A4 and provides a result in months =DATEDIF(TODAY(), A2, “y”) Time difference between today’s date Computes the difference between today’s date and the date in cell A2 and provides a result in years and a given date =A2+B2 Add time Computes the sum of two time values =TIME(HOUR(B5),MINUTE(B5),0)-TIME(HOUR Time difference between two times Computes the time difference between two times =TEXT(B2-A2, "h:mm") Time difference between two times Computes the time difference between two times and provides a result in hours and minutes =TEXT(B2-A2, "y") Time difference between two dates Computes the time difference between two dates and provides a result in years =INT((TODAY()-A6)/365) Time difference between today’s date Computes the difference between today’s date and the date in cell A2 and provides a result in years by rounding to the and a given date highest integer lower than the result value Tip: Want to improve your Excel know-how? Discover the top 20 most popular excel formulas. For more complex formulas like how many days left we do share the Excel formula also. Managing time efficiently is crucial for your business. There is no room for mistakes. Find and use the tools that work for you and automate the processes as much as possible. Using Excel formulas and functions is the best starting point. Develop a workflow that will soon become second nature to ensure you avoid errors and time-consuming tasks.
{"url":"https://arahr.com/how-to-calculate-time-in-excel/","timestamp":"2024-11-07T18:55:21Z","content_type":"text/html","content_length":"358752","record_id":"<urn:uuid:4b451f7a-7820-4fe6-87b4-5ddb94bdb346>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00346.warc.gz"}
2005 AIME II Problems/Problem 3 An infinite geometric series has sum 2005. A new series, obtained by squaring each term of the original series, has 10 times the sum of the original series. The common ratio of the original series is $\frac mn$ where $m$ and $n$ are relatively prime integers. Find $m+n.$ Solution 1 Let's call the first term of the original geometric series $a$ and the common ratio $r$, so $2005 = a + ar + ar^2 + \ldots$. Using the sum formula for infinite geometric series, we have $\;\;\frac a {1 -r} = 2005$. Then we form a new series, $a^2 + a^2 r^2 + a^2 r^4 + \ldots$. We know this series has sum $20050 = \frac{a^2}{1 - r^2}$. Dividing this equation by $\frac{a}{1-r}$, we get $10 = \frac a{1 + r}$. Then $a = 2005 - 2005r$ and $a = 10 + 10r$ so $2005 - 2005r = 10 + 10r$, $1995 = 2015r$ and finally $r = \frac{1995}{2015} = \frac{399}{403}$, so the answer is $399 + 403 = \boxed{802}$. (We know this last fraction is fully reduced by the Euclidean algorithm -- because $4 = 403 - 399$, $\gcd(403, 399) | 4$. But 403 is odd, so $\gcd(403, 399) = 1$.) Solution 2 We can write the sum of the original series as $a + a\left(\dfrac{m}{n}\right) + a\left(\dfrac{m}{n}\right)^2 + \ldots = 2005$, where the common ratio is equal to $\dfrac{m}{n}$. We can also write the sum of the second series as $a^2 + a^2\left(\dfrac{m}{n}\right)^2 + a^2\left(\left(\dfrac{m}{n}\right)^2\right)^2 + \ldots = 20050$. Using the formula for the sum of an infinite geometric series $S=\dfrac{a}{1-r}$, where $S$ is the sum of the sequence, $a$ is the first term of the sequence, and $r$ is the ratio of the sequence, the sum of the original series can be written as $\dfrac{a}{1-\ frac{m}{n}}=\dfrac{a}{\frac{n-m}{n}}=\dfrac{a \cdot n}{n-m}=2005\;\text{(1)}$, and the second sequence can be written as $\dfrac{a^2}{1-\frac{m^2}{n^2}}=\dfrac{a^2}{\frac{n^2-m^2}{n^2}}=\dfrac{a^2\ cdot n^2}{(n+m)(n-m)}=20050\;\text{(2)}$. Dividing $\text{(2)}$ by $\text{(1)}$, we obtain $\dfrac{a\cdot n}{m+n}=10$, which can also be written as $a\cdot n=10(m+n)$. Substitute this value for $a\ cdot n$ back into $\text{(1)}$, we obtain $10\cdot \dfrac{n+m}{n-m}=2005$. Dividing both sides by 10 yields $\dfrac{n+m}{n-m}=\dfrac{401}{2}$ we can now write a system of equations with $n+m=401$ and $n-m=2$, but this does not output integer solutions. However, we can also write $\dfrac{n+m}{n-m}=\dfrac{401}{2}$ as $\dfrac{n+m}{n-m}=\dfrac{802}{4}$. This gives the system of equations $m+n=802$ and $n-m=4$, which does have integer solutions. Our answer is therefore $m+n=\boxed{802}$ (Solving for $m$ and $n$ gives us $399$ and $403$, respectively, which are co-prime). Video Solution https://youtu.be/z4-bFo2D3TU?list=PLZ6lgLajy7SZ4MsF6ytXTrVOheuGNnsqn&t=4500 - AMBRIGGS Video Solution by OmegaLearn ~ pi_is_3.14 See also The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
{"url":"https://artofproblemsolving.com/wiki/index.php/2005_AIME_II_Problems/Problem_3","timestamp":"2024-11-11T10:09:26Z","content_type":"text/html","content_length":"51331","record_id":"<urn:uuid:9a72b86e-45f6-434e-9cb6-4145a354484f>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00739.warc.gz"}
wieght of a ball mill039s balls WEBApr 19, 2024 · Best Challenging Exercise Ball: BOSU Pro Balance Trainer. Best Exercise Ball On a Budget : PROMIC Exercise Ball. Best Exercise Ball Package: Yoga Ball, 65cm Exercise Ball Fitness Balls. Best Big ... WhatsApp: +86 18838072829 WEBJan 5, 2022 · When examining max SHA across all ball types, we found a significant difference between all three balls. When looking at the density plot below, on average there was a gradual increase in max SHA as ball weight decreased. The average max SHAs for each ball type were 20, 25, and 29 degrees for the black, pink, and green balls, . WhatsApp: +86 18838072829 WEBJun 19, 2015 · The approximate horsepower HP of a mill can be calculated from the following equation: HP = (W) (C) (Sin a) (2π) (N)/ 33000. where: W = weight of charge. C = distance of centre of gravity or charge from centre of mill in feet. a = dynamic angle of repose of the charge. N = mill speed in RPM. HP = A x B x C x L. Where. WhatsApp: +86 18838072829 WEBMay 16, 2005 · I almost always run 12lb cannonballs with rudders. By using 12lbs weights the blowback is considerably less and estimating true depth is much easier. The downriggers can easily handle the 12lbs. If you are only going to go down 50'or less, then 10lbs is okay. 8lbs is too light in my opinion. WhatsApp: +86 18838072829 WEBMar 5, 2013 · Aramith Stone Collection average ball weight () I was under the impression that Aramith pool balls should weigh a consistent (6 oz) each. The balls in my new "Tournament Set" ranged from ( ) oz. Also the cue balls in the "Tournament" set and the "Stone Collection" set were the lightest balls in the boxes. WhatsApp: +86 18838072829 to 15 TPH Small Scale Miner's Ball Mill. US 30,000. Our smallscale miner's Ball Mills use horizontal rotating cylinders that contain the grinding media and the particles to be broken. The mass moves up the wall of the cylinder as it rotates and falls back into the "toe" of the mill when the force of gravity exceeds friction and ... WhatsApp: +86 18838072829 WEBBasketball (ball) A basketball is a spherical ball used in basketball games. Basketballs usually range in size from very small promotional items that are only a few inches (some centimeters) in diameter to extra large balls nearly 2 feet (60 cm) in diameter used in training exercises. For example, a youth basketball could be 27 inches (69 cm ... WhatsApp: +86 18838072829 WEBThe cricket balls that are used to play men's and women's games have different weights. Men's cricket balls weigh to ounces (156163 grams). That's just a bit more than the weight of a major league baseball. Women's cricket balls weigh to ounces (140144 grams). There are 3 different types of cricket balls, which ... WhatsApp: +86 18838072829 WEBApr 20, 2019 · Bowling ball sizes refer to the weight of the ball, not its dimensions, which are universal. The weight can vary, starting as light as 6 lbs and reaching a maximum of 16 lbs. Choosing the right weight is crucial for a bowler's performance and comfort. Many adult bowlers often prefer weights around 14 to 15 lbs. WhatsApp: +86 18838072829 WEBOct 17, 2022 · Ball: The ball lies in cylinders, which are made up of stainless steel, the size of the ball depends on the cylinder diameter. The ball covers 30 to 50% area in the cylinder. Working: Open the lid and Feed the materials into the cylinder; Introduced the fixed numbers of balls and closed the lid; Run the machine and adjust the speed as per . WhatsApp: +86 18838072829 WEBFeb 10, 2015 · These balls have a textured surface for a solid grip and are color coded for easy weight identifiion (25 to 60). Empower Medicine Balls. For a slightly softer touch, these med balls are gel ... WhatsApp: +86 18838072829 WEBSep 14, 2023 · • The International Tennis Federation (ITF) regulates the weight and size of tennis balls, with a tournamentgrade ball weighing between 56 grams and grams (2 to ounces). • Tennis balls come in different types based on weight and diameter, such as regular, heavy, and specialized training balls. WhatsApp: +86 18838072829 WEBAug 17, 2020 · This is why it's so important to approach the stretching in a responsible way. Here are some tips for wearing ball weights all day: Use enough lube. Lube can make for a much more comfortable experience. This is why you should never forget to use a lubricant with your ball stretching devices. WhatsApp: +86 18838072829 WEBDenver 4' Diameter by 5 Foot Long Ball Mill Description: Excellent condition Denver Ball Mill. Steel Ball Mills are one of the most precise ID: Quote + Eimco 4 ft. x 4 ft. Ball Mill New Rubber Liners Installed Dimensions: 4 ft. Dia. x 4 ft. Long Teco 30 HP Electric Motor Volts: 230/460 3 Phase Hz: 60 RPM 1750 ID: 633050 Quote + WhatsApp: +86 18838072829 WEBSep 6, 2023 · Smaller sizes are typically designed for children, including balls that range from 6 to 8 inches in circumference. These balls are 5 to 8 pounds lighter than adult ones. Adult bowling balls weigh 10 to 16 pounds. This weight provides the maximum control and force while striking pins. WhatsApp: +86 18838072829 WEBEach FOX Forged Steel grinding ball is solid from surface to center. This is a forged steel ball that is through hardened for superior strength. WhatsApp: +86 18838072829 WEBJan 6, 2024 · The compression of a golf ball can also affect its feel and distance. In summary, while the standard weight of a golf ball is grams ( ounces), there is some variation in weight among different types of golf balls. Some golf balls are heavier than others, and different golf ball brands weigh different amounts. WhatsApp: +86 18838072829 WEBSo how heavy is a shot put ball? These balls weigh between pounds ( kg) and just over 16 pounds ( kg) Men's shot put balls weigh pounds ( kg) for competitions, while women's shot put balls weigh pounds (4 kg). Shot puts are made from a wide range of materials. Their weight depends on the type of competition and . WhatsApp: +86 18838072829 WEBMar 7, 2024 · Now that you know that bocce balls are weighted, you are probably curious about their exact load. In general, each bocce ball weighs around a pound or so. However, regulation rules state that 107mm bocce balls must be 920 grams or two pounds each. On the other hand, the standard US Bocce Federation size balls are around pounds. WhatsApp: +86 18838072829 WEBSome studies show that it costs between 60 and 80, while others show that an 8ball of coke is as much as 350560. Cocaine is sold in various other amounts too. Here are some more dollar figures for alternative common cocaine weights: A quarter gram costs between 2550. A gram costs between 50100. An ounce of cocaine can cost . WhatsApp: +86 18838072829 WEBNov 26, 2019 · The biggest characteristic of the sag mill is that the crushing ratio is large. The particle size of the materials to be ground is 300 ~ 400mm, sometimes even larger, and the minimum particle size of the materials to be discharged can reach mm. The calculation shows that the crushing ratio can reach 3000 ~ 4000, while the ball mill's ... WhatsApp: +86 18838072829 WEBMay 31, 2017 · Mar 13, 2014. Loion. State of Jefferson. Jun 7, 2017. #7. Size and weight are the best ways to identify a cannon ball (along with type of metal). Some did have seams but many don't so that's not definitive. Also, if . WhatsApp: +86 18838072829 WEB44 Cal .454 Lead Balls 0gr Item #6070 | 100/Box . Completely uniform in size, weight, and roundness, Hornady® Round Balls deliver consistent and accurate performance. They're cold swaged from pure lead which eliminates air pockets and voids common to cast balls. And the smoother, rounder surface assures better rotation and consistency. WhatsApp: +86 18838072829 WEBSep 29, 2020 · Wieght Of A Ball Mill039s Balls Wieght Of A Ball Mill039s Balls. How Much do Steel Grinding Balls Weigh Chat Online Ball Weight Effect In Ball Mill abrasion average weight of balls after abrasion test was deduced by average balls weight before abrasion At first ball filling in the mill was spot 1 and the number of . WhatsApp: +86 18838072829 WEBWeight 1000 balls (kg) Approximate Qty per Litre. Metric (mm) Metric. Inch Decimals. Weight 1000 balls (kg) 1/64. .0156. WhatsApp: +86 18838072829 WEBOct 29, 2023 · 1. A regular tennis ball weighs around 56g58g. 2. Tennis balls must weigh between ounces (between g as well as g). Impacts on players when you are using the incorrectsized tennis balls. Often times, in days gone by, we've seen competitions being played with the incorrect type of tennis balls. WhatsApp: +86 18838072829 WEBJun 26, 2010 · It is a myth that the modern ball is lighter than the balls used in the past. Since 1937, the dry weight of the ball has been specified by Law 2: 1416oz. Prior to that, the rules governing the ball's dry weight specified something lighter – 1315oz. This goes for the new ball used in 2010 just as much as it did for the 1966 ball. WhatsApp: +86 18838072829 WEBTo find the suitable ball size for the desired final fineness, usually a factor of approximately 1000 can be applied. If a grind size of 30 µm (D90) is the objective, the most suitable ball size would be between 20 mm and 30 mm. If smaller particles are required, the balls must be removed and replaced by smaller ones for a second process step. WhatsApp: +86 18838072829 WEBOct 2, 2012 · The lighter weight bowling balls, 6, 8, and 10 lbs, are intended for use by children. Teenagers might use 11, 12, 13, or 14 lbs. depending on their strength and hand sizes. A "rule of thumb" for the kids bowling balls is that they should start by choosing a ball weight that matches their age a 6 lb. ball for a 6 or 7 years old, a 10 lb ... WhatsApp: +86 18838072829 WEBFollowing this rule of thumb, the number of grinding balls for each ball size and jar volume is indied in the table below. To pulverize, for example, 200 ml of a sample consisting of 7 mm particles, a 500 ml jar and grinding balls sized at least 20 mm or larger are recommended. According to the table, 25 grinding balls are required. WhatsApp: +86 18838072829 WEBJul 3, 2017 · Rods in place weigh approximately 400 pounds per cu. ft. and balls in place approximately 300 pounds per cu. ft.. Thus, quantitatively, less material can progress through the voids in the rod mill grinding media than in the ball mill, and the path of the material is more confined. This grinding action restricts the volume of feed which passes ... WhatsApp: +86 18838072829 WEBQuestion: Given: The three balls weigh lb (A), lb (B), and lb (C); and have and coefficient of restitution of e_A middot B = and e_B middot C = ball A is released strikes ball B and then ball B strikes ball C, determine the velocity of ball B. just after it is struck by ball A. the balls slide without friction friction. WhatsApp: +86 18838072829 WEBYes4All Slam Balls Upgraded, 1012lb Medicine Ball Weight, Durable PVC Sand Filled Workout Dynamic Medicine Ball for Core Strengthen. out of 5 stars. 409. ... ZELUS Medicine Ball with Dual Grip | 10/20 lbs Exercise Ball |Weight Ball with Handles| Textured Grip Exercise Ball |Strength Training| Core Workouts. out of 5 stars. 578. 100 ... WhatsApp: +86 18838072829
{"url":"https://larecreation-hirsingue.fr/09_04-3889.html","timestamp":"2024-11-02T20:30:14Z","content_type":"application/xhtml+xml","content_length":"28531","record_id":"<urn:uuid:cb15df11-8cc6-4497-a614-8fd21850a383>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00613.warc.gz"}
MATLAB code for Pulse Code Modulation (PCM) and Demodulation MATLAB Code for Pulse Code Modulation close all; clear all; fm=input('Enter the message frequency (in Hz): '); fs=input('Enter the sampling frequency (in Hz): '); L=input('Enter the number of the quantization levels: '); n = log2(L); t=0:1/fs:1; % fs nuber of samples have tobe selected title('Analog Signal'); stem(t,s);grid on; title('Sampled Sinal'); ylabel('Amplitude--->'); xlabel('Time--->'); % Quantization Process vmin=-vmax; %to quantize a signal s into L levels between vmin and vmax part=vmin:del:vmax; % level are between vmin and vmax with difference of del code=vmin-(del/2):del:vmax+(del/2); % Contaion Quantized valuses [ind,q]=quantiz(s,part,code); % Quantization process % ind contain index number and q contain quantized values for i=1:l1 if(ind(i)~=0) % To make index as binary decimal so started from 0 to N for i=1:l2 if(q(i)==vmin-(del/2)) % To make quantize value inbetween the levels stem(t,q);grid on; % Display the Quantize values title('Quantized Signal'); % Encoding Process code=de2bi(ind,'left-msb'); % Cnvert the decimal to binary for i=1:l1 for j=1:n coded(k)=code(i,j); % convert code matrix to a coded row vector subplot(2,1,1); grid on; stairs(coded); % Display the encoded signal axis([0 100 -2 3]); title('Encoded Signal'); % Demodulation Of PCM signal index=bi2de(qunt','left-msb'); % Getback the index in decimal form q=del*index+vmin+(del/2); % getback Quantized values subplot(2,1,2); grid on; title('demodulated signal without low-pass filter'); % % % Demodulation after applying low-pass filter % Low-pass Filter Design fc = fm; % Cutoff frequency for the low-pass filter order = 1; % Filter order (first-order Butterworth filter) % Design the low-pass Butterworth filter [b, a] = butter(order, fc/(fs/2), 'low'); % Apply the low-pass filter to the signal filtered_signal = filtfilt(b, a, q); title('demodulated signal after applying low-pass filter') Enter the message frequency (in Hz): 1 Enter the sampling frequency (in Hz): 10000 Enter the number of the quantization levels: 8 BER vs. SNR denotes how many bits in error are received in a communication process for a particular Signal-to-noise (SNR) ratio. In most cases, SNR is measured in decibel (dB). For a typical communication system, a signal is often affected by two types of noises 1. Additive White Gaussian Noise (AWGN) 2. Rayleigh Fading In the case of additive white Gaussian noise (AWGN), random magnitude is added to the transmitted signal. On the other hand, Rayleigh fading (due to multipath) attenuates the different frequency components of a signal differently. A good signal-to-noise ratio tries to mitigate the effect of noise. Calculate BER for Binary ASK Modulation The theoretical BER for binary ASK (BASK) in an AWGN channel is given by: BER = (1/2) * erfc(0.5 * sqrt(SNR_ask)); Enter SNR (dB): Calculate BER BER vs. SNR curves for ASK, FSK, and PSK Calculate BER for Binary FSK Modulation The theoretical BER for binary FSK (BFSK) in an AWGN channel is g Modulation Constellation Diagrams BER vs. SNR BER vs SNR for M-QAM, M-PSK, QPSk, BPSK, ... 1. What is Bit Error Rate (BER)? The abbreviation BER stands for bit error rate, which indicates how many corrupted bits are received (after the demodulation process) compared to the total number of bits sent in a communication process. It is defined as, In mathematics, BER = (number of bits received in error / total number of transmitted bits) On the other hand, SNR refers to the signal-to-noise power ratio. For ease of calculation, we commonly convert it to dB or decibels. 2. What is Signal the signal-to-noise ratio (SNR)? SNR = signal power/noise power (SNR is a ratio of signal power to noise power) SNR (in dB) = 10*log(signal power / noise power) [base 10] For instance, the SNR for a given communication system is 3dB. So, SNR (in ratio) = 10^{SNR (in dB) / 10} = 2 Therefore, in this instance, the signal power i Signal Processing RMS Delay Spread, Excess Delay Spread, and Multipath... RMS Delay Spread, Excess Delay Spread, and Multipath (MPCs) The fundamental distinction between wireless and wired connections is that in wireless connections signal reaches at receiver thru multipath signal propagation rather than directed transmission like co-axial cable. Wireless Communication has no set communication path between the transmitter and the receiver. The line of sight path, also known as the LOS path, is the shortest and most direct communication link between TX and RX. The other communication pathways are called non-line of sight (NLOS) paths. Reflection and refraction of transmitted signals with building walls, foliage, and other objects create NLOS paths. [ Read More about LOS and NLOS Paths] Multipath Components or MPCs: The linear nature of the multipath component signals is evident. This signifies that one multipath component signal is a scalar multiple of Wireless Signal Processing Gaussian and Rayleigh Distribution Difference between AWGN and Rayleigh Fading 1. Introduction Rayleigh fading coefficients and AWGN, or additive white gaussian noise [↗] , are two distinct factors that affect a wireless communication channel. In mathematics, we can express it in that way. Let's explore wireless communication under two common noise scenarios: AWGN (Additive White Gaussian Noise) and Rayleigh fading. y = hx + n ... (i) The transmitted signal x is multiplied by the channel coefficient or channel impulse response (h) in the equation above, and the symbol "n" stands for the white Gaussian noise that is added to the signal through any type of channel (here, it is a wireless channel or wireless medium). Due to multi-paths the channel impulse response (h) changes. And multi-paths cause Rayleigh fading. 2. Additive White Gaussian Noise (AWGN) The mathematical effect involves adding Gauss MATLAB Code % Developed by SalimWireless.Com clc; clear; close all; % Configuration parameters fs = 10000; % Sampling rate (Hz) t = 0:1/fs:1-1/fs; % Time vector creation % Signal definition x = sin(2 * pi * 100 * t) + cos(2 * pi * 1000 * t); % Calculate the Fourier Transform y = fft(x); z = fftshift(y); % Create frequency vector ly = length(y); f = (-ly/2:ly/2-1) / ly * fs; % Calculate phase while avoiding numerical precision issues tol = 1e-6; % Tolerance threshold for zeroing small values z(abs(z) < tol) = 0; phase = angle(z); % Plot the original Signal figure; subplot(3, 1, 1); plot (t, x, 'b'); xlabel('Time (s)'); ylabel('|y|'); title('Original Messge Signal'); grid on; % Plot the magnitude of the Fourier Transform subplot(3, 1, 2); stem(f, abs(z), 'b'); xlabel('Frequency (Hz) '); ylabel('|y|'); title('Magnitude of the Fourier Transform'); grid on; % Plot the phase of the Fourier Transform subplot(3, 1, 3); stem(f, Modulation ASK, FSK & PSK Constellation BASK (Binary ASK) Modulation: Transmits one of two signals: 0 or -√Eb, where Eb is the energy per bit. These signals represent binary 0 and 1. BFSK (Binary FSK) Modulation: Transmits one of two signals: +√Eb ( On the y-axis, the phase shift of 90 degrees with respect to the x-axis, which is also termed phase offset ) or √Eb (on x-axis), where Eb is the energy per bit. These signals represent binary 0 and 1. BPSK (Binary PSK) Modulation: Transmits one of two signals: +√Eb or -√Eb (they differ by 180 degree phase shift), where Eb is the energy per bit. These signals represent binary 0 and 1. This article will primarily discuss constellation diagrams, as well as what constellation diagrams tell us and the significance of constellation diagrams. Constellation diagrams can often demonstrate how the amplitude and phase of signals or symbols differ. These two characteristics lessen the interference between t Compare the BER performance of QPSK with other modulation schemes (e.g., BPSK, 4-QAM, 16-QAM, 64-QAM, 256-QAM, etc) under similar conditions. MATLAB Code clear all; close all; % Set parameters for QAM snr_dB = -20:2:20; % SNR values in dB qam_orders = [4, 16, 64, 256]; % QAM modulation orders % Loop through each QAM order and calculate theoretical BER figure; for qam_order = qam_orders % Calculate theoretical BER using berawgn for QAM ber_qam = berawgn(snr_dB, 'qam', qam_order); % Plot the results for QAM semilogy(snr_dB, ber_qam, 'o-', 'DisplayName', sprintf('%d-QAM', qam_order)); hold on; end % Set parameters for QPSK EbNoVec_qpsk = (-20:20)'; % Eb/No range for QPSK SNRlin_qpsk = 10.^(EbNoVec_qpsk/10); % SNR linear values for QPSK % Calculate the theoretical BER for QPSK using the provided formula ber_qpsk_theo = 2*qfunc(sqrt(2*SNRlin_qpsk)); % Plot the results for QPSK semilogy(EbNoVec_qpsk, ber_qpsk_theo, 's-', Channel Impulse Response (CIR) Wireless Signal Processing CIR, Doppler Shift & Gaussian Random Variable The Channel Impulse Response (CIR) is a concept primarily used in the field of telecommunications and signal processing. It provides information about how a communication channel responds to an impulse signal. What is the Channel Impulse Response (CIR) ? It describes the behavior of a communication channel in response to an impulse signal. In signal processing, an impulse signal has zero amplitude at all other times and amplitude ∞ at time 0 for the signal. Using a Dirac Delta function, we can approximate this. ...(i) δ( t) now has a very intriguing characteristic. The answer is 1 when the Fourier Transform of δ( t) is calculated. As a result, all frequencies are responded to equally by δ (t). This is crucial since we never know which frequencies a system will affect when examining an unidentified one. Since it can test the system for all freq
{"url":"https://www.salimwireless.com/2024/03/pcm-matlab.html","timestamp":"2024-11-12T03:20:46Z","content_type":"application/xhtml+xml","content_length":"123097","record_id":"<urn:uuid:6faaac3c-b065-4526-92f9-b4bc6b715597>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00717.warc.gz"}
- The Electrodynamics of Heat Abstracts of ABRI Monographs Series 3 - Aetherometric Theory Vol. VI - The Electrodynamics of Heat: Entropy and Order in Thermal and Biological Systems Chapter 1: What is Heat? (20 pages, 77 kB) Front Cover (4.3 MB) AS3-VI.1 What Is Heat? Correa PN, Correa AN, Askanas M Aetherom Theor of Synchronicity, Vol. 6, 1:1-20 (June 2024) The present introductory monograph poses the general problem of understanding heat, its dynamics and how these are thought to obey a strange function and even stranger concept that goes by the name of entropy. It also casts a perspective on what is to come in the present disquisition on thermodynamics. VIEW (77KB) AS3-VI.2 Foundations of Thermodynamics Correa PN, Correa AN, Askanas M Aetherom Theor of Synchronicity, Vol. 6, 2:1-111 (June 2024) This communication presents the foundations of modern thermodynamics and then question them, to provisionally arrive at some new concepts of basic thermal functions. What exactly is meant by heat as a form of energy? What are the heat capacity, specific heat and heat content of a body, substance or system? How does enthalpy differ from the heat content of a body? How is heat transferred? Are there thermal forces responsible for holding the heat content of bodies, or deployed in heat transfer? Does the conventional function of entropy denote a thermal force? What are and have been the various conventional definitions of entropy? How do they differ and how are they ascertained? How does entropy relate to Gibbs free energy? What is reversible heat and what is reversible work? Are they fictional concepts and functions? What is the potential energy of a system? Is it the same as its internal energy function? In the course of answering these questions and presenting the conventional theory of thermodynamic functions, the authors propose an alternative view: an algebraic theory of thermodynamics based on the calculus of discernible quantities, including an algebraic treatment of entropy - instead of treatments based on a calculus of infinitesimal units of non- existent "reversible" fluxes. The present chapter introduces the map that will be explored at length in subsequent communications of this volume. AS3-VI.3 Entropies of State and of Transfer, and the Functions of the Calorimeter: A Different Granulation of Heat Correa PN, Correa AN, Askanas M Aetherom Theor of Synchronicity, Vol. 6, 3:1-80 (September 2024) The Carnot ideal engine led to Kelvin's and Clausius' discovery of an absolute scale of temperature, but the emergence of a statistically-dependent quantum theory of heat failed to provide the linear relationship of absolute temperature to thermal energy - in particular, to photon (electromagnetic) energy. The mistake harks back to Planck's law and his second radiation constant. The authors correct this with a simple law that permits them to distinguish between electromagnetic and thermokinetic heats, whether of state or involved in thermal energy transfer. They uncover for the first time the real dimensionality of temperature and demonstrate how it is both a molal electromagnetic production and a photon property. This leads them to examine the functions of the calorimeter - an instrument that, since Joule and Nernst to the present, has not ceased being the object of development - in light of their original theoretical framework and as applied to an entirely experimental approach. The authors then systematize the aetherometric algebra of discrete quantization for heating and cooling processes of the calorimeter. AS3-VI.4 The Zeroth Law of Thermodynamics and the 2-Body Problem of Thermal Equilibrium Correa PN, Correa AN, Askanas M Aetherom Theor of Synchronicity, Vol. 6, 4:1-81 (October 2024) The Zeroth Law is not about relations between numbers, like numerical equalities, but between states of physical substances; and it is only needed if the notion of thermal equilibrium is axiomatically taken as being primary with respect to temperature. Accordingly, the authors first seek the conditions under which thermal equilibrium occurs by exploring different facets of the 2-body problem, while contrasting the conventional treatment of entropy with the aetherometric two-headed treatment of distinct entropies of state and heat flow. They find that, in all cases, determination of the final common temperature of the system is the critical parameter that permits definition of thermal equilibrium. Ultimately, this determination is extrinsic to the system and reduces to the temperature of the environment. But this does not abrogate the existence of intrinsic energy-based determinations of the equilibrium temperature whose function is demonstrated using a fluid-based physical treatment of the 2-body problem and without taking recourse to the fiction of an isolated system. This leads to the conclusion that the Zeroth Law has no role within Aetherometry, since the aetherometric concept of thermal equilibrium is based on a numerical relationship between photon energies, and is therefore ipso facto transitive. Two bodies are in thermal equilibrium when their molal electromagnetic heats of state are identical, irrespective of whether their molal thermokinetic heats of state are the same or different. This does not imply that the heat contents of the two bodies will be identical. It only requires that the primary (modal) photons of state in both substances have the same quantum energy. Accordingly, in Aetherometry, temperature does not need to have its existence axiomatically postulated.
{"url":"http://aetherometry.com/Electronic_Publications/Science/abs-AS3-VI.php","timestamp":"2024-11-05T00:58:33Z","content_type":"text/html","content_length":"12551","record_id":"<urn:uuid:5029dbac-46c7-4c9e-a644-b591e7594664>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00430.warc.gz"}
thales australia jobs These are the two default Google Sheets budgeting templates: Monthly budget – Log individual income and spending transactions. If you want Query, use the MAX worksheet function within Query as below. Google Sheets will give you the option to sort by date or time as long as you left-click on a valid date or time inside the pivot table. select sum(AD) 1. My company recently moved from MS Office to G-Suite which means i need to use google sheets for my calculations. I have detailed above how to use Sum aggregation function in Google Sheets Query. In the context of Google Sheets, aggregation is the process of summarizing tabular data. This time there is column B as “Select Column B” and of course the Group clause at the last. Find the average of column C using Query. You can relate the function this time to AVERAGEIFS. I cannot seem to figure out how to get this formula right in sheets. That name appears on both charts and I’m trying to do it this way so I don’t have to manually add tasks completed by the engineers. This is the formula: =IFERROR(IF(AGGREGATE(3,5,[@[OUTSTANDING AMOUNT]])=1,1,0),""). Summarize Date From Multiple Sheets In Google Sheets. There are four columns with numbers. You can use the Avg() aggregation function in line with that. This book has been written to help you implement attribution modelling. Consider the following data set from a Google Sheet: Here is how this tabular data can be aggregated in Google Sheets: Google Sheets provide many functions through which you can aggregate data. However, you should use some aggregate functions in order to summarize them. If you ask me how to find the average using Query, here are the examples. Google Sheets QUERY group by command is used to concatenate rows. You don’t need a monthly subscription — it’s 100% free budgeting spreadsheet bliss. Built-in formulas, pivot tables and conditional formatting options save time and simplify common spreadsheet tasks. You have entered an incorrect email address! In this, the function N converts the blank to zero. There are five aggregation functions in Google Sheets Query for data manipulation. I’m sure query is the way to do it and that the max() aggregation needs to be there but I can’t make it work. 2. if there is at least 1 Facebook lead, but none of them had a sale, give #N/A It will teach you, how to leverage the knowledge of attribution modelling in order to understand the customer purchasing journey and determine the most effective marketing channels for investment. I have included a wide variety of Query formula examples in this tutorial, that will eventually help you to learn the use of the above said aggregation functions in Query. Google Sheets provide many functions through which you can aggregate data. The Query function is easy to learn if you know how to use aggregation functions in it. A simple second sheet with =page1!A1 etc and adding the column month has the same problem. This is the equivalent to the AVERAGE aggregation function. Group the days by day of week. For this type of min calculation, I only find the Query. Templates like Monthly management report, Company monthly report and Monthly expense report are ready-made templates and can be used in the free web-based Google Sheets application, and it is compatible with any file format which you can download anytime and anywhere. I have already 40+ Query-based tutorials on this blog. There is, of course, one equivalent function that you can use outside Query. Without Query, to conditionally sum a single column, you can use the function SUMIF. Try this Query. Save my name, email, and website in this browser for the next time I comment. Actually using Google Sheets SQL similar Query, you can also get this month and year summary. Multiply the range by 1 to convert TRUE to 1 and FALSE to 0. Find it here – How to Group Data by Month and Year in Google Sheets. The Aggregate alternative is Subtotal, which is available in Google Sheets. This example shows why the Query is a must to find an average in different ways. =iferror(n(query(leads, "select sum(AD) where L = 'Facebook' label sum(AD)''",1)),0). You can compare your planned and actual benefits by category. Whether you need to track the student progress or attendance over a few weeks or months, or figure out the average annual earnings per employee, there's got to be a clever solution in spreadsheets. Similar to the Sum() and Avg() aggregation functions, you can use the Count() function too in Query. It also provides a dashboard that can be customized with your desired income and expenses by category so you can track your budget throughout the month. The Monthly Spreadsheet. Active yesterday. Have you ever used the MIN function in Google Sheets? Viewed 7k times 1. Google Sheets inventory templates Any assistance is greatly appreciated. How to use “aggregate” chart feature on Google Sheets. Download FREE printable 2021 monthly google docs calendar template and customize template as you like. In the second Query did you use the column in the select clause correctly? For example: Use this function to calculate the sum/total of all values: Use this function to calculate the average of all values: Use this function to find the maximum /highest value in a numeric field: Use this function to find the minimum / lowest value in a numeric field: Use this function to find the median in a numeric field. In earlier formulas, there were no columns in the Select clause. Finance Twins’ Monthly Budget Template. See the illustration below. It can be something as simple as selecting the timeframe, such as a monthly or yearly calendar, to something more complex like its design. Hi there, I’m hoping someone can help me out. Then use the Query. L = 'Facebook' and a couple of others besides. In this case, a Google Sheets inventory template will come in handy. This book focuses solely on the ‘analytics’ that power your email marketing optimization program and will help you dramatically reduce your cost per acquisition and increase marketing ROI by tracking the performance of the various KPIs and metrics used for email marketing. Just replace Sum with Avg. =ArrayFormula(query(A1:C*1,"Select Sum(Col1),Sum(Col2),Sum(Col3)")). This is similar to that. Ask Question Asked 1 year, 3 months ago. I’ve written a simple query to add up the sales we got from each lead, where the lead source is Facebook. Suppose you want the formula to Sum column F if column B is “A”. You’re in the right place if you’re looking for nested query google sheets functions, google sheets query col1, google sheets query select multiple columns, etc. Written to help you use the function this time there is column B ” of! This…I am assuming as a part of my many Query tutorials comes with standard... ( CONCAT ) to the use of min aggregation function in place, the spreadsheet automatically updates you., a pie chart is used to CONCATENATE rows google sheets monthly aggregate need for your budget this purpose use all Sheets. Option called Google Sheets your finances charts and graphs Query as below alternative is Subtotal, are. However, as a Query and Beyond attribution modelling in Google Sheets makes your data at the time... Sheets, if you want to link data from multiple cells together, you should use some aggregate functions Google. Caluse in sum in Sheets software and completely capable of running any the! Google fonts and easy to learn if you want to sum multiple columns Query. Am assuming as a Query docs Suite, you can also use for! Summary, you should use some aggregate functions in order to allocate marketing budget and buying... The usage ( i mean the syntax ) of all the formulas can be customized income! Matter google sheets monthly aggregate it ’ s 100 % free budgeting spreadsheet bliss, based on 3 columns of criteria through... To combine them in one cell year, 3 months ago fill in your own case! Twins gives you a chart type for your budget my monthly interest is 0.25 (! 100 % free budgeting spreadsheet bliss, and JOIN functions to combine them in one cell Select a where ''. Worked on the left and the google sheets monthly aggregate sum on the left and grouped... All those variations with the Count aggregation function in Google Sheets using the function. They are sum ( ), Count ( ) Twins gives you perfect... The result without showing sum Math or Query to sum by month year. And year counts four columns separately at a time month has the same problem cells, the spreadsheet updates! Budget template is a user-friendly income and expense tracker a new spreadsheet and with. Someone can help me achieve this via Query types of Google Sheets, if want! Sheet from a Master list for your data at once aggregation functions in Google Sheets am starting with the aggregation. Marketing channels for investment the Basic Match functions in it my savings balance is $ 100.00 and my interest. Charts and graphs have detailed above how to leverage the knowledge of modelling. Available as editable Google / docs / pdf document colorful charts and graphs data. Function is not null ” skips blank rows in the context of Google Sheets the above data! Answer 14 Replies 1 Upvote year, 3 months ago, average, Count, max, website... Equivalents to sum multiple columns using Query, you can ’ t well manipulate your data at once and! Financial progress throughout the whole year function no matter whether it ’ s sum,,... Is easy to learn if you know how to get it to show the original formula i in! But i ca n't find any documentation on how to use all those variations with min ( ) and. Of max ( ) function too in Query A1: B10, '' a! So learn it thoroughly to conquer the rest without any additional effort months ago 's not taking because aggregate! Course the Group clause at the same checkbox for `` aggregate '' in Select. That is, of course, one equivalent function that you can use the avg )! Some aggregate google sheets monthly aggregate in Query can replace the worksheet functions sum, SUMIF, and SUMIFS Count... Is one example to the sum example, how can i display the result without showing Math... You most likely had to use “ aggregate ” chart feature on Google Sheets makes your data and! The sum ( ) function too in Query is the sample data that i am to. Conquer the rest without any additional effort rewrite this…I am assuming as a part google sheets monthly aggregate my many Query tutorials which! Formulas you need for your data pop with colorful charts and graphs Match functions in it can! Can help me out resources i built to help you implement attribution modelling in Google and... The process of summarizing tabular data great alternative to Excel this tool case, Google! Month i get 100.00 * 0.0025 ( which is available in Google Sheets provide many functions which. In this tutorial free printable 2021 monthly Google Sheets functions range from the simplistic ( CONCAT ) the... In place, the total updates to include the new version of.. Checkbox for `` aggregate '' in the result and completely capable of running any of the Google Sheets Sheets many! Columns separately at a time t need a monthly subscription — it ’ sum! To zero multiple cells together, you can watch your financial progress throughout the whole year no matter it... You ’ re not a spreadsheet whiz, however, you can learn the available aggregation equivalents! Time there is column B ” and of course the Group clause at the last day of month. Year-To-Date summary, you most likely had to use sum aggregation function equal! Ca n't find any documentation on how to merge them this Google sheet is at! New version of Excel B1: B10 ) ) actually using Google Sheets for my calculations comes with free Google. However, designing a system and writing all the formulas you need for your.! Sheets calendar templates the usage ( i mean “ Select J ” instead of “ K! As a Query short, the spreadsheet automatically updates when you make changes in the setup tab avg... My company recently moved from MS Office to G-Suite which means i need to rewrite this…I am assuming a. So learn it thoroughly to conquer the rest without any additional effort the equivalent to the sum ( ) make. More ; monthly budget template is going to take some time up.. Docs calendar template and customize template as you like Basic Match functions it... Join ) functions in it list your monthly progress earlier formulas, pivot and... Contain numbers functions sum, SUMIF, and min ( ) function too in Query, which is 100.02.. 3 months ago use some aggregate functions in order to allocate marketing budget and understand buying.! Master list of Excel this, the function SUMIF and my monthly interest is 0.25 % 1/4! Tables and conditional formatting options save time and simplify common spreadsheet tasks from your,... Import my Excel file to Google Sheets Query a system and writing all the formulas you need for data! Month i get 100.00 * 0.0025 ( which is available as editable Google / docs / document... Without any additional effort suppose you want the formula would be as.. For the next time i comment writing all the formulas i ’ m hoping someone can help me.!: B10 ) ) a single column, you can use the column month has the same software completely! Second sheet with =page1! A1 etc and adding the column month has the.. With that which one is better, i ’ ve written a simple to. Have shared with you five formula variations with min ( ) and avg ( ) of Query. Within Query as below from MS Office to G-Suite which means i need to use “ ”! Blank rows in the context of Google Sheets does n't have the aggregate is... It shows my sample data that i am starting with the sum,! Has the same using Query, use the CONCAT, CONCATENATE, and website in,!, free, and min ( ) aggregation functions in it with a tab each! Course, one equivalent function that you can see all the formulas can be customized income! And most other types in Google Sheets for my calculations in one cell manage your data. Determining the most effective marketing channels for investment you to manage your inventory data in real-time handy. And understand buying behaviour: B10 ) ) by command is used at... A Google Sheets does n't have the aggregate function the min function in Google Sheets with =page1 A1. Is equal to the complex ( JOIN ) can ’ t need monthly! Shows my sample data: here is the equivalent of the Google docs Suite, have... Sheets calendar templates my monthly interest is 0.25 % ( 1/4 of 1 percent ) figure... ; monthly budget by Google Sheets a time categorizing Values in Google Sheets formatting options time! Of all the formulas google sheets monthly aggregate be customized with income and spending transactions many... All the Basic Match functions in it get this formula to sum month! Also use MAXIFS for this type of min aggregation function in Google Sheets similar. Log individual income and expenses by category to track your expenses and income on a subscription. Better, i ’ m hoping someone can help me achieve google sheets monthly aggregate via Query ” skips rows. Short, the formula would be as below simple monthly Google Sheets simple second sheet =page1! Makes your data at the last day of every month is one example to the use of min aggregation.... Of these Math functions help me out spending transactions F using Query in Sheets! I only find the average aggregation function in Google Sheets in place, the editor! Written to help you implement attribution modelling in Google Analytics and Beyond attribution modelling the...
{"url":"http://krayany.in.ua/q5v41/d1de71-thales-australia-jobs","timestamp":"2024-11-04T17:04:54Z","content_type":"text/html","content_length":"23025","record_id":"<urn:uuid:0a8a5902-62a3-443c-8294-d9b6d0bea002>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00454.warc.gz"}
Bogolyubov Institute for Theoretical Physics List of Doctors Doctors Philosophiae Honoris Causa Bertrand I. Halperin (A Brief Biographical Sketch) Halperin's research interests have concerned many aspects of the theory of condensed matter systems and statistical physics. At Bell laboratories, a major portion of his work was focused on dynamic phenomena at a classical critical point, i.e., transport properties and time-dependent correlations near a phase transition with diverging correlation length. Together with Pierre Hohenberg and other collaborators, he developed a scheme for classifying dynamic behavior at different type of critical points and showed how the recently-developed renormalization group methods could be extended to calculate quantities such as critical exponents for dynamic properties. At Harvard, together with David Nelson, Halperin developed a theory of melting in two dimensions, which showed that under appropriate conditions, melting could occur in two stages, with a liquid-crystal phase occurring between the solid and the liquid. This phase, which they termed “hexatic”, would have only short-range translational order but quasi-long-range bond-orientation order, with six-fold symmetry. A major focus of Halperin’s work since 1981 has been on quantum Hall effects, the various peculiar phenomena that can occur in two-dimensional electron systems in strong magnetic fields at low temperatures. In early work, Halperin pointed out that quantized Hall systems necessarily had conducting states at their boundaries, and these states were crucial for understanding the exactness of the quantized Hall conductance in physical systems. In later work, he showed that quasiparticles in fractional quantized Hall systems do not behave as fermions or bosons but rather obey fractional statistics, a phenomenon that had earlier been proposed as a mathematical possibility in two-dimensional systems but had not been known to occur in any actual system. In the early 1990s, together with Patrick Lee and Nicholas Read, Halperin developed a theory of the quantum state at Landau-level filling ½, where there is no quantized Hall conductance, but a number of other peculiar properties are observed. Over the years, Halperin has made contributions in various other areas, including one-dimensional metals, quantum antiferromagnets in one and two dimensions, low-temperature properties of glasses, and transport in inhomogeneous media. Halperin is a member of the U. S. National Academy of Sciences and the American Philosophical Society, and a fellow of the American Academy of Arts and Sciences and the American Physical Society. His awards include the Buckley Prize and the Onsager Prize from the APS, the Dannie Heineman Prize of the Göttingen Akademie der Wissenschaften, the Lars Onsager Lecture and Medal of the Norwegian University of Science and Technology, an honorary doctorate from the Weizmann Institute of Science, the Lise Meitner Lecture and Medal, the 2019 APS Medal for Exceptional Achievement in Research, and the Wolf Prize in Physics.
{"url":"https://bitp.kiev.ua/en/doctor/halperin","timestamp":"2024-11-10T15:15:11Z","content_type":"text/html","content_length":"10055","record_id":"<urn:uuid:46493374-827b-49ff-a24f-54fecb777854>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00280.warc.gz"}
1. In a reduction system, a closure is a data structure that holds an expression and an environment of variable bindings in which that expression is to be evaluated. The variables may be local or global. Closures are used to represent unevaluated expressions when implementing functional programming languages with lazy evaluation. In a real implementation, both expression and environment are represented by pointers. is a closure which includes a flag to say whether or not it has been evaluated. The term " " has come to be synonymous with "closure" but originated outside functional programming 2. In domain theory, given a partially ordered set, D and a subset, X of D, the upward closure of X in D is the union over all x in X of the sets of all d in D such that x <= d. Thus the upward closure of X in D contains the elements of X and any greater element of D. A set is "upward closed" if it is the same as its upward closure, i.e. any d greater than an element is also an element. The downward closure (or "left closure") is similar but with d <= x. A downward closed set is one for which any d less than an element is also an element. ("<=" is written in and the upward closure of X in D is written \uparrow_{D} X). Last updated: 1994-12-16 Nearby terms: close parenthesis ♦ Clos network ♦ closure ♦ closure conversion ♦ cloud Try this search on Wikipedia, Wiktionary, Google, OneLook.
{"url":"https://foldoc.org/closure","timestamp":"2024-11-07T00:40:20Z","content_type":"text/html","content_length":"10192","record_id":"<urn:uuid:b1d2d634-57a0-4ee9-8300-cfce5a4972c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00151.warc.gz"}
Direkt Güneş Işınımının Spektral Dağılımının Belirlenmesi dc.contributor.advisor Topçu, Sema dc.contributor.author Oğuzhan, Bahar dc.contributor.authorID 46117 dc.contributor.department Meteoroloji Mühendisliği dc.date.accessioned 2023-03-02T13:24:54Z dc.date.available 2023-03-02T13:24:54Z dc.date.issued 1995 dc.description Tez (Yüksek Lisans) -- İstanbul Teknik Üniversitesi, Fen Bilimleri Enstitüsü, 1995 Bu çalışmada, İstanbul (41.1°N ve 29.0 E°) için açık bir atmosferde yeryüzeyine ulaşan direkt ışınımın spektral dağılımının belirlenmesi amacıyla, ölçüm çalışmaları ve model hesaplamaları birlikte yürütülmüştür. Bird ve Riordan (1986) tarafından ileri sürülen bu modelde, matematiksel ifadeler ile basınç, sıcaklık, bağıl nem, görüş uzaklığı gibi yer ölçümleri kullanılmaktadır. Görüş uzaklığına bağlı olarak, türbidite katsayısının bulunduğu bağıntılar çıkartılmış, bunların yerine doğrudan doğruya, pirhelyometrik ölçümlerle hesaplanan türbidite katsayıları giriş bilgisi olarak verilmiştir. Atmosfer dışına gelen ışınıma çeşitli atmosfer bilelenlerinin, geçirgenlik fonksiyonlarının etkisi ilave edilmiştir. Rayleigh saçılması, subuhan ve ozon absorbsiyonu, aerosoller ve gazlar tarafından azaltılma ile ilgili geçirgenlik fonksiyonları gözönüne alınmıştır. Hesaplamalar saçılma ve absorbsiyon olaylarında önemli olan 0.3-4.0 um arasındaki 122 dalgaboyunda yapılmıştır. Modelin dc.description.abstract gerçeklenmesi amacıyla, tüm spektrum boyunca ve belirli spektral bantlardaki pirhelyometrik ölçümler ile modelden elde edilen değerlerin karşılaştırılması yoluna tr_TR gidilmiştir. Ancak spektral bant değerlerinin bulunmasında, eşit olmayan bu dalgaboyu aralıkları interpole edilerek, sayısal integrasyon yöntemiyle hesaplamalar yapılmıştır. Pirhelyometrik ölçümler san (OG1) ve kırmızı (RG2) filtreleriyle yapılmıştır. Bu filtrelerin ölçüm aralıkları sırasıyla 0.530-2.8 um ve 0.630- 2.8 um dalgaboylandır. Modelden elde edilen değerlerle hesaplanan değerler arasındaki uyum araştırılmış ve ortalama bağıl hataların bu tür çalışmalar için kabul edilebilen şuurlar içerisinde olduğu bulunmuştur. Ayrıca atmosferde değişken olan subuhan miktarı ve aerosollerin güneş ışınımının spektral dağılımı üzerindeki etkileri incelenmiştir. Ultraviyole, görünür ve infrared ışınım bölgeleri için bu etkiler ayn ayn hesaplanarak, absorbsiyon ve saçılma sonucunda spektral ışınımı azaltmaları da detaylı bir şekilde araştırılmıştır. In recent years, mainly due to its renewable and nonpollutant character, solar energy (utilization) has been a subject of utmost importance. The knowledge of the solar radiation on the earth's surface is essential to many solar conversion systems in terms of their desian, size selection, performance efficiency; heating and cooling the buildings and other energy problems. In the past, researchers were being tought that, knowing the total radiation was enough to solve these problems. But in recent studies, the spectral distribution of solar radiation has been an important subject For industrial, medical and biological applications, not only the total energy but also the spectral distribution of the sunlight is important. For instance, the selection of materials used in the buildings (the degration of colours, paintings or sensitive materials as a fiinction of their uses), the evaluation of electrical energy avahable from the use of solar cells, the study of growth and photosynthetic activity of plants in function of their spectral sensitivity and the evaluation of the UV radiation responsible for skin cancer. Because of having the large application fields, there is a general need for knowing the spectral distribution of radiation and to what extent changes in the meteorological factors effect this energy distribution in addition to the affect on the total energy received With increasing importance of spectral irradiance, the study of the spectral climatological structure of a selected region has become useful in fields of meteorology, architecture, agriculture, hydrology and solar energy. The recent studies are related with the modelling and prediction of the direct spectral radiation at the earth. Different spectral models for the solar radiation have been presented by many researchers which are suitable for technical applications. The spectral distribution of the direct solar radiation reaching the surface of the earth depends on a number of factors; water vapor, ozone, aerosol particle and uniformly mixed gases. The scattering and absorbtion of the solar radiation by these components produce a remarkable attenuation of the direct solar irradiance. In this thesis, a spectral model which has been presented by Bird and Riordan, 1986, has been used in Istanbul (41.1° N ; 29.0°E). The spectral model for cloudless VI days uses simple mathematical expressions and the measurements at the surface which is easily accessiaWe to generate direct horizontal irradiance. The primary significance of this model is its simplicity, which allows its use on small computers. The spectrum produced by this model is limited to 0.3-4.0 urn wavelength. The model gives a description of the physical behavior of the atmosphere such as absorbtion and scattering with related data, as well as the climatological state of the atmosphere. The first chapter of this study is devoted to a state of the art review of the related studies. In doing so, complementary to the models to estimate direct radiation special attention has been given to the studies on solar constants, absorbtion and scattering of solar radiation. Direct solar radiation and spectral distribution of it have been given in the second chapter. Absorbtion of direct component of the spectral solar radiation by ozon, water vapor, uniformly mixed gases and scattering by aerosols have been extensively rewieved. Direct solar radiation has been varied considerably for any given region especially with local atmospheric conditions, time of the day, seasons of the year. The spectral distribution of direct solar radiation is altered as it passes through the atmosphere by absorbtion and scattering. The amount of the attenuated radiation depends on the path length of the solar rays through the atmosphere and the content of the water vapor, ozone, carbondioxide and aerosol particles in the atmosphere. In the third chapter, a spectral model for estimation direct radiation has been represented. In the model, air temperature (degrees Celcius), atmospheric pressure (hPa), percentual relative humidity, horizontal visibility (km) and ground albedo are given for the definition of the climatological state of the atmosphere. Horizontal visibility is used for computing of the Angstrom turbidity cofficient, (3. But the relation between horizontal visibility and turbidity cofficient is valid in the limited range and the observations of the horizontal visibility has not enough sensitivity. So, in this study, turbidity cofficient is used directly instead of horizontal visibility. Pyrheliometric measurements and a computer programme are used for the determination of the turbidity cofficient. In this model, water vapor, ozone, uniformly mixed gases and aerosols were assumed as reducing factors of irradiance. This led to the following expression in which the direct irradiance appears as multiplication of extraterrestrial irradiance and transmittance of atmospheric attenuation components Ij. = hk T,* TaA, ToA. Twx TuX where, Iox. is extraterrestrial spectral irradiance. The spectral atmospheric transmittance functions are Trx ; after Rayleigh scattering T^ ; after the attenuation by aerosol T0}, ; after the absorbtion by ozon layer VII Twx ; after the absorbtion by water vapor Tux ; after the absorbtion of the uniformly mixed gas (02, C02, CH4, v.b.). The selected wavelengths were based on spectra measured at totally 122 wavelenghts between (0.3-4.0 urn). The extraterrestrial irradiance spectrum in the above band, accounting for % 98 of solar constant (1339 of total 1367 W/m2 ) is given according dc.description.abstract to Neckel and Labs (1981). T,* ; The spectral transmittance after Rayleigh scattering is computed by the function of the relative air mass, (M). The relative air mass en_US (the ratio between the oblique optical path length to the vertical path in the zenit direction), a function of the sun zenith angle, is computed, as indicated by Kasten (1966). The pressure corrected relative air mass, M' = M P/Po where P is the actual atmospheric pressure and Po is the standart atmospheric pressure. Tax is the spectral transmittance after the attenuation by aerosols. It is computed by the function as follows. TaX = exp(-MpX'a) the exponent is the relative air mass times the Angstrom turbidity. Twx. is the spectral transmittance after the absorbtion by the atmospheric water vapor is computed as a function of the relative water vapor mass, the precipitable water vapor height and the spectral water vapor cofficient which is given according to Neckel and Labs (1981). Here the relative water vapor is used instead of air mass, although the difference is very small. The relative water vapor mass is computed as a function of zenith angle. A small departure from relative air mass, increasing with zenith angle, is due to the fact that water vapor is mainly concentrated in the lower troposphere. The spectral transmittances after absorbtion by ozone layer are defined as a function of ozone amount, the spectral ozone absorbtion cofficient and relative ozone mass (ao), according to this equation, To = exp (- ao 03 Mo ) O3 is the surface density of the volume of ozone contained in the vertical column, reduced at normal temperature and pressure, or the NTP ozone amount, computed for each day of the year, at the given site. T"x is the spectral transmittance after the absorbtion of the uniformly mixed gas also defined as a function of the spectral absorbtion cofficient for the uniformly mixed gas (unit: 1/ km) and the pressure corrected relative air mass. VIII With a computer programme using these equations of transmittance, the spectral distributions for the 16 selected clear days which belongs the 1993 and 1994, are defined. The extraterrestrial and direct irradiance at the surface are presented both for the selected day (1 1 Agust 1994). The temperature, atmospheric pressure, relative humidity, turbidity cofficients which define the climatological state of the atmosphere, are represented on the figures. The observed values using this study, are obtained at Istanbul Technical University, Maslak, Meteorological Observation Station. Attenuation of direct solar radiation is maximum in visible region (0.38-0.78 urn). In this region the scattering is important. Water vapor and carbondioxide have absorbtion bands in the ultraviole and infrared region and they caused attenuation of the spectral solar radiation at the earth surface in these regions. For defining the validation of the model a calculated and measured values have been compared in the certain spectral bands (red; 0.630-2.8 urn, yellow; 0.530- 2.8 um ) and total spectrum. Using the spectral distributions of the selected clear days the direct solar irradiances reaching the earth have been calculated with the numerical integration. But for the equation of the wavelength steps interpolation has been made. These results are presented on the respective tables. There is a good agreement between the calculated and measured values. The relative errors belongs to the selected clear days, Red filter : 0.039 Yellow filter : 0.035 The total spectrum: 0.052 The atmospheric components produce a remarkable attenuation of the direct component of solar irradiance. In this study water vapor and aerosols have been considered as the most important components at the lower troposphere. Also the amount of the solar radiation reaching the earth depends on the path length. For determining the attenuation effects caused by these parameters have been calculated for a selected day at the mean earth-sun distance. Atmospheric temperature, relative humidity, atmospheric pressure and turbidity cofficient belonging to this day have been considered as the monthly average values. Effects of these parameters on the spectral distribution of direct radiation have been investigated as considered possible minimum and maximum values in the optical air mass, relative humidity and turbidity cofficient. Direct radiation values have been effected the variations on the optical air mass especially between the (0.3-1. lum). Whenever the optical air mass increases, path length of the solar radiation also increases. Therefore, the scattering caused by ozone, aerosols and air molecule, increases. IX The atmospheric relative vapor varies with time and location. According to the sample results, relative humidity effects the spectral distribution nearly in the infrared region. In order to investigate the effects of atmospheric aerosols the variations in the Angstrom turbidity cofficient have been determined. Angstrom turbidity cofficient has been given as 0.40 for the polluted air. Here the spectral distributions have been calculated for the J3 = 0.10, 0.20, 0.30 and 0.40. Turbity cofficient is directly proportional with the concentration of the aerosols in the atmosphere. Scattering is important especially in the UV and in the visible region. In the infrared region, absorbtion effects are more important than the scattering effects. The difficult calculation of the spectral composition of clear sky direct radiation can be handled with fair accuracy within the framework of the model. This simple model for direct horizontal spectral irradiance for clear sky condition produce results that agree very well with measurement data. The model is simple enough that it should be suitable for anyone desiring spectral data, and it requires a very small computing capability. dc.description.degree Yüksek Lisans dc.identifier.uri http://hdl.handle.net/11527/22265 dc.language.iso tr dc.publisher Fen Bilimleri Enstitüsü dc.rights Kurumsal arşive yüklenen tüm eserler telif hakkı ile korunmaktadır. Bunlar, bu kaynak üzerinden herhangi bir amaçla görüntülenebilir, ancak yazılı izin alınmadan tr_TR herhangi bir biçimde yeniden oluşturulması veya dağıtılması yasaklanmıştır. dc.rights All works uploaded to the institutional repository are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in en_US any format is prohibited without written permission. dc.subject güneş radyasyonu tr_TR dc.subject spektrum analizi tr_TR dc.subject solar radiation en_US dc.subject spectrum analysis en_US dc.title Direkt Güneş Işınımının Spektral Dağılımının Belirlenmesi tr_TR dc.type Master Thesis Orijinal seri 2.12 MB Adobe Portable Document Format Lisanslı seri 3.16 KB Plain Text
{"url":"https://polen.itu.edu.tr/items/149364df-0585-488e-b099-52a8c17094bd/full","timestamp":"2024-11-09T06:41:27Z","content_type":"text/html","content_length":"194468","record_id":"<urn:uuid:7d0c14b9-c2e7-461b-9f59-a58b81e13087>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00524.warc.gz"}
Two Parallel-Sample Means Help Aids Top Application: This procedure is used to test the following hypotheses: The test drug is concluded to be equivalent to the control in average if the null hypothesis is rejected at significance level α. 1. Enter a) value of α, the probability of type I error b) value of β, the probability of type II error c) value of allowable difference d) value of Population variance e) value of δ>0, the equivalence limit. 2. Click the button “Calculate” to obtain result sample size of each group n. Formula: [] (*) α: The probability of type I error is the Probability of rejecting the null hypothesis when null hypothesis is true. The null hypothesis is the two mean values are not equivalent. β: The probability of type II error is the Probability of failing to reject the null hypothesis when null hypothesis is false. δ: The largest change from the reference value (baseline) that is considered to be trivial. μ[2] – μ[1]: Value of allowable difference is the true mean difference between a test drug (μ[2]) and a placebo control or active control agent (μ[1]). Example 1: Suppose the true difference is 1% (i.e., μ[2]–μ[1]=1%) and the equivalence limit is 5% (i.e., δ=0.05). Thus, by using (*), with the standard deviation is 10% (i.e., population variance is 0.01), the required sample size to achieve an 80% power (β=0.2) at α=0.05 for correctly detecting such difference of 0.05 change obtained by normal approximation as n=108. Reference: Chow, Shao and Wang, Sample Size Calculations In Clinical Research, Taylor & Francis, NY. (2003) Pages 59-61.
{"url":"https://www2.ccrb.cuhk.edu.hk/stat/mean/tsmp_equivalence.htm","timestamp":"2024-11-10T07:22:21Z","content_type":"text/html","content_length":"40396","record_id":"<urn:uuid:0384ca51-a88f-49dc-af97-835c92a2e1d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00507.warc.gz"}
What's wrong with NPV valuations? Risk-adjusted NPV is Notoriously Fallible Over at least the past decade, risk-adjusted Net Present Value (rNPV) has emerged as the de facto standard for valuing pharmaceutical R&D projects (1,2). These valuations are used for several purposes including prioritizing projects within a portfolio, making investment decisions, valuing a licensing transaction and valuing intellectual property in a sale setting. The approach is based on standard discounted cash flow (DCF) techniques with future cash flows weighted by the probability of a drug progressing from one development stage to the next. Controversies remain, not least the choice of discount rate to apply, but the methodology remains very widely used, at least in Big Pharma and those biotech companies that have not lost faith in rNPV. In a context of high – and often unquantifiable – uncertainties inherent on pharmaceutical R&D and market forecasting (3), it is known that DCF and even rNPV techniques provide “misplaced concreteness” whereby “the tendency to overlook uncertainties, margins of error and ranges of probability can lead to damaging misjudgements” (4). Superimposing Monte Carlo (MC) simulations on to rNPV calculations provides explicit recognition of this and results in an rNPV expressed as a range associated with a specific probability distribution. This paper briefly documents an alternative risk-profiled MC rNPV valuation (rpNPV), and highlights a material divergence between the perspective of a biotech company (with a single or small number of projects) and Big Pharma (with a broad portfolio). Conventional r-NPV Valuation with Monte Carlo Simulations (Standard MC Model) • Phase I to Phase II trials: 71% • Phase II to Phase III trials: 45% • Phase III trials to pre-registration: 64% • Pre-registration to product approval: 93% Cumulatively, the probability of technical success, from preclinical development through to product approval, is 19%; many would argue this is overoptimistic relative to contemporary experience, especially in a challenging indication such as NSCLC, but the analysis presented below holds with more stringent benchmarks based on success rates in specific indications and with different technologies (small molecules, biologics, etc.) (6) (data not shown). In the standard rNPV model, the net cash flow is multiplied by the cumulative probabilities at each stage; i.e. all cash flows from Phase I to Phase II are multiplied by 0.71, from Phase II to Phase III by 0.71 x 0.45 = 0.32, etc. The formal calculation of rNPV uses a familiar standard algorithm (7). Using the midpoint values for all the ranges specified in Table 1, at a discount rate of 8% the rNPV = $485 million. Using the MC method, this calculation is repeated many times (in this example, 50,000 times) using a Microsoft Excel spreadsheet plug-in (Model Risk 5, Vose Software BVBA), each run using a different value in each of the assumption ranges in Table 1. These 50,000 simulations effectively sample the range of possible outcomes based on an appropriate probability distribution for each input variable (for most variables, this model used a PERT distribution (8)). ^th and 95^th percentiles for rNPV respectively being $357 million and $627 million. This probability distribution provides a far richer insight into the rNPV associated with this early-stage R&D project. In this example, all values in the range are positive, but for projects where the commercial target is smaller or the uncertainties higher (data not shown), the first quartile of even more of the rNPV range can be negative despite a positive mean rNPV, clearly providing a more accurate view of the risks involved in pharmaceutical R&D. More importantly still, the relative impact of each range of input assumptions on the outcome of the rNPV calculation is shown in a Tornado plot (Fig. 1b). Consistent with conventional wisdom, the assumptions with the greatest impact on valuation are: • Price • Peak market share • Accessible market (i.e. the available market taking into account clinical, payer and other restrictions on eligibility for treatment) Assumptions such as the cost of clinical trials, even Phase III, have a significantly lower impact on rNPV. It is precisely this sort of analysis that led many in the industry, certainly up to ca. 2005, to focus very heavily on commercial parameters and to invest heavily in clinical development with relatively scant regard for R&D budgets. While it provides an illustration of the potential spread of project NPVs and the assumptions that the greatest impact on the range, the MC rNPV method still masks the reality of the situation where projects more often fail than succeed. As a result it has less utility in decision-making than stringent use of MC methods can provide. Overcoming the Limitations of the Conventional Approach Using a Stringent MC Model The fundamental problem with the standard rNPV method is that it applies a probability weighting to cash flows according to transitions through key development hurdles, e.g. it calculates 71% of cash flows from Phase I to Phase II. In reality, however, there is no such thing as 71% of a cash flow; instead, 29% of the scenarios result in no cash flow beyond Phase I (as the trial yielded a negative result), and 71% of the scenarios resulted in a full, not partial, cash flow between Phase I and Phase II. To better reflect real-life scenarios, we use a stringent MC model: simulations are run in which 71% proceed beyond Phase I, of which 45% proceed beyond Phase II, of which 64% proceed beyond Phase III, etc. Only 19% of the scenarios have cash flows beyond the pre-registration phase, consistent with the overall probability of the product reaching the market; 81% of the scenarios have negative rNPV. ^th percentile with an rNPV of -$202 million, at the 15^th percentile with an rNPV of -$34 million and at the 90^th centile with an rNPV of $2.7 billion. The first peak corresponds to a late-stage development failure. By far the highest peak (the most probable outcome) is the middle one, corresponding to an outcome of a modest loss on a project cancelled at a relatively early stage. The third peak, with a very high valuation is clearly the least probable and reflects a successfully launched new product. This model is materially more representative of the economics of drug development than Method 1 and can therefore arguably claim higher validity. It is not in common use possibly due to the lack of MC simulation expertise and culture within the pharmaceutical and biotech industries. It is the Tornado plot for the stringent MC method that provides a fascinating dichotomy (Fig. 2b). In contrast to the standard method, commercial assumptions such as price, market share and market access pale in relation to development parameters, namely: • Cost of Phase II trials • Cost of Phase I trials • Length of Phase I trials Given that early development is where most projects fail, it is not surprising that these parameters have the highest impact on valuation. Interestingly, biotechs tend to be much better at minimizing costs and time of early phase clinical trials than Big Pharma. Risk-profiled NPV Using the stringent MC method, the shape of the rNPV histogram, together with the parameters of the Tornado plot, constitute a risk-profiled NPV model (rpNPV) which is more representative of the reality of life sciences R&D than standard rNPV values. The standard method may be germane to a biotech company focusing on a single product or perhaps a small portfolio. It is particularly useful in situations addressing a single asset, such as a partnering transaction, where the Monte Carlo analysis constitutes a systematic, multi-parameter sensitivity analysis. In these situations, the histogram plot output makes explicit the range of value encompassed by the uncertainty in the input assumptions, and the Tornado plot identifies which assumptions contribute most to this uncertainty. This can be useful to focus negotiations onto key parameters rather than those which have little or no effect on the ultimate rNPV number used as the basis for the transaction. It can also be important to guide further analysis and/or market research onto parameters where a narrowing of the input assumption ranges would significantly reduce the uncertainty in the valuation. However, the probability distribution of the rNPV range using the standard approach does not reflect reality and is materially and consistently overoptimistic. The rpNPV (stringent method), however, reflects the dynamics of a large portfolio of the type present in major integrated pharmaceutical firms or venture capital investors. The standard rNPV approach does not serve portfolio management well as it always favours short-term and incremental projects at the expense of early stage and strategic projects. Instead, an rpNPV approach should be used to make trade-offs between projects within a portfolio. Beyond this, a possible message from the rpNPV model is that Big Pharma should pay maximum attention to containing the costs and duration of early development phases as these have a higher economic impact in the context of a broad portfolio than optimizing post-launch commercial parameters for individual products although, obviously, these are not unimportant. A corollary is that it may be economically most efficient for Big Pharma not to conduct early development at all, rather it would maximize shareholder value to acquire products that have already successfully navigated Phase II proof-of-concept clinical trials; indeed, many are already some way down this path as they increasingly outsource R&D. The rpNPV model provides much greater insight/transparency into the dynamics driving project returns, enabling more effective comparison between alternative development paths for a project and between different projects competing for resources where they may have similar standard rNPVs but radically different risk profiles. As such it is a valuable decision-making tool for modern portfolio 1 Bogdan, B. & Villiger, R., Valuation in Life Sciences 3^rd Ed., doi 10.1007/978-3-642-10820-4_2, Springer-Verlag Berlin Heidelberg (2010) 2 Svennebring, A.M. & Wikberg, A.E.S. Net present value approaches for drug discovery Springerplus doi 10.1186/2193-1801-2-140 (2013) 3 Cha, M., Rifai, B. & Sarraf, P. Pharmaceutical forecasting: throwing darts? Nat Reviews Drug Discovery 12, 737–738 (2013) 4 Savage, S.L., The Flaw of Averages, John Wiley & Sons, Inc. (2012) 5 DiMasi, J.A., Feldman, L., Seckler, A. & Wilson, A. Trends in risks associated with new drug development: success rates for investigational drugs. Clin. Pharmacol. Ther. 87, 272–277 (2010). 6 Hay, M., Thomas, D.W., Craighead, J.L., Economides, C. & Rosenthal, J.. Clinical development success rates for investigational drugs. Nat Biotechnol. 32, 40–51 (2014) 7 Stewart, J.J., Allison, P.N., Johnson, R.S. Putting a price on biotechnology Nat Biotechnol. 19, 813-7 (2001) 8 Vose, D. Risk Analysis, a Quantitative Guide, 3^rd Ed. John Wiley & Sons Ltd (2008) Download a PDF version: Pharma and Biotech Valuations, Anthony Walker et al, Alacrita, July 2015 Follow-up Whitepaper Challenge Our client was a privately-held US biotech company developing a proprietary platform technology for in vivo expression of DNA coding for a therapeutic antibody or protein. The... Challenge: A former venture fund client identified a novel Phase 1-ready asset for treating a rare CNS infectious disease and was interested in licensing the asset and founding a new... Challenge: A UK-based pharma company specializing in oncology drug development sought to understand the potential value of a nanoparticle drug product in small cell lung cancer (SCLC).... Challenge: In preparation for fundraising, a biotech company developing a clinical stage radiolabeled peptide to diagnose pulmonary diseases required an independent valuation of its... Challenge: An investor and asset manager was considering an investment in a European biotech company with one development product for hemophilia and another approved OTC product in an... Challenge A preclinical-stage company developing an integrin-targeting oncology asset engaged Alacrita to quantify the value potential of its lead asset in pancreatic ductal adenocarcinoma...
{"url":"https://www.alacrita.com/whitepapers/pharma-and-biotech-valuations-divergent-perspectives","timestamp":"2024-11-07T17:03:16Z","content_type":"text/html","content_length":"70674","record_id":"<urn:uuid:129f4992-6442-4879-a561-26a503221123>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00715.warc.gz"}
In mathematics, an , from the Greek: ἴσος "equal", and μορφή "shape", is an invertible way of relating one structured object to another. This means that there is a way of relating the second structured object to the first in such a way that composing these two relations in one order identifies the first object with itself and composing them in the other order identifies the second object with itself. When such a relation exists, the two objects are said to The above text is a snippet from Wikipedia: Isomorphism and as such is available under the Creative Commons Attribution/Share-Alike License. 1. Similarity of form 1. the similarity in form of organisms of different ancestry 2. the similarity in the crystal structures of similar chemical compounds 1. the similarity in the structure or processes of different organizations 2. A one-to-one correspondence 1. A bijection f such that both f and its inverse f^ −1 are homomorphisms, that is, structure-preserving mappings. 2. a one-to-one correspondence between all the elements of two sets, e.g. the instances of two classes, or the records in two datasets The above text is a snippet from Wiktionary: isomorphism and as such is available under the Creative Commons Attribution/Share-Alike License. Need help with a clue? Try your search in the crossword dictionary!
{"url":"https://crosswordnexus.com/word/ISOMORPHISM","timestamp":"2024-11-07T06:09:42Z","content_type":"application/xhtml+xml","content_length":"11367","record_id":"<urn:uuid:bb194532-d89b-48b9-912a-c6fc8e06e0b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00082.warc.gz"}
Compute the hyper-posterior distribution in Magma — hyperposterior Compute the hyper-posterior distribution in Magma Compute the parameters of the hyper-posterior Gaussian distribution of the mean process in Magma (similarly to the expectation step of the EM algorithm used for learning). This hyper-posterior distribution, evaluated on a grid of inputs provided through the grid_inputs argument, is a key component for making prediction in Magma, and is required in the function pred_magma. trained_model = NULL, data = NULL, hp_0 = NULL, hp_i = NULL, kern_0 = NULL, kern_i = NULL, prior_mean = NULL, grid_inputs = NULL, pen_diag = 1e-10 A list, containing the information coming from a Magma model, previously trained using the train_magma function. If trained_model is not provided, the arguments data, hp_0, hp_i, kern_0, and kern_i are all required. A tibble or data frame. Required columns: 'Input', 'Output'. Additional columns for covariates can be specified. The 'Input' column should define the variable that is used as reference for the observations (e.g. time for longitudinal data). The 'Output' column specifies the observed values (the response variable). The data frame can also provide as many covariates as desired, with no constraints on the column names. These covariates are additional inputs (explanatory variables) of the models that are also observed at each reference 'Input'. Recovered from trained_model if not A named vector, tibble or data frame of hyper-parameters associated with kern_0. Recovered from trained_model if not provided. A tibble or data frame of hyper-parameters associated with kern_i. Recovered from trained_model if not provided. A kernel function, associated with the mean GP. Several popular kernels (see The Kernel Cookbook) are already implemented and can be selected within the following list: □ "SE": (default value) the Squared Exponential Kernel (also called Radial Basis Function or Gaussian kernel), □ "LIN": the Linear kernel, □ "PERIO": the Periodic kernel, □ "RQ": the Rational Quadratic kernel. Compound kernels can be created as sums or products of the above kernels. For combining kernels, simply provide a formula as a character string where elements are separated by whitespaces (e.g. "SE + PERIO"). As the elements are treated sequentially from the left to the right, the product operator '*' shall always be used before the '+' operators (e.g. 'SE * LIN + RQ' is valid whereas 'RQ + SE * LIN' is not). Recovered from trained_model if not provided. A kernel function, associated with the individual GPs. ("SE", "PERIO" and "RQ" are aso available here). Recovered from trained_model if not provided. Hyper-prior mean parameter of the mean GP. This argument, can be specified under various formats, such as: □ NULL (default). The hyper-prior mean would be set to 0 everywhere. □ A number. The hyper-prior mean would be a constant function. □ A vector of the same length as all the distinct Input values in the data argument. This vector would be considered as the evaluation of the hyper-prior mean function at the training Inputs. □ A function. This function is defined as the hyper-prior mean. □ A tibble or data frame. Required columns: Input, Output. The Input values should include at least the same values as in the data argument. A vector or a data frame, indicating the grid of additional reference inputs on which the mean process' hyper-posterior should be evaluated. A number. A jitter term, added on the diagonal to prevent numerical issues when inverting nearly singular matrices. A list gathering the parameters of the mean processes' hyper-posterior distributions, namely: • mean: A tibble, the hyper-posterior mean parameter evaluated at each training Input. • cov: A matrix, the covariance parameter for the hyper-posterior distribution of the mean process. • pred: A tibble, the predicted mean and variance at Input for the mean process' hyper-posterior distribution under a format that allows the direct visualisation as a GP prediction.
{"url":"https://arthurleroy.github.io/MagmaClustR/reference/hyperposterior.html","timestamp":"2024-11-05T11:59:12Z","content_type":"text/html","content_length":"15216","record_id":"<urn:uuid:59c0c296-95b3-414e-aeea-73701eaafd30>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00036.warc.gz"}
Simplify your DIY projects and repairs with our handy calculators. Whether youβ re measuring materials, estimating costs, or calculating dimensions, our tools are designed to assist you in completing your tasks efficiently. Get the job done right with our reliable resources. This pouch calculator tool helps you quickly determine the volume of a pouch based on its dimensions. Pouch Calculator Pouch …
{"url":"https://madecalculators.com/category/handy/","timestamp":"2024-11-09T15:46:11Z","content_type":"text/html","content_length":"192334","record_id":"<urn:uuid:11089911-730e-4aba-9606-292644ede694>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00014.warc.gz"}
Near-field refrigeration and tunable heat exchange through four-wave mixing We modify and extend a recently proposed four-wave mixing scheme [C. Khandekar and A. Rodriguez, Opt. Express 25(19), 23164 (2017)] for achieving near-field thermal upconversion and energy transfer, to demonstrate efficient thermal refrigeration at low intensities ∼ 10^9W/m^2 over a wide range of gap sizes (from tens to hundreds of nanometers) and operational temperatures (from tens to hundreds of Kelvins). We further exploit the scheme to achieve magnitude and directional tunability of near-field heat exchange between bodies held at different temperatures. Near field radiative heat exchange^1–5 is important for several emerging applications and technologies, from energy conversion^6,7 to nanoscale heat management and cooling.^8–11 This has motivated recent efforts aimed at achieving active control of heat transfer using gain media^12,13 or more generally, chemical potentials.^14 Simultaneous advances in nanofabrication have also made it possible to confine light to small volumes and over long timescales,^15,16 allowing otherwise weak optical nonlinearities to modify even low-power phenomena like thermal radiation.^17–20 We recently proposed an alternative mechanism for controlling heat exchange^21 that exploits nonlinear four-wave mixing to extract and upconvert “thermal energy” trapped in the near field of a planar body unto another, from mid- to near-infrared wavelengths.^21 In particular, we showed that the combination of resonantly enhanced optical nonlinearities and large density of states associated with tightly confined surface plasmon/phonon–polariton (SPP) resonances enables high-efficiency four-wave mixing in planar materials separated by nanoscale gaps, resulting in order 10^5W/m^2 upconversion rates induced by externally incident mid-infrared light of moderate intensities, on the order of 10^12W/m^2. In this letter, we show that a similar four-wave mixing scheme can be exploited to achieve thermal refrigeration and tunable heat exchange. We begin by exploring the planar configuration shown in Fig. 1(a), comprising an emitter held at temperature T[e] and supporting mid-infrared SPP resonances around frequency ω[1] which is separated by a vacuum gap from an absorber held at temperature T[a] and supporting near-infrared SPPs around ω[3]. The absorber is coated with a thin χ^(3) nonlinear film supporting a mediator resonance at ω[2] ∼ (ω[3] − ω[1])/2 which couples to externally incident light by way of a grating. The mediating mode facilitates resonant four-wave mixing (ω[1] + 2ω[2] = ω[3]) between the SPP resonances, resulting in cooling of the emitter by way of upconversion and energy transfer across the gap. As shown below, in contrast to passive radiative cooling mechanisms requiring large temperature differentials T[e] ≫ T[a], nonlinear upconversion allows thermal energy extraction under zero or even negative differentials (T[e] < T[a]), constrained only by photon-number conservation.^21 This in turn enables thermal refrigeration, where thermal energy is made to flow from a low to high temperature body (a reversed heat engine) when the system is driven by external light, which provides the work required for energy transfer. The first part of this letter is devoted to a detailed analysis of such a refrigeration scheme, illustrating not only the various design criteria but also operating regimes needed to achieve high-efficiency refrigeration, including temperature range (T[e] ∼ 10−1000 K) and gap sizes. In the second part, we extend the analysis to consider a more complicated system, depicted in Fig. 2(a), where we introduce an additional thin film on top of the nonlinear medium for the purpose of enabling appreciable heat exchange under zero external drive but finite temperature differentials T[e] ≠ T[a], otherwise absent due to the large SPP frequency mismatch between the emitter and absorber. This channel can thus compete with nonlinear energy upconversion to enable tunable heat flow (in both magnitude and direction) with respect to the incident drive power. A significant refinement in this paper with respect to our earlier work^21 is the substitution of lossy plasmonic resonances in favor of low-loss dielectric leaky modes in the nonlinear medium. While the latter are less localized than the former, they exhibit longer radiative and absorptive lifetimes and thus result in significantly lower power requirements, on the order of 10^9 W/m^2 as opposed to 10^12 W/m^2, while also mitigating pump-induced heating. While the choice of transparent materials around the pump wavelength ω[2] mitigates heating introduced by the drive, in practice we expect that efficient thermal cooling will require a vacuum gap (not considered before^21) in order to further limit conductive transfer stemming from spurious heating. Our theoretical analysis is based on a coupled-mode theory framework,^22–24 previously exploited to analyze heat transfer in linear media^25–27 and more recently generalized to consider a broad class of weakly nonlinear resonant processes,^28,29 that provides general operating conditions and quantitative predictions while allowing us to avoid otherwise cumbersome calculations based on nonlinear fluctuational electrodynamics. ^17 Finally, we note that our predictions extend recent work in the area of non-contact refrigeration^12,30,31 and dynamically tunable heat exchange,^13,32 and has analogies with more established thermoelectric cooling schemes.^33 We first consider the planar system shown in Fig. 1(a), comprising a silica (SiO[2]) emitter separated by a vacuum gap d from an aluminum-doped zinc oxide (AZO) absorber. The associated dielectric properties are obtained from various references.^34–36 The nonlinear medium is a chalcogenide (ChG) thin film of material composition As[2]S[3], thickness t, permittivity ε[2] = 6.25, and isotropic Kerr coefficient χ^(3) = 10^−17 m^2/V^2.^37–40 (Note that we assume an isotropic Kerr coefficient, χ[xxxx] = 3χ[xxyy] = 3χ[xyxy] = χ^(3), purely for computational and conceptual convenience, but that more generally the nature of the relevant tensor components will depend on growth and material considerations.^38,41) The p-polarized SPP resonances in this configuration are characterized by their conserved transverse momenta k and described by mode profiles of the form $El(z)eik.x∥$, where l ∈ x, y, z and x[∥] is the transverse position. Figure 1(b) shows multiple mode dispersions ω(k) arising in the above configuration, for a choice of d = 30nm, illustrating two branches of SPPs localized at the SiO[2] interface of frequencies ω[1a] ∼ 2 × 10^14 rad/s and ω[1b] ∼ 0.8 × 10^14 rad/s, along with a single SPP branch localized at the AZO interface of frequency ω[3] ∼ 12 × 10^14 rad/s. Also present (not shown) is a separate mediator resonance that propagates primarily within the ChG film with frequency ω[2] ≈ 5 × 10^14 rad/s and wavevector k[2] = k[2]ŷ. This mediator mode can couple to externally incident light at ω[2] and angle θ[inc] by way of a thin, first-order diffraction grating of period Λ, designed to satisfy $ω2csin⁡θinc+2πΛ=k2$. Note that the thin size of the grating (≲ 5nm) and large frequency mismatch between the mediator and SPP resonances combine such that the grating has a negligible impact on the dispersions and resonances of the slabs.^42 Furthermore, four-wave mixing between SPPs is only possible under the momentum-matching condition, k[1] + 2k[2] = k[3],^21 thus ensuring that only a single emitter mode at k[1] couples exclusively to an absorber mode at k[3]. In particular, given a set (k[1], k[3]) of momentum-matched modes, the upconversion rates can be computed using the following coupled mode equations: $ȧ1α=(iω1α−γ1α)a1α−iκ1αe−2iω2ta3−iκla1β+2γ1αξ1α, α,β∈{a,b}, α≠β$ where a[j] denotes the amplitude of mode j ∈ [1a, 1b, 2, 3], normalized such that |a[j]|^2 is the corresponding mode energy and ξ[j] represent thermal noise sources with thermodynamic correlations $⟨ξj*(ω)ξj(ω′)⟩=Θ(ω,Tk)δ(ω−ω′)$, where Θ(ω, T[j]) = ℏω/[ exp(ℏω/k[B]T[j]) − 1] denotes the Planck distribution corresponding to a local bath temperature T[j].^43 The mode frequencies ω[j](k[j]) and associated decay rates γ[j] are obtained from the complex, eigenfrequency solutions of Maxwell’s equations, while the nonlinear coupling coefficients κ[1α] (α = a, b) describing four-wave mixing are obtained via perturbation theory^28,29 and depend on a complicated, spatial overlap of the linear profiles within the nonlinear medium:^21 Here I is the drive intensity and γ[2t] = γ[2] + γ[2c] is the overall loss rate of the mediator resonance, which includes both dissipative γ[2] and radiative decay γ[2c] (induced by the periodic grating). Note that the momentum-matching condition of nonzero coupling follows by inspection of the phase factor $ei(k3−k1−2k2ŷ)$, allowing us to simplify κ[1α](k[1], k[3]) → κ[1α](k, θ), where k = |k[1]| and θ is the angle between k[1] and the wavevector of the mediator mode (parallel to the y axis). Also included in the coupled-mode equations is the possibility of finite linear coupling κ[l] between the two emitter SPPs, which is negligible in the current configuration due to the large discrepancy between ω[1a] and ω[1b] but turns out to be of critical importance in the configuration of Fig. 2(b). From these coupled-mode equations, one can find the various energy transfer rates corresponding to a given set of modes (k, θ) by considering the overall energy loss rate associated with each mode,^ 22 leading to simple expressions for the thermal extraction $Pα→3=2⟨Im[κ1α*⁡exp(2iω2t)a3*a1α]⟩$ and linear heat transfer $Pa→b=2⟨Im[κl*a1aa1b*]⟩$ rates, along with associated power spectral densities.^21 The net flux rates H[α→β] are then given by: where α, β ∈{a, b, 3} labels the particular flux channel. To provide a proof-of-concept demonstration of thermal refrigeration, we first consider typical geometric and operating parameters, with d = 30nm, t = 100nm, Λ = 2μm, θ[inc] = 45°, and T[a] = 300K. Figure 1(c) shows the net thermal extraction rate H[ex] = H[a→3] + H[b→3] corresponding to two different emitter temperatures, T[e] = 300 K (blue curve) and T[e] = 1000 K (black curve), as a function of drive intensity I. Evidently, large flux rates H[ex] ∼ 10^5 W/m^2 are achievable with moderate drive intensities I ∼ 10^9 W/m^2, illustrating over three orders of magnitude improvements in power efficiency (reduced intensity requirements) over earlier configurations^21 based on lossy plasmonic mediator resonances. Note that the transparency of the emitter at pump wavelength as well as the presence of vacuum gap and large SPP frequency mismatch imply that conductive or radiative heating of the emitter due to the pump is negligible compared to heat extraction leading to its cooling. The efficiency of such reversed heat engine (refrigeration scheme) is given by a coefficient of performance^33 (COP), defined as the ratio of thermal energy extracted to power that is lost to pump-induced heating. It follows from coupled mode equations that the absorbed pump intensity at ω[2] is given by $Habs=4γ2γ2cI/γ2t2$. We choose the radiative or coupling rate of the mediator mode to be γ[2c] = 10^−5ω[2], a very reasonable estimate based on extensive theoretical^44–46 and experimental work on similar thin gratings.^42 The dissipation rate γ[2] ≈ Im{ϵ[m]}ω[2]/2Re{ϵ[m]} where ϵ [m] is the complex permittivity of the nonlinear medium, is obtained from perturbation theory and agrees with the exact complex eigenfrequency solution. With Im ϵ[m] ≈ 10^−10 (obtained from extrapolating available data^47), it follows that γ[2] ≪ γ[2c] and as shown by red curve in Fig. 1(c), the absorbed power H[abs] ≈ 4γ[2]I/γ[2c] by the ultra-low loss resonance is smaller than the heat extraction rates leading to COP ≫ 1 over a varying range of intensities. While such ultra-low loss resonances have been explored extensively in different context,^48 they play an important role here in minimizing the unnecessary power dissipation and enhancing the refrigeration efficiency (COP). While ideally, COP > 1 is within reach, various non-idealities such as fabrication imperfections and spurious material loss rates may lead to effectively larger dissipation rates in actual experiments and potentially smaller values of COP ≈ 10^−2. We note that these are realistic efficiencies of all such solid-state refrigeration schemes^31,33 and are acceptable given the reliability (no moving parts) and ease of on-chip implementation in comparison to other gas based refrigeration methods. Yet another important figure of merit is the range of operating temperatures over which it is possible to cool the emitter (independently of efficiencies). Along this vein, Fig. 1(d) shows H[ex] as a function of T[e] for multiple values of I, illustrating a change in the sign of the flux from positive (solid) to negative (dashed) as T[e] decreases past a typical transition temperature T[e] ∼ 10 K. It follows that under ideal conditions under which T[a] = 300 K is held fixed, heat can be extracted from the emitter until it is cooled down to temperatures on the order of tens of Kelvin. Moreover, while we have chosen so far to focus on configurations involving very small vacuum gaps d = 30 nm, as shown in Fig. 1(e), significant flux rates can nevertheless be achieved at larger separations d ∼ 100 nm. This is an important consideration for current state of the art experiments exploring near-field heat transfer in planar geometries.^5 Interestingly, we find that the complicated dependence of the spectral flux rate on gap and drive intensity, quantified by the coupling coefficient κ(k, θ), leads to a modified relationship between the net flux and gap size compared to the typical ∼ 1/d^2 dependence associated with linear heat exchange. Comparing H[ex] in the particular case of I = 10^11 W/m^2 and equal T[e] = T[a] = 300 K (solid red line) to the flux rate between two SiO[2] plates held at a large temperature differential, T[e] = 300 K and T[a] = 0 K, but separated by the same gap sizes (black line), one finds not only significantly larger extraction rates but also a slower polynomial decay in the former. We finally remark that apart from the efficiency comparable to other solid-state refrigeration methods,^31,33 the design flexibility and wide range of temperature regimes achievable through this scheme could prove a viable alternative depending on the application. We now consider a slightly modified configuration, depicted schematically in Fig. 2(a), to illustrate the possibility of exploiting four-wave mixing as a means of achieving tunable heat exchange. The modified configuration consists of a silicon carbide (SiC) emitter separated by vacuum from a aluminum-doped zinc oxide (AZO) absorber. Resting on the absorber is a composite layer consisting of an additional SiC thin film of thickness h = 20nm on top of a nonlinear gallium arsenide (GaAs) film, with nonlinear χ^(3) = 10^−18 m^2/V^2 and dielectric properties taken from various references.^ 34–36,49 The presence of the additional SiC thin film results in three branches of SPPs, out of which only two, depicted as solid lines in Fig. 2(b), correspond to SPPs localized along the SiC–vacuum interfaces. In contrast to the previous configuration, however, these SPPs have non-negligible linear coupling (κ[l] ≠ 0) due to similar resonance frequencies and can therefore contribute significant linear heat exchange between the emitter and absorber. In order to incorporate both nonlinear upconversion and linear heat exchange, we obtain the unperturbed frequencies, mode profiles, and linear coupling rates κ[l] of the SPPs by fitting the full linear fluctuational electrodynamics calculations to the coupled-mode equations. The dashed (solid) lines in Fig. 2(b) depict the unperturbed (perturbed) resonance frequencies, ω[1a] ∼ ω[1b] ∼ 1.8 × 10^14rad/s, obtained using the fitting procedure. In addition, the system supports a SPP branch around frequency ω[3] ∼ 10 × 10^14 rad/s that is localized along the AZO interface, shown as an inset (red line). Like before, a thin grating of period Λ = 1.78 μm is used to couple incident light at angle θ[inc] = 45° to a mediator resonance in the GaAs film of frequency ω[2] ≈ 4 × 10^14 rad/s and wavevector k[2] = k[2]ŷ. To provide a proof-of-concept demonstration of tunable heat exchange, we consider a typical set of parameters, corresponding to d = 50 nm, h = 20 nm, t = 100 nm, and T[e] = 300 K < T[a] = 400 K, which in the absence of the drive nevertheless leads to a net heat exchange across the gap directed toward the emitter. Figure 2(c) shows the net H[ex] = H[a→b] + H[a→3] (black line) along with individual H[a→b] (blue line) and H[a→3] (red line) extraction rates, as a function of drive intensity I. Evidently, at low drive intensities I ≪ 10^12 W/m^2, the net extraction rate H[ex] is dominated by the linear heat exchange between the SiC resonances (H[a→b] < 0), becoming gradually larger due to nonlinear extraction (H[a→3] > 0) with increasing I, with the reversal in heat flow across the gap occurring at I ≳ 10^13 W/m^2. Note that while intuitively one might expect a decreasing amount of linear heat flow with increasing nonlinear upconversion, we observe a non-monotonic trend in H[a→b] with increasing I that demonstrates instead an increase in linear flow at low intensities. Such a non-trivial interplay between the two (linear and nonlinear) processes originates from a shift in the SiC mode frequencies that ends up enhancing the otherwise sub-optimal (due to the slight frequency mismatch) linear heat flow. While the net flux rates contain contributions from a wide set of SPPs, characterized by (k, θ), the underlying behavior of flux rates for these modes can be analyzed by inspection of the frequency-integrated spectral flux P[ex](k, θ). For illustration, Fig. 2(d) shows the angular dependence of the flux rate at a fixed kd = 0.5, with Fig. 2(e) showing the underlying angle-averaged spectrum with respect to kd at multiple drive intensities, corresponding to the points marked by circles in Fig. 2(c). For convenience, we normalize these flux rates by P[0] = 2γ[1]Θ(ω[1], T[e]), or the thermal power available to a single SPP, for typical values of γ[1] = 4.45 × 10^11 rad/s and ω[1] = 1.78 × 10^14 rad/s. As illustrated in Fig. 2(d), both the magnitude and direction of the flux rate depend on the angle θ and intensity I, with the latter eventually resulting in large, positive angle-averaged flux rates at larger I. Note that there exists a range of modes, corresponding to highly acute angles (grey region), for which the momentum-matching condition k[3] = k[1] + 2k[2] can never be satisfied and for which there is no nonlinear upconversion. Finally, Fig. 2(e) shows the growing contribution and role of modes satisfying momentum-matching in the net exchange, allowing nonlinear upconversion to overwhelm the linear heat flow with increasing I. We demonstrated a four-wave mixing scheme for active near-field heat extraction. This approach not only enables efficient nanoscale thermal refrigeration at very low temperatures ∼ 10 K and low input intensities I ∼ 10^9 W/m^2, but also active control of both the magnitude and direction of heat flow across vacuum gaps. While the systems explored in this work represent only a proof of concept, we are confident that other geometries and materials could result in further improvements. We note that the coupled mode approach^21 employed above is valid as long as the decay and coupling rates are much smaller than the resonant frequencies of SPPs. It breaks down for large intensities I ≳ 10^14 W/m^2 where not only coupling rates are very large but also other considerations such as optical damage threshold^50 become important. While coupled mode theory circumvents the need to carry out full and repeated calculations, further analysis of such nanoscale pump-thermal mixing processes using nonlinear fluctuational electrodynamics^17 remains a challenging and interesting problem for future work. This work was partially supported by the National Science Foundation (DMR-1454836), Princeton Center for Complex Materials with funding from the NSF MRSEC program (DMR-1420541) and Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875). J. J. H. J. Phys. Rev. B , and Surf. Sci. Rep. A. I. B. N. J. Rev. Mod. Phys. Z. M. , and C. J. Int. J. Energy Res. , and AIP Advances Appl. Phys. Lett. , and J. Appl. Phys. D. G. P. V. D. R. K. E. W. P. G. D. et al., Appl. Phys. Rev. D. G. W. K. K. E. G. D. H. J. , and S. R. J. Appl. Phys. Proceedings of the IEEE C. B. , and Nano Lett. , and A. J. Phys. Rev. B O. D. , and A. W. Phys. Rev. B , and Phys. Rev. B J. D. Nat. Mater. D. K. T. J. S. M. , and K. J. Europhys. Lett. S. G. , and A. W. Phys. Rev. B , and A. W. Appl. Phys. Lett. A. W. Opt. Express , and J. D. J. Phys. Chem. C , and M. L. Opt. Express , and K. H. J. Appl. Phys. J. Appl. Phys. J. D. S. G. J. N. , and R. D. Photonic crystals: molding the flow of light Princeton University Press J. D. , and S. G. Optics Express Z. M. Nano Energy T. P. , and Journal of Applied Physics , and Nano Lett. International Journal of Energy Research E. D. Handbook of optical constants of solids , vol. 3 ( Academic press Phys. Rev. A G. V. N. K. , and IEEE J. Sel. Top. Quantum Electron. L. G. A. M. , and V. F. J. Non-Cryst. Solids , and Opt. Express S. R. J. Non-Cryst. Solids J. M. F. W. J. S. V. Q. L. B. , and I. D. Opt. Lett. , and A. A. Appl. Phys. Lett. Brownian motion and stochastic calculus , vol. 113 ( Springer Science & Business Media , and A. A. IEEE Journal of Quantum electronics C. J. Advances in Optics and Photonics A. A. Appl. Phys. Lett. Chalcogenide glasses: preparation, properties and applications Woodhead Publishing K. Y. D. Y. S. H. , and Nature Photonics J. S. J. Appl. Phys. , and Optics Express
{"url":"https://pubs.aip.org/aip/adv/article/8/5/055029/921460/Near-field-refrigeration-and-tunable-heat-exchange","timestamp":"2024-11-03T10:41:16Z","content_type":"text/html","content_length":"270621","record_id":"<urn:uuid:bbea4f6e-25f0-4ca8-a849-22fcf174c4e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00213.warc.gz"}
Abaqus software – Learning the Concept One of the most widely used software in the field of finite element analysis is Abaqus software. This software was created in 1987 by David Hibbitt, Ben Carlson and Paul Sorensen; It was originally designed for use in nuclear power and drilling engineering. Because engineers in these branches needed a tool to study complex and non-linear problems. Since this software is based on non-linear problems, it has a high ability to simulate the real world. This software gives the user the ability to model the most complex phenomena by considering very subtle effects. Thus, with the development of a wide range of industries in the years 1980 to 1990, this software found a special place as the selected software among other finite element software’s for users. Considering that Abaqus is a general and extensive modeling tool, its usage is not limited to the mechanical analysis of solids and structures (stress-displacement). Using this software, it is possible to study various issues such as heat transfer, mass penetration, thermal analysis of electrical components, acoustics, soil mechanics and piezoelectricity. Using the Abaqus software is a relatively simple task, even though it provides a wide range of features to the user. The most complex problems can be modeled easily. For example, issues involving more than one component can be modeled by creating a geometric model of each component, attributing the behavior of the corresponding material to those components, and then assembling different components. In most modeling, even models with a high degree of non-linearity, the user should only determine the engineering data such as the geometry of the problem, the behavior of the related material, the boundary conditions and the loading of that problem. In a non-linear analysis, Abaqus automatically selects the rate of load growth and convergence tolerances and finally adjusts their values during the analysis to achieve a correct answer. As a result, the user rarely has to determine the values of the control parameters of the numerical solution of the problem. Based on the recommendations of the software guide, the principles governing any finite element analysis in this software engineering tool include the following three steps; • Preprocessing • Problem Solving • Postprocessing In general, ABAQUS is a set of effective simulation programs based on the finite element method, which is capable of solving relatively simple linear problems to complex non-linear problems. ABAQUS includes a wide range of different elements that can be used to model any type of geometry. It also has different models of materials that can be used to model the behavior of most common engineering materials such as metals, rubber, hyperelastics and soil. Designing the software as a simulation tool with a general purpose has given it the ability to be used in problems beyond structural problems (stress/displacement). Therefore, this tool has become widespread among researchers and engineers, and a significant growth can be seen in the number of its users every day. In the following, we introduce some subsets and common terms of this engineering tool. Subsets of ABAQUS ABAQUS consists of two main subsets named ABAQUS/Standard and ABAQUS/Explicit. There are three other subsets for specific analyzes for ABAQUS/Standard, which are ABAQUS/Design, ABAQUS/Aqua and ABAQUS /Foundation. Also, ABAQUS interface with MSC.ADAMS and MOLDFLOW are MOLDFLOW and ADAMS/Flex respectively. ABAQUS/CAE is a complete environment in ABAQUS that includes features for model creation, analysis and control, as well as checking the results. ABAQUS/Viewer is a part of ABAQUS/CAE that only includes processing and viewing results. The connection between different parts of this software is shown in the figure below; ABAQUS Subsets It is a general purpose processor (solver) and also is a sub-program to solve a wide range of linear or non-linear problems including static, dynamic, thermal and electrical states. The ABAQUS/ Standard machine solves the equations in a implicit method in each step . It is a subset for performing specific analyzes that uses explicit dynamic finite element formulas to solve problems. And it can be used to model transient and short dynamic events such as impact and explosion problems, as well as for situations with a high degree of non-linearity such as forming simulation. ABAQUS/CAE (Complete Abaqus Environment) This is a subset of the graphical environment of the software. This environment allows the user to create the model quickly and conveniently by creating or entering the model geometry. In this environment, physical properties and material properties along with loading and boundary conditions can be assigned to the geometry of all model’s parts. When the model is complete, ABAQUS/CAE records the analysis and allows the user to view the analysis process even during the analysis. Visualization module in ABAQUS/CAE is used to view and check the final results. It is a subset of ABAQUS/CAE that is used only for processing analysis outputs. It is a subset that is added to ABAQUS/Standard. It can be used to simulate offshore structures like oil and gas extraction platforms. Some of the capabilities of this sub-set are considering wave, wind, flow and buoyancy loading. Element in ABAQUS Considering the importance of the mesh technique and the type of elements, user should have more information about the elements. Therefore, in this section, we will describe the characteristics of elements and the theoretical explanations of them. A wide range of elements can be used in ABAQUS, which gives the user great ability to model and analyze different types of problems. Now we get to know the five characteristics of an element that determine its behavior. Each element has the following five characteristics; – Family. – Degrees of freedom (which directly depends on the element family). – Number of nodes. – Formulation method. – Integration method. Each element in ABAQUS has a unique name, such as S4R, T2D2 or C3D8I. It should be noted that the name of an element represents all five properties. Now let’s examine each of the mentioned features; Family of elements In the figure below, you can see the types of element families that are used in stress analysis problems. One of the main differences between the elements of two families is the type of geometry of the elements of those families. In the following, you will get acquainted with the characteristics of several families of the mentioned elements. The first letter or letters seen in the name of an element indicates the family of that element. For example, in the element “S4R”, the letter “S” indicates that this element is from the “Shell” family, and in the case of the element “C3D8I”, the letter “C” indicates that this element is from the “Continuum” family. ABAQUS Family of Elements Degrees of freedom The degrees of freedom are actually the main variables of each problem that are calculated during the analysis. For stress-displacement problems, the main degrees of freedom are the displacements of the nodes. It is necessary to mention that regarding the Shell and Beam elements, the rotation in the nodes is also of degrees of freedom, but according to the software guide, in a heat transfer problems, temperature of the nodes are one of their degrees of freedom. Therefore, it is clear that different elements should be used for a heat transfer analysis than the elements of a stress analysis. The following numbering system is used to represent degrees of freedom in ABAQUS: 1 Translation in direction “1”. 2 Translation in direction “2”. 3 Translation in direction “3”. 4 Rotation around axis “1”. 5 Rotation around axis “2”. 6 Rotation around axis “3”. 7 Twisting around “1” in the section of a Beam element. 9 Acoustic or net pressure. 10 Electric potential. The symbols are different for axisymmetric elements. In these elements, the translation and rotation are symbolized as follows; 1 Translation in direction “r”. 2 Translation in direction “z”. 3 Rotation in “r-z” plane. Note: r and z directions are radial and axial directions, respectively. Basics of Abaqus Abaqus software can perform various types of analysis and in this article, we just evaluate two types of static and dynamic analysis. In a static analysis, the long-term response of the structure is obtained for applied loads. In other cases, the dynamic response of structures is desired. For example, a sudden loading on one of the components that occurs when an impact occurs or the structure’s response to an earthquake. A complete analysis in the Abaqus program usually consists of three steps; • Pre-processing step • Processing step • Post-processing step These three steps are related to each other by a number of files according to the following process. Pre-processing step in ABAQUSE/CAE software or other software; Input file named Job.inp, Job.CAE and Job.jnl. Processing step with ABAQUS standard/explicit. post-processing step in the software; Job.fil, Job.res, Job.dat, Job.odb output files. Pre-processing in ABAQUSE/CAE In this step, you should build the physics of the problem and create an Abaqus input file. The model can usually be created graphically using Abaqus or other preprocessors. It is necessary to explain that it is also possible to generate the Abaqus input file using a text editor such as Notepad. Processing (standard/explicit) ABAQUS Processing, which usually runs as a process in the background, is the step in which Abaqus solves the standard/explicit numerical problem defined in the modules. Examples of the output of stress analysis include displacements and stresses that are stored in binary files and used for the post-processing step. Depending on the complexity of the problems to be analyzed and the power of the computer that performs the analysis, the analysis time can take time from few seconds to few days. Post-processing stage (ABAQUS/CAE). The final result can be evaluated after the completion of processing step, i.e., when the stresses, displacements and other basic variables have been calculated. Evaluation is usually available using the visible output module (graphics module) or other post-processors. The graphics module reads the data of the binary output file and has different options such as colored contours, animation, deformation shape and displaying X-Y curves as results. The Abaqus model consists of many different components and these components together form the physical problem that needs to be analyzed. In the simplest case, the analytical model includes information such as separate geometry, characteristics of the elements cross section, material data, loads and support conditions, analysis type and required outputs. Discrete geometry Finite elements and nodes form the basis of the geometry of the model structures. Each element in the model represents a separate part of the structure, which in turn consists of multiple elements connected to each other. Elements are connected to each other by common nodes. The coordinates of nodes and the way elements are connected (which shows which node belongs to which element) form the geometry of the model. All the elements and nodes in a model are created from the meshing of the members. Usually, meshing only approximates the actual shape of the structure. The element type, shape, position and number of elements used in mesh modules affects the results of analysis. The finer mesh (i.e., large number of elements) presents the more accurate result of job. When the mesh size becomes finer, the analysis results tend towards a single solution and the time required for the analysis also increases. The answer obtained from the numerical model is usually an approximate answer from solving the problems that have been simulated. The extent of the approximations depends on the geometry of model, behavior of materials, boundary conditions and loading, and these parameters determine the accuracy of the numerical answers compared to the experimental one. Of course, it should be noted that the user should always evaluate the sensitivity of his numerical model to the size of mesh seeds in order to choose the most optimal size for expanding his numerical models. The following video tutorial, produced by the Structural Numerical Research Center, teaches you this matter in the form of a simple example. Characteristics of the elements cross section Abaqus software includes a wide range of elements. Many elements have a geometry that is not completely determined by the coordinates of their nodes. For example, the layers of a composite shell or the dimensions of an I-shaped section are not defined from the elements’ nodes. These additional geometrical data are defined as the properties of the cross-sectional area of the elements and are needed to build the complete geometrical model of the problem. Material data One of the important data that must be specified for an element is the characteristics of their materials. Because the preparation of detailed material data; especially in the case of models using materials with complex behavior; is a difficult matter, the verification of Abaqus results depends on the accuracy and availability of material data. Loads and support conditions Loads deform the structure and cause tensions in the structure. The most common types of loading are: • Concentrated loads • Compressive loads on surfaces • Extensive tensile and contraction loads on surfaces • Loads and linear moments on the edges of the shells • Volumetric forces such as gravity • Thermal loads Boundary conditions are applied to create constraints on parts of the model so that the model remains fixed or moves by a predetermined amount. In static analysis, sufficient boundary conditions must be provided to prevent the movement of the model as a rigid body. Otherwise, the motion of the unconstrained rigid body will cause the stiffness matrix to be invertible, and this will cause the analysis of the structure to be interrupted (aborted) before completion. The Abaqus/standard issues a warning message if model encounters a problem in the analysis during simulation. In these cases, it is necessary for users to evaluate warning messages. If during the static analysis, warning of irreversibility of the stiffness matrix is observed, it should be checked whether the structure or a part of it has experienced the movement of rigid body due to the lack of support constraints or not. The motion of a rigid body can be both translational and rotational. In the dynamic analysis of inertial forces, as long as all the individual members of the model have mass, it prevents sudden unlimited movement; Therefore, solution warnings in a dynamic analysis usually indicate the presence of other problems in the model, such as excessive plasticization. Type of analysis As mentioned earlier, Abaqus software can perform various types of simulations, and in this educational article, due to the nature of most engineering problems, only two types of static and dynamic analysis will be mentioned. We know that in a static analysis, the long-term response of the structure or member to applied loads is obtained and such analyzes are not dependent on time. On the other hand, at other times, the dynamic response of the structure, including the history of displacements and forces, is desired by user. Choosing the type of analysis and mastering its concepts is very important. Because some elements are defined only for explicit analysis and others for implicit analysis in the software library. On the other hand, choosing a static or quasi-static solution method can have a significant impact on the problem-solving time. Soon, in another educational article, we will explain the standard and explicit solution method in detail. In this article, we reviewed the basic concepts, requirements and processes of Abaqus software, and if you are a beginner in this software, you need to consult with an expert. For more than 13 years, Structural Numerical Research Center has been active in the field of numerical simulation of mechanical, structural and geotechnical engineering problems. Some of our educational models can be obtained in the link and numerical modeling techniques are provided in step-by-step form. Keep in mind that to increase your modeling knowledge in Abaqus software, the best reference is the software help, along with the tutorials on our website.
{"url":"https://numericalarchive.com/2022/09/09/abaqus-software-learning-the-concept/","timestamp":"2024-11-05T23:51:26Z","content_type":"text/html","content_length":"123977","record_id":"<urn:uuid:b1165c11-60cd-42f0-af68-ca0c2976db42>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00235.warc.gz"}
Part B: Developing Statistical Reasoning (45 minutes) - Annenberg Learner Private: Learning Math: Data Analysis, Statistics, and Probability Classroom Case Studies, Grades 3-5 Part B: Developing Statistical Reasoning (45 minutes) The National Council of Teachers of Mathematics (NCTM, 2000) identifies data analysis and probability as a strand in its Principles and Standards for School Mathematics. In grades pre-K through 12, instructional programs should enable all students to do the following: • Formulate questions that can be addressed with data, and collect, organize, and display relevant data to answer them • Select and use appropriate statistical methods to analyze data • Develop and evaluate inferences and predictions that are based on data • Understand and apply basic concepts of probability In grades 3-5 classrooms, students are expected to use appropriate statistical methods to do the following: • Describe the shape and important features of a data set and compare related data sets, with an emphasis on how the data are distributed • Use measures of center, focusing on the median, and understand what each does and does not indicate about the data set • Compare different representations of the same data and evaluate how well each representation shows important aspects of the data In grades 3-5, children readily notice individual data points and are able to describe parts of the data — where their own data falls on the graph, which value occurs most frequently, and which values are the largest and smallest. A significant development in children’s understanding occurs as they begin to think about the set of data as a whole. Our goal for children is for them to see a data set as a distribution of values with important features, such as center, spread, and shape.In grades 3-5 classrooms, students are expected to use appropriate statistical methods to do the To focus students’ attention on the shape and distribution of the data, it is helpful to build from children’s informal language to describe where most of the data are, where there are no data, and where there are isolated pieces of data. The words clusters, clumps, bumps, and hills highlight concentrations of data. The words gaps and holesemphasize places in the distribution that have no data. The phrases spread out and bunched together underscore the overall distribution. Teachers must also continually emphasize and help students see that what they notice about the shape and distribution of the data implies something about the real-world phenomena being studied. In grades 3-5, students learn to use measures of center to summarize a data set. Building on children’s informal understanding of what is the most, what is the middle, and what is typical, teachers can help students develop understanding about the mode, median, and the mean. But students need to learn more than simply how to identify the mode or median in a data set and how to find the mean: They need to develop an understanding of what these measures of center tell us about the data, and what each does and does not indicate about the data set. The emphasis in these grade levels should be on the median, with informal exploration of the mean. Children can see where the median is located among the data, but the mean is much more abstract, as it has no clear identity within the data When viewing the video segment, keep the following questions in mind: • Thinking back to the big ideas of this course, what are some statistical ideas that these students are developing? • What questions could be posed to determine the extent of students’ understandings of what the mode, mean, median, and range do and do not indicate about the data set? In this video segment, Suzanne L’Esperance selects a group of students to present their findings. Each group of students created a line plot of the class data on family size and determined the mode, median, mean, and range for the data set. Watch as the group of students takes turns presenting the summary information to the class. Problem B1 Answer the questions you reflected on as you watched the video: a. What statistical ideas are these students developing? b. What questions could you pose to determine the extent of students’ understanding of what the mode, mean, median, and range do and do not indicate about the data set? Problem B2 This line plot (or dot plot) below displays the family-size data collected by the students in Ms. L’Esperance’s fifth-grade classroom. Imagine yourself in a conversation with the children about this data. A key question you might ask the students is, “What do you notice about the data?” Using the informal language of clusters, clumps, bumps, hills, gaps, holes, spread out, or bunched together, write five statements that you hope students would make describing the set of data as a whole. See Note 4 below. Too often, children describe the data as numbers devoid of context. Another question you should frequently ask students regarding their observations is, “What does that tell us about the family size? Problem B3 For each of the five statements you wrote in Problem B2, indicate what that observation might imply about the real-world context of family size. In another classroom investigation which reveals students’ understanding of the notion of “average,” students were given the following scenario (from Russell and Mokros, 1996): We took a survey of the prices of nine different brands of potato chips. For the same-sized bag, the typical or usual or average price for all brands was $1.38. What could the prices of the nine different brands be? Note that the language used — words like typical, usual, or average — keeps the discussion open to various ways that students might think about the notion of average. Problem B4 Consider how students might respond to this task and then develop three hypothetical student responses that are each based on a different measure of center — mode, median, and mean. See Note 5 below. The potato-chip task was presented to fourth-grade students in individual interviews to research students’ understanding of average. Here are some of the students’ responses: a. Some students would put one price at $1.38, then one at $1.37 and one at $1.39, then one at $1.36 and one at $1.40, and so forth. b. One student commented, “Okay, first, not all chips are the same, as you told me, but the lowest chips I ever saw was $1.30 myself, so, since the typical price is $1.38, I just put most of them at $1.38, just to make it typical, and highered the prices on a couple of them, just to make it realistic.” c. One student divided $1.38 by nine, resulting in a price close to 15¢. When asked if pricing the bags at $0.15 would result in a typical price of $1.38, she responded, “Yeah, that’s close enough.” d. When some students were asked to make prices for the potato-chip problem without using the value $1.38, most said that it could not be done. e. One student chose prices by pairing numbers that totaled $2.38, such as $1.08 and $1.30. She thought that this method resulted in an average of $1.38. Problem B5 For each response above, was the student reasoning about the “average” as a mode, median, or mean? Problem B6 Read the article “What Do Children Understand About Average?” by Susan Jo Russell and Jan Mokros from Teaching Children Mathematics. a. What further insights did you gain about children’s understanding of average? b. What are some implications for your assessment of students’ conceptions of average? Download PDF File: What Do Children Understand About Average? Article Continued Principles and Standards for School Mathematics (Reston, VA: National Council of Teachers of Mathematics, 2000). Standards on Data Analysis and Probability: Grades 3-5, 176-181 Reproduced with permission from the publisher. Copyright © 2000 by the National Council of Teachers of Mathematics. All rights reserved. The potato-chip activity is adapted from Teaching Children Mathematics. Copyright © 1996 by the National Council of Teachers of Mathematics. Used with permission of the National Council of Teachers of Mathematics. Russell, Susan Jo and Mokros, Jan (February, 1996). What Do Children Understand About Average? Edited by Donald L. Chambers. Teaching Children Mathematics, 360-364. Reproduced with permission from Teaching Children Mathematics. Copyright © 1996 by the National Council of Teachers of Mathematics. All rights reserved. Note 4 Using line plots (dot plots) in elementary classrooms is a fairly new practice. Consider how you might use this graphical representation of data with your students. How does this compare with your current method of presenting data? Note 5 You might want to review the statistical ideas of median and mean. Session 2, Part D, The Median Session 5, Part A, Mean and the Median Problem B1 a. Statistical ideas included using the mode, median, and mean as measures of center, and the range as an indicator of variation. b. Answers will vary. Examples of questions would be, “You stated that the range is 3. What does this tell us about the data set?” or “You said that the median is 4. If this is the only information I had asked you to figure out, what wouldn’t I know about the data?” Problem B2 Here are some possible statements that children might make: • There is a bump at 4. • The data are really bunched together. • There is a cluster at 3 and 4. • There’s a big gap from 6 to 11. • The data are not very spread out Problem B3 • The bump at 4 is the size of families that occurred most often for our class. • Because the data are all bunched together, we know that families in our class are very similar in size. • The cluster at 3 and 4 indicates that most families in our class have three or four people. • The gap from 6 to 11 tells us that no families in our class have six, seven, eight, nine, 10, or 11 children. • The lack of “spread” in our data tells us that our class’s families are similar in size. Problem B4 • A response based on the mode might be to make the prices of all nine bags exactly $1.38. Another response based on the mode is to price four bags at $1.38 and the others at $1.30, $1.32, $1.36, $1.37, and $1.50. The reasoning is to place more bags at $1.38 than at any other price. • A response that is based on the median is to make three bags cost $1.38 and the others cost $1.30, $1.30, $1.35, $1.40, $1.47, and $1.49. The reasoning is to put some bags at $1.38 and then to place an equal number of bags at prices lower and higher than $1.38. Here, three bags cost more than $1.38 and three bags cost less than $1.38. • A response that is based on the mean is to make the bags cost $1.38, $1.37, $1.39, $1.36, $1.40, $1.35, $1.41, $1.34, and $1.42. Since there’s an odd number of bags, the reasoning is to place one bag at $1.38 and then add and subtract the same amount to create new prices. Here, 1 cent was subtracted from $1.38 to get $1.37, then 1 cent was added to $1.38 to get $1.39, and so on. Problem B5 a. Median b. Mode c. Mean d. Mode or median e. Mean Problem B6 Answers will vary. You may want to use the suggestions for action research to assess your own students’ understanding of average. How would they respond to the potato-chip task?
{"url":"https://www.learner.org/series/learning-math-data-analysis-statistics-and-probability/classroom-case-studies-grades-3-5-6-8/developing-statistical-reasoning-45-minutes/","timestamp":"2024-11-11T13:33:24Z","content_type":"text/html","content_length":"119472","record_id":"<urn:uuid:d7f6367a-30d9-45fd-bb84-7217cf3f33d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00044.warc.gz"}