content
stringlengths
86
994k
meta
stringlengths
288
619
Zero division (Summary). Hi Dear Biomch-L Readers, Here is a summary of the messages I received after my posting regarding the zero division problem. Actually, it's not a summary, but a list. Of course you can call me lazy, but the point of views were so different I preffered to keep them in their 'original version'. What I'm doing now is to put a variable in the setup file of the program. According to the value of this variable, the program may adopt one of the following solutions: warning messages, add a value to denominator if zero, always add value ... Also admire and enjoy the lesson of geography about Norway sent by Oyvind Stavdahl!! |. .| Serge Van Sint Jan (sabbatical) | " | Department for Mechanical Engineering \=/ The University of New Mexico \|/ | \|/ Albuquerque, USA \ | / voice: int + 1 505 277 2339 -^-----^- fax: 1571 | email: serge@slider.unm.edu | ( ) / \ / \ | | | | -- -- --------- ORIGINAL POSTING ---------- Dear Biomch-L Readers, Everybody knows the problem of a division with a divisor equal to ZERO. When programming, three solutions can be used to avoid an interruption of your program with display of inflamed error messages from the system: 1: Test the divisor before each division and if it is equal to zero skip the division or even abort the current 2: Test the divisor before each division and if it is equal to zero, add a very small number to the divisor. 3: Systematically add a very small number to ANY divisor. (In the examples of Matlab, 2.204e-16 is added systematically). Solution 1 is surely the most honnest. But it is difficult to explain to a user, who does not care about math, that his data can not be processed because somewhere "a division by zero" appears. So, Solution 2 can be used. But the addition of the small number only takes place for some data and not for the others (because of the "if"). To avoid to disturb the "harmony of the balance" in the data set, Solution 3 may be applied. But is it really scientific to alter numbers with the only reason that one of them will produce a 0 divide exception? So, after this (maybe too!) long introduction, my questions are: 1- What is your personal opinion and experience on the subject? How do you solve the problem when programming and why? 2- Are they any well-defined rules? Or maybe nobody cares and uses the best solution according to his own application? Thank you for any answers! I shall post a summary of the answers later. ---- From: "Will G Hopkins" You should do what SAS does: abort that step and return a missing value. Nothing else makes sense. Maybe if the numerator is zero too, you could flag it specially, because 0/0 is a bit different from (nonzero)/0. Will G Hopkins PhD Physiology and Physical Education University of Otago, Dunedin, NZ ---- From deleva@risccics.ing.uniroma1.it A division by zero should give as a result an infinite number. For example, the slope of a straight line parallel to the y axis is infinite, and so is the tangent of 90 degrees (PI/2). I suggest to change the result of a division by 0, instead of the divisor. If your programming language allows computing the tangent of PI/2, the result of a division by zero could be made equal to: TAN (ATAN(1) * 2), where ATAN(1) * 2 is the value of PI/2. (TAN=tangent, ATAN=arctangent). Computing the tangent of PI/2, you will have the largest number your machine can produce, i.e. the colosest to infinite. Adding the same infinitesimal number to any divisors is not a good solution: an error will occur when the divisor is equal to the opposite of your infinitesimal number (i.e., the number times -1) The best solution anyway, in my opinion, is to write appropriate specific code for handling the condition divisor=0, in each part of your program where the division=0 might occur. However, SOMETIMES IT IS RIGHT for a program to stop and report an unrecoverable error when a division by zero is attempted. A division by zero could indicate either the use of wrong input data, or a BUG in the program, and this is a good reason not to automatically correct any division by zero, wherever it occurs, without knowing the specific consequences of this action in each particular segment of your program. Please, let me know what are the resposes of the other __________ _________ ___________~___ ________ _________________~___ / ~ ~ ~ ~ \ /______________~______~__________ _______~_____~______________~_____~_____\ | Paolo de Leva ~ \ Tel.+ FAX: (39-6) 575.40.81 | | Istituto Superiore di Educazione Fisica > other FAX: (39-6) 361.30.65 | | Biomechanics Lab / | | Via di Villa Pepoli, 4 < INTERNET e-mail address: | | 00153 ROME - ITALY \ deLEVA@RISCcics.Ing.UniRoma1.IT | |_____________________~________~__________________ __________________ _____| Panta rei :-) ---- From dapena@valeri.hper.indiana.edu In our work, there are certain situations in which a division by zero is possible, although unlikely. For instance, it is possible that a short segment (say, a hand) may very occasionally come out as having length zero through a quirk in digitizing the raw data. In some of the calculations that we do when looking for the angular velocity of a segment, this will give a "divide by zero" error. To handle these problems, I take what you call "approach #2", i.e., fix the problem, but only when there truly is a divison by zero. But I do an extra thing. When the computer makes such a fix, I make it write to the screen (not to any file) a message such as, "AT TIME T = 9.98 S THE LENGTH OF SEGMENT 6 WAS ZERO", or something like that. That way, I know when one of these adjustments has happened (which is extremely rare). The possible solution #1 that you mention would be terrible, because then every time that the program fails, you have to go crazy looking for where it failed, and that can be very time consuming and frustrating. We've all been there!! Solution #3 is also not very good in my opinion because it adds a lot of extra computation to the program if you have to add a small value to each divisor before performing tha division. Also, I am not sure, but would it not be possible that the addition of a small value to the divisor might MAKE the divisor a previously safe (i.e., non-zero) divisor acquire the zero value that will make it fail? As I said before, I like best your solution #2 with the added warning each time that the solution is actually used. Of course, as i am sure that you realize, you have to consider also what will happen to the number that comes out of the division in which we have made the denominator be slightly off from zero. The number that will come out will be a huge number, and we have to think of what the program will later do with it, and what are the implications. Continuing with the example that I mentioned previously, I alter the segment length to a non-zero value to be able to make the division, and thus calculate the angular velocity. However, the result will be a near-infinite angular velocity, which when later multiplied by the moment of inertia of the segment will give us a near-infinite angular momentum for the segment (and consequently also for the whole body), which would mess up our results. So what I do is alter the denominator into a non-zero value, and let the computer calculate the (garbage) angular velocity. That way I keep the job alive. Then I declare that the angular velocity just calculated for the segment was zero, and make the program continue from there as when the divider was not close to zero. In other situations you may want to set up other "salvaging" routes. This is one that works well in the situation described. Jesus Dapena Jesus Dapena Department of Kinesiology Indiana University Bloomington, IN 47405, USA 1-812-855-8407 (office phone) dapena@valeri.hper.indiana.edu (email) ---- From STERGIOU@OREGON.UOREGON.EDU Dear Serge, the problem can be solved if before you will do any divisions or any other math, you check your data for zeros. If you will find any zeros, then you can offer several interpolating solutions to the user. The user will then decide what is best for his/her data. Nick Stergiou Biomechanics Lab Univ. of Oregon ---- From Oddvar.Arntzen@avh.unit.no Hello Serge! You may instruct your compiler to generate error checking code for the part of your program where divisions take place. When a division by zero happens (or other illegal divisions as well), this error is trapped by your program. Control is then transferred to an error handler written by you. The error handler must have the ability to take the needed precautions. This method let you divide anything by anyting without abortion of the program. As an example, if a division by zero takes place, the error handling routine may try to find why the divisisor is zero (it may be that the divisor is a result of a data acquisition, and this acquisition is not yet performed), and if wanted, adjust the divisor accordingly, skip the division or whatever you want. This error trapping method can also be utilized for other mathematical purposes such as logarithms (what is the logarithm to zero?). Oddvar Arntzen Faculty of Social Sciences University of Trondheim 7055 Dragvoll Phones 47-73591908 47-74827673 Fax 47-73591901 E-mail oddvar.arntzen@avh.unit.no ---- From Oyvind.Stavdahl@itk.unit.no Dear Serge, I do not know any standard solution to the Divide-by-zero problem, but I do have a personal opinion: Any computer algorithm is constructed in order to solve a (more or less) "realistic" problem, that is, which is somehow related to a real-world problem. Therefore, when the divide-by-zero problem arises and indicates "no solution", it may have two (or probably more) reasons: 1. The real-world problem related to the running computer algorithm DOES NOT HAVE a numerical solution. The reason for this cold be, say, that the physical conditions present during the data aquisition did not match the assumptions made when constructing the algorithm. Therefore, an error message would actually be the RIGHT RESULT of the computation; any data adjustments that would cause the divide-by-zero not to occur, would probably cause erroneous conclusions in this case. 2. If the data set is "valid", the problem might be related to the computer software or hardware, such as roundoff errors or simply program BUGS (logical algorithmic errors). The correct action would be redesigning the algorithm or updating/improving the hardware. If the problem is roundoff errors, a minor adjustment of the data set could solve the problem, but one must carefully examine the consequences of such an action (if the divisor is so small that it is being rounded off to zero, adding a "small number" is likely to change its value by several orders of magnintude; the result of the division would be canged by the same factor). To sum up this ("maybe too long") tirade, I believe one must determine the REASON for the zero divisor in each case, otherwise one may deceive oneself by creating solutions that are erroneous or even impossible. If you get any "standard solution" responses, I would love to see them. Good luck dividing numbers! Oyvind Stavdahl (Siv.Ing., Doctoral student) Dept. of Engineering Cybernetics O. Bragstads plass 8 N-7034 TRONDHEIM Phone +47 73 59 43 91 (direct) +47 73 59 43 76 (switchboard) Fax +47 73 59 43 99 Email stavdahl@itk.unit.no NORWAY - _/ /| '`-. The land _// ' _ _,' of the _/`-/ / \ -, Vikings /._/ ,-. | |/ /_ - \ \_,_/ ;' / ._/ ' ,',' / / / ; / / /, / _,-'O--- Trondheim _/ / / \ \ ,'- / |--' O---- Lillehammer |,-, ' | `, O---- Oslo / _/\ / \ / ' ---- From salviap@ulb.ac.be Salut Serge J'ai lu ton message concernant les divisions par zero, la plupart du temps lorsque j'ai un tel probleme, j'essaye d'abord de voir si la facon de presenter le probleme necessite reellement d'utiliser la division. Si je suis oblige de diviser, j'utilise une variable au denominateur que je teste. Je pense que chaque situation a sa solution. A Bientot, Patrick. Patrick Salvia Departement for Functionnal Anatomy University of Brussels - Belgium I read your message about divisions by zero. Most of the time when I have such a problem, I first try to see if the problem really needs to use a zero division. And if I have to use it, I test the denominator. I think each case has its specific solution. ---- From yhc3@cornell.edu I am fairly new to the research scene, but it seems to me that altering numbers to the nth degree changes the data very slightly. As long as when you are interpreting your results, you don't expect to proclaim the same (or even anywher near the same) magnitude of precision as the alteration, then the addition will be lost during the analysis. I think it depends upon how precisely you intend to interpret your results. Anyway, that's one grad students opinion. Young Hui ================================================== =========================== Young Hui Chang C C Department of Anatomy C e-mail: yhc3@cornell.edu College of Veterinary Medicine C U U phone: 607-253-3551 Ithaca, NY 14853-6401 C U C U fax: 607-253-3541 U U ================================================== =========================== ---- From CORNWALL@NAUVAX.UCC.NAU.EDU I use solution #1 because it is the most honest Mark Cornwall Northern Arizona University
{"url":"https://biomch-l.isbweb.org/forum/biomch-l-forums/biomch-l-1988-2010/3205-zero-division-summary","timestamp":"2024-11-13T18:14:30Z","content_type":"application/xhtml+xml","content_length":"66187","record_id":"<urn:uuid:89fafe5c-e048-4f35-bfb2-cfbad9299dea>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00284.warc.gz"}
• java.lang.Object □ com.google.common.graph.Traverser<N> Type Parameters: N - Node parameter type @DoNotMock("Call forGraph or forTree, passing a lambda or a Graph with the desired edges (built with GraphBuilder)") public abstract class Traverser<N> extends java.lang.Object An object that can traverse the nodes that are reachable from a specified (set of) start node(s) using a specified There are two entry points for creating a Traverser: forTree(SuccessorsFunction) and forGraph(SuccessorsFunction). You should choose one based on your answers to the following questions: 1. Is there only one path to any node that's reachable from any start node? (If so, the graph to be traversed is a tree or forest even if it is a subgraph of a graph which is neither.) 2. Are the node objects' implementations of equals()/hashCode() recursive? If your answers are: □ (1) "no" and (2) "no", use forGraph(SuccessorsFunction). □ (1) "yes" and (2) "yes", use forTree(SuccessorsFunction). □ (1) "yes" and (2) "no", you can use either, but forTree() will be more efficient. □ (1) "no" and (2) "yes", neither will work, but if you transform your node objects into a non-recursive form, you can use forGraph(). Jens Nyman □ Method Summary All Methods Static Methods Instance Methods Concrete Methods Modifier and Type Method Description java.lang.Iterable breadthFirst(java.lang.Iterable<? Returns an unmodifiable Iterable over the nodes reachable from any of the startNodes, in the order of a breadth-first traversal. <N> extends N> startNodes) java.lang.Iterable breadthFirst(N startNode) Returns an unmodifiable Iterable over the nodes reachable from startNode, in the order of a breadth-first traversal. java.lang.Iterable depthFirstPostOrder Returns an unmodifiable Iterable over the nodes reachable from any of the startNodes, in the order of a depth-first post-order <N> (java.lang.Iterable<? extends N> traversal. java.lang.Iterable depthFirstPostOrder(N startNode) Returns an unmodifiable Iterable over the nodes reachable from startNode, in the order of a depth-first post-order traversal. java.lang.Iterable depthFirstPreOrder Returns an unmodifiable Iterable over the nodes reachable from any of the startNodes, in the order of a depth-first pre-order <N> (java.lang.Iterable<? extends N> traversal. java.lang.Iterable depthFirstPreOrder(N startNode) Returns an unmodifiable Iterable over the nodes reachable from startNode, in the order of a depth-first pre-order traversal. static <N> forGraph(SuccessorsFunction<N> Creates a new traverser for the given general graph. Traverser<N> graph) static <N> forTree(SuccessorsFunction<N> tree) Creates a new traverser for a directed acyclic graph that has at most one path from the start node(s) to any node reachable from the Traverser<N> start node(s), and has no paths from any start node to any other start node, such as a tree or forest.
{"url":"https://guava.dev/releases/33.2.1-jre/api/docs/com/google/common/graph/Traverser.html","timestamp":"2024-11-10T16:15:21Z","content_type":"text/html","content_length":"31007","record_id":"<urn:uuid:16d4efe6-5555-4ceb-9ebb-c847615ce3bb>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00670.warc.gz"}
Numbers "between" -0 and 0 from The Bloxin Channel The Bloxin Channel appears to be a YouTube Channel primarily inspired by a YouTube Channel called Numberblocks which seems to be an educational channel about basic mathematics. Specifically they created a bunch of characters formed out of blocks to represent the some of the simplest of the positive integers, beginning with 1 and continuing at least until 10. In fact there appears to be several channels inspired by the NumberBlocks channel, featuring made up NumberBlocks characters in a number video akin to the various number videos we see on YouTube. Why does any of this matter? Well because the Bloxin Channel also appears to take inspiration from videos like those created by Mathis R.V. and NO!. The whole "Numbers beyond Absolute Infinity" seems to have begun this way. On May 22nd of 2021 Mathis R.V. released "Numbers 0 to Absolute Infinity !!!". This got over 1 million views. It featured basically almost everything important to googologists. of the 37 minute and 52 second runtime 36 minutes and 5 seconds were devoted to finite numbers. The last 1 minute and 47 seconds are devoted to Cantors ordinal and cardinal transfinite numbers. At the end we reach Absolute Infinity, regarded by googologist's as "the largest possible number" assuming it even exists. Naturally since that makes it an "End Number" and googology is about continuing without end, it is almost never really talked about. The work of googology is to make as many finite and infinite numbers as possible below infinity and absolute infinity respectively. This video is important however as a jumping off point. Because having made the whole video that contains all of googology Mathis R.V. made something that could be sped up and remixed. Which is exactly what he did. The next video is essentially a creepypasta version of the previous video. On August 20th of 2021 Mathis R.V. releases "Numbers 0 to ABSOLUTELY EVERYTHING !!!". It goes through the entirety of Numbers 0 to Absolute Infinity !!! in a mere 42 seconds. The music becomes distorted from running super fast. The video then repeats but this time with 4 simultaneous copies playing side by side with a slight offset in timing. When this loop ends, 16 versions fill up the screen. All the audio overlaps creating more and more distortion. The number of simultaneous version keep going up by powers of 4 until the screen is nothing but audatory and pixelated distortion. This goes on for a disturbingly long time. The video runs for a total of 43 mintues and 19 seconds, longer than the original video in fact. Eventually the video goes entirely haywire, ditching the quadrupling sequence instead becoming something outta 2001. I won't spoil it. Essentially it is the "mind screw" version of a number video. The mysterious titular "ABSOLUTELY EVERYTHING" never shows up, but the title was enough ... to suggest continuing beyond Absolute Infinity. One possible Interpretation is "Absolutely Everything" would mean everything from 0 to Absolute Infinity (cause that should be everything, right?). However, my interpretation is that the video is trying to go beyond Absolute Infinity to reach this supposed Absolutely Everything ... and in the end it explodes math. This explains the video looping. It keeps cycling through 0 to Absolute Infinity in an attempt to get beyond Absolute Infinity and it keeps failing to actually go beyond it (Absolute Infinity x4 or 16 or 64 etc. is STILL just Absolute Infinity). This video is very important because it inspired NO! to start making videos past Absolute Infinity. This one only got about 138k views so is less well So what happened next? NO! took inspiration from the Mathis R.V. video. On September 11th of 2021 NO! releases "Numbers 0 to ABSOLUTE TRUE END - (Beyond the Absolute Infinity And Everything)". NO! clearly interpretted "Absolutely Everything" not as a description of covering all numbers in mathematics, but as a number itself. The proof is in the title itself, where he adds the clarification this video goes beyond Absolute Infinity and Absolute Everything. NO! didn't just introduce one new number beyond "Absolute Everything" though, but a whole host of new "numbers" beyond Absolute Infinity. This was basically the video that actually kicked off fictional googology. Much like the Mathis R.V. video Numbers 0 to Absolute Infinity it spends most of it's 47 minute and 55 second runtime on the finite numbers. The first 36 minutes and 50 seconds deals with tradtional googology. Cantor's transfinite numbers then begin ending at Absolute Infinity at 39 minutes and 22 seconds. The remaining 8 minutes and 33 seconds deal with so called numbers beyond Absolute Infinity. At 45 minutes and 11 seconds we reach "Absolute Everything". The last 2 minutes and 44 seconds are numbers supposedly larger than even that. This oneupmanship between Mathis R.V. and NO! is what sparked the beginning of this community. This video also has far less views than the original Numbers 0 to Absolute Infinity only having 191k views as of now. In a previous blogpost I provided a full list of the numbers in this video that occur after Absolute Infinity. I consider this our first "canon list". Among the numbers named in this video is superfinity, the first entry after Bear's Number in fact. This will be important later. This brings us to the subject of this blogpost, and our earliest Bloxin video of interest. On September 26th of 2021 The Bloxin Channel released "-1000000 To Beyond Absolute Infinity". Notice the similarity in the title from the Mathis R.V. and NO! videos. The Bloxin Channel decides to buck the trend of starting at 0. As we will see, this ends up being important. I like to think of this video as a creepypasta version of the Numberblocks videos. Here we extend the idea of block characters for numbers that would normally make no sense as block characters (hence the creepypasta element). It begins with negative numbers with inverted colors and creepy backwards playing music in the background, beginning arbitrarily with -1000000. It quickly goes through the negatives. The negative entries are: and then we get -0 at 0:44. A troll right? -0 is obviously just 0, so why have it on the list? Then things go completely insane until 1:13 (a mere 29 seconds later) when we reach ... 0. So there are a bunch of crazy entries that are "between" -0 and ... 0. We will come back to this. But first let's consider the rest of the content of the video to confirm it took some inspiration from the NO! video specifically (that came out only 15 days earlier). After 0 it goes through the googologically small numbers beginning with one versillionth. This continues for quite a while until we hit the 3:06 mark when we reach 1 ... the first official Numberblocks character. Half numbers are added in for a bit, such as 1+1/2, 2+1/2, and so on. Starting at 5 we just go through the standard Numberblocks characters. This continues for a while basically serving as a review of basic googology. We have the usual illions, eventually leading to Bowers extended illions, and then eventually some googology notations are introduced. We've got some Knuth Arrow notation, Bower's Operator notation, then it quickly goes through Graham's Number, the TREE function and the SSCG function. The largest finite number entry is the 1000000th-Xi Function Number. This is followed by "Infinity" with the classic lemniscate symbol. This is followed by "infiniteplex" (10^infinity) which under some interpretations would be no larger than infinity. Some Cardinals and Ordinals show up with the occasional absurdity like aleph_1/2. Some of these numbers are out of order as well. Aleph_1 for example comes before epsilon-zero even though epsilon-zero is a countable ordinal whereas Aleph_1 is an uncountable cardinal. At this point things just go off the rails. The video starts inventing made-up cardinal names (below Absolute Infinity) and even starts inventing new words for number types such as "Gendinal". This term would later show up on this very wiki, perhaps to avoid use of the word cardinal, since "numbers" beyond Absolute Infinity might theoretically not even be cardinals anymore. Next come the "Rondinals". Things become quite trollish after this, including reference to things that aren't even strictly numbers like "Five Pounds" (since this is a physical unit). Finally at the 10:00 mark we reach Absolute Infinity. At 10:36 we get Transfinity, the 16th entry after Absolute Infinity from the NO! video. Immediately after that we get superfinity the 165th entry after Absolute Infinity from the NO! video. A little later megafinity shows up, which is actually the entry just after superfinity in the NO! video. The fact that it uses the same names in the same order I think is enough evidence to show that the Bloxin video took inspiration from the NO! video. Unfortunately ordinal level breaking comes later even though it's actually only the 34th entry after Absolute Infinity in the NO! video. But some of these are clearly borrowed from NO!. The very last Infinity on the video is unfortunately hard to make out because the text can not be seen or easily seen. The closest I can make it out is "baggiragigationalfinity", but don't quote me on that. Also it's debatable whether that would actually surpass Absolute True End. I may investigate the contents of this video in more detail later, in particular, the non-canon numbers above Weakly Compact Cardinal and below Absolute Infinity, as well as the numbers above Absolute Infinity, but what I'd like to draw attention to for now is the curious "numbers" that occur in the video between -0 and 0. There are exactly 14 entries. They are as follows: " ' ^ ' " I think we can safely make the assumption that the entire video is intended to be a video of numbers gradually "increasing". So no matter how illogical, if an entry occurs after another it must be thought of as "greater" at least according to the video, and if an entry occurs before another entry it must be "lesser" according to the video. By that reasoning we can say that all 14 entries, have the property of being greater than -0 and less than 0. However since -0 and 0 are in actual fact equal one may well say that this is, strictly speaking, impossible, since it would imply being both greater and less than 0 at the same time! Another possible interpretation, suggested by some in the comments of the video itself, are that all these numbers are simply equal to 0. That makes sense, if we define numbers whose difference is 0 to be the same number ... However I offer another interpretation. I think the video is implying that these numbers are somehow ... impossible as it might seem ... smaller than 0. That is, neither greater than 0 nor less than 0 nor equal to 0, but actually a smaller size than nothing at all. If we assume this, along with the idea that these numbers are ordered, as well as the idea that all the numbers after -0 are "not negative" (because they don't have a sign), I think we can safely say that branoro is intended as the absolute smallest number in this video ... followed in size by tutanoro, gihenoro, jiwanoro, kodanoro, arrunoro, and hegirondo. The source of these names is completely unknown, but that is sort of part of their appeal. More evidence for the idea that these numbers were intended to be smaller than 0, is that the next one, de-zeroed, is defined as 0^12. This suggests the idea of multipying 0 by itself to get a smaller number. Think of it like this. When we multiply something by say 0.1, we make it 10 times smaller. If we multiply something by 0, we might say we make it "infinitely times smaller". Well if we allow for sizes smaller than 0 then this would suggest 0*0, or 0^2, would be a number infinitely smaller than 0 which is already infinitely small. By this reasoning 0^12 is very very small indeed. The next entries are defined as 0^15, 0^33, and 0^3003 respectively. But the numbers are suppose to be getting larger, right? Well I think the Bloxin Channel misunderstood how exponents work with small numbers. With large numbers, as the exponent increases the number gets larger, but with small numbers, the number actually gets smaller as the exponent increases. Thus by this misunderstanding the creator(s) think that by making the exponent larger they are getting larger and closer to 0. In actuality we would have to approach 0^1 to get 0 (at least following this logic). These definitions, by the way, are part of the justification people used to say that all these numbers equal 0, since in standard mathematics 0 to any positive power is 0. None the less powers of 0, like 0^12, at least suggest the idea of something smaller than 0, which seems in the spirit of the video, so I take that as evidence that that was the intention. So why do I think these are important? Well, here we claim there is No True End (a hard level cap), not even an implied one by some endless structure (a soft level cap). The absolutely infinite, as Cantor called it, is more conceptual than actual in mathematics. That's because any attempt to define it leads to a consistency paradox. So the accepted way to handle this is to say that such collections that would otherwise be absolutely infinite are not sets at all but proper classes and don't have an actual "size". They are in a certain sense "sizeless", that is they can transcend any desired size just by generating the desired number of elements. Absolutely Infinite is therefore a quality of proper classes, not a specific quantity. Despite these issues, here we've decided that not only does an absolutely infinite collection have an actual size we may call Absolute Infinity, but that it is also possible to go beyond that. Well if we are going to get larger without limit, shouldn't we also be able to get smaller without limit? Well technically we can! You see in the surreal number system for every initial ordinal, such as w, or w1, we get an infinitesimal which is its reciprocal! So since the surreals include all ordinals it follows that there are infinitesimals going absolutely infinitely downwards towards 0 just as the ordinals go absolutely infinitely upwards towards Absolute Infinity! In otherwords, 0 and Absolute Infinity are both boundaries of a sort. The weird thing is ... 0 is considered to exist in mathematics while Absolute Infinity is not. Despite some issues like division by 0, this is because the mere definition of 0 does not lead to a consistency paradox, like Absolute Infinity. But remember at the outset, we said that whether dealing with a soft level cap or a hard level cap here we should be able to surpass either. 0 is like a HARD LEVEL CAP for smallness instead of largeness. It is literally ... the smallest number possible in mathematics. It seems almost to defy logic to have something smaller than absolutely nothing (is that a fictional infinity yet?) ... but wait ... isn't this place all about going beyond the impossible? About no selling the illogical? So why not consider the possibility of numbers smaller than 0 just as we consider numbers larger than Absolute Infinity. And once again I do mean "smaller than 0" (but not less or greater), and not merely infinitesimally small ... because these still are always larger than 0, and greater than or less than 0, even though they are "infinitely small". For this reason, I am beginning to think of 0 as more than infinitely small ... but rather absolutely infinitely small. And if we can go beyond the absolutely infinitely large than why not go before the beginning, smaller than the absolutely infinitely small ... I may investigate this in more detail in a later blogpost, but for now I simply wanted to bring this to the wiki's attention. Lastly there is some precedence for us adding these to our "canon". We are partially based on the pattern established by the All Dimensions Wiki. They have a hierarchy infinitely going up of different classes, just like us, but they also have a "pre-hierarchy" for going smaller and smaller, just as the ordinary hierarchy is fro going larger and larger. Maybe we should have the same.
{"url":"https://fictional-googology.fandom.com/wiki/User_blog:%D0%9D%D0%B5%D1%82_%D0%BA%D0%BE%D0%BD%D1%86%D0%B0/Numbers_%22between%22_-0_and_0_from_The_Bloxin_Channel","timestamp":"2024-11-11T19:54:07Z","content_type":"text/html","content_length":"164742","record_id":"<urn:uuid:e1b1fcc5-a49e-487c-a99a-4d40db5233e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00648.warc.gz"}
Mixed Multiplication And Division Worksheets Word Problems Math, particularly multiplication, creates the foundation of numerous scholastic self-controls and real-world applications. Yet, for many students, mastering multiplication can pose a challenge. To resolve this obstacle, educators and parents have actually embraced a powerful tool: Mixed Multiplication And Division Worksheets Word Problems. Intro to Mixed Multiplication And Division Worksheets Word Problems Mixed Multiplication And Division Worksheets Word Problems Mixed Multiplication And Division Worksheets Word Problems - Mixed Multiplication And Division Worksheets Word Problems, Mixed Multiplication And Division Word Problems Worksheets Grade 3, Mixed Multiplication And Division Word Problems Worksheets Pdf, Mixed Addition Subtraction Multiplication And Division Word Problems Worksheets, Multiplying Mixed Numbers Word Problems Worksheet, Multiplying Mixed Numbers Word Problems, Add Subtract Multiply Divide Word Problems Worksheet Mixed Operation Word Problems 2 The worksheets on this page combine the skills necessary to solve all four types of problems covered previously addition word problems subtraction word problems multiplication word problems and division word problems and they require students to determine which operation is appropriate for solving the each Think These math word problems may require multiplication or division to solve The student will be challenged to read the problem carefully and think about the situation in order to know which operation to use Worksheet 1 Worksheet 2 Worksheet 3 Worksheet 4 Worksheet 5 Worksheet 6 Value of Multiplication Method Comprehending multiplication is crucial, laying a strong foundation for sophisticated mathematical principles. Mixed Multiplication And Division Worksheets Word Problems offer structured and targeted practice, fostering a much deeper understanding of this basic arithmetic operation. Development of Mixed Multiplication And Division Worksheets Word Problems Mixed Multiplication And Division Worksheets 3rd Grade Free Printable Mixed Multiplication And Division Worksheets 3rd Grade Free Printable Include Word Problems Worksheet Answer Page Now you are ready to create your Word Problems Worksheet by pressing the Create Button If You Experience Display Problems with Your Math Worksheet This word problems worksheet will produce addition multiplication subtraction and division problems with 1 or 2 digit numbers Worksheet 2 Worksheet 3 Worksheet 4 The worksheets in this section combine extra facts with multiplication word problems and division word problems on the same worksheet so students not only need to solve the problem but they need to figure out which of the two operations is required first Students will need to figure out which facts are From typical pen-and-paper exercises to digitized interactive styles, Mixed Multiplication And Division Worksheets Word Problems have advanced, accommodating diverse understanding styles and Sorts Of Mixed Multiplication And Division Worksheets Word Problems Basic Multiplication Sheets Basic workouts concentrating on multiplication tables, helping learners build a strong arithmetic base. Word Trouble Worksheets Real-life scenarios integrated right into problems, enhancing critical reasoning and application abilities. Timed Multiplication Drills Examinations created to improve speed and precision, helping in fast mental math. Advantages of Using Mixed Multiplication And Division Worksheets Word Problems Mixed Multiplication And Division Worksheets 3rd Grade Free Printable Mixed Multiplication And Division Worksheets 3rd Grade Free Printable Welcome to the mixed operations worksheets page at Math Drills where getting mixed up is part of the fun This page includes Mixed operations math worksheets with addition subtraction multiplication and division and worksheets for order of operations We ve started off this page by mixing up all four operations addition subtraction multiplication and division because that might be Mixed multiplication and division word problems Grade 3 Math Word Problems Worksheet 1 Your class is having a pizza party You buy 5 pizzas Each pizza has 4 slices How many slices is that altogether 2 Beth has 4 packs of crayons Each pack has 10 crayons in it She also has 6 extra crayons How many crayons does Beth have altogether 3 Enhanced Mathematical Skills Consistent practice develops multiplication effectiveness, improving overall mathematics abilities. Enhanced Problem-Solving Abilities Word issues in worksheets develop analytical thinking and method application. Self-Paced Understanding Advantages Worksheets accommodate specific knowing speeds, fostering a comfortable and versatile knowing environment. How to Produce Engaging Mixed Multiplication And Division Worksheets Word Problems Including Visuals and Colors Vibrant visuals and colors record attention, making worksheets visually appealing and engaging. Consisting Of Real-Life Circumstances Associating multiplication to everyday circumstances includes relevance and functionality to workouts. Tailoring Worksheets to Different Skill Levels Personalizing worksheets based upon varying efficiency levels makes certain comprehensive knowing. Interactive and Online Multiplication Resources Digital Multiplication Devices and Gamings Technology-based sources supply interactive understanding experiences, making multiplication interesting and delightful. Interactive Sites and Applications On-line platforms give varied and available multiplication practice, supplementing standard worksheets. Personalizing Worksheets for Various Understanding Styles Aesthetic Learners Visual aids and layouts help comprehension for students inclined toward aesthetic knowing. Auditory Learners Verbal multiplication issues or mnemonics accommodate learners that understand concepts through acoustic methods. Kinesthetic Students Hands-on activities and manipulatives support kinesthetic learners in comprehending multiplication. Tips for Effective Application in Knowing Consistency in Practice Normal technique enhances multiplication abilities, advertising retention and fluency. Balancing Repeating and Variety A mix of repeated exercises and diverse issue layouts preserves rate of interest and understanding. Giving Constructive Responses Feedback aids in determining areas of improvement, motivating continued development. Difficulties in Multiplication Practice and Solutions Inspiration and Engagement Hurdles Dull drills can result in disinterest; innovative methods can reignite inspiration. Overcoming Concern of Math Unfavorable assumptions around math can impede development; developing a positive knowing atmosphere is important. Impact of Mixed Multiplication And Division Worksheets Word Problems on Academic Performance Studies and Research Study Findings Research study indicates a favorable relationship in between constant worksheet usage and improved math performance. Mixed Multiplication And Division Worksheets Word Problems become functional tools, promoting mathematical proficiency in students while fitting varied learning styles. From basic drills to interactive on-line resources, these worksheets not just boost multiplication skills yet also advertise vital thinking and problem-solving abilities. Multiplication and Division Pre Algebra Worksheets Word Problems Addition Subtraction Multiplication Division Worksheet For 4th Grade Lesson Check more of Mixed Multiplication And Division Worksheets Word Problems below Maths Word Problems For Grade 4 Addition And Subtraction Multiplication Division Thekidsworksheet PPT Two Step Division Word Problems PowerPoint Presentation Free Download ID 6185554 Mixed Worksheet Multiplication And Division Multiplication Word Problem Worksheets 3rd Grade Grade 5 Maths Resources word problems Printable worksheets Lets Grade 5 word problems In Printable Primary Math Worksheet For Math Grades 1 To 6 Based On The Singapore Math Curriculum Mixed Multiplication Division Word Problems K5 Learning Think These math word problems may require multiplication or division to solve The student will be challenged to read the problem carefully and think about the situation in order to know which operation to use Worksheet 1 Worksheet 2 Worksheet 3 Worksheet 4 Worksheet 5 Worksheet 6 Mixed multiplication and division word problems for grade 4 K5 Learning These grade 4 math worksheets have mixed multiplication and division word problems All numbers are whole numbers with 1 to 4 digits Division questions may have remainders which need to be interpreted e g how many left over In the last question of each worksheet students are asked to write an equation with a variable for the unknown Think These math word problems may require multiplication or division to solve The student will be challenged to read the problem carefully and think about the situation in order to know which operation to use Worksheet 1 Worksheet 2 Worksheet 3 Worksheet 4 Worksheet 5 Worksheet 6 These grade 4 math worksheets have mixed multiplication and division word problems All numbers are whole numbers with 1 to 4 digits Division questions may have remainders which need to be interpreted e g how many left over In the last question of each worksheet students are asked to write an equation with a variable for the unknown Multiplication Word Problem Worksheets 3rd Grade PPT Two Step Division Word Problems PowerPoint Presentation Free Download ID 6185554 Grade 5 Maths Resources word problems Printable worksheets Lets Grade 5 word problems In Printable Primary Math Worksheet For Math Grades 1 To 6 Based On The Singapore Math Curriculum Worksheets On Multiplication And Division For Grade 4 PrintableMultiplication Fun With Elapsed Time And A Freebie The Applicious Teacher Grade 4 Time word Problem Fun With Elapsed Time And A Freebie The Applicious Teacher Grade 4 Time word Problem Mixed Multiplication And Division Worksheets Grade 3 Times Tables Worksheets FAQs (Frequently Asked Questions). Are Mixed Multiplication And Division Worksheets Word Problems appropriate for every age teams? Yes, worksheets can be customized to various age and ability degrees, making them versatile for different students. How often should students practice utilizing Mixed Multiplication And Division Worksheets Word Problems? Regular practice is key. Routine sessions, ideally a few times a week, can generate substantial enhancement. Can worksheets alone boost mathematics skills? Worksheets are an useful device however should be supplemented with different knowing techniques for extensive skill advancement. Are there on the internet platforms using cost-free Mixed Multiplication And Division Worksheets Word Problems? Yes, many educational internet sites supply open door to a variety of Mixed Multiplication And Division Worksheets Word Problems. How can parents sustain their youngsters's multiplication practice in the house? Encouraging constant method, providing aid, and producing a positive understanding atmosphere are useful steps.
{"url":"https://crown-darts.com/en/mixed-multiplication-and-division-worksheets-word-problems.html","timestamp":"2024-11-13T20:50:21Z","content_type":"text/html","content_length":"33130","record_id":"<urn:uuid:bcddd2d9-a29f-4289-8125-45a2f94b0202>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00665.warc.gz"}
What instrument is used to measure engine revolution per minute? - Answers How to Convert revolution per minute to metres per minute? You cannot. Revolutions per minute are a measure of angular velocity whereas metres per minute are a measure of linear velocity. There is no simple way to convert from one to the other. For example, at any given rpm, a point on the rim of a wheel is moving much faster than a point near the hub. You need the distance of a point from the axis of revolution (in metres) to convert angular speed to linear speed. If the distance from the centre is r metres then the point moves through 2*pi*r metres every revolution. ie 1 rpm = 2*pi*r linear metres per minute.
{"url":"https://math.answers.com/math-and-arithmetic/What_instrument_is_used_to_measure_engine_revolution_per_minute","timestamp":"2024-11-14T19:00:14Z","content_type":"text/html","content_length":"166484","record_id":"<urn:uuid:f49d5261-d5ed-42ae-846a-89a0f80a17a3>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00463.warc.gz"}
Week 3: Mathematical Foundations - BASIS Independent Schools Week 3: Mathematical Foundations March 18, 2024 Hello everyone! Welcome back to my blog! Today, I will be discussing the mathematical concepts behind my project. In Game Theory, we often use payoff matrices to define the parameters of two-player games. In a payoff matrix, each cell represents a potential outcome; the first number representing the payoff of the first player, the second representing the payoff of the second player. In the prisoner’s dilemma, the payoff matrix will look like this: where T > R > U > S. Traditionally, the Nash Equilibrium, or optimal strategy for both players, would be for both players to defect. We would call defecting a dominant strategy, since no matter what the other player chooses to do, it gives more payoff for you to defect, rather than to cooperate. Yet, you may also notice that both players cooperating would lead to a better outcome for both players. So, we would call the outcome of both players defecting to be Pareto dominated (in other words, worse for both players), and the outcome of both players cooperating to be Pareto efficient (none of the other outcomes Pareto dominate it). The basis of my project is to try to reach that improve the Nash Equilibrium and find ways to reach the Pareto efficient outcomes. In my previous blog post, I have explained ways in which people have managed to reach this Pareto efficient outcome; either through doing repeated games, through designing programs to perform your strategies, or through using concepts like quantum entanglement. In my project, I will be offering a different solution, which involves allowing the players to modify the payoff matrices to their benefit. Imagine, for a moment, that the first player could offer a promise to the second player, that “if my payoff is at least T, I will only keep A of it and give you the rest.” Then, our new payoff matrix would look like this: Assuming that S+T-A > U, a dominant strategy no longer exists for either player. Thus, the new Nash Equilibrium must be a mixed strategy, where each player chooses to cooperate with a certain probability, and to defect the rest of the time. Given S+T-A > U, it is clear that S+T must be greater than 2U in order for this new situation to be productive. The expected payoff for the first player turns out to be \frac{U+R-A-S}{UR-AS}. Taking its derivative, we find that the first player’s expected payoff increases as A increases. For this reason, we can split this into two cases: If S+T > R+U, then, if A > R, it becomes a dominant strategy for player 1 to defect, but for player 2 to cooperate. The expected payoff of player 1 would be A. If S+T < R+U, then, A approaches S+T-U. In this case, the new payoff is \frac{U+R-S-T+U-S}{UR-(S+T-U)S}. After some algebraic manipulation, it can be shown that as long as S+T > 2U, the expected payoff is greater than U, resulting in a better situation for both players. The math shows that the first person can in effect “sacrifice” some of their utility to bring the outcome of the game closer to being Pareto efficient, resulting in increases in expected payoff for both players. That’s it for this week’s post. Stay tuned for next week, where I will be diving slightly deeper into the underlying mathematics, as well as talking about the practicality and implications of these ideas. See you then! View more of Larry X.'s posts. Reader Interactions You must be logged in to post a comment.
{"url":"https://basisindependent.com/schools/ca/silicon-valley/academics/the-senior-year/senior-projects/larry-x/week-3-mathematical-foundations/","timestamp":"2024-11-08T13:35:48Z","content_type":"text/html","content_length":"84041","record_id":"<urn:uuid:840f32ba-912b-4c1c-9b8b-300ee2ecc098>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00182.warc.gz"}
Recognizing subtraction problems to 10 Students learn to use an image to help understand a subtraction problem. Discuss with students that it is useful if you can use images to represent mathematical problems. You can use the images to help understand the problem and use them to help learn how to subtract. Show a selection of animals on the interactive whiteboard. Have the students count how many of each kind of animal there are. Ask students if they know any clever ways to count the animals. Say that you can cross off the animals as you count them to make sure you don't miss any or count them twice. When you have an answer, erase the grey square to check the answer. Show ice cubes on the interactive whiteboard. Some ice cubes are melted, and others are still frozen. First count how many ice cubes there are in total and then count how many are melted. There are 10 ice cubes in total, and 1 has melted. If you take the one melted ice cube away from the others, you have 9 left. The subtraction problem this represents is then 10 - 1. Discuss all the variations of ice cubes/melted cubes with your students and be sure to show what the matching subtraction problem is.Next show students the matches. Count the total number of matches, there are 6. Tell students that 2 matches have been struck and are taken away. Emphasize that the - (minus sign) means to take away because the second number (subtrahend) is taken from the first, or total number (minuend). Next show the notepads, and note that some of the notepads are out of paper. Ask students to determine which subtraction problem is shown by the image. Erase the grey boxes to show the answer. Next students must drag apples and apple cores to form an image that matches the given subtraction problem.Using the matches, discuss with students that when they are able to form the subtraction problem using the image, they can also solve for the difference by taking the second number (subtrahend) away from the first number (minuend). The difference is the number that is left. If you take away the burnt matches from the total number of matches, you determine the difference. Check to see that students can find the difference in an exercise with paint tubes.Check that students are able to use an image to determine and solve subtraction problems by asking the following questions:- How do you know which numbers are part of a subtraction problem?- What do you do if you want to determine how many objects are left if you take some away?- What is the first number of a subtraction problem? (what does it represent)- How do you determine which number is second in a subtraction problem? Students first practice recognizing subtraction problems from given answers. Then they must determine which subtraction problem matches a given image, and finally students are asked to determine the subtraction problem and difference of a given image. Discuss with students that it is important to be able to use images to understand subtraction problems to help understand how subtraction works. Check that students are able to form subtraction problems based on a given image. Ask them to write their answers on a sheet of paper and hold up their answers so you can check their work. Students who have difficulty recognizing subtraction from given images can make use of manipulatives, like MAB blocks. Set out a group of blocks and have the student count out the blocks. Then remove a number of the blocks and ask the student to count that amount, and finally to count how many blocks are left. Gynzy is an online teaching platform for interactive whiteboards and displays in schools. With a focus on elementary education, Gynzy’s Whiteboard, digital tools, and activities make it easy for teachers to save time building lessons, increase student engagement, and make classroom management more efficient.
{"url":"https://www.gynzy.com/en-us/library/items/recognizing-subtraction-problems-to-10","timestamp":"2024-11-11T08:17:58Z","content_type":"text/html","content_length":"554464","record_id":"<urn:uuid:5fd7df76-87df-4268-892c-8e06e82e6652>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00039.warc.gz"}
How Many Nickels in 2 Dollars: Quick Coin Guide There are 40 nickels in 2 dollars. It’s a simple calculation of 2 dollars divided by 0.05. If you’re wondering about the number of nickels in 2 dollars, you’ve come to the right place. Whether it’s for a math problem or simply satisfying your curiosity, understanding the quantity of nickels in a specific dollar amount can be useful. This article will provide you with a straightforward answer and explain the process behind the calculation. Understanding how many nickels are in 2 dollars can be an interesting and practical exercise. Let’s delve into the details to find the exact number of nickels in 2 dollars and explore the significance of this calculation. The Value Of A Nickel In $2, there are 40 nickels since each nickel is worth 5 cents. Therefore, 40 nickels make up $2. Nickels may seem small, but their value is significant. Let’s delve into the composition and historical worth of a nickel. Nickel Composition Nickels are composed of 75% copper and 25% nickel. This composition gives them their distinct silvery appearance. Nickel’s Historical Worth In the past, the value of a nickel was substantial. It could buy items like candies, drinks, and even small toys. Breaking Down Two Dollars When it comes to currency, it’s important to understand the basics of currency conversion. Today, we’re going to focus on breaking down two dollars and figuring out how many nickels are in two Currency Conversion Basics Before we dive into the specifics of how many nickels are in two dollars, let’s review some currency conversion basics. The value of currency can be determined by comparing it to another currency or a commodity. The exchange rate is the value of one currency compared to another. Dollars To Nickels To figure out how many nickels in 2 dollars, we first need to know the value of a nickel. A nickel is worth five cents, or 0.05 dollars. To convert two dollars to nickels, we can use the following So, there are 40 nickels in two dollars. This calculation can be helpful when you’re trying to figure out how many coins you need to make a certain amount of money. In conclusion, now you know how many nickels are in two dollars. By understanding the basics of currency conversion and knowing the value of each coin, you can easily calculate different currency Quick Math: Counting Nickels In Two Dollars When it comes to counting coins, nickels are one of the most commonly used denominations. If you’re wondering how many nickels are in two dollars, it’s a relatively simple calculation that anyone can do. In this post, we’ll explore how to count nickels in two dollars using simple math and visualization techniques. Simple Calculation The value of a single nickel is five cents, so to calculate how many nickels are in two dollars, we need to divide 200 cents by 5 cents. This gives us a total of 40 nickels in two dollars. It’s as simple as that! Visualizing The Coin Stack If you’re a visual learner, you might find it helpful to visualize the coin stack. To do this, you can imagine stacking up 40 nickels on top of each other to create a tower. This tower would be about 2 inches tall, which gives you an idea of just how many nickels are in two dollars. Number of Nickels in Two Dollars Nickels Dollars Cents 1 $0.05 5¢ 10 $0.50 50¢ 20 $1.00 100¢ 40 $2.00 200¢ As you can see from the table above, 40 nickels equals two dollars, which is a handy fact to know if you ever need to count a large number of coins quickly. • Remember, each nickel is worth 5 cents. • To count how many nickels are in two dollars, divide 200 by 5. • You should get 40 as your answer. • Visualize the coin stack to get a better idea of how many nickels this is. Coin Collecting Fun Facts Coin collecting is an exciting hobby that allows enthusiasts to delve into the rich history and fascinating stories behind various currencies. Whether you’re a seasoned collector or just starting out, learning some fun facts about coins can add an extra layer of enjoyment to your hobby. In this section, we’ll explore some intriguing trivia about nickels and uncover rare nickels that are worth collecting. Let’s dive in! Nickel Trivia Nickels, also known as five-cent coins, have been a staple of American currency since their introduction in 1866. Here are some interesting facts about nickels: • The United States Mint has produced several different designs for the nickel over the years, featuring iconic figures such as President Thomas Jefferson and the American bison. • The term “nickel” actually refers to the composition of the coin, which is made primarily of copper with a small amount of nickel. • Did you know that during World War II, the composition of the nickel was changed to save nickel for the war effort? These nickels, known as “war nickels,” are distinguishable by a large mint mark above Monticello on the reverse side. • Nickels are often used as a measure of thickness due to their consistent diameter. For example, a stack of nickels one inch high contains 20 coins. Rare Nickels Worth Collecting While most nickels in circulation are worth their face value, there are a few rare ones that can fetch a hefty sum among collectors. Keep an eye out for these valuable nickels: Year Description Approximate Value 1913 The 1913 Liberty Head nickel is one of the rarest and most sought-after coins. Only five are known to exist, making it highly valuable. $3 million+ 1937-D The 1937-D “Three-Legged Buffalo” nickel features a buffalo with only three legs due to a minting error. This error makes it a popular collectible. $500+ 1950-D The 1950-D nickel is scarce in high grades, particularly those with full steps on Monticello. Finding one in excellent condition can be quite rewarding. $100+ These are just a few examples of rare nickels, and there are many more out there waiting to be discovered. So keep your eyes open and you might stumble upon a valuable addition to your coin Using Nickels In Everyday Transactions Nickels In Retail In retail, nickels are commonly used to make change for larger denominations. For example, if a customer purchases an item for $4.95 and pays with a $10 bill, the cashier would typically provide five nickels as part of the change. Nickels And Vending Machines Vending machines often accept nickels as payment for various items, such as snacks, beverages, and other goods. When inserting coins into a vending machine, it’s common to use nickels in combination with other coins to reach the total amount required for the desired product. Credit: www.thesprucecrafts.com The Impact Of Inflation On Nickel Value The impact of inflation can affect the value of nickel, influencing how many nickels are needed to make up $2. As inflation rises, the purchasing power of a nickel decreases, requiring more nickels to reach the same value over time. Inflation And Coinage Inflation plays a significant role in the value of coins, including nickels. As the cost of goods and services increases over time, the purchasing power of money decreases. This means that the same amount of money can buy fewer goods and services in the future compared to the present. The impact of inflation on the value of nickels can be seen in the changing purchasing power of these coins. Future Of The Nickel With the continuous rise of inflation, the future of the nickel as a valuable coin is uncertain. As inflation erodes the value of currency, the purchasing power of a nickel diminishes. In the past, nickels were made primarily of nickel, which is a more valuable metal. However, due to cost-saving measures, the composition of nickels has changed over time. They are now made of a combination of copper and nickel, with a thin layer of pure nickel on the surface. This change in composition reflects the need to reduce production costs while still maintaining the coin’s functionality. In the future, as inflation continues to impact the value of currency, there may be a need to reassess the composition of the nickel once again. It is possible that the value of the metal used in the coin may exceed its face value, which could lead to changes in its composition or even the discontinuation of the nickel as a circulating coin. In conclusion, the impact of inflation on the value of the nickel is undeniable. As inflation erodes the purchasing power of money, the value of coins like the nickel is affected. The future of the nickel remains uncertain, and it will be interesting to see how changes in inflation and the overall economy shape the fate of this coin. Creative Ways To Save With Nickels Looking for fun and simple ways to save money? Nickels can be a valuable asset in your savings journey! Discover creative methods to make the most out of your nickels and watch your savings grow. Starting A Nickel Savings Jar Encourage saving by starting a nickel savings jar. Simply drop in a nickel each day! Teaching Kids About Money With Nickels Introduce kids to the concept of money management through nickels. Use fun activities to make learning engaging. Credit: www.quora.com Nickels Vs. Other Coins Nickels, the smallest coin denomination, have a value of 5 cents each. In 2 dollars, there are 40 nickels. This makes them a convenient choice for small transactions and counting change. Comparing Coin Sizes And Values Nickels are smaller and worth more than pennies. Nickels are larger in size compared to dimes. Nickels have a greater value than quarters. Why Nickels Stand Out Nickels are unique due to their value and size. Nickels are distinguishable from other coins. Nickels are essential for everyday transactions. Frequently Asked Questions How Many Nickels Are In 2 Dollars? To find the number of nickels in 2 dollars, divide 2 by 0. 05 (the value of a nickel). The result is 40, so there are 40 nickels in 2 dollars. What Is The Value Of A Single Nickel? A nickel has a value of 5 cents. It is equivalent to 0. 05 dollars. Can You Exchange Nickels For Dollars At A Bank? Yes, you can exchange nickels for dollars at a bank or a currency exchange service. They will convert the nickel coins to their equivalent value in dollars. Are There Any Rare Or Valuable Nickels To Look Out For? Yes, there are rare and valuable nickels, such as the 1913 Liberty Head nickel and the 1937-D 3-legged Buffalo nickel. It’s advisable to research or consult a coin expert for more information. To sum up, calculating the number of nickels in 2 dollars is a simple yet important task. By understanding that a nickel is worth 5 cents, we can determine that there are 40 nickels in 2 dollars. This knowledge can come in handy in various situations, whether you’re teaching children about money or managing your own finances. Remember, a little math can go a long way!
{"url":"https://4pxtracking.com/how-many-nickels-in-2-dollars/","timestamp":"2024-11-08T12:57:08Z","content_type":"text/html","content_length":"122905","record_id":"<urn:uuid:1a1d9a93-6ab1-4667-8c0e-76a9755b857e>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00179.warc.gz"}
Machine Learning Interview Questions - CSVeda Machine Learning Interview Questions Machine Learning Engineers play a very important role in today’s Computing industry. As many graduates apply for highly coveted Machine Learning professional roles, a quick reference for the same will be handy to prepare for interviews. Here are some frequently asked samples of Machine Learning Interview Questions: What is Precision and recall? precision=TP/TP+FP or Total Positive/Total negatives Read here to know the details of these two terms What do you mean by information gain in Machine Learning? Information gain is defined as the expected reduction in entropy due to partitioning as per the attribute of model. Ideally in a tree, we keep partitioning until an observation reaches to the purest form. It can be defined as: Information gain= Entropy of parent-sum(weight% * entropy of child) where weight%=Number of observations of a child/sum(observations in all child nodes) Compare terms Model Accuracy and Model performance Among Machine Learning Interview Questions, this one is the most commonly asked question. Model Accuracy: Model accuracy deals with the output of the model. It is defined as follows: (classifications a model predicts correctly)/(total predictions). Model accuracy is can be used to check the accuracy of the model. Model performance: It refers is associated with the speed of the model. Model accuracy can be used to assess the model’s performance. What do you understand by Entropy in Decision Tree? Entropy gives the measure of impurity of data. For fully homogeneous data, the entropy is 0 and it is 1 when the data sample is equally divided. In decision tree, the data with most heterogeneity or maximum entropy is chosen. What does Gini index mean in Decision tree algorithms? Gini index is the measure of misclassification. It is usually applied to multi class classification problems. It is relatively faster to calculate that other metrics. Its value is ideally lower than To decide the place of each node, splitting computing methods such as Gini index is used hierarchical structures. Gini=1-∑i=1 to n (pi)^2 where pi is defined as the probability of an object being classified to a particular class. Also, take the least Gini index attribute as the root node. How to find outliers? Define collinearity and multicollinearity Collinearity is defined as a linear association between twoexplanatory variables. Multicollinearity is related to multiple regression which defines the linearly related associations between two or more variables. What is the relation between NumPy and SciPy In short, NumPy library is part of SciPy ecosystem. NumPy library can be used for various array operations like indexing, reshaping, ordering, sorting etc whereas Scipy will keep be used for all the numerical code. NumPy also have many algebraic functions, transforms etc. SciPy contains more scientific modules and functions along with various advanced algebra functions. It also depends on the type of application to choose from the two. Also, if the use is for high-level scientific application, it is handy to keep both NumPy and SciPy How to tackle a model with low bias and high variance? In the case of low bias and high variance, over-fitting will be caused so methods like bagging and ensemble learning can be used to tackle this kind of model. Compare L1 and L2 regularizations. There are mainly two types of regularization techniques, namely Ridge Regression and Lasso Regression. Both techniques help to reduce the dimensionality of the data to get rid of over-fitting. The major difference is in the penalty term added to the loss function of both the techniques. L1 is Lasso (Least Absolute Shrinkage Selector Operator): • It works well when data with high dimensionality or sparse data is available at the time of classification. • It adds absolute value of magnitude as a penalty term in the cost function. L2 or Ridge: • It adds squared magnitude as a penalty term in cost function. • It is mostly used when we need non-sparse outputs or even when we need to predict a continuous output How to tackle high variance? Some ways to tackle high variance includes: • By performing regularization. • By reducing number of features. • Increasing size of training data size. • By trying to fit the model Mean or Median? Which is bigger in left skew? Mean is the largest, while the mode is the smallest. Mean also reflects the skewing to the most degree. If the median is more, the data distribution is skewed to the left. Also, the mean is always less than the mode. How to do pattern analysis in Machine Learning? Pattern recognition is the process of recognizing patterns in data by using machine learning algorithms The primary ideology used in pattern analysis is the involvement of classification of events based on previously available historical data, statistical information etc. Techniques and algorithms such as neural networks, Naive Bayes, Decision Tree, Support Vector Machines, clustering etc are frequently used in pattern analysis of data. What is the ROC curve and how is it used ? Read Here What are the different performance metrics that can be used for classification and regression Some important performance metrics are mentioned below: • F1-Score -F1 Score is the harmonic mean of precision and recall values for a classification problem. It is the measure of a model’s accuracy on a given dataset. F1 Score=2*precision*recall/precision*recall • MSE (Mean Square Error): RME is the Mean Squared Error or MSW in statistics describes the closeness of aregression line is to a set ofdata points. Squaring is used to remove any negative signs. It’s called the mean squared error as you’re finding the average of a set of errors. The lower the MSE, the better the prediction. Mathematically it can be defined as: MSE = (1/n) * Σ(actual – predicted)2 • R-squared (Root mean squared error) – R-squared is a very crucial statistical measure to get a measure that how close the data are to the fitted regression line. It is also known as the coefficient of determination as well: R-squared = Explained variation / Total variation • MAE (Mean Absolute Error)- MAE or Mean Absolute Error is defined as the average magnitude of all the errors in a set of predictions, without their corresponding directions. It is a loss function used in regression. It can be used in the cases where outliers need to be reduced. So, these were the most possible 15 Machine Learning Interview Questions that you face in your first interaction for a ML job. If you have some addition to these, we would be glad to extend our list. Be First to Comment
{"url":"https://csveda.com/machine-learning-interview-questions/","timestamp":"2024-11-09T00:20:04Z","content_type":"text/html","content_length":"61291","record_id":"<urn:uuid:7d87f796-f344-4aa8-be11-1c74e97eeb18>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00339.warc.gz"}
Transversal: Lines, Angles, and Constructions with Solved Examples What is the importance of knowing the concept of transversals? The concept of transversals help us in determining whether or not the given lines in the Euclidean plane are parallel. How do you know if an angle is transversal? When a transversal line passes through two parallel lines in the same plane, the angles so formed between these lines are termed as transversal angles. Are all transversal angles equal? No, all transversal angles are not equal. Corresponding angles, alternate interior angles and alternate exterior angles among the transversal angles are equal. How many angles do transversals make? A transversal makes a total of 8 angles when it cuts two parallel lines. Do Transversal Lines have to be Parallel? It is not necessary for transversals line to be parallel.
{"url":"https://testbook.com/maths/transversal-angles","timestamp":"2024-11-15T03:43:26Z","content_type":"text/html","content_length":"865241","record_id":"<urn:uuid:5ab61cf5-f5e3-46a3-a35f-70629147181e>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00695.warc.gz"}
Maple Questions and Posts Hi! I have a Maple worksheet comprising of TEXT + MATH, mainly for loops and if statements, using matrices. Can someone please let me know how I can turn this sheet into Fortran code (very detailed instructions necessary!) Thanks for your help! :) I need to generate pimitive polynomials of degree 17 over GF(2^32) for use in an LFSR working over 32-bit words. Does anyone know how it can be done? I'm trying to solve a partial differential equation with two boundary conditions below. The general solution contains arbitrary functions of the non-differentiated variable. These functions are solved for and assigned but do not appear in the final solution return. Can anybody help me with this? > restart; > l:=lambda; I'm trying to solve a partial differential equation with two boundary conditions below. The general solution contains arbitrary functions of the non-differentiated variable. These functions are solved for and assigned but do not appear in the final solution return. Can anybody help me with this? > restart; > l:=lambda; Hi. I'm a very new Maple user, but I don't think that this qualifies as a Newbie question. Has anybody had problems when trying to write directly to a PS file from within the Maple GUI on OS 10.4.3 for Mac? Consider the following: currentdir(kernelopts(homedir)): plotsetup(ps,plotoutput="testplot.ps"): plot(sin(x),x=0..2*Pi); Running this code from the command-line works as expected, but running this from the GUI interface usually returns: Error, Error in device driver: plot terminated The strange thing is that about 10% of the time, the above code actually will work from the GUI. I cannot produce failure or success consistently in the GUI. I have tried this on both a G4 and a G5. Am I the only user who has had this problem? It is often difficult to use the Symbolic toolbox of Matlab (which is linked to the engine). It can be difficult to read the input and output from the toolbox. To solve this problem, I have developed a graphical interface to the Symbolic toolbox as I describe below. I try to deduce distribution fonction from a parametrized set of points with Maple. At this stage I have a dataset define through a relationship like y=f(x), and I want to obtain the distribution of y points, given that x in [-A,A]. The theoretic formula is : F[Y](z) = P(y<z). i.e. F[Y](z) = Int(delta[f(x)< z],x=-A..A); and I use the piecewise Maple function to implement it ( Int(piecewise(f(x)< z,1,0),x=-A..A) ), but for Maple : Int(piecewise(f(x)< z,1,0),x=-A..A) = piecewise(Int(f(x)< z, x=-A..A),1,0)) which is totally different ! Using the following integrand I get -0 for Int( %, xi= -99.496042185589..infinity): evalf(%,14); which certainly is false. But if I cut off at a reasonable upper bound (say exp(...) <= 1E-16) I get what I expect (up to rounding errors). I consider that as a bug and wonder whether it is in the NAG library or through Maple calling it - any explanation? Edited to add: `Mapl I was wondering if some one had the knowledge as to why the plot3d command will not work for the dirichlet elliptical wave equation, at fixed time t, yet maple is able to evaluate this function at specific values for theta and r. (the coord system for the 3d plot should work with "ellcylindrical" but does not) the modes of the function look like (for Cosine-Elliptic) j=0, 1, 2...n where q is a 'zero' value of MathieuCE(j,q,I*1) 0<theta<2*Pi, 0<r<1, t>=0 the error stated is "Plotting error, MESH must be a list of lists of vertices, or an hfarray" I would like to split a polynomial into even and odd terms. Has this capability been provided in a package? PolynomialTools seems the obvious choice, but doesn't do this. Here's one approach SplitPolynomialEvenOdd := proc(poly::polynom(anything,v), v) description "return the even and odd parts of a polynomial in v"; local p; p := collect(poly,v); if p::`+` then return selectremove(t -> degree(t,v)::even, p); elif degree(p,v)::even then return (p,0); return (0,p); end if; end proc: SplitPolynomialEvenOdd(x^2 + 3*x + 1, x); i want to know that is ther any procedure or builtin functin which return the exponent of a variable.For example if i have a variabl x^2 result will be 2.and also i want to know the procedure by which if i have two lists of variables we can compare there type of variables and cobine the coefficients of same variables.For example >s:=[v1,2*x^2*y,v2,1*x*y^2,v0,3*x^3]; >p:=[a*x^3,b*x*y^2,c*x^ 2*y]; i want to get result v1,2+c ,v2,1+b, v0,3+a how can i get this result any body help me I have a system of matrix equations and would like to solve it for a certain vector, but without stating anything else but the names of the matrices and vectors involved. This is to be used for further studies in a numerical matlab model where the matrices and vectors are specified. Example: Let A, B and C be regular matrices where: A*B=C Solving for B we get: B=inv(A)*C How can I make maple do this for me without specifying the elements in A, B, and C. I Need to Generate Primitive and Irreducible Polynomials in Galois Extension Field GF(2^32). How do I do it? Any Pointers to any code / theory is welcome. Thanks I want to 'revert' the product rule from differentiation: I want to collect terms like f(x)*diff(g(x),x)+g(x)*diff(f(x),x) into diff(f(x)*g(x),x) I'm having problems using Compiler:-Compile() on the following procedure: findallroots:=proc(eqs,x,rng::range(numeric)) local roots,pts,i; roots:={fsolve}(eqs,x,rng,'avoid'={x=lhs(rng),x=rhs(rng)}); if roots={} or not roots::set(numeric) then NULL else pts:=sort([op(rng),op(roots)]); op(roots),seq(procname(eqs,x,pts[i-1]..pts[i]),i=2..nops(pts)) end if; Error, (in IssueError) only named functions are supported If you could help me with what this error is referring to and what I could do to over come it that would be most appreciated. Thank you for your help First 2145 2146 2147 2148 2149 2150 2151 Last Page 2147 of 2158
{"url":"https://mapleprimes.com/products/Maple?page=2147","timestamp":"2024-11-08T01:26:29Z","content_type":"text/html","content_length":"138254","record_id":"<urn:uuid:c6ac0ebf-2054-48bb-a388-2f9be8256e81>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00878.warc.gz"}
Sentiment Analysis with Naive Bayes Classifier Built from Scratch Open-Source Internship opportunity by OpenGenus for programmers. Apply now. In this article, we will implement a Naive Bayes classifier from scratch to perform sentiment analysis. Table of Contents • Overview of Sentiment Analysis • Overview of Bayes' Theorem and How it Applies to Sentiment Analysis • Overview of Naive Bayes for Sentiment Analysis • Dataset • Installation and Setup • Calculating Word Frequency • Calculating Probabilities and Implementing the Classifier • Testing and Results • Conclusion Overview of Sentiment Analysis Sentiment Analysis is the process of classifying whether textual data is positive, negative, or neutral. Some use cases of sentiment analysis include: • Monitoring product feedback by determining whether customers' opinions are positive, neutral, or negative. • Targeting people to improve their service. Businesses can reach out to customers who feel negatively about a product. • Gauging opinions on social media about a particular issue. In this article, we will be learning the mathematics behind a machine learning algorithm called Naive Bayes. We will then implement it from scratch to perform sentiment analysis. Overview of Bayes' Theorem and How it Applies to Sentiment Analysis Naive Bayes is a supervised machine learning algorithm based on Bayes’ theorem. Bayes' theorem is defined mathematically as the following equation: • P(A|B) represents the probability of event A happening given that B is true. • P(B|A) represents the probability of event B happening given that A is true. • P(A) and P(B) are the probabilities of observing A and B without any prior conditions. These are referred to as prior probabilities. Imagine we are trying to classify the sentence "I like this article" as either positive or negative. We would first determine the probability that the text is positive and the probability that the text is negative. We would then compare the two probabilities. To calculate the probability of "I like this article" being positive, we can rewrite Bayes' theorem as follows (we would do the same thing for the negative class, except we would replace all occurrences of "Positive" with "Negative"): • P(Positive | I like this article) represents the probability of the text "I like this article" being positive. • P(I like this article | Positive) represents the probability of a positive text being "I like this article". To calculate this, we can count the number of occurrences of "I like this article" in the positive texts and divide it by the total number of samples labeled as positive. • P(Positive) is the prior probability of a training sample being classified as positive. For example, if there are 10 sentences in our training data, and 7 sentences are positive, then P(Positive) would be 7/10. • We can discard the denominator since the denominator will be the same for each class. Overview of Naive Bayes for Sentiment Analysis Based on our current formula, P(I like this article | Positive) would return 0 if there are no exact matches of "I like this article" in the positive category from the training dataset. Our model won't be very useful if sentences that we can only correctly classify those sentences that appear in the training data. This is where the "Naive" part of Naive Bayes comes into play. Naive Bayes assumes that every word in the sentence is independent of the other words. Context does not matter. The sentence "I like this article" and "this article I like" are the same for a Naive Bayes classifier. Instead of counting the occurrences of specific sentences in the training data, we are now looking at the frequency of individual words. The below equation demonstrates how we can calculate P (I like this article | Positive): Calculating P(like | Positive) is done by dividing the number of occurrences "like" appears in positive texts divided by the total number of words in positive texts. For example, if there are 3 occurrences of "like" in positive texts and there are 11 total words, P(like | Positive) would be calculated as follows: This same method is used for every word and class. However, what if "like" did not appear even once in the training data? That would cause P(like | Positive) to be 0. Since we multiply this probability by all the others, we will end up with an overall probability of 0. This doesn't give us any relevant information. To fix this, we apply Laplace smoothing to each individual word's probability. In Laplace smoothing, we add 1 to the numerator to ensure that the probability won't be 0. To prevent the probability from being greater than 100%, we add the total number of possible words to the divisor. For example, if there were 15 unique words that appeared in the training data (regardless of the label), we would add 15 to the denominator. The equation below demonstrates this: After calculating all the individual word probabilities and applying Laplace smoothing to each of them, we finally multiply them together. We multiply the result by the prior probability of the class. After that, all we do is compare which class has the greatest probability; this class is the output of the classifier. Now, we will go over an implementation of a Naive Bayes classifier from scratch in Python. We will be using the IMDb movie reviews dataset. You can download this dataset in CSV format using this link. Please see this link for more information. When you download it, rename the file to imdb_reviews.csv. Installation and Setup The only library we will be using is The Natural Language Toolkit (NLTK) for text preprocessing (tokenization, removing stop words, lemmatization, etc.). Install NLTK using the following pip command: pip install nltk Download the following corpora: import nltk Import the necessary libraries: from collections import defaultdict import math from nltk.corpus import stopwords from nltk.stem import WordNetLemmatizer from nltk.tokenize import RegexpTokenizer import pandas as pd Read the dataset: df = pd.read_csv('imdb_reviews.csv') Calculating Word Frequency Next, we will calculate the frequency of each term that appears in the dataset. We create a dictionary that stores the number of occurrences of each token. We use a defaultdict, which does not return an error when we try to update a key that does not exist. If we try to increment the value of a key does not exist, it will automatically set the value to 0 and increment it to 1. Initialize all the variables we will use: lemmatizer = WordNetLemmatizer() # word_counts[word][0] = occurrences of word in negative reviews # word_counts[word][1] = occurrences of word in positive reviews word_counts = defaultdict(lambda: [0, 0]) # returns [0, 0] by default if the key does not exist STOP_WORDS = stopwords.words('english') tokenizer = RegexpTokenizer(r'\w+') sentiment = list(df['sentiment']) done = 0 total_positive_words = 0 total_negative_words = 0 # keep track of the number of positive and negative reviews (prior probabilities) total_positive_reviews = 0 total_negative_reviews = 0 Iterate though each review. For each token in the review, we preprocess it and keep track of the number of occurrences: for i, review in enumerate(list(df['review'])): if sentiment[i] == 'positive': total_positive_reviews += 1 total_negative_reviews += 1 for token in tokenizer.tokenize(review): token = token.lower() token = lemmatizer.lemmatize(token) if token not in STOP_WORDS: if sentiment[i] == 'positive': word_counts[token][1] += 1 total_positive_words += 1 word_counts[token][0] += 1 total_negative_words += 1 We will limit our vocabulary size to the 5000 most frequent words. To do this, we sort the dictionary using a custom comparator function. The parameters to the sorted function are iterable (the sequence to sort), key (a function to decide the order), and reverse (a boolean - True will sort descending). key needs to be a function used to decide the order. Since we store each word's frequency in positive and negative reviews separately, we will add these frequencies and use this sum to sort the words. Since we want to use the most frequent words, we will specify reverse=True. To select the first 5000 elements after sorting, we can use slicing. word_counts = sorted(word_counts.items(), key=lambda x : x[1][0] + x[1][1], reverse=True)[:5000] Let's convert this back into a defaultdict: word_counts = defaultdict(lambda: [0, 0], word_counts) Calculating Probabilities and Implementing the Classifier Next, we will write a function for calculating P(word|positive) and P(word|negative). Calculating a class label involves multiplying several probabilities together. This can lead to underflow since we are multiplying together several small numbers. To solve this, we can take the logarithm of these probabilities. The product rule for logarithms states that the logarithm of a product is equal to the sum of logarithms. Mathematically, this can be represented as shown below: log(ab) = log(a) + log(b) The advantage of computing logarithms is that they lead to larger, negative numbers. Lower probabilities are further away from zero (that is, more negative), and larger probabilities are closer to zero, so the resulting values can still be compared in the same way (i.e., the greatest number corresponds to the class with the greatest probability). def calculate_word_probability(word, sentiment): if sentiment == 'positive': return math.log((word_counts[word][1] + 1) / (total_positive_words + 5000)) return math.log((word_counts[word][0] + 1) / (total_negative_words + 5000)) Next, we will write a function to compute the probability of a full review. This function utilizes calculate_word_probability and adds the results for each individual word. It also adds the prior probability, which is computed at the beginning of the function. def calculate_review_probability(review, sentiment): if sentiment == 'positive': probability = math.log(total_positive_reviews / len(df)) probability = math.log(total_negative_reviews / len(df)) for token in tokenizer.tokenize(review): token = token.lower() token = lemmatizer.lemmatize(token) if token not in STOP_WORDS: probability += calculate_word_probability(token, sentiment) return probability Finally, we will create a function predict, which compares the probabilities of the two possible classes ("positive" and "negative") and returns the class with the greatest probability. def predict(review): if calculate_review_probability(review, 'positive') > calculate_review_probability(review, 'negative'): return 'positive' return 'negative' Testing and Results Let's try testing this function with some sample sentences: print(predict('This movie was great')) print(predict('Not so good... I found it somewhat boring')) Now, we will test our classifier on the entire dataset: correct = 0 incorrect = 0 sentiments = list(df['sentiment']) for i, text in enumerate(list(df['review'])): if predict(text) == sentiments[i]: correct += 1 incorrect += 1 Now, we have variables correct and incorrect. To calculate our model's accuracy, we divide the number of correct predictions by the sum of correct and incorrect predictions. print(correct / (correct + incorrect)) This leads to an accuracy of about 85%, which is very good considering that we did not apply any hyperparameter optimization or other preprocessing techniques like TF-IDF. In this article at OpenGenus, we learned how to create a Naive Bayes classifier from scratch to perform sentiment analysis. Although Naive Bayes relies on a simple assumption, it is a powerful algorithm and can produce great results. That is it for this article, and thank you for reading. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. (2011). Learning Word Vectors for Sentiment Analysis. The 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011).
{"url":"https://iq.opengenus.org/naive-bayes-sentiment-analysis/","timestamp":"2024-11-09T19:47:31Z","content_type":"text/html","content_length":"74705","record_id":"<urn:uuid:4457e98c-88c0-4bc4-a3ff-c20f3f348e4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00885.warc.gz"}
\circ $ and $45^\ Here, the height of the first aeroplane is given and we have to find how high this aeroplane is from the airplane whose height is not known. First find the distance of the observation point from the aeroplanes using the angle of elevation and the height given by using the formula, $\tan \theta = \dfrac{{\text{P}}}{{\text{B}}}$. Then find the height of the other aeroplane by using the same formula. Then subtract this height from the given height and you’ll get the answer. Complete step-by-step answer: Given, the height of first aeroplane PS =$3000$ m whose angle of elevation from the observation point is $\angle {\text{PQS = 60}}^\circ $ .The second aeroplane has height PR= h m and the angle of elevation from observation point is $\angle {\text{PQR = 45}}^\circ $.We have to find the height of second plane from first aeroplane RS. Let the distance of the observation point from both the planes PQ be x m. Then in right angled triangle SPQ, $ \Rightarrow \tan \theta = \dfrac{{{\text{PS}}}}{{{\text{PQ}}}}$ $\left[ {{\text{as tan}}\theta {\text{ = }}\dfrac{{\text{P}}}{{\text{B}}}} \right]$ On putting the given values, we get- $ \Rightarrow \tan {60^ \circ } = \dfrac{{3000}}{{\text{x}}} \Rightarrow \sqrt 3 = \dfrac{{3000}}{{\text{x}}}$ On rationalizing and solving for x, we get- $ \Rightarrow {\text{x = }}\dfrac{{3000 \times \sqrt 3 }}{{\sqrt 3 \times \sqrt 3 }} = \dfrac{{3000\sqrt 3 }}{3} = 1000\sqrt 3 $ m Now we need to find the height of the second aeroplane. So in right angled triangle RPQ, $ \Rightarrow \tan {45^ \circ } = \dfrac{{{\text{PR}}}}{{{\text{PQ}}}}$ $ \Rightarrow 1 = \dfrac{{\text{h}}}{{\text{x}}} \Rightarrow {\text{h = x}}$ Since we know the value of x, we put it in the equation- $ \Rightarrow {\text{h = 1000}} \times {\text{1}}{\text{.732 = 1732}}$ m Now we have to find the height of the second aeroplane from first. So, $ \Rightarrow {\text{RS = PS - PR}}$ On putting the given values, we get- $ \Rightarrow {\text{RS = 3000 - 1732 = 1268}}$ m Hence the first aeroplane is $1268$ m high from the second aeroplane.Note: To solve this type of question, we have to draw the correct diagram. Hence it is important to read the statement carefully. In this question, the formula of $\tan \theta $ is used because one quantity is given and we have to find the other quantity and angle is also given. Here, P stands for perpendicular and B stands for base of triangle.
{"url":"https://www.vedantu.com/question-answer/an-aeroplane-when-3000-m-high-passes-vertically-class-11-maths-cbse-5f5da40c8f2fe2491852d252","timestamp":"2024-11-03T00:07:09Z","content_type":"text/html","content_length":"174744","record_id":"<urn:uuid:0541465b-98b1-48d3-8062-560d69dd0ccf>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00421.warc.gz"}
If that can be useful, here 05-31-2011 12:07 PM I am looking for an equivalent to the most useful intrinsic_mm_alignr_epi8 with AVX registers (I guess its equivalent to PALIGNR or VPALIGNR for the one who are not familiar with C intrinsics). More precisely i would need the equivalent of an hypothetical _mm256_alignr_ps (i need float granularity, not byte one). Since there is no "slri_si256" or "slli_si256", I have though of a solution with _mm256_permute2_ps but this intrinsic does not seem to be available on my compiler (and maybe neither on my Core i7 2600K). I am using Intel XE 12 Update 4 for Windows. Right now I have used extractf128/insertf128 combined with two alignr_epi8 but the performances are as expected very bad (i.e my AVX code is slower than the SSE one) because of the mixing of XMM and YMM instructions. Best regards 05-31-2011 12:16 PM 05-31-2011 02:05 PM 09-26-2013 04:32 AM 10-11-2013 11:32 PM 10-12-2013 03:05 AM 10-13-2013 08:26 AM
{"url":"https://community.intel.com/t5/Intel-ISA-Extensions/AVX-mm-alignr-epi8-equivalent-for-YMM-registers/m-p/813775","timestamp":"2024-11-12T03:11:49Z","content_type":"text/html","content_length":"289620","record_id":"<urn:uuid:f8b774d8-5560-4a70-a2e9-55e11f83d5b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00333.warc.gz"}
The Use of Variables in a Patterning Activity: Counting Dots The Use of Variables in a Patterning Activity: Counting Dots Keywords: generalisation, patterning activity, variable The present paper examines a patterning activity that was organised within a teaching experiment in order to analyse the different uses of variables by secondary school students. The activity presented in the paper can be categorised as a pictorial/geometric linear pattern. We adopted a student-oriented perspective for our analysis, in order to grasp how students perceive their own generalising actions. The analysis of our data led us to two broad categories for variable use, according to whether the variable is viewed as a generalised number or not. Our results also show that students sometimes treat the variable as closely linked to a referred object, as a superfluous entity or as a constant. Finally, the notion of equivalence, which is an important step towards understanding variables, proved difficult for our students to grasp. Download data is not yet available. Blanton, M., & Kaput, J. (2002). Developing elementary teachers’ algebra “eyes and ears†: Understanding characteristics of professional development that promote generative and self-sustaining change in teacher practice. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA. Cobb, P., & Steffe, L. P. (1983). The constructivist researcher as teacher and model builder. Journal for Research in Mathematics Education, 14(2), 83–94. Dörfler, W. (2008). En route from patterns to algebra: Comments and reflections. ZDM Mathematics Education, 40(1), 143–160. Dretske, F. (1990). Seeing, believing, and knowing. In D. Osherson, S. M. Kosslyn, & J. Hollerback (Eds.), Visual cognition and action: An invitation to cognitive science (pp. 129–148). Cambridge, MA: MIT Press. Ellis, A. B. (2007a). Connections between generalizing and justifying: Students reasoning with linear relationships. Journal for Research in Mathematics Education, 38(3), 194–229. Ellis, A. B. (2007b). A taxonomy for categorizing generalizations: Generalizing actions and reflection generalizations. Journal of the Learning Sciences, 16(2), 221–262. Ellis, A. B. (2011). Generalizing-promoting actions: How classroom collaborations can support students’ mathematical generalizations. Journal for Research in Mathematics Education, 42(4), 308–345. English, L., & Warren, E. (1995). General reasoning processes and elementary algebraic understanding: Implications for instruction. Focus on Learning Problems in Mathematics, 17(4), 1–19. English, L. D., & Warren, E. (1998). Introducing the variable through pattern exploration. The Mathematics Teacher, 91(2), 166–171. Kieran, C. (1989). A perspective on algebraic thinking. In G. Vergnaud, J. Rogalski, & M. Artigue (Eds.), Proceedings of the 13th conference of the international group for the psychology of mathematics education (pp. 163–171). Paris: PME. Krygowska, A. Z. (1980). Zarys Dydaktyki Matematyki, tom II [Overview of didactics of mathematics, Vol. 2]. Warsaw: Wydawnictwa Szkolne i Pedagogogiczne. Lee, L. (1996). An initiation into algebraic culture through generalization activities. In N. Bednarz, C. Kieran, & L. Lee (Eds.), Approaches to algebra. Perspectives for research and teaching (pp. 87–106). Dordrecht: Kluwer. Lee, L., & Wheeler, D. (1987). Algebraic thinking in high school students: Their conceptions of generalization and justification (Research Report). Montreal, CA: Concordia University, Department of Legutko, M., & StaÅ„do J. (2008). Jakie dziaÅ‚ania powinny podjąć polskie szkoÅ‚y w Å›wietle badaÅ„ PISA? [What actions should be taken by Polish schools in the light of PISA exams?]. In H. KÄ…kol (Ed.), Prace Monograficzne z Dydaktyki Matematyki. Współczesne problemy nauczania matematyki 1 (pp. 19–34). Bielsko-BiaÅ‚a: Stowarzyszenie Nauczycieli Matematyki. Lobato, J. (2003). How design experiments can inform a rethinking of transfer and vice versa. Educational Researcher, 32(1), 17–20. Malara, N. (2012). Generalization processes in the teaching/learning of algebra: Students behaviours and teacher role. In B. Maj-Tatsis & K. Tatsis (Eds.), Generalization in mathematics at all educational levels (pp. 57–90). Rzeszów: Wydawnictwo Uniwersytetu Rzeszowskiego. Mason, J. (1996). Expressing generality and roots of algebra. In N. Bednarz, C. Kieran, & L. Lee (Eds.), Approaches to algebra. Perspectives for research and teaching (pp. 65–86). Dordrecht: National Council of Teachers of Mathematics (NCTM) (2000). Principles and standards for school mathematics. Reston, VA: National Council of Teachers of Mathematics. Orton, A., & Orton, J. (1994). Students’ perception and use of pattern and generalization. In J. P. da Ponte & J. F. Matos (Eds.), Proceedings of the 18th international conference for the psychology of mathematics education (Vol. III, pp. 407–414). Lisbon: PME Program Committee. Orton, A., & Orton J. (1999). Pattern and the approach to algebra. In A. Orton (Ed.), Patterns in the teaching and learning of mathematics (pp. 104–120). London, UK: Cassell. Radford, L. (2006). Algebraic thinking and the generalization of patterns: A semiotic perspective. In S. Alatorre, J. Cirtina, M. Sáiz, & A. Méndez (Eds.), Proceedings of the 28th international conference for the psychology of mathematics education, NA Chapter (Vol. I, pp. 2–21). Mexico: UPN. Radford, L. (2011). Grade 2 students’ non-symbolic algebraic thinking. In J. Cai & E. Knuth (Eds.), Early algebraization: A global dialogue from multiple perspectives (pp. 303–322). Heidelberg: Reznic, T., & Tabach, M. (2002). Armon HaMathematica - Algebra Be Sviva Memuchshevet, Helek Gimel [The mathematical palace - Algebra with computers for grade seven, Part C]. Rehovot: Weizmann Institute of Science. Rivera, F. (2010). Visual templates in pattern generalization activity. Educational Studies in Mathematics, 73(3), 297–328. Stacey, K. (1989). Finding and using patterns in linear generalizing problems. Educational Studies in Mathematics, 20(2), 147–164. Strauss, A., & Corbin, J. (1990). Basics of qualitative research. Grounded theory procedures and techniques. Newbury Park, CA: Sage Publications. Wilkie, K. J. (2016). Students’ use of variables and multiple representations in generalizing functional relationships prior to secondary school. Educational Studies in Mathematics, 93(3), ZarÄ™ba, L. (2012). Matematyczne uogólnianie. MożliwoÅ›ci uczniów i praktyka nauczania [Mathematical generalisation. Abilities of students and teaching practices]. Krakow: Wydawnictwo naukowe Uniwersytetu Pedagogicznego. Zazkis, R., & Hazzan, O. (1999). Interviewing in mathematics education research: Choosing the questions. Journal of Mathematical Behavior, 17(4), 429–439. Zazkis, R., & Liljedahl, P. (2002). Generalization of patterns: The tension between algebraic thinking and algebraic notation. Educational Studies in Mathematics, 49(3), 379–402. How to Cite Maj-Tatsis, B., & Tatsis, K. (2018). The Use of Variables in a Patterning Activity: Counting Dots. Center for Educational Policy Studies Journal, 8(2), 55-70. https://doi.org/10.26529/cepsj.309 Authors who publish with this journal agree to the following terms: 1. Authors are confirming that they are the authors of the submitted article, which will be published online in the Center for Educational Policy Studies Journal (for short: CEPS Journal) by University of Ljubljana Press (University of Ljubljana, Faculty of Education, Kardeljeva ploščad 16, 1000 Ljubljana, Slovenia). The Author’s/Authors’ name(s) will be evident in the article in the journal. All decisions regarding layout and distribution of the work are in the hands of the publisher. 2. The Authors guarantee that the work is their own original creation and does not infringe any statutory or common-law copyright or any proprietary right of any third party. In case of claims by third parties, authors commit themselves to defend the interests of the publisher, and shall cover any potential costs. 3. Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under https://creativecommons.org/licenses/by/4.0/deed.en that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal. 4. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal. 5. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
{"url":"https://cepsj.si/index.php/cepsj/article/view/309","timestamp":"2024-11-14T20:56:23Z","content_type":"text/html","content_length":"29981","record_id":"<urn:uuid:4a4e50ff-1a0a-4629-81fb-935f68a1a776>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00120.warc.gz"}
Materiomics Chronicles: week 3 Oct 07 2023 Materiomics Chronicles: week 3 In week three of the academic year at the chemistry and materiomics programs of UHasselt, the students started to put their freshly gained new knowledge of weeks 1 and 2 into practice with a number of exercise classes. For the second bachelor chemistry students, this meant performing their first calculations within the context of the course introduction to quantum chemistry. At this point this is still very mathematical (e.g., calculating commutators) and abstract (e.g., normalizing a wave function or calculating the probability of finding a particle, given a simple wave function), but this will change, and chemical/physical meaning will slowly be introduced into the mathematical formalism. For the third bachelor chemistry, the course quantum and computational chemistry continued with perturbation theory, and we started with the variational method as well. The latter was introduced through the example of the H atom, for which the exact variational ground state was recovered starting from a well chosen trial wave function. In the master materiomics, the first master course fundamentals of materials modelling, dove into the details underpinning DFT introducing concepts like pseudo-potentials, the frozen-core approximation, periodic boundary conditions etc. This knowledge was then put into practice during a second exercise session working on the supercomputer, as a last preparation for the practical lab exercise the following day. During this lab, the students used the supercomputer to calculate the Young modulus of two infinite linear polymers. An intense practical session which they all executed with great courage (remember 2 weeks ago they never heard of DFT, nor had they accessed a supercomputer). Their report for this practical will be part of their grade. For the second master materiomics, the course focused on Density Functional Theory consisted of a discussion lecture, covering the topics the students studied during their self study assignments. In addition, I recorded two video lectures for the blended learning part of the course. For the course Machine learning and artificial intelligence in modern materials science self study topics were covered in such a discussion lecture as well. In addition, the QM9 data set was investigated during an exercise session, as preparation for further detailed study. At the end of this week, we have added another 16h of live lectures and ~1h of video lectures, putting our semester total at 35h of live lectures. Upwards and onward to week 4. This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"https://dannyvanpoucke.be/materiomics-chronicles-week-3/","timestamp":"2024-11-11T17:25:29Z","content_type":"text/html","content_length":"81242","record_id":"<urn:uuid:9c522925-ccc8-4ab9-9373-6da12088dc6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00546.warc.gz"}
MESFUN7C: The Measurability of Complex-Valued Functional Sequences :: The Measurability of Complex-Valued Functional Sequences :: by Keiko Narita , Noboru Endou and Yasunari Shidama :: Received December 16, 2008 :: Copyright (c) 2008-2021 Association of Mizar Users Lm1: for X being non empty set for f being PartFunc of X,COMPLEX holds |.f.| is nonnegative
{"url":"https://mizar.uwb.edu.pl/version/current/html/mesfun7c.html","timestamp":"2024-11-06T18:57:33Z","content_type":"text/html","content_length":"233269","record_id":"<urn:uuid:ce845e0b-7d72-46fe-8ca1-539747022822>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00768.warc.gz"}
On solving the discrete location problems when the facilities are prone to failure The classical discrete location problem is extended here, where the candidate facilities are subject to failure. The unreliable location problem is defined by introducing the probability that a facility may become inactive. The formulation and the solution procedure have been motivated by an application to model and solve a large size problem for locating base stations in a cellular communication network. We formulate the unreliable discrete location problems as 0-1 integer programming models, and implement an enhanced dual-based solution method to determine locations of these facilities to minimize the sum of fixed cost and expected operating (transportation) cost. Computational tests of some well-known problems have shown that the heuristic is efficient and effective for solving these unreliable location problems. All Science Journal Classification (ASJC) codes • Modelling and Simulation • Applied Mathematics Dive into the research topics of 'On solving the discrete location problems when the facilities are prone to failure'. Together they form a unique fingerprint.
{"url":"https://researchoutput.ncku.edu.tw/en/publications/on-solving-the-discrete-location-problems-when-the-facilities-are","timestamp":"2024-11-05T23:52:18Z","content_type":"text/html","content_length":"55382","record_id":"<urn:uuid:162e2b3a-7ef4-4004-b578-b8743a9c53c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00144.warc.gz"}
Flat Plate Natural Frequency Calculator “a” and “b”, the dimensions of the plate “c” and “d”, the half-dimensions of the plate (for polynomial equations) “E”, plate material Young’s modulus “h”, the plate thickness/height “?”, plate material Poisson’s Ratio “?” or “ “D”, plate stiffness factor (defined on References sheet) “Z”, plate deflection at resonance Calculation Reference Flat Plates Plate Bending Strength of Materials The natural frequency of a flat plate is an important parameter when designing structures or components subjected to dynamic loading or vibrations. To calculate the natural frequency of a flat plate, you can use the following steps based on classical plate theory and the Rayleigh-Ritz method: 1. Define plate geometry and properties: Determine the dimensions (length 'a' and width 'b') and thickness 'h' of the flat plate. Identify the material properties, such as Young's modulus 'E' and Poisson's ratio 'v', and calculate the mass density 'ρ'. 2. Boundary conditions: Specify the boundary conditions of the plate, which can be simply supported, clamped, or free on its edges. 3. Formulate the governing equation: Use the classical plate theory to derive the governing equation for the flat plate, which involves the deflection of the plate (w) and its bending stiffness (D). The bending stiffness (D) can be calculated using the formula: D = (E * h^3) / (12 * (1 - v^2)) 4. Assume a deflection shape: Assume a sinusoidal deflection shape for the flat plate, which can be expressed as a product of sine functions in both the x and y directions: w(x, y) = A * sin(m * π * x / a) * sin(n * π * y / b) where 'A' is the amplitude, and 'm' and 'n' are the mode numbers in the x and y directions, respectively. 5. Apply the Rayleigh-Ritz method: Apply the Rayleigh-Ritz method by substituting the assumed deflection shape into the governing equation and calculating the strain energy (U) and kinetic energy (T) of the plate. 6. Calculate the natural frequency: The natural frequency (ω) can be obtained by minimizing the Rayleigh quotient, which is the ratio of the strain energy (U) to the kinetic energy (T): ω² = U / T Calculate the natural frequency (ω) by taking the square root of the obtained value for ω². 7. Convert to frequency: Convert the natural frequency (ω) to frequency (f) using the following formula: f = ω / (2 * π) These steps provide an approximate method for calculating the natural frequency of a flat plate based on classical plate theory and the Rayleigh-Ritz method. The accuracy of this method depends on the complexity of the plate geometry, boundary conditions, and material properties. For more accurate results, especially for complex or irregular plates, consider using numerical methods, such as the Finite Element Method (FEM), which can provide detailed solutions for the natural frequencies and mode shapes of the plate. Calculation Preview Full download access to any calculation is available to users with a paid or awarded subscription (XLC Pro). Subscriptions are free to contributors to the site, alternatively they can be purchased. Click here for information on subscriptions 8 years ago Thank you for your debut calculation I have awarded a free 3 month subscription to the site by way of thanks.
{"url":"https://www.excelcalcs.com/calcs/repository/Dynamics/Flat-Panels/Flat-Plate-Natural-Frequency-Calculator/","timestamp":"2024-11-12T07:12:37Z","content_type":"text/html","content_length":"32674","record_id":"<urn:uuid:36af5193-7bc4-4933-982a-0f3dec51cbb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00234.warc.gz"}
Fahrenheit to Celsius - Formula, Examples | Convert F to C - - [[company name]] [[target location]], [[stateabr]] Fahrenheit to Celsius Formula, Examples | Convert F to C Nevertheless where you’re in the world, knowing the outside temperature is relevant to practice for the day or to pack for a trip. Within the three different scales to calculate temp, Fahrenheit and Celsius are typically most used. As a mathematical concept, it’s simple to look at the benefits of converting from one to the other. Let’s check out where these scales come from and the formula to convert from Fahrenheit to Celsius. If you get this right, later when you travel, you can swifty convert these units without an online What Is Temperature? The primary information to know regarding Celsius and Fahrenheit is that they are units for metering temps. Temperature is the magnitude used to calculate the warmth or coldness of things. It is calculated with three main scales: Fahrenheit, Kelvin and Celsius. The Fahrenheit and Celsius units are employed in almost every part of the world for everyday life, while the Kelvin temperature unit is more generally utilized in a technical environment. The Celsius and Fahrenheit temperatures are distinct scales because they are founded on the work of separate persons. The Celsius scale was created by Anders Celsius, a Swedish scientist. He created this scale with water, setting temp units keep in mind the freezing point and boiling point of water. He assigned 100 degrees Celsius to indicate when water boils and 0 to the freezing temp of water. The Fahrenheit scale was created by Daniel Gabriel Fahrenheit, a German scientist. His measurement was in accordance with brine's freezing and boiling point, that is salty water. He designated the point when water freezes at 32, while the temperature when it boils is 212 degrees Fahrenheit. Most countries determine temperature in Celsius, but English-speaking countries, namely England and the United States, use the Fahrenheit temperature unit. What Is the Formula for Converting Fahrenheit to Celsius? Now that we understand more regarding the Celsius and Fahrenheit measures, let's focus on how the conversion among them functions. As we looked into it previously, Celsius was established with the freezing and boiling temperatures of water being 0 and 100 degrees Celsius. These measures correspond to 32 F for freezing and 212 F for boiling in Fahrenheit. Formula to Convert to Celsius scale Using water's features, we can establish a link with the temperatures in the Celsius and Fahrenheit scales. Bear in mind that the boiling point of water is when it reaches 100 degrees Celsius. We also know that the boiling point of water in Fahrenheit is 212 degrees. Given that, we can set up the following equation: 100 Celsius = 212 Fahrenheit We can work out this equation for Celsius and reach the following equation: Celsius = (Fahrenheit - 32) * (5/9) This is the equation for conversion from Fahrenheit and Celsius. To use it, put the temperature in Fahrenheit that you want to change it to Celsius. Let's ensure that this equation works. Let’s work out 212 F to C. C = (212 F - 32) * (5/9) C = 180 * 5/9 C = 100 As intended we get: 212 degrees F = 100 degrees C How to Convert from Fahrenheit to Celsius Now that we understand the equation to change Fahrenheit to Celsius, let's go through] it one-by-one. Steps to convert Fahrenheit to Celsius 1) Mainly, take the temp in Fahrenheit that you want to change, and subtract 32 from it. 2) Then, multiply that value by 5/9. 3) The answer is the equivalent temp in Celsius. Example 1 We can furthermore use the conversion equations to study the general human body temperature from Fahrenheit to Celsius. 98.6 F to C Put this provided temp into the Celsius equation: C = (98.6 F - 32) * (5/9) C = 66.6 * 5/9 C = 37 As we've just observed, an average body temperature is 98.6 degrees F, or its equivalent in the Celsius unit, 37 degrees C. Example 2 Let's use the Celsius conversion equation to transform a single degree Fahrenheit to Celsius. As per instruction, we would have to put the value into the Celsius formula: C = (1 F - 32) * (5/9) By working out the complete equation, we will end up with a Celsius temperature of: C = (- 31) * (5/9) C = -17.22 One degree Fahrenheit appears even more chilly when you learn that it as same as -17.22 degrees Celsius! Grade Potential Can Help You with Converting from Fahrenheit to Celsius All this data is just the tip of the iceberg about temp scales. Temperature conversion is a crucial theory in mathematics and physics. Grade Potential can guide you conquer it now before you lag behind in your academics. If you need assistance in mathematics, science, or any other subject, Grade Potential has a huge number of instructors in all academic subjects, prepared to help you immediately. Achieve your potential by fixing a session today.
{"url":"https://www.losangelesinhometutors.com/blog/fahrenheit-to-celsius-formula-examples-convert-f-to-c","timestamp":"2024-11-11T03:17:27Z","content_type":"text/html","content_length":"74845","record_id":"<urn:uuid:6cf57f9a-9f59-4433-be9a-66acf5f9dfc9>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00101.warc.gz"}
Verkle trees 2021 Jun 18 See all posts Special thanks to Dankrad Feist and Justin Drake for feedback and review. Verkle trees are shaping up to be an important part of Ethereum's upcoming scaling upgrades. They serve the same function as Merkle trees: you can put a large amount of data into a Verkle tree, and make a short proof ("witness") of any single piece, or set of pieces, of that data that can be verified by someone who only has the root of the tree. The key property that Verkle trees provide, however, is that they are much more efficient in proof size. If a tree contains a billion pieces of data, making a proof in a traditional binary Merkle tree would require about 1 kilobyte, but in a Verkle tree the proof would be less than 150 bytes - a reduction sufficient to make stateless clients finally viable in practice. Verkle trees are still a new idea; they were first introduced by John Kuszmaul in this paper from 2018, and they are still not as widely known as many other important new cryptographic constructions. This post will explain what Verkle trees are and how the cryptographic magic behind them works. The price of their short proof size is a higher level of dependence on more complicated cryptography. That said, the cryptography still much simpler, in my opinion, than the advanced cryptography found in modern ZK SNARK schemes. In this post I'll do the best job that I can at explaining it. Merkle Patricia vs Verkle Tree node structure In terms of the structure of the tree (how the nodes in the tree are arranged and what they contain), a Verkle tree is very similar to the Merkle Patricia tree currently used in Ethereum. Every node is either (i) empty, (ii) a leaf node containing a key and value, or (iii) an intermediate node that has some fixed number of children (the "width" of the tree). The value of an intermediate node is computed as a hash of the values of its children. The location of a value in the tree is based on its key: in the diagram below, to get to the node with key 4cc, you start at the root, then go down to the child at position 4, then go down to the child at position c (remember: c = 12 in hexadecimal), and then go down again to the child at position c. To get to the node with key baaa, you go to the position-b child of the root, and then the position-a child of that node. The node at path (b,a) directly contains the node with key baaa, because there are no other keys in the tree starting with ba. The structure of nodes in a hexary (16 children per parent) Verkle tree, here filled with six (key, value) pairs. The only real difference in the structure of Verkle trees and Merkle Patricia trees is that Verkle trees are wider in practice. Much wider. Patricia trees are at their most efficient when width = 2 (so Ethereum's hexary Patricia tree is actually quite suboptimal). Verkle trees, on the other hand, get shorter and shorter proofs the higher the width; the only limit is that if width gets too high, proofs start to take too long to create. The Verkle tree proposed for Ethereum has a width of 256, and some even favor raising it to 1024 (!!). Commitments and proofs In a Merkle tree (including Merkle Patricia trees), the proof of a value consists of the entire set of sister nodes: the proof must contain all nodes in the tree that share a parent with any of the nodes in the path going down to the node you are trying to prove. That may be a little complicated to understand, so here's a picture of a proof for the value in the 4ce position. Sister nodes that must be included in the proof are highlighted in red. That's a lot of nodes! You need to provide the sister nodes at each level, because you need the entire set of children of a node to compute the value of that node, and you need to keep doing this until you get to the root. You might think that this is not that bad because most of the nodes are zeroes, but that's only because this tree has very few nodes. If this tree had 256 randomly-allocated nodes, the top layer would almost certainly have all 16 nodes full, and the second layer would on average be ~63.3% full. In a Verkle tree, on the other hand, you do not need to provide sister nodes; instead, you just provide the path, with a little bit extra as a proof. This is why Verkle trees benefit from greater width and Merkle Patricia trees do not: a tree with greater width leads to shorter paths in both cases, but in a Merkle Patricia tree this effect is overwhelmed by the higher cost of needing to provide all the width - 1 sister nodes per level in a proof. In a Verkle tree, that cost does not exist. So what is this little extra that we need as a proof? To understand that, we first need to circle back to one key detail: the hash function used to compute an inner node from its children is not a regular hash. Instead, it's a vector commitment. A vector commitment scheme is a special type of hash function, hashing a list \(h(z_1, z_2 ... z_n) \rightarrow C\). But vector commitments have the special property that for a commitment \(C\) and a value \(z_i\), it's possible to make a short proof that \(C\) is the commitment to some list where the value at the i'th position is \(z_i\). In a Verkle proof, this short proof replaces the function of the sister nodes in a Merkle Patricia proof, giving the verifier confidence that a child node really is the child at the given position of its parent node. No sister nodes required in a proof of a value in the tree; just the path itself plus a few short proofs to link each commitment in the path to the next. In practice, we use a primitive even more powerful than a vector commitment, called a polynomial commitment. Polynomial commitments let you hash a polynomial, and make a proof for the evaluation of the hashed polynomial at any point. You can use polynomial commitments as vector commitments: if we agree on a set of standardized coordinates \((c_1, c_2 ... c_n)\), given a list \((y_1, y_2 ... y_n)\) you can commit to the polynomial \(P\) where \(P(c_i) = y_i\) for all \(i \in [1..n]\) (you can find this polynomial with Lagrange interpolation). I talk about polynomial commitments at length in my article on ZK-SNARKs. The two polynomial commitment schemes that are the easiest to use are KZG commitments and bulletproof-style commitments (in both cases, a commitment is a single 32-48 byte elliptic curve point). Polynomial commitments give us more flexibility that lets us improve efficiency, and it just so happens that the simplest and most efficient vector commitments available are the polynomial commitments. This scheme is already very powerful as it is: if you use a KZG commitment and proof, the proof size is 96 bytes per intermediate node, nearly 3x more space-efficient than a simple Merkle proof if we set width = 256. However, it turns out that we can increase space-efficiency even further. Merging the proofs Instead of requiring one proof for each commitment along the path, by using the extra properties of polynomial commitments we can make a single fixed-size proof that proves all parent-child links between commitments along the paths for an unlimited number of keys. We do this using a scheme that implements multiproofs through random evaluation. But to use this scheme, we first need to convert the problem into a more structured one. We have a proof of one or more values in a Verkle tree. The main part of this proof consists of the intermediary nodes along the path to each node. For each node that we provide, we also have to prove that it actually is the child of the node above it (and in the correct position). In our single-value-proof example above, we needed proofs to prove: • That the key: 4ce node actually is the position-e child of the prefix: 4c intermediate node. • That the prefix: 4c intermediate node actually is the position-c child of the prefix: 4 intermediate node. • That the prefix: 4 intermediate node actually is the position-4 child of the root If we had a proof proving multiple values (eg. both 4ce and 420), we would have even more nodes and even more linkages. But in any case, what we are proving is a sequence of statements of the form "node A actually is the position-i child of node B". If we are using polynomial commitments, this turns into equations: \(A(x_i) = y\), where \(y\) is the hash of the commitment to \(B\). The details of this proof are technical and better explained by Dankrad Feist than myself. By far the bulkiest and time-consuming step in the proof generation involves computing a polynomial \(g\) of the form: \(g(X) = r^0\frac{A_0(X) - y_0}{X - x_0} + r^1\frac{A_1(X) - y_1}{X - x_1} + ... + r^n\frac{A_n(X) - y_n}{X - x_n}\) It is only possible to compute each term \(r^i\frac{A_i(X) - y_i}{X - x_i}\) if that expression is a polynomial (and not a fraction). And that requires \(A_i(X)\) to equal \(y_i\) at the point \(x_i We can see this with an example. Suppose: • \(A_i(X) = X^2 + X + 3\) • We are proving for \((x_i = 2, y_i = 9)\). \(A_i(2)\) does equal \(9\) so this will work. \(A_i(X) - 9 = X^2 + X - 6\), and \(\frac{X^2 + X - 6}{X - 2}\) gives a clean \(X - 3\). But if we tried to fit in \((x_i = 2, y_i = 10)\), this would not work; \(X^2 + X - 7\) cannot be cleanly divided by \(X - 2\) without a fractional remainder. The rest of the proof involves providing a polynomial commitment to \(g(X)\) and then proving that the commitment is actually correct. Once again, see Dankrad's more technical description for the rest of the proof. One single proof proves an unlimited number of parent-child relationships. And there we have it, that's what a maximally efficient Verkle proof looks like. Key properties of proof sizes using this scheme • Dankrad's multi-random-evaluation proof allows the prover to prove an arbitrary number of evaluations \(A_i(x_i) = y_i\), given commitments to each \(A_i\) and the values that are being proven. This proof is constant size (one polynomial commitment, one number, and two proofs; 128-1000 bytes depending on what scheme is being used). • The \(y_i\) values do not need to be provided explicitly, as they can be directly computed from the other values in the Verkle proof: each \(y_i\) is itself the hash of the next value in the path (either a commitment or a leaf). • The \(x_i\) values also do not need to be provided explicitly, since the paths (and hence the \(x_i\) values) can be computed from the keys and the coordinates derived from the paths. • Hence, all we need is the leaves (keys and values) that we are proving, as well as the commitments along the path from each leaf to the root. • Assuming a width-256 tree, and \(2^{32}\) nodes, a proof would require the keys and values that are being proven, plus (on average) three commitments for each value along the path from that value to the root. • If we are proving many values, there are further savings: no matter how many values you are proving, you will not need to provide more than the 256 values at the top level. Proof sizes (bytes). Rows: tree size, cols: key/value pairs proven 65,536 224 608 4,112 12,176 12,464 16,777,216 272 1,040 8,864 59,792 457,616 4,294,967,296 320 1,472 13,616 107,744 937,472 Assuming width 256, and 48-byte KZG commitments/proofs. Note also that this assumes a maximally even tree; for a realistic randomized tree, add a depth of ~0.6 (so ~30 bytes per element). If bulletproof-style commitments are used instead of KZG, it's safe to go down to 32 bytes, so these sizes can be reduced by 1/3. Prover and verifier computation load The bulk of the cost of generating a proof is computing each \(r^i\frac{A_i(X) - y_i}{X - x_i}\) expression. This requires roughly four field operations (ie. 256 bit modular arithmetic operations) times the width of the tree. This is the main constraint limiting Verkle tree widths. Fortunately, four field operations is a small cost: a single elliptic curve multiplication typically takes hundreds of field operations. Hence, Verkle tree widths can go quite high; width 256-1024 seems like an optimal range. To edit the tree, we need to "walk up the tree" from the leaf to the root, changing the intermediate commitment at each step to reflect the change that happened lower down. Fortunately, we don't have to re-compute each commitment from scratch. Instead, we take advantage of the homomorphic property: given a polynomial commitment \(C = com(F)\), we can compute \(C' = com(F + G)\) by taking \(C' = C + com(G)\). In our case, \(G = L_i * (v_{new} - v_{old})\), where \(L_i\) is a pre-computed commitment for the polynomial that equals 1 at the position we're trying to change and 0 everywhere else. Hence, a single edit requires ~4 elliptic curve multiplications (one per commitment between the leaf and the root, this time including the root), though these can be sped up considerably by pre-computing and storing many multiples of each \(L_i\). Proof verification is quite efficient. For a proof of N values, the verifier needs to do the following steps, all of which can be done within a hundred milliseconds for even thousands of values: • One size-\(N\) elliptic curve fast linear combination • About \(4N\) field operations (ie. 256 bit modular arithmetic operations) • A small constant amount of work that does not depend on the size of the proof Note also that, like Merkle Patricia proofs, a Verkle proof gives the verifier enough information to modify the values in the tree that are being proven and compute the new root hash after the changes are applied. This is critical for verifying that eg. state changes in a block were processed correctly. Verkle trees are a powerful upgrade to Merkle proofs that allow for much smaller proof sizes. Instead of needing to provide all "sister nodes" at each level, the prover need only provide a single proof that proves all parent-child relationships between all commitments along the paths from each leaf node to the root. This allows proof sizes to decrease by a factor of ~6-8 compared to ideal Merkle trees, and by a factor of over 20-30 compared to the hexary Patricia trees that Ethereum uses today (!!). They do require more complex cryptography to implement, but they present the opportunity for large gains to scalability. In the medium term, SNARKs can improve things further: we can either SNARK the already-efficient Verkle proof verifier to reduce witness size to near-zero, or switch back to SNARKed Merkle proofs if/when SNARKs get much better (eg. through GKR, or very-SNARK-friendly hash functions, or ASICs). Further down the line, the rise of quantum computing will force a change to STARKed Merkle proofs with hashes as it makes the linear homomorphisms that Verkle trees depend on insecure. But for now, they give us the same scaling gains that we would get with such more advanced technologies, and we already have all the tools that we need to implement them efficiently.
{"url":"https://0xe4ba0e245436b737468c206ab5c8f4950597ab7f.arb-nova.w3link.io/general/2021/06/18/verkle.html","timestamp":"2024-11-13T18:20:18Z","content_type":"text/html","content_length":"22558","record_id":"<urn:uuid:fe6cce5c-269d-40a0-b8a1-bcf29e8b2eb7>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00378.warc.gz"}
Calculation of mechanical stresses in adjoint system of electronic component and compound and strength assessment The paper represents mathematical model and formulas developed for project calculations which are applied to sealed electronic units and provide assessing strength of passive electronic components having revolution shape (capacitors, resistors, diodes, pins, etc.). The stress calculation has been produced for materials of resistor and compound in the temperature interval (from –60 to +70°C) along the radius of resistor and compound. 1. Introduction Modern development of electronics, that applies cutting-edge integrated and module assemblies, and tendency to micro-levels for reaching small and light high-density assemblies have posed new problems for developers to solve, one of which is to provide mechanical strength and reliability [1]. A lot of modern electronic units are performed as a sealed polymer bar with numerous inclusions as components and printed circuit boards with connecting pins and contact pads, which can be considered as a whole body – a composition consisting of many materials united in one (Fig. 1). Using such units made from new non-metallic materials with insufficiently studied physical and mechanical properties and applied in engineering of rockets, airplanes, machines, ships, radar-tracking stations and others, which work in conditions of temperature drops (thermal impacts from -60 to +70°C), causes strain which frequently breaks electronic components or their sealing [2]. This causes the necessity to develop mathematical models for estimating strength of passive electronic components sealed with compound what allows reasonable selecting contacting materials, specifying structural dimensions and spacing components inside the volume of compound on one hand and on the other hand to develop experimental methods, which allow assessing stress condition of passive components. 2. Developing calculative scheme In the most general case cylindrical component (capacitor, resistor, pin) is coated by irregular compound layer. Imaginary circle of radius equal to minimal distance from component axis to the outer wall of the unit (Fig. 2) will select compound cylinder around electronic component in order to focus study on interaction between only selected layer of compound and the electronic component. Then calculative scheme can obviously be considered an axially symmetric problem of two cylindrical bodies to have contact interaction (Fig. 3) [3]. Fig. 2Creating calculative scheme: 1 – compound, 2 – selected compound cylinder, 3 – passive electronic component Fig. 3Electronic component coated by layer of compound Solving this problem may be expected to have contracting and tension loads appeared in the bound between component and compound both in the passive electronic component and the compound under contact pressure caused by difference of coefficients of linear thermal expansion and other physical and mechanical characteristics of materials of component and compound at the temperature changes. Thus, in general, solving this problem is possible by using Lyame-Gadolin theory for strength calculation (theory of built-up barrels of artillery cannons) [4]. Although the nature of acting forces at this instance is different: for a cannon – it’s a pressure of powder gases inside the barrel, and for passive electronic component sealed by compound – it’s pressure in the bound of passive electronic component and compound; the basic part of solution is generalized to calculative scheme as axisymmetric problem. 3. Assessment of stress in electronic component and compound In the common case the interaction between electronic component and coating compound considers electronic component to be loaded over external surface by contact pressure $P$, and over internal one by atmospheric pressure ${P}_{1}$, and compound cylinder to be loaded over internal surface by contact pressure $P$ and over external surface by pressure ${P}_{2}$ equal the pressure outside (Fig. Fig. 4Load scheme of internal: a) and external, b) cylinders Assuming deducing cumbersome formulas the only final solution of the problem by using theory for thick-wall cylinders is represented. The formulas for radial stress ${\sigma }_{r}$, tangential stress ${\sigma }_{r}$ and radial strain $U$ in material of electronic component (1-3) and compound (4-6) are represented in accordance to [4] as: ${\sigma }_{{r}_{1}}=\frac{{Ε}_{1}}{1-{\mu }_{1}}\left[-\frac{1}{{r}^{2}}\underset{{R}_{1}}{\overset{r}{\int }}{\alpha }_{1}\Delta {t}_{1}rdr+\frac{{r}^{2}-{R}_{1}^{2}}{{r}^{2}\left({R}_{2}^{2}-{R}_ {1}^{2}\right)}\underset{{R}_{1}}{\overset{{R}_{2}}{\int }}{\alpha }_{1}\Delta {t}_{1}rdr\right]+\frac{{P}_{1}{R}_{1}^{2}-P{R}_{2}^{2}}{{R}_{2}^{2}-{R}_{1}^{2}}-\frac{\left({P}_{1}-P\right){R}_{1}^ $\begin{array}{c}{\sigma }_{{t}_{1}}=\frac{{E}_{1}}{1-{\mu }_{1}}\left[\frac{1}{{r}^{2}}\underset{{R}_{1}}{\overset{r}{\int }}{\alpha }_{1}\Delta {t}_{1}rdr+\frac{{r}^{2}-{R}_{1}^{2}}{{r}^{2}\left ({R}_{2}^{2}-{R}_{1}^{2}\right)}\underset{{R}_{1}}{\overset{{R}_{2}}{\int }}{\alpha }_{1}\Delta {t}_{1}rdr-{\alpha }_{1}\Delta {t}_{1}\right]\end{array}$$+\frac{{P}_{1}{R}_{1}^{2}-P{R}_{2}^{2}}{{R}_ ${U}_{1}=\frac{1}{r}\frac{1+{\mu }_{1}}{1-{\mu }_{1}}\underset{{R}_{1}}{\overset{r}{\int }}{\alpha }_{1}\Delta {t}_{1}rdr+r\left[\frac{\left(1-2{\mu }_{1}\right)\left({P}_{1}{R}_{1}^{2}-P{R}_{2}^{2}\ right)}{{E}_{1}\left({R}_{2}^{2}-{R}_{1}^{2}\right)}+\frac{1-3{\mu }_{1}}{\left(1-{\mu }_{1}\right)\left({R}_{2}^{2}-{R}_{1}^{2}\right)}\underset{{R}_{1}}{\overset{{R}_{1}}{\int }}{\alpha }_{1}\Delta {t}_{1}rdr\right]+\frac{1}{r}\left[\frac{\left(1+{\mu }_{1}\right)\left({P}_{1}-P\right){R}_{1}^{2}{R}_{2}^{2}}{{E}_{1}\left({R}_{2}^{2}-{R}_{1}^{2}\right)}+\frac{{R}_{1}^{2}\left(1+{\mu }_{1}\ right)}{\left(1-{\mu }_{1}\right)\left({R}_{2}^{2}-{R}_{1}^{2}\right)}\underset{{R}_{1}}{\overset{{R}_{1}}{\int }}{\alpha }_{1}\Delta {t}_{1}rdr\right],$ ${\sigma }_{{r}_{2}}=\frac{{E}_{2}}{1-{\mu }_{2}}\left[-\frac{1}{{r}^{2}}\underset{{R}_{2}}{\overset{r}{\int }}{\alpha }_{2}\Delta {t}_{2}rdr+\frac{{R}^{2}-{R}_{2}^{2}}{{r}^{2}\left({R}_{3}^{2}-{R}_ {2}^{2}\right)}\underset{{R}_{2}}{\overset{{R}_{3}}{\int }}{\alpha }_{2}\Delta {t}_{2}rdr\right]+\frac{P{R}_{2}^{2}-{P}_{2}{R}_{3}^{2}}{{R}_{3}^{2}-{R}_{2}^{2}}-\frac{\left(P-{P}_{2}\right){R}_{2}^ $\begin{array}{c}{\sigma }_{{t}_{2}}=\frac{{E}_{2}}{1-{\mu }_{2}}\left[\frac{1}{{r}^{2}}\underset{{R}_{2}}{\overset{r}{\int }}{\alpha }_{2}\Delta {t}_{2}rdr+\frac{{R}^{2}-{R}_{2}^{2}}{{r}^{2}\left ({R}_{3}^{2}-{R}_{2}^{2}\right)}\underset{{R}_{2}}{\overset{{R}_{3}}{\int }}{\alpha }_{2}\Delta {t}_{2}rdr-{\alpha }_{2}\Delta {t}_{2}\right]\end{array}$$+\frac{P{R}_{2}^{2}-{P}_{2}{R}_{3}^{2}}{{R}_ ${U}_{2}=\frac{1}{r}\frac{1+{\mu }_{2}}{1-{\mu }_{2}}\underset{{R}_{2}}{\overset{R}{\int }}{\alpha }_{2}\Delta {t}_{2}rdr+r\left[\frac{\left(1-2{\mu }_{2}\right)\left(P{R}_{2}^{2}-{P}_{2}{R}_{3}^{2}\ right)}{{E}_{2}\left({R}_{3}^{2}-{R}_{2}^{2}\right)}+\frac{1-3{\mu }_{2}}{\left(1-{\mu }_{2}\right)\left({R}_{3}^{2}-{R}_{2}^{2}\right)}\underset{{R}_{2}}{\overset{{R}_{3}}{\int }}{\alpha }_{2}\Delta {t}_{2}rdr\right]+\frac{1}{r}\left[\frac{\left(1+2{\mu }_{2}\right)\left(P-{P}_{2}\right){R}_{2}^{2}{R}_{3}^{2}}{{E}_{2}\left({R}_{3}^{2}-{R}_{2}^{2}\right)}+\frac{{R}_{2}^{2}\left(1+{\mu }_{2}\ right)}{\left(1-{\mu }_{2}\right)\left({R}_{3}^{2}-{R}_{2}^{2}\right)}\underset{{R}_{2}}{\overset{{R}_{3}}{\int }}{\alpha }_{2}\Delta {t}_{2}rdr\right],$ where ${\mu }_{1}$, ${\mu }_{2}$ – Poisson's ratios of component and compound materials correspondingly; ${E}_{1}$, ${E}_{2}$ – elasticity modulus of component and compound materials; ${\alpha }_{1}$ , ${\alpha }_{2}$ – coefficient of linear expansion of component and compound materials; ${R}_{1}$, ${R}_{2}$ – internal and external radius of electronic component; ${R}_{2}$, ${R}_{3}$ – internal and external radius of compound; $r$ – variable radiuses: ${R}_{1}\le r\le {R}_{2}$, ${R}_{2}\le R\le {R}_{3}$; (Fig. 3), $\underset{{R}_{1}}{\overset{r}{\int }}{\alpha }_{1}\cdot ∆{T}_{1}\cdot rdr$, $\underset{{R}_{1}}{\overset{{R}_{2}}{\int }}{\alpha }_{1}\cdot ∆{T}_{1}\cdot rdr$, $\underset{{R}_{2}}{\overset{r}{\int }}{\alpha }_{2}\cdot ∆{T}_{2}\cdot rdr$, $\underset{{R}_{2}}{\overset{{R}_{3}} {\int }}{\alpha }_{2}\cdot ∆{T}_{2}\cdot rdr$ – temperature integrals. Temperature drops ${∆t}_{1}$, ${∆t}_{2}$ present in formulas of thermal integrals (and stresses ${\sigma }_{{t}_{1}}$, ${\sigma }_{{t}_{2}}$) are defined as: ${∆t}_{1}={t}_{elcm}\left(r,\tau \right)- {t}_{0}$, ${∆t}_{2}={t}_{cmp}\left(r,\tau \right)-{t}_{0}$, where $t\left(r,\tau \right)$ – temperature of cylindrical surface of $r$ radius in the moment of time $\tau$, which is counted from the moment to bring the item from the constant temperature ${t}_{0}$ into the temperature ${t}_{1}$. ${t}_{0}$ is assumed to be the initial temperature of the body in calculations [5]. The formulas given above get significantly simplified if to neglect low environmental pressures ${P}_{1}$ and ${P}_{2}$ in comparison with much higher contact pressure $P$. For the stabilized temperature drop $\mathrm{\Delta }t$ when temperature of the whole item reaches ${t}_{1}$ temperature, Eqs. (1)-(6) for calculating stress and strain become stresses and strains in the material of electronic component: ${\sigma }_{{r}_{1}}=-\frac{P{R}_{2}^{2}}{{R}_{2}^{2}-{R}_{1}^{2}}\left(1-\frac{{R}_{1}^{2}}{{r}^{2}}\right),$ ${\sigma }_{{t}_{1}}=-\frac{P{R}_{2}^{2}}{{R}_{2}^{2}-{R}_{1}^{2}}\left(\frac{{R}_{1}^{2}}{{r}^{2}}+1\right),$ ${U}_{1}=-\frac{P{R}_{2}^{2}}{{E}_{1}\left({R}_{2}^{2}-{R}_{1}^{2}\right)}\left[r\left(1-2{\mu }_{1}\right)-\frac{1+{\mu }_{1}}{r}{R}_{1}^{2}\right],$ stresses and strains in the material of compound: ${\sigma }_{{r}_{2}}=-\frac{P{R}_{2}^{2}}{{R}_{3}^{2}-{R}_{2}^{2}}\left(\frac{{R}_{3}^{2}}{{r}^{2}}-1\right),$ ${\sigma }_{{t}_{2}}=\frac{P{R}_{2}^{2}}{{R}_{3}^{2}-{R}_{2}^{2}}\left(\frac{{R}_{3}^{2}}{{R}^{2}}+1\right),$ ${U}_{2}=\frac{P{R}_{2}^{2}}{{E}_{2}\left({R}_{3}^{2}-{R}_{2}^{2}\right)}\left[r\left(1-2{\mu }_{2}\right)+\frac{\left(1+{\mu }_{2}\right){R}_{3}^{2}}{r}\right].$ Stress diagrams in electronic component and compound are shown in Fig. 5. Fig. 5Diagrams of tangential and radial stresses in materials of: a) internal, b) external cylinders Analyzing formulas (7, 8, 10, 11) testified that if external radius of compound cylinder were 4 times greater than one of the component its further increasing would result in only 1/16 strain increment of the maximal. That's why compound cylinder can be considered as that with endless large wall at 5-6 % error tolerance. That provides strength calculation of sealed component irrespective to sealing compound profile by formulas (7, 8, 10, 11) in only condition that compound thickness is 4 times greater that component's external radius. It’s also clear that at the specified relation between thickness of compound and external radius of electronic component solution can be limited to axisymmetric problem since increasing pressure from compound out of the zone of selected cylinder will be insignificant in comparison with maximal one found by solving symmetric problem and may be neglected in engineering calculations [6]. 4. Determining contact pressure All previously represented formulas aimed at finding stress and strain of electronic component and compound are function of contact pressure $P$. It should be found by considering condition of joint strain of electronic component and compound. Within the electronic component and compound structure at their sufficient adhesion they are joined and can strain only together. The condition of joint strain is: Substituting into this formula strains of electronic component and compound in their bound from formulas (9) and (12) and solving equation relative to contact pressure (pressures ${P}_{1}$ and ${P}_ {2}$ are neglected) results in equation (14): $P=\frac{\left[\left(1+{\mu }_{1}\right){\alpha }_{1}-\left(1+{\mu }_{2}\right){\alpha }_{2}\right]t}{\frac{\left(1+{\mu }_{1}\right)\left[{R}_{2}^{2}\left(1-2{\mu }_{1}\right)+{R}_{1}^{2}\right]} {{Ε}_{1}\left({R}_{2}^{2}-{R}_{1}^{2}\right)}+\frac{\left(1+{\mu }_{2}\right)\left[{R}_{2}^{2}\left(1-2{\mu }_{2}\right)+{R}_{3}^{2}\right]}{{Ε}_{2}\left({R}_{3}^{2}-{R}_{2}^{2}\right)}}.$ 5. Example of stress calculation in resistor OMLT-0.125 sealed with compound EZK-25 in sealed unit ZU5.760.001 at the stabilized temperature drop The strength of resistor OMLT-0.125 sealed with compound EZK-25 will be calculated using given formulas in the temperature diapason $∆t=$ 130°С (from +70°С to –60°С) along the radius of resistor and compound. The calculation is produced at regular thickness of compound layer and the range of fixed values of compound thickness taken from practice. Those are most unfavorable (from the point of view of strength) conditions when structure is taken from one temperature into the another. That’s why the structure which appeared to have sufficient strength in extreme conditions can be assumed to guarantee its reliability in normal conditions. Calculation is produced using the following data gathered on laboratory tests: resistor – $\mu =$ 0.292; $E=$ 13.1·10^10·N/m^2; $\alpha =$ 6 10^-6 deg^-1; ${R}_{1}=$ 0.2 mm; ${R}_{2}=$ 0.75 mm; compound – $\mu =$ 0.3; $E=$ 1.21·10^10·N/m^2; $\alpha =$ 45 10^-6 deg^-1; ${R}_{2}=$ 0,75 mm; ${R}_{3}=$ 1-10 mm. Results of calculations are shown in graphs (Fig. 6, 7) where radiuses are traced on abscissa axis, and values of tangential and radial stresses in material of resistor or compound – on ordinate As materials of resistor and compound are in complicated stressed condition so their strength assessment should be performed by using strength theory [4]. Using third strength theory or theory of greatest tangential stresses for assessing strength of compound and ceramics of resistor represents the most interest. Speaking specifically both materials the resistor and the compound are in three-dimensional stress but as the absolute value of longitudinal stress ${\sigma }_{z}$ is significantly less than radial $ {\sigma }_{r}$ and tangential ${\sigma }_{t}$ stresses so it can be neglected and the stress condition can be assumed as two-dimensional. Then, as ${\sigma }_{t}>{\sigma }_{r}$ by the absolute value so for compound cylinder ${\sigma }_{1}={\left({\sigma }_{t}\right)}_{R={R}_{2}},\text{\hspace{0.17em}\hspace{0.17em}}{\sigma }_{2}=0,\text{\hspace{0.17em}\hspace{0.17em}}{\sigma }_{3}={\left({\sigma }_{r}\right)}_{R={R}_{2}}$ and the strength condition is: ${\sigma }_{eqv}^{III}={\left({\sigma }_{1}-{\sigma }_{3}\right)}_{\mathrm{m}\mathrm{a}\mathrm{x}}=\frac{2{P}_{k}{R}_{3}^{2}}{{R}_{3}^{2}-{R}_{2}^{2}}\le \left[\sigma \right]\text{\hspace{0.17em}}.$ Fig. 6Stress diagram along the radius of resistor at various thickness of compound cylinder R3:––– R3= 1 mm, - - - - R3= 2 mm, - - · - - R3= 3 mm, - · · - · R3= 10 mm Fig. 7Stress diagram along the radius of compound at various thickness of compound cylinder: a) R3= 1.0 mm, b) R3= 2.0 mm, c) R3= 10.0 mm Thus, the greatest equivalent stresses in compound will appear on the internal surface of compound cylinder and will always be greater, by the absolute value, than contact pressure. The residual strain in compound will appear when ${\left({\sigma }_{t}-{\sigma }_{r}\right)}_{\mathrm{m}\mathrm{a}\mathrm{x}}$ reaches the yield limit and increasing compound thickness will have no effect. Indeed, let ${R}_{3}\to \infty$, meaning that compound thickness grows infinitely, then strength condition by the 3rd strength theory will appear to be: ${\sigma }_{eqv}^{III}={\sigma }_{t}-{\sigma }_{r}=\frac{2{P}_{k}}{1-{\left(\frac{{R}_{2}}{{R}_{3}}\right)}^{2}}\le \left[\sigma \right]\text{\hspace{0.17em}}.$ When ${R}_{3}\to \infty$, we get ${P}_{k}\le \frac{\left[\sigma \right]}{2}$, meaning that although infinitely thick layer of compound it won’t sustain the contact pressure producing stress exceeding the half of tension ultimate strength of compound. Analogous results can be obtained for ceramic cylinder also. 6. Conclusion On the basis of third strength theory improving strength of electronic component – compound system requires designing and using ceramics and compound to have yield limit and ultimate strength as high and stable as possible. However, using strength theory in considered case won’t guaranty the accurate strength assessment. If this is added by the fact that used ceramics, fragile initially, at -60 deg C and three-dimensional stress condition, what is round compression, can act as a plastic material, and compound conversely may become fragile, so it would be more reliable to assess experimentally measured stresses in compound and resistor against ultimate stresses, those which are destructive, measured in stress conditions close to operational. However this comment goes beyond this paper and will be discovered in the further publications. • Royzman V. P. Problem of strength reliability in radioelectronics. Technology and Construction in Electronic Devices, Odessa, “Polytechperiodica” Publishing House, 2005, Number 6, p. 6-12, (in • Kamburg V. G., Kovtun I. I., Grigorenko S. A. Influence of temperature to mechanical strength of passive electronic components, which are hermetized by compound. Reliability and Quality, Penza, Publishing House of Penza State University, 2000, p. 348-351, (in Russian). • Royzman V., Grigorenko S. Strength of passive electronic components hermetized by compound at thermal impacts. Herald of the Lvov Polytechnical State University “Radioelectronics and Telecommunications”, 2000, Number 387, p. 265-270, (in Ukrainian). • Pisarenko G. S., Kvitka O. L., Umanskiy E. S. Support of Materials. Handbook for students of mechanical departments of higher educational institutions, Kiev, Higher School Publishing House, 2004, 655 p., ISBN 966-642-056-2, (in Ukrainian). • Petraschuk S. A., Kofanov Yu. N., Royzman V. P. Solution of the problem of heat conduction in hermetized module at non stationary temperatures. Herald of Khmelnitskiy National University, Section of Technical Sciences, Vol. 6, 2011, p. 147-151, (in Russian). • Petrashchuk S. A., Kovtun I. I. Theoretical and experimental evaluation of strain in components under thermal impact. Proceedings of the International Conference “Science and Education”, Colombo, Sri Lanka, February 12 – 22, 2010, p. 79-84. About this article 27 December 2012 28 February 2013 sealed electronic unit passive electronic component radial stress tangential stress contact pressure Copyright © 2013 Vibroengineering This is an open access article distributed under the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.extrica.com/article/14519","timestamp":"2024-11-09T00:37:15Z","content_type":"text/html","content_length":"142173","record_id":"<urn:uuid:077662b6-1517-4822-859e-81190589a6f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00720.warc.gz"}
Designing of an active high-pass filter The main function of filters is to suppress or filter out components from mixed and unwanted frequency signals to ensure clear communication. In the first article of this series, we learned about the different types of filters, including low-pass filters which we covered in the last tutorial. In this article, we’ll learn how to design a high-pass filter or HPF. These filters allow all of the frequencies that are higher than their cut-off frequency to pass while stopping all of the others. The first order of a high-pass filter • Step 1: Select or choose the required cut-off frequency. For this tutorial, let’s suppose that we want to suppress all of the frequencies below 100 Hz. These frequencies are similar to a humming sound or power-line frequency noise (50 or 60 Hz). This means that: FC = 100 Hz • Step 2: Now, assume the required value of the capacitor. It should be less than 0.1 micro Farad. This is required for better frequency stability. Let’s use the same value for C as 100 nF (nano Steps 3 and 4 are shown here with the calculations required to find the resistance and pass-band gain for the first order of the high-pass filter. The final design with the component values is shown below. Since the op-amp is an active component, it requires +ve and -ve biasing voltages. It’s possible to test the circuit by applying input through the signal generator and observing the output on the DSO or the oscilloscope. A circuit diagram of the LM741 IC-based for the first order of a high-pass filter. Note: I have simulated the above circuit in NI’s multisim 11 software. The schematic design is also prepared using the same software. The software is available as a free one-month trial period from National Instrument’s (NI) website. The below circuits are also prepared using the multisim 11 software and tested in it. The second order of a high-pass filter • Step 1: For simplicity, let’s assume that R1 = R2 = R and C1 = C2 = C • Step 2: Select the desired cut-off frequency. For our purposes, let’s use FC = 500 Hz • Step 3: Now assume the capacitor value of C as 100nF Steps four and five are shown here with the calculations required to find the resistance and pass-band gain for the second order of a high-pass filter. A circuit diagram of the LM741 IC-based for the second order of a high-pass filter. Higher-order high-pass filters Higher-order filters, such as the third, fourth, or fifth-order filters can be designed by cascading the first and second-order LPF sections. In this case, increasing the order increases the stop-band attenuation by 20 DB. The figure below provides a clear image of this idea. By using the higher-order filter, we can get a better response with the stiff slop. For example, we may get a response such as an idle LPF. An overview of the third, fourth, and fifth order of a high-pass filter. The cut-off frequency for all the stages is the same, which means the RC value of all stages is also the same. Filed Under: Tutorials Questions related to this article? 👉Ask and discuss on EDAboard.com and Electro-Tech-Online.com forums. Tell Us What You Think!! Cancel reply You must be logged in to post a comment.
{"url":"https://www.engineersgarage.com/designing-of-high-pass-filter/","timestamp":"2024-11-12T02:25:47Z","content_type":"text/html","content_length":"122740","record_id":"<urn:uuid:b369a646-c54a-4f00-a095-e67ca8310c21>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00029.warc.gz"}
Cite as Ran Duan, Kaifeng Lyu, and Yuanhang Xie. Single-Source Bottleneck Path Algorithm Faster than Sorting for Sparse Graphs. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 43:1-43:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018) Copy BibTex To Clipboard author = {Duan, Ran and Lyu, Kaifeng and Xie, Yuanhang}, title = {{Single-Source Bottleneck Path Algorithm Faster than Sorting for Sparse Graphs}}, booktitle = {45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)}, pages = {43:1--43:14}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-076-7}, ISSN = {1868-8969}, year = {2018}, volume = {107}, editor = {Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.43}, URN = {urn:nbn:de:0030-drops-90475}, doi = {10.4230/LIPIcs.ICALP.2018.43}, annote = {Keywords: Graph Algorithm, Bottleneck Path, Combinatorial Optimization}
{"url":"https://drops.dagstuhl.de/search/documents?author=Duan,%20Ran","timestamp":"2024-11-11T23:54:59Z","content_type":"text/html","content_length":"92326","record_id":"<urn:uuid:a8803aa4-bd05-4f0b-9a56-b19204d54afe>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00218.warc.gz"}
The Top 10 Logical Thinkers of all Time | Discovering Strategies for Effective Thinking Logical thinking is a crucial skill that allows us to identify patterns, make connections, and consider the consequences of our actions. Throughout history, there have been many brilliant minds who have pushed the boundaries of logical inquiry and made groundbreaking contributions to philosophy, math, and science. In this article, we will explore the Top 10 Logical Thinkers of all Time. Our list is not definitive - there are many other important thinkers who could have easily made the cut - but we believe that it provides a balanced representation of some of the most influential and creative minds in history. Whether you are a philosophy student, a math enthusiast, or simply someone who enjoys learning about great thinkers, we hope that this article will inspire you to think critically and engage with the exciting world of logic. 1. Aristotle Aristotle is one of the most influential philosophers and scientists in history, known for his logical theories and contributions to a wide range of fields. Born in Greece in 384 BC, Aristotle studied under Plato and went on to become the tutor of Alexander the Great. Life and Contributions Aristotle’s most famous work is the Organon, a collection of six treatises on logic, including the Categories, On Interpretation, and Prior Analytics. In these works, he developed the fundamental concepts of deductive reasoning, such as syllogisms and the principle of non-contradiction. Aristotle’s philosophical works also had a profound impact, particularly his ethics and politics. He believed that humans had a natural desire to achieve happiness and that virtues were key to achieving this goal. He also believed in a hierarchical society with a ruling class that was responsible for the wellbeing of the state. Logical Theories Aristotle’s logical theories had a significant influence on later philosophers and scientists. He argued that logic was a necessary tool for analyzing and understanding the natural world, which was characterized by order and causality. His work on syllogisms laid the foundation for deductive reasoning, which became a central tool for mathematics and science. Aristotle also developed a theory of metaphysics that was based on empirical observation, rather than abstract speculation. He believed that everything in the world had a purpose and that understanding these purposes was key to understanding the nature of reality. Aristotle’s works were highly regarded in the ancient world and were studied extensively in medieval Europe. They continued to be influential in later periods, particularly during the Renaissance and Enlightenment. His philosophical works, in particular, had an enduring impact on ethics, politics, and metaphysics. Today, Aristotle is considered one of the greatest thinkers in history and his ideas continue to inform debates in a wide range of fields. 2. Rene Descartes Rene Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century. He is considered to be the father of modern Western philosophy and made significant contributions to mathematics and science as well. Descartes is known for his method of doubt, which he described in his famous work “Meditations on First Philosophy”. The method of doubt involves questioning everything that he had previously believed to be true and only accepting propositions that were absolutely certain. This led to his famous quote, “I think, therefore I am”, which he came up with as a result of trying to find something that he could be certain of. In addition to his work on philosophy, Descartes also made significant contributions to geometry. He is credited with creating the system of Cartesian coordinates, which is now commonly used in algebra and geometry. This system involves plotting points on a graph using two perpendicular axes, one vertical and one horizontal. Descartes’ contributions to philosophy and mathematics were groundbreaking and continue to be studied and revered to this day. 3. Immanuel Kant Immanuel Kant was a German philosopher who lived from 1724 to 1804. He is widely considered one of the most important thinkers in modern philosophy. Overview of Kant’s life and contributions Kant’s work spans an impressive array of topics, including metaphysics, ethics, epistemology, and logic. He is perhaps most famous for his grounding of ethics in his “categorical imperative” and his philosophy of transcendental idealism. Explanation of his philosophy of transcendental idealism and how it relates to logic Kant’s transcendental idealism holds that the structure of our knowledge of the world is determined by the structure of the mind itself. This means that the mind imposes certain categories and structures on our experience of reality, and that we cannot know reality independent of these structures. For Kant, this insight has important implications for logic and reasoning. Kant’s understanding of the mind’s role in shaping knowledge led him to see the study of logic as a necessary part of philosophy. He believed that logic provided us with a set of rules and procedures to help us make judgments and reason well. In particular, he believed that deductive reasoning, which he called “analytic” reasoning, was especially important. This is because analytic reasoning allows us to break down complex claims or ideas into simpler, more manageable parts and then put them back together in a clear and coherent way. Discussion of his contributions to metaphysics and ethics In addition to his work on logic, Kant also made significant contributions to metaphysics and ethics. His grounding of ethics in the “categorical imperative” is one of his most important and enduring contributions. According to Kant, the categorical imperative is a principle that tells us what we ought to do regardless of our specific desires or goals. It is a principle of rationality that underlies all moral thinking and action. Kant’s metaphysical work is similarly important. He argued that our knowledge of the world is limited to what we can know through experience, but that there are certain fundamental truths about the world that we can know independently of experience. These truths, which Kant called “categories,” are the building blocks of our understanding of the world. Overall, Kant’s work on logic, metaphysics, and ethics has had a profound impact on philosophy and continues to be studied and debated by scholars today. 4. Bertrand Russell Bertrand Russell (1872-1970) was a British philosopher, logician, and mathematician. He is widely known for his contributions to mathematical logic, his work on the foundations of mathematics, and his activism for peace and social justice. Overview of Russell’s life and contributions Russell was born into a prominent British family and was educated at Cambridge University, where he became interested in mathematics and philosophy. He later became a professor at Cambridge and taught there for many years. Russell made many contributions to logic and mathematics, including his theory of types, which helped to resolve issues with set theory. He also developed the concept of logical atomism, which asserted that the world is composed of logical atoms that can be analyzed and understood through logic and language. Explanation of Russell’s contribution to mathematical logic, including his theory of types Russell’s theory of types was a solution to the paradoxes that arose in set theory, particularly the set of all sets that do not include themselves. He proposed that each entity had a type, and that no entity could be a member of its own type. This helped to avoid the paradoxes that had plagued set theory. Russell also made other contributions to mathematical logic, including the development of his logician’s paradox, which helped to establish the limits of formal systems, and his work on the propositional calculus, which is the foundation of mathematical logic. Discussion of his philosophical works, including “The Problems of Philosophy” Russell was also a prominent philosopher, and his works touched on a variety of issues, including metaphysics, epistemology, and ethics. His book, “The Problems of Philosophy,” was a popular introduction to philosophy and covered many of the fundamental issues in the field. In the book, Russell discusses the nature of reality, the limits of human knowledge, and the relationship between knowledge and belief. He also explores issues related to perception, induction, and the nature of truth. Overall, Russell was a major figure in the development of mathematical logic and a significant contributor to the field of philosophy. His work on the foundations of mathematics and his contributions to logic continue to be studied and debated today. 5. Gottfried Wilhelm Leibniz Gottfried Wilhelm Leibniz was a German philosopher, mathematician, and logician who is best known for his work in calculus. However, Leibniz also made significant contributions to the field of logical systems. In this section, we will discuss his logical works and their impact on philosophy and science. Overview of Leibniz’s life and contributions Leibniz was born in 1646 in Leipzig, Germany. He received a law degree from the University of Altdorf and later studied mathematics and philosophy on his own. Leibniz’s work in calculus was done independently of Isaac Newton, and there was a famous controversy over which of them had invented calculus first. Nonetheless, Leibniz made many other contributions to mathematics, science, and Development of calculus and his work in logical systems Leibniz’s work in logical systems was closely linked to his development of calculus. He saw the calculus as a way of analyzing and expressing logical relationships between concepts. Leibniz developed a characteristica universalis, a symbolic language that he believed could be used to represent all human knowledge in a precise and unambiguous way. He also believed that this language could be used to analyze and solve philosophical problems. Leibniz’s work on logical systems included the development of the concept of binary arithmetic. He realized that any arithmetic operation, including addition and multiplication, could be carried out using only the digits 0 and 1. This insight is the foundation of modern digital technology and is the reason why computers use binary code to represent data. Philosophical works, including his theory of monads Leibniz’s philosophy was deeply influenced by his work in mathematics and logic. He believed that the world was made up of “monads,” tiny units of substance that were indivisible and could not be destroyed. Each monad had its own internal state, and the relationships between monads were predetermined by God at the beginning of time. This philosophy has been described as a form of “pre-established harmony” and was intended to solve the problem of how to reconcile determinism with free will. Leibniz’s other philosophical works included his “Discourse on Metaphysics,” where he argued that the universe was the best of all possible worlds, and his “Theodicy,” where he attempted to reconcile the existence of evil with a benevolent God. In conclusion, Leibniz’s many contributions to mathematics, science, and logic have had a profound impact on our understanding of the world. His work on logical systems, including the development of binary arithmetic, helped pave the way for modern digital technology. And his philosophy, based on the concept of monads, continues to influence contemporary debates in metaphysics and philosophy of 6. John Stuart Mill John Stuart Mill was a 19th-century English philosopher, economist, and political theorist known for his contributions to utilitarianism and liberalism. He was also interested in logic, and his work in this area has been influential in the development of modern philosophy. Overview of Mill’s life and contributions Mill was born in London in 1806 and was raised in a highly educated family. His father, James Mill, was a philosopher and economist, and he served as Mill’s primary teacher. Under his father’s guidance, Mill became well-versed in the fields of logic, economics, and political philosophy. As an adult, Mill worked for the East India Company and also served as a Member of Parliament. He was a vocal advocate for women’s suffrage, education reform, and abolitionism. Explanation of his utilitarian philosophy and how it relates to logic Mill is best known for his work on utilitarianism, a moral theory that emphasizes the importance of maximizing happiness and minimizing suffering. Mill believed that utilitarianism could be applied to all areas of life, including politics, economics, and personal decision-making. Mill’s views on logic were heavily influenced by his utilitarian philosophy. He believed that logical reasoning was an essential tool for identifying the actions that would lead to the greatest good for the greatest number of people. Discussion of his contributions to political economy In addition to his work on utilitarianism, Mill was also a prominent political economist. He believed in the importance of free markets and argued that government intervention in the economy should be limited. One of Mill’s most significant contributions to economics is his theory of distribution, which holds that the benefits of economic growth should be distributed in a way that is fair and just. Mill also wrote extensively on topics such as the labor theory of value and the role of taxation in society. “It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, is of a different opinion, it is because they only know their own side of the question.” – John Stuart Mill This quote from Mill reflects his utilitarian philosophy and his belief that happiness and fulfillment come from more than just satisfying our animalistic desires. Overall, John Stuart Mill’s contributions to philosophy and logic have been substantial. His work on utilitarianism and political economy continues to influence modern thought, and his advocacy for personal freedom and social justice remains relevant today. 7. Kurt Gödel Kurt Gödel (1906-1978) was an Austrian mathematical logician who is widely regarded as one of the most significant logicians in history. His incompleteness theorems are considered some of the most important contributions to the field of mathematical logic. Life and Contributions Born in Brünn, Austria-Hungary (now Brno, Czech Republic), Gödel studied physics and mathematics at the University of Vienna, where he earned his Ph.D. at the young age of 25. His doctoral thesis on the completeness of logic laid the groundwork for his later work on incompleteness. Gödel’s two incompleteness theorems, published in 1931, showed that any sufficiently complex formal system (such as the axioms of mathematics) cannot be both consistent and complete. His theorems demonstrated that there are certain mathematical statements (called “Gödel statements”) that cannot be proven or disproven within a system, providing a profound insight into the limits of mathematical proof. Implications of his Work Gödel’s incompleteness theorems had far-reaching implications that extended beyond mathematics. They showed that any formal system (including those in philosophy, computer science, and other fields) must have undecidable statements, and that the idea of a complete and consistent system is impossible. Gödel’s work had a profound impact on philosophy, particularly in the areas of ontology and epistemology. It challenged traditional conceptions of the limits of knowledge and showed that there are truths that are beyond the reach of formal systems. Legacy and Influence Gödel’s work has had a lasting impact on the field of mathematical logic and beyond. His incompleteness theorems and related contributions have influenced work in computer science, philosophy, and even artificial intelligence. In recognition of his contributions, Gödel was awarded the National Medal of Science and became a fellow of the American Academy of Arts and Sciences. He is widely regarded as one of the greatest logicians of all time, and his work remains relevant and widely studied today. 8. Aristotle’s student: Alexander of Aphrodisias Alexander of Aphrodisias was a Greek philosopher and a student of Aristotle. He is renowned for his commentaries on Aristotle’s works, particularly the Organon. His contributions to the field of philosophy and logic are significant, and he made several important contributions, including: • Commentaries on Aristotle’s works: Alexander wrote extensive commentaries on Aristotle’s logical works, including the Categories and the Prior Analytics. His commentaries are considered some of the most insightful and valuable interpretations of Aristotle’s work. • Development of Aristotelian Logic: Alexander built on Aristotle’s works and made several important contributions to Aristotelian logic. He created a system of modal syllogistic that included the concepts of possibility and necessity, which were not included in Aristotle’s original work. • Defence of Aristotle’s philosophy: Alexander defended Aristotle’s philosophy and argued against the views of other philosophers, including the Stoics and Epicureans. He is noted for his critique of the Stoics’ view that all things are corporeal. Alexander’s work on Aristotelian philosophy and logic had a significant impact on philosophy and science, particularly in the Islamic world. His commentaries on Aristotle’s works were translated into Arabic and played an important role in the development of Islamic philosophy. “Alexander made a decisive contribution to the development of Aristotelian logic. His commentaries provided a comprehensive interpretation of Aristotle’s work and his innovations enriched the field of logic.” - John Marenbon In conclusion, Alexander of Aphrodisias was a significant figure in the development of Aristotelian philosophy and logic. His commentaries on Aristotle’s work and his innovations enriched the field of logic and had a lasting impact on philosophy and science. 9. Alfred Tarski Alfred Tarski (1901-1983) was a Polish-American logician and mathematician who made significant contributions to mathematical logic and the philosophy of language. He is most well-known for his work on truth and formal semantics. Overview of Tarski’s life and contributions Tarski was born in Warsaw, Poland, and began to study formal logic and set theory while attending the University of Warsaw. In 1925, he moved to Vienna, Austria, where he studied under the influential philosopher and logician, Moritz Schlick. After completing his PhD, Tarski held academic positions at a number of universities in Europe and the United States, including the University of California, Berkeley, where he spent most of his career. Throughout his career, Tarski was interested in the power and limitations of formal systems, including the relationship between formal logic and natural language. One of his major contributions to the philosophy of language was his theory of truth, which he developed in a series of papers beginning in the late 1920s. Explanation of Tarski’s work on truth and formal semantics Tarski’s theory of truth is often referred to as the semantic theory of truth. The central idea behind his theory is that the truth of a proposition is determined by the way the words in the proposition refer to things in the world. This is in contrast to other theories of truth, such as the correspondence or coherence theories, which focus on the relationships between propositions rather than on the relationship between propositions and the world. Tarski developed his theory of truth in a formal way, using the tools of mathematical logic. He defined a truth predicate, which is a formal symbol that can be used to indicate whether a proposition is true or false. He then gave a set of criteria for when a truth predicate is correctly applied to a proposition. Tarski’s work on formal semantics had a significant impact on the philosophy of language and has been widely studied and debated in the decades since it was first published. Discussion of Tarski’s impact on mathematical logic and philosophy of language Tarski’s work on truth and formal semantics was groundbreaking and had a significant impact on the fields of mathematical logic and philosophy of language. His semantic theory of truth challenged traditional views of truth and opened up new avenues for examining the relationship between language and the world. In addition to his work on truth, Tarski was also known for his contributions to set theory, model theory, and the foundations of mathematics. He was a prolific author and his work has had a lasting impact on the development of logic and philosophy. Overall, Alfred Tarski’s contributions to logic and philosophy have been essential in shaping the way we think about truth, language, and the relationship between formal systems and the real world. 10. Augustus De Morgan Augustus De Morgan (1806-1871), an English mathematician, was a prominent figure in the development of symbolic logic. He started his academic career as a professor of mathematics at the University of London, where he later became the first professor of mathematics at the newly established University of London. De Morgan’s contributions to the study of logic and mathematics are significant. He introduced the idea of using symbols to represent logical operations, which later became the basis of symbolic logic. He created a set of laws of set theory known as De Morgan’s laws, which are essential to modern set theory. Contributions to Symbolic Logic De Morgan’s contributions to symbolic logic are significant. He is credited with introducing the use of symbols to represent logical operations, which enabled logicians to create precise equations to express logical relationships. This development significantly impacted the field of logic and facilitated the development of modern computer science. In 1847, De Morgan published a work called “Formal Logic,” which contained the first systematic exposition of symbolic logic. In this book, he used a system of lines and symbols to represent logical operations, such as negation, disjunction, and conjunction. He also introduced the concept of a variable, which allowed for the representation of any number of objects. Laws of Set Theory De Morgan’s laws of set theory are fundamental to modern set theory. These laws are based on the concept of complements, which represent elements that are not members of a specific set. De Morgan’s laws show how to represent the complement of a union or intersection of sets using a negation operation. De Morgan’s laws are expressed in the following equations: • The complement of the union of two sets A and B is equal to the intersection of the complements of A and B. • The complement of the intersection of two sets A and B is equal to the union of the complements of A and B. These equations are used to solve problems related to sets and to model logical situations. De Morgan applied his background in mathematics to the field of probability theory. He made significant contributions to the development of the theory of probabilities, introducing the idea of conditional probability. He showed how to calculate the probability of an event based on the occurrence of another event. Augustus De Morgan is a notable figure in the history of logic and mathematics. His contributions to the development of symbolic logic and set theory are fundamental to the field. The laws that he introduced are used in modern mathematics and computer science. His work in probability also significantly impacted the field of statistics. De Morgan’s contributions continue to inspire new research and development in the fields of logic and mathematics. As we come to the end of our list of the Top 10 Logical Thinkers of all Time, it is clear that the field of logical thinking has a rich, diverse history with countless brilliant minds that have contributed to its development. From Aristotle to Augustus De Morgan, the thinkers on this list have made profound contributions to the field that have impacted not only philosophy and science but also modern society as we know it. It is important for us to reflect on the value of studying the history of logical thinking. Understanding the work of these great minds can deepen our understanding of our own logical processes and help us to more effectively solve problems in our daily lives. Beyond that, examining the foundations of logical thinking can give us insight into the ways in which our own society operates, from the way we process information to the way we make decisions. Of course, this list is by no means exhaustive, and there are countless other thinkers in the field of logical thinking who have made significant contributions to our understanding of the world. We encourage all readers to continue exploring the world of logic and philosophy, and to discover for themselves the amazing insights and discoveries that have been made by brilliant thinkers throughout
{"url":"https://thinkbetteracademy.com/posts/the-top-10-logical-thinkers-of-all-time/","timestamp":"2024-11-11T14:54:29Z","content_type":"text/html","content_length":"81747","record_id":"<urn:uuid:a2bd91d2-cb34-4ef2-a509-e8787881edf9>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00141.warc.gz"}
DEtools[rifsimp] Case Splitting and Related Options Case Splitting Defined If an input system is nonlinear with respect to any of its indeterminates (including constants), case splitting may occur. As an example, consider the ODE in : . rifsimp can only isolate the leading indeterminate u'' when a <> 0. Unfortunately, we do not know if this condition holds, so two possibilities result; either a <> 0 or a = 0. Consider as a second example the equation . rifsimp needs to isolate the highest power of the leading derivative, which introduces the same two possibilities as for the first example. This second type of pivot is called the initial of the equation. This prompts the following definitions: The coefficient of the highest power of the leading indeterminate in an unsolved equation. It is not known whether this coefficient is nonzero or whether it vanishes with respect to the resulting A case split in the rifsimp algorithm is the separate handling of the cases of nonzero or zero pivot during the solution of an equation in the system. By default (without casesplit on), only the nonzero case is explored. In the u[t]-a u[tt] = 0 example, the pivot is a, and the generic a <> 0 case is handled by default. Exploring All Cases For some systems, a simplified result for all cases is required. To obtain this, use the casesplit option. Specification of casesplit in the arguments to rifsimp tells it to examine all cases in the following way: Proceed to find the generic solution by assuming that all pivots are nonzero. After each solution is found, back up to the most recent nonzero assumption, change to a zero assumption, and proceed to find the solution under this new restriction. The process is complete when no nonzero assumptions are left to back up to. The result is a tree of disjoint cases, with the simplified forms presented in a table. Each case is stored as a numbered entry in the output table, having all the standard entries, along with an entry called Case that can be used to reconstruct a tree of case splittings using the caseplot command. This Case entry is a list of assumptions made to obtain the answer for that case. Each assumption is a two or three element list, the first element is the assumption made, the second is the derivative being isolated that required the assumption, and if the third entry is present, it indicates that is was later found that the assumption is always true, so the assumption does not represent a true case split of the problem, but rather a false split. There are three options below that mildly alter this behavior, and these are the faclimit, factoring, and options. The first two allow pivots to be introduced to the system that do not necessarily occur as the coefficient of a specific leading derivative. When one of these pivots occur, then the second element of the assumption is the name of the option (rather than the leading derivative). The third option, , has the effect of introducing fewer pivots (as equations that are nonlinear in their leading derivative, such as the equation in the second example at the top of this page, do not directly introduce case splittings), but has the disadvantage that a less efficient nonlinear equation handling method (pure Groebner basis) must be employed. For more detail on the output see rifsimp[output], and for more detail on nonlinear equation handling see rifsimp[nonlinear]. Case Restrictions Two options can be used to limit or restrict which cases are explored: This option turns on casesplit and gives the start of the tree to be examined. It is used primarily for repeating a calculation, or for avoiding particularly difficult cases. Note: A rerun of the system with additional equations is likely to change the order of the pivots, so this option is only reliable for the same input as the initial calculation. If a calculation for a specific case is to be repeated with additional equations, it would be better to append the assumptions made in the Case list to the original system, and run rifsimp with the combined system. This option is used to limit the cases being examined based on the solution space dimension of the linear part of the current system. Though the dimension of the solution space is not known until the computation is complete, an upper bound is available by looking at the current Solved equations. Note: In the presence of nonlinear constraints, the computed dimension represents an upper bound on the dimension of the case. This could be used, for example, to find all forms of a second order ODE that are related by a Lie-point transformation to the equation y''=0, by setting the minimum free parameters of the Lie-determining system to 8. The minimum dimension can be specified in a number of ways, including some shortcuts: mindim=n Minimum of n free 0-dimensional parameters in the dependent variables in vars. mindim=[v,n] Minimum of n free 0-dimensional parameters in the dependent variables v (v is either a dependent variable name or set of more than one). mindim=[v,n,d] Minimum of n free d-dimensional parameters in the dependent variables v, where v is as above. mindim=[c1,c2,...] Here c1,c2,... are conditions of the [v,n,d] type. When this option is used, a dimension entry is also provided for each case in the output system. See the information in rifsimp[output] for more detail. Note: When using multiple specifications, each must be a full specification defining the number, the dependent variables, and the dimension of the required data. If any of the input conditions fail, computation is halted on the current case, it is tagged with the status "free count fell below mindim", and the computation proceeds to the next case. Pivoting and Pivot Selection The rifsimp algorithm proceeds by putting as many PDEs into solved form as possible without introducing new pivots. When it has reached a point where none of the remaining unsolved equations can be solved without the introduction of a pivot, it must then decide which equation to solve. By default, rifsimp chooses the leading linear unsolved equation that is smallest in size, but this behavior can be modified by the pivselect option described below. The new pivot for the chosen equation then results in a case split if casesplit is set, otherwise exploration of only the nonzero (generic) case if casesplit is not set. If none of the equations are leading linear, then no standard pivot can be found, so rifsimp then attempts to factor the leading nonlinear equations searching for factors that do not depend on the leading derivative. If these are found, then the smallest factor (based on the Maple length function) is chosen, and a split is performed. This behavior can be modified through use of the factoring option described below. If nonlinear equations remain, then the coefficient of the highest degree of the leading derivative in the equation (the initial of the equation) is split on if initial has been specified (see below) or if grobonly has not been specified (see rifsimp[nonlinear]). The following options relate to pivoting and to the control of pivoting in the course of the algorithm: During the course of a computation, rifsimp proceeds by performing as much elimination as possible before introducing a pivot. Once no further elimination can be done, a pivot must be introduced, and there is generally more than one possible choice. There are currently six possible choices for pivot selection: "smalleq" Choose the pivot belonging to the smallest equation (based on the Maple length function). This is the default. "smallpiv" Similar to "smalleq", but the length of the pivots are compared instead. "lowrank" Choose the pivot for the equation with leading derivative of lowest rank. Ties are broken by equation length. "mindim" Choose the pivot for the equation with leading derivative that will reduce the size of the initial data by the greatest amount. This option can only be used in combination with the option described above. ["smalleq",vars] Same as "smalleq" above, but it only looks at equations with leading derivatives in the vars sequence. ["lowrank",vars] Same as "lowrank" above, but it only looks at equations with leading derivatives in the vars sequence. The choice of a pivoting strategy can significantly affect the efficiency of a computation for better or worse. In addition to efficiency considerations, the selection of different pivoting strategies can also simplify the resulting case structure, though the choice that gives the best case structure is highly problem dependent. The "smalleq" and "smallpiv" options are generally focussed on efficiency. "smalleq" makes the computation more efficient by selecting smaller equations for splitting, while "smallpiv" makes the computation more efficient by introducing smaller pivots. The "lowrank" and "mindim" options are focussed on keeping the number of cases to a minimum. This option gives a list of all variables and constants that cannot be present in pivots (that is, any pivots that arise in the computation that involve these variables will not be used as case splits in the process of reducing the system, and the corresponding equations with those pivots must be treated as nonlinear equations). This is most useful for quasi-linear systems for which the linearity of the resulting system is of greatest importance. In this case the variables for which linearity should be maintained should be in the list. For problems of this type, this may have the effect of reducing the size of the case tree (with some possible trade-off in efficiency). If this option is used with highly nonlinear systems, the decrease in the efficiency may be prohibitively severe, as many equations that could be treated by the linear code without , now must be treated with the nonlinear code (see rifsimp[nonlinear]). Typically the pivot chosen is the coefficient of the leading indeterminate in the equation. In the event that the leading indeterminate is itself a factor of the equation, and this same leading indeterminate factor occurs in <n> or more equations, then it is chosen as the pivot rather than as the coefficient. This is most useful for Lie determining systems having many inconsistent cases, since in these systems, many equations of this form typically occur during the calculation. The answer may be obtained without this option, but it can significantly reduce the number of cases As briefly described above, by default equations that factor are split if one of the factors does not depend upon the leading derivative of that equation. This option can be used to modify that behavior based on the value of desc used in the factoring option: nolead This is the default. Only split on a factor if it does not contain the leading derivative of the equation. all Split on any factors, even if they contain the leading derivative. none Do not perform factored splitting. In any event, the factoring pivots are always the last to be considered (i.e. faclimit and regular pivots are used if available). Note: factoring="all" should be used with caution as it has the potential to significantly increase the number of cases returned. Note: factoring="none" can result in non-termination for some nonlinear problems. This option only applies to systems which have nonlinear Constraint equations in their output. The initial is the coefficient of the highest power of the leading derivative in a polynomially nonlinear equation (the leading nonlinear derivative). For example, in the equation , the initial is the coefficient of , which is . By default, rifsimp splits on the initial, unless is specified (in which case it does not need to), introducing additional cases in the output. With these additional cases, rifsimp isolates the leading nonlinear derivatives in the Constraint equations. When is not specified, only the form of the Constraint output is different. • Note: These options do not require that casesplit be on, but are typically most useful in that situation. Other Case-related Options Another option that is also related to case-splitting is the gensol option, which may return multiple cases for some problems: This option indicates that the program should explore all cases that have the possibility of leading to the most general solution of the problem. Occasionally it is possible for rifsimp to compute only the case corresponding to the general solution of the ODE/PDE system. When this option is given, and this occurs, rifsimp will return the single case corresponding to the general solution. When it is not possible for rifsimp to detect where in the case tree the general solution is, then multiple cases are returned, one or more of which correspond to the general solution of the ODE/ PDE system, and others correspond to singular solutions. For some particularly difficult problems, it is possible that the entire case tree is returned. Note: this option cannot be used with the casesplit,casecount, and mindim options. Suppose we have the ODE shown below. We want to determine conditions on g(y(x)) and f(x) that allow the equation to be mapped to y'' = 0 by an invertible change of variables of the form Y=Y(x,y), X= X(x,y). It is known that the equation can be mapped to y''=0 if the Lie group of the equation is 8-dimensional. This is the perfect opportunity to use the mindim option in combination with cases, to tell rifsimp that we are only interested in cases with 8 dimensions or more: We can use the DEtools[odepde] command to generate the determining system for this ODE, and we obtain the following: Applying rifsimp: Issuing the caseplot command above would show that there is one case for which this occurs. This case is given by: So the original ODE is equivalent to y''=0 when g'(y) is zero, regardless of the form of f(x). As a demonstration of the faclimit option, consider the following system: The regular case-splitting strategy produces an undesirable result for this system, namely more cases than required to describe the system: So we get 4 cases. Now set faclimit set to 2: So although both ans2_1 and ans2_2 equally valid, it is clear that ans2_2 would be preferred. As an example of the factoring option, consider the following purely algebraic system: With default options, we obtain With full factoring enabled, we obtain So we see that the system has been split into three disjoint cases. Also note that the Case entries describe the path the computation took, and there are no Pivots entries. This is because the pivots resulting from the case splits are identically satisfied for each case. See Also caseplot, DEtools[odepde], rifsimp, rifsimp[nonlinear], rifsimp[output] Was this information helpful? Please add your Comment (Optional) E-mail Address (Optional) What is This question helps us to combat spam
{"url":"https://www.maplesoft.com/support/helpjp/maple/view.aspx?path=DEtools%2Frifsimp%2Fcases","timestamp":"2024-11-05T13:14:11Z","content_type":"application/xhtml+xml","content_length":"321695","record_id":"<urn:uuid:0eec9ae6-359f-449d-9811-5491181e87e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00789.warc.gz"}
Download e-book for iPad: Crossed Products With Continuous Trace by Siegfried Echterhoff By Siegfried Echterhoff ISBN-10: 0821805630 ISBN-13: 9780821805633 The significance of separable non-stop hint $C^*$-algebras arises from the next proof: to begin with, their solid isomorphism periods are thoroughly classifiable by way of topological facts and, secondly, continuous-trace $C^*$-algebras shape the development blocks of the extra common style I $C^*$-algebras. This memoir offers an in depth research of strongly non-stop activities of abelian in the community compact teams on $C^*$-algebras with non-stop hint. below a few typical assumptions at the underlying process $(A,G,\alpha )$, beneficial and enough stipulations are given for the crossed product $A{\times }_{\alpha }G$ to have non-stop hint, and a few relatives among the topological facts of $A$ and $A{\times }_{\alpha }G$ are got. the consequences are utilized to enquire the constitution of crew $C^*$-algebras of a few two-step nilpotent teams and solvable Lie teams. For readers' comfort, expositions of the Mackey-Green-Rieffel computing device of precipitated representations and the speculation of Morita similar $C^*$-dynamical structures are integrated. there's additionally an in depth elaboration of the illustration thought of crossed items via activities of abelian teams on kind I $C^*$-algebras, leading to a brand new description of activities resulting in style I crossed items. The newest effects at the conception of crossed items with non-stop hint. Applications to the illustration idea of in the neighborhood compact teams and constitution of team $C^*$-algebras. An exposition at the glossy idea of caused representations. New effects on kind I crossed items. Read Online or Download Crossed Products With Continuous Trace PDF Similar products books Get System-on-a-Chip: Design and Test PDF Beginning with a uncomplicated evaluate of system-on-a-chip (SoC), together with definitions of similar phrases, this ebook is helping you recognize SoC layout demanding situations, and the newest layout and try methodologies. you spot how ASIC know-how advanced to an embedded cores-based idea that contains pre-designed, reusable highbrow estate (IP) cores that act as microprocessors, information garage units, DSP, bus keep watch over, and interfaces - all ? Read e-book online Software Development for Embedded Multi-core Systems: A PDF The multicore revolution has reached the deployment level in embedded platforms starting from small ultramobile units to giant telecommunication servers. The transition from unmarried to multicore processors, prompted by way of the necessity to raise functionality whereas maintaining strength, has positioned nice accountability at the shoulders of software program engineers. PIC Microcontrollers: Know It All by Lucio Di Jasio, Tim Wilmshurst, Dogan Ibrahim и др. PDF The Newnes are aware of it All sequence takes the simplest of what our authors have written during the last few years and creates a one-stop reference for engineers fascinated about markets from communications to embedded platforms and far and wide in among. PIC layout and improvement a usual healthy for this reference sequence because it is without doubt one of the preferred microcontrollers on the earth and we've numerous beautifully authored books at the topic. Download PDF by Richard J. P. Cannell (auth.), Richard J. P. Cannell (eds.): Natural Products Isolation Typical items Isolation presents a complete advent to innovations for the extraction and purification of common items from all organic resources. The booklet opens with an advent to separations and chromatography and discusses the method of an isolation. skilled experimentalists describe a big selection of equipment for isolation of either recognized and unknown typical items, together with preliminary extraction, open column chromatography, HPLC, countercurrent and planar chromatography, SFE, and crystallisation. Extra resources for Crossed Products With Continuous Trace Example text Suppose now that ir x U G (A xa G)~such that P — ker(7r x U). Then ir x U is a GCi^-element of (A xa G^by assumption. Thus {TT X U} is a Ginvariant locally closed subset of Prim(A xa G). Let B denote the corresponding subquotient of A xa G. Then B x^G is a locally closed subset of (A xa G) x^G. 2). To this end suppose that p' is an element of A such that 46 3. REPRESENTATIONS OF T Y P E I ABELIAN T W I S T E D SYSTEMS INDp' G (B x^Gy. 2 that ker(7r xU) = ker (resf lc} (INDp')) = ker(indf e} p'), from which follows that p' is in the quasi-orbit of p. The group G*-algebra C*(N) is type I) closed normal subgroup of the second countable locally compact group G such that G/N is abelian. Then C*(G) is isomorphic to the twisted crossed product C*(N) X 7 N J T N G of the abelian twisted system (C*(N), G, 7 N , rN). If V G N, then a closed subgroup HofGis maximally V-unitary with respect to the twisted action ( 7 ^ , rN) if and only if H is maximal with respect to the property that there exists a unitary extension, say V, of V to H. 3. Let G be a separable locally compact group and let N be a closed normal type I subgroup of G such that G/N is 7. Then the following conditions are equivalent: (1) A x a ? T G is type I (resp. CCR). (2) The G-orbit G(p x U) is locally closed (resp. closed) in (A xa^T Hpy for all pxU ell. (3) All Mackey obstructions of (A, G, a, r ) are type I and the G-orbits G(a x W) are locally closed (resp. closed) in (A x a > r Spy for all p G A and a x W G (A x a , T Spy such that ker<7 = kerp. Moreover, if A x a ? T G is type I then ind# (p x U) is irreducible for all pxU e1Z and ind# (p x U) is equivalent to ind# , (p' x U') for some other p' x U' G 11, if and only if p' x Uf = (p x U) o 7S p for some s G G. Crossed Products With Continuous Trace by Siegfried Echterhoff by Paul Rated of 5 – based on votes
{"url":"http://kvanta.ua/ebooks/crossed-products-with-continuous-trace","timestamp":"2024-11-08T06:06:53Z","content_type":"text/html","content_length":"35139","record_id":"<urn:uuid:7e99053f-1015-4687-a9c9-339e81f0ef71>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00542.warc.gz"}
learning rate 01 Sep 2024 Title: The Role of Learning Rate in Deep Neural Networks: A Theoretical Analysis Abstract: The learning rate is a crucial hyperparameter in deep neural networks that controls the magnitude of weight updates during training. In this article, we provide a theoretical analysis of the learning rate and its impact on the convergence properties of deep neural networks. Deep neural networks have achieved state-of-the-art performance in various machine learning tasks. However, their training process is often plagued by slow convergence or oscillations due to the choice of learning rate. The learning rate determines how quickly the model updates its weights based on the error gradients. A high learning rate can lead to overshooting and divergence, while a low learning rate may result in slow convergence. Learning Rate Formulation: The learning rate (η) is typically formulated as: η = η0 / (1 + t) where η0 is the initial learning rate and t is the current iteration or epoch number. This formulation ensures that the learning rate decreases over time, which can help stabilize the training Convergence Properties: The convergence properties of deep neural networks are influenced by the choice of learning rate. A high learning rate can lead to oscillations between two local minima, while a low learning rate may result in slow convergence or stagnation. The optimal learning rate depends on the specific problem and model architecture. Optimal Learning Rate: The optimal learning rate (ηopt) can be estimated using the following formula: ηopt = σ / √(2 * N) where σ is the standard deviation of the error gradients and N is the number of parameters in the model. This formula provides a theoretical upper bound on the optimal learning rate. The learning rate plays a critical role in determining the convergence properties of deep neural networks. A well-chosen learning rate can lead to faster convergence and better generalization performance. However, choosing an optimal learning rate requires careful consideration of the specific problem and model architecture. Further research is needed to develop more effective methods for selecting the optimal learning rate. [1] Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. [2] Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems 27, 3104-3112. Note: The references provided are for illustrative purposes only and may not be directly related to the topic of learning rate. Related articles for ‘learning rate’ : Calculators for ‘learning rate’
{"url":"https://blog.truegeometry.com/tutorials/education/3d3eb2e5c4431393af25be806fb35107/JSON_TO_ARTCL_learning_rate.html","timestamp":"2024-11-13T23:05:40Z","content_type":"text/html","content_length":"15919","record_id":"<urn:uuid:60fb7f10-0586-4fbf-a53d-0e3258ff9f5a>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00624.warc.gz"}
Math Problem Statement Bob makes his first $ 1 comma 100$1,100 deposit into an IRA earning 8.1 %8.1% compounded annually on his 2424th birthday and his last $ 1 comma 100$1,100 deposit on his 4242ndnd birthday (1919 equal deposits in all). With no additional deposits, the money in the IRA continues to earn 8.1 %8.1% interest compounded annually until Bob retires on his 6565th birthday. How much is in the IRA when Bob retires? Question content area bottom Part 1 The amount in the IRA when Bob retires is $enter your response here. Ask a new question for Free By Image Drop file here or Click Here to upload Math Problem Analysis Mathematical Concepts Compound Interest Exponential Growth Series Summation A = P(1 + r)^t (Compound Interest Formula) Sum = P(1 + r)^t + P(1 + r)^(t-1) + ... + P(1 + r)^(t-n) Compound Interest Theorem Suitable Grade Level Grades 11-12, College-level Finance
{"url":"https://math.bot/q/ira-compound-interest-1100-deposits-FMOFPXR0","timestamp":"2024-11-05T23:50:24Z","content_type":"text/html","content_length":"87259","record_id":"<urn:uuid:a63cf777-00c8-4858-8a96-78858c615518>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00769.warc.gz"}
Spectra of some new extended corona [1] Ch. Adiga, B.R. Rakshith and K.N. Subba Krishna, Spectra of extended neighborhood corona and extended corona of two graphs, Electronic Journal of Graph Theory and Application. Vol. 4 No. 1 (2016), pp. 101-110. [2] Ch. Adiga, B.R. Rakshith and K.N. Subba Krishna, Spectra of some new graph operations and some new classes of integral graphs, Iranian Journal of Mathematical Sciences and Informatics. Vol. 13 No. 1 (2018), pp. 51-65. [3] K. Balinska, D. Cvetkovic, Z. Radosavljevic, S. Simic, D. Stevanovic, A survey on integral graphs, Univerzitet u Beogradu. Publikacije Elektrotehnickog Fakulteta. Serija Matematika. Vol. 13 (2002), pp. 42-65. [4] S. Barik, S. Pati and BK. Sarma, The spectrum of the corona of two graphs, SIAM Journal on Discrete Mathematics. Vol. 21 (2007), pp. 47-56. [5] S. Barik, G. Sahoo, On the Laplacian spectra of some variants of corona, Linear Algebra and its Applica- tions. Vol. 512 (2017), pp. 32-47. [6] L. Barriere, F. Comellas, C. Dalfo and M. A. Fiol, The hierarchical product of graphs, Discrete Applied Mathematics. Vol. 157 (2009), pp. 36-48. [7] AE. Brouwer, WH. Haemers, Spectral of graphs, Springer, New York, (2012). [8] S.-Y. Cui, G.-X. Tian, The spectrum and the signless Laplacian spectrum of coronae, Linear Algebra and its Applications. Vol. 437 (2012), pp. 1692-1703. [9] D. B. S. Cvetkovic, M. Doob and H. Sachs, Spectra of graphs- theory and applications, (Third edition), Johann Ambrosius Barth, Heidelberg, (1995). [10] D. B. S. Cvetkovic, P. Rowlinson and H. Simic, An introduction to the theory of graph spectra, Cambridge University Press, Cambridge, (2010). [11] W. L. Ferrar, A text-book of determinants, matrices and algebraic forms, (Second edition), Oxford Univer- sity Press, (1957).
{"url":"https://as.yazd.ac.ir/article_1216.html","timestamp":"2024-11-10T09:00:16Z","content_type":"text/html","content_length":"45634","record_id":"<urn:uuid:6c818438-61f0-4661-a553-c8ca382a3f61>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00344.warc.gz"}
All That You Need To Know About Pivot Points In Forex - Forex Broker Review FxBrokerReviews.org – The pivot point and its derivatives are one instrument that gives forex traders possible support and resistance levels and aids in risk minimization. When entering the market, setting stops, and taking gains, reference points like support and resistance can be used as a guide. The relative strength index (RSI) and the moving average convergence divergence (MACD) are two technical indicators that many beginner traders pay too much attention to (RSI). While important, these indicators are unable to pinpoint a specific moment when danger is defined. Calculated risk considerably increases the likelihood of long-term success whereas unknown risk might result in margin calls. What are Pivot Points in Forex? To identify probable turning moments in the commodities markets, floor traders created an indicator called a pivot point. Day traders utilise pivot points to identify expected levels of support and resistance and, consequently, potential turning points from bullish to bearish or vice versa in the forex market and other markets. Pivot points are designed to anticipate market turning points, unlike the majority of technical indicators. They are determined using basic math and the high, low, and closing prices from the previous day. The price at the conclusion of the U.S. “session” is taken into account as the closing price for calculating pivot points in the currency market. The pivot point itself, the strongest of the indicators, as well as three levels of support and three levels of resistance are all produced by the conventional pivot point calculations. A given trading session’s overall bullish or bearish bias is determined by the price’s position in relation to the major pivot point. The majority of technical analysis utilised by day traders is built around pivot points, however, its accuracy in identifying turning moments may be related to the fact that they are so widely used as an indicator that market behaviour at the provided levels tends to become self-fulfilling. Using weekly, monthly, quarterly, or yearly pricing, one may even determine longer-term pivot points. Trading With Pivot Points: What you need to know? In order to consistently win using pivot points, traders still need a workable method, regardless of how effective they are at forecasting turning moments. Like any trading strategy, this one needs an entry strategy, a stop-loss trigger, and a profit objective or exit signal. By figuring out where the bulk of other traders could be doing similarly, some day traders utilise pivot points to establish levels for entry, stops, and profit-taking. Retail forex brokers and independent websites all offer free online calculators for calculating the pivot points of the currency market. The best trading strategies combine pivot points with additional technical indicators including trend lines, Fibonacci levels, moving averages, prior highs and lows, and closing prices. Also read: Technical Analysis: All That You Need To Know Basic Pivot Point Formula That You need to Know Using prices from the prior day, the following formula is used to determine the principal pivot point: Forex Pivot Points= (High+Low+Close):3 What is Pivot Points 101? As with hinges from which trade swings high or low, a pivot point is utilised to depict a change in market sentiment and to ascertain broad patterns over a long period. They were initially utilised by floor traders on equities and futures exchanges, but today they are generally used with support and resistance levels to confirm trends and reduce risk. Similar to other types of trend line analysis, pivot points emphasise the significance of the links between high, low, and closing prices over the course of trading days; hence, the pivot point for the current trading day is determined using prices from the previous trading day. Although pivot points may be used with almost any trading instrument, they have shown to be very helpful in the forex (FX) market, particularly when trading currency pairings. Due to their extreme liquidity and enormous trading volume, forex markets are less susceptible to market manipulation, which might normally prevent pivot points from projecting support and Support and Resistance Level The support and resistance levels themselves depend on more arbitrary placements to assist detect potential breakout trading opportunities, whereas pivot points are determined based on precise calculations to help spot significant resistance and resistance levels. A theoretical framework known as support and resistance lines is used to explain why traders are hesitant to push an asset’s price past particular levels. Bullish trading is considered to have encountered resistance if it seems to reach a steady level before pausing and retracing or reversing. Bear trading is considered to have met support if it looks to strike a floor at a given price before steadily trading upward again. Investors watch for price breaks through defined support and resistance levels as an indication that new trends are emerging and as an opportunity for rapid gains. Support and resistance lines are key components of many trading techniques. Also read: Fundamental vs Technical Analysis: What Is Better? Calculating Pivots There are a number of derivative formulae that may be used to calculate the pivot points at which two currencies in a forex pair will support and oppose each other. To assess the likelihood of prices exceeding particular levels, these numbers can be tracked through time. Starting with the pricing from the prior day, the computation is made: Pivot Point for Current = High (previous) + Low (previous) + Close (previous)3 Using the pivot point as a starting point, one may then anticipate support and resistance levels for the current trading day. Resistance 1 = (2 x Pivot Point) – Low (previous period) Support 1 = (2 x Pivot Point) – High (previous period) Resistance 2 = (Pivot Point – Support 1) + Resistance 1 Support 2 = Pivot Point – (Resistance 1 – Support 1) Resistance 3 = (Pivot Point – Support 2) + Resistance 2 Support 3 = Pivot Point – (Resistance 2 – Support 2) Compile information for the EUR/USD on how far away from each high and low each high and low have been from each estimated resistance (R1, R2, R3) and support level to fully appreciate how successfully pivot points may operate (S1, S2, S3). To perform the computation on your own: • Find the pivot points, support and resistance levels for a certain amount of days. • Subtract the pivot points for support from the day’s actual low (Low – S1, Low – S2, Low – S3). • From the day’s actual high, deduct the resistance pivot points (High – R1, High – R2, High – R3). • For each difference, compute the average. The following are the outcomes since the euro’s introduction (January 1, 1999, with the first trading day on January 4, 1999): • On average, Support 1 is 1 pip below the actual low. • Averaged over all highs, Resistance 1 is 1 pip below the actual high. • On average, Support 2 is 53 pip above the actual low. • Averaging 53 pip below Resistance 2, the actual peak is below that resistance. • Typically, 158 pip above Support 3 marks the exact bottom. • Averaged over all highs, Resistance 3 is 159 pip below the actual high. What are Judging Probabilities? According to the data, the pivot points S1 and R1 are good indicators of the real high and low of the trading day. We then went one step further and determined how many days the high was greater than the R1, R2, and R3 values and how many days the low was lower than each of those values. Therefore, as of October 12, 2006, there have been 2,026 trading days since the euro’s launch. • In 892 instances, or 44% of the time, the actual low was lower than S1. • 853 times—or 42% of the time—the actual high has exceeded R1. • 342 times, or 17% of the time, the actual low was lower than S2. • 354 times, or 17% of the time, the actual high was higher than R2. • In 63 instances, or 3% of the time, the real low was lower than S3. • R3 has been exceeded by the actual high 52 times or 3% of the time. This information is helpful to a trader because it enables them to confidently set a stop loss order below S1 since they know that probability is on their side. For example, if you know that the pair drops below S1 44% of the time, you can do so. Considering that the high for the day reaches R1 just 42% of the time, you could also choose to take gains right below R1. Once more, the odds are in your favour. But it’s crucial to realise that these are only possibilities, not absolutes. The peak is typically 1 pip above R1 and goes above R1 42% of the time. This does not imply that the high will consistently be 1 pip below R1 or that the high will surpass R1 four out of the following ten days. The strength of this knowledge is that it enables you to accurately anticipate areas of support and resistance, provide reference points for stop-loss and limit orders, and, most crucially, reduce risk while improving your chances of profit. Applying the Information Potential supports and barriers include the pivot point and its derivatives. The examples below demonstrate a setup that combines a pivot point with the well-known RSI oscillator. RSI Divergence at Pivot Resistance/Support The profit-to-risk ratio in this transaction is normally high. Due to the recent peak, the danger is clearly defined (or low for a buy). The pivot points in the aforementioned illustrations were determined using weekly data. The aforementioned illustration demonstrates how, between August 16 and 17, R1 held a strong resistance (first circle) around 1.2854, and how the RSI divergence indicated that the upside was constrained. This implies that there may be a chance to short on a break below R1, with a stop at the most recent high and a limit at the pivot point, which is currently the support level: • Sell short at 1.2853. • Stop at the recent high at 1.2885. • Limit at the pivot point at 1.2784. With 32 pip of risk, this initial trade produced a profit of 69 pip. There was a 2.16 reward to risk ratio. The setting was roughly the same the next week. The week started with a rise that reached and briefly crossed above R1 at 1.2908, along with a bearish divergence. When the price declines back below R1, a short signal is created, and at that moment we may sell short with a stop at the most recent high and a limit at the pivot point (which is now support): • Sell short at 1.2907. • Stop at the recent high of 1.2939. • Limit at the pivot point at 1.2802. With only 32 pip of risk, this trade generated a profit of 105 pip. There was a 3.28 benefit-to-risk ratio. Rules that you need to know for Setup Setting pivot points differ for bullish, long traders and bearish, short traders depending on their attitude toward the market. 1. For Shorts • Find a bearish divergence at R1, R2, or R3, the pivot point (most common at R1). • Start a short trade with a stop at the most recent swing high when the price falls back below the reference point (which might be the pivot point, R1, R2, or R3). • At the following level, place a limit (take profit) order. Your first goal would be R1 if you sold at R2. In this situation, prior opposition now becomes support and vice versa. 2. For Longs • At the pivot point, look for bullish divergence at S1, S2, or S3 (most common at S1). • When the price moves back above the reference point (which might be the pivot point, S1, S2, or S3), open a long trade with a stop at the most recent swing low. • If you bought at S2, your initial goal would be S1; otherwise, place a limit (take profit) order at the next level. Prior support turns become the opposition, and vice versa). Final Thoughts The identification of broad price trends may be done by charting pivot points, which are shifts in the direction of market trade. To predict the degree of support or resistance shortly, they use the high, low, and closing data from the previous period. In technical analysis, pivot points can be the most often applied leading signal. There are many other sorts of pivot points, each with its own formulae and derivative formulas, but they all have the same implicit trading philosophy. Pivot points can also show when a significant and sudden inflow of traders is simultaneously entering the market when used in conjunction with other technical tools. These market inflows frequently result in breakouts and profitable trading chances for range-bound forex traders. They can make educated guesses using pivot points as to which significant price points should be used to enter, leave, or set stop losses. Any time frame can have a pivot point calculation. Day traders may use daily data to determine pivot points every day, swing traders can use weekly data to determine pivot points every week, and position traders can use monthly data to determine pivot points at the start of each month. Investors can even utilise annual data to make approximations of important levels for the upcoming year. No of the time range, the analytical approach and trading mentality remain the same. In other words, the computed pivot points provide the trader with an indication of where support and resistance are for the upcoming period, but the trader must always be ready to act, since being prepared is the most crucial aspect of trading.
{"url":"https://fxbrokerreviews.org/blog/forex-pivot-point/","timestamp":"2024-11-07T22:46:18Z","content_type":"text/html","content_length":"116466","record_id":"<urn:uuid:0f5c0f30-3640-4c9e-831e-3ee760c14d96>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00807.warc.gz"}
How To Connect Solar Panels Together to Increase Power? Updated: Mar 22, 2023 Connecting Solar Panels Together The trick here when connecting solar panels together is to choose a connection method that is going to give you the most energy efficient configuration for your particular requirements. Connecting solar panels together is a simple and effective way of increasing your solar power capabilities. Going green is a great idea, and as the sun is our ultimate power source, it makes sense to utilize this energy to power our homes. As solar power becomes more accessible, there is need to know how best to connect the panels together to achieve more power. There are three basic but very different ways of connecting solar panels together and each connection method is designed for a specific purpose. For example, to produce more output voltage or(and) to produce more current. Solar photovoltaic panels can be electrically connected together in series to increase the voltage output, or they can be connected together in parallel to increase the output amperage. Solar PV panels can also be wired together in both series and parallel combinations to increase both the output voltage and current to produce a higher wattage array. Whether you are connecting two or more solar panels, as long as you understand the basic principles of how connecting multiple solar panels together increases power and how each of these wiring methods works, you can easily decide on how to wire your own panels together. After all, connecting solar panels together correctly can greatly improve the efficiency of your solar system. Connecting Solar Panels Together in Series Sometimes the system voltage required for a power inverter is much higher than what a single PV module can produce. In such cases, N-number of PV modules is connected in series to deliver the required voltage level. This series connection of the PV modules is similar to that of the connections of N-number of cells in a module to obtain the required voltage level. Solar panels in series add up or sum the voltages produced by each individual panel, giving the total output voltage of the array as shown. In this method ALL the solar panels are of the same type and power rating. The total voltage output becomes the sum of the voltage output of each panel. Using the same three 6-volt, 3.0-amp panels from above, we can see that when these PV panels are connected together in series, the array will produce an output voltage of 18 Volts (6 + 6 + 6) at 3.0 Amperes, giving 54 Watts (volts x amps) at full sun. Now let’s look at connecting solar panels in series with different nominal voltages but with identical current ratings. Solar Panels in Series of Different Voltages In this method all the solar panels are of different types and power rating but have a common current rating. When they are connected together in series, the array produces 21 volts at 3.0 amps, or 63 watts. Again, the output amperage will remain the same as before at 3.0 amps but the voltage output jumps to 21 volts (5 + 7 + 9). Finally, let’s look at connecting solar panels in series with completely different nominal voltages and different current ratings. Solar Panels in Series of Different Currents In this method all the solar panels are of different types and power rating. The individual panel voltages will add together as before, but this time the amperage will be limited to the value of the lowest panel in the series string, in this case 1 Ampere. Then the array will produce 19 Volts (3 + 7 + 9) at 1.0 Ampere only, or only 19 watts out of a possible 69 watts available reducing the arrays efficiency. We can see that the solar panel rated at 9 volts, 5 amps, will only use one fifth or 20% of its maximum current potential reducing its efficiency and wasting money on the purchase of this solar panel. Connecting solar panels in series with different current ratings should only be used provisionally, as the solar panel with the lowest rated current determines the current output of the whole Now to understand these steps in a more mathematical way. Let’s take an example of a power plant of 2 MW, in which a large number of PV modules are connected in series. (The 2 MW inverter can take input voltage from 600 V to 900 V). Determine the number of modules be connected in series to obtain a maximum power point voltage of 800 V. Also determine the power delivered by this PV array. The parameters of the single PV module are as follows; • Open circuit voltage VOC = 35 V • Voltage at maximum power point VM = 29 V • Short circuit current ISC = 7.2 A • Current at maximum power point IM = 6.4 A Step 1: Note the voltage requirement of the PV array • PV array open-circuit voltage VOCA = Not given • PV array voltage at maximum power point VMA = 800 V Step 2: Note the parameters of PV module that is to be connected in the series string Open circuit voltage VOC = 35 V Voltage at maximum power point VM = 29 V Short circuit current ISC = 7.2 A Current at maximum power point IM = 6.4 A Step 3: Calculate the number of modules to be connected in series N = 27.58 (Higher integer value 28) Take higher integer value 28 modules. Due to the higher integer value of N, the value of VMA and VOCA will also increase. Step 4: Calculating the total power of the PV array Thus, we need 28 PV modules to be connected in series having a total power of 5196.8 W to obtain the desired maximum PV array voltage of 800 V. Connecting Solar Panels Together in Parallel Sometimes to increase the power of the solar PV system, instead of increasing the voltage by connecting modules in series the current is increased by connecting modules in parallel. The current in the parallel combination of the PV modules array is the sum of individual currents of the modules. The parallel combination is achieved by connecting the positive terminal of one module to the positive terminal of the next module and negative terminal to the negative terminal of the next module as shown in the following figure. When you connect solar panels together in parallel, the total voltage output remains the same as it would for a single panel, but the output current becomes the sum of the output of each panel as In this method ALL the solar panels are of the same type and power rating. Using the same three 6 Volt, 3.0 Amp panels as above, the total output of the panels, when connected together in parallel, the output voltage still remains at the same value of 6 volts, but the total amperage has now increased to 9.0 Amperes (3 + 3 + 3), producing 54 watts at full sun. But what if our newly acquired solar panels are non-identical, how will this affect the other panels. We have seen that the currents add together, so no real problem there, just as long as the panel voltages are the same and the output voltage remains constant. Let’s look at connecting solar panels in parallel with different nominal voltages and different current ratings. Solar Panels in Parallel with Different Voltages and Currents Here the parallel currents add up as before but the voltage adjusts to the lowest value, in this case 3 volts. Solar panels must have the same output voltage to be useful in parallel. If one panel has a higher voltage it will supply the load current to the degree that its output voltage drops to that of the lower voltage panel. We can see that the solar panel rated at 9 volts, 5 amps, will only operate at a maximum voltage of 3 volts as its operation is being influenced by the smaller panel, reducing its efficiency and wasting money on the purchase of this higher power solar panel. Connecting solar panels in parallel with different voltage ratings is not recommended as the solar panel with the lowest rated voltage determines the voltage output of the whole array. Then when connecting solar panels together in parallel it is important that they ALL have the same nominal voltage value, but it is not necessary that they have the same ampere value. Let’s take an example, calculate the number of modules required in parallel to obtain maximum power point current IMA of 40 A. The system voltage requirement is 14 V. The parameters of the single PV module are as follows; • Open circuit voltage VOC = 18 V • Voltage at maximum power point VM = 14 V • Short circuit current ISC = 6.5 A • Current at maximum power point IM = 6 A Step 1: Note the current requirement of the PV array • PV array short-circuit current ISCA = Not given • PV array current at maximum power point IMA = 40 A Step 2: Note the parameters of PV module that is to be connected in parallel Open circuit voltage VOC = 18 V Voltage at maximum power point VM = 14 V Short circuit current ISC = 6.5 A Current at maximum power point IM = 6 A Step 3: Calculate the number of modules to be connected in parallel N = 6.66 (Higher integer value 7) Take higher integer value 7 modules. Due to the higher integer value of N, the value of IMA and ISCA will also increase. Step 4: Calculating the total power of the PV array Thus, we need 7 PV modules to be connected in parallel having a total power of 588 W to obtain the desired maximum PV array current of 40 A. Series – Parallel Connection of Modules – Mixed Combination When we need to generate large power in a range of bigger-watts for large PV system plants we need to connect modules in series and parallel. In large PV plants first, the modules are connected in series known as “PV module string” to obtain the required voltage level. Then many such strings are connected in parallel to obtain the required current level for the system. The following figures shows the connection of modules in series and parallel. To simplify this, take a look at right in the following figure. Module 1 and module 2 are connected in series let’s call it the string 1. The open-circuit voltage of the string 1 is added. Whereas the short-circuit current of string 1 is the same. Similar to string 1, the modules 3 and 4 make up the string 2. The open-circuit voltage of the string 2 is added. Whereas the short-circuit current of string 2 is the same. Now string 1 and string 2 are connected in parallel, nowhere the voltage remains the same but the current is added. Now let’s take an example for the mix – combination. We have to determine the number of modules required for a PV array having the following parameters; • Array power PMA = 40 KW • Voltage at maximum power point of array VMA = 400 V • Current at maximum power point of array IMA = 100 A The module for the design of the array has the following parameters; • Voltage at maximum power point of module VM = 70 V • Current at maximum power point of module IM = 17 A Step 1: Note the current, voltage, and power requirement of the PV array • PV array power PMA = 40 KW • PV array voltage at maximum power point VMA = 400 V • PV array current at maximum power point IMA = 100 A Step 2: Note the PV module parameters • Voltage at maximum power point of module VM = 70 V • Current at maximum power point of module IM = 17 A Step 3: Calculate the number of modules to be connected in series and parallel NS = 5.71 (Higher integer value 6) Take higher integer value 6 modules. Due to the higher integer value of NS, the value of VMA and VOCA will also increase. NP = 5.88 (Higher integer value 6) Take higher integer value 6 modules. Due to the higher integer value of NP, the value of IMA and ISCA will also increase. Step 4: Calculating the total power of the PV array Thus, we need 36 PV modules. A string of six modules connected in series and six such strings connected in parallel, having a total power of 42840 W to obtain the desired maximum PV array current of 100 A and voltage of 400 V. Note that due to higher integer value of 6 the maximum PV array current and voltage is 102 A and 420 V respectively. Connecting solar panels together to form bigger arrays is not all that complicated. How many series or parallel strings of panels you make up per array depends on what amount of voltage and current you are aiming for. If you are designing a 12-volt battery charging system than parallel wiring is perfect. If you are looking at a higher voltage grid connected system, than you’re probably going to want to go with a series or series-parallel combination depending on the number of solar panels you have. But for a simple reference in regards to how to connect solar panels together in either parallel or series wiring configurations, just remember that parallel wiring = more amperes, and series wiring = more voltage, and with the right type and combination of solar panels you can power just about any electrical device you may have in your home.
{"url":"https://www.lotalinks.com/post/how-to-connect-solar-panels-together-to-increase-power","timestamp":"2024-11-14T07:57:07Z","content_type":"text/html","content_length":"1050516","record_id":"<urn:uuid:9fb99884-ee61-4248-9596-b9272242796a>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00200.warc.gz"}
math library Defines a 2-dimensional axis-aligned bounding box between a min and a max position. Defines a 3-dimensional axis-aligned bounding box between a min and a max position. Defines a frustum constructed out of six Planes. Defines a result of an intersection test. 2D Matrix. Values are stored in column major order. 3D Matrix. Values are stored in column major order. 4D Matrix. Values are stored in column major order. Defines a 3-dimensional oriented bounding box defined with a center, halfExtents and axes. Defines a quad by four points. Defines a Quaternion (a four-dimensional vector) for efficient rotation calculations. Defines a Ray by an origin and a direction. Defines a sphere with a center and a radius. Defines a triangle by three points. Base class for vectors 2D column vector. 3D column vector. 4D column vector. degrees2Radians → const double Constant factor to convert and angle from degrees to radians. radians2Degrees → const double Constant factor to convert and angle from radians to degrees. absoluteError(dynamic calculated, dynamic correct) → double Returns absolute error between calculated and correct. The type of calculated and correct must match and can be any vector, matrix, or quaternion. buildPlaneVectors(Vector3 planeNormal, Vector3 u, Vector3 v) → void Sets u and v to be two vectors orthogonal to each other and planeNormal. catmullRom(double edge0, double edge1, double edge2, double edge3, double amount) → double Do a catmull rom spline interpolation with edge0, edge1, edge2 and edge3 by amount. cross2(Vector2 x, Vector2 y) → double 2D cross product. vec2 x vec2. cross2A(double x, Vector2 y, Vector2 out) → void 2D cross product. double x vec2. cross2B(Vector2 x, double y, Vector2 out) → void 2D cross product. vec2 x double. cross3(Vector3 x, Vector3 y, Vector3 out) → void 3D Cross product. degrees(double radians) → double Convert radians to degrees. dot2(Vector2 x, Vector2 y) → double 2D dot product. dot3(Vector3 x, Vector3 y) → double 3D dot product. makeFrustumMatrix(double left, double right, double bottom, double top, double near, double far) → Matrix4 Constructs a new OpenGL perspective projection matrix. makeInfiniteMatrix(double fovYRadians, double aspectRatio, double zNear) → Matrix4 Constructs a new OpenGL infinite projection matrix. makeOrthographicMatrix(double left, double right, double bottom, double top, double near, double far) → Matrix4 Constructs a new OpenGL orthographic projection matrix. makePerspectiveMatrix(double fovYRadians, double aspectRatio, double zNear, double zFar) → Matrix4 Constructs a new OpenGL perspective projection matrix. makePlaneProjection(Vector3 planeNormal, Vector3 planePoint) → Matrix4 Returns a transformation matrix that transforms points onto the plane specified with planeNormal and planePoint. makePlaneReflection(Vector3 planeNormal, Vector3 planePoint) → Matrix4 Returns a transformation matrix that transforms points by reflecting them through the plane specified with planeNormal and planePoint. makeViewMatrix(Vector3 cameraPosition, Vector3 cameraFocusPosition, Vector3 upDirection) → Matrix4 Constructs a new OpenGL view matrix. mix(double min, double max, double a) → double Interpolate between min and max with the amount of a using a linear interpolation. The computation is equivalent to the GLSL function mix. pickRay(Matrix4 cameraMatrix, num viewportX, num viewportWidth, num viewportY, num viewportHeight, num pickX, num pickY, Vector3 rayNear, Vector3 rayFar) → bool On success, rayNear and rayFar are the points where the screen space pickX, pickY intersect with the near and far planes respectively. radians(double degrees) → double Convert degrees to radians. relativeError(dynamic calculated, dynamic correct) → double Returns relative error between calculated and correct. The type of calculated and correct must match and can be any vector, matrix, or quaternion. setFrustumMatrix(Matrix4 perspectiveMatrix, double left, double right, double bottom, double top, double near, double far) → void Constructs an OpenGL perspective projection matrix in perspectiveMatrix. setInfiniteMatrix(Matrix4 infiniteMatrix, double fovYRadians, double aspectRatio, double zNear) → void Constructs an OpenGL infinite projection matrix in infiniteMatrix. fovYRadians specifies the field of view angle, in radians, in the y direction. aspectRatio specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio of x (width) to y (height). zNear specifies the distance from the viewer to the near plane (always positive). setModelMatrix(Matrix4 modelMatrix, Vector3 forwardDirection, Vector3 upDirection, double tx, double ty, double tz) → void Constructs an OpenGL model matrix in modelMatrix. Model transformation is the inverse of the view transformation. Model transformation is also known as "camera" transformation. Model matrix is commonly used to compute a object location/orientation into the full model-view stack. setOrthographicMatrix(Matrix4 orthographicMatrix, double left, double right, double bottom, double top, double near, double far) → void Constructs an OpenGL orthographic projection matrix in orthographicMatrix. setPerspectiveMatrix(Matrix4 perspectiveMatrix, double fovYRadians, double aspectRatio, double zNear, double zFar) → void Constructs an OpenGL perspective projection matrix in perspectiveMatrix. setRotationMatrix(Matrix4 rotationMatrix, Vector3 forwardDirection, Vector3 upDirection) → void Constructs a rotation matrix in rotationMatrix. setViewMatrix(Matrix4 viewMatrix, Vector3 cameraPosition, Vector3 cameraFocusPosition, Vector3 upDirection) → void Constructs an OpenGL view matrix in viewMatrix. View transformation is the inverse of the model transformation. View matrix is commonly used to compute the camera location/orientation into the full model-view stack. smoothStep(double edge0, double edge1, double amount) → double Do a smooth step (hermite interpolation) interpolation with edge0 and edge1 by amount. The computation is equivalent to the GLSL function smoothstep. unproject(Matrix4 cameraMatrix, num viewportX, num viewportWidth, num viewportY, num viewportHeight, num pickX, num pickY, num pickZ, Vector3 pickWorld) → bool On success, Sets pickWorld to be the world space position of the screen space pickX, pickY, and pickZ.
{"url":"https://pub.dev/documentation/fluttershy/latest/math/math-library.html","timestamp":"2024-11-02T11:21:45Z","content_type":"text/html","content_length":"48334","record_id":"<urn:uuid:44c6523d-6394-4e55-8a67-557034b349cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00285.warc.gz"}
The Monte Carlo Method Written by Mike James Friday, 01 November 2024 Pi Program The program is fairly easy. First we set up a loop that will repeat the random "shot" at the target: var num = 1000; var total = 0; var hit = 0; for (var i = 1; i <= num; i++) { Next we generate two random numbers x and y and work out how for this point is from the center of the circle at 0.5,0.5: var x = Math.random(); var y = Math.random(); var r = Math.sqrt(Math.pow(x - 0.5, 2) + Math.pow(y - 0.5, 2)); If the point is closer than 0.5 then we have a hit: if (r < 0.5) hit++; Text1.value = hit / total * 4; If you run the program (using Firefox in this case) for a range of values you get something like: num Pi 10 3.2 100 3.16 1000 3.1 10000 3.1512 100000 3.14 1000000 3.143372 As you can see convergence isn't fast but it generating random numbers is fairly cheap. The complete program as an HTML page is: <!DOCTYPE html> <html lang="en"> <meta charset="utf-8" /> <title>Random Pi</title> <input type="text" id="Text1"/> var num = 1000000; var total = 0; var hit = 0; for (var i = 1; i <= num; i++) { var x = Math.random(); var y = Math.random(); var r = Math.sqrt(Math.pow(x - 0.5, 2) + Math.pow(y - 0.5, 2)); if (r < 0.5) hit++; Text1.value = hit / total * 4; More Than Just Areas You might be convinced that you can work out areas, volumes and even Pi by random numbers but where next? The answer is that you can estimate the results of just about any numerical computation using randomness in much the same way. The only problem is that explaining how it works would get us ever deeper into mathematics so a simple example will have to do. You can use the Monte Carlo method to solve linear equations like where b is a known vector and A is a known matrix. Usually this problem is solved by inverting the matrix or a similar numerical method but when this is large finding the inverse is an expensive problem. Again randomness comes to the rescue and it is possible to estimate the x vector in much the same way as the needle dropping estimated Pi. The actual steps to get to the solution are complicated but what about just working out the matrix product y=Ab, where A and b are known. This isn’t such a big problem but it is a step on the way to solving Ax=b and at first it doesn't seem to have anything at all to do with random numbers. See if you can work out how to do it first. Last Updated ( Friday, 01 November 2024 )
{"url":"https://www.i-programmer.info/babbages-bag/274-monte-carlo-or-bust.html?start=2","timestamp":"2024-11-02T09:32:22Z","content_type":"text/html","content_length":"30644","record_id":"<urn:uuid:8af553fb-7acf-4f9e-b695-4a5f16172cd0>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00548.warc.gz"}
Select an individual value plot Create an individual value plot that displays the values of multiple-level groups. Multiple-level groups are displayed as clustered groups of symbols. The worksheet must include multiple columns of numeric or date/time data and at least one column of categorical data. Groups are defined by separate columns and by values or unique combinations of values in categorical variables. For example, the following worksheet contains the diameters of pipes produced each week for three weeks, on two machines. Week 1, Week 2, and Week 3 contain the numeric data. Machine contains the categorical data. The graph shows the individual pipe diameters produced by each machine, clustered by week. C1 C2 C3 C4 Week 1 Week 2 Week 3 Machine 5.19 5.57 8.73 1 5.53 5.11 5.01 2 4.78 5.76 7.59 1 ... ... ... ... For more information, go to Create an individual value plot of multiple Y variables with groups.
{"url":"https://support.minitab.com/en-us/minitab/help-and-how-to/graphs/individual-value-plot/create-the-graph/select-an-individual-value-plot/","timestamp":"2024-11-02T17:02:23Z","content_type":"text/html","content_length":"25400","record_id":"<urn:uuid:93adb54c-7c1c-436e-96d6-011a0a8b6a9c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00639.warc.gz"}
Sample size determination: why, when, how? power, MCRD, variability (+ possible additional assumptions / parameters, e.g. number of events, correlations, …) no matter how complex • Lots of published formula (search Google Sc )), books, software, and of course… statisticians – need to find the one right for your study • A post hoc power calculation is worthless • Instead report effect size + 95% CI
{"url":"https://speakerdeck.com/graemeleehickey/sample-size-determination-why-when-how","timestamp":"2024-11-09T18:02:38Z","content_type":"text/html","content_length":"119706","record_id":"<urn:uuid:116c8627-9e72-48c3-b8d3-bb1c7bb489ae>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00334.warc.gz"}
Tsl - eloquence - negate In glsl, we had: In tsl was wondering what would be the most eloquent way to write please ? Anyone know what would be the most optimized one please ? Another one ? TSL generates a tree of nodes, that is transformed into an AST (abstract syntax tree), which in turn is compiled into a shader language (WGSL or GLSL). This means that sometimes what appears to be uniquely optimal in TSL is not necessarily uniquely optimal as GLSL (or is not translated into GLSL at all). You may experiment with the TSL Editor and check how TSL code is compiled into GLSL. These three TSL instructions: const val1 = float(-1.).mul(sin(angle)).toVar(); const val2 = sin(angle).mul(-1).toVar(); const val3 = negate(sin(angle)).toVar(); are translated into GLSL as: nodeVar1 = ( -1.0 * sin( nodeVar2 ) ); nodeVar3 = ( sin( nodeVar2 ) * -1.0 ); nodeVar4 = ( - sin( nodeVar2 ) ); but I believe the GLSL compiler will (should!) generate the same code for all three cases. PS. I added .toVar in order to isolate the code, otherwise it gets embedded in expressions that use them. 5 Likes Thanks so much
{"url":"https://discourse.threejs.org/t/tsl-eloquence-negate/72445","timestamp":"2024-11-10T12:46:33Z","content_type":"text/html","content_length":"27675","record_id":"<urn:uuid:dd5063cd-3492-42e8-91ba-2366f6dfeda8>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00825.warc.gz"}
Undecidability in Theory of Computation In this article, we have explained concept of Undecidability in Theory of Computation along with idea of undecidable languages like Halting Problem. Prerequisite: Decidability in Theory of Computation Table of contents: 1. Definition of Undecidability 2. Language A[TM] in undecidable Let us get started with Undecidability in Theory of Computation. Definition of Undecidability Undecidability is defined as follows: Σ is a set of alphabet and A is a Language such that A is proper subset of Σ*. Σ* is a set of all possible strings. A is a undecidable language if there exists NO Computational Model (such as Turing Machine) M such that for every string w that belong to Σ*, the following two conditions hold: 1. If w belongs to A, then the computation of Turing Machine M on w as input, ends in the accept state. 2. If w does not belong to A, then computation of Turing Machine M on w, ends in the reject state. Note there may be a Computation Machine M for which one condition hold but does not hold both conditions. A is an Undecidable Language. In simple words, A is a undecidable language if there is NO Turing Machine or an Algorithm that correctly tells if a given string w is a part of the language or not in finite time. Language A[TM] in undecidable Language A[TM] is defined as follows: { (M ,w) : M is a Turing Machine (TM) that accepts the string w } Therefore, in this language DFA is enough. We do not need stronger models. The language Language A[CFG] is Undecidable Language. Note this is the only undecidable language. This means that there exists no Algorithm that can determine if a given Algorithm M can accept or not a given string w in finite time. The decidability and undecidability of each language can be proved. We have omitted the proof of these for now. The summary is as follows: Language Machine Decidability Language A[DFA] Deterministic Finite Automaton Yes Language A[NFA] Non-Deterministic Finite Automaton Yes Language A[CFG] Context Free Grammar (CFG) Yes Language A[TM] Turing Machine (TM) No The Halting Problem Halt is defined as: Halt = { (P, w) : P is a program that terminates execution with w as input }. The Language Halt is undecidable. With this article at OpenGenus, you must have the complete idea of Undecidability in Theory of Computation.
{"url":"https://iq.opengenus.org/undecidability-in-theory-of-computation/","timestamp":"2024-11-07T22:38:49Z","content_type":"text/html","content_length":"29636","record_id":"<urn:uuid:5af0e0bf-f801-4054-bf05-32a6149ffab0>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00802.warc.gz"}
The Direct Method In Soliton Theory This summer I'm going to The Direct Method In Soliton Theory This summer I'm going to learn a mini-course about soliton theory ("Soliton equations and symmetric functions" in LHSM (Russian summer school in mathematics).The web-page of this course is (warning: it's in Russian, so here's my translation of its program: The Direct Method in Soliton Theory This program has intrigued me (despite my main interest is representation theory and related topics) and now I'm looking for a book in which the soliton theory will be outlined according to this program (i.e. using Hirota derivatives from very beginning), maybe even without any mentions of symmetric functions.. This paper presents the q-analogue of Toda lattice system of difference equations by discussing the q-discretization in three aspects: differential-q-difference, q-difference-q-difference and q-differential-q-difference Toda equation. The paper develops three-q-soliton solutions, which are expressed in the form of a polynomial in power functions, for the differential-q-difference and q-difference-q-difference Toda equations by Hirota direct method. Furthermore, it introduces q-Hirota D-operator and presents the q-differential-q-difference version of Toda equation. Finally, the paper presents its solitary wave like a solution in terms of q-exponential function and explains the nonexistence of further solutions in terms of q-exponentials by the virtue of Hirota perturbation. Inspired by this fact, the purpose of this paper is to present the q-analogue of Toda lattice system of difference equations and discuss the applicability of Hirota direct method for constructing multi-soliton solutions. There are several ways to q-discretize a given continuous equation. By q-discretization we mean q-difference equations determined by q-difference operator and additionally q-differential equations constructed by q-derivative operator. Therefore, we present q-discretization in three aspects: differential-q-difference, q-difference-q-difference and q-differential-q-difference Toda equation. We show that Hirota direct method allows to produce three-soliton solutions for the differential-q-difference and q-difference-q-difference Toda equations. We emphasize that the solutions not only possess soliton behaviors but have additional power counterparts for q-discrete variables. We call such soliton solutions as q-soliton solutions. Therefore, unlike the Toda equation [14] or discrete-time Toda equation [15], the differential-q-difference and q-difference-q-difference Toda equations have soliton solutions in the form of a polynomial in power functions. On the other hand, we conclude that Hirota direct method fails to derive multi-soliton solutions of the q-differential-q difference Toda equation. Furthermore, it is not possible to obtain multi-soliton solutions for any q-differential-q-difference or q-differential-difference type of equations by means of Hirota perturbation. This form is called bilinear form of F[u]. We should remark that some integrable equations can only be transformed to a single bilinear form while some of them can be written as a combination of bilinear forms. On the other hand, for some equations it is not possible to find a proper transformation. The next step towards Hirota direct method, is introducing the so-called Hirota D-operator which is a binary differential operator exhibiting a new calculus. To be more precise, although the equation (82) can be put into Hirota bilinear form (83), in both cases Hirota perturbation fails to produce further solutions. Moreover, it is straightforward to conclude that for any q-differential-q-difference or q-differential-difference type of equation even if the equation has a Hirota bilinear form, it is not possible to derive multisoliton solutions by the use of Hirota Direct method. The structure of this paper is as follows. In the second section, some basic knowledge on time-space scale are introduced. The third section is important that new AKNS system is constructed, and specific parameters are selected to obtain the KdV equation on time-space scale, which can be simplified into classical and discrete KdV equation. In Section 4, the single-soliton solution of KdV equation under the time scale framework is constructed by using the idea of direct method, and the nonlinear dispersion relationship of the equation is obtained. In particular, solutions of KdV equation on two different time scales are obtained. The last part is our conclusion. In this paper, a method of generating integrable system on time-space scale is introduced. Starting from the -dynamical system, the coupled KdV equation on time-space scale is derived from the Lax pair and zero curvature equation. When different time scales are considered, different soliton equations can be obtained. In addition, the variable transformation of the KdV equation on time-space scale is constructed to obtain its single-soliton solution. Solitons are an important class of solutions to nonlinear differential equations which appear in different areas of physics and applied mathematics. In this study we provide a general overview of the Hirota method which is one of the most powerful tool in finding the multi-soliton solutions of nonlinear wave and evaluation equations. Bright and dark soliton solutions of nonlinear Schrödinger equation are discussed in detail In Soliton theory, Hirota direct method is most efficient tool for seeking one soliton solutions or multi-soliton solutions of integrable nonlinear partial differential equations. The key step of the Hirota direct method is to transform the given equation into its Hirota bilinear form. Once the bilinear form of the given equation is found, we can construct the soliton and multi-soliton solutions of that model. Many interesting characteristics of Pfaffians were discovered through studies of soliton equations. In this thesis, a shallow water wave model and its bilinear equation are investigated. Using Hirota direct method, we obtain the multi-soliton solutions and Pfaffian solutions for a shallow water wave model. AbstractWe construct the $N$-solitons solution in the Novikov-Veselovequation from the extended Moutard transformation and the Pfaffianstructure. Also, the corresponding wave functions are obtainedexplicitly. As a result, the property characterizing the$N$-solitons wave function is proved using the Pfaffian expansion.This property corresponding to the discrete scattering data for$N$-solitons solution is obtained in [arXiv:0912.2155] from the $\overline\partial$-dressing method. • ReferencesAthorne C., Nimmo J.J.C., On the Moutard transformation for integrable partial differential equations, Inverse Problems 7 (1991), 809-826. • Basalaev M.Yu., Dubrovsky V.G., Topovsky A.V., New exact solutions with constant asymptotic values at infinity of the NVN integrable nonlinear evolution equation via $\overline\partial$-dressing method, arXiv:0912.2155. • Bogdanov L.V., Veselov-Novikov equation as a natural two-dimensional generalization of the Korteweg-de Vries equation, Theoret. Math. Phys. 70 (1987), 219-223. • Chang J.H., The Gould-Hopper polynomials in the Novikov-Veselov equation, J. Math. Phys. 52 (2011), 092703, 15 pages, arXiv:1011.1614. • Dubrovin B.A., Krichever I.M., Novikov S.P., The Schrödinger equation in a periodic field and Riemann surfaces, Sov. Math. Dokl. 17 (1976), 947-952. • Dubrovsky V.G., Formusatik I.B., New lumps of Veselov-Novikov integrable nonlinear equation and new exact rational potentials of two-dimensional stationary Schrödinger equation via $\overline\ partial$-dressing method, Phys. Lett. A 313 (2003), 68-76. • Dubrovsky V.G., Formusatik I.B., The construction of exact rational solutions with constant asymptotic values at infinity of two-dimensional NVN integrable nonlinear evolution equations via the $ \overline\partial$-dressing method, J. Phys. A: Math.Gen. 34 (2001), 1837-1851. • Grinevich P.G., Rational solitons of the Veselov-Novikov equations are reflectionless two-dimensional potentials at fixed energy, Theoret. Math. Phys. 69 (1986), 1170-1172. • Grinevich P.G., Manakov S.V., Inverse scattering problem for the two-dimensional Schrödinger operator, the $\overline\partial$-method and nonlinear equations, Funct. Anal. Appl. 20 (1986), • Grinevich P.G., Mironov A.E., Novikov S.P., New reductions and nonlinear systems for 2D Schrödinger operators, arXiv:1001.4300. • Hirota R., The direct method in soliton theory, Cambridge Tracts in Mathematics, Vol. 155, Cambridge University Press, Cambridge, 2004. • Hu H.C., Lou S.Y., Construction of the Darboux transformaiton and solutions to the modified Nizhnik-Novikov-Veselov equation, Chinese Phys. Lett. 21 (2004), 2073-2076. • Hu H.C., Lou S.Y., Liu Q.P., Darboux transformation and variable separation approach: the Nizhnik-Novikov-Veselov equation, Chinese Phys. Lett. 20 (2003), 1413-1415, nlin.SI/0210012. • Ishikawa M., Wakayama M., Applications of minor-summation formula. II. Pfaffians and Schur polynomials, J. Combin. Theory Ser. A 88 (1999), 136-157. • Kaptsov O.V., Shan'ko Yu.V., Trilinear representation and the Moutard transformation for the Tzitzéica equation, solv-int/9704014. • Kodama Y., KP solitons in shallow water, J. Phys. A: Math. Gen. 43 (2010), 434004, 54 pages, arXiv:1004.4607. • Kodama Y., Maruno K., $N$-soliton solutions to the DKP equation and Weyl group actions, J. Phys. A: Math. Gen. 39 (2006), 4063-4086, nlin.SI/0602031. • Kodama Y., Williams L.K., KP solitons and total positivity for the Grassmannian, arXiv:1106.0023. • Kodama Y., Williams L.K., KP solitons, total positivity, and cluster algebras, Proc. Natl. Acad. Sci. USA 108 (2011), 8984-8989, arXiv:1105.4170. • Konopelchenko B.G., Introduction to multidimensional integrable equations. The inverse spectral transform in 2+1 dimensions, Plenum Press, New York, 1992. • Konopelchenko B.G., Landolfi G., Induced surfaces and their integrable dynamics. II. Generalized Weierstrass representations in 4D spaces and deformations via DS hierarchy, Stud. Appl. Math. 104 (2000), 129-169. • Krichever I.M., A characterization of Prym varieties, Int. Math. Res. Not. 2006 (2006), Art. ID 81476, 36 pages, math.AG/0506238. • Liu S.Q., Wu C.Z., Zhang Y., On the Drinfeld-Sokolov hierarchies of $D$ type, Int. Math. Res. Not. 2011 (2011), 1952-1996, arXiv:0912.5273. • Manakov S.V., The method of the inverse scattering problem, and two-dimensional evolution equations, Russian Math. Surveys 31 (1976), no. 5, 245-246. • Matveev V.B., Salle M.A., Darboux transformations and solitons, Springer Series in Nonlinear Dynamics, Springer-Verlag, Berlin, 1991. • Mironov A.E., A relationship between symmetries of the Tzitzéica equation and the Veselov-Novikov hierarchy, Math. Notes 82 (2007), 569-572. • Mironov A.E., Finite-gap minimal Lagrangian surfaces in $\mathbb C\rm P^2$, in Riemann Surfaces, Harmonic Maps and Visualization, OCAMI Stud., Vol. 3, Osaka Munic. Univ. Press, Osaka, 2010, 185-196, arXiv:1005.3402. • Mironov A.E., The Veselov-Novikov hierarchy of equations, and integrable deformations of minimal Lagrangian tori in $\mathbb C\rm P^2$, Sib. Electron. Math. Rep. 1 (2004), 38-46, math.DG/0607700. • Moutard M., Note sur les équations différentielles linéaires du second ordre, C.R. Acad. Sci. Paris 80 (1875), 729-733. • Moutard M., Sur la construction des équations de la forme $\frac1z \frac\partial^2z\partial x\partial y =\lambda(xy)$, qui admettent une intégrale générale explicite, J. de. l'Éc. Polyt. 28 (1878), 1-12. • Nimmo J.J.C., Darboux transformations in (2+1)-dimensions, in Applications of Analytic and Geometric Methods to Nonlinear Differential Equations (Exeter, 1992), NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., Vol. 413, Kluwer Acad. Publ., Dordrecht, 1993, 183-192. • Nimmo J.J.C., Hall-Littlewood symmetric functions and the BKP equation, J. Phys. A: Math. Gen. 23 (1990), 751-760. • Novikov S.P., Two-dimensional Schrödinger operators in periodic fields, J. Sov. Math. 28 (1985), 1-20. • Novikov S.P., Veselov A.P., Two-dimensional Schrödinger operator: inverse scattering transform and evolutional equations, Phys. D 18 (1986), 267-273. • Ohta Y., Pfaffian solutions for the Veselov-Novikov equation, J. Phys. Soc. Japan 61 (1992), 3928-3933. • Orlov A.Yu., Shiota T., Takasaki K., Pfaffian structures and certain solutions to BKP hierarchies. I. Sums over partitions, arXiv:1201.4518. • Shiota T., Prym varieties and soliton equations, in Infinite-Dimensional Lie Algebras and Groups (Luminy-Marseille, 1988), Adv. Ser. Math. Phys., Vol. 7, World Sci. Publ., Teaneck, NJ, 1989, • Stembridge J.R., Nonintersecting paths, Pfaffians, and plane partitions, Adv. Math. 83 (1990), 96-131. • Taimanov I.A., Tsarev S.P., Two-dimensional rational solitons and their blowup via the Moutard transformation, Theoret. Math. Phys. 157 (2008), 1525-1541, arXiv:0801.3225. • Takasaki K., Dispersionless Hirota equations of two-component BKP hierarchy, SIGMA 2 (2006), 057, 22 pages, nlin.SI/0604003. • Veselov A.P., Novikov S.P., Finite-zone, two-dimensional, potential Schrödinger operators. Explicit formulas and evolution equations, Sov. Math. Dokl. 30 (1984), 588-591. Previous article Next article Contents of Volume 9 (2013) 041b061a72
{"url":"https://www.ilpegasso.com/group/mysite-231-group/discussion/166b09a2-1368-4a5c-8aa4-58d5f2840728","timestamp":"2024-11-03T02:44:55Z","content_type":"text/html","content_length":"1050488","record_id":"<urn:uuid:4adb7b4e-c1a2-40a7-af80-40cc00c7a607>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00521.warc.gz"}
Hillol Kargupta • University of Maryland, USA According to our database , Hillol Kargupta authored at least 108 papers between 1991 and 2019. Collaborative distances: IEEE Fellow IEEE Fellow 2011, "For contributions to distributed data mining". Book In proceedings Article PhD thesis Dataset Other Online presence: On csauthors.net: Analyzing Driving Data using the ADAPT Distributed Analytics Platform for Connected Vehicles. Proceedings of the 2019 IEEE International Conference on Data Science and Advanced Analytics, 2019 In-network outlier detection in wireless sensor networks. Knowl. Inf. Syst., 2013 Breaching Euclidean distance-preserving data perturbation using few known inputs. Data Knowl. Eng., 2013 Peer-to-peer distributed text classifier learning in PADMINI. Stat. Anal. Data Min., 2012 Introduction to data mining for sustainability. Data Min. Knowl. Discov., 2012 Connected Cars: How Distributed Data Mining Is Changing the Next Generation of Vehicle Telematics Products. Proceedings of the Sensor Systems and Software - Third International ICST Conference, 2012 Making Data Analysis Ubiquitous: My Journey Through Academia and Industry. Proceedings of the Journeys to Data Mining, 2012 Scalable, asynchronous, distributed eigen monitoring of astronomy data streams. Stat. Anal. Data Min., 2011 Multi-objective optimization based privacy preserving distributed data mining in Peer-to-Peer networks. Peer-to-Peer Netw. Appl., 2011 A Sustainable Approach for Demand Prediction in Smart Grids using a Distributed Local Asynchronous Algorithm. Proceedings of the 2011 Conference on Intelligent Data Understanding, 2011 MineFleet®: The Vehicle Data Stream Mining System for Ubiquitous Environments. Proceedings of the Ubiquitous Knowledge Discovery - Challenges, Techniques, Applications, 2010 A local asynchronous distributed privacy preserving feature selection algorithm for large peer-to-peer networks. Knowl. Inf. Syst., 2010 MineFleet®: an overview of a widely adopted distributed vehicle performance data mining system. Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2010 The next generation of transportation systems, greenhouse emissions, and data mining. Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2010 PADMINI: A Peer-to-Peer Distributed Astronomy Data Mining System and a Case Study. Proceedings of the 2010 Conference on Intelligent Data Understanding, 2010 A Generic Local Algorithm for Mining Data Streams in Large Distributed Systems. IEEE Trans. Knowl. Data Eng., 2009 Approximate Distributed K-Means Clustering over a Peer-to-Peer Network. IEEE Trans. Knowl. Data Eng., 2009 A communication efficient probabilistic algorithm for mining frequent itemsets from a peer-to-peer network. Stat. Anal. Data Min., 2009 On the Privacy of Euclidean Distance Preserving Data Perturbation CoRR, 2009 Scalable Distributed Change Detection from Astronomy Data Streams Using Local, Asynchronous Eigen Monitoring Algorithms. Proceedings of the SIAM International Conference on Data Mining, 2009 A Local Distributed Peer-to-Peer Algorithm Using Multi-Party Optimization Based Privacy Preservation for Data Mining Primitive Computation. Proceedings of the Proceedings P2P 2009, 2009 Assured Information Sharing Life Cycle. Proceedings of the IEEE International Conference on Intelligence and Security Informatics, 2009 TagLearner: A P2P Classifier Learning System from Collaboratively Tagged Text Documents. Proceedings of the ICDM Workshops 2009, 2009 A Survey of Attack Techniques on Privacy-Preserving Data Perturbation Methods. Proceedings of the Privacy-Preserving Data Mining - Models and Algorithms, 2008 Guest Editors' Introduction: Special Section on Intelligence and Security Informatics. IEEE Trans. Knowl. Data Eng., 2008 Distributed Identification of Top-l Inner Product Elements and its Application in a Peer-to-Peer Network. IEEE Trans. Knowl. Data Eng., 2008 Distributed Decision-Tree Induction in Peer-to-Peer Systems. Stat. Anal. Data Min., 2008 A Scalable Local Algorithm for Distributed Multivariate Regression. Stat. Anal. Data Min., 2008 Distributed probabilistic inferencing in sensor networks using variational approximation. J. Parallel Distributed Comput., 2008 An Efficient Local Algorithm for Distributed Multivariate Regression in Peer-to-Peer Networks. Proceedings of the SIAM International Conference on Data Mining, 2008 Distributed Linear Programming and Resource Management for Data Mining in Distributed Environments. Proceedings of the Workshops Proceedings of the 8th IEEE International Conference on Data Mining (ICDM 2008), 2008 Topic 5: Parallel and Distributed Databases. Proceedings of the Euro-Par 2008, 2008 Thoughts on Human Emotions, Breakthroughs in Communication, and the Next Generation of Data Mining. Proceedings of the Next Generation of Data Mining., 2008 Privacy-Preserving Data Analysis on Graphs and Social Networks. Proceedings of the Next Generation of Data Mining., 2008 Algorithms for Distributed Data Stream Mining. Proceedings of the Data Streams - Models and Algorithms, 2007 Distributed Top-K Outlier Detection from Astronomy Catalogs using the DEMAC System. Proceedings of the Seventh SIAM International Conference on Data Mining, 2007 Multi-party, Privacy-Preserving Distributed Data Mining Using a Game Theoretic Framework. Proceedings of the Knowledge Discovery in Databases: PKDD 2007, 2007 Uniform Data Sampling from a Peer-to-Peer Network. Proceedings of the 27th IEEE International Conference on Distributed Computing Systems (ICDCS 2007), 2007 Peer-to-Peer Data Mining, Privacy Issues, and Games. Proceedings of the Autonomous Intelligent Systems: Multi-Agents and Data Mining, 2007 Random Projection-Based Multiplicative Data Perturbation for Privacy Preserving Distributed Data Mining. IEEE Trans. Knowl. Data Eng., 2006 Orthogonal Decision Trees. IEEE Trans. Knowl. Data Eng., 2006 Client-side web mining for community formation in peer-to-peer environments. SIGKDD Explor., 2006 On-board Vehicle Data Stream Monitoring Using MineFleet and Fast Resource Constrained Monitoring of Correlation Matrices. New Gener. Comput., 2006 Clustering distributed data streams in peer-to-peer environments. Inf. Sci., 2006 Distributed Data Mining in Peer-to-Peer Networks. IEEE Internet Comput., 2006 Local L2-Thresholding Based Data Mining in Peer-to-Peer Systems. Proceedings of the Sixth SIAM International Conference on Data Mining, 2006 K-Means Clustering Over a Large, Dynamic Network. Proceedings of the Sixth SIAM International Conference on Data Mining, 2006 An Attacker's View of Distance Preserving Maps for Privacy Preserving Data Mining. Proceedings of the Knowledge Discovery in Databases: PKDD 2006, 2006 Random-data perturbation techniques and privacy-preserving data mining. Knowl. Inf. Syst., 2005 Distributed data mining and agents. Eng. Appl. Artif. Intell., 2005 Orthogonal Decision Trees for Resource-Constrained Physiological Data Stream Monitoring Using Mobile Devices. Proceedings of the High Performance Computing, 2005 Topic 5 - Parallel and Distributed Databases, Data Mining and Knowledge Discovery. Proceedings of the Euro-Par 2005, Parallel Processing, 11th International Euro-Par Conference, Lisbon, Portugal, August 30, 2005 A collaborative distributed privacy-sensitive decision support system for monitoring heterogeneous data sources. Proceedings of the 2005 International Symposium on Collaborative Technologies and Systems, 2005 A Fourier Spectrum-Based Approach to Represent Decision Trees for Mining Data Streams in Mobile Environments. IEEE Trans. Knowl. Data Eng., 2004 Learning Functions Using Randomized Genetic Code-Like Transformations: Probabilistic Properties and Experimentations. IEEE Trans. Knowl. Data Eng., 2004 Collective Mining of Bayesian Networks from Distributed Heterogeneous Data. Knowl. Inf. Syst., 2004 VEDAS: A Mobile and Distributed Data Stream Mining System for Real-Time Vehicle Monitoring. Proceedings of the Fourth SIAM International Conference on Data Mining, 2004 Privacy-Sensitive Bayesian Network Parameter Learning. Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 2004 Orthogonal Decision Trees. Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 2004 Communication Efficient Construction of Decision Trees Over Heterogeneously Distributed Data. Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 2004 Multi-agent Systems and Distributed Data Mining. Proceedings of the Cooperative Information Agents VIII, 8th International Workshop, 2004 Dependency detection in MobiMine: a systems perspective. Inf. Sci., 2003 Analysis of privacy preserving random perturbation techniques: further explorations. Proceedings of the 2003 ACM Workshop on Privacy in the Electronic Society, 2003 Privacy Sensitive Distributed Data Mining from Multi-party Data. Proceedings of the Intelligence and Security Informatics, First NSF/NIJ Symposium, 2003 Towards a Pervasive Grid. Proceedings of the 17th International Parallel and Distributed Processing Symposium (IPDPS 2003), 2003 On the Privacy Preserving Properties of Random Data Perturbation Techniques. Proceedings of the 3rd IEEE International Conference on Data Mining (ICDM 2003), 2003 Homeland security and privacy sensitive data mining from multi-party distributed resources. Proceedings of the 12th IEEE International Conference on Fuzzy Systems, 2003 MobiMine: Monitoring the Stock Market from a PDA. SIGKDD Explor., 2002 Book Reviews. Review of Advances in Distributed and Parallel Knowledge Discovery. Pattern Anal. Appl., 2002 Toward Machine Learning Through Genetic Code-like Transformations. Genet. Program. Evolvable Mach., 2002 Editorial: Computation in Gene Expression. Genet. Program. Evolvable Mach., 2002 Distributed, Collaborative Data Analysis from Heterogeneous Sites Using a Scalable Evolutionary Technique. Appl. Intell., 2002 Dependency Detection in MobiMine and Random Matrices. Proceedings of the Principles of Data Mining and Knowledge Discovery, 2002 Constructing Simpler Decision Trees from Ensemble Models Using Fourier Analysis. Proceedings of the 2002 ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, 2002 A Random Matrix-Based Approach for Dependency Detection from Data Streams. Proceedings of the 2002 ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, 2002 A Resampling Technique for Learning the Fourier Spectrum of Skewed Data. Proceedings of the 2002 ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, 2002 Distributed Clustering Using Collective Principal Component Analysis. Knowl. Inf. Syst., 2001 Distributed Multivariate Regression Using Wavelet-Based Collective Data Mining. J. Parallel Distributed Comput., 2001 Gene Expression and Fast Construction of Distributed Evolutionary Representation. Evol. Comput., 2001 Computation in Gene Expression. Complex Syst., 2001 A Striking Property of Genetic Code-like Transformations. Complex Syst., 2001 A Fourier Analysis Based Approach to Learning Decision Trees in a Distributed Environment. Proceedings of the First SIAM International Conference on Data Mining, 2001 Data mining "to go": ubiquitous KDD for mobile and distributed environments. Proceedings of the Tutorial notes of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining, 2001 Mining Decision Trees from Data Streams in a Mobile Environment. Proceedings of the 2001 IEEE International Conference on Data Mining, 29 November, 2001 Distributed Web Mining Using Bayesian Networks from Multiple Data Streams. Proceedings of the 2001 IEEE International Conference on Data Mining, 29 November, 2001 Toward ubiquitous mining of distributed data. Proceedings of the Data Mining and Knowledge Discovery: Theory, 2001 Report from the Workshop on Distributed and Parallel Knowledge Discovery, ACM SIGKDD-2000. SIGKDD Explor., 2000 The Genetic Code-Like Transformations and Their Effect on Learning Functions. Proceedings of the Parallel Problem Solving from Nature, 2000 Collective Principal Component Analysis from Distributed, Heterogeneous Data. Proceedings of the Principles of Data Mining and Knowledge Discovery, 2000 Distributed and parallel knowledge discovery (workshop session - title only). Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining, 2000 Computation in Genetic Code-Like Transformations. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '00), 2000 Collective, Hierarchical Clustering from Distributed, Heterogeneous Data. Proceedings of the Large-Scale Parallel Data Mining, 1999 Further Experimentations on the Scalability of the GEMGA. Proceedings of the Parallel Problem Solving from Nature, 1998 SEARCH, Computational Processes in Evolution, and Preliminary Development of the Gene Expression Messy Genetic Algorithm. Complex Syst., 1997 Scalable, Distributed Data Mining - An Agent Architecture. Proceedings of the Third International Conference on Knowledge Discovery and Data Mining (KDD-97), 1997 DNA To Protein: Transformations and Their Possible Role in Linkage Learning. Proceedings of the 7th International Conference on Genetic Algorithms, 1997 Web Based Parallel/Distributed Medical Data Mining Using Software Agents. Proceedings of the AMIA 1997, 1997 Polynominal Complexity Blackbox Search: Lessons From the SEARCH Framework. Proceedings of 1996 IEEE International Conference on Evolutionary Computation, 1996 The Gene Expression Messy Genetic Algorithm. Proceedings of 1996 IEEE International Conference on Evolutionary Computation, 1996 The Performance of the Gene Expression Messy Genetic Algorithm On Real Test Functions. Proceedings of 1996 IEEE International Conference on Evolutionary Computation, 1996 SEARCH, Blackbox Optimization, And Sample Complexity. Proceedings of the 4th Workshop on Foundations of Genetic Algorithms. San Diego, 1996 The gene expression messy genetic algorithm for financial applications. Proceedings of the IEEE/IAFE 1996 Conference on Computational Intelligence for Financial Engineering, 1996 A Temporal Sequence Processor Based on the Biological Reaction-diffusion Process. Complex Syst., 1995 Signal-to-noise, Crosstalk, and Long Range Problem Difficulty in Genetic Algorithms. Proceedings of the 6th International Conference on Genetic Algorithms, 1995 Information Transmission in Genetic Algorithm and Shannon's Second Theorem. Proceedings of the 5th International Conference on Genetic Algorithms, 1993 RapidAccurate Optimization of Difficult Problems Using Fast Messy Genetic Algorithms. Proceedings of the 5th International Conference on Genetic Algorithms, 1993 Ordering Genetic Algorithms and Deception. Proceedings of the Parallel Problem Solving from Nature 2, 1992 System Identification with Evolving Polynomial Networks. Proceedings of the 4th International Conference on Genetic Algorithms, 1991
{"url":"https://www.csauthors.net/hillol-kargupta/","timestamp":"2024-11-07T01:06:45Z","content_type":"text/html","content_length":"119271","record_id":"<urn:uuid:a53dfefd-3614-4f02-9429-286e03a0a222>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00064.warc.gz"}
Individual prediction explanations Individual prediction explanations¶ Dataiku DSS provides the capability to compute individual explanations of predictions for all Visual ML models that are trained using the Python backend (this includes custom models and algorithms from plugins, but not Keras/Tensorflow models). The explanations are useful for understanding the prediction of an individual row and how certain features impact it. A proportion of the difference between the row’s prediction and the average prediction can be attributed to a given feature, using its explanation value. In other words, you can think of an individual explanation as a set of feature importance values that are specific to a given prediction. DSS provides two modes for using the individual prediction explanations feature: In the model results¶ The Individual explanations tab in the results page of a model is an interactive interface for providing a better understanding of the model. As an example, consider the case where the global feature importance values for a black-box model may not be enough to understand its internal workings. In such a situation, you can use this mode to compute the explanations for extreme predictions (i.e. for records that output low and high predictions) and to display the contributions of the most influential features. You can then decide whether these features are useful from a business perspective. For speed, DSS uses different samples of the dataset to compute explanations, depending on the splitting mechanism that was used during the model design phase. □ If the model was built on training data (using a train/test split), DSS computes the explanations on a sample of the test set. □ If cross-validation was used during the model design phase, then DSS computes the explanations on a sample of the whole dataset. You can modify settings for the sample by clicking the gear icon in the top right of the individual explanations page. The interactive interface also allows you to specify values for other parameters, such as: □ The number of highly influential features to explain (or desired number of explanations). □ The method to use for computing explanations. □ An approximate number of records of interest at the low and high ends of the predicted probabilities. □ A column to use for identifying the explanations of each record. The result of the computation is a list of cards, one card per prediction. The cards on the left side of the page are for the records that give low predictions, while those on the right side of the page are for high predictions. Within the cards, bars appear next to the most influential features to reflect the explanation values. Green bars oriented to the right reflect positive impacts on the prediction, while red bars oriented to the left reflect negative impacts. If the model was trained in a container, then this computation will be implemented in a container. Otherwise, the computation will be implemented on the DSS server. The same is true for other post-training computations like partial dependence plots and subpopulation analysis. With the scoring recipe¶ The individual prediction explanations feature is also available within a scoring recipe (after deploying a model to the flow). If your model is compatible, i.e. a Visual ML model that is trained using the Python backend (this includes custom models and algorithms from plugins, but not Keras/Tensorflow models), then the option for Output explanations is available during scoring. Activating this option allows you to specify the number of highly influential features to explain, and to select the computation method. It also forces the scoring to use the original Python backend. By default, the scoring recipe is performed in memory. However, you can choose to perform the execution in a container. Running the scoring recipe outputs the predictions and an explanations column. The explanations column contains a JSON object with features as keys and computed influences as values, and can easily be unnested in a subsequent preparation recipe. Computation methods¶ To compute the individual prediction explanations, DSS provides two methods based on: • The Shapley values • The Individual Conditional Expectation (ICE) Method 1: Based on the Shapley values¶ This method estimates the average impact on the prediction of switching a feature value from the value it takes in a random sample to the value it takes in the sample to be explained, while a random number of feature values have already been switched in the same way. To understand how the method based on Shapley values works, consider that you have a data sample \(X\), and you want to explain the impact of one of its features \(i\) on the output prediction \(y\). This method implements these main steps: 1. Create a data sample \(X^\prime\) by selecting a random sample from your dataset and switching a random selection of its features (excluding the feature of interest \(i\)) to their corresponding values in \(X\). Then compute the prediction \(y^\prime\) for \(X^\prime\). 2. Switch the value of the feature \(i\) in \(X^\prime\) to the corresponding value in \(X\), to create the modified sample \(X^{\prime\prime}\). Then compute its prediction \(y^{\prime\prime}\). 3. Repeat the previous steps multiple times, and average the predictions \(y^{\prime\prime}\), to determine an average prediction. 4. Finally, compute the difference between the average prediction and \(y^\prime\) to obtain the impact that feature \(i\) has on the prediction of \(X\). The number of random samples used in the implementation depends on the expected precision and the non-linearity of the model. As a guideline, multiplying the number of samples by four improves the precision by a factor of two. Also, a highly non-linear model may require 10 times more samples to achieve the same precision of a linear model. Because of these factors, the required number of random samples may range from 25 to 1000. Finally, the overall computation time is proportional to the number of highly influential features to be explained and the number of random samples to be scored. Method 2: Based on ICE¶ This method explains the impact of a feature on an output prediction by computing the difference between the prediction and the average of predictions obtained from switching the feature value randomly. This method is a simplification of the Shapley-value-based method. To understand how the method based on ICE works, consider that you have a data sample \(X\), and you want to explain the impact of one of its features \(i\) on the output prediction \(y\). This method implements these main steps: 1. Switch the value of the feature \(i\) in \(X\) to a value chosen randomly. Then compute its prediction \(y^\prime\). 2. Repeat the previous step multiple times, and average the predictions \(y^\prime\), to determine an average prediction. 3. Finally, compute the difference between the average prediction and \(y\) to obtain the impact that the feature \(i\) has on the prediction of \(X\). For binary classification, DSS computes the explanations on the logit of the probability (not on the probability itself), while for multiclass classification, the explanations are computed for the class with the highest prediction probability. More about the computation methods¶ 1. The ICE-based method is faster to implement than the Shapley-based method. When ICE is used with scoring, the computation time (about 20 to 50 times longer than with simple scoring) is faster than that for the method based on Shapley values. 2. A major drawback of using the ICE-based method is that the sum of explanations over all the feature values is not equal to the difference between the prediction and the average prediction. This discrepancy can result in a distortion of the explanations for models that are non-linear. 3. The performance of the ICE-based method is model-dependent. Therefore, when choosing a computation method, consider comparing the explanations from both methods on the test dataset. You can then use the ICE-based method (for speed) if you are satisfied with its approximation. For more details on the implementation of these methods in DSS, see this document. • The computation for individual prediction explanations can be time-consuming. For example, to compute explanations for the five most influential features, expect a computation time multiplied by a factor of 10 to 1000, compared to simple scoring. This factor also depends on the characteristics of the features and of the computation method used. • For a given prediction, individual explanations approximate the contribution of each feature to the difference between the prediction and the average prediction. When that difference is small, the computation must be done with more random samples, to account for random noise and to return meaningful explanations. • When the number of highly influential features to be explained is fewer than the number of features in the dataset, it is possible to miss an important feature’s explanation. This can happen if the feature has a low global feature importance, that is, the feature may be important only for a small fraction of the samples in the dataset. • Individual prediction explanations are available only for Visual ML models that are trained using the Python backend (this includes custom models and algorithms from plugins, but not Keras/ Tensorflow or computer vision models). • Using the Scoring Recipe with individual explanations can be very memory consuming. You can tweak the Scoring Recipe parameters to decrease the memory footprint. However, this will slow down the run. In particular: ☆ If you are using “Shapley” method, the “Sub chunk size” and “Number of Monte Carlo steps”. Decreasing “Sub chunk size” should have the biggest impact ☆ In “Advanced” tab, the “Python batch size”
{"url":"https://doc.dataiku.com/dss/latest/machine-learning/supervised/explanations.html","timestamp":"2024-11-02T23:16:20Z","content_type":"text/html","content_length":"46937","record_id":"<urn:uuid:469511fb-df44-4114-98c5-1fe304605a03>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00699.warc.gz"}
Basketball Release Angle Calculator Home » Simplify your calculations with ease. » Sports Calculators » Basketball Release Angle Calculator The Basketball Release Angle Calculator is a dynamic tool designed to assist players in perfecting their shooting technique by calculating the optimal angle of release. This calculator leverages physics to help players understand how various elements like shot height, distance, and initial velocity influence their shooting angle, thereby increasing their accuracy and consistency on the Formula of Basketball Release Angle Calculator Identify the Key Variables: To use the calculator, you need to input the following variables: • h_r: Release height of the basketball (in feet or meters) • h_t: Height of the basketball hoop (typically 10 feet or 3.05 meters) • d: Horizontal distance from the player to the hoop (in feet or meters) • v_0: Initial velocity of the basketball (in feet per second or meters per second) • g: Acceleration due to gravity (32.2 feet per second² or 9.8 meters per second²) Calculate the Required Angle θ: The release angle is determine using the formula: θ = arctan((v_0^2 ± √(v_0^4 – g(gd² + 2v_0²(h_t – h_r)))) / (gd)) This formula accounts for the initial velocity, the difference in elevation between the release point and the hoop, and the distance to the hoop, providing two potential angles due to the “±” in the equation, which correspond to either a higher arc or a lower arc shot. Table for General Terms and Calculations This table provides explanations for key terms associated with the Basketball Release Angle Calculator: Term Definition Release Height (h_r) The height from which the basketball is released. Hoop Height (h_t) The standard height of the basketball hoop. Distance (d) The horizontal distance from the player to the basketball hoop. Initial Velocity (v_0) The speed at which the basketball is thrown. Gravity (g) The acceleration due to Earth’s gravity. Release Angle (θ) The angle at which the basketball should be released for a successful shot. Example of Basketball Release Angle Calculator Consider a player who is 6 feet tall and wants to make a shot from 15 feet away with an initial ball velocity of 20 feet per second: • h_r: 7 feet (player’s height plus arm extension) • h_t: 10 feet • d: 15 feet • v_0: 20 feet/sec • g: 32.2 feet/sec² Using the calculator: • θ = arctan((20^2 ± √(20^4 – 32.2(32.215² + 220²*(10 – 7)))) / (32.2*15)) This calculation would yield two possible angles, giving the player options for a high arc or a low arc shot, allowing them to choose based on their style and situation on the court. Most Common FAQs How does the release angle affect the success of a basketball shot? The release angle is crucial because it determines the trajectory of the ball, which in turn affects whether the shot makes it to the hoop without interference from defenders. Can I use this calculator for any distance and height? Yes, the calculator can be use for any distance and player height, as long as the variables are enter correctly. What is the best release angle for a basketball shot? While the optimal angle varies with each situation, angles between 45° and 55° are generally consider ideal for achieving both maximum height and distance coverage. Leave a Comment
{"url":"https://calculatorshub.net/sports-calculators/basketball-release-angle-calculator/","timestamp":"2024-11-06T21:07:16Z","content_type":"text/html","content_length":"118449","record_id":"<urn:uuid:37e10bb6-2a8a-4020-ae93-ee5b35425f4b>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00209.warc.gz"}
imc WAVE noise Measuring sound power and sound pressure with imc WAVE noise Download demo version With the imc WAVE noise analyzer, you can determine the acoustic properties according to IEC 61672 Class 1 as a time-weighted and integrated sound level meter. The signals of several microphones can be processed in parallel, online and synchronously. The characteristics of a channel can be entered manually from the calibration sheet of the sensor or read in automatically via the TEDS chip. Since, when measuring according to standards, a calibration measurement of the microphone must be performed before and after each measurement, which is a central function in imc WAVE noise. The measured value can be adjusted to the applied acoustic calibration value or simply a control measurement is performed. Both procedures are documented with the measurement channel. In addition to the sound analysis, the microphone signal can be calculated in the octave or third octave spectrum and as an FFT analysis in real time and displayed as a 2D or 3D diagram (waterfall). As a further function, a complete sound power measurement according to ISO 374x is also available. • A-, B-, C-, Z-weighting filter, Fast, Slow, Impulse, Peak and Leq • Sound level meter according to IEC 61672, IEC 60651 and IEC 60804 • 1/1-tel and 1/3-tel octave analysis according to IEC 61260 • FFT analysis (up to 131072 points) • Sound intensity level with directional sign • Sound intensity terc analysis • Sound intensity FFT analysis • 1/12 octave and 1/24 octave band spectrum • Loudness and loudness spectrum according to Zwicker ISO 532-1 • Sharpness or sharpness • Articulation index • Standardized acoustic measurements • Acceptance and certification measurements related to noise emission • Product qualifications • Product optimization in the development area • Noise comfort in vehicles: measurements for qualification and optimization • Holistic investigations of causes, propagation paths and effects of sound and vibrations • Contribution of acoustic expertise in general application areas of physical measurement technology NVH stands for: Noise - Vibration – Harshness Humans can hear vibrations as noise, feel vibrations or perceive harshness. The phenomena also merge into each other. When the frequency of vibrations ranges between 0,1 Hz to 20 Hz, these can be perceived by the human body and influence our well-being. If vibrations are somewhat higher in the frequency range, from approximately 20 Hz to 100 Hz, vibrations are both perceptible by the body and audible via the air and are classified as harshness. Since the perceptible vibrations decrease significantly from approximately 50-100 Hz, the frequency range from approximately 100 Hz to 20 kHz is referred to as noise, i.e. unpleasant airborne sound that we hear. A sound level meter is a data acquisition system that records noise similiar to the human ear. It provides results that are objective and reproducible. Therefore the sound level originates in change of the air pressure within a frequency range of 20 Hz to 20 kHz, it can be recorded with a microphone and further calculated with frequency-weighting and time-weighting. DIN-ISO 61672 describes the analysis and the method of data acquisition. Here, accuracy classes - class 1 and class 2, are give, that are dependable from the microphones used for test and measurement. The sound level is always in dB. The lowest sound that a healthy human can hear, is about 20 µPa at 1 kHz. This is less than one part in a billion of the normal air pressure, on which the sound signals are superimposed. To reach the threshold of pain of sound pressure fluctuations, the sound is about 10 million times larger than the hearing threshold. In order to get a handy scale of the sound, the dB scaling has been introduced. Thereby the factor 10 in the sound pressure corresponds to an increase of the level by 20dB. Usually the sound level is labeled with an addition in parentheses, e.g. dB(A). The (A) stands for the weighting of the level. However, (B), (C) or (D) can also be written in parentheses, whereby the B-weighting is almost no longer used.The C-weighting is used for impulsive noise and the D-weighting for aircraft noise. This is less than one billionth of the normal air pressure on which the sound signals are superimposed. To reach the threshold of pain of sound pressure fluctuations, the sound is about 10 million times larger than the hearing threshold. In order to get a handy scale of the sound, the dB scaling has been introduced. Thereby the factor 10 in the sound pressure corresponds to an increase of the level by 20dB. Usually the sound level is labeled with an addition in parentheses, e.g. dB(A). The (A) stands for the weighting of the level. However, (B), (C) or (D) can also be written in parentheses, whereby the B-weighting is almost no longer used.The C-weighting is used for impulsive noise and the D-weighting for aircraft noise. What does frequency weighting mean? frequencies are rated with four frequency weighting filters, A, B, C or Z. for low sound levels of approximately 20-40 Phone (blue curve). Today, the A-rating is used in most cases. In favor of easy handling a better frequency response adjustment is dispensed. for sound levels of approximately 50-70 Phone (red curve). B-rating is not used any more. for high sound levels of approximately 80-90 Phone (orange curve). The C-rating is used in noise protection, when the A-rating has to be expanded when sound levels consist of impulsive and tonal means without frequency weighting What does time-weighting mean? The time weighting determines the floating RMS value of the frequency-weighted sound signal. It is a compromise between fast following of the fluctuating vibration signal and the readability of the measured value and is also called display inertia. FAST-rating: Τ = 125 msec, description FAST: quick increase and quick decrease of the signal. increase time T = 63% decrease time T = 36% , decrease time: -34,7 dB/sec SLOW-rating: Τ= 1000 msec description SLOW: slow increase and slow decrease of the signal. increase time T = 63% decrease time T = 36% , decrease time: -4.3 dB/sec IMPULSE-Bewertung: Τincrease = 35 msec, Tdecrease = 1500 msec, description IMPULSE: very quick increase and very slow decrease of the signal. increase time T = 63% decrease time T = 36% , decrease: -2,9 dB/sec What is frequency analysis? A frequency analysis can be calculated narrow-band as FFT analysis or clearly laid out as as a third-octave and octave analysis. In frequency analysis, a basic distinction is made between two methods: third-octave and octave analysis or FFT analysis. What is a one-third octave analysis? To get detailed information about a complex sound signal, more information has to be identified about the composition of the signal's frequency. The different frequencies can be best explained with a musical scale. The octave [lat.], describes the interval of 8 diatonic steps away from the first note. In acoustics it denotes the tonal sound that has twice the frequency, related to a first sound. Since antiquity, the representation of the western tonal system has been based on the octave. The third octave analysis (third [lat.], the third tone) is a frequency analysis with relatively constant frequency resolution, i.e. that the center frequency fm of a bandpass filter in relation to the bandwidth (fB=fO-fU) is the same for all frequency bands. The upper cutoff frequency fO and the lower cutoff frequency fU of a bandpass filter are at an amplitude attenuation of -3 dB (factor 0.707). The relative bandwidth of the octave is fB = 0.707, the third octave is fB = 0.23 and the 1/12th octave is fB = 0.059. The one-third octave spectrum allows an evaluation of the spectral line distribution, for example of a sound signal. Its advantage is the ability to rate the logarithmic frequency of the human ear. The one-third octave filters roughly correspond in their bandwidths to the frequency groups that reach the discriminating ability of the ear. Because of this, their discriminating power is sufficient for many psychoacoustic problems, including loudness determination. A frequency analysis can be calculated narrowband as FFT analysis or clearly represented as third octave and octave analysis. What is a FFT (fast fourier transform) analysis? The Fast Fourier Transform (FFT) is a fast computational algorithm for calculating the discrete Fourier transform (DFT). The algorithm developed by James Cooley and John W. Tukey (1965) uses computational advantages that arise with a number of 2 to the power of N values. In modern analysis software packages, one is no longer dependent on the number of values of 2 to the N, because if the number of FFT points is not a power of 2, the signal is interpolated to the corresponding higher sampling frequency. In this way, various parameters can be set with imc WAVE. • Frequency weighting: A, B, C or Z • Averaging: none, Leq from start • Windows: Rectangle, Hamming, Hanning, Blackman Blackman-Harris and Flat-Top • Overlap: 0%, 10%, 25%, 33.33%, 50%, 66.66%, 75%, 90% • Diff./Int.: differentiate, dual differentiate, integrate, dual integrate. • Points: 128.....131072 • Log.axis: Yes / No • Reference value dB 20 µPa = 2 E-05 Pa • Display of: Bandwidth, resolution and output rate
{"url":"https://www.imc-tm.com/products/measurement-software/imc-wave-nvh-analysis/noise","timestamp":"2024-11-14T05:30:43Z","content_type":"text/html","content_length":"93806","record_id":"<urn:uuid:fd424eab-5f80-401f-8cc1-415f66b46b6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00544.warc.gz"}
Genetic Algorithm Framework - Theory 7.2. Genetic Algorithm Framework - Theory In recent literature, several approaches are reported to solve the different problems of single runs of GA: 1. Massart and Leardi [98],[256] use a very refined algorithm for the variable selection, which is based on parallel runs of many GA with different combinations of test and calibration data. Then a validation step is performed to find the best variable subset. The GA is a hybrid algorithm using a stepwise backward elimination of variables to find the smallest possible subset of variables. Although this approach is very promising, Jouan-Rimbau et al. [255] showed that this algorithm is still partly subject to chance correlation. 2. In [99] Leardi et al. use 100 runs of GA with the same calibration and test data sets. The final model is obtained by adding systematically the variables, which are ranked according to the frequency of selection of the GA runs and by using the combination with the smallest error of prediction. In [97] this algorithm is modified by the different GA runs learning from each other. 3. In [126] the predictions are averaged by several models found by different GA runs. Yet, the average prediction was not better than the prediction by a single model. 4. In [254] 10 runs of GA are performed by using different calibration and test data subsets. The final model uses all variables, which were selected at least 5 times, whereby this limit is rather The genetic algorithm framework proposed in this work picks several elements of the studies mentioned above and is presented in the flow diagram in figure 44. The framework can be divided into three steps. The first step consists of multiple parallel runs of the GA presented in section 2.8.9 and in section 7.1 using different calibration and test data subsets (yellow boxes in the flow diagram). Variables, which are represented higher than average in the final population of each GA run, are collected over all GA runs and are ranked according to the frequency of appearance in the final populations. The second step of the framework finally selects the variables in an iterative procedure by adding the variables to the neural network model according to their rank in a stepwise procedure. The neural network is evaluated by the use of different calibration and test data subsets (green boxes in figure 44). The RMSE of prediction of all test data sets is compared with the RMSE of the previous model. If the RMSE is lower (see section 10.2), the last variable is accepted and the procedure is repeated adding the next important variable until the predictions are not improved any more. In the third step, the final neural network topology is determined. First, the number of hidden neurons of a single hidden layer is optimized in an iterative procedure, which is shown in figure 45. Starting with fully connected neural networks with 1 hidden neuron additional fully connected neurons are added until the error of prediction of the test data doesn't improve any more, whereby the l different test data subsets are generated by a data subsampling procedure. Finally, this neural network topology is trained with the complete data set several times, and the neural net with the smallest error of crossvalidation should be used as final optimized model and should be validated by an external data set not used during the complete variable selection algorithm. In all three major steps of the framework, the complete data set is split several times into a calibration (75 %) and a test (25 %) subset, which was done by a random sub­sampling procedure (see section 2.4) resulting in rather pessimistic predictions of the test data. Conse­quently, according to expression (16) models are preferred, which are more predictive and which yield a better As already stated in section 2.8.5, the choice of a in the fitness function (16) influences the numbers of variables being selected during each run of a GA. A too high value of a ignores partly the accuracy of the neural nets and ends in only few variables being selected. Consequently, there might be too few variables selected in the first step to be added to the neural net in the second step. This problem can be recognized by all variables with a ranking higher than "0" being used for the neural net in the second step. On the other side, a too low value of a results in too many variables being selected. This can be detected by the absence of a differentiation of the variables in the ranking. An empirical way to select an optimal a is based on running a single GA with different values of a and on choosing that a, which results in the selection of the number of variables expected to be needed for the calibration. A good choice to start with is setting a to "1" for these single runs of the GA. Yet, preliminary studies showed that the parallel runs of the GA make the framework quite robust towards the choice of a and to the population size, which is suggested to be set to the number of variables to select from. Although the framework seems to be complex on the first sight, this robustness renders the algorithm quite user-friendly. figure 44: Flow chart of the genetic algorithm framework. figure 45: Optimization of the number of hidden neurons. This figure is a detailed flow chart of the blue box of the genetic algorithm framework shown in figure 44.
{"url":"http://frank-dieterle.de/phd/7_2.html","timestamp":"2024-11-02T21:05:33Z","content_type":"text/html","content_length":"22236","record_id":"<urn:uuid:4fe1581e-751c-4c8d-be36-32c64c623fde>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00392.warc.gz"}
Loudspeaker impedance measurement using a multimeter and 2 resistors Loudspeaker impedance measurement using a multimeter How to measure speaker impedance? Making a loudspeaker impedance measurement is not a as easy as you might think. If you compare to resistance, impedance changes value with frequency. Therefore, an impedance measurement is actually a graph, not a number. Normally, there is specialized hardware that does this stuff in a second, automatically plotting the graph. Anyway, let’s be resourceful and make a loudspeaker impedance measurement with what we have. Equipment needed The title states that only a multi-meter and 2 resistors are needed, but the measurement requires additional items, which most likely you already have. The resistors can be found at your local electronics store for basically no money. The list : Here is how the excel spreadsheet looks like. By the end of the article, it will be all filled out, and the impedance graph will show on the right. Room EQ Wizard will be used as a signal generator software, in conjunction with the amplifier. Normally, you would have a stand-alone signal generator, but a good one is pretty expensive and doesn’t have other uses. I’m guessing you already have a computer. Since you are making a loudspeaker impedance measurement, I’m guessing you already have an amplifier too. The software is free. Loudspeaker impedance measurement method The technique we are using in this tutorial is the voltage divider method, with some additional tweaks. Normally you would have a standalone signal generator which can maintain a constant voltage. In that case we would measure the resistor just once. However, in our tweaked scenario, we will have to measure the resistor just as many times as we measure the speaker. This will make more sense later Advantages : • Equipment readily available or cheap/easy to come by. • It doesn’t take as much time and less tedious than other methods (current source method). • Speaker is connected to an amplifier. This will emulate a real-life scenario, since the amplifier’s damping will be taken into account. Disadvantages : • High impedance drivers may affect the measurement accuracy. Now, to give a general picture on how the loudspeaker impedance measurement works, here is a diagram : The basic principle is that you set a level for your amplifier. With the speaker in place, set the generator to a certain frequency you are interested in. Take a reading on the volt-meter. Swap the speaker with the resistor and take another voltage reading. Since the resistance of the resistor in known, you can calculate the resistance of the speaker at that particular frequency. Write it down, and repeat this step for different frequencies, until you have sufficient data to plot the graph in the excel spreadsheet. Bear in mind that you will not make this swap every step of the way. First measure the speaker for all the frequencies you are interested in, and then switch to the resistor. Shouldn’t take that much Step 1 : Room EQ Wizard (REW) setup First of all, download the software here. Normally you would have to make a lot of adjustments for the microphone and stuff, if you plan to use this software for what it was intended to, but we are going to use only the signal generator. How to design loudspeakers - video courses Go to Preferences -> Preferences, and make sure that your output device is set correctly (your sound card, most likely). Then, go ahead and click the signal generator icon. Once there, make sure that the output is set to both channels, and that the RMS level is set to its highest point (-3.0). As for the signal type, select sine wave. The multi-meter I’m using is the Fluke 177 (Amazon affiliate paid link) which has a frequency counter. As a result, I can check if REW is working correctly. As you can see, REW works great as a frequency generator, and the numbers are spot-on. Step 2 : Set your amplifier level The amplifier needs to have a fixed volume level for the whole process. First of all, hook up your speaker in the rig and set the multi-meter to AC volts. Next, set the signal generator to pink noise and press the play icon. Now fiddle with the amplifier volume knob until you reach a reading of around 150 mV on the multi-meter. When playing pink noise, the voltage will not be stable. In conclusion, if you have an average function on your multi-meter, use it. Try to reach close to the 150 mV mark. Step 3 : Establish the points of interest To complete the excel graph, we have to make some measurements at certain frequency points. We decide these values, but there are certain frequencies which must be on the graph. First of all, you have to decide the resolution of the graph. If you look closely, you will see that the file contains 3 spreadsheets : Impedance x35, x50 and x75. This indicates how many measurement points the plot will contain. As a result, a plot containing more measurement points will be more resolute/accurate, but it will take more time. Regarding this loudspeaker impedance measurement tutorial, the Impedance x35 is the pick, since it is less time consuming. For a driver in free-air or inside a closed box For this type of setup, there will be a couple of points of interest that must appear on the graph. This is how the graph should look like : You must find those 2 frequencies : 1. The frequency where the impedance spikes. This corresponds to the resonant frequency of the driver / closed box. 2. After the resonant frequency, the impedance starts to decrease, reaches a minimum, and then rises again due to voice coil inductance. That minimum point is important. For a driver in a bass-reflex box There are 4 points of interest when we are talking about bass-reflex. Here is how the graph should look like : Find these 4 frequencies: 1. Bass reflex has 2 impedance spikes. Sweep the frequency generator until you find the highest value of the first spike. 2. Then the impedance will decrease, and start to rise again. Find the minimum point. This corresponds to the resonant frequency (tuning) of the box. 3. Then, the impedance will rise again until it reaches a maximum. Note the frequency of when that happens. 4. Finally, just like with the sealed box, the impedance will drop off, reach a minimum, and then rise again due to voice coil inductance. Note the frequency when the minimum occurs. Other points of interest When making a loudspeaker impedance measurement, to have a complete graph, you must have these frequencies in your chart : • The 2/4 frequency points mentioned above for sealed / bass reflex. • 2 additional points (or more) close to the previous mentioned values. For example : If the resonant frequency is at 57 Hz, make sure you plot 50 Hz and 65 Hz as well. • The starting points of the graph : 10 Hz and 20.000 Hz. • Every decade : 100 Hz, 1.000 Hz, 10.000 Hz. • Every half of decade : 50 Hz, 500 Hz, 5.000 Hz. • All octaves : 20 Hz, 40 Hz, 80 Hz, 160 Hz, 320 Hz. 640 Hz. 1.250 Hz, 2.500 Hz (5 kHz and 10 kHz already been mentioned). • Half of octaves : 30 Hz, 60 Hz, 120 Hz, 250 Hz, 2.000 Hz, 4.000 Hz, 7.500 Hz, 15.000 Hz. • Add frequency points to complete the table and fill obvious gaps : 12.500 Hz, 17.500 Hz etc. Step 4 : Measure the resistor Unless you have an expensive resistor, most likely it’s not exactly 10 Ohm. Therefore, we have to measure it. Measure the resistance of the test leads by shorting them. After that, measure the resistor and subtract the value of the test leads. In our case 10 – 0.1 = 9.9 Ohms. Insert that value into the Step 5 : Complete the frequency column in the spreadsheet We are going to measure the impedance of a driver in free air : (Seas CA 18 RNX) (Amazon affiliate paid link). The driver has a pole piece in its construction. As a result, I placed the driver on 2 books so the vent is not obstructed. As mentioned earlier, we have to find the highest impedance point (at resonance) and the minimum point, before the impedance start to rise. How to design loudspeakers - video courses Place the speaker in the rig, and set the multi-meter to AC volts. Now sweep the frequency, until you see the highest value. Be smart about this. Since it is a mid-bass driver, the resonance will be somewhere between 40 – 80 Hz. If it’s a tweeter aim for 500 – 2000 Hz. After you found it, note down the frequency and the voltage. Then, increase the frequency. You will see the voltage dropping as frequency increases, but at a certain point it will start to rise. Note this minimum point (frequency and voltage), just before it starts to rise. After we found these important points in the graph, we can begin populating the frequency column, and then we can move on with the loudspeaker impedance measurement. Complete the frequency column similar to this fashion. It is important that the frequency column is in ascending order, otherwise the graph won’t show correctly. Step 6 : Complete the voltage column (measuring the speaker) Proceed to place the speaker in the rig. Input each frequency (from the previous step) in the generator, and take a voltage measurement. Each measurement you write down in the corresponding cell. After you’re done, it should look something like this : When changing the frequency to take another measurement, please do this extra step. Don’t just change the frequency, but also stop and restart the generator. Sometimes I saw obvious erroneous readings. Restarting the generator (depressing and pressing the “play” button) would fix this inconvenience. In conclusion, restart the generator every time you change the frequency. Step 6 : Complete the voltage column (measuring the resistor) Go ahead and swap the speaker with the resistor. Do the same thing as you did previously with the speaker. Note down in the correct column all the voltages shown on the multi-meter for each of the corresponding frequencies. After you completed the column, it should look something like this : As you can see, the impedance column is completed automatically. Please write only in the yellow cells, otherwise you could overwrite the formulas in the impedance column. Loudspeaker impedance measurement complete If you have done all the steps, you should see a finished graph on the right. I know that this looks like a lot of work, but it will take 20-30 minutes. There are other methods out there that are far more tedious. Of course this all makes sense in the absence of specialized measuring equipment. Now you can measure impedance with items that you probably already have. 1. Loudspeaker Design Cookbook 7th Edition by Vance Dickason (Audio Amateur Pubns, 2005). 2. Image source : link. 16 comments 1. Hello, I have a question for you. I have noticed that the fluke 177 multimeter has, in AC voltage mode, a frequency range of 45Hz-500Hz. Hence I wondered: how have you executed the measurements off this frequency band? Thank you 1. The multi-meter is not limited to that frequency bandwidth only. Between 45 – 500 Hz, it has the highest accuracy. Outside this interval, slight errors may occur. However, the purpose of the article was to show how the process is done at a basic level. If you need high accuracy measurements, there are a lot of alternatives out there. 2. Hello, thanks for the tutoriel. I’ll start measurements in a couple of days. I’m wondering what are you using when you Do the first sweep to find max peaks and dips? REW, sweeps doesn’t show you the frequency meter? 1. I use DATS from Dayton Audio. It’s a standalone device that measures impedance and calculates T/S parameters. You can do that with REW as well but you have to build some resistive probes. Much more accurate and easier to do, but you have to buy the device. 3. Hi, really thanks for the tutorial. I have some questions about setting amplifier level. 1. If the voltage of pink noise is not stable, why use it? 2. If the output watt will affect the impedance curve? 3. Why you use 150 mV? 1. 1) Because pink noise is a signal that contains all frequencies. If you choose a sine wave you might set your amplifier right for that particular frequency but it might be too high or too low for other frequencies. Using pink noise gives a middle ground. 2) Don’t understand the question. 3) It’s not a fixed number. You can choose something else, but it’s important that any of your measurements (at different frequencies) don’t exceed 1 V. This is a small signal measurement. You don’t want to go too low either, because the measurement will be inaccurate. 4. Sorry for the silly question, I’m kind of a newbie. Will this work with a complete speaker aswell? Like a full range tower? 1. Yes, it will work but it will plot the impedance of the whole system. 5. How did you get the equation for impedance in the excel spreadsheet? 1. Well it’s not really a formula for impedance, it’s just basic math. For a specific frequency you measure the voltage on a resistor, which has a known resistance. Then you measure the voltage on the speaker for the exact same frequency. By doing a simple cross multiplication, you can calculate the resistance of the speaker for that frequency. 6. Hi How to you convert the excel file to a zma file, kindly share. Many thanks 1. Just paste the values in txt file and rename the extension to zma. Make sure you have the frequencies on one column and the the impedance values on another column. 1. .. And make sure you place a point as a decimal point, not a comma 🙂 Don ask how I know this. 7. Thanks sir for the tutorial. I have been working with speakers and trying to get impedance at different frequency with what I have – DMM, DIY mini-amp and computer. I had been struggling to get correct impedance values but in vain to the point I kinda gave up. Your answer to Mara question has shown another side of coin and I feel confident on how to get impedance at any point. Thanks for the tutorial. 8. I just went through my spares box, and I don’t seem to have the exact value resistors. But I think that actually shouldn’t matter. If I were to use 1.5k and 22 Ohm I don’t think that would change the outcome? This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"https://audiojudgement.com/loudspeaker-impedance-measurement/","timestamp":"2024-11-06T11:21:58Z","content_type":"text/html","content_length":"126461","record_id":"<urn:uuid:e7197ca1-f695-452f-a779-2c2642c0d17a>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00091.warc.gz"}
Pricing Insurance Risk Pricing insurance risk is a book I am writing with John Major. It describes the last mile of underwriting. Actuaries and accountants have determined the cost of goods sold: the expected loss cost, direct and allocated expenses. In fact they have gone beyond simple point estimates and have provided a full range of potential outcomes, understood within the context of all the other risks written by the company. All that remains is to set a manual rate or quote a price or to accept or reject an offered market price (firm order). The book will describe the actuarial, risk theory, finance and accouning approaches to pricing insurance risk. Market Structure When does it make sense for different risks to pool together? This paper investigates equilibrium risk pools in a market with risk-based solvency regulation and costly capital. It considers a market with two classes of risk, each having different aggregate volatility characteristics, such as personal auto and catastrophe exposed property. It identifies three possible equilibrium solutions: a single multiline pool, a multiline pool and a monoline pool, and two monoline pools. The results help explain various features seen in insurance markets, including the structure of the Florida homeowners market and the US medical malpractice market, and it can be applied more broadly to any regulated risk market. Bounds on Consistent Prices We introduce a straightforward algorithm to determine a range of prices consistent with complete information about the risk but only partial information about the pricing risk measure. In many cases the algorithm produces bounds tight enough to be useful in practice. We illustrate the theory by applying it to three important problems: pricing for high limits relative to low limits, evaluating reinsurance programs, and portfolio-level strategic decision making. We also show how the theory can be used to test if prices for known risks are consistent with a single partially specified risk
{"url":"https://www.convexrisk.com/research","timestamp":"2024-11-10T16:13:36Z","content_type":"text/html","content_length":"13560","record_id":"<urn:uuid:b6cc5a17-20ab-46a2-984f-3e641060eafa>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00054.warc.gz"}
Minnesota's new flag: from scratch A new flag Minnesota is soon getting a new state flag. New design for Minnesota's flag, to be adopted 2024-05-11. In May 2023, the Minnesota Legislature passed, and the governor signed, HF 1830, a state government finance bill that also created a commission to redesign Minnesota's state flag and state seal. The previous flag design uses the classic "seal on a bedsheet" arrangement decried by vexillologists.^1 Seals and flags have opposite purposes: seals are supposed to be complicated while flags are supposed to be simple and recognizable from a distance.^2 The previous flag design (Public Domain, from Wikipedia) In the second half of 2023, the State Emblems Redesign Commission invited members of the public to submit proposed designs for Minnesota's new flag and seal. Many designs were proposed. The winning proposal was proposal F1953, designed and submitted by Andrew Prekker of Luverne, Minnesota. The commission adopted a simplified version of Prekker's design this past December. This design will be adopted on May 11, the 166th anniversary of Minnesota statehood. The new design is thought to be one of the best in the nation. In this post, I will explain how to implement the design of this flag as a short manually-coded SVG file. The specifications The new flag as a very simple design, specified by the State Emblems Redesign Commission in its official report (p. 21). You can make the flag out of three shapes: Official specification from the State Emblems Redesign Commission • A water blue (#52c9e8) rectangle. Two sides of the rectangle have length \(n\) and the other two sides have length \(5n/3\). • A night sky blue (#002d5d) pentagon with three sides lying on the edges of the rectangle: one side that is the entire edge of the rectangle with length \(n\), and the two neighboring perpendicular sides that have a length of \(14n/15\). The last two sides meet at a vertex halfway between the long edges of the rectangle and a distance \(13n/20\) away from the opposite edge of the pentagon. Specifications for the water blue and night blue portions of the flag. • A white eight-pointed star, which is a regular octogram. The center of the star is halfway between the long edges of the rectangle, and a distance of \(21n/60\) from the short edge of the rectangle that is also an edge of the pentagon. The points of the star are eight evenly spaced points along a circle of radius \(11n/60\), with two points on a line perpendicular to the long sides of the rectangle and two points on a line perpendicular to the short sides. An outline of this polygon can be constructed by connecting each of these eight points with the points three the eight points of the star are equally spaced around the circle of radius \(11n/60\). The center of the star has a distance of \(21n/60\) from the left edge and is centered vertically. Coding the flag We can use these descriptions to write our SVG file. Outside element We begin with the outside svg element, which defines the viewport. Since the flag sits inside of a \(n \times 5n/3\) rectangle, we'll set \(n = 300\), so that the flag's dimensions are 500 × 300. Typically, in computer graphics, the \(y\) coordinates increase as you go down, which is the opposite of the convention used in mathematics. So, our viewport's upper-left corner will be at \((0, 0) \), and the lower right corner will be at \((500, 300)\). This is expressed in SVG with the attribute viewbox="0 0 500 300" in the svg element. mn_flag.svg SVG <svg viewbox="0 0 500 300" xmlns='http://www.w3.org/2000/svg'> Water blue rectangle The next element to add is a water blue (#52c9e8) rectangle with the same dimensions as the viewport. We'll use the rect element for this. We insert the rect inside the svg element. mn_flag.svg SVG <svg viewbox="0 0 500 300" xmlns='http://www.w3.org/2000/svg'> <rect x="0" y="0" width="500" height="300" stroke="none" fill="#52c9e8" /> Night sky blue pentagon The next element to add is the night-sky blue (#002d5d) pentagon. For this, we'll use the polygon element.^3 Setting \(n = 300\), we see that the points of the vertices of the pentagon are at the following coordinates: \[(0, 0), (280, 0), (195, 150), (280, 300), (0, 300).\] So, we insert after the rect: mn_flag.svg SVG <polygon points="0,0 280,0 195,150 280,300 0,300" stroke="none" fill="#002d5d" /> White star Last, and most tricky, is the eight-pointed star that goes inside the pentagon. To get these points, we are going to use some trigonometry. If a circle has radius \(r\) and center at the origin, then each point \(P\) on the circle has the form \[P = (r \cos \theta, r \sin \theta),\] where \(\theta\) is the angle between the positive \(x\)-axis and the ray starting at the center of the circle and passing through \(P\). For a circle with arbitrary center \((h, k),\) we translate to get the parameterization \[P = (h + r \cos \theta, k + r \sin \theta).\] Parameterization of an arbitrary point on a circle In our case, the points of the star sit on a circle with center \(105, 150)\) and radius 55. There are eight points equally spaced around the circle, with two points on the same horizontal line as the center, so the appropriate angles are 0°, 45°, 90°, 135°, and so on—or, in radians, \(0, \pi/4, \pi/2, 3\pi/4, \dots\). However, we create the star by attaching each point to a point three away, so the first point has an angle of \(0\), the next an angle of \(3\pi/4\), the next an angle of \(3\pi/2\), etc. Specifically, the \(j\)th point will have an angle of \(3j\pi/4\), and so the \(j\)th point will have coordinates \[(105 + 55 \cos(3j\pi/4), 150 + 55 \sin(3j\pi/4)).\] We could compute these by hand (using knowledge of trigonometry and a decimal approximation for \(\sqrt{2}/2\)), but a short Python script is perfect for this. star_points.py Python """Calculate the points of the star on the Minnesota flag""" import math center_x = 105 center_y = 150 r = 55 theta = [3 * k * math.pi / 4 for k in range(0, 9)] p = [(center_x + r * math.cos(k), center_y + r * math.sin(k)) for k in theta] for pt in p: We run this script to get the coordinates: Now, we use the polyline SVG element, inserting the following after the pentagon in our SVG file: mn_flag.svg SVG <polyline points=" stroke="none" fill="white" /> Final result The final result is this short SVG file: mn_flag.svg SVG <svg viewbox="0 0 500 300" xmlns='http://www.w3.org/2000/svg'> <rect x="0" y="0" width="500" height="300" stroke="none" fill="#52c9e8" /> <polygon points="0,0 280,0 195,150 280,300 0,300" stroke="none" fill="#002d5d" /> <polyline points=" stroke="none" fill="white" /> And it looks like this: What a beautiful flag! 1. The seal in question was also considered distasteful by many for its celebration of settler-colonialism, and so the same commission that redesigned the flag also came up with a new seal. But even if the seal had been perfectly fine, as the new seal is, putting your state seal on a flag is a terrible flag design. ↩ 2. The North American Vexillological Association has identified five principles of flag design, several of which the old design clearly violates. ↩ 3. We could have used a polygon for the sky blue rectangle as well. ↩
{"url":"https://chrisphan.com/posts/2024-04-28_minnesotas_new_flag/index.html","timestamp":"2024-11-09T10:28:31Z","content_type":"text/html","content_length":"31931","record_id":"<urn:uuid:c170b9e7-b449-4ae7-8484-80ae70cd50b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00866.warc.gz"}
The curvature exponent of sub-Finsler Heisenberg groups Inserted: 10 oct 2024 Year: 2024 The curvature exponent $N_{\mathrm{curv}}$ of a metric measure space is the smallest number $N$ for which the measure contraction property $\mathsf{MCP}(0,N)$ holds. In this paper, we study the curvature exponent of sub-Finsler Heisenberg groups equipped with the Lebesgue measure. We prove that $N_{\mathrm{curv}} \geq 5$, and the equality holds if and only if the corresponding sub-Finsler Heisenberg group is actually sub-Riemannian. Furthermore, we show that for every $N\geq 5$, there is a sub-Finsler structure on the Heisenberg group such that $N_{\mathrm{curv}}=N$.
{"url":"https://cvgmt.sns.it/paper/6824/","timestamp":"2024-11-09T19:39:23Z","content_type":"text/html","content_length":"8381","record_id":"<urn:uuid:ecae0675-ad42-4e7b-9c79-e917bece600c>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00165.warc.gz"}
MINXOR - Editorial PROBLEM LINK: Author: Konstantin Sokol Tester: Gerald Agapov Editorialist: Tasnim Imran Sunny Sqrt decomosition, Trie Given an array A of N integers, you have to process two types of query on the array: 1. L R: find the minimal number in the subarray A[L…R] and count how many times it appears there. 2. L R K: replace each number A[i] with the expression (A[i] xor K) for the subarray A[L…R]. Sqrt Decomposition and Trie building: 1. Split the numbers in sqrt(N) blocks, where each block contains sqrt(N) numbers. If you are not familiar with sqrt(N) decomposition you may read the section “O(N), sqrt(N) solution” of this tutorial which explains how to answer RMQ with sqrt(N) decomposition. 2. Let every number A[i] = a[1]a[2]… a[16] be a 16-bit binary number where a[1] denotes the most significant bit and a[16] denotes the least significant bit. The maximum value is less than 65536, so 16 bits are enough to represent the numbers, 2^16 = 65536. 3. Now for each block build a Trie, where all the numbers of that block are inserted into the Trie as a 16-bit binary number. That means, each edge of the trie is either 0 or 1 representing a bit of the numbers. 2nd type of query: 2 L R K Let pending[j] = The pending value that has to be xor-ed with all the numbers on the j-th block. Whenever the j-th block is completely inside an update of this type, the new value of pending[j] will be pending[j] xor K . There could be at most two blocks (the blocks corresponding to left and right endpoint of the query) which are partially covered by the query. For each number A[i] inside the query from such blocks just update the new value to be (A[i] xor K ). Also delete the old value from the trie of that blocks and insert the new value to the trie. As there would be at most sqrt(N) blocks and at most 2 * sqrt(N) indexes total on the partially covered blocks and inserting and deleting into the trie takes 16 nodes: the complexity of each type 2 updates would be O(sqrt(N)*16). 1st type of query: 1 L R Take the minimum of the numbers from each blocks which is completely covered by the query range. All the numbers of j-th block (a block covered completely) has to be xor-ed with pending[j]. For finding the minimum value A[i] xor pending[j] (A[i] is a number in the trie of that block), the strategy is to try finding each bit of A[i] from first bit to last bit. With each k-th bit, check whether the k-th bit can be same as the k-th bit of pending[j], if not then make it different. After each step, we know one more bit of A[i] and that is why Trie can help us in this process. We just need to store the current node of the Trie which corresponds to the current prefix of A[i]. From this node we can easily check whether the next bit of A[i] can be 0 or 1 and make the decision from that. To get the count of the minimum just store the additional info about the count of how many times a node is visited. AUTHOR’S AND TESTER’S SOLUTIONS: Author’s solution can be found here. Tester’s solution can be found here. 3 Likes Pretty cool solution 1 Like what do you mean by pending value??? any example??? 1 Like The pending value panding[j] of the j-th block means that all the values inside that block has to be xor-ed with pending[j] . Initially pending[j]=0 . Assume that for a type 2 update that block is completely covered and say K = 5 for that update. So the new value of pending[j] would be pending[j] = 5 Now say another update which completely covers the j-th block and say K = 7. So the new value of pending[j] would be pending[j] = ( 5 xor 7 ) = 6. Say A[i] is a number from the j-th block, so the actual current value of A[i] would be A[i] xor pending[j] . 3 Likes
{"url":"https://discuss.codechef.com/t/minxor-editorial/4907","timestamp":"2024-11-07T13:54:18Z","content_type":"text/html","content_length":"25141","record_id":"<urn:uuid:46d8c3fb-cab3-40db-853d-0c9a9209a489>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00814.warc.gz"}
M theory on seven dimensional manifolds with SU(3) structure arXiv:hep-th/0602163v2 9 Mar 2006 M-theory on seven-dimensional manifolds with Andrei Micu1§[, Eran Palti]2¶[, P.M.Saffin]2,3k 1[Physikalisches Institut der Universit¨][at Bonn] Nussallee 12, D-53115, Bonn, Germany 2[Department of Physics and Astronomy, University of Sussex] Falmer, Brighton BN1 9QJ, UK 3[School of Physics and Astronomy, University of Nottingham] University Park, Nottingham NG7 2RD, UK In this paper we study M-theory compactifications on seven-dimensional manifolds withSU(3) structure. As such manifolds naturally pick out a specific direction, the resulting effective theory can be cast into a form which is similar to type IIA compactifications to four dimensions. We derive the gravitino mass matrix in four dimensions and show that for different internal manifolds (torsion classes) the vacuum preserves either no supersymmetry, orN = 2 supersymmetry or, through spontaneous partial supersymmetry breaking,N = 1 supersymmetry. For the latter case we derive the effective[N] = 1 theory and give explicit examples where all the moduli are stabilised without the need of non-perturbative effects. February 2006 §[email: [email protected]] ¶[email: [email protected]] The low energy limit of M-theory, that is eleven-dimensional supergravity, forms arguably the most natural starting point from which we hope to recover observable physics from a fully consistent theory. The first issue to address is of course the fact that we observe four dimensions and the most phenomenologically successful approach so far has been to single out one of the space dimensions as independent of the other nine. Compactifying on this dimension then leads to type IIA string theory [1, 2, 3] which can then be compactified to four dimensions on a six-dimensional Calabi-Yau. The dimension may also be taken to be an interval, and then compactifying on a Calabi-Yau leads to a Brane-world scenario [4]. If we do not require the existence of such a special trivially fibred direction we should consider compactifying on seven dimensional manifolds. The possible contenders for such manifolds are required by supersymmetry to have special holonomy and until recently the main body of work has concentrated on manifolds withG2-holonomy that lead to Minkowski space in four dimensions and preserve[N] = 1 supersymmetry [5]. These compactifications lead to massless scalar fields in four dimensions that are known as moduli and an important first phenomenological step is to lift these flat directions. In string theory flux compactifications have proved very successful in achieving this (for a review see [6]) and in M-theory there has been some success in the case of G2-manifolds [7, 8, 9]. A feature of flux compactifications is that flux on the internal manifold will back-react on the geometry and in general induce torsion and warping on the manifold deforming its special holonomy to the more general property of a G structure [10, 11]. To take this back-reaction into account we should therefore consider compactifications on manifolds with a particular G structure. Compactifications that derive the four dimensional theory have been done for the case of manifolds with G2 structure [9, 12, 13, 14]. Eleven dimensional solutions that explore the structure of the vacuum have been studied for the cases of SU(2), SU(3) and G2 structure in [15, 16, 17, 18, 19, 20, 21, 22]. An interesting point to come out of these studies is that compactifications on manifolds with SU(3) structure have a much richer vacuum spectrum than manifolds with G2 structure. Indeed there are solutions that preserve only N = 1 supersymmetry in the vacuum putting them on an equal phenomenological grounding with G2 compactifications in that respect. There are however many phenomenologically appealing features that are not present in the G2 compactifications such as warped anti-deSitter solutions and solutions with non-vanishing internal flux. In this paper we will study compactifications on manifolds with SU(3) structure. We will see that because the SU(3) structure naturally picks out a vector on the internal manifold these compactifications can be cast into a form that is similar to type IIA compactifications on SU(3) structure manifolds [23]. However unlike in (massless) type IIA, we will show that it is possible to find purely perturbative vacua with all the moduli stabilised that preserve either N = 2 or N = 1 supersymmetry [24, 25, 26]. Moreover, as also remarked in [27], such compactifications offer the possibility to obtain charged scalar fields which reside in the [N] = 2 vector multiplets rather than in the hypermultiplets as realised so far in most cases (see for example [6]). induce spontaneous partial supersymmetry breaking that will lead to an N = 1 effective theory. We will derive this theory and go through an explicit example of moduli stabilisation. This will also serve as an interesting example of a mass gap between G structures. Finally, in the Appendices, we present our conventions and some technical details related to the calculations we perform in the main text. Note added: While this manuscript was prepared for publication another paper appeared, [57], which has some overlap with the issues discussed in this paper. Further to this we were informed of work in progress which also relates to the discussed issues [58]. In this section we briefly discuss the notion of a G structure and the two particular cases ofG2- and SU(3) structure in seven-dimensions. For a more thorough introduction to G structures we refer the reader to [10, 11]. A manifold is said to have G structure if the structure group of the frame bundle reduces to the groupG. In practice this translates into the existence of a set ofG-invariant forms and spinors on such manifolds. In general these forms are not covariantly constant with respect to the Levi-Civita connection, which would imply that the holonomy group of the manifold is reduced to G. The failure of the Levi-Civita connection to have reduced holonomyG is measured by theintrinsic torsion. In turn, the intrinsic torsion, and in particular its decomposition in G-representations, is used to classify such manifolds withGstructure. In the following we will give a couple of examples ofGstructures defined on seven-dimensional manifolds which we will use in this paper. 2.1 G2 structure in seven dimensions A seven-dimensional manifold with G2 structure has a globally defined G2-invariant, real and nowhere-vanishing three-formϕwhich can be defined by a map to an explicit form in an orthonormal basis [28]. Alternatively, manifolds with G2 structure feature a globally defined, G2-invariant, Majorana spinorǫ. Note that we shall work in a basis where Majorana spinors are real. In terms of this spinor theG2 form, ϕis defined as ϕmnp =iǫTγmnpǫ , (2.1) with the spinor normalisation ǫTǫ= 1. Using the G2 structure form ϕwe can write dϕ=W1⋆ ϕ−ϕ∧W2+W3 , d(⋆ϕ) = 4 3 ⋆ ϕ∧W2+W4 , where W1, . . . , W4 are the four torsion classes. In terms of G2 representations W1 is a singlet, W2 a vector, W3 a27whileW4 transforms under the adjoint representation,14. For further reference we note here that manifolds with onlyW16= 0 are called weak-G2 manifolds and they are the most general solutions of the Freund-Rubin Ansatz [29, 30]. 2.2 SU(3) structure in seven dimensions vanishes identically (ie, as explained before they have SU(3) holonomy). One the other hand, seven-dimensional manifolds withSU(3) structure were less studied partly due to the fact that for the case of no torsion where the holonomy group of the manifold is SU(3) the seven-dimensional manifold is just a direct product of a Calabi–Yau manifold and a circle. Therefore studying M-theory on such manifolds is equivalent to studying type IIA string M-theory on a Calabi-Yau. Once some torsion classes are non-vanishing a non-trivial fibration is generated thereby making such studies different to type IIA compactifications. An SU(3) structure on a seven dimensional manifold implies the existence of two globally defined, nowhere-vanishing Majorana spinors ǫ1 and ǫ2 which are independent in that they satisfy 1ǫ2 = 0. In the following we will find it more convenient to use two complex spinorsξ± ξ±= 1 2 ǫ 1[±][iǫ]2[.] [(2.3)] Similar to the case presented in the previous subsection, we construct the SU(3) invariant forms Ω, J,V Ωmnp = −ξ†+γmnpξ−, Jmn = iξ+†γmnξ+ =−iξ−†γmnξ−, (2.4) Vm = −ξ†+γmξ+=ξ†−γmξ−. Note that in comparison to six-dimensionalSU(3) structures, in seven dimensions there also exists a globally defined vector field V. It is important to bear in mind that in general this vector is not a Killing direction and thus the manifold does not have the form of a direct product. One can now show that Ω,J andV are all the possible independent combinations which one can construct and any other non-vanishing quantities can be expressed in terms of them. For example we have ξ[−]†γmnpξ+= ¯Ωmnp, ξ[+]†γmnpξ+=ξ−†γmnpξ−=i(J∧V)mnp . (2.5) Furthermore, one can also show that the forms defined in (2.4) satisfy the seven-dimensional SU(3) structure relations J[∧]J[∧]J = 3i 4Ω∧Ω¯ , Ω[∧]J =Vy[J] [=][V]y[Ω = 0][,] where the contraction symbol y [is defined in equation (A.4). Finally one can prove the following] useful relations Vy[V] [= 1][,] Jm[i]Ji[n] = −δ[n]m+VmVn, J[m]iΩ±inp = ∓Ω∓mnp, (2.7) ⋆Ω± = ±Ω∓∧V , ⋆(J [∧]V) = 1 2J∧J , where we have split the complex three-form Ω in to its real and imaginary parts Let us now see how to decompose the intrinsic torsion in SU(3) modules. As before they are most easily defined from the differentials of the forms Ω, J andV. Generically we have [16, 22] dV = RJ+ ¯W1yΩ +W1yΩ +¯ A1+V ∧V1, (2.9) dJ = 2i 3 c1Ω−c¯1Ω¯ +J∧V2+S1+V ∧ 3(c2+ ¯c2)J+ ¯W2yΩ +W2yΩ +¯ A2 , (2.10) dΩ = c1J∧J+J∧T + Ω∧V3+V ∧[c2Ω−2J∧W2+S2] , (2.11) where the representatives of the 15 torsion classes are denoted byR,c1,2,V1,2,3,W1,2,A1,2,T and S1,2. It is easy to read off the interpretation of the above torsion classes in terms of the SU(3) structure group. There are three singlet classesR (real) andc1,2 (complex), five vectorsV1,2,3 (real) and W1,2 (complex), three 2-formsA1,2 (real) and T (complex) and two 3-formsS1,2. Before concluding this section we should make more precise the relation between theSU(3) and G2 structures on a seven dimensional manifold. Obviously, as SU(3) ⊂ G2, an SU(3) structure automatically defines a G2 structure on the manifold. In fact, an SU(3) structure on a seven-dimensional manifold implies the existence of two independentG2 structures whose intersection is precisely theSU(3) structure. Concretely, using the spinorǫ1andǫ2defined above we can construct the twoG2 forms ϕ± ϕ+[mnp]≡2iǫ1γmnpǫ1 , ϕ−[mnp]≡2iǫ2γmnpǫ2 . (2.12) The relation to theSU(3) structure is now given by ϕ±=±Ω−−J∧V . (2.13) Throughout this paper it will sometimes be useful to use the SU(3) forms and sometimes the G2 forms but we should keep in mind that the two formulations are equivalent. 2.3 Mass hierarchies When the torsion on the internal manifold vanishes the holonomy group directly determines the amount of supersymmetry preserved in the vacuum. This is not the case with G structures where the amount of supersymmetry in the vacuum need not be related to the structure of the manifold. It should nevertheless be kept in mind that the amount of supersymmetry of the effective action is not unrelated to the structure group. In particular, the existence of globally defined spinors on the internal manifold allows us to define four-dimensional supercharges and therefore constitute a sufficient condition for supersymmetry of the effective action. Even though in general the situation can be more complicated we will assume that such supercharges, which are related to the globally defined spinors, are the only ones which survive in four dimensions and so the amount of super-symmetry of the effective action is given directly in terms of the structure group of the internal manifold.1 [Consequently, we will consider that M-theory compactifications on seven-dimensional] manifolds with SU(3) structure lead to an N = 2 supergravity theory in four dimensions2 while We thank Nikolaos Prezas for pointing this out. For a recent discussion of this we refer the reader to [31]. Strictly speaking, as manifolds with G2 structure are known to have in fact SU(2) structure [32], the effective action in four dimensions would be that of anN = 4 supergravity. However, asSU(2) structures in seven-dimensions are much less tractable thanSU(3) ones, we shall consider that the additional spinors lead to massive particles and we shall ignore them right from the beginning. In fact we shall see in sections 4 and 5 that for some seven-dimensional coset manifolds theSU(2) structure is not compatible with the symmetries of the coset. As the lower mass states the vacuum may preserveN = 2 orN = 1 supersymmetry or even break it completely depending on which torsion classes (and fluxes) are turned on. This may be understood from the fact that when there are more than one internal spinors on the manifold they may satisfy different differential relations according to what torsion classes are present and so may correspond to different eigenval-ues of the Dirac operator. Consider decomposing the eleven-dimensional gravitino in terms of the globally defined spinors on the internal manifold. Than the four-dimensional gravitini may have varying masses and there will appear mass hierarchies throughout the four-dimensional low-energy field spectrum. If the mass scales are well separated we can consider that only the lowest mass states are excited and so it is clear that in such a vacuum only a fraction of the original amount of supersymmetry is preserved. We will present such an example in section 5.4.2 where it will become clear that one of the two gravitini will become massive in the vacuum and thus supersymmetry will be spontaneously broken from [N] = 2 to [N] = 1. The reduction The theory we will be considering is the low energy limit of M-theory that is eleven-dimensional supergravity. The bosonic action of the theory as well as the relevant gravitino terms are given by S11 = 1 κ2 11 Z [√] 1 2Rˆ11− 1 2ΨM¯ Γˆ M N P[D][ˆ] NΨP − 1 4 4!FˆM N P QFˆ M N P Q [(3.1)] +1 2 1 (12)4ǫ LM N P QRST U V W[F][ˆ] LM N PFˆQRSTCˆU V W − 3 4(12)2( ¯ΨMΓˆ M N P QRS[Ψ] N + 12 ¯ΨPΓˆQRΨS)FP QRS The field spectrum of the theory contains the eleven-dimensional graviton ˆgM N, the three-form ˆ CM N P and the gravitino, ˆΨP. The indices run over eleven dimensions M, N, .. = 0,1, ...,10. For gamma matrix and epsilon tensor conventions see the Appendix. κ11denotes the eleven-dimensional Planck constant which we shall set to unity henceforth thereby fixing our units. In this section we will consider this theory on a space which is a direct productM11=M4×K7 with the metric Ansatz ds2[11]=gµν(x)dxµdxν +gmn(x, y)dymdyn, (3.2) wherexdenotes co-ordinates in four-dimensions andyare the co-ordinates on the internal compact manifold. The first thing to note is that this Ansatz is not the most general Ansatz possible for a metric as we have not included as possible dependence of the four-dimensional metric on the internal co-ordinates that is usually referred to as a warp factor. There are many compactifications that can consistently neglect such a warp factor because either a warp factor is not induced by the flux or it can be perturbatively ignored if the internal volume is large enough. Including such a warp factor is a difficult proposition for an action compactification because it can, and generally will, be a function of the four-dimensional moduli3. For now we will proceed with an unwarped Ansatz bearing in mind that this is only consistent for certain compactifications. The four-dimensional effective theory will be an N = 2 gauged supergravity. These type of theories have been studied extensively in the literature, see [34, 35, 36, 37, 38, 39] and references within, and this work will be useful as a guide for the compactification. In the upcoming sections we will derive most of the quantities necessary to specify this theory. The kinetic terms for the low energy fields will be derived from the Ricci scalar and the kinetic term for the three-form. The prepotentials can then be derived from the four-dimensional gravitini mass matrix. 3.1 The Ricci scalar As is well known, the metric on the compactification manifold is not rigid and its fluctuations can be written in terms of scalar fields in the effective low-energy theory. Important constraints on the spectrum and kinetic terms for these scalar fields come from the fact that they should form a four-dimensional[N] = 2 supergravity. Compactifications of type II supergravities from ten to four dimensions on Calabi-Yaus naturally lead to such a supergravity. In this section we will show that it is possible to keep an analogy with these compactifications for the case of M-theory on SU(3) structure manifolds that we are considering. A similar approach was adopted in [23] and we will closely follow their results. 3.1.1 The induced metric variations Having SU(3) structure on a manifold is a stronger condition than having a metric. Infact the SU(3) structure induces a metric on the manifold that we can write in terms of the invariant forms as gab ≡ |s|− 1 9s sab ≡ 1 16 4 ΩamnΩ¯bpq+ ¯ΩamnΩbpq JrsVt ˆǫmnpqrst. Clearly, as the metric is determined uniquely in terms of the structure forms, all the metric fluc-tuations can be treated as flucfluc-tuations of the structure forms. The converse however is not true as it is possible that different structure forms give rise to the same or equivalent metrics. Therefore, when expressing the metric variations in terms of changes in the structure forms one has to take care not to include the spurious variations as well. Varying the formula above we can write the metric deformations as δgab = 1 8δΩ (a Ω¯b)mn+ 1 8Ω (a δΩ¯b)mn+ 2V(aδVb)+VaVb(JyδJ) +J(amδJb)m +VmV[(a]Jn[b)]δJmn− 4δΩyΩ +¯ 1 4ΩyδΩ +¯ JyδJ gab . (3.4) Note that this is very similar to normal Calabi–Yau compactifications where the metric variations were expressed in terms of K¨ahler class and complex structure deformations. Keeping the termi-nology we will refer to the scalar fields associated with δJ and δΩ as K¨ahler moduli and complex structure moduli respectively. Furthermore we will denote the scalar associated toδV as the dilaton in complete analogy to the type IIA compactifications. Before starting the derivation of the kinetic terms associated to the metric deformations dis-cussed above we mention that the metric variations can be dealt with more easily in terms of the variations of either of the two G2 structures which can be defined on seven-dimensional manifolds withSU(3) structure (2.13) δgab= 1 2ϕ ± (a δϕ±[b)mn]− 1 3 ϕ ab. (3.5) 3.1.2 The Ricci scalar reduction Let us now see explicitely how to derive the kinetic terms for the moduli fields described above. As they are metric moduli, their kinetic terms should appear from the compactification of the eleven-dimensional Ricci scalar. The explicit calculation is presented in Appendix B and here we will only outline the main steps before stating the final result. We should also mention that during this process we are mainly interested in the fate of the scalar fields which appear as fluctuations of the metric on the internal manifold and therefore we shall not discuss the vector field (graviphoton), which also arises from the metric, as we expect that its kinetic term is the standard one. For now we do not decompose Ω andJ into their four-dimensional scalar components but with the vectorV we write V(x, y)[≡]eφ(x)ˆ z(y), (3.6) wherezis the single vector we have on the internal manifold from theSU(3) structure requirements. Note that it is stillV and notzthat features in theSU(3) relations (2.6). The difference betweenV andz can be understood asV is theSU(3) vector which also encodes the possible deformations of the manifold, whilezis only a basis vector in which we expandV. Therefore, the factoreφˆ [encodes] information about the deformations associated to the vector V. This is completely analogous to the compactification of eleven-dimensional supergravity on a circle to type IIA theory and in order to continue this analogy we shall call the modulus in equation (3.6) the dilaton. Let us further define a quantity which in the case where the compactification manifold becomes a direct product of a six-dimensional manifold (withSU(3) structure) and a circle, plays the role of the volume of the six-dimensional space V6 ≡e−φˆV , (3.7) whereV is the volume of the full seven-dimensional space V ≡ Z g7 = 1 6 J∧J ∧J ∧V . (3.8) To see the use of this quantity, note that due to the first relation in (2.6), a scaling of the three-form Ω automatically induces a change in the volume. Thus, scalings of Ω would have the same effect as appropriate scalings of J and in order not to count the same degree of freedom twice we shall define e12KcsΩcs ≡ √1 8Ω(V6) −1 2 , (3.9) where we have also introduced the K¨ahler potential for the complex structure deformations, Kcs, extending the results of [23, 40, 41] Kcs≡ −ln (||Ωcs||V6) =−lni <Ωcs|Ω¯cs>≡ Z Ωcs[∧]Ω¯cs[∧]z . (3.10) It is easy to check that rescalings of Ω precisely cancel the corresponding variation of [V]6 on the RHS of equation (3.9) and hence Ωcs defined on the LHS stays unchanged. In this way we have managed to decouple the volume modulus from the form Ω. The relation (3.9) deserves one more explanation. The additional factor on the LHS, exp1[2]Kcs has been introduced in order to describe by Ωcs the exact analogue of the Calabi–Yau holomorphic 3-form whose norm precisely gives the K¨ahler potential of the complex structure deformations. order to make sure that such variations are not introduced as degree of freedom we should “gauge” these phase transformation for Ω. Given the K¨ahler potential (3.10) and the definition (3.9) it is not hard to see that K¨ahler transformations, which correspond to scalings of Ωcs by some function which is holomorphic in the complex structure moduli, precisely correspond to phase variations of Ω. Therefore, the covariant derivative for the “gauged” phase transformations of Ω should precisely be the K¨ahler covariant derivative DµΩ≡∂µΩ +1 2∂µKcsΩ = p p 8V6e 2KcsDµΩcs . (3.11) Finally we note that we have to take into account the usual Weyl rescalings in order to arrive to the four-dimensional Einstein frame gµν → V−1gµν , gmn → e− 2 3φˆg Following the above steps one can derive the (linearised) variation of the Ricci scalar under the metric fluctuation (3.4). The calculation is presented in the appendix and here we recall the final Z √ −g11d11X 1 2R11 = Z −g4d4x h[1] 2R4−∂µφ∂ µ[φ][+]1 2e 2φ[V]−1 Z √[g] 7R7 (3.13) 8e −φˆ[e]Kcs Z √ g7d7y DµΩcsyDµΩ¯cs−1 4V −1 6 e− ˆ φZ √[g] 7d7y ∂µJy∂µJ i where we have also defined the four-dimensional dilaton φ≡φˆ− 1 2lnV6 . (3.14) The important thing to notice on this result is that the metric fluctuations have naturally split into the dilaton, theJ and Ωcs [variations with separate kinetic terms. Moreover, due to the dependence] of √g7 on the dilaton, it can be seen that the all the dilaton factors drop out from the kinetic terms of the K¨ahler and complex structure moduli. Therefore, this result is very much like the one for usual type IIA compactifications on Calabi–Yau manifolds with the notable difference that a potential for the moduli appears due to the fact that manifolds withSU(3) structure are in general no longer Ricci flat. 3.2 Four-dimensional field content and kinetic terms In this section we will complete the kinetic terms for the low energy scalar field spectrum by reducing the three-form field ˆC3. These scalar fields pair up with the geometrical moduli intoN = 2 multiplets. We will however ignore the presence of additional fields, like gauge fields, which are expected to have similar kinetic terms to the gauge fields coming from type IIA compactifications. 3.2.1 Reduction of the three-form C3 along the vector direction which is featured in the seven-dimensional manifolds with SU(3) structure under consideration. Consequently we write C3 =C3+B2∧z , (3.15) where C3 is assumed to have no component along z, ie C3yz = 0. As expected, in the type IIA picture C3 will correspond to the RR 3-form, while B2 represents the NS-NS 2-form field. Then compactifying the eleven-dimensional kinetic term, taking care to perform the appropriate Weyl rescalings (3.12), we arrive at Z [√] (3.16) = Z [√] Z √[g] 7d7y∂µC3y∂µC3− 1 4V −1 6 e− ˆ φ Z One immediately notices that the kinetic term for fluctuations of the B2-field along the internal manifold is the same as the kinetic term for the fluctuations of the fundamental formJ. Therefore we see that these fluctuations pair up into the complex field T [≡]B2−iJ . (3.17) In order to analyse the four-dimensional effective action we have to specify which are the modes we want to preserve in a Kaluza-Klein truncation. In general one restricts to the lowest mass modes, but in the case at hand this is a hard task partly due to the big uncertainties regarding the spectrum of the Laplace operator on forms for arbitrary manifolds with SU(3) structure. The best thing we can do is to use our knowledge from other similar cases where the structure of four-dimensional theory was derived [23, 40, 42, 43, 44], as well as the close analogy to the type IIA compactifications and postulate the existence of a set of forms in which to expand the fluctuations we have discussed so far. For the moment these forms are quite arbitrary, but for specific cases it should be possible to derive some of their most important properties. In fact we shall see such examples in sections 4 and 5 where explicit examples of manifolds with SU(3) structure will be discussed. Therefore we consider a set of two-forms, ωi, with dual four-forms, ˜ωi which satisfy ωi∧ω˜j ∧z=δji . (3.18) Furthermore we introduce three-forms (αA, βA) which obey αA∧βB∧z=δAB, Z αA∧αB∧z= Z βA[∧]βB[∧]z= 0. Anticipating that we expand the structure variations in these forms we also consider them to be compatible with theSU(3) structure relations (2.6) and (2.7) ωi∧αA=ωi∧βA= 0, zy[ω][i] [=][z]y[α][A][=][z]y[β]A[= 0][.] (3.20) Given the forms defined above we should expand all the fluctuations and interpret the coefficients as the four-dimensional degrees of freedom. Consequently we write for the metric variations J(x, y) =vi(x)ωi(y), Ωcs(x, y) =ZA(x)αA(y)−FA Z(x)βA(y), where we have already used the fact that the deformations of Ω span a special-K¨ahler manifold and therefore can be written as above, whereFAis a holomorphic function of the complex coordinates ZA, which is also homogeneous of degree one in ZA. From the four-dimensional perspective vi are real scalar fields which we will refer to as K¨ahler moduli. ZA on the other hand are not all independent and we shall consider as the true degrees of freedom the quantitiesza[=][Z]a[/Z]0[, where] the indexaruns over the same values as the indexA, except for the value 0. For the matter fields we take B2(x, y) = ˚B2(y) + ˜B2(x) +bi(x)ωi(y), C3(x, y) = ˚C3(y) + ˜C3(x) +Ai(x)∧ωi(y) +ξA(x)αA(y)−ξ˜A(x)βA(y). (3.22) Note that in the above decomposition we have allowed for a background value forB2 andC3 which we denoted ˚B2 and ˚C3 respectively. These values should be understood as giving rise to the flux terms for the field strengths of B2 and C3 and therefore they should not be globally well defined over the internal manifold. We will postpone their discussion until the next section when we deal with background fluxes. Note that B2 can not be expanded along the z direction as it already comes from a three-form with one leg along z, whileC3 was assumed not to have any component alongzcf equation (3.15). The fieldsbi[,][ξ]A[and ˜][ξ] Aare scalar fields in four dimensions and they will be important for our following discussion. Moreover, ˜B2(x) is a four-dimensional two-form which, in the absence of fluxes, can be dualised into an axion b(x). Here however we will not perform this dualization as in the examples we present in sections 4.2 and 5.4 the ˜B-field will be massive in four dimensions and therefore we will keep it as a member of “the universal” tensor-multiplet. C3(x) is a three-form which carries no degree of freedom in four dimensions and is dual only to some constant, but its dualisation in four dimensions requires more care. As explained before, we shall not deal with the vector fields Ai [here as their couplings are expected to be similar to the] type IIA compactifications. Also we shall neglect other vector degrees of freedom which arise from the isometries of the internal manifold and leave their proper treatment for another project. We will also find it useful to introduce at this level one more notation. As we are mostly interested in the scalar fields in the theory we will denote all the fluctuations of ˆC3 which give rise to scalar fields in four dimensions by ˆc3. Just from its definition we can see that this is a three-form on the internal manifold. In terms of the expansions above it takes the form c3(x, y) =bi(x)(ωi∧z)(y) +ξA(x)αA(y)−ξ˜A(x)βA(y). (3.23) Finally, as we expect that the low energy effective action is aN = 2 (gauged) supergravity, the light fields should assemble intoN = 2 multiplets. This is briefly reviewed in table 1. As mentioned gµν, A0 gravitational multiplet ξ0,ξe0, φ,B˜2 universal tensor-multiplet bi, vi, Ai vector multiplets ξa,ξea, za hypermultiplets before, the internal parts of the two formB, and the fundamental formJ combine themselves into a complex field T(x, y)≡B2(x, y)−iJ(x, y) =ti(x)ωi(y)≡(bi(x)−ivi(x))ωi(y), (3.24) which will become the scalar components of the N = 2 vector multiplets. The associated K¨ahler potential is again similar to the one in type IIA theory Kt=−ln 1 6 J[∧]J [∧]J[∧]z=[−]ln[V]6. (3.25) As we expect from the structure of[N] = 2 supergravity theories as well as from the analogy to type IIA compactifications [23, 40], the fieldstispan a special K¨ahler geometry with a cubic prepotential F =−1 6 t0 , whereKijk are the analogue of the triple intersection numbers Kijk= Z ωi∧ωj∧ωk∧z . (3.26) The symplectic sections are given by XI = (t0, ti) and [F]I = ∂IF with t0 = 1. Indeed, one can easily check, using the expansion (3.21) that the K¨ahler potential above derives from the general N = 2 formulaK =−lni XIF¯I−X¯IFI It is interesting to note that while in type IIA compactifications with fluxes only charged hypermultiplets can appear, in the case of M-theory compactified on seven-dimensional manifolds withSU(3) structure one can also obtain charged vector multiplets as also remarked in [27]. Indeed it is not hard to see that provided R dωiy(ωj ∧z) ≡ kij does not vanish, the kinetic term for the three-form ˆC3 in eleven dimensions generates a coupling of the type kijbiAj in the low energy effective action which precisely uncovers the fact that the scalars in the vector multiplets become charged. 3.3 Flux and gravitino mass matrix So far we have only discussed the kinetic terms of the various fields which appear in the low energy theory and we have seen that their structure is very much like in type IIA compactifications. We will now turn to study the effect of the non-trivial structure group and of turning on fluxes. The only background fluxes which can be turned on in M-theory compactifications and which are compatible with four-dimensional Lorenz invariance can be written as h ˆ F4 i Background=f η4+G . (3.27) Heref is known as Freud-Rubin parameter whereη4 is the four-dimensional volume form andG is the four-form background flux which can locally be written as G=dC˚3(y), (3.28) where ˚C3(y) is the background part of the three-form field ˆC3which was defined in equation (3.22). As observed in the literature [9, 12, 45], the Freund-Rubin flux is not the true constant parameter describing this degree of freedom. Rather one has to consider the flux of the dual seven-form field strength ˆF7 F7 =dCˆ6+1 which should now be the true dual of the Freund-Rubin flux. As can be seen the ˆF7 flux also receives a contribution from the ordinary ˆF4 flux. Therefore, in general, the Freund-Rubin flux parameter is given by f = 1 λ+ 1 2 Z ˆ c3∧ G+1 2 Z ˆ , (3.30) whereλis a constant which parameterizes the 7-form flux. On top of these fluxes which can be turned on for the matter fields one has to consider the torsion of the internal manifold withSU(3) structure which is also known as “metric flux”. The effects of the torsion can be summarised as follows. We have already seen that the compactification of the Ricci scalar contains a piece due to the non-vanishing scalar curvature of the internal manifold. This is entirely due to the torsion as manifolds with SU(3) holonomy are known to be Ricci flat. Moreover, a non-trivial torsion is associated with non-vanishing exterior derivatives of the structure forms. If we insist that we expand the fluctuations of these structure forms as in equation (3.21) it is clear that the expansion forms cannot be closed. Therefore, the presence of torsion forces us to perform the field expansions in forms which are no longer closed. Such forms will induce in the field strength of the three-form ˆC3 terms which are purely internal and which are – from this point of view – indistinguishable from the normal fluxes and so the flux in (3.27) is modified to be the full field strength expression F4 =f η4+G+dcˆ3 , (3.31) where the derivative should be understood as the exterior derivative on the seven-dimensional manifold. However such “induced” fluxes are not constant, but they depend on the scalar fields which arise from ˆC3. It is also worth noting at this point that provided these scalar fields are fixed at a non-vanishing value in the vacuum, these vacuum expectation values will essentially look like fluxes for ˆF4 in that specific vacuum. We will use this fact later on when we discuss moduli stabilization. As mentioned before, the effect of the fluxes and torsion is to “gauge” the N = 2 supergravity theory and induce a potential for the scalar fields. These effects can be best studied in the gravitino mass matrix to which we now turn. In an[N] = 1 supersymmetric theory, the gravitino mass is given by the Kahler potential and superpotential, while in anN = 2 theory we have a mass matrix which is constructed out of the Killing prepotentials (electric and magnetic) that encode information about the gaugings in the hyper-multiplet sector. Moreover, the same gravitino mass matrix appears in the supersymmetry transformations of the four-dimensional gravitini and therefore its value in the vacuum gives information about the amount of supersymmetry which is preserved in that particular case. This can also be understood from the fact that unbroken supersymmetry requires vanishing physical masses4 for the gravitino and so, non-zero eigenvalues of the gravitino mass matrix in the vacuum imply partial or complete spontaneous supersymmetry breaking. In the case of partial supersymmetry breaking of an [N] = 2 theory, the superpotential and D-terms of the resulting N = 1 theory are completely determined by the N = 2 mass matrix. In a compactification from a higher-dimensional theory there are several ways to determine the gravitino mass matrix in the dimensional theory. If we have explicit knowledge of the four-dimensional degrees of freedom we can derive the complete bosonic action and from the potential and gaugings derive the [N] = 2 Killing prepotentials. Alternatively one can directly perform a computation in the fermionic sector and directly derive the gravitino mass matrix or compactify the higher dimensional supersymmetry transformations. The advantage of the last two methods 4[In AdS space, the mass parameter which appears in the Lagrangian is not the true mass of a particle. Therefore] we use the terminologyphysical mass in order to distinguish the true mass from the parameter which appears in the is that one obtains a generic formula for the mass matrix in terms of integrals over the internal manifold without explicit knowledge of the four-dimensional fields. Once these fields are identified in some expansion of the higher-dimensional fields one can obtain an explicit formula for the mass matrix which should also be identical to the one obtained from purely bosonic computations. In the following we choose to determine the gravitino mass matrix by directly identifying all the possible contributions to the gravitino mass from eleven dimensions. For this we will first have to identify the four-dimensional gravitini. Recall from section 2.2 that on a seven-dimensional manifold with SU(3) structure one can define two independent (Majorana) spinors which we have denoted ǫ1,2. Then, we consider the Ansatz 1 4 ψ1 , (3.32) where ψ1,2 [are the four-dimensional gravitini which are Majorana spinors and the overall ] normal-isation factor is chosen in order to reach canonical kinetic terms in four-dimensions. It is more customary to work with gravitini which are Weyl spinors in four dimensions and therefore we decomposeψ1,2 [above as] ψ[µ]α= 1 2 ψ , (3.33) whereα, β = 1,2 and the chiral components of four-dimensional gravitini satisfy γ5ψα±µ=±ψ±αµ. (3.34) Then compactifying the eleven-dimensional gravitino terms in (3.1) and performing the appropriate Weyl rescalings (3.12) we arrive at the four-dimensional action −gh[−]ψ¯[+µ]α γµνρDνψα+ρ+Sαβψ¯+µα γµνψβ−ν+ c.c. i . (3.35) The main steps in deriving the mass matrix are presented in appendix C and for similar calculations we refer the reader to the existing literature [12, 23, 57] where similar calculations were performed. Equation (C.14), which is the final result for the gravitino mass matrixSαβ, can be written as S11 = dU+∧U++ 2G ∧U++ 2λ S22 = dU−[∧]U−+ 2[G ∧]U−+ 2λ , (3.36) S12 = S21= 2iG ∧Ω++ 2idˆc∧Ω+−2dJ∧Ω+∧z . Here G denotes the internal part of the background flux which was defined in equation (3.28),λis the constant to which the three-form ˜C3 is dual in four dimensions and we have further introduced U±[≡]ˆc3+ie− ˆ φ[φ]±[= ˆ][c] 3±ie− ˆ φ[Ω]− −iJ[∧]z , (3.37) where ˆc3 denotes the purely internal value of the three-form field ˆC3 which which was defined in equation (3.23). prepotentials, Px andQx for the hypermultiplets and the K¨ahler potential,K, for the vector mul-tiplets of theN = 2 supergravity by comparing the mass matrix with the general expression for an N = 2 gauged supergravity [37, 38, 39] Sαβ = 2 σ x αβ PAxXA−QxAFA, (3.38) where P[A]x and QxA are the electric respectively magnetic prepotentials which depend on the hy-permultiplets in the theory while (XA, FA) is a symplectic section which characterizes the special K¨ahler geometry of the vector multiplet scalars. Note that we have used the general formula for the N = 2 gauged supergravity mass matrix which appears when both electric and magnetic gaugings are present. This is because we expect to have both type of gaugings which is in general signaled by the presence of massive tensor multiplets in the four-dimensional effective action. It is easy to infer that such massive tensors appear if one takes into account that the one-form z, used in the expansion (3.15), is not closed. Squaring the field strength which comes from this expansion, B2 will pick up a mass proportional toR dz[∧]⋆dz. Finally we note that in a generic vacuum the off diagonal components of the mass matrix are non-vanishing and therefore the gravitini as defined in equation (3.32) are not mass eigenstates. The masses of the two gravitini are then given by the eigenvalues of the mass matrix evaluated in the vacuum. If these masses are equal and the two gravitini physically massless then supersymmetry is preserved in the vacuum. However this is not the case in general and then one encounters partial (when one gravitino is physically massless) or total spontaneous supersymmetry breaking. We shall come back to this issue in section 5. Preserving N=2 supersymmetry In this section we will consider the case where the internal manifold is one that will preserve the full N = 2 supersymmetry in the vacuum. We will begin by studying the constraints such a solution should satisfy in section 4.1, moving onto studying the form of the mass matrix for this solution in section 4.1.1. Finally in section 4.2 we will go through an explicit example of such a vacuum by considering the coset SO(5)/SO(3)A+B. 4.1 N=2 solution In this section we will classify the most general manifolds with SU(3) structure that are solutions to M-theory that preserveN = 2 supersymmetry with 4D spacetime being Einstein and admitting two Killing spinors. In order to study such solutions in full generality we allow for a warped product metric ds2[11]=e2A(y)gµν(x)dxµdxν +gmn(x, y)dymdyn, (4.1) but will eventually show that the warp factor,A(y), vanishes. This class of solutions has also been recently discussed in [22]. We look for solutions to the eleven-dimensional Killing spinor equation ∇Mη+ 1 288 Γ[M]N P QR−8δ[M][NΓP QR]iFˆN P QRη = 0. (4.2) For the background field strength ˆFM N P Q above we will consider the most general Ansatz compat-ible with four-dimensional Lorentz invariance. Therefore, the only non-vanishing components of ˆF Given that the internal manifold has SU(3) structure we know there exist at least two globally defined Majorana spinors and so we take a killing spinor Ansatz η=θ1(x)⊗ǫ1(y) +θ2(x)⊗ǫ2(y). (4.3) Since we are looking forN = 2 solution we treat θ1 andθ2 as independent. This will lead to more stringent constraints than theN = 1 case, where they may be related, which will make finding the most general solution straightforward. As we are looking for four-dimensional maximally symmetric spaces, the Killing spinorsθ1,2 satisfy ∇µθi =− 2Λ i 1γµγθi+ 1 2Λ 2γµθi (no sum overi), (4.4) where the index i= 1,2 labels the two spinors. The integrability condition reads Rµν =−3 h Λi[1]2+ Λi[2]2igµν , i= 1,2, (4.5) and so one immediately sees that not all Λi[1,2] are independent, but have to satisfy Λ1[1]2+ Λ1[2]2 = Λ2[2]2+ Λ2[2]2 . (4.6) Now decomposing the Killing spinor equation into its external and internal parts we arrive at the following equations ∇mǫ1,2 = 12e −4A[f γ] ǫ1,2, (4.7) 0 = γ[m]npqrFˆnpqr−8γpqrFˆmpqr ǫ1,2 , (4.8) 2Λ 1,2 1 ǫ1,2 = 1 2e A[γ]n[∂] nA+ 6e 3A[f] ǫ1,2 , (4.9) 1 2Λ 1,2 2 ǫ1,2 = − 1 A[γ]npqr[F][ˆ] npqr ǫ1,2 . (4.10) In order to classify this solution from the point of view of the SU(3) structure we have find the corresponding non-vanishing torsion classes by computing the exterior derivatives of the structure forms. Using their definition in terms of the spinors (2.4) and applying the results above one finds dV = 1 3f J , dJ = 0, (4.11) dΩ = [−]2i 3fΩ∧V , dA = 0. From equations (4.7) we can also determine the parameters Λi[1,2], which determine the value of the cosmological constant, which are given by Λ1[1]= Λ2[1] = f 3 , Λ1[2]= Λ2[2] = 0. The Killing spinor equations (4.7) also give constraints on the internal flux that imply it should vanish. However an easier way to see this is to consider the integral of the external part of the eleven-dimensional Einstein equation which reads R[(4)]+4 3 f2+ 1 72 Z ˆ FmnpqFˆmnpq = 0. (4.13) We see that substituting (4.12) we indeed recover ˆFmnpq = 0. Finally we note that in terms of the two G2 structuresϕ±, equations (4.11) can be recast into a simple form dϕ± = 2 3f ⋆ ϕ ±[,] [(4.14)] which shows that both G2 structures are in fact weak G2. 4.1.1 The mass of the gravitini We can now use this solution to illustrate the discussion on the relation between the gravitini masses and supersymmetry and to check our form of the mass matrix. Inserting the solution just derived into the mass matrix we should find that the masses of the two gravitini degenerate and that they are both physically massless. Taking the solution (4.11) from the previous section the mass matrix (3.36) reads S12= 0, S11=S22= − if e72φˆ , (4.15) which indeed shows that the masses of the two gravitini are the same. To show that the two gravitini are physically massless we recall that in AdS space the physical mass of the gravitino is given by mphys=m3/2−l , (4.16) wherem[3/2] is the actual mass parameter which appears in the Lagrangian (in our case[|]S11|), while l is the AdS inverse radius and is defined as R=−12l2 , (4.17) withR the corresponding Ricci scalar. In order to obtain the AdS radius correctly normalised we recall that the mass matrix (4.15) was obtained in the Einstein frame which differs from the frame used in the previous section by the Weyl rescaling (3.12). Inserting this into (4.5) we obtain the properly normalised AdS inverse radius l= f e 7 2φˆ . (4.18) 4.2 The coset SO(5)/SO(3)A+B In order to see the above considerations at work we will now go through an explicit example of a manifold that satisfies the N = 2 solution discussed in the previous sections. The manifold we will consider is the coset space SO(5)/SO(3)A+B. Cosets are particularly useful as examples of structure manifolds because the spectrum of forms that respect the coset symmetries is highly constrained. There are more details about cosets in general and about this particular coset in the appendix, or, for further reference we refer the reader to [46]. In this section we summarise the results and construct a basis of forms with which we can perform the compactification. We begin by finding the most general symmetric two-tensor that respects the coset symmetries, this will be the metric on the coset and is given by a 0 0 0 d 0 0 0 a 0 0 0 d 0 0 0 a 0 0 0 d 0 0 0 b 0 0 0 d 0 0 0 c 0 0 0 d 0 0 0 c 0 0 0 d 0 0 0 c , (4.19) where all the parameters are real. The parameters of the metric are the geometrical moduli and we see that we have four real moduli on this coset. Note that there is a positivity domainac > d2. Having established the metric on the coset we can move on to find the structure forms. The strategy here is to find the most general one, two and three forms and then impose the SU(3) structure relations on them. It is at this stage that we really see what theG structure of the coset is. This analysis is performed in the appendix and we find that the structure forms are given by V = eφˆz , J = v ω , (4.20) Ω = ζ3α0+ζ4α1+ζ6β1+ζ7β0 , where the relations between the ζs and the metric moduli are given in the appendix. The basis forms satisfy the differential relations dz = [−]ω , dω = 0, dα0 = z∧α1 , dβ0 = [−]z[∧]β1, (4.21) dα1 = 2z∧β1−3z∧α0 , dβ1 = [−]2z[∧]α1+ 3z∧β0 . The structure forms (4.20) show that indeed the coset has exactly SU(3) structure. In terms of the moduli classification we have been using it has a dilaton, one K¨ahler modulus and one complex structure modulus5 thus making up the four degrees of freedom in the metric. We also show in appendix 4.2 that scalar functions are in general not compatible with coset symmetries and therefore we conclude that for such compactifications no warp factor can appear. 4.2.1 Finding [N] = 2 minima In this section we want to find out if the potential which arises from the compactification on the coset above has a minimum where the geometric moduli are stabilised. In particular we wish to look for minima that preserve [N] = 2 supersymmetry and correspond to the solution discussed in section 4.1. As usual, in a bosonic background, the condition for supersymmetry is the vanishing of the supersymmetry variations of the fermions. This is precisely what we used in the previous section and thus a supersymmetric solution should satisfy all the conditions derived there, and in particular (4.11). It is easy to see that the forms (4.20) obey dV = −e ˆ φ vJ , dJ = 0, (4.22) dΩ = z[∧]([−]3ζ4)α0+ (ζ3−2ζ6)α1+ (2ζ4−ζ7)β1+ (3ζ6)β0 Therefore these forms will in general not satisfy the solution constraints (4.11). Requiring them to match the solution gives a set of equations for the moduli that will exactly determine the value of the moduli in the vacuum. For the coset at hand these are easy to solve and the solution is given by eφˆ = 6 1 3√42 14 λ 1 6 , v = 6 7 λ 3, (4.23) ζ3 = −ζ6=−iζ4 =iζ7= 6 where we have replaced the Freund-Rubin flux f by true flux parameter λfrom equation (3.30). Note that this solution fixes all the geometric moduli which is an important result for M-theory compactifications. It is important to stress however that ζ are not the true complex structure moduli, but are related to them by the rescaling (3.9). However, the complex structure moduli defined in (3.21), which can be most easily read off in special coordinates, do not depend on the rescalings of Ω and therefore, in our case the value of the single modulus is given by z1= Z 1 Z0 = =i (4.24) It can also be shown that the other scalar fields, which come from the expansion of the 3-form ˆ C3, (3.22), in the forms (4.21) are also stabilised. A simple argument to support this statement is that non-vanishing values of these scalars would lead to a non-zero internal ˆF4 flux at this vacuum solution due to the non-trivial derivative algebra the basis forms satisfy, (4.21), which in turn is ruled out by the supersymmetry conditions found in section 4.1. Hence, these scalar fields are forced by supersymmetry to stay at zero vacuum expectation value and therefore are fixed. Finally we note that as the solution above is supersymmetric, the four-dimensional space-time is AdS with the AdS curvature which scales with λas l[∼] 1 λ16 . (4.25) Thus, in the large volume limit (ieλ≫1) the four-dimensional space-time approaches flat space. = 1 In this section we will analyse the case where we only preserve N = 1 supersymmetry in the vacuum. We will show that this occurs due to spontaneous partial supersymmetry breaking, much like in massive type IIA [23], and that it is possible to write an effective N = 1 theory about this vacuum. We will derive the K¨ahler potential and superpotential for this theory and go through an explicit example of a manifold that leads to this phenomenon. 5.1 Spontaneous partial supersymmetry breaking In section 3.3 we showed that for certain manifolds there is a mass gap between the two gravitini in the vacuum and if this is the case then the vacuum no longer preserves the full[N] = 2 supersymmetry but rather spontaneously breaks to eitherN = 1 orN = 0 supersymmetry the former corresponding to one physically massless gravitino and the latter to no massless gravitini. In this section we will consider the case where the vacuum still preserves [N] = 1 supersymmetry. With this a mass gap of the scale of supersymmetry breaking, which is set by the vev of the scalars, appears throughout the spectrum and so we can consider specifying an effective [N] = 1 theory that is composed of the lower mass states. The superpotential and the K¨ahler potential for this theory will then be given by the mass of the physically massless gravitino as is usual for [N] = 1 theories. Determining the superfield spectrum is a more complicated problem and an important role is played by constraints on general partial supersymmetry breaking. Partial supersymmetry breaking has been considered in [47, 48, 49, 50, 51]. Following their dis-cussions we briefly summarise how the matter sector of the theory is affected by the breaking. In the [N] = 2 theory the fields were grouped into multiplets as described in Table 1. Once supersym-metry is broken these multiplets should split up into N = 1 multiplets. The N = 2 gravitational multiplet will need to split into a N = 1 ’massless’ gravitational multiplet and a massive spin-3[2] multiplet [51] gµν, ψ1, ψ2, A0 →massless (gµν, ψ1) + massive ψ2, A0, A1, χ the correct complex coordinates. For simple cases, as we will encounter in this paper, this can be done and one can find explicitely the correct complex combinations which span the N = 1 scalar K¨ahler manifold. Before concluding this section we should also mention some subtle issues related to the spon-taneous N = 2 → N = 1 breaking. It has been shown [47, 48, 49, 51, 52] that in Minkowski space spontaneous partial supersymmetry breaking can only occur if the symplectic basis in the vector-multiplet sector is such that no prepotential exists. However these results do not apply to the cases we discuss in this paper for the following reasons. First of all, the no-go result above has been obtained for purely electric gaugings of the [N] = 2 supergravity. Here we will see that we encounter magnetic gaugings as well and going to purely electric gaugings requires to perform some electric-magnetic duality which, in special cases, can take us to a symplectic basis where no prepotential exists. The second argument is that we will encounter the phenomenon of spontaneous partial supersymmetry breaking in AdS space and in such a case it is not clear how to extend the no-go arguments of [47].6 5.2 The superfields and K¨ahler potential Although the general pattern of partial supersymmetry breaking is constraining it is not enough to determine the superfields in general. The particular difficulty, as explained before, lies in truncating the hypermultiplet spectrum by finding the appropriate K¨ahler submanifold. However for the special case where we have only the universal hypermultiplet this is possible. We will therefore restrict our general analysis to such a situation anticipating also the fact that the specific example we will study in section 5.4 will be of this type. In order to find models with only one hypermultiplet we will rely on the observation of [53], that six-dimensional manifolds with SU(3) structure for which Ω+ is exact feature no complex structure moduli and therefore the hypermultiplet sector corresponding to compactifications on such manifolds consists only of the universal hyper-multiplet. We therefore restrict ourselves to the case where the torsion classes in (2.11) are restricted to Re(c1) =V2 =S1 =c2 =W2=A2 = 0, Im(c1)6= 0, and we see that under these conditions that the three form Ω+ [is indeed exact.] We further have to determine the gravitino mass matrix for this situation. Using (3.36), (3.14), (3.7) we find that in the particular case considered above, (5.2), the gravitino mass matrix becomes diagonal due to the fact that the internal flux[G] has to be closed due to the Bianchi identity S11 = √ V6 [dU+[∧]U++ 2[G ∧]U++ 2λ], S22 = √ V6 [dU−∧U−+ 2G ∧U−+ 2λ], (5.3) S12 = S21= 0. The condition (5.2) appears to be quite strong and we have already come across an example where this is violated in section 4.2. On the other hand we know from ref. [22] that anN = 1 anti-deSitter vacuum, which is required for all the moduli to be stabilised, necessarily means thatJ is not closed. Hence we always expect at least one of the torsion classes in (5.2) to be non-vanishing. Other than this we must take the condition as a limitation of this paper. Let us now see how we can identify the surviving degrees of freedom in a spontaneously broken N = 2 theory which comes from a compactification on a manifold which satisfies the requirements above. First of all we know that in order to have partial susy breaking we need at least two Peccei-Quinn isometries of the quaternionic manifold to be gauged such that the corresponding scalar fields become Goldstone bosons which are eaten by the graviphoton and another vector field in the theory. In the model at hand, where we only have one hypermultiplet, we have three such shift symmetries which can be gauged. They correspond to the axion, the dual of the two-form in four dimensions, and the two scalar fields which arise from the expansion of the three-form ˆc3 in the basis of three-forms (α0, β0). In order to gauge one of these last two directions, or a combination thereof, we need that the corresponding combination of the formsα0and β0 is exact. Without loss of generality we will assume thatβ0 is exact. Consistency with equations (3.18) and (3.19) implies then thatα0is not closed. We therefore see that the scalar field which comes from the expansion in the formβ0[, which we denote ˜][ξ] 0, is a Goldstone boson and will be eaten by one (or a combination) of the vector fields which come from the expansion of C3. Then the other Goldstone boson can only be given by the dual of the two form ˜B2. The way to see how this direction becomes gauged is obscured by the fact that we are dealing with a two-form rather then directly with a scalar field, but we can note that provided z is not closed, but its derivative is proportional to one of the two formsωi, there will appear in the compactified theory a Green-Schwarz interaction, ˜B2∧dA, which upon dualization precisely leads to the desired gauging.7 [Therefore we learn that the fields which] survive the truncation in the N = 1 theory are the dilaton and the second scalar field from the expansion of ˆc3 which we denote byξ0. The final thing which we need to do is to identify the correct complex combination of these two fields which defines the correct coordinate on the corresponding K¨ahler submanifold. Knowing that the N = 2 gravitino mass matrix becomes the superpotential in theN = 1 theory, which has to be holomorphic in the chiral fields, we are essentially led to the unique possibility F0 1 , (5.4) where the sign [±] is determined by which of the gravitini is massless and we will drop the index unless required for clarity. Z0 andF0 are the coefficients of the expansion of Ω in the basis (α0, β0), (3.21), and the quantity[−]4iZ0/F0is a positive real number as in the particular choice of symplectic basis we have made (β0 is exact) Z0 is purely imaginary. To check that this is indeed the correct superfield we should make sure we recover the moduli space metric from the K¨ahler potential in the gravitino mass. The appropriate kinetic terms in (3.16) S[kin]U = Z [√] " − e2φ∂µ ξ0+ie−φ −4iZ0 F0 1 2! ∂µ ξ0−ie−φ F0 1 .(5.5) The gravitino mass in the N = 1 theory is given by the product of the K¨ahler potential and the superpotential M3 2 =e 2K|W|. (5.6) From this we can use (5.3) to read off the K¨ahler potential eK/2 = e 2φ , (5.7) 7[The issue of the dualization is further obstructed by the fact that][B] [will be massive. This, as explained at the] end of section 3.3, is triggered by the non-closure of the one formz, which leads to mass term for the two-form field B2 of the type It is then easily shown that indeed the superfield and K¨ahler potential satisfy e2φ. (5.8) Hence we have identified the correct superfield in the truncated spectrum. Determining the super-fields arising from the [N] = 2 vector multiplets is a much easier task as they are just the natural pairing found in (3.17) ti≡bi−ivi, (5.9) where the index inow runs over the lower mass fields. 5.3 The superpotential The superpotential for the [N] = 1 theory can be read off from the gravitino mass to be W = √i 8 Z dU±[∧]U±+[G ∧]U±+ 2λ , (5.10) where again the[±]sign is fixed by the lower mass state. From this expression for the superpotential we can see that we should generically expect a constant termλ, linear terms in U, quadratic terms t2, U2 as well as mixed terms tU. These type of potentials will, in general, stabilise all the moduli and we will see such an example in the next section. It is instructive to note that finding a supersymmetric solution for this superpotential automati-cally solves the equations which are required for a solution of the fullN = 2 theory to preserve some supersymmetry. Therefore, for such a solution, it would be enough to show, using the mass matrix (5.3), that a mass gap between the two gravitini forms in order to prove that partial supersymmetry breaking does indeed occur. 5.4 The Coset SU(3)[×]U(1)/U(1)[×]U(1) In this section we will go through an explicit example of a manifold that preserves[N] = 1 supersym-metry in the vacuum. The manifold we will be considering is the cosetSU(3)×U(1)/U(1)×U(1) and for simplicity we shall turn off the four-form flux[G]= 0. Details of the structure of the coset can be found in the appendix and in this section we summarise the relevant parts. The coset is specified by three integers p,q, and r that determine the embeddings of the U(1)×U(1) in SU(3)×U(1), where the integers satisfy 0[≤]3p[≤]q , (5.11) with all other choices corresponding to different parameterisations of the SU(3). As with the previous coset example we can use the coset symmetries to derive the invariant SU(3) structure forms and the metric. The metric is given by a 0 0 0 0 0 0 0 a 0 0 0 0 0 0 0 b 0 0 0 0 0 0 0 b 0 0 0 0 0 0 0 c 0 0 0 0 0 0 0 c 0 0 0 0 0 0 0 d where the parametersa, b, c, d are all real. We can write the invariant forms as V = √dz , J = aω1+bω2+cω3 , (5.13) Ω = √abc iα0−4β0 This basis can be shown to satisfy the following differential relations dz = miωi , dωi = eiβ0 , dω˜i = 0, (5.14) dα0 = eiω˜i, dβ0= 0, where we have introduced two vectors ei = (2,2,2), andmi = (α,−β, γ), i= 1,2,3 which encode the information about the metric fluxes. The quantitiesα, β andγ are not independent, but satisfy α[−]β+γ = 0 and in terms of the integersp and q have the expressions α ≡ p q 3p2[+][q]2 , β ≡ 3p+q 2p3p2[+][q]2 , (5.15) γ ≡ 3p−q 2p3p2[+][q]2 . This ends our summary of the relevant features of the coset. We see that this manifold indeed has the required torsion classes (5.2) and, as expected, has no complex structure moduli and three K¨ahler moduli. 5.4.1 [N] = 1 minimum As explained in [54], M-theory compactifications on the coset manifold presented above are expected to preserveN = 1 supersymmetry in the vacuum. Therefore we can use the machinery developed at the beginning of this section and derive theN = 1 theory in the vacuum. We will also turn off the four-form flux[G] and so, using equations (5.7) and (5.10) we find the superpotential and K¨ahler potential to be W = √1 4U0 t1+t2+t3+ 2αt2t3[−]2βt1t3+ 2γt1t2+ 2λ, (5.16) K = [−]4ln[−]i U0[−]U¯0[−]ln[−]i t1[−]¯t1 t2[−]t¯2 t3[−]¯t3+ const. (5.17) where the superfieldsti were defined in (5.9) while forU0 we have U0±=ξ0±ie−φ, (5.18) as (5.13) gives [−]4iZ0[/F] 0 = 1. We can look for supersymmetric vacua to this action by solving the F-term equations. For convenience we restrict to the family of cosets withp= 0 though the results can be reproduced for more general choices of embeddings. We find the solution to the F-term equations t1 2 =t 2[=][t]3[=][U]0[=][−][i] r At this point we can go back to check which of the gravitini is more massive. Inserting the solution (5.19) into the expression of the mass matrix (5.3) we obtain S11> S22, (5.20) which means ψ2 [is the lighter gravitino and the one that should be kept in the truncated theory.] This gravitino is physically massless as expected. This also fixes the ± sign ambiguity in the superfield and superpotential so that we have U0[≡]U0−. Finally we note that as this solution is a supersymmetric solution of the truncated[N] = 1 theory and that according to (5.20) the gravitino masses are not degenerate we indeed have encountered the phenomenon of partial super symmetry breaking. 5.4.2 The structure in the vacuum It is informative to look at the form of the G structure of the coset in the vacuum in terms of the G2 structures. The two G2 forms (2.13) satisfy the vacuum differential and algebraic relations dϕ± = √2 −8β0[∧]z[±]2ω1∧ω2+ (±2 + 1)ω2∧ω3±2ω1∧ω3 2 3f ⋆ ϕ ± [=] √[2] . (5.21) It is clear to see that only ϕ− is weak-G2, and this is indeed the G2 structure that features in the superpotential and is associated with the lower mass gravitino. This shows an explicit mass gap appearing between the two G2 structures which is the same mass gap that corresponds to the partial supersymmetry breaking which we have used to write an effective N = 1 theory. Hence we have shown an example of the idea of an effective G structure where we could have arrived at this truncated [N] = 1 theory through a G2 structure compactification even though the manifold actually has SU(3) structure. Finally we should note that we could have used the condition that the manifold should be weak-G2 in the vacuum to solve for the values of the moduli in the vacuum as we did in section 4.2.1 instead of solving the F-term equations. In this paper we studied compactifications of M-theory on manifolds with SU(3) structure. We showed that these compactifications can be cast into a form much like type IIA compactifications on six-dimensional manifolds with SU(3) structure. The classical potential for the fields in four-dimensions differed however from the IIA case and we have proved in two explicit examples that one can find vacua which fix all the moduli without the need of non-perturbative effects. There are many interesting direction than can be followed from this paper. It would be in-teresting to consider manifolds that are more general then the restriction (5.2) and in particular the case where both the c1 and c2 torsion classes are non-vanishing should lead to a theory with a vacuum that preserves [N] = 1 supersymmetry and has a stable vacuum where the axions are stabilised at non-zero values. This would correspond to the unwarped solution with non-vanishing exact internal flux found in [22]. We have not touched on the subject of realistic particle content in this paper one reason being that one can not possibly achieve a viable spectrum of particles in M-theory compactifications by considering smooth manifolds as we do in this paper. However, in the effort to construct four-dimensional theories which contain chiral matter and gauge fields from M-theory compactifications (for recent developments see [55]), considering seven-dimensional manifolds with SU(3) structure should be very interesting because, as shown in this paper one can easily fix all the bulk moduli. This could be supplemented by turning on torsion classes that would lead to off-diagonal terms in the mass matrix that can be interpreted as D-terms in the effective N = 1 theory thereby breaking supersymmetry spontaneously. We would like to thank Pablo Camara, Joseph Conlon, Gianguido Dall’Agata, Thomas House, Josef Karthauser, Nikolaos Prezas and Silvia Vaul`a for useful discussions. The work of PMS and EP was supported by PPARC. The work of AM was supported by the European Union 6th Framework Program MRTN-CT-2004-503369 “Quest for Unification” and MRTN-CT-2004-005104 “ForcesUniverse”. In this appendix we outline the conventions used throughout this paper. The index ranges are M, N, P, Q, R, S, T, U, V, W = 0, ...,10, a, b, m, n, p, q, r, s, t = 0, ...,6, (A.1) µ, ν, ρ = 0, ...,3 i, j, k = 1, ...,Number of two[−]forms in the basis, A, B = 1, ...,Number of three−forms in the basis, α, β = 1,2. We worked with a mostly plus metric signature ˆ η11= (−1,+1,+1, ...), (A.2) where generallyˆdenotes eleven-dimensional quantities. The ˆǫtensor density is defined as ˆ ǫ0123...= +1, (A.3) and we define the inner product between forms as (ωpyνq)[µ][p][+1][...µ][q] ≡ 1
{"url":"https://1library.net/document/q5mm4lgy-m-theory-seven-dimensional-manifolds-su-structure.html","timestamp":"2024-11-05T12:12:06Z","content_type":"text/html","content_length":"218717","record_id":"<urn:uuid:b525e454-8140-4f2f-955a-de5cbdeb0788>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00360.warc.gz"}
C Program to Calculate Area and Perimeter of a Rhombus Here is a C program to find the area and perimeter of a rhombus. A rhombus is quadrilaterals(having four sides) with all sides equal in length and opposites sides parallel. Opposite angles of a rhombus are also equal. A rhombus is a special case of parallelogram where all sides are equal in length. • Diagonals of a rhombus are perpendicular and divides each other in two equal half. • Opposite sides of a rhombus are parallel and all sides are equal in length. • Opposite angles of a rhombus are equal. • The sum of any two adjacent angles of a rhombus are 180 degrees. • The diagonal of rhombus bisects the interior angle. To calculate the area of rhombus we need length of Base and Height of length of both diagonals. Base : We can choose any side of a rhombus as base to calculate area of rhombus. Height : Height of a rhombus is the perpendicular distance between the base and it's opposite side. Diagonals : A line segment joining opposite vertices of a rhombus. Area of Rhombus A Rhombus is also a parallelogram. Hence, If we know the length of Base and Height then we can calculate area of rhombus by multiplying Base and Height. Where, B is the length of base of Rhombus. H is the length of height of Rhombus. (Base and Height are perpendicular on each other) If we know the length of both diagonals of a rhombus the, it's area can be calculated by multiplying length of both diagonals and then dividing it by 2. • Area of Rhombus = (Product of diagonals)/2 = (A X B)/2 Where A and B are the length of diagonals of rhombus C Program to find the area of the rhombus To calculate the area of a rhombus we need the length of both diagonals of a rhombus. Below program first takes the length of diagonals as input from user and stores it in two floating point numbers. Now, to find the area of rhombus we take semi product of diagonals. It finally prints the area of rhombus on screen using printf function. #include <stdio.h> int main(){ float diagonalOne, diagonalTwo, area; printf("Enter the length of diagonals of rhombus\n"); scanf("%f %f", &diagonalOne, &diagonalTwo); area = (diagonalOne * diagonalTwo)/2; printf("Area of rhombus : %0.4f\n", area); return 0; Enter the length of diagonals of rhombus 3.5 4 Area of rhombus : 7.0000 C Program to find the perimeter of rhombus The perimeter of a rhombus is the linear distance around the boundary of the rhombus. In other words, we can think of perimeter of a rhombus as the length of fence needed to enclose a rhombus. Perimeter of Rhombus The perimeter of a Rhombus can be calculated by adding the length of all four sides of Rhombus. As we know the length of all sides of rhombus are equal, so perimeter of rhombus is equal to four times side of rhombus. • Perimeter of Rhombus = 4 X S Where, S is the length of any side of rhombus. #include <stdio.h> int main(){ float side, perimeter; printf("Enter the length of side of rhombus\n"); scanf("%f", &side); perimeter = 4 * side; printf("Perimeter of rhombus : %0.4f\n", return 0; Enter the length of side of rhombus Perimeter of rhombus : 26.0000 To calculate the perimeter of rhombus, we need length of side of rhombus. Above program, first takes length of any side of rhombus as input from user and stores it in a floating-point variable. Then, we multiply length of side by 4 and store the result in a floating point variable called 'perimeter'. Finally, it prints the perimeter of Rhombus on screen using printf function. Interesting Facts about Rhombus • A rhombus has rotational symmetry. • The shape of basketball ground is rhombus. • A square is also a rhombus, having equal sides and all interior angles as 90 degrees. • A rectangle is also a rhombus, having opposites sides equal parallel and all interior angles as 90 degrees. • A rhombus is a special case of parallelogram, where all sides are equal. • The diagonals of rhombus intersect at right angle. Related Topics
{"url":"https://www.techcrashcourse.com/2015/03/c-program-to-calculate-area-of-rhombus.html","timestamp":"2024-11-11T00:09:32Z","content_type":"application/xhtml+xml","content_length":"85897","record_id":"<urn:uuid:4495695d-98b1-4bea-adc2-a145796fcf45>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00463.warc.gz"}
Types and Properties of Angles Previous - Algebra 2: Solving for Equations Types and Properties of Angles Right Angles Angles which measure exactly 90° are right angles, that is, ϴ = 90°. Acute Angles Acute angles are angles which are greater than 0° but less than 90°, that is, 0° < ϴ < 90°. Obtuse Angles Obtuse angles are those which are greater than 90° but less than 180°, that is, 90° < ϴ < 180°. Straight Angles Angles which measure exactly 180° (degrees) are straight angles. Therefore, straight angles are straight lines. Angles are represented by the sign ϴ, called theta. That is, for straight angles, ϴ= Reflex Angles Reflex angles are angles which are greater than 180° but less than 360°, that is, 180° < ϴ < 360°. Adjacent Angles Two angles which share the same vertex and have a common side (line) are called adjacent angles. Complementary Angles Complementary angles are two angles which when summed equal 90°. A° + B° = 90° Supplementary Angles Supplementary angles are two angles which when summed equals 180° such as: A° + B° = 180° Vertically Opposite Angles Vertically opposite angles are the angles opposite to each other when two straight lines intersect. Their defining property is that, vertically opposite angles are equal in magnitude. For example, A= B and C=D. Corresponding Angles When two parallel lines are crossed by a line, this line is called a transversal. The angles formed by this crossing of the parallel lines by the transversal are called corresponding angles and these angles are equal in magnitude. Corresponding angles: b = f d = h a = e c = g Types and Properties of Triangles Equilateral Triangles Equilateral triangles are those with all three sides equal in length and all three angles equal in size. Since the angles in a triangle sum to 180° and the size of each angle is the same in an equilateral triangle, the angles are all 60°. Isosceles Triangles Isosceles triangles are triangles with two sides equal in length and two angles equal in size. Scalene Triangles A scalene triangle is one which has no sides equal in length and no angles equal in magnitude. Congruent Triangles Congruent triangles are triangles which have the same area, angles and side lengths. Angles Sides a=d 6=6 b=e 4=4 c=f 5=5 Similar Triangles Triangles are similar if they have the same shape, but not necessarily the same size. Two triangles are similar if the only difference is size and possibly the need to turn or flip one round. Angles Sides a=d 8≠4 b=e 6≠3 c=f 4≠2 Note: With similar angles, the angles are equal but not the sides.
{"url":"https://www.econnect-study.com/index.php/mathematics-geometry1-types-and-properties-of-angles-and-triangles","timestamp":"2024-11-05T02:18:56Z","content_type":"text/html","content_length":"38442","record_id":"<urn:uuid:d36671d3-3552-4027-ad4b-745a81c85bc1>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00666.warc.gz"}
Lesson 22 Solving Rational Equations 22.1: Notice and Wonder: Thoughtful Multiplication (5 minutes) This warm-up begins where students left off at the end of the previous lesson, considering how to multiply strategically to “clear the denominators” in a rational equation. The purpose of this warm-up is to elicit the idea that when multiplying to clear denominators, we only need to multiply by the least common denominator shared by the different terms in the expression. This idea will be used in the following activities as students solve rational equations and investigate extra solutions that sometimes arise in the solving process. While students may notice and wonder many things about these images, the multiplication by \(x(x+2)\) and not \(x(x+2)^2\) (the result of multiplying together all the denominators) is the important discussion point. Display the partially solved equation for all to see. Ask students to think of at least one thing they notice and at least one thing they wonder. Give students 1 minute of quiet think time, and then 1 minute to discuss the things they notice and wonder with their partner, followed by a whole-class discussion. Student Facing What do you notice? What do you wonder? \(\displaystyle \frac{3}{x(x-2)} &= \frac{2x+1}{x-2} \\ \frac{3}{x(x-2)} \boldcdot x(x-2) &= \frac{2x+1}{x-2} \boldcdot x(x-2) \\ 3 &= 2x^2 + x \\ 0 &= 2x^2+x-3 \\ \) Activity Synthesis Ask students to share the things they noticed and wondered. Record and display their responses for all to see. If possible, record the relevant reasoning on or near the image. After all responses have been recorded without commentary or editing, ask students, “Is there anything on this list that you are wondering about now?” Encourage students to respectfully disagree, ask for clarification, or point out contradicting information. If students do not recall or have not heard the phrase “clearing the denominators” for this type of multiplying, tell them it is a useful description for the first few steps shown. If time allows, have students finish solving for \(x\) by either using the quadratic formula or factoring (-1.5, 1). 22.2: Rational Solving (15 minutes) The purpose of this activity is for students to understand how steps used to solve a rational equation sometimes lead to nonequivalent equations, giving rise to so-called extraneous solutions. For example, multiplying each side of a rational equation by \(x+1\) creates a new equation that is true when \(x= \text-1\), since \(0=0\), even if the original equation was not true at \(x= \text-1\). Monitor for students who notice that neither 0 nor -1 can be substituted for \(x\) in the original expressions due to division by 0, and students who notice that multiplying by a variable on each side of an equation is different than the type of steps they have used in the past to solve linear or quadratic equations (specifically, that multiplying by \(x+1\) is the same as multiplying by 0 when \(x= \text-1\)). Writing, Conversing: MLR1 Stronger and Clearer Each Time. Use this routine to help students improve their writing by providing them with multiple opportunities to clarify their explanations through conversation. Give students time to meet with 2–3 partners to share their response to the question “Why does Jada’s method produce an \(x\) value that does not solve the equation?” Provide listeners with prompts for feedback that will help their partner add detail to strengthen and clarify their ideas. For example, students can ask their partner, “How do you know . . . ?”, “Is it always true that . . . ?”, or “Can you say more about . . . ?” Next, provide students with 3–4 minutes to revise their initial draft based on feedback from their peers. This will help students produce a written generalization for why some solutions for rational equations are extraneous. Design Principle(s): Optimize output (for explanation) Student Facing Jada is working to find values of \(x\) that make this equation true: \(\displaystyle \frac{5x+5}{x+1} = \frac{5}{x}\) She says, “If I multiply both sides by \(x(x+1)\), I find that the solutions are \(x=1\) and \(x= \text-1\), but when I substitute in \(x= \text-1\), the equation does not make any sense.” 1. Is Jada’s work correct? Explain or show your reasoning. 2. Why does Jada’s method produce an \(x\) value that does not solve the equation? Student Facing Are you ready for more? 1. What are the solutions to \(x^2=1\)? 2. What are the solutions to \(\frac{x^2}{x-1} = \frac{1}{x-1}\)? 3. How can you solve \(\frac{x^2}{x-1} = \frac{1}{x-1}\) by inspection? 4. How does the denominator influence the solution(s) to \(\frac{x^2}{x-1} = \frac{1}{x-1}\)? Activity Synthesis The goal of this discussion is for students to share what they think is happening to make \(x= \text-1\) appear as a solution to the original equation even though it is not. Invite previously identified students to share things they noticed about the original equation and what it means when we multiply each side of an equation by a variable. Consider these questions to help further the discussion: • “Jada multiplied by \(x(x+1)\). What values of \(x\) make that expression equal to 0?” (0 and -1.) • “What are values of \(x\) that we cannot substitute into the original equation?” (We cannot substitute 0 or -1 for \(x\) in the original equation, because both values create an invalid equation due to division by 0.) • “What happens when you multiply each side of any equation by 0? What values of \(x\) make this new equation true but not necessarily the original equation?” (You get the equation \(0=0\), which is true for all values of \(x\).) Help summarize the discussion by telling students that sometimes steps we do to solve an equation result in a new equation that is not equivalent to the original equation. Two equations are equivalent if they have the exact same solutions. Since \(x= \text-1\) is a solution to the new equation but not the original equation, some people call this an extraneous solution. The step that created a new equation that is not equivalent to the original equation was when each side was multiplied by an expression that can have the value 0. This can make two previously unequal (or undefined) sides equal for some value of the variable, so the original equation and the new equation are not equivalent. These inequivalent equations do not always arise from multiplying both sides of an equation by an expression, so we should always check the solution in the original equation to be sure. In later units, students will see other types of equation solving steps that can result in new equations that have solutions that do not satisfy the original equation. 22.3: More Rational Solving (15 minutes) The purpose of this activity is for students to practice solving rational equations and identifying extraneous solutions, if they exist. Students are expected to use algebraic methods to solve the equations and should be discouraged from using graphing technology. Arrange students in groups of 2. Tell students to complete the first problem individually and then check their work with their partner before completing the following problems together. Conversing: MLR8 Discussion Supports. Use this routine to help students describe their reasons for choosing values of \(x\) that cannot be solutions to the equations. Students should take turns stating values and explaining their reasoning to their partner. Display the following sentence frames for all to see: “_____ and _____ cannot be \(x\) values because . . . .”, and “I noticed _____ , so I know . . . .” Encourage students to challenge each other when they disagree. This will help students clarify their reasoning about extraneous solutions. Design Principle(s): Support sense-making; Maximize meta-awareness Engagement: Internalize Self Regulation. Chunk this task into more manageable parts to differentiate the degree of difficulty or complexity. Invite students to identify 2 equations that they think would be least difficult to solve and 2 that would be most difficult to solve, and to choose and respond to at least 2 of the questions they identified. Supports accessibility for: Organization; Attention Student Facing 1. Here are a lot of equations. For each one, use what you know about division to identify values of \(x\) that cannot be solutions to the equation. 1. \(\dfrac{x^2+x-6}{x-2} = 5\) 2. \(\dfrac{2x+1}{x} = \dfrac{1}{x-2}\) 3. \(\dfrac{10}{x+8} = \dfrac{5}{x-8}\) 4. \(\dfrac{x^2+x+1}{13} = \dfrac{2}{x-1}\) 5. \(\dfrac{x+1}{4x} = \dfrac{x-1}{3x}\) 6. \(\dfrac{1}{x} = \dfrac{1}{x(x+1)}\) 7. \(\dfrac{x+2}{x} = \dfrac{3}{x-2}\) 8. \(\dfrac{1}{x-3} = \dfrac{1}{x(x-3)}\) 9. \(\dfrac{(x+1)(x+2)}{x+1} = \dfrac{x+2}{x+1}\) 2. Without solving, identify three of the equations that you think would be least difficult to solve and three that you think would be most difficult to solve. Be prepared to explain your reasoning. 3. Choose three equations to solve. At least one should be from your “least difficult” list and one should be from your “most difficult” list. Anticipated Misconceptions Some students may forget how to solve a quadratic equation. Remind them of options such as factoring or using the quadratic formula when they have an expression equal to zero. Activity Synthesis The purpose of this discussion is for students to share strategies for solving different types of equations. Some students may have thought that an equation was in the “least difficult” category, while others thought that the same equation was in the “most difficult” category. Remind students that once you feel confident about the strategies for solving an equation, it may move into the “least difficult” category, and recognizing good strategies takes practice and time. Informally poll the class for each equation as to whether they placed it in the “most difficult” category, “least difficult” category, or if it was somewhere in the middle. Record and display the results for all to see. For questions with a split vote, have a group share something that made it seem difficult about it and another group share something that made it seem less difficult. If there are any questions that everyone thought would be more difficult or everyone thought would be less difficult, ask students why it seemed that way. Ask students, “Were there any equations that were more difficult to solve than you expected? Were there any that were less difficult to solve than you expected?” If not brought up, ask how students worked out the equation \(\frac{2x+1}{x} = \frac{1}{x-2}\), which has irrational solutions. Lesson Synthesis The purpose of this writing prompt is for students to reflect on what they have learned about how so-called extraneous solutions can arise when solving rational equations. Ask students to respond to the following prompt: “How can extra solutions arise in the process of solving an equation?” Encourage students to use equations from the lesson to help in their explanations if needed. If time allows, ask students to share what they’ve written with a partner and then select 2–3 students to share something from either their paper or their partner’s with the class. Key understandings are that when we multiply each side of an equation by an expression that can have the value 0, we sometimes get an equation that is not equivalent to the initial equation, and that we should always check to see if a solution satisfies the original equation. 22.4: Cool-down - Find Rational Solutions (5 minutes) Student Facing Consider the equation \(\frac{x+2}{x(x+1)} = \frac{2}{(x+1)(x-1)}\). We could solve this equation for \(x\) by multiplying each expression by \(x(x+1)(x-1)\) to get an equation with no variables in denominators, and then rearranging it into an expression that equals 0. Here is what that looks like: \(\displaystyle \frac{x+2}{x(x+1)} \boldcdot x(x+1)(x-1) &= \frac{2}{(x+1)(x-1)} \boldcdot x(x+1)(x-1) \\ (x+2)(x-1) &= 2x \\ x^2 + x - 2 &= 2x \\ x^2 - x - 2 &= 0 \\ (x-2)(x+1) & = 0 \\ \) The last equation, \((x-2)(x+1) = 0\), leads us to believe that the original equation has two solutions: \(x=2\) and \(x=\text-1\). Substituting \(x=2\) into the original equation, we get \(\frac {2+2}{2(2+1)} = \frac{2}{(2+1)(2-1)}\), which is true since each side is equal to \(\frac23\). But, substituting \(x=\text-1\) into the original equation, we get \(\frac{\text-1+2}{\text-1(\ text-1+1)} = \frac{2}{(\text-1+1)(\text-1-1)}\), which isn’t a valid equation since division by 0 is not allowed. This means \(x=\text-1\) isn’t a solution, so what happened to make us think that it Let’s consider the simpler equation \(x - 5 = 0\). This equation has one solution, \(x=5\). But if we multiply each side by \((x-1)\) the result is a new equation, \((x-1)(x-5)=0\), which has solutions 5 and 1. The 1 is a solution to the new equation because when \(x=1\), \(x-1=0\). But if we substitute 1 for \(x\) into the original equation, we get \(1-5 = \text-4 = 0\), which is not a valid equation, so 1 is not a solution to the original equation. Because we multiplied each side of the original equation by an expression that has the value 0 when \(x=1\), the two sides \(x-5\) and 0 that were unequal at that specific \(x\)-value are now equal. For this example, \(x=1\) is sometimes called an extraneous solution. In the original example, \(x=\text-1\) is the extraneous solution. While \(x=\text-1\) is a solution to the equation we wrote after we multiplied the original equation by \(x(x+1)(x-1)\) on each side, it is not a solution to the original equation since they are not equivalent. It should be noted that even though we multiplied by \(x\), \((x+1)\), and \((x-1)\), only one extraneous solution was added. This shows that multiplying by an expression that can equal zero does not always cause an extraneous solution. So how do we tell if a solution is extraneous or not? We substitute it into the original equation and make sure the result is a valid equation.
{"url":"https://curriculum.illustrativemathematics.org/HS/teachers/3/2/22/index.html","timestamp":"2024-11-14T11:02:42Z","content_type":"text/html","content_length":"102682","record_id":"<urn:uuid:3e93d95c-2c73-4b48-a7ae-cd6d86c281bb>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00745.warc.gz"}
Get the data of a KeyBlock, as a list of data items. Each item will have a different data format depending on the type of this Key. Note that prior to 2.45 the behaviour of this function was different (and very wrong). Old scripts might need to be updated. • Mesh keys have a list of Vectors objects in the data block. • Lattice keys have a list of Vectors objects in the data block. • Curve keys return either a list of tuples, eacn containing four Vectors (if the curve is a Bezier curve), or otherwise just a list of Vectors. For bezier keys, the first three vectors in the tuple are the Bezier triple vectors, while the fourth vector's first element is the curve tilt (the other two elements are reserved and are currently unused). For non-Bezier keys, the first three elements of the returned vector is the curve handle point, while the fourth element is the tilt. A word on relative shape keys; relative shape keys are not actually stored as offsets to the base shape key (like you'd expect). Instead, each shape key stores an entire model (actually the state of the mesh vertices after exiting editmode with any given key active). The additive offset for a shape key is calculated (when needed) by comparing the shape key with its base key, which is always the very first in the keyblock list.
{"url":"http://www.zoo-logique.org/3D.Blender/scripts_python/API/Key.KeyBlock-class.html","timestamp":"2024-11-13T10:38:57Z","content_type":"application/xhtml+xml","content_length":"14269","record_id":"<urn:uuid:2f738b66-37dc-498e-85f4-6f4ae1317af8>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00078.warc.gz"}
Cellular CyberKey A friend knows I’m into electronic locks and gave me this CyberKey key as a present. He did not have the locks to share so you will have to do with pictures of the key. Let’s just admire the construction and not worry about all the ways you would break an electronic access system like this. Note: Click the pictures for the full size image. CC-BY-4.0 Jan-Willem Markus Toool Blackbag What battery does it use? • It uses a CR2 (3V lithium) battery. Fairly common battery for electronic locks. Why is it called Cellular? I don’t see a cell modem in there. • Hi Dan. I’ve got no clue. I’ve not found another using image search but they’ve dropped the ‘cellular’ for later releases.
{"url":"https://blackbag.toool.nl/?p=3511","timestamp":"2024-11-14T13:33:39Z","content_type":"application/xhtml+xml","content_length":"31298","record_id":"<urn:uuid:fc87a829-20a7-49f5-b8c1-a9adaf81d022>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00267.warc.gz"}
Analysis of optical properties of III-V semiconductorsSince III-V semiconductors are used as materials for light emitting devices such as diodes, it is useful to predict their optical characteristics from first-principles calculations.Use Cases Since III-V semiconductors are used as materials for light emitting devices such as diodes, it is useful to predict their optical characteristics from first-principles calculations. Here, we explain how to calculate the dielectric function of Si and GaAs, which is a group III-V semiconductor, using first-principles calculation. It is also possible to evaluate optical properties such as absorption spectrum, permittivity, and reflectance from the dielectric function . 1. Computational model creation You can easily obtain the required crystal structure by searching for the substance names Si and GaAs on the Import Materials screen (Fig. 1 and Fig. 2). Figure 1 Search for GaAs crystal structure of space group F-43m from Import Material Figure 2 Searched GaAs crystal model (F-43m) 2. Execution of electronic dielectric function calculation The electronic dielectric function is calculated using epsilon.x, which is the dielectric function calculation function of Quantum Espresso. Table 1 shows the calculation conditions for calculating the dielectric function in epsilon.x. Table 1 Dielectric function calculation conditions Table 3 Bandgap calculation results 2.1 Absorption spectrum Figures 3 and 4 show the calculation results of the imaginary part of the dielectric function. The imaginary part of the dielectric function corresponds to the absorption spectrum. By performing scissor approximation, it can be confirmed that the absorption position of the spectrum approaches the experimental result. Fig. 3 Comparison of the calculation result of the absorption spectrum (imaginary part of the dielectric function) of Si and the experimental result [1] Fig. 4 Comparison of the calculation result of the absorption spectrum (imaginary part of the dielectric function) of GaAs and the experimental result [1] 2.2 Permittivity In addition, Fig. 5 and Fig. 6 show the calculation results of the real part of the dielectric function. The permittivity can be evaluated from the value of the dielectric function when Photon energy is 0. Table 4 shows the calculation results of the permittivity. The shape of the dielectric function (peak position, etc.) is closer to the experimental result by performing scissor approximation. On the other hand, the value of the dielectric function was improved by scissor approximation in Si, but not in GaAs. Fig. 5 Comparison of the calculation result of the real part of the dielectric function of Si and the experimental result [1] Fig. 6 Comparison of the calculation result of the real part of the dielectric function of GaAs and the experimental result [1] Table 4 Dielectric constant calculation results 2.3 Reflectance Next, Fig. 7 and Fig. 8 show the calculation results of the reflectance. By performing scissor approximation, it can be confirmed that the shape of the reflectance (peak position, etc.) approaches the experimental result. In the case of Si, the reflectance value was close to the experimental result by performing scissor approximation. Fig. 7 Comparison of the calculation result of the reflectance of Si and the experimental result [1] Fig. 8 Comparison of GaAs reflectance calculation results and experimental results [1] 3. About the workflow of electronic dielectric function calculation Table 5 shows the workflow (Figure 9) and detailed information for calculating the electronic dielectric function in epsilon.x of Quantum Espresso. Figure 9 Workflow for electronic dielectric function calculation Table 5 Workflow information for permittivity calculation In addition, the energy range of the graph output can be set by inputting the lower and upper limits of energy during the mk_graph flow (Fig. 10). Fig. 10 Graph drawing execution flow Energy range input field in mk_graph Emin corresponds to the energy lower limit value and Emax corresponds to the energy upper limit value (eV unit) For the workflow of the electronic dielectric function, obtain "electronic_dielectric_function (QE6.3)" from Bank and use it. 4. Calculation time and cost Finally, the calculation time and cost required to carry out this calculation are shown in Table 6. Table 6 Calculation time and cost of electronic dielectric function calculation (using saving node) [1] DE Aspnes and AA Studna, "Dielectric functions and optical parameters of Si, Ge, GaP, GaAs, GaSb, InP, InAs, and InSb from 1.5 to 6.0 eV", Phys. Rev. B 27, 985 (1983) Original Source from: https://ctc-mi-solution.com/iii-v半導体の光学特性の解析/
{"url":"https://www.mat3ra.com/case-studies/analysis-of-optical-properties-of-iii-v-semiconductors","timestamp":"2024-11-03T22:03:14Z","content_type":"text/html","content_length":"50730","record_id":"<urn:uuid:7acd0c57-a0b2-4d4a-896c-2e829e19dd10>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00005.warc.gz"}
12.2 Internal Validity | An Introduction to Spatial Data Science with GeoDa 12.2 Internal Validity Indices pertaining to the internal validity of a cluster focus on the composition of the clusters in terms of how homogeneous the observations are, how well clusters find the right separation between groups, as well as the balance among individual cluster sizes. Five types of measures are considered here. First are traditional measures of fit that were included among the Summary of cluster characteristics in the preceding chapters. Next follow indicators of the balance of cluster sizes, i.e., the evenness of the number of observations in each cluster. The final three types of measures are less common. The join count ratio is an indicator of the compactness and separation between clusters in terms of the number of connections with observations outside the cluster. Compactness is a characteristic of the shape of a spatially constrained cluster and can be assessed by many different indicators. Finally, connectedness is an alternative measure of compactness that is derived from the graph structure of the spatial weights. 12.2.1 Traditional Measures of Fit As covered in previous chapters, the total sum of squared deviations from the mean (TSS) is decomposed into one part attributed to the within sum of squares (WSS) and a complementary part due to the between sum of squares (BSS). These are the most commonly used indicators of the fit of a clustering to the data. A better cluster is considered to be one with a smaller WSS, or, equivalently, a larger BSS to TSS ratio. As noted earlier, this is only an appropriate metric when the dissimilarity matrix is based on an Euclidean distance. When this is not the case, e.g., for K-Medoids, a different metric should be used. While useful, these indicators of fit miss other important characteristic of a cluster. For example, in order to better identify critical assignments within a given cluster alignment, Kaufman and Rousseeuw (2005) introduced the notion of average silhouette width. This is the average dissimilarity of an observation to members of its own cluster compared to the average dissimilarity to observations in the closest cluster to which it was not classified. An extension that takes the spatial configuration into account in the form of so-called geo-silhouettes is presented in Wolf, Knaap, and Rey (2021). 12.2.2 Balance In many applications, it is important that the clusters are of roughly equal size. Two common indicators are the entropy and Simpson’s index. Both compare the distribution of the number of observations among clusters to an even distribution. Whereas entropy is maximized for such a distribution, Simpson’s index is minimized. The entropy for a given clustering \(P\) consisting of \(k\) clusters with observations \(n_i\) in cluster \(i\), for a total of \(n\) observations, is (see, e.g., Vinh, Epps, and Bailey 2010): \[H (P) = - \sum_{i=1}^k \frac{n_i}{n} \log \frac{n_i}{n}.\] For equally balanced clusters, \(n_i/n = 1/k\), so that entropy is \(- k .(1/k) \ln(1/k) = ln(k)\), which is the maximum that can be obtained for a given \(k\). To facilitate comparison among clusters of different sizes, a standardized measure can be computed as entropy/max entropy. The closer this ratio is to 1, the better the cluster balance (by construction, the ratio is always smaller than 1). The entropy fragmentation index can be computed for the overall cluster result, but also for each cluster separately. This is especially relevant when a cluster consists of subclusters, i.e., subsets of contiguous observations. A cluster that consists of subclusters of equal size would have a relative entropy of 1. The sub-cluster entropy is only a valid measure for non-contiguous or non-compact clusters, so it is not appropriate for the results of spatially constrained clustering methods. Another indicator of cluster balance is Simpson’s index of diversity (Simpson 1949), also known in economics as the Herfindahl-Hirschman index: \[S = \sum_{i=1}^k (\frac{n_i}{n})^2.\] With equal representation in clusters, Simpson’s index equals \(1/k\), the lowest possible value. It ranges from \(1/k\) to \(1\) (all observations in a single cluster). A standardized index yields a value of 1 for the equal representation case, and values larger than 1 for others, with smaller values indicating a more balanced distribution.^55 Same as for the entropy measure, Simpson’s index is only applied to subclusters in cases where the solution is not spatially constrained. 12.2.3 Join Count Ratio An index that addresses both compactness and separation is the join count ratio. It is derived from the contiguity structure among observations that is reflected in a spatial weights matrix. For a given such structure, a relative measure of compactness of a cluster is indicated by how many neighbors of each observation in the cluster are also members of the cluster. For a spatially compact cluster, ignoring border effects, this ratio should be 1. The higher the ratio, the more compact and self-contained is the cluster, with the least connectivity to other clusters. The join count ratio can be computed for each cluster separately, as well as for the clustering as a whole. A value of zero (the minimum) indicates that all neighbors of the cluster members are outside the cluster. This is only possible when the cluster is not spatially constrained and when all cluster elements are singletons in a spatial sense. 12.2.4 Compactness For spatially constrained clusters, compactness is a key characteristic. For example, this is a legal criterion in the context of electoral redistricting, which is a form of spatially constrained clustering (Saxon 2020). However, there is no single measure to characterize compactness, and many different aspects of the shape of clusters can be taken into account (e.g., the review in Niemi et al. 1990). For example, Saxon (2020) reviews no less than 18 indicators of compactness and compares their properties in the context of gerrymandering.. Perhaps the most famous measure of compactness is the isoperimeter quotient (IPQ), i.e., the ratio of the area of a cluster shape to that of a circle of equal perimeter (Polsby and Popper 1990). The point of departure is the view that a circle is the most compact shape. It is compared to an irregular polygon with the same perimeter.^56 The area of a circle, expressed in function of its perimeter \(p\) is \(C = p^2 / 4\pi\).^57 Consequently, the isoperimeter quotient as the ratio of the area of the polygon \(A\) over the area of the circle is: \[IPQ = 4 \pi A / p^2,\] with \(A\) as the area of the polygon. The IPQ is only suitable for spatially constrained cluster results. 12.2.5 Connectedness With the spatial weights viewed as a network or graph, a spatially constrained cluster must constitute a so-called connected component. The diameter of the network structure that corresponds with the cluster is the length of the longest shortest path between any pair of observations (Newman 2018, 133). Starting from an unweighted graph representation of the spatial weights matrix, each connection between two neighbors corresponds with one step in the graph. For a given number of observations in a cluster, the diameter computed from the spatial weights connectivity graph gives a measure of compactness (smaller is more compact). For example, for a star-shaped layout of observations, the diameter would equal two (the longest shortest path between any pair of observations goes through the center of the star in two steps). On the other end of the spectrum, for a long string of \(m\) observations, the diameter would be \(m-1\). Everything else being the same, the diameter of a network increases with its size. Dividing the diameter by the number of observations in the cluster gives a relative measure, which corrects for the size of the cluster. As for the IPQ, the diameter of a cluster is only applicable to spatially constrained clusters. 12.2.6 Implementation To compare the different cluster validation measures, eight different outcomes are considered, all obtained with \(k=12\) for the Ceará economic indicators (\(n = 184\)). Two clusterings are non-spatial, i.e., Hierarchical clustering with Ward’s linkage (not used earlier, but shown in Figure 12.2 below), and K-Means (cluster map in the left-hand panel of Figure 9.3, characteristics in Figure 9.4). The remaining six patterns represent different methods to obtain spatially constrained results: SCHC with Ward’s linkage (Figure 10.10 and Figure 10.11); SKATER (Figure 10.18 and Figure 10.19); REDCAP (Figure 10.24 and Figure 10.25); AZP with simulated annealing (Figure 11.12); AZP with SCHC as initial solution (Figure 11.14); and the max-p outcome that yielded \(p=12\) (Figure The WSS, BSS/TSS ratio, overall Entropy, Simpson’s index and the join count ratio for each clustering are listed in Figure 12.3. The first two measures are included as part of the Summary for each cluster result. The others are invoked by selecting Clusters > Validation from the menu, or as the last item in the cluster drop-down list from the toolbar (see Figure 12.1). The required input is a Cluster Indicator (i.e., the categorical variable saved when carrying out a clustering exercise) and a spatial weights file. The latter is required for the join count ratio, even for traditional (non-spatial) clustering methods. The Validation option brings up a results window, shown in Figure 12.4 for the hierarchical clustering outcome. At the top, this gives the number of clusters (12), the raw and standardized entropy measures as well as the raw and standardized Simpson’s index. Since the hierarchical cluster outcome is not spatially constrained, the fragmentation characteristics are also listed for each of the twelve clusters individually. The size is given (N), its share in the total number of observations (Fraction), the number of sub-clusters (#Sub), raw and standardized Entropy and Simpson index, as well as the minimum, maximum and mean size of the subclusters. This provides a detailed picture of the degree of fragmentation by cluster. For example, the table shows that cluster 11 consists of 4 compact observations (fragmentation results given as 0), whereas cluster 12 is made up of 4 singletons, the most fragmented result (yielding a standardized value of 1 for both entropy and Simpson’s index). The best result, in the sense of the least fragmentation or least diverse is obtained for cluster 1. Its 29 observations are divided among 9 subclusters (smallest standardized entropy, i.e., worst diversity, of 0.797 and largest standardized Simpson of In addition to the fragmentation measures, the join count ratio is computed for each individual cluster as well. This provides the number of neighbors and the count of joins that belong to the same cluster, yielding the join count ratio. At the bottom, the overall join count ratio is listed. Taking into account the neighbor structure yields cluster 11 (which has no subclusters, hence is compact) with the highest score of 0.476, closely followed by cluster 1 with 0.405. Since the result pertains to a method that is not spatially constrained, there are no measures for compactness and diameter. Comparing the overall results in Figure 12.3 confirms the superiority of the K-Means outcome in terms of fit, with the best WSS and BSS/TSS. In general, the unconstrained clustering methods do (much) better on these criteria than the spatially constrained results, with only AZP-initial coming somewhat close. This matches a similar dichotomy for the fragmentation indicators, with the spatially constrained outcomes much less equally balanced (smaller entropy, larger Simpson) than the classical results. Interestingly, the minimum population size imposed in max-p yields a more balanced outcome, with entropy and Simpson on a par with the classical results (but much worse overall fit). Finally, the overall join count ratio confirms the superiority of the spatially constrained results in this respect, with SKATER yielding the highest ratio. For a spatially constrained clustering method, the results window for Validation is slightly different, as illustrated in Figure 12.5 for AZP with a SCHC initial region. The fragmentation summary takes the same form as before, but there is no report on subcluster fragmentation. The join count ratio is again included, with the outcome for all individual clusters as well as the overall ratio. In the example, the highest ratio is obtained in cluster 1, with a value of 0.6759, compared to the overall ratio of 0.564. Two additional summary tables include the computation of the IPQ as a compactness index, as well as the diameter based on the spatial weights in each cluster. The highest values for IPQ are obtained for the singletons, which is not very informative. Of the four largest clusters, cluster 4 (with 14 observations) seems the most compact, with a ratio of 0.032. Overall, the smaller clusters clearly do better on this criterion. For example, cluster 5 (with 5 observations) achieves a ratio of 0.066, and cluster 7 (with 4 observations) obtains a ratio of 0.128. For comparison, the largest ratio obtained for a singleton is for cluster 12, with a ratio of 0.539. Finally, the diameter ranges from 0 (for singletons) to 23 for cluster 2 (with 51 observations). Standardized for the number of observations in each cluster, cluster 1 is most connected (22 steps with 82 observations, for a ratio of 0.286), closely followed by cluster 3, which is much smaller (16 observations with 5 steps, for a ratio of 0.3125). Clearly, the different dimensions of performance highlight distinct characteristics of each cluster. In each particular application, some dimensions may be more relevant than others, requiring a careful assessment. 55. In the literature, a normalized Herfindahl-Hirschman index is sometimes used as (HHI - 1/n) / (1 - 1/n), which runs from 0 to 1, with 0 as best.↩︎ 56. Technically speaking, determining the perimeter of an irregular polygon is not without problems, and depends on the precision of the digital representation of the boundary.↩︎ 57. The perimeter of a circle is \(p = 2 \pi r\), with \(r\) as the radius, so \(r = p / 2\pi\). The area of a circle is \(C = \pi r^2\), so expressed as a function of the perimeter, it is \(C = \pi (p / 2\pi)^2 = p^2 / 4\pi\).↩︎
{"url":"https://lanselin.github.io/introbook_vol2/internal-validity.html","timestamp":"2024-11-08T18:15:18Z","content_type":"text/html","content_length":"70500","record_id":"<urn:uuid:8e9608e9-13bb-4b0f-a13c-d738ac245824>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00503.warc.gz"}
• Fuzzy rules can be developed verbally to describe a controller. • Fuzzy sets can be developed statistically or by opinion. • Solving fuzzy logic involves finding fuzzy set values and then calculating a value for each rule. These values for each rule are combined with a weighted average. • Fuzzy logic controllers can have multiple inputs and outputs
{"url":"https://engineeronadisk.com/V2/book_modelling/engineeronadisk-266.html","timestamp":"2024-11-02T14:41:56Z","content_type":"text/html","content_length":"1785","record_id":"<urn:uuid:0a2c7260-46a8-401a-9fa8-97ccf3be8c54>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00375.warc.gz"}
The Mathematical Physics at Leeds Seminar Series is aimed at bringing together researchers at any level from across the University of Leeds — from both mathematics and physics departments alike — to give talks on themes in mathematical physics, broadly construed. On occasion, we also host seminars by researchers from outside the University of Leeds. Talks are held every other Tuesday from 10:00 to 11:00 in the MALL on Level 8 in the main building of the School of Mathematics. The series is being organised by Linden Disney-Hogg and Anup Anand Singh. Slides/notes from the talks will be made available on the MaPLe Teams chat. If you would like to give a seminar or want to be added to the chat and the mailing list, just drop an email to a.l.disney-hogg[at]leeds.ac.uk or mmaasi[at]leeds.ac.uk. October 22, 2024 | 10:00-11:00 Jiannis K. Pachos School of Physics and Astronomy University of Leeds Abstract Anyons are quasiparticles in two-dimensional systems that show statistical properties very distinct from those of bosons or fermions. While their isolated observation has not yet been achieved, it is possible to perform quantum simulations with physical systems that reveal central properties of anyons. In this talk I will present encoding and manipulation of anyons with quantum technology platforms that reveal their exotic statistical properties with the goal of eventually employing them for topological quantum computation. November 05, 2024 | 10:00-11:00 How to cheat at billiards: a new open classical and quantum billiard model Katherine Holmes Department of Mathematics Imperial College London Abstract The classical billiard model has been used to study dynamical systems and chaos theory. Its quantum counterpart is the quantum billiard model, a toy model of quantum optical systems in QED cavities and quantum dots. The billiard model in both the classical and quantum regimes has been well-documented in the literature, with a multitude of variations having been constructed. In recent years, we have seen the introduction of leaky billiards, billiards with loss mechanics such as internal holes and permeable boundaries. In this talk, I will introduce the billiard model and debut a new classical leaky billiard model with a permeable internal region. This model allows for the study of intricate structures on the Poincaré-Birkhoff phase space via intensity landscapes. The talk will conclude with a discussion of what may be the quantum and semiclassical counterpart to this classical leaky billiard. This talk is based on a paper soon to be released on arXiv: Intensity landscapes in elliptical and oval billiards with a circular absorbing region. The final section will implement semiclassical methods inspired by a recent PRL: Husimi dynamics generated by non-Hermitian Hamiltonians, 2023. November 19, 2024 | 10:00-11:00 The Yang-Baxter equation and quantum group symmetry Benjamin Morris School of Mathematics University of Leeds Abstract We begin with an introduction to the Yang-Baxter equation, as a master equation for integrability in 2D lattice models in statistical mechanics. We will see that through the classification of (classes of) solutions to this equation it is natural to consider solutions related to physical symmetries known as quantum groups. We will then discuss a scheme for obtaining factorised solutions to the Yang-Baxter equation in a class of infinite-dimensional representations of the quantum group U𝑞sl(n). December 03, 2024 | 10:00-11:00 Peter Gracar School of Mathematics University of Leeds Abstract TBA A list of all past MaPLe seminars can be found here.
{"url":"https://anupanand.space/maple/","timestamp":"2024-11-14T01:46:26Z","content_type":"text/html","content_length":"13469","record_id":"<urn:uuid:b188b07f-5937-4da3-ba79-5e34a390deea>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00247.warc.gz"}
On the Efficiency of Baroclinic Eddy Heat Transport across Narrow Fronts 1. Introduction An understanding of how eddies transport tracers is of intrinsic importance because eddies constitute a fundamental component of the general oceanic and atmospheric circulations. There has been much recent work related to parameterizing the transport of passive and active tracers by mesoscale eddies (e.g., Gent and McWilliams 1990 Larichev and Held 1995 Visbeck et al. 1996 Treguier et al. 1997 ), which has been at least partially motivated by the desire to represent small-scale processes in large-scale climate models without the need to explicitly resolve the variability on mesoscale time and space scales. It is well known that the eddy field in the ocean is spatially nonhomogeneous, with increased eddy variability generally found in the vicinity of strong lateral density gradients, that is, narrow fronts ( Treguier et al. 1997 ). This correspondence has led to the development of parameterizations of the eddy fluxes in terms of the local properties of the large-scale flow ( Green 1970 Stone 1972 Gent and McWilliams 1990 Treguier et al. 1997 Visbeck et al. 1997 ). These parameterizations vary considerably in their details (e.g., isopycnal vs diapycnal, see Visbeck et al. 1997 ), but they typically represent the eddy fluxes as a diffusion down the mean property gradient. Green (1970) (see also Stone 1972 ) used energetics arguments to suggest that the magnitude of the horizontal eddy diffusivity is proportional to a length scale squared and inversely proportional to the Eady timescale for exponential growth, is the Coriolis parameter, is the length scale of the large-scale baroclinic flow, and Ri = is the Richardson number of the large-scale flow with buoyancy frequency given by and vertical shear of the alongfront velocity given by . The nondimensional scale factor , which we call the efficiency constant to avoid possible confusion with the various definitions of similar proportionality constants that have previously appeared in the literature, is unknown and presumed by Green (1970) to be constant. This proportionality constant can be thought of as a correlation coefficient between the swirl velocity of the eddies and the density anomaly, typically much less than 1. If the lateral eddy heat flux is assumed proportional to the product of the diffusivity and the large-scale density gradient, then, using the thermal wind relation, the eddy heat flux can be written ′ and ′ are deviations from the large-scale time and/or spatial average mean quantities, Δ is the cross-front change in density over a horizontal length scale is a scale for the alongfront velocity, which may be interpreted as the maximum alongfront velocity for a front with density change Δ over a horizontal scale of the deformation radius (assuming a deep level of no motion). Note that is independent of the length scale and that the eddy heat flux is in the direction, perpendicular to the mean flow (the direction of the mean flow is assumed here to be uniform with depth). Several recent studies have made use of this formalism to parameterize the lateral heat transport by baroclinic eddies (e.g., Visbeck et al. 1996, 1997; Legg et al. 1996; Chapman and Gawarkiewicz 1997; Jones and Marshall 1997). Configurations in which buoyancy is extracted from the surface of an initially resting ocean develop strong baroclinic rim currents that are very nearly in geostrophic balance with the density gradient that develops around the edge of the cooling region. For forcing regions large compared to the deformation radius, the rim currents are baroclinically unstable and shed eddies, leading to a quasi-equilibration between the lateral (and vertical) heat transport carried by the eddies and the heat loss to the atmosphere. The properties of the cooling region (depth and density) have been predicted by applying the eddy heat flux parameterization proposed by Green (1970). The efficiency constant c[e], an unknown in the problem, has been estimated by empirical fit to the data. Visbeck et al. (1996) found that c[e] ≈ 0.025 (with variability between 0.014 and 0.056) over a wide range of forcing parameters in both numerical and laboratory experiments.^^2 Applications of similar ideas to shallow convection in coastal regions (Chapman and Gawarkiewicz 1997; Chapman 1998), unforced baroclinic frontal zones, and wind-forced periodic channels (Visbeck et al. 1997) all produce similar values of c[e]. These results suggest that the formulation proposed in (2) is valid (at least for the problems tested) and that the efficiency constant c[e] is independent of all external parameters. While this form is dimensionally consistent, there is no reason a priori that c[e] should be independent of external parameters (such as the Burger number or the Richardson number), nor has there been a physical justification for the nearly constant value of c[e]. The purpose of this study is to derive a quantitative estimate of the eddy heat flux in frontal zones and to provide a physical interpretation of what controls the magnitude of the heat flux and its dependencies on the basic frontal parameters. For simplicity, we restrict our attention to narrow fronts, that is, those whose cross-front length scale is of the order of the internal deformation radius. We show that an estimate of the heat flux derived explicitly from a model of eddy interactions and heat transport results in a form similar to that proposed by Green (1970). Perhaps the most important result of this study is that the simple model used to estimate the magnitude and dependencies of the eddy heat flux also provides a physically based means to calculate the efficiency constant c[e]. The theoretical estimate is tested by comparison with eddy-resolving models in two different flow configurations. 2. Isopycnal heat transport by baroclinic eddies Our goal is to estimate the isopycnal eddy heat flux across a baroclinic front. Diapycnal mixing could also be added, but we view this as a separate process from the isopycnal transport carried by coherent vortices, as discussed by Gent and McWilliams (1990) and Visbeck et al. (1997). The eddy heat flux u′ρ′ could, in principle, be calculated directly as the space and/or time average of the product of the perturbation velocity and the perturbation density. However, it is difficult to estimate u′ρ′ a priori because it involves an unknown correlation between the two quantities that is typically much less than one. Furthermore, a variety of complicated dynamical mechanisms may contribute to the time-dependent and spatially varying motions, including propagation of coherent vortex structures, nonlinear waves and wave breaking, and small-scale turbulence and mixing. Nevertheless, considerable progress can be made if we assert from the outset that the dominant mechanism of eddy heat transport across baroclinically unstable fronts is through the formation and propagation of individual eddies with length scale on the order of the internal deformation radius. This is consistent with the previous studies mentioned in the introduction, and it allows the relatively simple interpretation that the heat flux carried by each eddy is the product of the average density anomaly of the eddy and its propagation speed away from the front. We are interested only in the eddy heat flux across the front, so we assume that all eddies are formed at the front, move away and never return. This is approximately true in the model calculations, although some eddies do eventually return to the frontal zone after formation. We do not try to parameterize their ultimate decay and disappearance. In keeping with this perspective, we limit the analysis to narrow fronts, that is, those whose cross-front length scale is approximately the internal deformation radius (L ≈ L[d]). This view is also motivated by previous laboratory and numerical modeling studies, and the observation that the largest eddy activity in the ocean is found in the vicinity of narrow fronts. Furthermore, we expect that wider fronts may introduce additional complications because another length scale is introduced into the problem and the properties of the eddies (i.e., density, propagation speed) will depend on their origin and mixing along their path. Spatial and temporal averaging of the heat flux carried by eddies will necessarily be reduced compared with that carried by an individual eddy. Spatial averaging along the front over a wavelength immediately reduces the heat flux by one half. Temporal averaging is more difficult to quantify. Eddy shedding typically occurs quasiperiodically with some time required for the front to develop large amplitude meanders between eddy shedding events. The theory developed here is appropriate for the large amplitude meandering regime. The fraction by which the eddy heat flux will be reduced due to temporal averaging can be approximated by is a linear growth timescale and is a nonlinear timescale, which we interpret as the time it takes eddies to form and propagate away from the front. While it is difficult to define these timescales precisely, the numerical calculations in section 3 Fig. 4a , for example) can be used to obtain a rough estimate of , suggesting a modest reduction in the eddy flux of (35%). However, because these estimates are difficult to quantify a priori, and because we are primarily interested in gaining a simple phenomenological understanding of what controls the amplitude of the eddy heat flux, we do not attempt to formally incorporate this effect in our estimate of . Our estimate should thus be viewed as an upper bound in this regard. The large space- and timescale average eddy heat flux may now be written is the propagation speed of an eddy away from the front, and is the density anomaly of the eddy relative to the mean stratification of the motionless ocean on one side of the front. The primary advantage of this formulation is that it implicitly removes the need to know the correlation between the eddy swirl velocity and the density anomaly. For narrow frontal regions the density anomaly of the eddies will be either Δ or zero, depending on which side of the front they originate. If density is conserved following the Lagrangian path of an eddy, the density anomaly of that eddy does not change in time (as we have defined it here), although its density anomaly relative to the ambient fluid may change. , the efficiency constant may now be written in terms of the eddy propagation speed as The task is now to determine . In order for to be valid, must be independent of all frontal parameters;that is, We consider a large-scale flow that is uniform in the alongfront direction. Variations in both bottom topography and planetary vorticity are ignored. The most likely mechanism by which baroclinic eddies transport heat along isopycnals in such a flow is by eddy–eddy interactions, or self propagating eddy pairs. Hogg and Stommel (1985) first noted the rapid and efficient heat transport resulting from the pairing of upper-layer and lower-layer eddies of opposite sign, which they called hetons. Pedlosky (1985) found this structure to be the preferred orientation of the fastest growing mode based on a linear stability analysis of strong frontal regions. Legg et al. (1996) demonstrated that the heton model provides a useful approximation for the spread of heat away from a cooling region by baroclinic eddies. Therefore, we make use of the heton mechanism to estimate u[e]. For simplicity, we assume that the frontal region and surrounding ocean are represented by two layers of different density with a reduced gravity g′ = g(ρ[2] − ρ[1])/ρ[0], where ρ[0] is a reference density for seawater (Fig. 1a). For narrow fronts of width L[d] = g′H/f, the maximum alongfront velocity (assuming no motion in the deep layer) is V[m] = g′hh/H, where H is a scale height for the mean stratification and h is the vertical displacement of the interface across the front. It is assumed that the eddies are quasigeostrophic so that the perturbation of the interface in the eddies is small compared to the resting layer thickness. This assumption is clearly not satisfied in some of the previous numerical and laboratory experiments where the density surfaces outcrop, but we make this assumption here in order to obtain a quasi-analytic solution. The large amplitude regime is investigated numerically in section 3. We assume that the eddies represent isolated volumes of water that originated from the other side of the front and have been transported across the front by large-amplitude baroclinic wave events and resulting ageostrophic cross-front velocities accompanying baroclinic instability (Spall 1995), a reasonable assumption for fronts of width L[d]. In this case, the eddies have uniform potential vorticity dictated by the thickness of each layer on the original side of the front. Thus, the thickness anomaly of the eddies will be of different sign in the upper and lower layers, giving rise to one cyclonic vortex and one anticyclonic vortex [see Pedlosky (1985) and Spall (1995) for a discussion on the formation of hetons from baroclinic fronts]. We assume that the eddies are axisymmetric with radius r[0] and that their structure is unaffected by the presence of the eddy in the other layer. Stronger eddies tend to be more elliptical but have only slightly slower propagation speeds (Polvani 1991). The self-propagation speed of baroclinic eddy pairs driven by the interaction between upper- and lower-layer eddies in a quasigeostrophic ocean on an plane may be written as ( Pakyari and Nycander 1996 is the distance perpendicular to the front, is the quasigeostrophic streamfunction in layer ) = (∂ ) − (∂ ) is the Jacobian operator. The integrals are taken over the horizontal area assumed to encompass the entire eddy pair. As shown by Pakyari and Nycander (1996) , the propagation speed is a function of the horizontal distance between the eddy centers, that is, the offset Fig. 1b ). If there is no offset and the upper-layer eddy is of the same structure and opposite in sign to the lower-layer eddy, the Jacobian vanishes and there is no self-propagation. If the offset is small, Pakyari and Nycander state that the propagation speed increases linearly with and with the eddy swirl velocity. For eddies of finite radius, in which the velocity goes to zero outside of the radius , we anticipate that the eddy–eddy interaction will decrease at large because the area over which the eddies overlap will decrease; ) → 0 [the denominator in does not depend on the offset]. An approximate closed form solution for , and hence the resulting eddy heat flux and , can be obtained if we assume that the relative vorticity is uniform within each eddy and that the decrease in propagation speed as the offset increases arises solely as a result of the decreasing area of interaction between the eddies. An approximate solution may be derived from the small offset limit, for which, following Pakyari and Nycander (1996) can be written in terms of the lateral offset the swirl velocity in each layer , and the quasigeostrophic streamfunction as For quasigeostrophic, uniform relative vorticity eddies, the velocity profile is linear with radius, ) = , where is the maximum swirl velocity of the eddy. The quasigeostrophic streamfunction for each eddy is then quadratic with radius, is the thickness anomaly at the center of the eddy [assumed here to be the same as the interface displacement across the front, this approximation is valid for = ( ≪ 1, Spall (1995) ]. Substituting for the velocity and streamfunction, may be written as The eddies are presumed to have been generated through baroclinic instability of the frontal zone, so the eddy radius is taken to be a function of the deformation radius, = 2 Killworth 1983 Spall 1995 for similar discussions). This gives a Burger number for the eddies of = 0.125 [direct numerical integrations of show that is only weakly dependent on as shown below]. The maximum swirl velocity is obtained from evaluated at , resulting in = 2 = 2 . The propagation speed of the eddy, in the small offset limit, is estimated by integrating to give For large offsets, we assume that the propagation speed of the eddy pair decreases in proportion to the decreasing area of overlap. Using a truncated series approximation to estimate the area of overlap, the propagation speed for large offsets is then estimated to be and the eddy heat flux becomes While this solution is not a formal limit of the integral relation (6), it does indicate several important properties of the way in which the eddy pairs transport heat. First, (11) supports the assertion of Green (1970) that the eddy heat flux is linearly related to the product of the density change across the front and the alongfront velocity. The eddy flux is reduced for weak frontal zones because the propagation speed of the heton pair depends on the change in interface thickness over the eddy radius (h), while the size of the eddies is related to the mean stratification H through the deformation radius. For h ≪ H the eddies propagate more slowly than similar sized eddies with h = O(H) (see definition of V[m]). For the convection problems discussed by Visbeck et al. (1996), Jones and Marshall (1997), Chapman and Gawarkiewicz (1997), and Chapman (1998), the interface displacement h is the same as the resting depth of the interface H because the interface outcrops. Equation (11) also demonstrates that the eddy heat flux increases with increasing density change across the front by two mechanisms:the eddies have a larger density anomaly relative to the surrounding water and their propagation speed increases as Δρ^1/2 through V[m]. provides a quantitative estimate of the efficiency constant which indicates that the efficiency constant is independent of all external parameters and depends only on the relative offset of the upper- and lower-layer eddies. Thus, the efficiency of the eddy heat flux across a narrow frontal region is essentially determined by the ratio of the propagation speed of the eddies to the alongfront velocity. The value of c[e] from (12) is shown in Fig. 2 by the dashed line. For small vortex offsets (δ/r[0] ≪ 2), c[e] increases linearly with δ/r[0], as suggested by Pakyari and Nycander (1996). As δ/r[0] increases, the area of interaction decreases and the vortex propagation speed decreases, eventually vanishing as δ/r[0] → 2 (the finite radius eddies no longer interact when δ/r[0] > 2). The maximum value of c[e] can be calculated directly from (12) as c[e] = 0.064, which occurs at an offset of δ/r[0] = 0.8. A more accurate estimate of the eddy propagation speed, and hence the efficiency constant c[e], can be obtained directly from (6) using the streamfunction derived from the uniform potential vorticity solutions given in Spall (1995). The parameters required to fully define the eddy structure, and ψ[n] in (6), are the layer thicknesses on both sides of the front and the Burger number B = (L[d]/r [0])^2. For purposes of comparing the integral solution with the approximate solution for c[e] we initially take B = 0.125, h = 0.5H with H[1] = H[2] = H. The streamfunction is assumed to be constant (zero horizontal velocity) outside of the maximum radius of the eddies. The value of c[e] estimated directly from (5) and (6) is shown by the solid line in Fig. 2a as a function of the lateral offset between vortex centers δ/r[0]. The integral solution compares reasonably well with the approximate closed form solution (12), confirming that the primary cause of the decrease in propagation speed for increasing offsets is the decreasing area of overlap between the eddies. Point vortex models will thus overestimate the efficiency of the lateral heat transport by finite radius heton pairs. This may partially explain the larger value of c[e] found by Legg et al. (1996) for heat transport carried by point vortex hetons when compared to high-resolution numerical simulations. The maximum propagation speed occurs for offsets close to the radius of the eddies (δ/r[0] ≈ 1). In general, the vortex offset δ/r[0] remains an unknown parameter. The linear stability analysis of Pedlosky (1985) provides a physically based means of estimating the offset expected in the vicinity of the frontal region. His analysis shows that the maximum growth rate occurs for a heton pair with an offset of δ/r[0] ≈ 1, close to the offset that produced the maximum propagation speed for the isolated vortex pair found above (Fig. 2). This value may be interpreted as a phase shift between the upper layer and the lower layer of 90°, as expected for baroclinically unstable waves. We assume here that the offset in our frontal eddies is determined by the behavior of the linearly most unstable mode as derived by Pedlosky (1985) and take δ/r[0] = 1. We note that c[e] is not strongly dependent on our choice of δ/r[0] in that c[e] > 0.04 for 0.4 < δ/r[0] < 1.1. The approximate solution suggests that the value of c[e], as defined in (2), is independent of all other parameters. This need not be so, however, as additional nondimensional factors involving h/H or B might be involved. The value of c[e] calculated from (5) and (6) with δ/r[0] = 1 is shown in Fig. 2b as a function of the interface displacement across the front h/H and the Burger number of the eddies. The value of c[e] is nearly constant for wide ranges of both the eddy radius and the interface displacement, reinforcing the functional relationship suggested by the approximate solution (11) . We take as our estimate for the efficiency constant the average over all values of h/H at B = 0.125, resulting in c[e] = 0.045 (averaging over all values of B gives c[e] = 0.043). Our estimate of c[e] is essentially independent of all model parameters; the only provision is that rotation is important to the dynamics. This implies that the heat flux carried by the eddies does not depend on how the frontal region is maintained, provided that the front is baroclinically unstable. It should be kept in mind, however, that many simplifying assumptions have been made in obtaining this estimate, so we present numerical calculations in the next section to provide support for the theory. 3. Numerical model results High-resolution numerical models are now used to evaluate (2) for the lateral heat transport by baroclinic eddies. The purpose of these calculations is twofold: 1) to confirm that the dominant mode of lateral heat transport is characterized by baroclinic dipole pairs (hetons) and 2) to quantify the rate at which these eddy pairs transport heat perpendicular to the front. Although similar calculations have already been reported in the literature (as summarized in the introduction and also below), we briefly present two sets of calculations in which the efficiency of the eddy heat fluxes is calculated in a manner consistent with the definition (2) for both weak and strong fronts. This allows for a quantitative evaluation of the theoretical estimate of the eddy heat transport, and also demonstrates the applicability of this idealized model of eddy heat transport to a range of situations. a. Spindown of an unforced front The first application is that of the spindown of an initially narrow frontal region in the absense of any external forcing (as in Spall 1995). Small perturbations initialized along the frontal region grow in time, eventually reaching sufficient amplitude to form separated vortices that can transport heat across the front. Spall (1995) noted that the eddies can pair up with eddies in the opposite layer to form baroclinic dipole pairs that transport heat away from the frontal region. The structure of these eddy pairs is in general agreement with the heton model of Hogg and Stommel (1985) and the linear stability theory of Pedlosky (1985). Calculations similar to those reported here have also been analyzed by Visbeck et al. (1997); however we extend the analysis into the small h/H limit not investigated in the previous convection problems. Only a brief review of the model is given here; for a more complete description the reader is referred to Spall (1995) and the references therein. The model solves the primitive equations of motion in isopycnal coordinates. Calculations are carried out with both two and three layers in the vertical with a reduced gravity between each of the layers of 0.003 m s^−2. The domain is 500 km × 500 km square with horizontal grid spacing of 2 km (251 × 251 grid points). The Coriolis parameter f = 10^−4 s^−1 and is constant. The stratification is such that each of the layers is H[n] = H = 400 m thick on the anticyclonic side of the front. The interface between layers 1 and 2 is displaced by an amount h over a horizontal scale of L = L[d] = g′H/f = 10 km such that the thickness of layer 2 (1) is greater (less) on the cyclonic side of the front than it is on the anticyclonic side of the front. For the cases with three layers, the interface between layers 2 and 3 is initially flat. The reference level is chosen so that there is initially no flow in the deepest layer. Mass exchange is not allowed between layers. Subgrid-scale mixing is parameterized by a Laplacian thickness diffusion with amplitude 10 m^2 s^−1. The frontal region is initialized with small perturbations of wavelengths between 25 and 250 km. From these initial conditions the model is integrated for at least 500Ri/f, where Ri can be written as H/h. The horizontal velocity together with the potential vorticity for layers 1 and 2 on day 14 are shown in Figs. 3a and 3b. This calculation has only two layers and was initialized with h = 100 m, or 25% of the resting layer thickness. The structure of the growing meanders is essentially the same as predicted by the linear theory of Pedlosky (1985), and whose large amplitude development is described in detail by Spall (1995). On the anticyclonic (warm) side of the front, troughs of cyclonic (high) potential vorticity extend away from the initial frontal position. In the second layer, there are deep anticyclonic vortices of low potential vorticity adjacent to the upper-layer cyclones. The deep anticyclones are positioned just upstream of the cyclones with an offset δ/r[0] ≈ 1, consistent with the most unstable mode predicted by Pedlosky (1985). This hetonic structure is self-propagating so that these density anomalies are advected away from the frontal region. Similar structures are found for all values of h tested. These results confirm that for the flat-bottom, f-plane cases studied here the heat transport is carried primarily by baroclinic eddy pairs. The efficiency constant can be estimated directly from the model fields by making use of is the alongfront average of the eddy thickness flux perpendicular to the front for layer The value of c[e] fluctuates in time as individual cycles of meander growth and vortex formations take place. This is illustrated in Fig. 4a by a typical time series of the efficiency constant c[e] calculated at the middle of the channel using (13). As expected, the value of c[e] is small early in the calculation because the initial meanders take some time to form. The eddy flux peaks as the baroclinic waves reach large amplitude, producing a maximum value of approximately 0.05 at about day 37. This peak value is similar to, but slightly larger than, the theoretical value of 0.045 derived in the previous section. The amplitude of the eddy heat flux then fluctuates as cycles of eddy growth and propagation away from the front continue. Eventually, the calculated c[e] decreases over a longer-time scale because the potential energy of the front is reduced as a result of the eddy heat flux and a narrow front no longer exists. This late stage appears more turbulent than the early fields in Fig. 3, however the eddies still propagate through the formation of heton-like pairs. An objective measure of the amplitude of c[e] in the narrow-front regime is obtained by taking the maximum of a running average over a time period τ = 200Ri/f. For reference, the Eady linear growth timescale based on a channel width of 2L[d] is approximately 6Ri/f. This approach smooths the high-frequency variations in the eddy flux associated with individual instability cycles and thus gives a value representative of the average eddy heat flux. The running average is indicated by the dashed line in Fig. 4a and has a maximum value of c[e] = 0.031. While different averaging procedures produce slightly different estimates of c[e], all methods tested give similar results. While our primary objective is to estimate the eddy heat flux in (2), the intermediate relations relating the eddy flux to the eddy propagation speed, (4) and (5), can also be tested. The propagation speed of an eddy pair for a case with an outcropping front was calculated by Spall (1995) to be u[e] = 3.5 cm s^−1. With the model parameters h = 100 m and g′ = 0.003 m s^−2, V[m] = g′h = 55 cm s^−1, resulting in c[e] = 0.032, very close to the values estimated from a direct calculation of the eddy heat flux in (13). It is difficult to apply this estimate in a general sense to the fully evolving frontal region, particularily in the large amplitude turbulent regime, because of the difficulties with identifying and tracking individual eddies in the vicinity of the front. A series of spindown front calculations using both two and three layers have been carried out in which the initial thickness change across the front has been varied from h = 0.125H to h = H (outcropping front). The maximum value of c[e] taken from the running time mean over a time period τ = 200Ri/f is shown in Fig. 4b as a function of the thickness change across the front h/H. The efficiency constant c[e] for both two and three layers varies between 0.030 and 0.046 over all ranges of the frontal strength. The average value of c[e] taken from all of the two layer calculations is 0.035, within 35% of the theoretical estimate of 0.045 and similar to the value found by Visbeck et al. (1996, 1997) of 0.025. The average for the three layer calculations is 0.036. Additional calculations have been made in which the Coriolis parameter was reduced or increased by a factor of 2 and they resulted in similar values of c[e], ranging between 0.025 and 0.039. Introducing a cross-front gradient in f of magnitude β = 2 × 10^−13 cm^−1 s^−1 gave essentially identical results to the f-plane results shown here. b. Equilibration of local surface cooling A second set of high-resolution numerical calculations is now considered in which the strong frontal region results from spatial inhomogeneities in the surface buoyancy flux. These calculations complement the unforced spindown calculations from the previous section in several ways. First, the forced problems approach a statistical equilibrium in which a strong frontal region is maintained, whereas the unforced front loses considerable potential energy over the course of integration. Second, the front in the forced problems is generated by a very different mechanism than in the unforced problems. Third, the eddies that form in the forced problem have a strong barotropic component and do not look much like the two-layer hetons of the unforced problems. Finally, in the forced problems the two primary parameters, the alongfront velocity V[m] and the cross-front density difference Δρ, change in time, with their values at equilibrium being determined by the efficiency of the lateral eddy heat transport, while in the spindown configuration these parameters are set by the initial conditions. Therefore, the forced problem provides a test of the generality of the theoretical ideas presented in section 2. The forced problems follow the shallow convection calculations described by Chapman (1998) . A constant negative buoyancy flux (i.e., cooling) is applied within a circular region of radius at the surface of a resting, homogeneous ocean of depth The forcing abruptly vanishes outside the radius . This is not terribly realistic, but it is a case that has received considerable attention and it ensures that a narrow front forms, that is, with the horizontal scale of the internal deformation radius. Initially, the dense water produced beneath the buoyancy flux mixes rapidly to the bottom, so the density anomaly increases linearly with time. A front is established around the edge of the forcing region, which begins to slump radially outward at the bottom and inward at the surface, adjusting toward geostrophy. This generates a rim current flowing around the edge of the forcing region, cyclonic at the surface and anticyclonic at the bottom. The rim current is baroclinically unstable, so waves grow rapidly into eddies that break away from the rim current and exchange dense water from beneath the imposed buoyancy flux with ambient water. Eventually a quasi equilibrium is approached in which the loss of buoyancy at the surface is balanced, in a statistical sense, by the eddy exchange across the rim current. By assuming such an equilibrated state, Visbeck et al. (1997) derived expressions for the equilibrium density anomaly within the forcing region and the time required to reach equilibrium in the shallow convection case, based on externally imposed parameters, is defined as in Visbeck et al. (1997) did not actually test , but Chapman (1998) has shown that is reasonable, at least for a few examples. Therefore, we use the same basic model configuration as Chapman (1998) to estimate c[e] for several parameter combinations. The model is the semispectral primitive equation model described by Haidvogel et al. (1991). The model domain is a straight channel with periodic boundaries at the open ends. The boundaries are placed far enough from the forcing region that they have negligible influence during the model calculations. A rectangular grid is used in the horizontal with either 1-km or 1.5-km resolution in each direction, depending on the parameter choices. Nine Chebyshev polynomials are used to resolve the vertical structure. A convective adjustment scheme mixes the density field whenever it is statically unstable, and small lateral Laplacian subgrid-scale mixing is used to ensure numerical stability. The model is run until the density anomaly below the center of the forcing region approaches a quasi-steady value. Further model details may be found in Chapman The horizontal velocity together with the density anomaly at both the surface and the bottom are shown in Figs. 5a and 5b for a typical calculation as equilibration is approached. Several large eddies can be seen moving away from the forcing region (indicated by the solid circle). Their surface velocities are clearly cyclonic (Fig. 5a) with a weaker cyclonic signature at the bottom (Fig. 5b ). Careful examination of the bottom velocities shows that each cyclonic eddy has an anticyclonic partner that is horizontally offset and has little, if any, signature at the surface. Cross sections of density or velocity (not shown) reveal that the eddy pairs are tilted in the vertical and overlap, somewhat like those described for the two-layer system (section 2), despite their barotropic nature. Time sequences of the velocity and density fields show that the eddy pairs indeed propagate away from the forcing region, much like the eddy pairs in the unforced problem described above. We might then expect the overall behavior to be consistent with the theoretical development in section 2. The efficiency constant can be estimated from calculations like that shown in Fig. 5 , as the system approaches equilibration, by solving As stated above, the density anomaly beneath the buoyancy flux initially increases linearly with time. After the eddies have grown large enough to break away from the rim current (as in Fig. 5 ), the density anomaly oscillates about a quasi-equilibrium value, from which Δ is estimated by averaging the surface density anomaly within a small area in the center of the forcing region. Table 1 shows estimates of for five model calculations along with other model parameters. The estimates of fall within the range 0.02–0.03, close to the value obtained by Visbeck et al. (1996) for deep convection and not far from the values obtained for the unforced problems in section 3a . The uncertainty in represents the effects of individual eddy formation events. The values are somewhat smaller than the theoretical value of 0.045, but considering the numerous assumptions in section 2 that are not strictly applicable to these calculations, the agreement is quite good. It is interesting to point out that eddies form prior to equilibration, but these eddies are smaller than those formed during equilibration (because of the smaller internal deformation radius), and the heat flux they carry is not sufficient to balance the surface cooling. Therefore, the density anomaly continues to increase. Because c[e] is limited in magnitude by the eddy–eddy interactions, as discussed in section 2, the system can only approach equilibration when the density anomaly has increased sufficiently to form larger eddies, which propagate faster and carry more mass. 4. Summary We have derived a quantitative means to estimate the amplitude of lateral heat transport by baroclinic eddies generated in narrow frontal zones in terms of the properties of the mean flow. The theory predicts that the eddy heat flux is linearly related to the product of the alongfront velocity scale and the cross-front density gradient as is an efficiency constant, is the maximum alongfront velocity for a front of deformation radius width, Δ is the density change across the front, is the isopycnal displacement across the front, and is the resting depth of the isopycnal. This expression for the eddy heat flux is similar to the form proposed by Green (1970) to parameterize eddy fluxes in the atmosphere and applied more recently to the ocean by Visbeck et al. (1996 Jones and Marshall (1997) Chapman and Gawarkiewicz (1997) , and Chapman (1998) Our approach in deriving this relationship is quite different from the energetic arguments used by Green (1970), and the scaling approach of Jones and Marshall (1997). The eddy heat flux is interpreted as the product of the average density anomaly of an eddy and its propagation speed away from the front, as given by (4). The advantage of this approach is that it eliminates the need to estimate the correlation between the eddy swirl velocity and the perturbation density (typically much less than one) as required for the traditional definition of the eddy heat flux. The problem then becomes that of determining the propagation speed of an eddy in terms of the frontal parameters where the eddy was formed. By developing the theory based explicitly on the way eddies interact and transport heat, we are able to analytically calculate the efficiency constant c[e] that determines the amplitude of the cross-front heat flux, or the efficiency of the heat flux relative to the strength of the front. The efficiency constant c[e] may be thought of as the ratio of the eddy propagation speed to the alongfront velocity. If it is assumed that the heat transport is carried primarily by quasigeostrophic eddy pairs of uniform potential vorticity, the efficiency constant c[e] can be represented in simple integral form, which produces a theoretical estimate of c[e] = This estimate was tested using three-dimensional, eddy-resolving, primitive equation models for two flow configurations. One set of calculations was initialized with a narrow frontal region and allowed to evolve in the absense of external forcing. The second set of calculations applied a region of surface cooling (negative buoyancy flux) over an initially motionless, homogeneous ocean, which develops a narrow front along the edge of the cooling region. In both cases, the alongfront current is baroclinically unstable, leading to lateral heat transport by baroclinic eddies. The quantitative value of c[e] derived from these eddy-resolving numerical models varied between 0.02 and 0.04 over a wide range of model parameters. This compares reasonably well with the theoretical estimate of c[e] = 0.045. The reduced efficiency in the numerical models probably arises from the finite width of the baroclinic fronts and the time it takes for meanders to reach large amplitude; both effects are neglected in the theory. Despite the quantitative differences, these results clearly support the form for the eddy heat flux in (2) and also indicate that c[e] is basically independent of external parameters. The theory was derived assuming flat-bottom, f-plane, quasigeostrophic dynamics, although the theoretical estimate is found to be reasonably accurate well beyond the formal quasigeostrophic limits. Allowing for either a sloping bottom or a variable Coriolis parameter introduces another length scale into the problem, l = U/β, where β is the cross-front variation in the background vorticity. For the surface intensified, narrow frontal problems studied here, the influences of bottom topography or variations in the Coriolis parameter are negligible because the cross-front potential vorticity gradient is dominated by the change in stratification across the front. These effects may become more important for wide frontal regions, for weak stratification, or for estimating the eddy heat flux far from a narrow front. Even in these cases, however, baroclinic eddy pairs may remain a primary heat transport mechanism (with modifications due to β), although not necessarily the only one, and, if so, the general arguments presented here should remain relevant. It has been assumed that the frontal region remains narrow and baroclinically unstable, and that the surrounding waters are not strongly populated with eddies. We recognize that steep bottom topography, planetary vorticity gradients, and large-scale confluent flows can stabilize even strong baroclinic fronts and inhibit the formation of eddies, resulting in regimes for which the present theory is not appropriate. Further, the present theory may need modification when applied to wider frontal regions because the properties of the eddies (e.g., density anomaly) will depend on their origin and mixing along their path. However, it is encouraging that Chapman (1998) found eddy heat fluxes to be only weakly dependent on the width of the baroclinic zone, so the essential mechanisms of eddy heat transport may not be strongly dependent on this length scale. We have assumed that all eddies formed at the front propagate away from the front and never return. We have not attempted to predict their ultimate evolution and fate. That is, we do not attempt to predict the divergence of the eddy heat flux (or equivalently the eddy flux far away from the frontal region), a quantity that is perhaps of more practical interest for large-scale climate models. While the correspondence between large eddy energies and variations in the mean flow (Treguier et al. 1997) suggests that eddies decay rapidly away from their source region, the relationship between the divergence of the eddy flux and the mean flow is not clear. The present local parameterization of the eddy flux does not consider nonlocal sources, such as advection of eddy variance by the mean flow or coherent vortices generated at distant regions (such as meddies or Agulas rings). Support for this work was provided by the Office of Naval Research (MAS, Contract N00014-97-1-0088) and the National Science Foundation as part of the Arctic System Science (ARCSS) program, which is administered through the Office of Polar Programs, (DCC, Grant OPP-9422292). Comments from two anonymous reviewers helped to clarify the discussion. Joe Pedlosky is thanked for providing comments on an early version of the manuscript. • Chapman, D. C., 1998: Setting the scales of the ocean response to isolated convection. J. Phys. Oceanogr.,28, 606–620. • ——, and G. Gawarkiewicz, 1997: Shallow convection and buoyancy equilibration in an idealized coastal polynya. J. Phys. Oceanogr.,27, 555–566. • Gent, P. R., and J. C. McWilliams, 1990: Isopycnal mixing in ocean circulation models. J. Phys. Oceanogr.,20, 150–155. • Green, J. S., 1970: Transfer properties of the large-scale eddies and the general circulation of the atmosphere. Quart. J. Roy. Meteor. Soc.,96, 157–185. • Haidvogel, D. B., J. L. Wilkin, and R. Young, 1991: A semi-spectral primitive equation ocean circulation model using vertical sigma coordinates and orthogonal curvilinear horizontal coordinates. J. Comput. Phys.,94, 151–185. • Hogg, N. G., and H. M. Stommel, 1985: The heton, an elementary interaction between discrete baroclinic geostrophic vortices and its implications concerning eddy heat-flow. Proc. Roy. Soc. London, A397, 1–20. • Jones, H., and J. Marshall, 1997: Restratification after deep convection. J. Phys. Oceanogr.,27, 2276–2287. • Killworth, P. D., 1983: On the motion of isolated lenses on a beta plane. J. Phys. Oceanogr.,13, 368–376. • Larichev, V. D., and I. M. Held, 1995: Eddy amplitudes and fluxes in a homogeneous model of fully developed baroclinic instability. J. Phys. Oceanogr.,25, 2285–2297. • Legg, S., H. Jones, and M. Visbeck, 1996: A heton perspective of baroclinic eddy transfer in localized open ocean convection. J. Phys. Oceanogr.,26, 2251–2266. • Pakyari, A., and J. Nycander, 1996: Steady two-layer vortices on the beta-plane. Dyn. Atmos. Oceans,25, 67–86. • Pedlosky, J., 1985: The instability of continuous heton clouds. J. Atmos. Sci.,42, 1477–1486. • Polvani, L. M., 1991: Two-layer geostrophic vortex dynamics. Part 2. Alignment and two-layer V-states. J. Fluid Mech.,225, 241–270. • Stone, P. H., 1972: A simplified radiative–dynamical model for the static stability of rotating atmospheres. J. Atmos. Sci.,29, 405–418. • Spall, M. A., 1995: Frontogenesis, subduction, and cross-front exchange at upper ocean fronts. J. Geophys. Res.,100, 2543–2557. • Treguier, A. M., I. M. Held, and V. D. Larichev, 1997: On the parameterization of quasigeostrophic eddies in primitive equation ocean models. J. Phys. Oceanogr.,27, 567–580. • Visbeck, M., J. Marshall, and H. Jones, 1996: Dynamics of isolated convective regions in the ocean. J. Phys. Oceanogr.,26, 1721–1734. • ——, ——, T. Haine, and M. Spall, 1997: On the specification of eddy transfer coefficients in coarse-resolution ocean circulation models. J. Phys. Oceanogr.,27, 381–402. Fig. 1. Schematic diagrams of (a) vertical section through the baroclinc front and (b) plan view of a heton eddy pair. Citation: Journal of Physical Oceanography 28, 11; 10.1175/1520-0485(1998)028<2275:OTEOBE>2.0.CO;2 Fig. 2. (a) Efficiency constant c[e] as a function of vortex offset δ/r[0]. Solid line: formal estimate from (5) and (6) for uniform potential vorticity eddies. Dashed line: approximate closed form solution (12). (b) Efficiency constant c[e] as a function of Burger number B = (L[d]/r[0])^2 and interface displacement h/H from (5) and (6), assuming δ/r[0] = 1. Citation: Journal of Physical Oceanography 28, 11; 10.1175/1520-0485(1998)028<2275:OTEOBE>2.0.CO;2 Fig. 3. Horizontal velocity (plotted every other grid point) and potential vorticity on day 14 over a subregion of the model domain for the two layer spindown front problem withh/H = 0.25: (a) layer 1 and (b) layer 2. Citation: Journal of Physical Oceanography 28, 11; 10.1175/1520-0485(1998)028<2275:OTEOBE>2.0.CO;2 Fig. 4. (a) Time series of the efficiency constant c[e] for the two layer case with h/H = 0.25 calculated from the model fields using (13). The solid line is the daily value; the dashed line is a running average over a time period of 200Ri/f = 93 days. (b) Maximum value of the space and time averaged (over 200Ri/f) c[e] as a function of the interface displacement across the front h/H for both the two layer cases (squares) and the three layer cases (stars). Citation: Journal of Physical Oceanography 28, 11; 10.1175/1520-0485(1998)028<2275:OTEOBE>2.0.CO;2 Fig. 5. Velocity vectors (plotted every third grid point) and density anomaly at the (a) surface and (b) bottom for run 1 (Table 1) after 14 days of constant negative buoyancy flux applied at the surface within the circle. Citation: Journal of Physical Oceanography 28, 11; 10.1175/1520-0485(1998)028<2275:OTEOBE>2.0.CO;2 Table 1. Parameters and efficiency constant c[e] for the forced numerical calculations discussed in section 3b. For each calculation, the initial density is ρ[0] = 1000 kg m^−3, and the depth is H = 50 m. Units are m^2 s^−3 for B[0], km for r[0], s^−1 for f, m^2 s^−1 for the lateral viscosity ν[u]. Strictly speaking, (2) defines an eddy density flux, not an eddy heat flux. However, for simplicity, we assume the density is linearly proportional to the temperature and independent of salinity. Thus, (2) is equivalent to an eddy heat flux. Visbeck et al. (1996) used a velocity scale in their scaling arguments for deep convection that is three times the actual estimated vertical change in geostrophic velocity associated with the rim current [see Jones and Marshall’s (1997) Eq. (2.4)]. Therefore, c[e] used here in (2) is three times the α′ defined by Visbeck et al. (1996). Note that Chapman (1998) used the surface velocity in his derivation of the equilibrium quantities, rather than the total vertical change in geostrophic velocity over the depth H, as used here to define V[m] in (2). Consequently, Chapman’s eddy exchange coefficient α is twice our efficiency constant c[e] for the shallow convection case.
{"url":"https://journals.ametsoc.org/view/journals/phoc/28/11/1520-0485_1998_028_2275_oteobe_2.0.co_2.xml","timestamp":"2024-11-12T02:05:33Z","content_type":"text/html","content_length":"521137","record_id":"<urn:uuid:27e236aa-dc84-415b-a23b-d35d7127a9d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00107.warc.gz"}
How to Insert Infinity Symbol (∞) in Excel? How to Insert Infinity Symbol in Excel? The infinity symbol (∞) is widely used in various fields, such as mathematics, physics, and philosophy. It represents the concept of infinity, an unbounded quantity that is greater than any real number. In mathematics, it’s often used in calculus, number theory, and set theory to denote an endless process or quantity. In this article, I will show you some simple ways to insert the Infinity symbol in Excel Keyboard Shortcut to Insert Infinity Symbol in Excel Below are the keyboard shortcuts to insert Infinity symbol in Excel: On Windows in Excel: Shortcut: Alt + 236 1. Ensure Num Lock is on. 2. Hold down the Alt key. 3. Type 236 on the numeric keypad. 4. Release the Alt key to insert the infinity symbol. On Mac in Excel: There is no direct keyboard shortcut for the infinity symbol in Excel for Mac. It is usually inserted using the Symbol dialog box or by copying from a character map or some other place/document. Inserting Infinity using the Symbol Dialog Box To insert the Infinity symbol in Excel using the Symbol Dialog Box, follow these steps: 1. Click on the cell where you want to insert the symbol. 2. Go to the Insert tab on the ribbon. 3. Click on Symbol in the Symbols group. 4. In the Symbol dialog box, select Mathematical Operators from the Subset dropdown. 5. Locate and select the Infinity symbol (∞). 6. Click Insert, then close the Symbol dialog box. The above steps would insert the infinity symbol in the cell that we selected in step 1. Inserting Infinity Symbol using Formula You can use the below formula to insert the Infinity symbol in Excel using the UNICHAR formula: Copy and Paste the Infinity Symbol into Excel And finally, there is always an option to copy the infinity symbol from any website or document and paste it into a cell in Excel. You can copy the infinity symbol from below: Below is a table that summarizes all the methods to insert the Infinity symbol in Excel: Description Details Name of Symbol Infinity Symbol Symbol Text ∞ Shortcut for Windows Alt + 236 Shortcut for Mac No direct shortcut; use the Symbol dialog box Inserting using Symbol Dialog Box Find ∞ in Mathematical Operators in Symbol dialog box Inserting using Formula =UNICHAR(8734) Other Excel articles you may also like: Leave a Comment
{"url":"https://spreadsheetplanet.com/insert-symbols-excel/infinity/","timestamp":"2024-11-01T19:30:27Z","content_type":"text/html","content_length":"109523","record_id":"<urn:uuid:b5a94b57-6648-4a1c-b318-1e3676c8039b>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00041.warc.gz"}
Printable Architect Printable Architectural Scale - Printable scale ruler scale 1:50 imperial. So you just need to work out the ratio of the scale you. 8/1 x 12 = scale factor 96 to convert an. In gis, scale is usually expressed as a ratio. Web select the desired scale. Web printable scale ruler scale 1:20 imperial. Use these scale rulers to measure architectural plans, maps and other scaled objects. We suggest 11 0 lb. Sometimes your scale is just plain lost. Printable scale ruler scale 1:100 imperial. How to Use an Architect Scale Ruler 2020 MT Copeland Web sometimes a coworker stole your scale. Sometimes your scale is just plain lost. Use these scale rulers to measure architectural plans, maps and other scaled objects. 8/1 x 12 = scale factor 96 to convert an. Web emergency printable simple scale instructions 1. 1 16 Architectural Scale Ruler Printable Printable Ruler Actual Size Web 12 architectural scale ruler aluminum architect scale triangular scale ruler for architects, draftsman, students and. Web sometimes a coworker stole your scale. Web a free pdf for drafting scales & sheet sizes a free pdf for drafting scales & sheet sizes 1/16/2019 i've been told. Web engineering scale ruler cut ruler out *print at full scale (100%) actual size. Printable Architect Scale Web you can also download a simple printable version from archtoolbox. Web a free pdf for drafting scales & sheet sizes a free pdf for drafting scales & sheet sizes 1/16/2019 i've been told. Web sometimes a coworker stole your scale. How to print at an architectural scale?helpful? Web 12 architectural scale ruler aluminum architect scale triangular scale ruler for. Printable Architectural Scale Ruler 1 4 Printable Ruler Actual Size We suggest 11 0 lb. Web a free pdf for drafting scales & sheet sizes a free pdf for drafting scales & sheet sizes 1/16/2019 i've been told. Web emergency printable simple scale instructions 1. Sometimes your scale is just plain lost. Use these scale rulers to measure architectural plans, maps and other scaled objects. Printable Architectural Scale Printable World Holiday Sometimes your scale is just plain lost. So you just need to work out the ratio of the scale you. 8/1 x 12 = scale factor 96 to convert an. Web select the desired scale. Web printable scale ruler scale 1:20 imperial. architect scale 12 inch ruler printable ruler architectural scale Web you can also download a simple printable version from archtoolbox. Printable scale ruler scale 1:50 imperial. Web below is a table showing some of the most usable architectural model scales along with transcription to standard us inch/foot. Web scales:(3/8=1')(3/4=1')(1 1/2=1')(3=1') architectural scale ruler measures 1 if printed correctly *print at full. Web architect scales, such as 1/4 ̋=. Printable ruler, Ruler, Printables Printable scale ruler scale 1:50 imperial. Web printable scale ruler scale 1:200. In his introduction to reading blueprints. Web emergency printable simple scale instructions 1. Use these scale rulers to measure architectural plans, maps and other scaled objects. Architect s Scale Architect, Scale, How to plan Web sometimes a coworker stole your scale. Web select the desired scale. To understand architectural scale, and its application in technical theatre.in this video lesson we discuss the importance of. Web printable scale ruler scale 1:20 imperial. Web you can also download a simple printable version from archtoolbox. Printable Architectural Scale Printable Blank World No worries, we have you. In gis, scale is usually expressed as a ratio. Web 12 architectural scale ruler aluminum architect scale triangular scale ruler for architects, draftsman, students and. Printable scale ruler scale 1:50 imperial. This architect scale has eleven different measurement scales on one ruler. 12" Architectural Scale Ruler Aluminum Architect Scale Triangular Scale Web select the desired scale. Web 1:50.000 to 1:2.000 the scope of small scales of representation, that is, drawings that are reductions of reality, are. Web 12 architectural scale ruler aluminum architect scale triangular scale ruler for architects, draftsman, students and. Please support me on patreon:. So you just need to work out the ratio of the scale you. Use these scale rulers to measure architectural plans, maps and other scaled objects. In gis, scale is usually expressed as a ratio. This architect scale has eleven different measurement scales on one ruler. Printable scale ruler scale 1:100 imperial. Web learn how to use an architectural scale ruler so that you can read scaled drawings and blueprints. No worries, we have you. 8/1 x 12 = scale factor 96 to convert an. Web printable scale ruler scale 1:20 imperial. Web engineering scale ruler cut ruler out *print at full scale (100%) actual size your printer might cut off a bit of the ruler at. So you just need to work out the ratio of the scale you. Web a free pdf for drafting scales & sheet sizes a free pdf for drafting scales & sheet sizes 1/16/2019 i've been told. Web emergency printable simple scale instructions 1. In his introduction to reading blueprints. Before you download and use our rulers, make sure. Print to your preferred paper medium. Web sometimes a coworker stole your scale. Web scales:(3/8=1')(3/4=1')(1 1/2=1')(3=1') architectural scale ruler measures 1 if printed correctly *print at full. They are used to measure interior and exterior. Web architect scales, such as 1/4 ̋= 1 ́0 ̋(1/48 size) or 1/8 ̋= 1 ́0 ̋(1/96 size), are used for structures and buildings. Sometimes your scale is just plain lost. This Architect Scale Has Eleven Different Measurement Scales On One Ruler. Print to your preferred paper medium. Web printable scale ruler scale 1:20 imperial. Web learn how to use an architectural scale ruler so that you can read scaled drawings and blueprints. Web you can also download a simple printable version from archtoolbox. Web 12 Architectural Scale Ruler Aluminum Architect Scale Triangular Scale Ruler For Architects, Draftsman, Students And. Web architect scales, such as 1/4 ̋= 1 ́0 ̋(1/48 size) or 1/8 ̋= 1 ́0 ̋(1/96 size), are used for structures and buildings. No worries, we have you. Before you download and use our rulers, make sure. To understand architectural scale, and its application in technical theatre.in this video lesson we discuss the importance of. Web Printable Scale Ruler Scale 1:200. Web emergency printable simple scale instructions 1. In his introduction to reading blueprints. Web description of how (and a calculator) to convert drawings from one architectural or engineering scale to another. So you just need to work out the ratio of the scale you. Printable Scale Ruler Scale 1:50 Imperial. Please support me on patreon:. Web below is a table showing some of the most usable architectural model scales along with transcription to standard us inch/foot. Web engineering scale ruler cut ruler out *print at full scale (100%) actual size your printer might cut off a bit of the ruler at. 8/1 x 12 = scale factor 96 to convert an. Related Post:
{"url":"https://tineopprinnelse.tine.no/en/printable-architectural-scale.html","timestamp":"2024-11-13T06:31:39Z","content_type":"text/html","content_length":"29762","record_id":"<urn:uuid:e4e1621b-2f5a-48c4-bd29-abf86995ee2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00104.warc.gz"}
Unit Method Estimating - Construction Tuts The unit method estimating consists of choosing a standard unit of accommodation and multiplying an approximate cost per unit. Estimate the building cost base on the depends on the population unit. Unit Method Estimate = Standard units of accommodation x Cost/Unit The unit cost method of estimation is used for project design estimates and bid estimates. The cost estimate is obtained by multiplying the number of accommodation for a new building with the cost per unit of accommodation based on a suitable building. The current unit of accommodation can be obtained by calculating from the sketch design or by obtaining information from the client. For example: • Schools – Costs per pupil place • Hospitals – Costs per bed place • Roads – Per Kilometers • Car parks – Costs per car space NGO will build a house for 1500 widows who are affected by natural disaster. Estimate the total cost to build houses. From a suitable cost data. The cost is $7500 for a house. Total Cost = Standard units of accommodation x Cost/Unit =$11, 250, 000 The technique is based on the fact that there is usually some close relationship between the cost of a construction project and the number of functional units it accommodates. Functional units are those factors which express the intended use of the building better than any other. This method is extremely useful on occasions where the building’s client requires a preliminary estimate based on little more information than the basic units of accommodation. The units adopted to facilitate this analysis depend on the type of project under consideration. • Site condition • Specification changes • Market conditions • Regional changes • Inflation Using unit method estimating method can generate a rough estimate quickly, but the lack of accuracy will render it of little use in the cost planning procedure outlined earlier. However, this method is often used to determine the very first notion of a price in early discussions of a project and as a crude means of comparing the known costs of different buildings.
{"url":"https://www.constructiontuts.com/unit-method-estimating/","timestamp":"2024-11-11T00:06:14Z","content_type":"text/html","content_length":"79888","record_id":"<urn:uuid:b1f1d70f-3772-43c7-8b86-81bb03f3c471>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00839.warc.gz"}
Quantum Mechanics A complete set of lecture notes for a graduate quantum mechanics course. Topics covered include fundamentals of quantum mechanics, angular momentum, perturbation theory, identical particles, scattering, and relativistic electron theory.. The lecture notes are availible in a number of formats: A fully hyperlinked HTML document. A book based on the lecture notes and published by World Scientific. Click here to get list of other courses available on this site. Richard Fitzpatrick Last modified: Tue Jul 1 12:29:00 CDT 2014
{"url":"https://farside.ph.utexas.edu/teaching/qm/qm.html","timestamp":"2024-11-07T15:55:53Z","content_type":"text/html","content_length":"2042","record_id":"<urn:uuid:68b6c40d-ccb0-4cf1-a7de-3ee1be2bffb7>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00571.warc.gz"}
Collaborative Filtering with Temporal Dynamics Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics is essential for designing recommender systems or general customer preference models. However, this raises unique challenges. Within the ecosystem intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance decay approaches cannot work, as they lose too many signals when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long-term patterns. We show how to model the time changing behavior throughout the life span of the data. Such a model allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie-rating dataset underlying the Netflix Prize contest. Results are encouraging and better than those previously reported on this dataset. In particular, methods described in this paper play a significant role in the solution that won the Netflix contest. 1. Introduction Modeling time drifting data is a central problem in data mining. Often, data is changing over time, and models should be continuously updated to reflect its present nature. The analysis of such data needs to find the right balance between discounting temporary effects that have very low impact on future behavior, while capturing longer term trends that reflect the inherent nature of the data. This led to many works on the problem, which is also widely known as concept drift; see, e.g., Schlimmer and Granger, and Widmer and Kubat.^15, 20 Temporal changes in customer preferences bring unique modeling challenges. One kind of concept drift in this setup is the emergence of new products or services that change the focus of customers. Related to this are seasonal changes, or specific holidays, which lead to characteristic shopping patterns. All those changes influence the whole population, and are within the realm of traditional studies on concept drift. However, many of the changes in user behavior are driven by localized factors. For example, a change in the family structure can drastically change shopping patterns. Likewise, individuals gradually change their taste in movies and music. Such changes cannot be captured by methods that seek a global concept drift. Instead, for each customer we are looking at different types of concept drifts, each occurs at a distinct time frame and is driven toward a different direction. The need to model time changes at the level of each individual significantly reduces the amount of available data for detecting such changes. Thus we should resort to more accurate techniques than those that suffice for modeling global changes. For example, it would no longer be adequate to abandon or simply underweight far in time user transactions. The signal that can be extracted from those past actions might be invaluable for understanding the customer herself or be indirectly useful to modeling other customers. Yet, we need to distill long-term patterns while discounting transient noise. These considerations require a more sensitive methodology for addressing drifting customer preferences. It would not be adequate to concentrate on identifying and modeling just what is relevant to the present or the near future. Instead, we require an accurate modeling of each point in the past, which will allow us to distinguish between persistent signal that should be captured and noise that should be isolated from the longer term parts of the model. Modeling user preferences is relevant to multiple applications ranging from spam filtering to market-basket analysis. Our main focus in the paper is on modeling user preferences for building a recommender system, but we believe that general lessons that we learn would apply to other applications as well. Automated recommendations are a very active research field.^12 Such systems analyze patterns of user interest in items or products to provide personalized recommendations of items that will suit a user's taste. We expect user preferences to change over time. The change may stem from multiple factors; some of these factors are fundamental while others are more circumstantial. For example, in a movie recommender system, users may change their preferred genre or adopt a new viewpoint on an actor or director. In addition, they may alter the appearance of their feedback. For example, in a system where users provide star ratings to products, a user that used to indicate a neutral preference by a "3 stars" input may now indicate dissatisfaction by the same "3 stars" feedback. Similarly, it is known that user feedback is influenced by anchoring, where current ratings should be taken as relative to other ratings given at the same short period. Finally, in many instances, systems cannot separate different household members accessing the same account, even though each member has a different taste and deserves a separate model. This creates a de facto multifaceted meta-user associated with the account. A way to distinguish between different persons is by assuming that time-adjacent accesses are being done by the same member (sometimes on behalf of other members), which can be naturally captured by a temporal model that assumes a drifting nature of a All these patterns and the likes should have made temporal modeling a predominant factor in building recommender systems. Nonetheless, with very few exceptions (e.g., Ding and Li, and Sugiyama et al. ^4, 16), the recommenders' literature does not address temporal changes in user behavior. Perhaps this is because user behavior is composed of many different concept drifts, acting in different timeframes and directions, thus making common methodologies for dealing with concept drift and temporal data less successful. We show that capturing time drifting patterns in user behavior is essential for improving accuracy of recommenders. Our findings also give us hope that the insights from successful time modeling for recommenders will be useful in other data mining applications. Our test bed is a large movie-rating dataset released by Netflix as the basis of a well-publicized competition.^3 This dataset combines several merits for the task at hand. First, it is not a synthetic dataset, but contains user-movie ratings by real paying Netflix subscribers. In addition, its relatively large size above 100 million date-stamped ratings makes it a better proxy for real-life large-scale datasets, while putting a premium on computational efficiency. Finally, unlike some other dominant datasets, time effects are natural and are not introduced artificially. Two interesting (if not surprising) temporal effects that emerge within this dataset are shown in Figure 1. One effect is an abrupt shift of rating scale that happened in early 2004. At that time, the mean rating value jumped from around 3.4 stars to above 3.6 stars. Another significant effect is that ratings given to movies tend to increase with the movie age. That is, older movies receive higher ratings than newer ones. In Koren,^8 we shed some light on the origins of these effects. The major contribution of this work is presenting a methodology and specific techniques for modeling time drifting user preferences in the context of recommender systems. The proposed approaches are applied on the aforementioned extensively analyzed movie-ratings dataset, enabling us to firmly compare our methods with those reported recently. We show that by incorporating temporal information, we achieve best results reported so far, indicating the significance of uncovering temporal effects. The rest of the paper is organized as follows. In the next section we describe basic notions and notation. Then, in Section 3, our principles for addressing time changing user preferences are evolved. Those principles are then incorporated, in quite different ways, into two leading recommender techniques: factor modeling (Section 4) and item item neighborhood modeling (Section 5). 2. Preliminaries 2.1. Notation We are given ratings for m users (aka customers) and n items (aka products). We reserve special indexing letters to distinguish users from items: for users u, v, and for items i, j. A rating r[ui] indicates the preference by user u of item i, where high values mean stronger preference. For example, values can be integers ranging from 1 (star) indicating no interest to 5 (stars) indicating a strong interest. We distinguish predicted ratings from known ones, by using the notation [ui] for the predicted value of r[ui]. The scalar t[ui] denotes the time of rating r[ui]. One can use different time units, based on what is appropriate for the application at hand. For example, when time is measured in days, then t[ui] counts the number of days elapsed since some early time point. Usually the vast majority of ratings are unknown. For example, in the Netflix data 99% of the possible ratings are missing because a user typically rates only a small portion of the movies. The (u, i) pairs for which r[ui] is known are stored in the set K = {(u, i)|r[ui] is known}, which is known as the training set. Models for the rating data are learned by fitting the previously observed ratings. However, our goal is to generalize those in a way that allows us to predict future, unknown ratings. Thus, caution should be exercised to avoid overfitting the observed data. We achieve this by using a technique called regularization. Regularization restricts the complexity of the models, thereby preventing them from being too specialized to the observed data. We employ L2-regularization, which penalizes the magnitude of the learned parameters. Extent of regularization is controlled by constants which are denoted as: λ[1], λ[2], ... 2.2. The Netflix data We evaluated our algorithms on a movie-rating data-set of more than 100 million date-stamped ratings performed by about 480,000 anonymous Netflix customers on 17,770 movies between 31 December 1999 and 31 December 2005.^3 Ratings are integers ranging between 1 and 5. On average, a movie receives 5,600 ratings, while a user rates 208 movies, with substantial variation around each of these averages. To maintain compatibility with results published by others, we adopted some common standards. We evaluated our methods on two comparable sets designed by Netflix: a holdout set ("Probe set") and a test set ("Quiz set"), each of which contains over 1.4 million ratings. Reported results are on the test set, while experiments on the holdout set show the same findings. In our time-modeling context, it is important to note that the test instances of each user come later in time than his/her training instances. The quality of the results is measured by their root mean squared error (RMSE) The Netflix data is part of the Netflix Prize contest, with the target of improving the accuracy of Netflix movie recommendations by 10%. The benchmark is Netflix's proprietary system. Cinematch, which achieved an RMSE of 0.9514 on the test set. The grand prize was awarded to a team that managed to drive this RMSE to 0.8554 after almost 3 years of extensive efforts. Achievable RMSE values on the test set lie in a quite compressed range, as evident by the difficulty to win the grand prize. Nonetheless, there is evidence that small improvements in RMSE terms can have a significant impact on the quality of the top few presented recommendations.^7 The algorithms described in this work played a central role in reaching the grand prize. 2.3. Collaborative filtering Recommender systems are often based on collaborative filtering (CF), a term coined by the developers of the first recommender system Tapestry.^5 This technique relies only on past user behavior e.g., their previous transactions or product ratings without requiring the creation of explicit profiles. CF analyzes relationships between users and inter-dependencies among products, in order to identify new user item associations. A major appeal of CF is that it is domain-free and avoids the need for extensive data collection. In addition, relying directly on user behavior allows uncovering complex and unexpected patterns that would be difficult or impossible to profile using known data attributes. As a consequence, CF attracted much of attention in the past decade, resulting in significant progress and being adopted by some successful commercial systems, including Amazon,^10 TiVo,^1 and Netflix. The two primary areas of CF are the neighborhood methods and latent factor models. The neighborhood methods are centered on computing the relationships between items or, alternatively, between users. The item-oriented approach evaluates the preference of a user to an item based on ratings of "neighboring" items by the same user. A product's neighbors are other products that tend to be scored similarly when rated by the same user. For example, consider the movie "Saving Private Ryan." Its neighbors might include other war movies, Spielberg movies, and Tom Hanks movies, among others. To predict a particular user's rating for "Saving Private Ryan," we would look for the movie's nearest neighbors that were actually rated by that user. A dual to the item-oriented approach is user-oriented approach, which identifies like-minded users who can complement each other's missing ratings. Latent factor models comprise an alternative approach that tries to explain the ratings by characterizing both items and users on, say, 20 200 factors inferred from the pattern of ratings. For movies, factors discovered by the decomposition might measure obvious dimensions such as comedy vs. drama, amount of action, or orientation to children; less well-defined dimensions such as depth of character development or "quirkiness," or completely uninterpretable dimensions. For users, each factor measures how much the user likes movies that score high on the corresponding movie factor. One of the most successful realizations of latent factor models is based on matrix factorization; see, e.g., Koren et al.^9 3. Tracking Drifting Customer Preferences One of the frequently mentioned examples of concept drift is changing customer preferences over time, e.g., "customer preferences change as new products and services become available."^6 This aspect of drifting customer preferences highlights a common paradigm in the literature of having global drifting concepts influencing the data as a whole. However, in many applications, including our focus application of recommender systems, we also face a more complicated form of concept drift where interconnected preferences of many users are drifting in different ways at different time points. This requires the learning algorithm to keep track of multiple changing concepts. In addition the typically low amount of data instances associated with individual customers calls for more concise and efficient learning methods, which maximize the utilization of signal in the data. In a survey on the problem of concept drift, Tsymbal^19 argues that three approaches can be distinguished in the literature. The instance selection approach discards instances that are less relevant to the current state of the system. A common variant is time-window approaches were only recent instances are considered. A possible disadvantage of this simple model is that it is giving the same significance to all instances within the considered time-window, while completely discarding all other instances. Equal significance might be reasonable when the time shift is abrupt, but less so when time shift is gradual. Thus, a refinement is instance weighting were instances are weighted based on their estimated relevance. Frequently, a time decay function is used, underweighting instances as they occur deeper into the past. The third approach is based on ensemble learning, which maintains a family of predictors that together produce the final outcome. Those predictors are weighted by their perceived relevance to the present time point, e.g., predictors that were more successful on recent instances get higher weights. We performed extensive experiments with instance weighting schemes, trying different exponential time decay rates on both neighborhood and factor models. The consistent finding was that prediction quality improves as we moderate that time decay, reaching best quality when there is no decay at all. This finding is despite the fact that users do change their taste and rating scale over the years, as we show later. However, much of the old preferences still persist or, more importantly, help in establishing useful cross-user or cross-product patterns in the data. Thus, just underweighting past actions lose too many signals along with the lost noise, which is detrimental, given the scarcity of data per user. As for ensemble learning, having multiple models, each of which considers only a fraction of the total behavior may miss those global patterns that can be identified only when considering the full scope of user behavior. What makes them even less appealing in our case is the need to keep track of the independent drifting behaviors of many customers. This, in turn, would require building a separate ensemble for each user. Such a separation will significantly complicate our ability to integrate information across users along multiple time points, which is the cornerstone of collaborative filtering. For example, an interesting relation between products can be established by related actions of many users, each of them at a totally different point of time. Capturing such a collective signal requires building a single model encompassing all users and items together. All those considerations led us to the following guidelines we adopt for modeling drifting user preferences. • We seek models that explain user behavior along the full extent of the time period, not only the present behavior (while subject to performance limitations). Such modeling is key to being able to extract signal from each time point, while neglecting only the noise. • Multiple changing concepts should be captured. Some are user-dependent and some are item-dependent. Similarly, some are gradual while others are sudden. • While we need to model separate drifting "concepts" or preferences per user and/or item, it is essential to combine all those concepts within a single framework. This combination allows modeling interactions crossing users and items thereby identifying higher level patterns. • In general, we do not try to extrapolate future temporal dynamics, e.g., estimating future changes in a user's preferences. Extrapolation could be very helpful but is seemingly too difficult, especially given a limited amount of known data. Rather than that, our goal is to capture past temporal patterns in order to isolate persistent signal from transient noise. The result, indeed, helps in predicting future behavior. Now we turn to how these desirable principles are incorporated into two leading approaches to CF matrix factorization and neighborhood methods. 4. Time-Aware Factor Model 4.1. The anatomy of a factor model Matrix factorization is a well-recognized approach to CF.^9, 11, 17 This approach lends itself well to an adequate modeling of temporal effects. Before we deal with those temporal effects, we would like to establish the foundations of a static factor model. In its basic form, matrix factorization characterizes both items and users by vectors of factors inferred from patterns of item ratings. High correspondence between item and user factors leads to recommendation of an item to a user. More specifically, both users and items are mapped to a joint latent factor space of dimensionality f, such that ratings are modeled as inner products in that space. Accordingly, each user u is associated with a vector p[u] ^f and each item i is associated with a vector q[i] ^f. A rating is predicted by the rule The major challenge is computing the mapping of each item and user to factor vectors q[i], p[u] ^f. After this mapping is accomplished, we can easily compute the ratings a user will give to any item by using Equation 1. Such a model is closely related to singular value decomposition (SVD), which is a well-established technique for identifying latent semantic factors in the information retrieval. Applying SVD in the CF domain would require factoring the user item rating matrix. Such a factorization raises difficulties due to the high portion of missing values, due to the sparseness in the user item ratings matrix. Conventional SVD is undefined when knowledge about the matrix is incomplete. Moreover, carelessly addressing only the relatively few known entries is highly prone to overfitting. Earlier works^13 relied on imputation to fill in missing ratings and make the rating matrix dense. However, imputation can be very expensive as it significantly increases the amount of data. In addition, the data may be considerably distorted due to inaccurate imputation. Hence, more recent works (e.g., Koren, Paterek, and Takacs et al.^7, 11, 17) suggested modeling directly only the observed ratings, while avoiding overfitting through an adequate regularized model. In order to learn the factor vectors (p[u] and q[i]), we minimize the regularized squared error on the set of known ratings: Minimization is typically performed by stochastic gradient descent. Model (1) tries to capture the interactions between users and items that produce the different rating values. However, much of the observed variation in rating values is due to effects associated with either users or items, independently of their interaction, which are known as biases. A prime example is that typical CF data exhibits large systematic tendencies for some users to give higher ratings than others, and for some items to receive higher ratings than others. After all, some products are widely received as better (or worse) than others. Thus, it would be unwise to explain the full rating value by an interaction of the form q[i]^Tp[u]. Instead, we will try to identify the portion of these values that can be explained by individual user or item effects (biases). The separation of interaction and biases will allow us to subject only the true interaction portion of the data to factor modeling. We will encapsulate those effects, which do not involve user item interaction, within the baseline predictors. These baseline predictors tend to capture much of the observed signal, in particular much of the temporal dynamics within the data. Hence, it is vital to model them accurately, which enables better identification of the part of the signal that truly represents user item interaction and should be subject to factorization. A suitable way to construct a static baseline predictor is as follows. Denote by μ the overall average rating. A baseline predictor for an unknown rating r[ui] is denoted by b[ui] and accounts for the user and item main effects: The parameters b[u] and b[i] indicate the observed deviations of user u and item i, respectively, from the average. For example, suppose that we want a baseline estimate for the rating of the movie Titanic by user Joe. Now, say that the average rating over all movies, μ, is 3.7 stars. Furthermore, Titanic is better than an average movie, so it tends to be rated 0.5 stars above the average. On the other hand, Joe is a critical user, who tends to rate 0.3 stars lower than the average. Thus, the baseline estimate for Titanic's rating by Joe would be 3.9 stars by calculating 3.7 − 0.3 + 0.5. The baseline predictor should be integrated back into the factor model. To achieve this we extend rule (1) to be Here, the observed rating is separated to its four components: global average, item-bias, user-bias, and user item interaction. The separation allows each component to explain only the part of signal relevant to it. Learning is done analogously to before, by minimizing the squared error function Schemes along these lines were described in, e.g., Koren and Paterek.^7, 11 The decomposition of a rating into distinct portions is convenient here, as it allows us to treat different temporal aspects in separation. More specifically, we identify the following effects: (1) user-biases (b[u]) change over time; (2) item biases (b[i]) change over time; and (3) user preferences (p[u]) change over time. On the other hand, we would not expect a significant temporal variation of item characteristics (q[i]), as items, unlike humans, are static in their nature. We start with a detailed discussion of the temporal effects that are contained within the baseline predictors. 4.2. Time changing baseline predictors Much of the temporal variability is included within the baseline predictors, through two major temporal effects. The first addresses the fact that an item's popularity may change over time. For example, movies can go in and out of popularity as triggered by external events such as the appearance of an actor in a new movie. This is manifested in our models by treating the item bias b[i] as a function of time. The second major temporal effect allows users to change their baseline ratings over time. For example, a user who tended to rate an average movie "4 stars," may now rate such a movie "3 stars." This may reflect several factors including a natural drift in a user's rating scale, the fact that ratings are given in relevance to other ratings that were given recently and also the fact that the identity of the rater within a household can change over time. Hence, in our models we take the parameter b[u] as a function of time. This induces a template for a time sensitive baseline predictor for u's rating of i at day t[ui]: Here, b[u](·) and b[i](·) are real valued functions that change over time. The exact way to build these functions should reflect a reasonable way to parameterize the involving temporal changes. Our choice in the context of the movie-rating dataset demonstrates some typical considerations. A major distinction is between temporal effects that span extended periods of time and more transient effects. In the movie-rating case, we do not expect movie likeability to fluctuate on a daily basis, but rather to change over more extended periods. On the other hand, we observe that user effects can change on a daily basis, reflecting inconsistencies natural to customer behavior. This requires finer time resolution when modeling user-biases compared with a lower resolution that suffices for capturing item-related time effects. We start with our choice of time-changing item biases b[i](t). We found it adequate to split the item biases into time-based bins, using a constant item bias for each time period. The decision of how to split the timeline into bins should balance the desire to achieve finer resolution (hence, smaller bins) with the need for enough ratings per bin (hence, larger bins). For the movie-rating data, there is a wide variety of bin sizes that yield about the same accuracy. In our implementation, each bin corresponds to roughly 10 consecutive weeks of data, leading to 30 bins spanning all days in the dataset. A day t is associated with an integer Bin(t) (a number between 1 and 30 in our data), such that the movie bias is split into a stationary part and a time changing part: While binning the parameters works well on the items, it is more of a challenge on the users' side. On the one hand, we would like a finer resolution for users to detect very short-lived temporal effects. On the other hand, we do not expect enough ratings per user to produce reliable estimates for isolated bins. Different functional forms can be considered for parameterizing temporal user behavior, with varying complexity and accuracy. One simple modeling choice uses a linear function to capture a possible gradual drift of user-bias. For each user u, we denote the mean date of rating by t[u]. Now, if u rated a movie on day t, then the associated time deviation of this rating is defined as Here |t - t[u]| measures the number of days between dates t and t[u]. We set the value of β by cross-validation; in our implementation β = 0.4. We introduce a single new parameter for each user called α[u] so that we get our first definition of a time-dependent user-bias A more flexible spline-based rule is described in Koren.^8 A smooth function for modeling the user-bias meshes well with gradual concept drift. However, in many applications there are sudden drifts emerging as "spikes" associated with a single day or session. For example, in the movie-rating dataset we have found that multiple ratings, a user gives in a single day, tend to concentrate around a single value. Such an effect need not span more than a single day. The effect may reflect the mood of the user that day, the impact of ratings given in a single day on each other, or changes in the actual rater in multiperson accounts. To address such short-lived effects, we assign a single parameter per user and day, absorbing the day-specific variability. This parameter is denoted by b[ut]. Notice that in some applications the basic primitive time unit to work with can be shorter or longer than a day. In the Netflix movie-rating data, a user rates on 40 different days on average. Thus, working with b[ut] requires, on average, 40 parameters to describe each user-bias. It is expected that b[ut] is inadequate as a stand-alone for capturing the user-bias, since it misses all sorts of signals that span more than a single day. Thus, it serves as an additive component within the previously described schemes. The time-linear model (8) becomes A baseline predictor on its own cannot yield personalized recommendations, as it misses all interactions between users and items. In a sense, it is capturing the portion of the data that is less relevant for establishing recommendations. Nonetheless, to better assess the relative merits of the various choices of time-dependent user-bias, we compare their accuracy as stand-alone predictors. In order to learn the involved parameters we minimize the associated regularized squared error by using stochastic gradient descent. For example, in our actual implementation we adopt rule (9) for modeling the drifting user-bias, thus arriving at the baseline predictor To learn the involved parameters, b[u], α[u], b[ut], b[i], and b[i,Bin(t)], one should solve Here, the first term strives to construct parameters that fit the given ratings. The regularization term, λ[7] (b[u]^2 + ...), avoids overfitting by penalizing the magnitudes of the parameters, assuming a neutral 0 prior. Learning is done by a stochastic gradient descent algorithm running 20 30 iterations, with λ[7] = 0.01. Table 1 compares the ability of various suggested baseline predictors to explain signal in the data. As usual, the amount of captured signal is measured by the RMSE on the test set. As a reminder, test cases come later in time than the training cases for the same user, so predictions often involve extrapolation in terms of time. We code the predictors as follows: • static, no temporal effects: b[ui] = μ + b[u] + b[i], • mov, accounting only for movie-related temporal effects: b[ui] = μ + b[u] + b[i] + b[i,Bin(tui)], • linear, linear modeling of user-biases: b[ui] = μ + b[u] + α[u] · dev[u](t[ui]) + b[i] + b[i,Bin(tui)], and • linear^+, linear modeling of user-biases and single day effect: b[ui] = μ + b[u] + α[u] · dev[u](t[ui]) + b[u, tui] + b[i] + b[i, Bin(tui)]. The table shows that while temporal movie effects reside in the data (lowering RMSE from 0.9799 to 0.9771), the drift in user-biases is much more influential. In particular, sudden changes in user-biases, which are captured by the per-day parameters, are most significant. Beyond the temporal effects described so far, one can use the same methodology to capture more effects. A prime example is capturing periodic effects. For example, some products may be more popular in specific seasons or near certain holidays. Similarly, different types of television or radio shows are popular throughout different segments of the day (known as "dayparting"). Periodic effects can be found also on the user side. As an example, a user may have different attitudes or buying patterns during the weekend compared to the working week. A way to model such periodic effects is to dedicate a parameter for the combinations of time periods with items or users. This way, the item bias of (7) becomes For example, if we try to capture the change of item bias with the season of the year, then period(t) However, we have not found periodic effects with a significant predictive power within the movie-rating dataset, thus our reported results do not include those. Another temporal effect within the scope of basic predictors is related to the changing scale of user ratings. While b[i](t) is a user-independent measure for the merit of item i at time t, users tend to respond to such a measure differently. For example, different users employ different rating scales, and a single user can change his rating scale over time. Accordingly, the raw value of the movie bias is not completely user-independent. To address this, we add a time-dependent scaling feature to the baseline predictors, denoted by c[u](t). Thus, the baseline predictor (10) becomes All discussed ways to implement b[u](t) would be valid for implementing c[u](t) as well. We chose to dedicate a separate parameter per day, resulting in: c[u](t) = c[u] + c[ut]. As usual, c[u] is the stable part of c[u](t), whereas c[ut] represents day-specific variability. Adding the multiplicative factor c[u](t) to the baseline predictor lowers RMSE to 0.9555. Interestingly, this basic model, which captures just main effects disregarding user item interactions, can explain almost as much of the data variability as the commercial Netflix Cinematch recommender system, whose published RMSE on the same test set is 0.9514.^3 4.3. Time changing factor model In Section 4.2 we discussed the way time affects baseline predictors. However, as hinted earlier, temporal dynamics go beyond this, they also affect user preferences and thereby the interaction between users and items. Users change their preferences over time. For example, a fan of the "psychological thrillers" genre may become a fan of "crime dramas" a year later. Similarly, humans change their perception on certain actors and directors. This effect is modeled by taking the user factors (the vector p[u]) as a function of time. Once again, we need to model those changes at the very fine level of a daily basis, while facing the built-in scarcity of user ratings. In fact, these temporal effects are the hardest to capture, because preferences are not as pronounced as main effects (user-biases), but are split over many factors. We modeled each component of the user preferences p[u](t)^T = (p[u](t)[1], p[u](t)[2], ..., p[u](t)[f]) in the same way that we treated user-biases. Within the movie-rating dataset, we have found modeling after (9) effective, leading to Here p[uk] captures the stationary portion of the factor, α[uk] · dev[u](t) approximates a possible portion that changes linearly over time, and p[ukt] absorbs the very local, day-specific At this point, we can tie all pieces together and extend the SVD factor model (4) by incorporating the time changing parameters. The resulting model will be denoted as timeSVD, where the prediction rule is as follows: The exact definitions of the time drifting parameters b[i](t), b[u](t), and p[u](t) were given in Equations 7, 9, and 12. Learning is performed by minimizing the associated squared error function on the training set using a regularized stochastic gradient descent algorithm. The procedure is analogous to the one involving the original SVD algorithm. Time complexity per iteration is still linear with the input size, while wall clock running time is approximately doubled compared to SVD, due to the extra overhead required for updating the temporal parameters. Importantly, convergence rate was not affected by the temporal parameterization, and the process converges in around 30 iterations. 4.4. Comparison The factor model we are using in practice is slightly more involved than the one described so far. The model, which is known as SVD++,^7 offers an improved accuracy by also accounting for the more implicit information recorded by which items were rated (regardless of their rating value). While details of the SVD++ algorithm are beyond the scope of this article, they do not influence the introduction of temporal effects, and the model is extended to account for temporal effects following exactly the same procedure described in this section. The resulting model is known as timeSVD++, and is described in Koren.^8 In Table 2 we compare results of three matrix factorization algorithms. First is SVD, the plain matrix factorization algorithm. Second is the SVD++ method, which improves upon SVD by incorporating a kind of implicit feedback. Third is timeSVD++, which also accounts for temporal effects. The three methods are compared over a range of factorization dimensions (f). All benefit from a growing number of factor dimensions that enables them to better express complex movie user interactions. Addressing implicit feedback by the SVD++ model leads to accuracy gains within the movie-rating dataset. Yet, the improvement delivered by timeSVD++ over SVD++ is consistently more significant. We are not aware of any single algorithm in the literature that could deliver such accuracy. We attribute this to the importance of properly addressing temporal effects. Further evidence of the importance of capturing temporal dynamics is the fact that a timeSVD++ model of dimension 10 is already more accurate than an SVD model of dimension 200. Similarly, a timeSVD++ model of dimension 20 is enough to outperform an SVD++ model of dimension 200. 4.5. Predicting future days Our models include day-specific parameters. An aparaent question would be how these models can be used for predicting ratings in the future, on new dates for which we cannot train the day-specific parameters? The simple answer is that for those future (untrained) dates, the day-specific parameters should take their default value. In particular for Equation 11, c[u](t[ui]) is set to c[u], and b [u, tui] is set to zero. Yet, one wonders, if we cannot use the day-specific parameters for predicting the future, why are they good at all? After all, prediction is interesting only when it is about the future. To further sharpen the question, we should mention the fact that the Netflix test sets include many ratings on dates for which we have no other rating by the same user and hence day-specific parameters cannot be exploited. To answer this, notice that our temporal modeling makes no attempt to capture future changes. All it is trying to do is to capture transient temporal effects, which had a significant influence on past user feedback. When such effects are identified, they must be tuned down, so that we can model the more enduring signal. This allows our model to better capture the long-term characteristics of the data, while letting dedicated parameters absorb short-term fluctuations. For example, if a user gave many higher than usual ratings on a particular single day, our models discount those by accounting for a possible day-specific good mood, which does not reflect the longer term behavior of this user. This way, the day-specific parameters contribute to cleaning the data, which improves prediction of future dates. 5. Temporal Dynamics at Neighborhood Models The most common approach to CF is based on neighborhood models. While typically less accurate than their factorization counterparts, neighborhood methods enjoy popularity thanks to some of their merits, such as explaining the reasoning behind computed recommendations, and seamlessly accounting for new entered ratings. Recently, we suggested an item item model based on global optimization,^7 which will enable us here to capture time dynamics in a principled manner. The static model, without temporal dynamics, is centered on the following prediction rule: Here, the set R(u) contains the items rated by user u. The item item weights w[ij] and c[ij] represent the adjustments we need to make to the predicted rating of item i, given a known rating of item j. It was proven greatly beneficial to use two sets of item item weights: one (the w[ij]s) is related to the values of the ratings, and the other disregards the rating value, considering only which items were rated (the c[ij]s). These weights are automatically learned from the data together with the biases b[i] and b[u]. The constants b[uj] are precomputed according to Equation 3. Recall that R (u) is the set of items rated by user u. When adapting rule (14) to address temporal dynamics, two components should be considered separately. First component, μ + b[i] + b[u], corresponds to the the baseline predictor portion. Typically, this component explains most variability in the observed signal. Second component, |R(u)|^−1/2 Σ[j](r[uj]-b[uj])w[ij] + c[ij], captures the more informative signal, which deals with user item interaction. As for the baseline part, nothing changes from the factor model, and we replace it with μ + b[i](t[ui]) + b[u](t[ui]), according to Equations 7 and 9. However, capturing temporal dynamics within the interaction part requires a different strategy. Item item weights (w[ij] and c[ij]) reflect inherent item characteristics and are not expected to drift over time. The learning process should capture unbiased long-term values, without being too affected from drifting aspects. Indeed, the time changing nature of the data can mask much of the longer term item item relationships if not treated adequately. For instance, a user rating both items i and j high within a short time period is a good indicator for relating them, thereby pushing higher the value of w[ij]. On the other hand, if those two ratings are given 5 years apart, while the user's taste (if not her identity) could considerably change, this provides less evidence of any relation between the items. On top of this, we would argue that those considerations are pretty much user dependent; some users are more consistent than others and allow relating their longer term actions. Our goal here is to distill accurate values for the item item weights, despite the interfering temporal effects. First we need to parameterize the decaying relations between two items rated by user u . We adopt exponential decay formed by the function e^−βu·Δt, where β[u] > 0 controls the user-specific decay rate and should be learned from the data. We also experimented with other decay forms, like the computationally cheaper (1 + β[u]Δt)^−1, which resulted in about the same accuracy, with an improved running time. This leads to the prediction rule The involved parameters, b[i](t[ui]) = b[i] + b[i, Bin(tui)], b[u](t[ui]) = b[u] + α[u] · dev[u](t[ui]) + b[u, tui], β[u], w[ij] and c[ij], are learned by minimizing the associated regularized squared error Minimization is performed by stochastic gradient descent. As in the factor case, properly considering temporal dynamics improves the accuracy of the neighborhood model within the movie-ratings dataset. The RMSE decreases from 0.9002^7 to 0.8885. To our best knowledge, this is significantly better than previously known results by neighborhood methods. To put this in some perspective, this result is even better than those reported by using hybrid approaches such as applying a neighborhood approach on residuals of other algorithms.^2, 11, 18 A lesson is that addressing temporal dynamics in the data can have a more significant impact on accuracy than designing more complex learning algorithms. We would like to highlight an interesting point related to the basic methodology described in Section 3. Let u be a user whose preferences are quickly drifting (β[u] is large). Hence, old ratings by u should not be very influential on his status at the current time t. One could be tempted to decay the weight of u's older ratings, leading to "instance weighting" through a cost function like Such a function is focused at the current state of the user (at time t), while de-emphasizing past actions. We would argue against this choice, and opt for equally weighting the prediction error at all past ratings as in Equation 16, thereby modeling all past user behavior. Therefore, equal-weighting allows us to exploit the signal at each of the past ratings, a signal that is extracted as item item weights. Learning those weights would equally benefit from all ratings by a user. In other words, we can deduce that two items are related if users rated them similarly within a short time frame, even if this happened long ago. 6. Conclusion Tracking the temporal dynamics of customer preferences to products raises unique challenges. Each user and product potentially goes through a distinct series of changes in their characteristics. Moreover, we often need to model all those changes within a single model thereby interconnecting users (or, products) to each other to identify communal patterns of behavior. A mere decay of older instances or usage of multiple separate models lose too many signals, thus degrading prediction accuracy. The solution we adopted is to model the temporal dynamics along the whole time period, allowing us to intelligently separate transient factors from lasting ones. We applied this methodology to two leading recommender techniques. In a factorization model, we modeled the way user and product characteristics change over time, in order to distill longer term trends from noisy patterns. In an item item neighborhood model, we showed how the more fundamental relations among items can be revealed by learning how influence between two items rated by a user decays over time. In both factorization and neighborhood models, the inclusion of temporal dynamics proved very useful in improving quality of predictions, more than various algorithmic enhancements. This led to the best results published so far on a widely analyzed movie-rating dataset. 1. Ali, K., van Stam, W. TiVo: making show recommendations using a distributed collaborative filtering architecture. In Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2004), 394 401. 2. Bell, R., Koren, Y. Scalable collaborative filtering with jointly derived neighborhood interpolation weights. IEEE International Conference on Data Mining (ICDM'07) (2007), 43 52. 3. Bennet, J., Lanning, S. The Netflix Prize. KDD Cup and Workshop, 2007. www.netflixprize.com. 4. Ding, Y., Li, X. Time weight collaborative filtering. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management (CIKM'04) (2004), 485 492. 5. Goldberg, D., Nichols, D., Oki, B.M., Terry, D. Using collaborative filtering to weave an information tapestry. Commun. ACM 35 (1992), 61 70. 6. Kolter, J.Z., Maloof, M.A. Dynamic weighted majority: A new ensemble method for tracking concept drift. In Proceedings of the IEEE Conference on Data Mining (ICDM'03) (2003), 123 130. 7. Koren, Y. Factorization meets the neighborhood: A multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'08) (2008), 426 434. 8. Koren, Y. Collaborative filtering with temporal dynamics. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'09) (2009), 447 456. 9. Koren, Y., Bell, R., Volinsky, C. Matrix factorization techniques for recommender systems. IEEE Comput. 42 (2009), 30 37. 10. Linden, G., Smith, B., York, J. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Comput. 7 (2003), 76 80. 11. Paterek, A. Improving regularized singular value decomposition for collaborative filtering. In Proceedings of the KDD Cup and Workshop (2007). 12. Pu, P., Bridge, D.G., Mobasher, B., Ricci, F. (eds.). In Proceedings of the 2008 ACM Conference on Recommender Systems (2008). 13. Sarwar, B.M., Karypis, G., Konstan, J.A., Riedl, J. Application of dimensionality reduction in recommender system A case study. WEBKDD'2000. 14. Sarwar, B., Karypis, G., Konstan, J., Riedl, J. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on the World Wide Web (2001), 285 15. Schlimmer, J., Granger, R. Beyond incremental processing: Tracking concept drift. In Proceedings of the 5th National Conference on Artificial Intelligence (1986), 502 507. 16. Sugiyama, K., Hatano, K., Yoshikawa, M. Adaptive web search based on user profile constructed without any effort from users. In Proceedings of the 13th International Conference on World Wide Web (WWW'04) (2004), 675 684. 17. Takacs, G., Pilaszy, I., Nemeth, B., Tikk, D. Major components of the gravity recommendation aystem. SIGKDD Explor. 9 (2007), 80 84. 18. Toscher, A., Jahrer, M., Legenstein, R. Improved neighborhood-based algorithms for large-scale recommender systems. KDD'08 Workshop on Large Scale Recommenders Systems and the Netflix Prize 19. Tsymbal, A. The problem of concept drift: Definitions and related work. Technical Report TCD-CS-2004-15. Trinity College Dublin, 2004. 20. Widmer, G., Kubat, M. Learning in the presence of concept drift and hidden contexts. Mach. Learn. 23, 69 (1996), 101. A previous version of this paper appeared in the Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2009), 447 456. DOI: http://doi.acm.org/10.1145/1721654.1721677 Figure 1. Two temporal effects emerging within the Netflix movie-rating dataset. Top: the average movie-rating made a sudden jump in early 2004 (1,500 days since the first rating in the dataset). Bottom: ratings tend to increase with the movie age at the time of the rating. Here, movie age is measured by the time span since its first rating event within the dataset. In both charts, each point averages 100,000 rating instances. Table 1. Comparing baseline predictors capturing main movie and user effects. As temporal modeling becomes more accurate, prediction accuracy improves (lowering RMSE). Table 2. Comparison of three factor models: prediction accuracy is measured by RMSE (lower is better) for varying factor dimensionality (f). For all models accuracy improves with growing number of dimensions. Most significant accuracy gains are achieved by addressing the temporal dynamics in the data through the timeSVD++ model. ©2010 ACM 0001-0782/10/0400 $10.00 Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. The Digital Library is published by the Association for Computing Machinery. Copyright © 2010 ACM, Inc. No entries found
{"url":"https://acmwebvm01.acm.org/magazines/2010/4/81486-collaborative-filtering-with-temporal-dynamics/fulltext","timestamp":"2024-11-02T12:41:05Z","content_type":"text/html","content_length":"89289","record_id":"<urn:uuid:c7932b20-4414-4123-b172-7fcf12bede4b>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00354.warc.gz"}
Multivariate Time Series Forecasting of Daily Reservations in the Hotel Industry Using Recurrent Neural Networks Multivariate Time Series Forecasting of Daily Reservations in the Hotel Industry Using Recurrent Neural Networks submitted in partial fulfillment for the degree of master of science Sai Wing Mak master information studies data science faculty of science university of amsterdam Internal Supervisor External Supervisor Title, Name Ms. Inske Groenen Mr. Rik van Leeuwen Affiliation UvA, IvI Ireckonu Multivariate Time Series Forecasting of Daily Reservations in the Hotel Industry Using Recurrent Neural Networks Sai Wing Mak University of Amsterdam [email protected] Reservation forecasting is a multivariate time series problem in-volving multiple seasonal patterns and a combination of exogenous factors. While a variety of models was previously implemented for the task, the adoption of Recurrent Neural Networks, in particular Long- and Short-Term Memory (LSTM) models which have proven to be capable of capturing temporal dynamics, remains scarce in hospitality. This study proposes a seasonal LSTM + residual LSTM (sLSTM+rLSTM) architecture to evaluate the e�ectiveness of LSTM in reservation forecasting. The result shows that sLSTM with only seasonal features already outperforms the "Same date-Last year" (SdLy) model that is widely adopted in the industry. However, the sLSTM+rLSTM model, despite the out-performance of the SdLy model, fails to learn the residual function, probably due to its re-quirement of an enormous amount of training data. Room rate is incorporated into the prediction using the concept of choice sets to produce forecasts for several rate categories, yet it is shown that the predictions of daily reservations with room rates above a spe-ci�c amount are rendered inaccurate. The research concluded with the argument that a simpler model might be preferred over a com-plicated LSTM structure for hotels with relatively small training samples, given the computational complexity the latter introduces. Accurate forecasting of daily reservation numbers is an important part of revenue management in hotel industry. Hotel revenue man-agement is perceived as the sales decision in which the hotel sells the right number of rooms to the right target at the right time and price with the ultimate objective of maximizing its revenue [25]. In particular, forecasting techniques have been adopted over the years to provide an estimate, using historical reservation history and current reservation activities, of sales in the future to achieve the purpose [30]. As new business models keep emerging, scholars highlighted the need of developing new and better forecasting models over time, given the increasing complexity of modeling the real time dynamic of revenue management [6][31]. In recent years, Recurrent Neural Network (RNN), in particular Gated Recurrent Units (GRUs) and Long- and short-term memory (LSTM), have proven to be e�ective in capturing temporal dependency in sequential data [4][16][24]. These methods, however, are not popularly applied in hotels as the industry has been widely adopting traditional forecasting tech-niques, in particular the ’same date-Last year’ (SdLy) approach due to their simplistic nature [9][20]. The number of daily reservations is a time series data, which is de�ned as a series of observations taken at successive points of time. Its natural temporal ordering distinguish itself from cross-sectional data, where observations of a subset of variables are recorded at a speci�c point of time [3]. In the context of reservation forecasting, the number of reservations is characterized by numerous factors, such as seasonality, room rate and competitor price, etc. [13], which drives the prediction into a multivariate time series problem. Tra-ditional models on multivariate time series forecasting have long been established, such as vector autoregressive and transfer func-tion models to capture the interdependencies among time series [3]. Time series data usually exhibits seasonal patterns, which refers to the regular periodic �uctuations over a �xed and known interval, such as monthly and weekly [3]. For the hotel industry, seasonality poses a signi�cant in�uence to the number of reservations [25]. It has been shown that deseasonalizing a time series before producing forecasts would e�ectively reduce forecast error [12] [32]. In view of this, this research attempts to �rst extract the seasonal compo-nents to investigate if it provides an improvement in the forecast performance. Groenen [12] adopted the residual learning framework in time series forecasting, where the seasonal components are allowed to skip the network by using a separate GRU model, which in turn forces the network to learn the residual function. The result has suggested a signi�cant improvement in the model performance and motivates the application of residual learning in this research given the similar nature in terms of multivariate time series prediction. All in all, the goal of the research is to propose a forecasting model for an accurate prediction of the total number of reservations in day-to-day operations in the hotel industry using the multivariate time series data provided by the company. The research is split into the following sub-questions: (1) What is the performance of the current baseline model of reservation prediction in the hotel industry? (2) How to incorporate reservations with di�erent room rates into the prediction? (3) Does the selected forecasting model improve the perfor-mance of the current practice of sales forecasting in hotel industry? The contribution of the research lies in the application of RNNs in solving hotel reservation forecasting problems that have not been intensively studied before. Besides, as this research adopts the models proposed by Groenen [12], this research also serves as a reproducibility study on the e�ectiveness of combining RNNs for seasonality extraction and residual learning to generate forecasts. This research is structured in the following: �rst, previous re-search work in summarized (Section 2), followed by the rere-search methodology (Section 3). Then, the implementation and evaluation is conducted (Section 4). Lastly, conclusions are drawn based on the �ndings (Section 5). This section describes previous researches on the forecasts in the hotel industry (2.1), recurrent neural networks (2.2), seasonality extraction (2.3) and residual learning (2.4). 2.1 Forecasts in the hotel industry 2.1.1 Approaches of hotel forecasts. Extensive researches have been previously done in reservation fore-casting using a variety of models, such as Holt-Winters method [20][22], exponential smoothing [5][28], moving average [28], re-gression [5][28] and Monte Carlo simulation approach [30]. Em-pirical evidence has shown, however, not a single model would universally outperform the others on all occasions as di�erent mod-els suit di�erent tourism forecasting context [21]. On the other hand, the concept of combination forecasts was suggested [8], in which forecasts are produced by a range of models combined and it is proven to yield a better performance [5][23]. Despite of an abundance of researches, however, the hotel industry tends to use a simpler method called ’Same date-Last year’ (SdLy) model due to its simplistic nature that can be easily explained and the re-quirement of only one data point [9], which su�ciently eases the implementation of reservation forecasts. In recent years with the rise of recurrent neural networks (NNs), scholars have started to exploit these modern architecture, for in-stance, the adoption of NNs to model the trend component in the reservation numbers [29], yet researches remain relatively scarce. As such, this research tends to further explore the ability of NNs, in particular recurrent neural networks (2.2) in reservation forecast-ing. To be able to compare to the current practice of the industry, SdLy is served as a baseline model in this research. 2.1.2 Incorporation of room rates. Room rate is an in�uential factor to reservation predictions [13][25]. Customers’ utility is maximized if they could pay a lower price dur-ing the decision process. When the price increases, customers’ util-ity diminishes, assuming all other conditions remain the same [2]. To take rate into account, the concept of choice sets is introduced, which refers to the coherent sequence of rates that a customer is willing to pay when making a reservation [14]. It provided an insight on how customer choice behavior can be accounted for in the forecast. Based on the assumption of diminishing utility, it is proposed that the model would output forecasts for di�erent choice sets as a way to incorporate rate plans into the research framework. 2.2 Recurrent Neural Networks Recurrent neural networks, or RNNs, are a type of arti�cial neural networks specialized for handling sequential data. It is designed in a way that the state at some time t read information from the previous state t-1 and the external input xtto make predictions [10]. They have gained substantial attention and have proven e�ective in handling problems such as time series classi�cation, machine translation and speech recognition [10][11][16][24]. One pitfall of the traditional RNN, however, is its vulnerability to the vanishing gradient issue. During model training, a cost function is computed to measure, most commonly, the sum of squared di�erence between the actual and the predicted value. The back propagation algorithm allows the cost to �ow backward through time to compute the gradients, and eventually update the network weights to improve the model. When the number of steps continuously increases, the gradients could become vanishingly small, which prevents models from further updating the weights [10]. To tackle this, scholars have developed new models such as Gated Recurrent Units (GRUs) and Long- and Short-Term Memory (LSTM). These models apply a gating mechanism through the adop-tion of forget gate, input gate and output gate to learn both short and long-term dependencies. This special architecture make GRUs and LSTMs powerful sequential models and have proven e�ective in practical applications [10]. In this study, LSTM networks are incorporated into this research as they have been shown to more easily learn long-term dependencies than other simple recurrent architectures [11] 2.3 Seasonality Extraction Seasonality extraction has long been an ongoing �eld of research as it is shown that forecasting models perform better when the input data is deseasonalized [32]. The common approaches are to decompose the time series into trend, seasonal, cyclical and residual components by the means of moving average, smoothing or time series decomposition [25]. On the other hand, seasonal autoregressive integrated moving average (SARIMA) model is one of widely used models, as it speci�es, not only autoregression, di�erencing and moving averages for the time series, but also those of the seasonal component [1]. As one could expect the hotel industry might experience several seasonality patterns, mostly monthly, weekly and daily [25], the complexity of seasonality extraction increased drastically and some traditional seasonal models might not be su�ciently adequate. It therefore further consolidates the rationale behind adopting of RNNs in handling seasonality extraction. 2.4 Residual Learning The increase in the depth of neural networks makes them more di�cult to train and might even result in the degradation of training accuracy [15][18]. Therefore, residual learning was introduced as a means to tackle the issue and is achieved by connecting the output of previous layers to the output of new layers [15]. The underlying mapping H(x) after several stacked layers is �t to F(x) + x, where F(x) is the residual function and x the input to these stacked lay-ers. The identical mapping of x ! H(x) guarantees the model to have training error no greater than its shallower counterpart and at the same time forces the intermediate layers to learn the residual function F(x) [15]. It has proven to be e�ective in various image recognition, sequential classi�cation [15][26][27] and in particular time series problems [12]. Inspired by previous researches, this paper adopts residual learning to allow seasonal in�uences, namely monthly, weekly and daily to bypass the network for an improve-ment in model performance. As He [15] also suggested that, while F(x) can be in a �exible form, performance improvement is only observed when the it has more than one single layer. Therefore, the residual models in this research consist of stacked LSTM cells. 3 MODELS The section describes the baseline models (3.1) and the residual models (3.2). 3.1 Baseline Models 3.1.1 Same date-Last year (SdLy). The SdLy model uses the number of reservations with the same calendar period and day of week in the previous year as this year’s prediction. For instance, the forecast of reservation numbers for Thursday, 21stFebruary 2019 would be that of Thursday, 22nd Feb-ruary 2018. Formally, F[0],w,d =R[0] 1,w,d (1) where F ,w,d[denotes the reservation forecast for day-of-week d] (1 d 7) in week w (1 w 52) of year , and R 1,w,d[denotes] the observed reservation number for the same day-of-week in the same week of last year. Only the check-in date (t = 0) re�ecting the total reservation numbers is calculated as partial bookings (t = 1, 2, ...M) are not of hotels’ major interest. 3.1.2 Weighted Seasonal Components (WSC). The monthly, weekly and daily seasonality patterns are extracted from the number of daily reservations and are calculated as the weighted sum of the seasonal in�uences. This can be expressed as: W SCi =’ Wmcmic+Wwcwic+Wdcdic (2) where W SCiis a vector of the weighted seasonal prediction at time step i of c rate categories. Wmc denotes the weight of mic, which is the di�erence between the daily reservation number for the month of time step i and the mean daily number over all months of rate category c;Wwcdenotes the weight of wic, which represents the di�erence between the daily reservation number for the week of time step i and the mean daily number over all weeks of rate category c; and Wdc denotes the weight of dic, which is the di�er-ence between the daily reservation number for the day of week of time step i and the mean daily number over all weekdays of rate category c. The weights are found using Adam optimization. To predict n-step ahead the number of reservations, ˆY = WSC(xsc) (3) where ˆY is a matrix of n predictions for c rate categories, and xscis a vector containing the seasonal features of the n prediction steps of c rate categories. 3.1.3 Long and Short-Term Memory (LSTM). A LSTM unit comprises a cell, an input gate, an output gate and a forget gate. The model contains two stacked LSTM cells with 75 neurons each, where the parameters are found by grid search. A stochastic optimization i.e. using a batch size of one is implemented to speed up the learning process while achieving a low value of cost function [10]. At each time step t, the �rst LSTM cell inputs both seasonal and residual explanatory features xtand the hidden state from the previous time step h1 t 1to output the hidden state h1t. This is achieved using the gating mechanism of LSTM as depicted in Equations 4-9: f[(t)]i = ©≠ ´ ’ j U[i, j]f x[(t)]j +’ j W[i, j]f hj[(t 1)]+b[i]f™Æ ¨ (4) i (t)= ©≠ ´ ’ j U[i, j]x[(t)]j +’ j W[i, j] hj[(t 1)]+b[i]™Æ ¨ (5) ˜si (t)=tanh ©≠ ´ ’ j Ui, jx[(t)]j +’ j Wi, jhj[(t 1)]+bi™Æ ¨ (6) s[(t)]i =f[(t)]i si[(t 1)]+ i (t) ˜s(t)i (7) qi[(t)]= ©≠ ´ ’ j U[i, j]o x[(t)]j +’ j W[i, j]oh[(t 1)]j +bo[i] ™Æ ¨ (8) hi[(t)]=qi[(t)] tanh⇣si[(t)]⌘ (9) where fi (t)denotes the forget gate, (t)i the input gate, ˜si(t)the state unit, si (t)the updated state unit, qi(t)the output gate, and hi[(t)]the output for time step t and cell i. Matrices b, U and W are respectively biases, input weights and recurrent weights into the LSTM cell. and tanh denote respectively sigmoid and hyperbolic activation functions, and denotes element-wise multiplication. The second stacked LSTM cell at time step t takes as input the hidden state of the �rst cell h1 tand its hidden state from the previous time step h2 t 1and output the updated hidden state ht2using the same formulas as depicted in Equations 4-9. After processing 56 time steps, the last hidden state of the second LSTM cell is passed into a dense layer ofc⇥n units with no activation function for the n prediction time steps of the c rate categories. This �nal prediction is expressed as: ˆY = H(h2 t) (10) where h2 t denotes the last hidden state of the second LSTM cell and ˆY denotes a vector of c ⇥ n predictions. 3.1.4 seasonal LSTM (sLSTM). The architecture of sLSTM is identical to that of LSTM, in which the only di�erence is, instead of inputting both seasonal and residual features xt, sLSTM only takes as input the monthly, weekly and seasonal features as that of WSC as depicted in Equation 2. The model contains two stacked cells with 50 neurons each, where the parameters are found by grid search. The implementation remains the same, where at each time step t, the �rst LSTM cell inputs the seasonal explanatory features xs and the hidden state from the previous time step h1 t 1to output the hidden state h1t, and the second cell takes as input the hidden state of the �rst cell h1 t and its hidden state from the previous time step ht 12to output the updated hidden state h2 t using Equations 4-9. sLSTM is also able to process 56 time steps. After that, the last hidden state of the second LSTM cell is passed into a dense layer of n units with no activation function to generate the seasonal features of the n prediction time steps. This �nal prediction is expressed as: where S(xsc) denotes a vector of seasonal features of the n steps of prediction and xsc denotes a vector of monthly, weekly and daily seasonal di�erences of the n steps of prediction for the c rate categories. 3.2 Residual Models 3.2.1 Weighted Seasonal Components + residual Long- and Short-Term Memory (WSC+rLSTM). This model combines both the WSC and LSTM, whereby the ar-chitecture of the WSC is identical to the one in the baseline model. The rLSTM, instead of mapping the input xtto the output ˆY, now maps the input xr to F(xr), where xr denotes the input vector of representing the residual features at time t and F(xr) the di�erence between the WSC prediction and the actual reservation number for all prediction steps. It is formalized as: ˆY = WSC(xsc) + F(xr) (12) Instead of taking 56 steps before making a prediction, the rLSTM takes in 42 steps as it is observed from the explanatory data analysis that more 80% of the reservations were made six weeks before the check-in date. After processing 42 steps of information, the last hidden state of the second LSTM cell is passed to a dense layer of c⇥ n units with no activation and be concatenated with W SC as shown in Equation 12. 3.2.2 Seasonal Long- and Short-Term Memory + residual Long- and Short-Term Memory (sLSTM+rLSTM). This model is a combination of two LSTMs, whereby the seasonal patterns are �rst extracted using the �rst LSTM, hereinafter sLSTM that have the same structure as the one in the baseline model. This is followed by the learning of the di�erence by rLSTM with the information from the booking horizon, and it has an identical structure as the one in WSC+rLSTM. It is formalized as: ˆY = S(xsc) + F(xr) (13) This section describes the data used in this research (4.1), the adop-tion of booking matrix for residual learning (4.2), the incorporaadop-tion of choice sets to account for room rates (4.3), the model imple-mentation details (4.4), the accuracy measures (4.5), the exhibited seasonality patterns in the data (4.6) and the experimentation results of the proposed models (4.7). 4.1 Data The data set used in this research contains: (1) Reservation records from January 2013 to March 2019 with-out booking date; (2) Reservation records from July 2016 to March 2019 with: • the booking date; • the source of booking; and • the room rate at the time of making reservation. The hotel changed from one Property Management System (PMS) to another during April-June 2016, so 1) the booking date for each reservation; and 2) the rate plan for each reservation are only avail-able starting July 2016, and the construction of a booking matrix (4.2) is therefore only possible on the time series data between July 2016 and March 2019. Despite of that, the in�uence of multiple seasonality patterns is to be modeled using all available data since January 2013 to provide a better understanding of how reservations are a�ected by these seasonal factors. As guests might spend more than one night in the hotel, a tech-nique to handle the issue is by decomposing the n-night reservation for arrival day d into n single-night reservations with di�erent dates of stay (from d to d + n 1) and corresponding lead time (from t to t + n 1) [9]. After applying the technique, the number of reservations represents the number of rooms that are occupied for a particular night. From the data set, the series of daily reservation number is ex-tracted as the target variable. For seasonal models, three features re-garding seasonality are constructed for each target variable, namely the di�erence between the daily reservation number for the month and the mean daily number over all months (mi), the di�erence be-tween the daily reservation number for the week and the mean daily number over all weeks (wi), and the di�erence between the daily reservation number for the day of week and the mean daily number over all weekdays (di). Therefore, there are 3 seasonal explanatory variables for both WSC and sLSTM of each rate category. The residual features for each target variable are the count of daily reservations, the average rate, the number of bookings through the four available sources: online travel agency (OTA), direct reservations (DIR), web reservation (WEB) and Global Dis-tribution Systems (GDS) over the booking horizon. A dummy vari-able representing whether or not the check-in date is also included. These variables are extracted, as suggested by [13], and also because of the high correlation detected during explanatory data analysis. This results in 10 explanatory variables (3 seasonal, 7 residuals). As the model is going to give predictions on 3 rate categories (4.3), all the aforementioned features, except the average rate and the holiday dummy, are constructed three times to account for the ac-cording seasonality and residual patterns. At the end, there are 26 explanatory variables (9 seasonal, 17 residuals) for respective reser-vation forecasting of the three rate categories for WSC+rLSTM, sLSTM+rLSTM and LSTM. Five n-steps ahead predictions are evaluated: {3, 7, 10, 14, 21} to investigate the e�ectiveness of both long- and short-term predic-tions of the models. A sliding window approach is adopted, that being said, when new information becomes available at t + 1, pre-dictions are made using this new data point together with the input time steps. 4.2 Booking matrix The booking matrix, as shown in Table 1, is a means to illustrate how the �nal reservation number is accumulated by partial bookings over the booking horizon [17]. For a particular check-in date d, the number of guests who make a reservation t days before the check-in date is denoted as Rd t where t = 0,1,2...,M with M the length of the booking horizon. Each column includes all partial advanced bookings for check-in date d, and the column sum returns the �nal reservation number. As described in Section 3.2, after the seasonal models capture the monthly, weekly and daily in�uences, rLSTM is set up using Table 1: Booking matrix t denotes the total number of reservations at time t days before the check-in date d. The question marks represent the reservation numbers that are not known yet on date d. ... d-2 d-1 d d+1 d+2 ... d+M t ... Rd 2 0 Rd 10 Rd0 ? ? ? ? 0 ... Rd 2 1 Rd 11 Rd1 Rd+11 ? ? ? 1 ... Rd 2 2 Rd 12 Rd2 Rd+12 Rd+22 ? ? 2 ... Rd 2 3 Rd 13 Rd3 Rd+13 Rd+23 ... ? 3 . . . . ... Rd 2 M 1 Rd 1M 1 RdM 1 Rd+1M 1 Rd+2M 1 ... ? M-1 ... Rd 2 M Rd 1M RdM Rd+1M Rd+2M ... Rd+MM M Equations 5-9 with each Rd t is the input to the network for t = 0,1,2...,M to further learn the residual function F(x). Figure 1 shows the accumulated partial bookings over a 90-day booking horizon. It is generally observed that these bookings in-crease exponentially when the check-in date approaches. Figure 1: Booking horizon 4.3 Choice Sets Room rate is incorporated using the concept of choice sets (2.1.2). Reservations are divided into three bins, representing the number of guests who paid more than e 0 (i.e. all guests), e 100, e 150. By doing so, the company could gain insight into the pricing strategy by approximating the number of guests to expect in each rate bin. The model is con�gured accordingly to output �ve n-step ahead forecasts for each of the three bin categories. 4.4 Implementation Details The train set covers daily reservation numbers from 1 July 2016 to 11 September 2018, which consists of 741 observations. During model training, the train set is randomly shu�ed and further divided into train and validation sets, where 666 observations are used to train the model and 75 observations for validation purpose. All models are evaluated on the test set that covers the period from 12 September 2018 to 31 March 2019, which results in 201 observations in total. Grid search was implemented to search for the optimal hyperpa-rameters of the models. Values being searched were, for the number of layers of sLSTM and LSTM, {1, 2}; for the number of neurons of sLSTM, LSTM, WSC+rLSTM and sLSTM+rLSTM, {50, 75, 100, 150}; and for the dropout rate in each LSTM cell of LSTM, sLSTM, WSC+rLSTM and sLSTM+rLSTM, {0, 0.25, 0.5, 0.75}. As for the rL-STM component in WSC+rLrL-STM and sLrL-STM+rLrL-STM, 2 layers are constructed as with reference to [15]. To train WSC+rLSTM, the previously trained sLSTM was �rst loaded and all layers are frozen for optimizing the rLSTM cells. After that, the sLSTM layers were unfrozen to optimize the entire model. The same applied to sLSTM+rLSTM, whereby instead of WSC, the sLSTM was �rst frozen. All LSTM models were constructed using Keras [7]. During model training, Adam optimization with a step decay was applied, in which the learning rate was initially set at 0.01, and was dropped by half every 10 epochs. Early stopping with a patience of 50 epochs was applied as an implicit regularization. After the training was stopped, the most optimal parameters were returned and the models were considered trained. 4.5 Evaluation The following metrics are used, namely Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and Weighted Mean Absolute Percentage Error (WMAPE) to evaluate the model performance. In essence, these measures quantify the di�erence between the actual and the prediction reservation numbers. The smaller the value, the higher the accuracy. RMSE = sÕn i=1( i ˆi)2 n (14) MAE = Õn i=1| i ˆi| n (15) W MAPE = Õn i=1| i ˆi| Õn i=1 i ⇤ 100% (16) where i=actual daily reservations ˆi=predicted daily reservations n =total number of observations 4.6 Seasonality of the data Figures 2, 3 and 4 visualize the monthly, daily and weekly seasonal patterns respectively. It can be observed in Figure 2 that the winter time i.e. from December to February is the o� season of the hotel, with daily reservation numbers reaching only two-third of the maximum capacity of 225 rooms. The number of daily reservations remains relatively steady in other months, except a slight decrease in July. The day-of-week seasonality in Figure 3 reveals the hotel took in the least number of guests on Sunday while Saturday appears to be the most popular day for staying. As for the weekly seasonality, it can be seen in Figure 4 that the number of reservations plummeted in the 51stweek, very likely due to the Christmas holiday. Also, the bookings are comparatively lower in the �rst few weeks than that of the other weeks, which explains the low reservation numbers in December, January and February as shown in the Figure 2. The relatively lower number of reservations in July can also be attributed to the slump in the 31st[week.] Figure 2: Monthly average of reservation numbers in the anonymized hotel in Amsterdam from January 2013 to March 2019 Figure 3: Day of week average of reservation numbers in the anonymized hotel in Amsterdam from January 2013 to March 2019 4.7 Results This section evaluates the baseline and residual models as described in Section 3 and presents the result of these experiments. 4.7.1 Experiment 1: SdLy model. The �rst experiment is to construct the SdLy model that is currently being used by the hotel for the prediction of all reservations. It is shown in Table 2 that the model achieves very similar performance for all the time steps given its naive nature. The performance is fairly acceptable, given the usage of one data point to make future prediction, which further solidify the reason why the hotel chooses to adopt this naive model. 4.7.2 Experiment 2: Evaluating models with only seasonal features. Table 2 shows that both prediction models with only the seasonality have already outperformed the SdLy model. This suggested that the reservation numbers of the hotel are signi�cantly in�uenced by seasonal factors, so only capturing the monthly, weekly and daily di�erences could already provide a more accurate forecast. The reason that the sLSTM is achieving a lower accuracy score for all the time steps than WSC might be due to the reason that only a relatively small amount of data was used to train the model. If more training data could be obtained, the performance would further be optimized. For reservation prediction with room rate above e 100 and e 150, both WSC and sLSTM perform poorly. Both models fail to capture the number of guests who will pay more than a certain price. This might be explained by the coconut uncertainty, a term coined by Makridakis et al. [19], which refers to the events that can hardly be envisioned. While there is a clear seasonal patterns for the daily reservations thus resulting in a fairly accurate predictions, the two time series representing reservation numbers with room rate above e 100 and e 150 suggested otherwise. For reservations priced above e 150, for instance, the large proportion of days in which the reservation numbers are zero inevitably creates noises and hinders the ability of the model to learn the non-zero irregular pattern. In addition to the fact that model is trained with a relatively small amount of data, the forecasts fail to generalize to the two rate categories. 4.7.3 Experiment 3: Evaluating LSTM model with both seasonal and residual features. As can be observed from Tables 2, 3 and 4, the performance of using both seasonal and residual features in the LSTM model does not provide any signi�cant improvement. The calculated metrics for all the three rate categories are similar to those of sLSTM, in which the di�erences are very likely due to random �uctuations. This is surprising, considering the fact that the room rate and the source of booking are correlated to the reservation numbers. A possible explanation, apart from the relatively small training samples, is that the variables are in the form of sparse vectors, so the LSTM decided to abandon these features and eventually return a similar mapping as that of sLSTM. It is not uncommon for some days over the booking horizon to have zero reservations. The room rate and the source of booking thus also end up in zeros. As the seasonal features already provide some predictability to the reservation numbers, these sparse variables might be dropped for the sake of simpler and more e�ective learning. 4.7.4 Experiment 4: Evaluating the performance using WSC for sea-sonality extraction and rLSTM for residual learning. The results in Tables 2, 3 and 4 suggest that combining WSC with rLSTM does not improve the model performance, not align with the �ndings in [12]. The computed metrics are very similar to those in WSC, revealing the model solely returns the identical mapping x ! H(x) and fail to learn the residual function F(x). A similar explanation to the previous experiment (4.7.3) might be drawn, in which the sparsity of features in addition to the relatively small sample size hinders the learning of F(x). 4.7.5 Experiment 5: Evaluating the performance of using sLSTM for seasonality extraction and rLSTM for residual learning. A similar result to the previous experiment (4.7.4) is observed, in which sLSTM+rLSTM does not improve the model performance as the evaluation metrics only slightly �uctuate around those of sLSTM, so only the identical mapping is returned with no minimal learning of F(x). Given the similar behavior of the model as that of LSTM and WSC+rLSTM, the same rationale behind this observation Figure 4: Weekly average of reservation numbers in the anonymized hotel in Amsterdam from January 2013 to March 2019 Table 2: t-step ahead prediction of reservation numbers for all reservations. t Measures SdLy Seasonality Models LSTM Residual Models WSC sLSTM WSC+rLSTM sLSTM+rLSTM RMSE 28.01 22.66 22.82 25.26 22.98 23.45 t+3 MAE 19.87 16.44 17.50 19.01 16.70 17.84 WMAPE 11.07% 9.33% 10.06% 10.92% 9.52% 10.31% RMSE 28.02 22.63 23.63 21.48 22.71 23.96 t+7 MAE 19.91 16.70 17.33 16.21 16.58 18.76 WMAPE 11.09% 9.48% 9.95% 9.31% 9.44% 10.77% RMSE 28.01 22.73 23.02 23.81 22.85 23.83 t+10 MAE 19.88 16.73 17.43 18.01 16.61 18.35 WMAPE 11.04% 9.53% 9.99% 10.32% 9.48% 10.52% RMSE 28.06 22.81 23.94 21.40 22.87 23.16 t+14 MAE 19.93 16.78 18.08 16.47 16.62 17.80 WMAPE 11.07% 9.55% 10.36% 9.44% 9.48% 10.20% RMSE 27.88 22.53 22.44 22.60 22.90 23.33 t+21 MAE 19.75 16.45 17.05 17.21 16.71 18.50 WMAPE 10.92% 9.33% 9.77% 9.86% 9.53 10.60% is drawn which is attributed to the sparsity of features on top of the relatively small sample size. In this study, the e�ectiveness of LSTM on reservation forecast-ing in the hotel industry is evaluated. It is shown, based on the three evaluation metrics, that sLSTM outperforms the SdLy baseline model that is currently being adopted in the hotel industry, though it achieved a slightly lower accuracy than a simpler WSC model. One conclusion that can be reasonably drawn is that seasonality plays an important part in hotels, align with previous research [25], and thus the adoption of seasonal models, either a simple or a complicated one, already contributes to the forecast improvement with respect to current practice. Combining sLSTM or WSC with rLSTM fails to improve the model performance as expected. As aforementioned, two possible rationales behind this are 1) the comparatively small size of training samples during model implementation, thus preventing the full functionality of LSTM to be exploited, and 2) the sparsity of features of room rates and the source of booking over the booking horizon, which makes the LSTM to make the decision of dropping these operations. This might be handled by dimensionality reduction techniques in future researches. The concept of choice sets is adopted to provide forecasts of reservation numbers in di�erent rate category in conjunction with the proposed models and the result shows that, except the fairly accurate forecast of daily reservation numbers of all reservations, the predictions for the two speci�c rate bins i.e. room rates above e100 and e 150 are not promising given the high forecast errors. The rationale behind this observation potentially attributes to the coconut uncertainty, where the irregular non-zero patterns of these Table 3: t-step ahead prediction of reservation numbers for reservations with a room rate above e 100. t Measures SdLy Seasonality Models LSTM Residual Models WSC sLSTM WSC+rLSTM sLSTM+rLSTM RMSE - 39.45 42.47 42.22 39.35 41.12 t+3 MAE - 29.27 33.40 32.05 28.80 31.61 WMAPE - 21.13% 24.63% 23.64% 20.90% 23.32% RMSE - 39.80 42.87 40.45 38.40 43.53 t+7 MAE - 29.59 32.94 30.31 28.54 32.99 WMAPE - 21.37% 24.32% 22.39% 20.86% 24.37% RMSE - 38.84 45.97 45.38 38.81 46.00 t+10 MAE - 29.03 37.11 34.40 28.81 36.61 WMAPE - 21.16% 27.43% 25.42% 21.07% 27.06% RMSE - 39.37 42.41 41.21 39.53 42.03 t+14 MAE - 29.26 31.81 30.90 29.14 31.48 WMAPE - 21.41% 23.55% 22.87% 21.38% 23.31% RMSE - 39.34 43.44 41.02 39.37 43.26 t+21 MAE - 29.25 33.81 30.04 28.88 33.65 WMAPE - 21.23% 25.12% 22.32% 21.13% 25.00% Table 4: t-step ahead prediction of reservation numbers for reservations with a room rate above e 150. t Measures SdLy Seasonality Models LSTM Residual Models WSC sLSTM WSC+rLSTM sLSTM+rLSTM RMSE - 46.29 56.48 52.80 46.86 54.29 t+3 MAE - 31.39 37.64 35.98 32.54 37.10 WMAPE - 67.59% 78.93% 75.44% 67.83% 77.79% RMSE - 51.53 57.97 53.98 45.35 59.7 t+7 MAE - 33.32 38.87 37.58 30.26 42.23 WMAPE - 68.81% 81.36% 78.68% 67.02% 82.87% RMSE - 47.72 59.45 53.60 45.39 58.59 t+10 MAE - 31.34 39.61 37.49 30.41 40.57 WMAPE - 68.95% 80.81% 76.49% 67.72% 82.77% RMSE - 45.39 55.95 54.03 45.73 54.39 t+14 MAE - 30.32 38.34 38.06 30.84 38.21 WMAPE - 68.61% 79.76% 79.18% 67.68% 79.49% RMSE - 45.65 58.54 54.05 45.98 57.08 t+21 MAE - 30.63 41.22 38.97 31.34 42.28 WMAPE - 68.14% 85.51% 80.84% 68.14% 87.69% time series makes it di�cult, if not impossible, for the LSTM model to capture. One limitation of this research, that has already been outlined, is the comparatively small number of training samples for func-tionality of LSTM to be fully exploited. If one is to replicate this study, more samples are to be expected so LSTM could learn from data involving a longer period and potentially anticipate an im-proved performance. Another limitation is attributed to the current setting of LSTM, where the model connects to a dense layer to provide 5 prediction steps for 3 rate categories. As the model fails to generalize the result to the second and third rate categories, the inclusion of these two series is believed to deteriorate the overall model performance. In this study, the construction of separate mod-els was not made possible due to a limited computational budget, but one might expect a better result if each time step is optimized To conclude, the unpromising result of the research might po-tentially explain why RNNs have not been widely adopted in the hotel industry attributed to its data-hungry nature, in contrast to the SdLy model where only one data point is needed and its easy interpretation [9]. Despite its outperformance of SdLy model, in the cases where hotels do not have a multitude of training samples and the reservations exhibit a variety of prominent seasonality patterns, a simpler method such as WSC might even be preferred, given the computational complexity of the more complicated LSTM architecture with fairly similar results. I would like to thank Ireckonu for providing all the necessary data I needed and created a fun work environment to work in. I am grateful to Rik van Leeuwen for delivering his professional knowl-edge of the hotel industry and clari�ed any doubts I had at his best e�orts. Also, many thanks to my friends, in particular my house-mate Marius Zeevaert for all the positive encouragement. Last but de�nitely not the least, I would like to express my sincere gratitude to Inske Groenen for her unparalleled support during my thesis. She had been most helpful and patient the entire time and provided constructive feedback all along. This thesis would not have been done without her. [1] George Athanasopoulos and Rob J. Hyndman. Forecasting: Principles and Practice. OTexts: Melbourne, Australia, 2013. [2] Moshe Ben-Akiva and Steven R. Lerman. Discrete Choice Analysis: Theory and Application to Travel Demand. Cambridge, Mass. : MIT Press, 1985. [3] B.L. Bowerman, R.T. O’Connell, and A.B. Koehler. Forecasting, Time Series, and Regression: An Applied Approach. Duxbury advanced series in statistics and decision sciences. Thomson Brooks/Cole, [4] Rohitash Chandra and Mengjie Zhang. Cooperative coevolution of elman re-current neural networks for chaotic time series prediction. Neurocomputing, 86:116–123, 2012. [5] Christopher Chen and Soulaymane Kachani. Forecasting and optimisation for hotel revenue management. Journal of Revenue and Pricing Management, 6(3):163– 174, Sep 2007. [6] Wen-Chyuan Chiang, Jason C.H. Chen, and Xiaojing Xu. An overview of research on revenue management: current issues and future research. Int. J. Revenue Management, 1(1):97–128, 2007. [7] François Chollet et al. Keras. https://keras.io, 2015. [8] Robert T. Clemen. Combining forecasts: a review and annotated bibliography. International Journal of Forecasting, (5):559–584, 1989. [9] Anna Maria Fiori and Ilaria Foroni. Reservation forecasting models for hospi-tality smes with a view to enhance their economic sustainability. Sustainability, 11(5):1274, 2019. [10] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Mas-sachusetts Institute of Technology, Cambridge, MA., 2016. [11] Alex Graves, Abdel rahman Mohamed, and Geo�rey Hinton. Speech recognition with deep recurrent neural networks. arXiv e-prints, 2013. [12] Inske Groenen. Representing seasonal patterns in gated recurrent neural networks for multivariate time series forecasting. Master Thesis, University of Amsterdam, 2018. [13] Peng Guo, Baichun Xiao, , and Jun Li. Unconstraining methods in revenue management systems: Research overview and prospects. Advances in Operations Research, 2012. [14] Alwin Haensel and Ger Koole. Estimating unconstrained demand rate functions using customer choice sets. Journal of Revenue and Pricing Management, 10(5):438– 454, 2011. [15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. [16] Michael Hüsken and Peter Stagge. Recurrent neural networks for time series classi�cation. Neurocomputing, 50(C):223–235, 2003. [17] Anthony Owen Lee. Airline reservations forecasting: probabilistic and statistical models of the booking process. Cambridge, Mass.: Flight Transportation Laboratory, Dept. of Aeronautics and Astronautics, Massachusetts Institute of Technology, 1990. [18] Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational e�-ciency of training neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 855–863. Curran Associates, Inc., 2014. [19] Spyros Makridakis, Robin M. Hogarth, and Anil Gaba. Forecasting and uncer-tainty in the economic and business world. International Journal of Forecasting, 25(4):794 – 812, 2009. Special section: Decision making and planning under low levels of predictability. [20] Luis Nobre Pereira. An introduction to helpful forecasting methods for hotel revenue management. International Journal of Hospitality Management, 58:13–23, [21] Bo Peng, Haiyan Song, and Geo�rey I. Crouch. A meta-analysis of international tourism demand forecasting and implications for practice. Tourism Management, 45:181 – 193, 2014. [22] Mihir Rajopadhye, Mounir Ben Ghalia, Paul P. Wang, Timothy Baker, and Craig V. Eister. Forecasting uncertain hotel room demand. Information Sciences, 132(1):1 – 11, 2001. [23] Shujie Shen, Gang Li, and Haiyan Song. Combination forecasts of international tourism demand. 2011. [24] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 3104–3112. Neural Information Processing Systems Foundation, Inc., 2014. [25] Kalyan Talluri and Garrett van Ryzin. The Theory and Practice of Revenue Man-agement. Springer Science+Business Media, Inc., 233 Spring Street, New York, NY 10013, USA., 2004. [26] Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, and Xiaoou Tang. Residual attention network for image clas-si�cation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. [27] Yiren Wang and Fei Tian. Recurrent residual learning for sequence classi�cation. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 938–943, 2016. [28] Larry R. Weatherford and Sheryl E. Kimes. A comparison of forecasting methods for hotel revenue management. Cornell University, School of Hotel Administration, 19(3):401–415, 2003. [29] Athanasius Zakhary, Neamat El Gayar, and Sanaa El-Ola. H. Ahmed. Exploit-ing neural networks to enhance trend forecastExploit-ing for hotels reservations. In Friedhelm Schwenker and Neamat El Gayar, editors, Arti�cial Neural Networks in Pattern Recognition, pages 241–251, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg. [30] Athanasius Zakharya, Amir F. Atiyab, Hisham El-Shishinyc, and Neamat E.Gayar. Forecasting hotel arrivals and occupancy using monte carlo simulation. Journal of Revenue and Pricing Management, 10 (4):344–366, 2011. [31] Hossam Zaki. Forecasting for airline revenue management. The Journal of Business Forecasting Methods Systems, 19(1):2–6, 2000. [32] G.Peter Zhang and Min Qi. Neural network forecasting for seasonal and trend time series. European Journal of Operational Research, 160(2):501–514, 2005.
{"url":"https://5dok.net/document/4yr694oy-multivariate-series-forecasting-reservations-industry-recurrent-neural-networks.html","timestamp":"2024-11-02T23:29:47Z","content_type":"text/html","content_length":"189868","record_id":"<urn:uuid:91c1153e-41a1-45bc-9ae0-8a0daab94a7d>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00208.warc.gz"}
Relationship of Nakshatra padas and Navamsa I have already mentioned that each Nakshatra is divided into 4 padas and the pada has a characteristic of the sign of the zodiac which starts from Mesha. If you take 3 nakshatras you get 12 padas (3 x 4) which can be equated to the 12 rasis. However each rasi has been assigned only 2¼ Nakshatras or 9 padas only. So counting from Mesha the 9^th sign falls in Dhanus. So the next Nakshatra pada should automatically start from Makara. Count 9 signs from Makara to end up in Kanya. The next Nakshatra pada should start from Thula. Similarly counting from Thula the 9^th sign falls in Mithuna and naturally the next Nakshatra pada will begin from Karka. So the order of beginning is Mesha, Makara, Thula and Karka which tallies with the basic principle laid down in scriptures and as explained above. To make matters simple I will conclude that the signs of padas and the Navamsa signs are the same. Now let us examine the sample calculation made above according to this new rule: Mercury is in 15^0 25’ in Kanya. The Nakshatra is Hasta-2^nd pada. Count the padas from Ashwini to Hasta – 2^nd. Ashiwini to Uttaraphalguni it is 12 Nakshatras or 12 x 4 = 48 pada. Add 2 padas of hasta and the total pada is 50. For the 12 rasis we are allotting 12 padas. Hence 4 x 12 = 48 padas gets allotted to all the 12 rasis. 2 padas remain – starting from Mesha the 2^nd rasi will be Vrishabha which gets the 50^th pada. So Mercury will be in Vrishabha in the Navamsa Chart which tallies with our principle stated earlier. Let us take the 2^nd example also. Mars is in Karka 24^0 10’. The Nakshatra is Aslesha – 3^rd pada. From Ashwini till pushyam 8 Nakshatras or 32 padas are there. Adding the 3 padas of Aslesha we get 35 padas. Staring from Mesha allot each pada and you will end up in Kumbha as the 35^th pada. This is what we have arrived at earlier. To complete this calculation you must definitely require the Nakshatra and the pada in which the planet is placed or the longitude of the planet. I give below a table containing all the 108 Navamsas for all Nakshatra Padas: The purpose of this table is if you know the longitude of a planet without the sign name and nakashatra pada you can find the Rasi, Nakshatra, Pada and navamsa of the planet by looking at the cumulative longitude column. If you know the Rasi and the Longitude then select Rasi column and Longitude column and note the Nakshatra, Pada and navamsa. 222.5 Kb. Share with your friends:
{"url":"https://ininet.org/sri-ganapati-is-the-elephant-headed-son-of-sri-shiva-belonging.html?page=3","timestamp":"2024-11-04T14:03:59Z","content_type":"text/html","content_length":"15977","record_id":"<urn:uuid:edc6fe11-3573-4898-9f21-a532b278a922>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00252.warc.gz"}
101. Total Time | HotDocs Developers top of page 101. Total Time Total the time entries from a list or a repeated dialog. Repeated Dialog: SET TotalHours-n TO 0 SET TotalMins-n TO 0 REPEAT Time Entries // ** If you need to compute elapsed time (Hours & Mins), do it here ** SET TotalHours-n TO TotalHours-n + ZERO( Hours-n ) SET TotalMins-n TO TotalMins-n + ZERO( Mins-n ) // Dump extra mins into hours IF TotalMins-n > 59 SET TotalHours-n TO TotalHours-n + TRUNCATE( TotalMins-n / 60, 0 ) SET TotalMins-n TO MOD( TotalMins-n / 60 ) // Nice output "«TotalHours-n» hour" IF TotalHours-n != 1 RESULT + "s" RESULT + " and «TotalMins-n» minute" IF TotalMins-n != 1 RESULT + "s" SET TotalHours-n TO 0 SET TotalMins-n TO 0 SET TotalHours-n TO TotalHours-n + ZERO( HoursA-n ) SET TotalHours-n TO TotalHours-n + ZERO( HoursB-n ) SET TotalHours-n TO TotalHours-n + ZERO( HoursC-n ) SET TotalHours-n TO TotalHours-n + ZERO( HoursD-n ) SET TotalMins-n TO TotalMins-n + ZERO( MinsA-n ) SET TotalMins-n TO TotalMins-n + ZERO( MinsB-n ) SET TotalMins-n TO TotalMins-n + ZERO( MinsC-n ) SET TotalMins-n TO TotalMins-n + ZERO( MinsD-n ) // Dump extra mins into hours IF TotalMins-n > 59 SET TotalHours-n TO TotalHours-n + TRUNCATE( TotalMins-n / 60, 0 ) SET TotalMins-n TO REMAINDER( TotalMins-n, 60 ) // Nice output "«TotalHours-n» hour" IF TotalHours-n != 1 RESULT + "s" RESULT + " and «TotalMins-n» minute" IF TotalMins-n != 1 RESULT + "s" This computation totals time entries from either a repeated dialog or from a list. The calculated hours and minutes values are placed in number variables, and a nicely formatted string is also Variables: The computation assumes the following variables: • Hours-n - A number variable which holds the number of hours for a single time entry. • Mins-n - A number variable which holds the number of minutes for a single time entry. • TotalHours-n - A number variable which holds the total hours computed from the time entries. • TotalMins-n - A number variable which holds the total minutes computed from the time entries. • Time Entries - A repeated dialog box which contains two variables, Hours-n and Mins-n. Elapsed Time: This computation assumes that you already know the elapsed time for each time entry. If, however, your time entries simply have a start time and and end time, you will need to compute the elapsed time as well. Computation #0100: Elapsed Time shows how to do this. You would include the elapsed time calculation within the REPEAT where shown. The computation should be quite straightforward. We run through the REPEAT, adding each Hours-n and Mins-n value into TotalHours-n and TotalMins-n. Note that we use the ZERO model to give unanswered variables a value of 0. Once we have summed all of the time entries, the TotalMins-n variable will likely have a value much greater than 60. These extra minutes should be converted into hours. We add one hour to TotalHours-n for every 60 minutes in TotalMinutes-n, then set TotalMinutes-n to the REMAINDER. bottom of page
{"url":"https://www.hotdocsdevelopers.com/computations/101.-total-time","timestamp":"2024-11-07T10:37:27Z","content_type":"text/html","content_length":"851405","record_id":"<urn:uuid:18f0afae-adce-450f-9cd5-5ad102bea61b>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00050.warc.gz"}
Subworkflows and sub-launch plans {"serverless": "Serverless", "byoc": "BYOC"} ["serverless", "byoc"] Subworkflows and sub-launch plans# In Union it is possible to invoke one workflow from within another. A parent workflow can invoke a child workflow in two ways: as a subworkflow or via a sub-launch plan. In both cases the child workflow is defined and registered normally, exists in the system normally, and can be run independently. But, if the child workflow is invoked from within the parent by directly calling the child’s function, then it becomes a subworkflow. The DAG of the subworkflow is embedded directly into the DAG of the parent and effectively become part of the parent workflow execution, sharing the same execution ID and execution context. On the other hand, if the child workflow is invoked from within the parent by calling the child’s launch plan, this is called a sub-launch plan and it results in a new top-level workflow execution being invoked with its own execution ID and execution context. It also appears as a separate top-level entity in the system. The only difference is that it happens to have been kicked off from within another workflow instead of from the command line or the UI. Here is an example: def sub_wf(a: int, b: int) -> int: return t(a=a, b=b) # Get the default launch plan of sub_wf, which we name sub_wf_lp sub_wf_lp = LaunchPlan.get_or_create(sub_wf) def main_wf(): # Invoke sub_wf directly. # An embedded subworkflow results. sub_wf(a=3, b=4) # Invoke sub_wf through its default launch plan, here called sub_wf_lp # An independent subworkflow results. sub_wf_lp(a=1, b=2) When to use subworkflows# Subworkflows allow you to manage parallelism between a workflow and its launched sub-flows, as they execute within the same context as the parent workflow. Consequently, all nodes of a subworkflow adhere to the overall constraints imposed by the parent workflow. Here’s an example illustrating the calculation of slope, intercept and the corresponding y-value. from flytekit import task, workflow def slope(x: list[int], y: list[int]) -> float: sum_xy = sum([x[i] * y[i] for i in range(len(x))]) sum_x_squared = sum([x[i] ** 2 for i in range(len(x))]) n = len(x) return (n * sum_xy - sum(x) * sum(y)) / (n * sum_x_squared - sum(x) ** 2) def intercept(x: list[int], y: list[int], slope: float) -> float: mean_x = sum(x) / len(x) mean_y = sum(y) / len(y) intercept = mean_y - slope * mean_x return intercept def slope_intercept_wf(x: list[int], y: list[int]) -> (float, float): slope_value = slope(x=x, y=y) intercept_value = intercept(x=x, y=y, slope=slope_value) return (slope_value, intercept_value) def regression_line(val: int, slope_value: float, intercept_value: float) -> float: return (slope_value * val) + intercept_value # y = mx + c def regression_line_wf(val: int = 5, x: list[int] = [-3, 0, 3], y: list[int] = [7, 4, -2]) -> float: slope_value, intercept_value = slope_intercept_wf(x=x, y=y) return regression_line(val=val, slope_value=slope_value, intercept_value=intercept_value) The slope_intercept_wf computes the slope and intercept of the regression line. Subsequently, the regression_line_wf triggers slope_intercept_wf and then computes the y-value. It is possible to nest a workflow that contains a subworkflow within yet another workflow. Workflows can be easily constructed from other workflows, even if they also function as standalone entities. For example, each workflow in the example below has the capability to exist and run independently: def nested_regression_line_wf() -> float: return regression_line_wf() When to use sub-launch plans# Sub-launch plans can be useful for implementing exceptionally large or complicated workflows that can’t be adequately implemented as dynamic workflows or map tasks. Dynamic workflows and map tasks share the same context and single underlying Kubernetes resource definitions. Sub-launch plan invoked workflows do not share the same context. They are executed as a separate top-level entities and thus can be distributed among different Flytepropeller workers and shards, allowing for better parallelism and scale. Here is an example of invoking a workflow multiple times through its launch plan: from flytekit import task, workflow, LaunchPlan from typing import List def my_task(a: int, b: int, c: int) -> int: return a + b + c def my_workflow(a: int, b: int, c: int) -> int: return my_task(a=a, b=b, c=c) my_workflow_lp = LaunchPlan.get_or_create(my_workflow) def wf() -> List[int]: return [my_workflow_lp(a=i, b=i, c=i) for i in [1, 2, 3]]
{"url":"https://docs.union.ai/byoc/user-guide/core-concepts/workflows/subworkflows-and-sub-launch-plans","timestamp":"2024-11-09T19:29:43Z","content_type":"text/html","content_length":"61805","record_id":"<urn:uuid:cd11dd00-c579-4b24-be6c-6333b36e00a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00613.warc.gz"}
[Series of lectures 2/21, 2/23, 2/28] An introduction to geometric representation theory and 3d mirror symmetry • Date: 2023-02-21 (Tue) 10:30 ~ 12:00 2023-02-23 (Thu) 10:30 ~ 12:00 2023-02-28 (Tue) 10:30 ~ 12:00 • Place: 129-104 (SNU) • Title: An introduction to geometric representation theory and 3d mirror symmetry • Speaker: Justin Hilburn (Perimeter Institute) • Abstract: The Beilinson-Bernstein theorem, which identifies representations of semi-simple Lie algebra \mathfrak{g} with D-modules on the flag variety G/B, makes it possible to use powerful techniques from algebraic geometry, especially Hodge theory, to attack problems in representation theory. Some successes of this program are the proofs of the Kazhdan-Lusztig and Jantzen conjectures as well as discovery that the Bernstein-Gelfand-Gelfand categories O for Langlands dual Lie algebras are Koszul dual. The modern perspective on these results places them in the context of deformation quantizations of holomorphic symplectic manifolds: The universal enveloping algebra U(\mathfrak{g}) is isomorphic to the ring of differential operators on G/B which is a non-commutative deformation of the ring of functions on the cotangent bundle T^*G/B. Thanks to work of Braden-Licata-Proudfoot-Webster it is known that an analogue of BGG category O can be defined for any associative algebra which quantizes a conical symplectic resolution. Examples include finite W-algebras, rational Cherednik algebras, and hypertoric enveloping algebras. Moreover BLPW collected a list of pairs of conical symplectic resolutions whose categories O are Koszul dual. Incredibly, these “symplectic dual” pairs had already appeared in physics as Higgs and Coulomb branches of the moduli spaces of vacua in 3d N=4 gauge theories. Moreover, there is a duality of these field theories known as 3d mirror symmetry which exchanges the Higgs and Coulomb branch. Based on this observation Bullimore-Dimofte-Gaiotto-Hilburn showed that the Koszul duality of categories O is a shadow of 3d mirror symmetry. In this series of lectures I will give an introduction to these ideas assuming only representation theory of semi-simple Lie algebras and a small amount of algebraic geometry.
{"url":"https://qsms.math.snu.ac.kr/index.php?mid=board_sjXR83&listStyle=viewer&order_type=desc&l=en&document_srl=2438&page=2","timestamp":"2024-11-09T03:10:54Z","content_type":"text/html","content_length":"23254","record_id":"<urn:uuid:09c15d2a-9de8-4ec7-9570-d8f224cad545>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00559.warc.gz"}
Texture Measurement with EMATs I Directed ultrasonic velocity measurements predict formability by taking advantage of the effects of directional anisotropy that exists in the worked sheet (induced by the rolling process). One consequence of directionality is a change in mechanical properties with direction. For example, the yield strength and ductility may change with the orientation at which a laboratory tensile specimen is cut from a sheet. Generally, minimum and maximum values of these quantities occur at 0 degrees, in the vicinity of 45 degrees and at 90 degrees with respect to the rolling direction (see Figure 1). Any formation of ears in drawing operations (two fold and four fold) will also generally take place along these axes. When forming sheet metal, practical consequences of directionality include such phenomena as excess wrinkling, puckering, ear-formation, local thinning, or actual rupture. At best, these can cause individual pieces to be scrapped. A more serious consequence is the down time required to correct the manufacturing process. A number of specialized laboratory mechanical tests have been developed to identify the severity of directionality. Included are measurements of plastic strain ratios in tensile tests, limiting drawing ratio measurements, cupping tests, etc. Of particular interest here is the plastic strain ratio, defined as Where e[w] is the strain ratio in the width direction and e[t] is the strain in the thickness direction of a tensile coupon loaded in the plastic regime. The plastic strain ratio determines the relative tendency of deformation to occur in the plane of the sheet (e[w]) as opposed to through the thickness (e[t]). In general, r will vary with the angle at which the tensile coupon is cut with respect to the rolling direction of the sheet. Directions with large values of r will generally correspond to directions of ear formation when a cup is deep drawn, as sketched in Figure 2. The "RD" indicates the rolling direction, with respect to which the angles that are measured. The upper set of curves shows the variation of r with angle. The lower sketches represent the resulting cup contour. Two commonly used figures of merit are the average plastic strain ratio or normal anisotropy, defined as $r = \frac{r(0°)+2r(45°)+r(90°)}{4}$ and the planar anisotropy, defined as $\delta-r = \frac{r(0°)+2r(45°)+r(90°)}{2}$ Formability of a drawing quality sheet depends largely on two factors: drawability (capability to be drawn from the flange area of the blank into the die cavity) and stretchability (capability to be stretched under biaxial tension to the contours of the punch). Drawability is related primarily to plastic anisotropy, and the average plastic strain ratio, r, is a common measure of its value. This is schematically illustrated in Figure 2. The planar anisotropy, Ar, is thought to be a measure of the tendency to form ears. As will be discussed shortly, directionality is sensed in ultrasonic velocity measurements by taking advantage of another one of its consequences, the dependence of elastic properties on direction. These are determined nondestructively from the elastic wave speeds. Figure 3 illustrates the causes for the existence of directionality (anisotropy) in the processed sheet. There are two kinds of anisotropy: one is caused by the alignment of the nonmetallic inclusions existing in the ingot (called mechanical fibering or fiber texture) and the other is due to the alignment of the grains or crystals, and is called preferred orientation or crystallographic texture. The effects of preferred orientation have more profound implications in deep drawing operations, and it is this property that is sensed by ultrasonic measurements. Figure 4 demonstrates how the preferred orientation is developed through the effects of the rolling operations on the grains of the unprocessed sheet. In response to the force imposed in working the metal, extensive plastic deformation must take place. At a microscopic level, this may be thought of as a result of dislocation motion along planes of low resistance. Two interrelated phenomena result: an elongation of the grains that could be observed visibly, and a change in the crystallographic orientation of the grains. The latter is believed to be the primary cause of directionality of properties associated with deep drawing. It can be sensed by X-ray diffraction or by ultrasonic wave speed measurements. As an example of the effects of texture on the drawing capability of the sheets, one can qualitatively consider the impact of idealized textures in low carbon steel sheets. The bcc structure of the steel is strongest when measured along its cube diagonal or [111] direction, less strong along its edge diagonal [110] and weakest along its face diagonal [100], as defined in Figure 5. It is known that when the material assumes the cube-on-corner texture (in which the crystals line up with the strongest direction, [111]) normal to the sheet, the most favorable normal anisotropy is obtained. On the other hand, an unfavorable normal anisotropy is associated with the cube-on-face texture, Figure 6. Signal processing includes the estimation of normal anisotropy from ultrasonic determinations of the strengths of such texture components. It is obviously desirable to monitor texture as early as possible in the rolling process to better control the amount of annealing and cold work necessary for a proper drawability. In practice, the crystallites in commercial metal sheets do not only exhibit these few ideal orientations. Instead, they have a continuum of orientations which is best described by the crystallite orientation distribution function (CODF), giving the probability that a grain will have a particular orientation. There will be peaks in the CODF near ideal orientations, but the maxima are not necessarily sharp. The conventional metallurgical technique for obtaining the grain orientations has been the measurement of pole figures using X-ray diffraction. A pole figure can only give an incomplete assessment of the orientations in a two dimensional form. However, computer programs have been developed to generate a complete description of the orientations (i.e. the CODF) based on the analysis of multiple pole figures. Although a complete description, the complexity of the CODF, which is a function of the Euler angles describing possible crystallite orientation, renders its direct use awkward for many purposes. An alternate approach is to represent the CODF as a superposition of simple, known functions, much as a waveform might be represented as a sum of sine and cosine functions in a Fourier series. Formally, one writes $W(\xi,\Psi,\phi)=\sum^{\infty}_{l=0}\sum^{l}_{m=-l}\sum^{l}_{n=-l}w_{lmn}z_{lmn}\xi e^{-im\Psi}e^{-in\phi}$ where ??, ?? are Euler angles describing the crystallite orientation with respect to the plate, 4=cos(O), z[lmn] are generalized Legendre function, and the w[lmn] are constants, known as orientation distribution coefficients (ODC's). Thus, the ODC's are analogous to the constants in a Fourier series. Given an experimental determination of the CODF, the ODC's can be determined using well-known mathematical manipulations. Alternatively, knowledge of the ODC's fully specifies the CODF. Hence, these two contain equivalent information that fully specifies the texture. The ODC's may be thought of as measures of the severity of the directional properties of the sheet. Figure 7 summarizes the procedure employed in determining the ODC's from X-ray pole figures. Measurement techniques use the angular variation of the ultrasonic waves in the sheet to detect texture and directionality. The effects of texture on velocity of an ultrasonic wave is to slow it down in one direction and make it faster in another (Figure 8). Ultrasonic velocity measurements take advantage of this effect, thus determining the formability and texture parameters such as the r's and
{"url":"https://www.nde-ed.org/NDETechniques/Ultrasonics/EquipmentTrans/emattexture1.xhtml","timestamp":"2024-11-11T23:25:23Z","content_type":"application/xhtml+xml","content_length":"59869","record_id":"<urn:uuid:3db441ab-1e40-4f81-bf54-098dd4dd8c6c>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00417.warc.gz"}
Modeling the search for the least costly opportunity With the continuing growth in the number of opportunities available at virtual stores over the Internet there is also a growing demand for the services of computer programs capable of scanning a large number of stores in a very short time. We assume that the cost associated with each scan is linear in the number of stores scanned, and that the resulting list of price quotes is not always satisfactory to the customer, in which case an additional scan is performed, and so on. In such a reality the customer, wishing to minimize her expected cost, must specify the requested sample size and a rule (control limit) to stop the search. In the context of search theory, the above search model can be categorized as "fixed-sample-size, sequential, with infinite horizon". According to this model the expected search cost is a function of two decision variables: the sample size and the control limit. We prove that for arbitrary sample size the expected search cost is either quasi-convex or strictly decreasing in the control limit, and that the optimal expected search cost is quasi-convex in the sample size. These properties allow an efficient calculation of the optimal policy. We also develop analytic formulas to calculate the cost's variance, allowing customers to choose a slightly higher expected cost if there is a considerable decrease in the variance. Finally, we present detailed examples for price quotes that are distributed uniformly or exponentially. • Comparison shopping agents • Optimal stopping rule • Search theory Dive into the research topics of 'Modeling the search for the least costly opportunity'. Together they form a unique fingerprint.
{"url":"https://cris.tau.ac.il/en/publications/modeling-the-search-for-the-least-costly-opportunity","timestamp":"2024-11-09T14:25:48Z","content_type":"text/html","content_length":"51647","record_id":"<urn:uuid:6dc34d03-e30d-4657-aee8-15ea63be0b9e>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00694.warc.gz"}
What are Excel Function arguments Excel function arguments are the inputs we need provide to Excel functions to perform a particular task. Depending on what formula you are using, the number of arguments or the type of argument differs. For example, let us consider Excel SUM function. SUM() function needs numeric values as the input to find the sum of those numeric values. Similarly, TRIM() is a Text function, it needs text as the function argument. If a function needs more than one argument, those arguments are separated by using " , " (Comma) character. Different Excel functions have different types of argument requirements. Some functions do not require arguments. For example; for PI() function, there is no argument required. Excel PI() function just returns the value of π. Some functions require only one argument. For example; SECOND() function needs the time serial as its argument. Excel SECOND() function returns the seconds of a time value. Similarly, some other Excel functions require more than one argument. For example; Excel DATE() function require three arguments. YEAR, MONTH and DAY. Excel DATE() function can take YEAR, MONTH and DATE as arguments and then combine those arguments and return a date value. Some Excel functions have optional arguments also. By-default, EXCEL LEFT() function returns the single left-most character of a text value. If you want 3 left-most characters, you can provide 3 as an optional argument. Examples of Excel function arguments Let us assume that we need to calculate the sum of numbers 1700, 2500, 3800, 12025, 13001 in an Excel worksheet, using Excel SUM function. We can use the Excel SUM function, with arguments just as above numbers in an Excel Cell. To input arguments for a function, first we need to type-in "=" character, then we need to type-in the name of the function. After typing the name of the function, we type an " ( " bracket. Once all the arguments are typed-in, we close it with a " ) " bracket. As you can see from below image, Excel function arguments are separated by using a " , " (Comma) character. Please refer below image. Following image shows Cell addresses as Excel function arguments. Following image shows Range address as Excel function argument. Written by Jajish Thomas. Last updated on 27^th January, 2022.
{"url":"https://www.omnisecu.com/excel/formulas-and-functions/what-are-excel-function-arguments.php","timestamp":"2024-11-13T14:52:46Z","content_type":"text/html","content_length":"45339","record_id":"<urn:uuid:47bb64c1-39cf-4ad4-8bcc-ac65bd07bfdc>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00133.warc.gz"}
A Hybrid System for Customer Churn Prediction and Retention Analysis via Supervised Learning Computers, Materials & Continua A Hybrid System for Customer Churn Prediction and Retention Analysis via Supervised Learning 1Department of Computer Science, COMSATS University, Islamabad, Attock Campus, Pakistan 2Department of Computer Science, COMSATS University, Islamabad, Islamabad Campus, Pakistan *Corresponding Author: Khalid Iqbal. Email: khalidiqbal@cuiatk.edu.pk Received: 24 November 2021; Accepted: 11 February 2022 Abstract: Telecom industry relies on churn prediction models to retain their customers. These prediction models help in precise and right time recognition of future switching by a group of customers to other service providers. Retention not only contributes to the profit of an organization, but it is also important for upholding a position in the competitive market. In the past, numerous churn prediction models have been proposed, but the current models have a number of flaws that prevent them from being used in real-world large-scale telecom datasets. These schemes, fail to incorporate frequently changing requirements. Data sparsity, noisy data, and the imbalanced nature of the dataset are the other main challenges for an accurate prediction. In this paper, we propose a hybrid model, name as “A Hybrid System for Customer Churn Prediction and Retention Analysis via Supervised Learning (HCPRs)” that used Synthetic Minority Over-Sampling Technique (SMOTE) and Particle Swarm Optimization (PSO) to address the issue of imbalance class data and feature selection. Data cleaning and normalization has been done on big Orange dataset contains 15000 features along with 50000 entities. Substantial experiments are performed to test and validate the model on Random Forest (RF), Linear Regression (LR), Naïve Bayes (NB) and XG-Boost. Results show that the proposed model when used with XGBoost classifier, has greater Accuracy Under Curve (AUC) of 98% as compared with other methods. Keywords: Telecom churn prediction; data sparsity; class imbalance; big data; particle swarm optimization DATA volume is significantly growing in recent decade due to advancements in information technology. Concurrently, an enormous development is being made in machine learning algorithms to process this data and discover hidden patterns independently. Machine learning techniques learn through data and also have the potential to automate the analytical model building. Machine learning is divided into three categories such as: 1) supervised, 2) semi-supervised, and 3) unsupervised learning. Supervised learning is used to discover hidden patterns from labeled datasets. Whereas unsupervised learning is used to discover hidden patterns from unlabeled data. Therefore, unsupervised learning is beneficial in finding the structure and useful insights from an unknown dataset. However, semi-supervised learning falls in between unsupervised and supervised learning [1]. Customers are the most important profit resource in any industry. Therefore, telecom industry fear customer churn due to changing interests, and demand for new applications and services. Customer churn is defined as the movement of people from one company to another company for several reasons. The organization’s major concern is to retain their unsatisfied customers to survive in a highly competitive environment. To reduce customer churn, the company should be able to predict the behavior of customers correctly. For this, a churn prediction model is used to get an estimate of customers, who may switch a given service provider in the near future [2]. Along with the telecom industry, churn prediction has also been used in subscription-based services, e-commerce, and industrial area [3,4]. Nowadays, the use of the mobile phone is integral to our everyday life. This increases competition in the telecom sector, as it is less costly for a customer to switch services. The behavior of a churner typically relies on multiple attributes. From a company perspective, the expense of attracting new customers is 6 to 7 times higher than retaining the unsatisfied ones [5]. On the contrary, companies are aware of the fact of losing customers, resulting in decrease profits. Therefore, Customer Relationship Management (CRM) needs an effective model to predict future churners for automating the retention mechanism. Many operators also use simple pattern matching programs to identify potential churners. However, these programs require regular maintenance, and fail to incorporate the changing requirements. Therefore, machine learning algorithms have the potential to continually learn from new data and adapt as new patterns emerge. In the past decade, significant research was performed to predict customer churn in telecom companies. Most approaches used machine learning for churn prediction [6]. One study combined two different ML models for customer churn prediction. The ML models are Back Propagation Artificial Neural Networks and Self Organizing Maps [7]. Various classification algorithms are used in this research [8] and compared their result to discover the most accurate algorithm for prediction of customer churn in businesses. In other research, the author creates a custom classification model using a combination of Artificial Neural Networks, Fuzzy modeling, and Tree-based models [9]. The proposed research is called the locally linear model tree (LOLIMOT). The results show that the LOLIMOT model achieved accurate classification as compared to other classification algorithms even in extremely unbalanced datasets. Similarly, researchers have suggested a large number of classifier-based techniques like Enhanced Minority Oversampling Technique (EMOTE), Support Vector Nachine (SVM), Support Vector Data Description (SVDD), Netlogo (agent-based model), Fuzzy Classifier, Random Forest (RF) (e.g., [10–18]). Also, there are hybrid techniques for churn prediction that merge two classifiers like K-Means with Decision Tree (DT), DT along with Logistic Regression (LR) and K-Means and classic rule inductive technique (FOIL) [19–21]. Also, Particle Swarm Optimization (PSO) based technique was proposed for feature selection in [22–25]. Nowadays, customer churn is a major issue for every organization. The operation of customer churn prediction has become more complicated. Therefore, there is a need to develop some new and effective that accurately predict customer churn that help the company in more effective allocation of its resources. In this paper, our main concern is churn prediction for large datasets. We collected data from the international competition KDD Cup, 2009 (provided by Orange, Inc) [26]. We proposed a hybrid system, name as HCPRs, based on PSO feature selection and classification via different classifiers with the aim to generate better performance, and we addressed the class imbalance problem by using SMOTE. Furthermore, we reduce the overall computational cost through features selection. This method targets the issue of predicting customer churn, and retention analysis in telecom industry. The proposed system may help telecom companies to retain existing customers along with attracting new ones. The following are our main contribution of this paper: • We propose a hybrid model, named as A Hybrid System for Customer Churn Prediction and Retention Analysis via Supervised Learning (HCPRs) to address the issue of imbalance class data and feature selection. It uses a PSO based feature selection model which makes churn identification more quickly. • We perform stratified five-fold cross validation for better testing of data, and performance evaluation. • To demonstrate the effectiveness of HCPRs, we evaluate the proposed model against prominent techniques. Experimental results show an improved performance on RF compared to other classifiers. The rest of the paper is organized as follows. In Section 2, we present previous work along with the statistics related to past work. Motivation and research questions are presented in Section 3. In Section 4, we describe the widely known performance metrics used in the Churn prediction problem. In Section 5, we discuss our proposed work. In Section 6, the prediction and evaluation are presented for multiple machine learning classifiers. In Section 7, experiments and results have been discussed. Finally, Section 8 concludes the paper and future work. Churn in the telecom industry has been a long-term challenge for telecom companies. Typically, experts would manually perform churn analysis and make predictions accordingly. However, with the ever-increasing number of mobile subscribers and cellular data, it is not possible to predict manually. Hence, the research community has been attracted to explore the use of classifiers-based and PSO-based models for churn prediction. 2.1 Classifier Based Churn Prediction In some previous researches, supervised learning approaches were used to identify churn like Naive Bayes, Logistic Regression, Support Vector Machines, Decision Tree and Random Forest (e.g., [27–29 ]). Awang et al. [30] presented a regression-based churn prediction model. This model utilizes the customer’s feature data for analysis and churns identification. Vijaya et al. [31] proposed a predictive model for customer churn using machine learning techniques like KNN, Random Forest, and XG Boost. The author compared the accuracy of several machine learning algorithms to determine the better algorithm of higher accuracy. One more research [16] proposed a fuzzy based churn prediction model and compared the accuracy of several classifiers with the fuzzy model. The author proved in predicting customer churn that fuzzy classifiers are more accurate as compared to others. De Bock et al. [32] design the GAMensplus classification algorithm for interpretability and strong classification. Karanovic et al. [5] proposed a questionnaire-based data collection technique processed over Enhanced Minority Oversampling Technique (EMOTE) classifier. Maldonado et al. [33] proposed relational and the non-relational learner’s classifiers handling data sparsity by social network analytic method. In earlier days, many researchers show that single model based churn prediction techniques do not produce satisfactory results. Therefore, researchers switched on to hybrid models [18–20]. The basic principle with the hybrid model is to combine the features of two or more techniques. One study combined two different ML models for customer churn prediction such as Back Propagation Artificial Neural Networks and self-organizing Maps [7]. A data filtration process was performed using a hybrid model combining two neural networks. After that, data classification was performed using Self-Organized Maps (SOM). The proposed hybrid model was evaluated through two fuzzy testing sets and one general testing set. The evaluation results show that the proposed hybrid model was outperforming in prediction and classification accuracy, using a single neural network baseline model. In one more research, the author creates a custom classification model using a combination of Artificial Neural Networks, Fuzzy modeling, and Tree based models [9]. 2.3 PSO-Based Churn Prediction PSO-based techniques were proposed to solve the problem of customer churn (e.g., [21,24,28,31]). Huang et al. [21] proposed a technique for churn prediction using particle swarm optimization (PSO). Furthermore, the author proposed three variants of PSO that are 1) PSO incorporated with feature selection, 2) PSO embedded with simulated annealing and 3) PSO with a combination of both feature selection and simulated annealing. It was observed that proposed PSO and its variants give better results in imbalanced scenarios. Guyon et al. [34] designed a model for efficient churn prediction by using data mining techniques. In preprocessing stage, the k-means algorithm is used. After preprocessing, attributes are selected by employing the minimum Redundancy and Maximum Relevance (mRMR) approach. This technique uses the Support Vector Machine with Particle Swarm Optimization (SVM with PSO) to examine the customer churn separation or prediction. The experiments show that the proposed model attain better performance as compared to the existing models in terms of accuracy, true-positive rate, false-positive rate, and processing time. Vijaya et al. [31] handled imbalanced data distribution by features selection using PSO. It used Principle Component Analysis (PCA), Fisher’s ratio, F-score, and Minimum Redundancy and Maximum Relevance (mRMR) techniques for feature selection. Moreover, Random Forest (RF) and K Nearest Neighbor (KNN) classifiers are utilized to evaluate the performance. In this section, we present the overall architecture of the proposed model along with its major component descriptions. The performance of the proposed model is evaluated in a telecom churn prediction model after knowing the problem statement in this era. Telecom providers T = {t1, t2, t3,…,tk} competing each other that may result in customers churn where C = {c1, c2, c3,…., cn}. Telecom providers (ti ⊆ T) require an identification system for churners (c ⊆ C) having a high possibility to churn. Considered multiple features F = {f1, f2, f3,….fj} of customers either to churn or non-churn along with a class label (L). The feature selection process is done to get the prediction result on considering valuable features (f ⊆ F). The overall work flow diagram of the proposed system is illustrated in Fig. 1. We explain the components of the churn prediction proposed model in a step-by-step manner. In the first step, data pre-processing is performed that comprises the data cleaning process, removal of imbalanced data features, and normalized the data. The synthetic Minority Oversampling (SMOTE) technique is used to balance the imbalanced data in telecommunication industries to improve the performance of churn prediction. In the second step, important features are extracted from data using the particle swarm optimization (PSO) mechanism. In the third step, different classification algorithms are employed for categorizing the customers into the churn and non-churn customers. The classification algorithms consist of Logistic Regression (LR), Naïve Bayes (NB), Random Forest (RF), and XG-boost. A publicly available Orange Telecom Dataset (OTD) is provided by the French telecom company [35]. The orange dataset consists of churners and non-churners. The dataset contains a large amount of information related to the customers and mobile network services. This information is considered in the KDD cup held for customer relationship prediction [36]. The dataset consists of 15000 variables and 50000 instances; the dataset is further divided into five chunks (C1, C2, C3, C4, C5) that contain an equal number of samples (10,000 each). Furthermore, out of 50000 samples, 3672 and 46328 samples were churners and non-churners. The approximate percentage ratio between churner and non-churner in OTD is 7:93. Due to which class imbalance problem occurs in such data set, Fig. 2 shows a graphical representation between churner vs. non-churners. The names of the features are not defined to respect the customer’s privacy. OTD is a heterogeneous dataset that consists of noisy data with variations in the measurement scale, features with null values, features with missing values, and data sparsity. Hence, for which data pre-processing is a requirement on such kind of dataset. The dataset consists of noisy features, sparsity and missing values. In the dataset it was noticed that approximately 19:70% of the data have missing values. The main purpose of data pre-processing step is to consider data cleaning for missing values and noisy data and data transformation. For data normalization, there are several methods like Z-Score, Decimal Scaling and Min-Max. Resolving the data sparsity problem, we used Min-Max normalization method. Min-Max normalization method performs a linear transformation on the data. In this method, we normalize the data in a predefined interval that is valued 0 and 1. Class Imbalance Distribution of the dataset where one class has a very large number of instances compared to the other class. The class with few samples is the minority class, and the class with relatively additional instances is the majority class. The imbalance between two classes is represented by the use of the “Ratio Imbalance” which is defined as the ratio between a number of samples of the majority class and that of a minority class. In forecasting the customer churn rate, the number of nonsense are relatively high compared to the churn number. Several techniques have been proposed to solve the problems associated with an unbalanced dataset. These techniques can be classified into four categories such as [36]: • Data level approaches, • Algorithm level approaches, • Cost-sensitive learning approaches, and • Classifier Ensemble techniques Data level oversampling technique reduces the imbalance ratio of the skewed dataset by duplicating minority instances. The most commonly used an oversampling technique is the Synthetic Minority Oversampling Technique (SMOTE) [37]. A SMOTE introduces additional synthetic samples into the minority class instead of directly duplicating instances. Using synthetic samples helps create larger and less specific decision regions. The algorithm first finds k nearest neighbors from each minority class sample using Euclidean distance as a measure of distance. Synthetic examples are generated along the line segments connecting the original minority class sample to its nearest k neighbors. The value of k depends on the number of artificial instances that need to be added. Steps for generating synthetic samples [36]: 1. Generate a random number between 0 and 1 2. Compute the difference between the feature vector of the minority class sample and its nearest neighbor 3. Then Multiply this difference by a random number (as generated in step 1) 4. After multiplication, adds the result of multiplication of the feature vector of the minority class sample 5. The resulting feature vector determines the newly generated sample In this paper, we considered the Orange dataset, the distribution of the number of churners and the number of non-churner had a large amount of difference in the dataset. Computed values between the churner against the nonchurner are 3672 (7:34%) and 46328 (92:65%), respectably. It shows the ratio of 1:13 between churners and non-churns. In customer churn prediction, the number of non-churners is relatively high with respect to the number of churners as shown in Fig. 3. For such an unbalanced distribution of the two classes, the few churners getting the same weight in a cost function as the non-churner will result in a high misclassification rate. As the classifier will be biased towards the majority class. To resolve this imbalance issue, we used an advanced oversampling technique SMOTE. The working of generic SMOTE is demonstrated in Fig. 4. A synthetic oversampling technique performed rather than simple Random Under Sampling (RUS) and Random Over Sampling (ROS) technique. We resolved the imbalance issue by making the minority class (churners) equal to the majority one (nonchurner) with a ratio of 1:1. 3.4 PSO Based Feature Selection Choosing subsets of features from an original dataset or eliminating unnecessary features are the fundamental principle underlying feature selection. Having irrelevant functionality in the dataset can reduce the accuracy of classification models and force the classification algorithm to process based on irrelevant functionality. A subset that must represent the original data necessarily and reasonably while still being useful for analytical activities. The feature selection activity focuses on finding an optimal solution in a generally large search space to mitigate classification activity. Therefore, it is recommended that performs a feature selection task before training a model. In this work, we use PSO-based feature selection mechanism of generating the best optimum subset for each of the chunks individually. It is a suitable algorithm for addressing feature selection problems for the following reasons: easy feature encoding, global search function, computationally reasonable, fewer parameters, and easy implementation [3]. The PSO is implemented for feature selection because of the above reasons. The algorithm was introduced as an optimization technique for natural number spaces and solved complex mathematical problems. This algorithm work on the principle of interaction to share information between the members. This method performs the search of the optimal solution through particles. Each particle can be treated as the feasible solution to the optimization problem in the search space. The flight behavior of particles is considered as the search process by all members. PSO is initialized with a group of particles, and each particle moves randomly. A particle i is defined by its velocity vector, vi, and its position vector to xi. Each particle's velocity and position are dynamically updated in order to find the best set of features until the stopping criterion is met. The stopping criteria can be a maximum number of iterations or a good fitness value. In PSO, each particle updates its velocity VE and positions PO with following equations: where i denote the index of the swarm global best particle, VE is the velocity and ξ is the inertia weighting factor which is dynamically reduced; r1 and r2 are random variables generated from the uniform distribution on the interval [0, 1]; c1 and c2 parameters denote as acceleration coefficients; pbest(i, t) is the historically best position until iteration t and gbest is the global best particle with best position in the swarm (giving the best fitness value) are defined as: where Np is the total number of particles, f is the fitness function, p is the position and t is the current iteration number. The first part of Eq. (1) (i.e., ξVEi(t)) is known as inertia that represents the previous velocity, whereas thesecond part (i.e., c1r1(pbest(i,t)−poi(t))) is known as the cognitive component that encourages the particles to move towards their own best position, and in the third part (i.e., c2r2(gbest(t)−poi(t))) is known as cooperation component that represents the collaborative effect of the particle [25]. After the feature selection stage, we will obtain meaningful global best selected features X = [x1, x2, x3, …., xi] (i.e., (X(f)). Hence, after doing all the above process step by step we are in a position to have a purified Orange Telecom Dataset, visualized in Fig. 5. In purified dataset class, imbalance issue is removed, the dataset has no null or missing values, dataset values are normalized in-between [0, 1], and the most relevant features have been selected. It is necessary to evaluate the model in order to determine which one is more reliable. Cross-validation is one of the most used methods to assess the generalization of a predictive model and avoid overfitting. There are three categories of cross-validation: 1) Leave-one-out cross-validation (LOOCV), 2) k-Fold cross-validation, and 3) Stratified cross-validation. This study focuses primarily on the stratification k-fold cross-validation (SK). The Stratification k-fold cross-validation (SK) works as the following steps: 1) SK splits the data into k folds, making particular, each fold is a proper representation of the original data • The proportion of the feature of interest in the training and test sets is the same as in the original dataset. 2) SK selects the first fold as the test set. • The test set selects one by one in order. For instance, in the second iteration, the second fold will be selected as the test 3) Repeat steps 1 and 2 for k times In this paper, stratified k-fold forward cross-validation is used, which is an improved version over traditional k-fold cross-validation for evaluating explorative prediction power of models. Instead of randomly partitioning the dataset, the sampling is performed so that the class proportions in the individual subsets reflect the proportions in the learning set. Stratified k-fold cross-validation is an improved version of traditional k-fold cross-validation. SK can preserve the imbalanced class distribution for each fold. Instead of randomly partitioning the dataset, stratified sampling is performed in such a way that the samples are selected in the same proportion as they appear in the population as shown in Fig. 6. For example, if he learning set contains n = 100 cases of two classes, the positive and the negative class, with n+ = 80 and n− = 20. If random sampling is done without stratification, then some validation sets may contain only positive cases (or only negative cases). With stratification, however, each validation set of 5-fold cross-validation is guaranteed to contain about eight positive cases and two negative cases, thereby reflecting the class ratio in the learning set. In this section, we used multiple classifiers that are Naïve Bayes (NB), Logistic Regression (LR), Random Forest (RF), and XG-Boost to get accurate and efficient prediction results of customer’s Naive Bayes (NB) [38,39] is a type of classification algorithm based on the Bayesian theorem. It determines the probabilities of classes on every single instance and feature [x1, x2, x3, . . . , xi] to derive a conditional probability for the relationships between the feature values and the class. The model contains two types of probabilities that can be calculated directly from the training data: (i) the probability of each class and (ii) the conditional probability for each class given each x value. Here, the Eq. (4) used for Bayes Theorem is given as, where yi is the target class, and x1,x2,x3,…,xi is the data, P(yi) is the class probability (prior probability), P(x1,x2,x3,…,xi) is the predictor probability (prior probability), P(x1,x2,x3,…,xi|yi) is the probability based on the conditions of the hypothesis, P(yi|x1,x2,x3,…,xi) is a hypothesis probability-based on conditions (posterior probability). Hence, the equation of Bayes theorem can also be written In this paper, Naïve Bayes algorithm was implemented to predict either the customer will churn or not. Logistic regression (LR) is the machine learning technique that solved binary classification problems. LR takes the real valued inputs and estimates the probability of an object belonging to a class. In this paper, a regression algorithm is evaluated to classify the customer churn and non-churners. where y is the dependent variable and x is the set of independent variables. The value of y is ‘1’ implies the churned customer or y is ‘0’ implies the non-churn customer. LR is estimated through the following equation: where β is the coefficients to be learned and Pr is the probability of churn or not churn. If the value of probability pr is >0.5 then it takes the output as class 0 (i.e., non-corners) otherwise it takes the output as class 1 (i.e., churners). Random Forest (RF) [40,41] is a classification algorithm that builds up many decision trees. It adds a layer of randomness that aggregates the decision trees using the “bagging” method to get a more precise and stable prediction. Therefore, RF performs very well compared to many other classifiers. Both classification and regression tasks can be accomplished with RF. It is robust against overfitting and very user-friendly [41]. The random forest technique's main idea is as follows: 1. Feature selection is accomplished on the decision tree to purify the classified data set. GINI index is taken as the purity measurement standard: where G represents the GINI function; q represents the number of categories in sample D; Pi represents the proportion of category i samples to the total number of samples and k represents that sample D is divided into k parts, that is, there are k Dj data sets. When the value of the GINI index (Eq. (8)) reach the maximum, then the node splitting is accomplished. 2. The generated multiple decision trees establish the random forest, and a majority voting mechanism is adopted to complete the prediction. The final classification decision is shown in Eq. (9). where L(X) represents the combined classification algorithm; li represents the classification algorithm of ith decision tree, and y is the target variable. I(•) is the indicative function. Fig. 7 presents the generic working of the RF ensemble model on a purified orange dataset with final predictions. In this work, we adopt the Extreme Gradient Boosting (XGBoost) an algorithm as a machine learning algorithm that is employed for classification and regression problems. XGBoost is also a decision-tree-based ensemble algorithm that uses a gradient boosting framework that boosts weak learners to become stronger. XGBoost experimented on the Orange Telecom Dataset. XGBoosting algorithm only takes numerical values that are a suitable technique to use the orange dataset. It is famous due to the speed and performance factors. XGBoost used the primary three gradients boosting techniques such as 1) Regularized, 2) Gradient, and 3) Stochastic boosting to enhance and tune the model. Moreover, it can reduce and control overfitting and decrease time consumption. The advantage of XGboost is that it can use multiple core parallel and fasten the computation by combining the results. Accuracy gained with the XGBoost algorithm was better than all the previous methods. 3.6.5 Multiple Linear Regression MLR determines the effect when a variety of parameters are involved. For example, while predicting the behavior of churn users, multiple factors could be considered such as: cost, services, customer dealing which a telecom company provides. The effect of these different variables is used to calculate y (dependent variable) is calculated by multiplying each term with assigned weight value and adding all the results. In this study, Accuracy, Precision, Recall, F1-Measure, and Accuracy Under Curve (AUC) based evaluation measures are used to quantify the accuracy of the proposed HCPRs churn prediction model. These well-known performance metrics are employed due to their popularity considered in existing literature for evaluating the quality of the classifiers that are used for churn prediction [42–44]. The following evaluation measures are used: Accuracy is defined as the ratio of the correct classifications of the number of samples to the total number of samples for a given test dataset. It is mainly used in classification problems for the correct prediction of all types of predictions. Mathematically, it is defined in Eq. (10). Acc =(TP+TN)/N(10) N= TP + TN + FP + FN where ‘TN’ is True Negative, ‘TP’ is True Positive, ‘FN’ is False Negative and ‘FP’ is False Positive. Precision is defined as the ratio of the correct classifications of positive samples to all numbers of the classified positive samples. It describes that the part of the prediction data is positive. Mathematically, it is defined in Eq. (11). Precision =TP/(TP + FP)(11) Recall measures the ratio of correctly classified relevant instances to the total amount of relevant instances. It can be showed for the churn and non-churn classes by the following equations, The F1 score is defined as a weighted average precision and recall. Where the F1 best score value is 1, and the worst score value is 0. The relative part of precision and recall to the F1 score are equal. Mathematically, it is defined in Eq. (13). 4.5 Accuracy Under Curve (AUC) We also used the standard scientific accuracy indicator, AUC (Area under the Curve) ROC (Receiver Operating Characteristics) curve to evaluate the test data. An excellent model with the best performance has higher the area under the curve (AUC) in an ROC. Mathematically, it is defined in Eq. (14). where P represents the number true churners and N shows the number of true non-churners. Arranging the churners in descending order, rank n is assigned to the highest probability customer, the next with rank n-1 and so on. The proposed HCPRs approach is validated with the comprehensive experimentation carried on respective combinations of sampling, feature selection, and classification methodologies. In this section, a comparative analysis of HCPRs with other existing approaches are also included. Orange Telecom Dataset (OTD) is used, as discussed in Section 3.4 performance evaluation of the proposed churn prediction model. In this study, 5-fold cross-validation testing is adopted for analyzing the performance of the proposed model. OTD dataset is further divided into five chunks (C1, C2, C3, C4, C5) that contains an equal number of samples. 5.1 Classifiers Performance Evaluation on Split OTD We used multiple classifiers by not rely only on a single classifier as evaluation results vary from classifier to classifier. Classifiers considered in our study are Naïve Bayes (NB), Logistic Regression (LR), Random Forest (RF), and XG-Boost. Performance metrics used to predict the efficiency of the model chunk wise are Accuracy, Recall, Precision, F-Measure, whose results can be visualized in Fig. 8. It can be seen that chunk 3 shows a high accuracy rate of Random Forest, XGBoost, Logistic Regression 95%, 96%, 88%, respectively whereas Naïve Bayes has a low accuracy rate of 74%. From the experimental results, the XGBoost outperformed the other classification algorithms on accuracy evaluation measures. Random Forest, XGBoost, Logistic Regression performed very well at Precision 96%, 9%, and 90%, respectively. Although Naïve Bayes has a lower precision rate of 83%. Although XGBoost outperformed the other algorithms on precision. XGBoost and Random Forest classifier gives higher recall score, i.e., 96%, 95% compared to Naïve Bayes and Logistic Regression classifiers. As displayed in Fig. 8, we confirm that the XGBOOST algorithm and Random Forest outperformed the rest of the tested algorithms with an F1-measure value of 95%. Logistic Regression algorithm occupied second place with an F1-measure value of 88%, while Naïve Bayes came last in the F1-measure ranking with a value of 73%. From the experimental results, XGBoost algorithm outperformed the other classification algorithms on most evaluation measures. 5.2 Split OTD Area under ROC Curve Visualization with Multiple Classifiers This section presents the ROC curve of LR, Naive Byes, Random Forest and XGBOOST. ROC curve is widely used to measure the test’s ability as a criterion. In general, the ROC curve is used to predict the model accuracy. The area under the ROC curve (AUC) is a measure of how well parameters can distinguish between churned and not churned groups. The AUC value is between 0:5 and 1. AUC value is better if it is close to 1. Moreover, the mean value of folds is also computed on the classifiers. Fig. 9 shows an overall view of the ROC curve on multiple classifiers that visualized on a split orange dataset along with mean AUC values. Stratified 5-fold cross-validation is used on each of the chunks. After 5-fold cross-validation, the Naïve Bayes Classifier means the AUC value of each chunk is 0:75 as shown in Fig. 9. After that, we implemented a Logistic Regression classifier to obtain more accurate results. Similarly, like the previous technique, the environment is the same, and the mean AUC value is predicted 0:93 of C1, C2, and C4, and the mean AUC obtained on C3 and C5 is 0:94. Moreover, to predict churner more accurately, we also used an ensemble technique, Random Forest. ROC curve showed much more required results of each chunk with a mean AUC value of 0:962. Furthermore, we also used a boosting technique (i.e., XGBoost) on the same split orange dataset to obtain more accurate results. ROC curve gave more accurate and efficient results than all the previous techniques. The mean AUC value of each chunk (i.e., C1, C2, C3, C4, C5) is 0:98. Results showed that the ensemble technique outperformed on the Orange dataset. Fig. 10 demonstrates the graphical representation of mean AUC results reported by the classifiers used in our research. 5.3 Performance Comparison with other Existing Approaches Numerous approaches applied with different classifiers in the domain of churn prediction. The comparison was taken based on performance evaluation metrics such as accuracy, precision, recall, F1-measure, and the ROC/AUC with the same dataset and different telecom datasets. These metrics were chosen to identify the performance of the HCPRs technique. Comparison of the HCPRs technique with K-MEANS-DT [45], Hybrid Firefly [26], PSO with a combination of both feature selection and simulated annealing (PSO-FSSA) [31], Weighted K-means and a classic rule inductive technique (FOIL) (WK-FOIL) [21] and Artificial Neural Networks and Multiple Linear Regression models (ANN-MLR) [46] were performed to measure the difference in performance levels Tab. 1 shows the comparison of the current technique with different approaches with the same dataset. From the experimental results, the proposed HCPRs significantly perform better as compared to the other algorithms on most evaluation measures in predicting telecom churners when evaluated on an Orange Telecom Dataset (OTD). Although it performed not very well at Precision, it outperformed the other algorithms on Accuracy, Recall, F1-score, and ROC/AUC. In addition, the predictive performance of our proposed model of the ROC curve is most excellent. A hybrid PSO based churn prediction model is presented in this paper. We tested our model on an orange telecom dataset. For preprocessing, SMOTE technique is used for data cleaning, and removal of imbalanced data features. After that important features are extracted from data with PSO. Furthermore, Logistic Regression (LR), Naive Bayes (NB) and Random Forest (RF) are used for categorizing customers into two categories i.e., churn, and non-churn customers. It is shown through results that using a stratified 5-fold cross validation procedure improves the performance of our prediction model. Naive Bayes is given 0:75 least accurate result on AUC in comparison with Logistic Regression and Random Forest giving 0:934 and 0:962 respectively. For the future work, we plan to automate the retention mechanism based on these prediction methods, which is now a days a necessary requirement of a telecom company. Furthermore, we intend to perform experiments with increasing number of folds up to 10-folds for gaining accurate results. Funding Statement: The authors received no specific funding for this study. Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study. 1. A. Saran Kumar and D. Chandrakala, “A survey on customer churn prediction using machine learning techniques,” International Journal of Computer Applications, vol. 154, no. 10, pp. 13–16, 2016. [ Google Scholar] 2. L. Katelaris and M. Themistocleous, “Predicting customer churn: Customer behavior forecasting for subscription-based organizations,” in European, Mediterranean, and Middle Eastern Conf. on Information Systems, EMCIS 2017: Coimbra, Portugal, pp. 128–135, 2017. [Google Scholar] 3. N. Gordini and V. Veglio, “Customers churn prediction and marketing retention strategies. An application of support vector machines based on the AUC parameter-selection technique in B2B e-commerce industry,” Industrial Marketing Management, vol. 62, no. 3, pp. 100–107, 2017. [Google Scholar] 4. A. Bansal, “Churn prediction techniques in telecom industry for customer retention: A survey,” Journal of Engineering Science, vol. 11, no. 4, pp. 871–881, 2020. [Online]. Available: https:// www.jespublication.com. [Google Scholar] 5. M. Karanovic, M. Popovac, S. Sladojevic, M. Arsenovic and D. Stefanovic, “Telecommunication services churn prediction-deep learning approach,” in 2018 26th Telecommunications Forum (TELFOR), Belgrade, Serbia, pp. 420–425, 2018. [Google Scholar] 6. P. Lalwani, M. K. Mishra, J. S. Chadha and P. Sethi, “Customer churn prediction system: A machine learning approach,” Computing, vol. 104, no. 8, pp. 1–24, 2021. [Google Scholar] 7. C. F. Tsai and Y. H. Lu, “Customer churn prediction by hybrid neural networks,” Expert Systems with Application, vol. 36, no. 10, pp. 12547–12553, 2009. [Google Scholar] 8. S. R. Labhsetwar, “Predictive analysis of customer churn in telecom industry using supervised learning,” ICTACT Journal of Soft Computing, vol. 10, no. 2, pp. 2054–2060, 2020. [Google Scholar] 9. A. Ghorbani, F. Taghiyareh and C. Lucas, “The application of the locally linear model tree on customer churn prediction,” in 2009 Int. Conf. of Soft Computing and Pattern Recognition, Las Vegas, pp. 472–477, 2009. [Google Scholar] 10. S. Babu and N. R. Ananthanarayanan, “Enhanced prediction model for customer churn in telecommunication using EMOTE,” in Int. Conf. on Intelligent Computing and Applications, Sydney, Australia, pp. 465–475, 2018. [Google Scholar] 11. R. Dong, F. Su, S. Yang, X. Cheng and W. Chen, “Customer churn analysis for telecom operators based on SVM,” in Int. Conf. on Signal and Information Processing, Networking and Computers, Chongqing, China, pp. 327–333, 2017. [Google Scholar] 12. S. Maldonado and C. Montecinos, “Robust classification of imbalanced data using one-class and two-class SVM-based multiclassifiers,” Intelligent Data Analysis, vol. 18, no. 1, pp. 95–112, 2014. [ Google Scholar] 13. S. Maldonado, Á. Flores, T. Verbraken, B. Baesens and R. Weber, “Profit-based feature selection using support vector machines--General framework and an application for customer retention,” Applied Soft Computing, vol. 35, no. 3–4, pp. 740–748, 2015. [Google Scholar] 14. M. Óskarsdóttir, C. Bravo, W. Verbeke, C. Sarraute, B. Baesens et al., “Social network analytics for churn prediction in telco: Model building, evaluation and network architecture,” Expert Systems with Appications, vol. 85, no. 10, pp. 204–220, 2017. [Google Scholar] 15. S. D’Alessandro, L. Johnson, D. Gray and L. Carter, “Consumer satisfaction versus churn in the case of upgrades of 3G to 4G cell networks,” Marketing Letters, vol. 26, no. 4, pp. 489–500, 2015. [ Google Scholar] 16. M. Azeem, M. Usman and A. C. M. Fong, “A churn prediction model for prepaid customers in telecom using fuzzy classifiers,” Telecommunication Systems, vol. 66, no. 4, pp. 603–614, 2017. [Google 17. M. Azeem and M. Usman, “A fuzzy based churn prediction and retention model for prepaid customers in telecom industry,” International Journal of Computational Intelligence System, vol. 11, no. 1, pp. 66–78, 2018. [Google Scholar] 18. E. Shaaban, Y. Helmy, A. Khedr and M. Nasr, “A proposed churn prediction model,” International Journal of Engineering Research and Applications, vol. 2, no. 4, pp. 693–697, 2012. [Google Scholar] 19. Y. Huang, “Telco churn prediction with big data,” in Proc. of the 2015 ACM SIGMOD Int. Conf. on Management of Data, Australia, Melbourne, pp. 607–618, 2015. [Google Scholar] 20. A. De Caigny, K. Coussement and K. W. De Bock, “A new hybrid classification algorithm for customer churn prediction based on logistic regression and decision trees,” European Journal Operational Research, vol. 269, no. 2, pp. 760–772, 2018. [Google Scholar] 21. Y. Huang and T. Kechadi, “An effective hybrid learning system for telecommunication churn prediction,” Expert Systems with Applications, vol. 40, no. 14, pp. 5635–5647, 2013. [Google Scholar] 22. J. Pamina, J. B. Raja, S. S. Peter, S. Soundarya, S. S. Bama et al., “Inferring machine learning based parameter estimation for telecom churn prediction,” in Int. Conf. on Computational Vision and Bio Inspired Computing, Coimbatore, India, pp. 257–267, 2019. [Google Scholar] 23. A. Idris, M. Rizwan and A. Khan, “Churn prediction in telecom using random forest and PSO based data balancing in combination with various feature selection strategies,” Computers & Electrical Engineering, vol. 38, no. 6, pp. 1808–1819, 2012. [Google Scholar] 24. B. Xue, M. Zhang and W. N. Browne, “Particle swarm optimization for feature selection in classification: A multi-objective approach,” IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 1656–1671, 2012. [Google Scholar] 25. S. M. Sladojevic, D. R. Culibrk and V. S. Crnojevic, “Predicting the churn of telecommunication service users using open source data mining tools,” in 2011 10th Int. Conf. on Telecommunication in Modern Satellite Cable and Broadcasting Services (TELSIKS), Nis, Serbia, vol. 2, pp. 749–752, 2011. [Google Scholar] 26. A. A. Q. Ahmed and D. Maheswari, “Churn prediction on huge telecom data using hybrid firefly based classification,” Egyption Informatics Journal, vol. 18, no. 3, pp. 215–220, 2017. [Google 27. J. Burez and D. Van den Poel, “Handling class imbalance in customer churn prediction,” Expert System with Applications, vol. 36, no. 3, pp. 4626–4636, 2009. [Google Scholar] 28. B. Huang, M. T. Kechadi and B. Buckley, “Customer churn prediction in telecommunications,” Expert System with Applications, vol. 39, no. 1, pp. 1414–1425, 2012. [Google Scholar] 29. D. Ruta, D. Nauck and B. Azvine, “K nearest sequence method and its application to churn prediction,” in Int. Conf. on Intelligent Data Engineering and Automated Learning, Burgos, Spain, pp. 207–215, 2006. [Google Scholar] 30. M. K. Awang, M. N. A. Rahman and M. R. Ismail, “Data mining for churn prediction: Multiple regressions approach,” in Computer Applications for Database, Education, and Ubiquitous Computing, Gangneug, Korea, Springer, pp. 318–324, 2012. [Google Scholar] 31. J. Vijaya and E. Sivasankar, “An efficient system for customer churn prediction through particle swarm optimization based feature selection model with simulated annealing,” Cluster Computing, vol. 22, no. S5, pp. 10757–10768, 2019. [Google Scholar] 32. K. W. De Bock and D. den Poel, “Reconciling performance and interpretability in customer churn prediction using ensemble learning based on generalized additive models,” Expert System with Applications, vol. 39, no. 8, pp. 6816–6826, 2012. [Google Scholar] 33. S. Maldonado, Á. Flores, T. Verbraken, B. Baesens and R. Weber, “Profit-based feature selection using support vector machines-General framework and an application for customer retention,” Applied Soft Computing Journal, vol. 35, no. 3–4, pp. 240–248, 2015. [Google Scholar] 34. I. Guyon, V. Lemaire, M. Boullé, G. Dror, D. Vogel et al., “Analysis of the KDD Cup 2009: Fast Scoring on a Large Orange Customer Database,” in KDD-Cup 2009 Competation, pp. 1–22, 2009. [Online]. Available:http://www.kddcup-orange.com/. [Google Scholar] 35. Orange Telecom Datase, “link https//www.kdd.org/kdd-cup/view/kdd-cup-2009. [Google Scholar] 36. N. V. Chawla, K. W. Bowyer, L. O. Hall and W. P. Kegelmeyer, “SMOTE: Synthetic minority over-sampling technique,” Journal of Artifical Intelligence Research, vol. 16, pp. 321–357, 2002. [Google 37. https://www.analyticsvidhya.com/. [Google Scholar] 38. E. M. M. van der Heide, R. F. Veerkamp, M. L. van Pelt, C. Kamphuis, I. Athanasiadis et al., “Comparing regression, naive Bayes, and random forest methods in the prediction of individual survival to second lactation in Holstein cattle,” Journal of Dairy Science, vol. 102, no. 10, pp. 9409–9421, 2019. [Google Scholar] 39. P. Asthana, “A comparison of machine learning techniques for customer churn prediction,” Interational Journal of Pure and Applied Mathematics, vol. 119, no. 10, pp. 1149–1169, 2018. [Google 40. L. Breima, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001. [Google Scholar] 41. Andy Liaw and Matthew Wiener, “Classification and regression by randomForest,” R News, vol. 2, no. 3, pp. 18–22, 2002. [Google Scholar] 42. Q. Tang, G. Xia, X. Zhang and F. Long, “A customer churn prediction model based on xgboost and mlp,” in 2020 Int. Conf. on Computer Engineering and Application (ICCEA), Guangzhou, China, pp. 608–612, 2020. [Google Scholar] 43. H. Jain, A. Khunteta and S. Srivastava, “Churn prediction in telecommunication using logistic regression and logit boost,” Procedia Computer Science, vol. 167, no. 1, pp. 101–112, 2020. [Google 44. T. Xu, Y. Ma and K. Kim, “Telecom churn prediction system based on ensemble learning using feature grouping,” Applied Science, vol. 11, no. 11, pp. 4742, 2021. [Google Scholar] 45. S. Y. Hung, D. C. Yen and H. Y. Wang, “Applying data mining to telecom churn management,” Expert Systems with Applications, vol. 31, no. 3, pp. 515–524, 2006. [Google Scholar] 46. M. Khashei, A. Zeinal Hamadani and M. Bijari, “A novel hybrid classification model of artificial neural networks and multiple linear regression models,” Expert Systems with Application, vol. 39, no. 3, pp. 2606–2620, 2012. [Google Scholar] This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.techscience.com/cmc/v72n3/47468/html","timestamp":"2024-11-04T10:20:30Z","content_type":"application/xhtml+xml","content_length":"141519","record_id":"<urn:uuid:10b60532-038a-4138-bd8a-cfa9a6563343>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00167.warc.gz"}
In this paper, we introduce an operation that creates families of facet-defining inequalities for high-dimensional infinite group problems using facet-defining inequalities of lower-dimensional group problems. We call this family sequential-merge inequalities because they are produced by applying two group cuts one after the other and because the resultant inequality depends on the order of the … Read more
{"url":"https://optimization-online.org/2007/05/","timestamp":"2024-11-13T21:50:52Z","content_type":"text/html","content_length":"105877","record_id":"<urn:uuid:195b34fe-846d-4be7-aeb4-d197b9975222>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00034.warc.gz"}
pluso n m kprocedure +o n m kprocedure A goal that unifies two logic variables n and m such that the bit-lists they represent sum to k when added. Think of this as if you were doing (define k (+ n m)), except that it is relational. (run 1 (q) (let ((a (build-num 4)) (b (build-num 3))) (fresh (n k) (== k a) (== n b) (pluso n q k)))) ; => (1)
{"url":"https://api.call-cc.org/5/doc/mini-kanren/pluso","timestamp":"2024-11-05T17:00:44Z","content_type":"text/html","content_length":"2949","record_id":"<urn:uuid:3c8dbb1c-67c1-4467-97d1-bd6745427970>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00840.warc.gz"}
[Solved] One way to determine the trend of a data | SolutionInn Answered step by step Verified Expert Solution One way to determine the trend of a data set is to find ______________________. Question 36 options: a) percentage of non-overlapping data b) the split One way to determine the trend of a data set is to find ______________________. Question 36 options: a) percentage of non-overlapping data b) the split middle line of progress c) percentage of overlapping data d) the mean, median and range of the data There are 3 Steps involved in it Step: 1 To determine the trend of a data set one effective approach is to find b the split middle line of pr... Get Instant Access to Expert-Tailored Solutions See step-by-step solutions with expert insights and AI powered tools for academic success Ace Your Homework with AI Get the answers you need in no time with our AI-driven, step-by-step assistance Get Started Recommended Textbook for Authors: Elayn Martin Gay 7th edition 321955048, 978-0321955043 More Books Students also viewed these Mathematics questions View Answer in SolutionInn App
{"url":"https://www.solutioninn.com/study-help/questions/one-way-to-determine-the-trend-of-a-data-set-1288301","timestamp":"2024-11-09T00:29:38Z","content_type":"text/html","content_length":"101471","record_id":"<urn:uuid:1175969a-c579-4ee5-9f1f-cc6d3c349469>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00109.warc.gz"}
One of Paul Erdős' favorite problems was the sunflower conjecture, due to him and Rado. Erdős offered $1000 for its proof or disproof. The sunflower problem asks how many sets of some size (n) are necessary before there are some (3) whose pairwise intersections are all the same. The best known bound was improved in 2019 to something the form ( \log(n)^{n(1+o(1))} ); see here for the original paper and here for a slightly better bound. The sunflower conjecture asks whether there is a bound (c^n) for some constant (c).
{"url":"https://metaforecast.org/questions/metaculus-7550","timestamp":"2024-11-13T22:58:46Z","content_type":"text/html","content_length":"224883","record_id":"<urn:uuid:701e5491-8e3f-40a3-b2c0-a8dd1c3169a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00142.warc.gz"}
CS计算机代考程序代写 algorithm 3. Introducing finite-state automata 3. Introducing finite-state automata First some standard stage-setting definitions: (1) For any set Σ, we define Σ∗ as the smallest set such that: • ε∈Σ∗,and • ifx∈Σandu∈Σ∗ then(x:u)∈Σ∗. We often call Σ an alphabet, call the members of Σ symbols, and call the members of Σ∗ strings. (2) Foranytwostringsu∈Σ∗ andv∈Σ∗,wedefineu+vasfollows: • ε+v=v • (x:w)+v=x:(w+v) Although these definitions provide the “official” notation, I’ll sometimes be slightly lazy and abbreviate ‘x:ε’ as ‘x’, and abbreviate both ‘s:t’ and ‘s + t’ as just ‘st’ in cases where it should be clear what’s intended. I’ll generally use x, y and z for individual symbols of an alphabet Σ, and use u, v and w for strings in Σ∗. This should help to clarify whether a ‘:’ or a ‘+’ has been left out. 1 Finite-state automata, informally Below, in (3) and (4), are graphical representations of two finite-state automata (FSAs). The circles represent states. The initial states are indicated by an “arrow from nowhere”; the final or accepting states are indicated by an “arrow to nowhere”. The FSA in (3) generates the subset of {C, V}∗ consisting of all and only strings that have at least one occurrence of ‘V’. The FSA in (4) generates the subset of {C, V}∗ consisting of all and only strings that contain either two adjacent ‘C’s or two adjacent ‘V’s (or both). Ling185A, Winter 2021 — Tim Hunter, UCLA VV 42 The FSA in (5) generates the subset of {C, V}∗ consisting of all and only strings which end in ‘VC’. C VC 123 If we think of state 1 as indicating syllable boundaries, then FSA in (6) generates sequences of syllables of the form ‘(C)V(C)’. The string ‘VCV’, for example, can be generated via two different paths, 1-1-2-1 and 1-3-1-1, corresponding to different syllabifications. C 12 V VCVV 3 Formal definition of an FSA A finite-state automaton (FSA) is a five-tuple (Q, Σ, I, F, ∆) where: • Q is a finite set of states; • Σ, the alphabet, is a finite set of symbols; • I ⊆ Q is the set of initial states; • F ⊆ Q is the set of ending states; and • ∆⊆Q×Σ×Qisthesetoftransitions. So strictly speaking, (4) is a picture of the following mathematical object: (8) 􏰀 {40, 41, 42, 43}, {C, V}, {40}, {43}, {(40, C, 40), (40, C, 41), (40, V, 40), (40, V, 42), (41, C, 43), (42, V, 43), (43, C, 43), (43, V, 43)} 􏰁 You should convince yourself that (4) and (8) really do contain the same information. Now let’s try to say more precisely what it means for an automaton M = (Q, Σ, I, F, ∆) to generate/accept a string. (9) For M to generate a string of three symbols, say x1x2x3, there must be four states q0, q1, q2, and q3 such that Ling185A, Winter 2021 — Tim Hunter, UCLA • q0∈I,and • (q0, x1, q1) ∈ ∆, and • (q1, x2, q2) ∈ ∆, and • (q2, x3, q3) ∈ ∆, and • q3∈F. (10) More generally, M generates a string of n symbols, say x1x2 . . . xn, iff: there are n + 1 states q0, q1, q2, …qn such that • q0∈I,and • for every i ∈ {1,2,…,n}, (qi−1,xi,qi) ∈ ∆, and • qn∈F. To take a concrete example: (11) The automaton in (4)/(8) generates the string ‘VCCVC’ because we can choose q0, q1, q2, q3, q4 and q5 to be the states 40, 40, 41, 43, 43 and 43 (respectively), and then it’s true that: • 40∈I,and • (40,V,40)∈∆,and • (40,C,41)∈∆,and • (41,C,43)∈∆,and • (43,V,43)∈∆,and • (43,C,43)∈∆,and • 43∈F. Side remark: Note that abstractly, (10) is not all that different from: (12) A tree-based grammar will generate a string x1x2 . . . xn iff: there is some collection of nonter- minal symbols that we can choose such that • those nonterminal symbols and the symbols x1, x2, etc. can all be clicked together into a tree structure in ways that the grammar allows, and • the nonterminal “at the top” is the start symbol. (Much more on this in a few weeks!) We’ll write L(M) for the set of strings generated by an FSA M. So stated roughly, the important idea is: (13) w∈L(M) ⇐⇒ 􏰌 all possible paths p ⇐⇒ 􏰌 all possible paths p 􏰆string w can be generated by path p􏰇 􏰆 􏰘 􏰂step s is allowed and generates the appropriate part of w􏰃􏰇 all steps s in p It’s handy to write I(q0) in place of q0 ∈ I, and likewise for F and ∆. Then one way to make (13) precise is: (14) x1x2…xn∈L(M) ⇐⇒ 􏰌 􏰌 ··· 􏰌 􏰌 􏰆I(q0)∧∆(q0,x1,q1)∧···∧∆(qn−1,xn,qn)∧F(qn)􏰇 q0∈Q q1∈Q qn−1∈Q qn∈Q But it’s practical and enlightening to break this down in a couple of different ways. Ling185A, Winter 2021 — Tim Hunter, UCLA 2.1 Forward values For any FSA M there’s a two-place predicate fwdM , relating states to strings in an important way: (15) fwdM (w)(q) is true iff there’s a path through M from some initial state to the state q, emitting the string w Given a way to work out fwdM(w)(q) for any string and any state, we can easily use this to check for membership in L(M): (16) w∈L(M)⇐⇒ 􏰌􏰆fwdM(w)(qn)∧F(qn)􏰇 qn ∈Q We can represent the predicate fwdM in a table. Each column shows fwdM values for the entire prefix consisting of the header symbols to its left. The first column shows values for the empty string. (17) Here’s the table of fwdM values for prefixes of the string ‘CVCCVVC’ for the FSA in (5). State CVCCVVC Notice that filling in the values in the leftmost column is easy: this column just says which states are initial states. And with a little bit of thought you should be able to convince yourself that, in order to fill in a column of this table, you only need to know: • the values in the column immediately to its left, and • the symbol immediately to its left. More generally, this means that: (18) The fwdM values for a non-empty string x1 . . . xn depend only on • the fwdM values for the string x1 . . . xn−1, and • the symbol xn. This means that we can give a recursive definition of fwdM : (19) fwdM (ε)(q) = I(q) fwdM(x1…xn)(q)= 􏰌 􏰆fwdM(x1…xn−1)(qn−1)∧∆(qn−1,xn,q)􏰇 qn−1 ∈Q This suggests a natural and efficient algorithm for calculating these values: write out the table, start by filling in the leftmost column, and then fill in other columns from left to right. This is where the name “forward” comes from. 2.2 Backward values We can do all the same things, flipped around in the other direction. For any FSA M there’s a two-place predicate bwdM , relating states to strings in an important way: (20) bwdM (w)(q) is true iff there’s a path through M from the state q to some ending state, emitting the string w Ling185A, Winter 2021 — Tim Hunter, UCLA Given a way to work out bwdM(w)(q) for any string and any state, we can easily use this to check for membership in L(M): (21) w∈L(M)⇐⇒ 􏰌􏰆I(q0)∧bwdM(w)(q0)􏰇 q0 ∈Q We can represent the predicate bwdM in a table. Each column shows bwdM values for the entire suffix consisting of the header symbols to its right. The last column shows values for the empty string. (22) Here’s the table of bwdM values for suffixes of the string ‘CVCCVVC’ for the FSA in (5). State CVCCVVC In this case, filling in the last column is easy, and each other column can be filled in simply by looking at the values immediately to its right. (23) The bwdM values for a non-empty string x1 . . . xn depend only on • the bwdM values for the string x2 . . . xn, and • the symbol x1. So bwdM can also be defined recursively. Now (25) And (26) bwdM (ε)(q) = F (q) bwdM(x1…xn)(q)= 􏰌 􏰆∆(q,x1,q1)∧bwdM(x2…xn)(q1)􏰇 q1 ∈Q Forward values and backward values together we can say something beautiful: uv∈L(M) ⇐⇒ 􏰌􏰆fwdM(u)(q)∧bwdM(v)(q)􏰇 in fact (16) and (21) are just special cases of (25), with u or v chosen to be the empty string: w∈L(M) ⇐⇒ 􏰌􏰆fwdM(w)(q)∧bwdM(ε)(q)􏰇 ⇐⇒ 􏰌􏰆fwdM(w)(q)∧F(q)􏰇 q∈Q q∈Q w∈L(M) ⇐⇒ 􏰌􏰆fwdM(ε)(q)∧bwdM(w)(q)􏰇 ⇐⇒ 􏰌􏰆I(q)∧bwdM(w)(q)􏰇 q∈Q q∈Q Ling185A, Winter 2021 — Tim Hunter, UCLA
{"url":"https://www.cscodehelp.com/%E5%B9%B3%E6%97%B6%E4%BD%9C%E4%B8%9A%E4%BB%A3%E5%86%99/cs%E8%AE%A1%E7%AE%97%E6%9C%BA%E4%BB%A3%E8%80%83%E7%A8%8B%E5%BA%8F%E4%BB%A3%E5%86%99-algorithm-3-introducing-finite-state-automata/","timestamp":"2024-11-05T20:25:18Z","content_type":"text/html","content_length":"58631","record_id":"<urn:uuid:15b0908a-fc60-4e1e-a6d6-627014438c86>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00089.warc.gz"}
MathAction SandBox Aldor Category Theory Basics Miscellaneous Logical helper functions (1) -> <aldor> #include "axiom" define Domain:Category == with; +++ A set is often considered to be a collection with "no duplicate elements." +++ Here we have a slightly different definition which is important to +++ understand. We define a Set to be an arbitrary collection together with +++ an equivalence relation "=". Soon this will be made into a mathematical +++ category where the morphisms are "functions", by which we mean maps +++ having the special property that a=a' implies f a = f a'. This definition +++ is more convenient both mathematically and computationally, but you need +++ to keep in mind that a set may have duplicate elements. define Set:Category == Join(Domain, Printable) with { =:(%,%) -> Boolean; +++ A Preorder is a collection with reflexive and transitive <=, but without +++ necessarily being symmetric (x<=y and y<=x) implying x=y. Since +++ (x<=y and y<=x) is always an equivalence relation, our definition of +++ "Set" is always satisfied in any case. define Preorder:Category == Set with { <=: (%,%) -> Boolean; >=: (%,%) -> Boolean; < : (%,%) -> Boolean; > : (%,%) -> Boolean; default { (x:%) =(y:%):Boolean == (x<=y) and (y<=x); (x:%)>=(y:%):Boolean == y<=x; (x:%)< (y:%):Boolean == (x<=y) and ~(x=y); (x:%)> (y:%):Boolean == (x>=y) and ~(x=y) define TotalOrder:Category == Preorder with { min: (%,%) -> %; max: (%,%) -> %; min: Tuple % -> %; max: Tuple % -> %; default { min(x:%,y:%):% == { x<=y => x; y }; max(x:%,y:%):% == { x<=y => y; x }; import from List %; min(t:Tuple %):% == associativeProduct(%,min,[t]); max(t:Tuple %):% == associativeProduct(%,max,[t]); +++ Countable is the category of collections for which every element in the +++ collection can be produced. This is done by the generator "elements" +++ below. Note that there is no guarantee that elements will not produce +++ "duplicates." In fact, a Countable may not be a Set, so duplicates may +++ have no meaning. Also, Countable is not guaranteed to terminate. define Countable:Category == with { elements: () -> Generator % -- I'm using an empty function elements() rather than a constant elements -- to avoid some compiler problems. +++ CountablyFinite is the same as Countable except that termination is +++ guaranteed. define CountablyFinite:Category == Countable with +++ A "Monoids" is the usual Monoid (we don't use Monoid to avoid clashing +++ with axllib): a Set with an associative product (associative relative to +++ the equivalence relation of the Set, of course) and a unit. define Monoids:Category == Set with { *: (%,%) -> %; 1: %; ^:(%,Integer) -> %; monoidProduct: Tuple % -> %; -- associative product monoidProduct: List % -> %; default { (x:%)^(i:Integer):% == { i=0 => 1; i<0 => error "Monoid negative powers are not defined."; associativeProduct(%,*,x for j:Integer in 1..i) monoidProduct(t:Tuple %):% == { import from List %; monoidProduct(t) } monoidProduct(l:List %):% == { import from NonNegativeInteger; #l = 0 => 1; +++ Groups are Groups in the usual mathematical sense. We use "Groups" +++ rather than "Group" to avoid clashing with axllib. define Groups:Category == Monoids with { inv: % -> % +++ Printing is a whole area that I'm going to have a nice categorical +++ solution for, but still it is convenient to have a low level Printable +++ signature for debugging purposes. define Printable:Category == with { coerce: % -> OutputForm; coerce: List % -> OutputForm; coerce: Generator % -> OutputForm; default { (t:OutputForm)**(l:List %):OutputForm == { import from Integer; empty? l => t; hconcat(coerce first l, hspace(1)$OutputForm) ** rest l; coerce(l:List %):OutputForm == empty() ** l; coerce(g:Generator %):OutputForm == { import from List %; empty() ** [x for x in g]; +++ This evaluates associative products. associativeProduct(T:Type,p:(T,T)->T,g:Generator T):T == { l:List T == [t for t in g]; associativeProduct(T:Type,p:(T,T)->T,l:List T):T == { if empty? l then error "Empty product."; mb(t:T,l:List T):T == { empty? l => t; mb( p(t,first l), rest l) }; mb(first l,rest l) +++ Evaluates the logical "For all ..." construction forall?(g:Generator Boolean):Boolean == { q:Boolean := true; for x:Boolean in g repeat { if ~x then { q := false; break } } +++ Evaluates the logical "There exists ..." construction exists?(g:Generator Boolean):Boolean == { q:Boolean := false; for x:Boolean in g repeat { if x then { q := true; break } }; +++ The category of "Maps". There is no implication that a map is a +++ function in the sense of x=x' => f x = f x' define MapCategory(Obj:Category,A:Obj,B:Obj):Category == with { apply: (%,A) -> B; hom: (A->B) -> %; +++ One convenient implementation of MapCategory Map(Obj:Category,A:Obj,B:Obj):MapCategory(Obj,A,B) == add { Rep ==> A->B; apply(f:%,a:A):B == (rep f) a; hom (f:A->B):% == per f +++ This strange function turns any Type into an Aldor Category define Categorify(T:Type):Category == with { value: T +++ The null function null(A:Type,B:Type):(A->B) == (a:A):B +-> error "Attempt to evaluate the null function." +++ A handy package for composition of morphisms. "o" is meant to suggest morphism composition g "o" f, to be coded "g ** f". o(Obj:Category,A:Obj,B:Obj,C:Obj): with **: (B->C,A->B) -> (A->C) == add (g:B->C)**(f:A->B):(A->C) == (a:A):C +-> g f a</aldor> Compiling FriCAS source code from file using Aldor compiler and options -O -Fasy -Fao -Flsp -lfricas -Mno-ALDOR_W_WillObsolete -DFriCAS -Y $FRICAS/algebra -I $FRICAS/algebra Use the system command )set compiler args to change these "/var/lib/zope2.10/instance/axiom-wiki/var/LatexWiki/basics.as", line 1: #include "axiom" [L1 C1] #1 (Error) Could not open file `axiom'. The )library system command was not called after compilation. Test Funtions #include "axiom" #library lbasics "basics.ao" import from lbasics T:Domain == add S:Set == Integer add P:Preorder == Integer add TO:TotalOrder == Integer add --CC:Countable == Integer add -- import from Integer, Generator(Integer) -- elements():Generator(Integer) == generator (1..10) M:Monoids == Integer add G:Groups == Fraction Integer add MAPS:MapCategory(SetCategory,Integer,Float) == Map(SetCategory,Integer,Float) add INTS:Categorify(Integer) == add value:Integer == 1 sincos(x:Expression Integer):Expression Integer == import from o(SetCategory,Expression Integer,Expression Integer,Expression Integer) ( ((a:Expression Integer):Expression Integer +-> sin(a)) ** ((b:Expression Integer):Expression Integer +-> cos(b)) ) (x) Compiling FriCAS source code from file using Aldor compiler and options -O -Fasy -Fao -Flsp -lfricas -Mno-ALDOR_W_WillObsolete -DFriCAS -Y $FRICAS/algebra -I $FRICAS/algebra Use the system command )set compiler args to change these "/var/lib/zope2.10/instance/axiom-wiki/var/LatexWiki/7819217643380372071-25px002.as", line 1: #include "axiom" [L1 C1] #1 (Error) Could not open file `axiom'. The )library system command was not called after compilation. )show T The )show system command is used to display information about types or partial types. For example, )show Integer will show information about Integer . T is not the name of a known type constructor. If you want to see information about any operations named T , issue )display operations T )show S The )show system command is used to display information about types or partial types. For example, )show Integer will show information about Integer . S is not the name of a known type constructor. If you want to see information about any operations named S , issue )display operations S )show P The )show system command is used to display information about types or partial types. For example, )show Integer will show information about Integer . P is not the name of a known type constructor. If you want to see information about any operations named P , issue )display operations P )show TO The )show system command is used to display information about types or partial types. For example, )show Integer will show information about Integer . TO is not the name of a known type constructor. If you want to see information about any operations named TO , issue )display operations TO --)show CC )show MAPS The )show system command is used to display information about types or partial types. For example, )show Integer will show information about Integer . MAPS is not the name of a known type constructor. If you want to see information about any operations named MAPS , issue )display operations MAPS )show INTS The )show system command is used to display information about types or partial types. For example, )show Integer will show information about Integer . INTS is not the name of a known type constructor. If you want to see information about any operations named INTS , issue )display operations INTS sincos(x::Expression Integer) There are no exposed library operations named sincos but there is one unexposed operation with that name. Use HyperDoc Browse or )display op sincos to learn more about the available operation. Cannot find a definition or applicable library operation named sincos with argument type(s) Perhaps you should use "@" to indicate the required return type, or "$" to specify which version of the function you need. There are no library operations named Null Use HyperDoc Browse or issue )what op Null to learn if there is any operation containing " Null " in its Cannot find a definition or applicable library operation named Null with argument type(s) Perhaps you should use "@" to indicate the required return type, or "$" to specify which version of the function you need. SandBox Aldor Category Theory Categories
{"url":"https://wiki.fricas.org/SandBoxAldorCategoryTheoryBasics?root=SandBox%20Aldor%20Category%20Theory","timestamp":"2024-11-13T19:40:12Z","content_type":"application/xhtml+xml","content_length":"35208","record_id":"<urn:uuid:29ac7870-a16e-412a-b801-8219ac51e9dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00783.warc.gz"}
MRI White Matter Reconstruction Sub-Challenge #2 Challenge name All models are wrong: are yours useful? Purpose and relevance of the challenge With this challenge, we aim to understand the current ability of the MRI field in modelling white matter (WM) tissue microstructure. The challenge will consist of a number of simulated WM digital environments (or “substrates”), on the scale of individual voxels, generated by changing in a controlled fashion a range of microstructural features, including: • self-diffusivity (D, um^2/ms) • intra-fibre volume fraction (f, unitless) • fibre radius index [1] (a, um) • permeability (k, um/ms). Participants will be given the simulated signal acquired from the acquisition protocols PGSE, DDE and DODE of sub-challenge #1 and #3. Participants are then asked to estimate any (or all) of the above microstructural features, according to their proposed model of WM microstructure. Participants are also encouraged to submit potential biomarkers that may be indicative of these features even if they do not directly estimate them. The outcome of this challenge will be an evaluation of the accuracy and sensitivity of microstructural parameter estimation. 4 winners will be determined by the most accurate estimates of each of the four indices. The dataset for this challenge is composed of 3D substrates of WM tissues designed following the methods in [2], which provide flexibility and control on microstructural features. The substrates are combined with the Monte Carlo based simulator in Camino [3] to provide synthetic dMRI signal. The challenge data will be based upon 256 substrates representative of an array of white matter voxels. For each substrate, each of the 4 tissue parameters is swept through a range of physiologically realistic values (for example, D from 0.7-3 um2/ms, f from 0.4-0.7, etc.), and six different levels of signal-to-noise-ratio(SNR is hidden from participants). The signal is simulated using the three acquisition protocols from sub-challenges #1 and #3: a PGSE acquisition with 1300 unique datapoints using a multi-shell strategy, a DDE with 2 different diffusion times, and a DODE with 5 different frequencies, 5 b-values, and 72 directions each. [In an extended version of this sub-challenge, planned for next year, we will work to make available any user-defined acquisition, in which each participant will be able to create and submit a “scheme” file describing the diffusion sensitizing gradients of any experimental protocol they would like to Link to Data: See Registration and Data Access tab Participation (Data given to the participants) The task will be to estimate any (or all) of the microstructural parameters of interest. Participants are free to use all or any subset of the generated signals from any set of acquisitions. Participants will be provided with the following files. Sample scripts to load the data will be provided for popular working environments (MATLAB, Python, C/C++). Note that for each acquisition type, we include files for (1) a protocol description, (2) the acquisition parameters, and (3) the MR signal. For example, the dataset for each sequence is provided in a text file and consists of M columns, where M = 1536 is the total number of simulated voxels to be analysed (256 voxels * 6 noise levels) and N rows, where N is the number of measurements. [N=3011 measurements for PGSE, N=2000 measurements for DODE, N=800 for DDE). 1. PGSE_ProtocolDescription.txt: Description of the acquisition parameters for the PGSE sequences 2. PGSE_AcqParams.txt: Acquisition parameters for all measurements of the PGSE shell dataset. It is a NxA matrix with N rows and A columns, where N is the number of measurements and A is the number of sequence parameters. 3. PGSE_Simulations.txt: Dataset for PGSE shell sequence 4. DDE_ProtocolDescription.txt: Description of the acquisition parameters for the DDE sequences 5. DDE_AcqParams.txt: Acquisition parameters for all measurements of the DDE dataset. 6. DDE_Simulations.txt: Dataset for DDE sequences 7. DODE_ProtocolDescription.txt: Description of the acquisition parameters for the DODE sequences 8. DODE_AcqParams.txt: Acquisition parameters for all measurements of the DODE dataset. 9. DODE_Simulations.txt: Dataset for DODE sequences Participants are asked to submit any or all of the following for each of the 256 environments: • self-diffusivity (D, um^2/ms); • intra-fibre volume fraction (f, unitless); • fibre radius index [1] (a, um) computed as: assuming a Gamma distribution P(a) of axonal radii a. Submission for each metric estimated must be submitted as a single compressed folder. All metrics will be submitted separately (for example separate submission for volume fraction and for Within this folder will be two files. 1. {SEQ}.txt, where SEQ = {’PGSE’, ’DDE’, ’DODE’} describes the acquisition used to estimate the index of choice. This file should contain a 1×1536 matrix. 1536 is the number of simulated 2. Info.txt: this file contains relevant information for the challenge organizers. For all sub-challenges, this file must contain: the (1) submission name, (2), submission abbreviation, (3), team name, (4) team members who made meaningful contributions, (5) member affiliation, (6) brief one sentence submission description, (7) extended submission description (to enable reproducibility of methods) (8) all relevant citations, (9) observations (optional), and (9) relevant discussion points (optional). For sub-challenge #2 specifically, we also ask for: (10) number of free model parameters, (11) the type of model (signal/tissue), (12) noise assumptions, (13) model parameters estimation and algorithm optimization strategies, (14) pre-processing (outlier strategies), (15) data and subsets of data used to form predictions. We want tp emphasize that f you wish to submit a potential biomarker that is not a direct estimator of the four parameters, but wish for it to be included in evaluation (we will assess correlation of markers both with each other and with ground truth data, and assess robustness to SNR) you may still submit this as a standard submission. This will simply be a 1×1536 matrix, Example submissions files will be made available with the data Reference “gold standard” for this sub-challenge will be ground truth values for the four specific microstructural metrics, known by design of the numerical simulation study. We will evaluate accuracy (or bias) and precision (or statistical dispersion) of dMRI measured metrics related to the microstructural features, compared to ground truth values known by design from numerical simulations. Specifically, accuracy for each estimated microstructural feature f[j|j=1..4]will be quantified by mean normalized error across the N[c]microstructural scenarios taken into Precision will be evaluated as interquartile range of the normalized error: Overall score of accuracy and precision will be quantified by the mean squared error: The code for the evaluation metrics will be open source. There will be a winning submission for each microstructural measure j, defined by the submission (for that feature) with the lowest score[j]. How to get the data Please see “Registration and Data Access” Page. Sub-challenge Chair Marco Palombo <University College London> Daniel Alexander <University College London> [1] Alexander, D. C., Hubbard, P. L., Hall, M. G., Moore, E. A., Ptito, M., Parker, G. J., & Dyrby, T. B. (2010). Orientationally invariant indices of axon diameter and density from diffusion MRI. Neuroimage, 52(4), 1374-1389. [2] Hall, M. G., & Alexander, D. C. (2009). Convergence and parameter choice for Monte-Carlo simulations of diffusion MRI. IEEE transactions on medical imaging, 28(9), 1354-1364. [3] Y. B. P. A. Cook, S. Nedjati-Gilani, K. K. Seunarine, M. G. Hall, G. J. Parker, D. C. Alexander,, “Camino: Open-Source Diffusion-MRI Reconstruction and Processing,” in 14th Scientific Meeting of the International Society for Magnetic Resonance in Medicine, Seattle, WA, USA, May 2006, p. 2759.
{"url":"https://my.vanderbilt.edu/memento/sub-challenge-2/","timestamp":"2024-11-15T04:23:07Z","content_type":"text/html","content_length":"20982","record_id":"<urn:uuid:12b26f3a-ded4-4ba8-9c05-09be4201fcf9>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00322.warc.gz"}
Using The Forecast Function In Excel - ExcelAdept Key Takeaway: • The FORECAST function in Excel enables users to predict or forecast future values based on existing data and trends. This is useful for financial projections, sales forecasting, and other applications that require predicting future trends. • The syntax of the FORECAST function requires two inputs – the x value (which is the point in time for which the forecast is being made) and the known_y values (which represent the historical data that is used to make the forecast). Users can also provide optional arguments for known_x values and a value for the ‘const’ parameter. • While the FORECAST function can be a powerful tool for making predictions and forecasts in Excel, it is important to consider its limitations and potential inaccuracies. Users should be cautious when using the function with incomplete or missing data and should consider alternative forecasting methods when necessary. Struggling to understand how the FORECAST function works in Excel? You’re not alone. Let’s explore how this powerful feature can help you make accurate predictions and simplify complex data. The FORECAST Function in Excel This section will guide you to comprehend the FORECAST Function in Excel. It has three sub-sections: 1. Understanding the FORECAST Function 2. Syntax of the FORECAST Function 3. Examples of Using the FORECAST Function These will help make this intricate feature simpler to comprehend and emphasize its importance. Understanding the FORECAST Function The FORECAST Function in Excel allows users to predict future values based on existing data. By analyzing trends and patterns, businesses can make informed decisions. This function uses regression analysis to forecast future data points along with statistical methods to calculate the accuracy. To use the FORECAST Function in Excel, users must have a set of data points and a target value for forecasting. The function then calculates the slope and y-intercept of the linear regression equation. These values are used to forecast future data points. It is essential to ensure that the data used in the function does not contain any outliers or errors as this can significantly affect results. Additionally, it is advisable to include more significant amounts of historical data for increased accuracy. By using this function, businesses can better understand their past performances and forecast future outcomes accurately. It is crucial to note that while it is a helpful tool, it should not be solely relied upon for decision-making purposes. Get ready to speak Excel fluently with the ‘Syntax of the FORECAST Function’. Syntax of the FORECAST Function If you are looking to forecast future values in Excel, then the FORECAST function is what you need. The syntax of this function is straightforward, and it requires an x-value and a known y-range. Here is a quick 4-step guide on how to use the FORECAST function: 1. Open an Excel worksheet and navigate to the cell where you want to display the result. 2. Type =FORECAST(x-value, known_y's, known_x's) into the cell. 3. Replace “x-value” with your desired forecast value and “known_y’s” and “known_x’s” with your respective ranges of data. 4. Hit Enter, and Excel will return your forecasted value. It is important to note that the x-value must be within the range of known x-values for accurate forecasting. Additionally, if you do not have a range of known x-values, you can use the TREND function instead. Lastly, I once used the FORECAST function in a business presentation where I accurately predicted future sales figures based on past data trends. This not only impressed my colleagues but also allowed us to make strategic decisions for our company’s growth. Predicting the future with Excel’s FORECAST function: because who needs a crystal ball anyway? Examples of Using the FORECAST Function The potential of FORECAST Function in Excel lies in its efficiency to predict future trends by evaluating past data. Here’s how you can use it. 1. Open Microsoft Excel and select ‘FORECAST’ from the list of functions. 2. Select ‘known x’ array and ‘known y’ array representing the dependent and independent variables respectively, for which prediction needs to be made. 3. Mention the value of new x for which prediction has to be done. 4. Press ‘Enter’, and you’ll get the forecasted value for the given input. By adjusting known values of x, one can evaluate multiple predicted values utilizing a single set of known y-values. This function greatly benefits businesses forecasting trends using data analysis. Emphasizing its importance in business forecasting, not using tools like this might lead to inaccurate predictions which will consequently impair decision-making processes. Adopting such technologically advanced tools will enable organizations with predictive insights that can help businesses save costs, make informed strategic decisions while also allowing companies to gain a competitive edge in their respective markets. Before relying on the FORECAST function too heavily, consider consulting a Magic 8-ball for a second opinion. Limitations and Considerations of the FORECAST Function Understand the limits & considerations of using Excel’s FORECAST function. Know its sub-sections: • Accuracy & Reliability • Handling Missing Data • Alternatives This will enhance your forecasting efficiency. Accuracy and Reliability of the FORECAST Function The effectiveness and credibility of the FORECAST function in Excel are notable. However, certain situations may lead to the limitation of its precision. It is crucial to note that the accuracy and reliability of the FORECAST function depend on various factors such as historical data, trends and patterns observations, forecast period length, etc. As FORECAST works based on past trends and patterns analysis, it could be inaccurate when sudden changes occur in the related market. Therefore, it is required to supervise and analyze historical data frequently for significant alterations in trends. This process ensures that the future assumptions are accurate and reliable. Another factor to consider is the number of periods calculated for future forecasting. Anything too high or too low can affect accuracy as underfitting or overfitting can come into play. To avoid this circumstance, always stick with valid historical data and make forecasts with reasonable number of periods only. To get more precise information while using FORECAST function in Excel, one can either increase data granularity or use advanced forecasting methods such as exponential smoothing or regression analysis depending on their needs. Advanced solutions improve clarity by evaluating multiple variables instead of relying entirely on past analyses. Using powerful forecasting models integrated with advanced techniques assures greater efficiency than simply using basic functions or single parameter based prediction methods like Linear Regression Models etc. By following good practices and appropriate forecasting method selection according to problem requirements will eventually lead you towards better results showcasing more accuracy & reliability on your predictions using FORECAST function in Excel. Why cry over missing data when Excel’s FORECAST function can just make a wild guess for you? Handling Missing Data in the FORECAST Function To utilize the FORECAST function in Excel, addressing missing or incomplete data sets is important. In such cases, you can use interpolation or extrapolation techniques to estimate the missing It is vital to keep in mind that using the FORECAST function with missing data points might lead to inaccurate results as it assumes a linear relationship between variables. Therefore, one must be cautious while interpreting such forecasts. In addition to this, an alternative approach to deal with missing values is to use other forecasting methods such as exponential smoothing or time-series models. These approaches are effective and give better accuracy than the linear regression-based FORECAST function. One suggestion would be to measure the forecast accuracy using metrics like MAPE or RMSE. It can help determine the degree of error and highlight areas for improvement in the forecasting model. Moreover, having a larger dataset will reduce errors and improve forecasts by providing more information for analysis. Excel may offer alternatives to the FORECAST function, but let’s be real, sometimes we all just need a crystal ball. Alternatives to the FORECAST Function in Excel When forecasting data in Excel, there are various options available instead of relying solely on the FORECAST function. Here are some Semantic NLP variations: • Different Methods of Estimating Data Trends in Excel • Alternative Approaches to Forecasting in Excel • Other Ways to Predict Future Outcomes in Excel • Options Outside of the FORECAST Function for Trend Analysis in Excel One alternative is the TREND function, which is similar to FORECAST but allows for predicting multiple future data points. Another option is using moving averages to calculate trends more accurately over time. Regression analysis can also be utilized when predicting outcomes based on multiple independent variables rather than just one. Lastly, consider using a statistical software like R or Python for more advanced forecasting techniques. It’s important to note that while these alternatives may provide additional insights and accuracy, they may also require a higher level of statistical knowledge and programming skills. Understanding the limitations and complexities of each method will enable you to make informed decisions based on your specific needs and level of expertise. In 1964, Holt-Winters Exponential Smoothing was developed as a popular method for time series forecasting. It involves using past observations along with exponential smoothing methods for trend and seasonality components. Since its development, it has been widely used in various industries such as energy demand forecasting and financial market predictions. Some Facts About Using the FORECAST Function in Excel: • ✅ The FORECAST function in Excel is used to predict a future value based on past data. (Source: Excel Easy) • ✅ The function requires two sets of data: the known_x’s, which are the independent variable data points, and the known_y’s, which are the dependent variable data points. (Source: Excel Jet) • ✅ The function can be used for both linear and exponential trends, depending on the type of data being analyzed. (Source: Microsoft) • ✅ The FORECAST function has some limitations, including not accounting for external factors that may affect the data and not working well with seasonal data. (Source: Wall Street Prep) • ✅ There are other forecasting functions in Excel, such as FORECAST.ETS and FORECAST.ETS.CONFINT, that allow for more advanced forecasting with additional features and options. (Source: Excel FAQs about Using The Forecast Function In Excel What is the FORECAST Function in Excel? The FORECAST function in Excel is a statistical function used to predict a future value for a set of data based on a linear trend. What are the syntax and arguments of the FORECAST Function? The syntax and arguments of the FORECAST function are as follows: FORECAST(x, y, known_y’s, known_x’s) where x is the value for which you want to predict the y-value, y is the set of x-values for which you want to predict the corresponding y-values, known_y’s are the set of y-values for the known x-values, and known_x’s are the set of known x-values. What is the difference between FORECAST and TREND functions in Excel? The main difference between the FORECAST and TREND functions in Excel is that the FORECAST function predicts a future value based on a linear trend, while the TREND function predicts a future value based on a polynomial trend. Additionally, the TREND function can predict multiple future values, while the FORECAST function can only predict a single future value. What are some tips for using the FORECAST Function in Excel effectively? Some tips for using the FORECAST function in Excel effectively include: ensuring that your data is organized properly, using the correct range of data for the function, and using the function within its limitations (i.e. for linear trends only). How accurate are the predictions made using the FORECAST Function in Excel? The accuracy of the predictions made using the FORECAST function in Excel depends on various factors, such as the quality and quantity of the data used, the reliability of the linear trend being used to make the prediction, and more. It’s important to remember that the FORECAST function is only a prediction, and should not be relied upon as an exact representation of future values. Can the FORECAST Function be used with non-linear trends? No, the FORECAST function in Excel can only be used with linear trends. For non-linear trends, a different function, such as SLOPE or GROWTH, would need to be used.
{"url":"https://exceladept.com/using-the-forecast-function-in-excel/","timestamp":"2024-11-06T13:39:33Z","content_type":"text/html","content_length":"70218","record_id":"<urn:uuid:10b3ed75-fb47-40d8-92a8-a9bef337c9eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00023.warc.gz"}
Trigonometry Tutors in Jammu | Math Tuitions - MyPrivateTutor My name is Azhar. I have been teaching Mathematics for the past 5 years, from grade 1 to 12. If you find any difficulties, please let me know. Hi, my name is Azhar. I am from Jammu and Kashmir. I teach Mathematics and have experience teaching grades 1 to 12. If you encounter any difficulty, j...
{"url":"https://www.myprivatetutor.com/mathematics/trigonometry-tutors-in-jammu","timestamp":"2024-11-10T19:12:11Z","content_type":"text/html","content_length":"815893","record_id":"<urn:uuid:e0b65cd0-adeb-4abd-86d5-52af8eeeeeaf>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00859.warc.gz"}
Split Text and Numbers - Free Excel Tutorial This post will guide you how to separate text and numbers from one cell in excel. How do I extract numbers from text string with formulas in Excel. Split Text and Numbers with Formula If you want to split text and numbers from one cell into two different cells, you can use a formula based on the FIND function, the LEFT function or the RIGHT function and the MIN function. Just do the following steps: #1 type the following formula in the formula box of cell C1 to get the Text part from text string in Cell A1. #2 type the following formula in the formula box of cell D1 to get the number part from the text string in cell A1. 3# select the cell C1 and D1, then drag the AutoFill Handle over other cell to apply those formula to split text and numbers. Split Text and Numbers with Flash Fill Feature You can also use the Flash Fill function to split the text and numbers from the Cell A1, just do the following steps: #1 type the text part of your first text string into the adjacent blank Cell B1. #2 select the range B1:B4 where you want to fill the text part, and go to DATA tab, click Flash Fill command under Data Tools group. All text part is filled into the cells B1:B4. #3 enter the number of your first text string into Cell C1. #4 select the range D11:D4 where you want to fill the number part, and go to DATA tab, click Flash Fill command. All numbers are filled into the cells D1:D4. #5 Lets see the result. Related Functions
{"url":"https://www.excelhow.net/split-text-and-numbers.html","timestamp":"2024-11-05T06:15:48Z","content_type":"text/html","content_length":"88933","record_id":"<urn:uuid:c39e0b5b-354c-45ef-9c69-8303a47a6c7d>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00698.warc.gz"}
page3 Class 9 maths 1. Number Systems - CBSE Notes - Toppers Study Study Materials: CBSE Notes Our CBSE Notes for page3 Class 9 maths 1. Number Systems - CBSE Notes - Toppers Study is the best material for English Medium students cbse board and other state boards students. Notes ⇒ Class 9th ⇒ Mathematics ⇒ 1. Number Systems page3 Class 9 maths 1. Number Systems - CBSE Notes - Toppers Study Topper Study classes prepares CBSE Notes on practical base problems and comes out with the best result that helps the students and teachers as well as tutors and so many ecademic coaching classes that they need in practical life. Our CBSE Notes for page3 Class 9 maths 1. Number Systems - CBSE Notes - Toppers Study is the best material for English Medium students cbse board and other state boards students. page3 Class 9 maths 1. Number Systems - CBSE Notes - Toppers Study CBSE board students who preparing for class 9 ncert solutions maths and Mathematics solved exercise chapter 1. Number Systems available and this helps in upcoming exams 2024-2025. You can Find Mathematics solution Class 9 Chapter 1. Number Systems • All Chapter review quick revision notes for chapter 1. Number Systems Class 9 • NCERT Solutions And Textual questions Answers Class 9 Mathematics • Extra NCERT Book questions Answers Class 9 Mathematics • Importatnt key points with additional Assignment and questions bank solved. NCERT Solutions do not only help you to cover your syllabus but also will give to textual support in exams 2024-2025 to complete exercise 3 maths class 9 chapter 1 in english medium. So revise and practice these all cbse study materials like class 9 maths chapter 3 in english ncert book. Also ensure to repractice all syllabus within time or before board exams for ncert class 9 maths ex 3 in See all solutions for class 9 maths chapter 1 exercise 1 in english medium solved questions with answers. page3 class 9 Mathematics Chapter 1. Number Systems Sure! The following topics will be covered in this article • Page3 Class 9 Maths 1. Number Systems - CBSE Notes - Toppers Study • Class 9 Ncert Solutions • Solution Chapter 1. Number Systems Class 9 • Solutions Class 9 • Chapter 1. Number Systems Page3 Class 9 Notice: Undefined offset: 5 in /home/atpeduca/public_html/toppersstudy.com/view-home.php on line 123 1. Number Systems | page3 | page3 Class 9 maths 1. Number Systems - CBSE Notes - Toppers Study This Page is under construction. This will be published soon. Jaust wait ................... Other Pages of this Chapter: 1. Number Systems Select Your CBSE Classes Important Study materials for classes 06, 07, 08,09,10, 11 and 12. Like CBSE Notes, Notes for Science, Notes for maths, Notes for Social Science, Notes for Accountancy, Notes for Economics, Notes for political Science, Noes for History, Notes For Bussiness Study, Physical Educations, Sample Papers, Test Papers, Mock Test Papers, Support Materials and Books.
{"url":"https://www.toppersstudy.com/view-cbse_notes-745","timestamp":"2024-11-07T06:43:37Z","content_type":"text/html","content_length":"31093","record_id":"<urn:uuid:e6228219-ca2c-41a3-a6ca-a3e1bedd26d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00431.warc.gz"}
(10 marks) Consider the maze shown below, where the successors of a ce (10 marks) Consider the maze shown below, where the successors of a cell are the cells directly to the east, west, north and south of the cell, except that you are not allowed to pass through the thick wall indicated by the double line. For example, the successors of cell I are H and N (and not D or J). Trace the operation of A* search applied to the problem of getting from cell R to cell G. The heuristic function is just the Manhattan distance, ignoring the existence of the wall. The heuristic values for each node are summarized for you in the table to the right. • Draw the search tree, starting from cell R. Beside each node in your tree, indicate the f, g and h scores for the node (in the format g(n)th(n) = f(n)). Assume that the cost of each move is 1. • To the side of your search tree, provide a list of the order in which paths are expanded. Show the contents of the frontier after each node is expanded. To make it easier to keep track of what you are doing, include both the f value and h value as subscripts when you write a path in your priority queue. For example, WXY Z11,7 would indicate that WXY Z is a path such that f(WXYZ) = 11 and h(Z) = 7. •If two or more paths are tied for the lowest f value, give priority to the one with minimum h value. Use alphabetical order if there is still a tie. • For this question, you can avoid generating states that have already been expanded. For example, you should not consider going west from cell I to cell H if you had already expanded cell H in an earlier step. • Also, when inserting a path onto the priority queue, if a path to the same state is already on the queue, just keep the copy with the lower f value. A B с DE KIL M N 0 S T la b P Q R U V h(cell) cell What is the solution path returned by the algorithm, and what is its path cost? Fig: 1
{"url":"https://tutorbin.com/questions-and-answers/10-marks-consider-the-maze-shown-below-where-the-successors-of-a-cell-are-the-cells-directly-to-the-east-west-north-and","timestamp":"2024-11-10T05:03:05Z","content_type":"text/html","content_length":"70662","record_id":"<urn:uuid:c2025679-395d-4b50-8b7b-846055da2045>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00794.warc.gz"}
st: AW: questions [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: AW: questions From "Martin Weiss" <[email protected]> To <[email protected]> Subject st: AW: questions Date Mon, 2 Nov 2009 13:55:36 +0100 You can take a look at its calculations via (careful: clutter hazard!) sysuse auto, clear reg pr we le tu tr set tr on nlcom (ratio1: _b[length]/_b[_cons]) set tr off but I would rather look at BTW, if you use Stata 11, you should have the manuals, at least in pdf format, and page 1209 of [R] shows you all the necessary formulae... -----Ursprüngliche Nachricht----- Von: [email protected] [mailto:[email protected]] Im Auftrag von Tunga Kantarci Gesendet: Montag, 2. November 2009 12:05 An: [email protected] Betreff: st: questions I would like to ask two unrelated questions. 1. The regression output presents ANOVA analysis. Take for example, the MS column for the residuals, which is the mean of the sum of squared residuals. I would like to calculate a quantity using the "nlcom" command and in this command I would like to indicate the estimated MS residual. What is the command for calling the MS residual in, for example the nlcom command? Put more simply, we call for example a coefficient estimate of a variable with "_b[variablename]". What I am asking is how do I call a figure from the ANOVA table? For example the MS of residuals? I could just use the estimated figure for MS residuals but this is not what I want. 2. Suppose that I wish to calculate the standard error of _b[_cons]/_b[variablename]. Stata help system indicates that Stata uses the "delta" method to calculate this standard error. But I wish to see what Stata exactly calculates. How do I see it? Or do I need the Stata manual books for this? Stata help system does not indicate any formula it uses. Thank you in advance. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"https://www.stata.com/statalist/archive/2009-11/msg00028.html","timestamp":"2024-11-15T03:15:09Z","content_type":"text/html","content_length":"10369","record_id":"<urn:uuid:d03459ad-3732-4a08-8733-9ff509676354>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00092.warc.gz"}
3.14159 Is Rational Or Irrational - Buaft.com 3.14159 is rational or irrational 3.14159 is rational or irrational, is an important irrational number called PI is denoted by mathematical symbol π Which denote the constant ratio of the circumstance of any circle to the length of the diameter FORMULA OF Pi (π) $\large&space;\pi$ = An approximate value of $\large&space;\pi$ is 22/7, a better approximation is 355/113 and a still better approximation is 3.14159.correct to 5 lac decimal places has been determined with the help of the computer hence, 3.14159 is rational or irrational prove upper discussion. 0.5 is rational number Yes, 0.5 = 1/2 is a rational number by rational number definition. Which number can be written in the form p/q where, p is an any number, q is also an any number and q ≠ 0 mathematical definition of rational number p, q ∈ Z (set of integer) and (∧) q ≠ 0 The numbers √16, 3.7, 4 are the example of rational number. √16 can be reduced to the form p/q where p, q ∈ Z and q does not equal to zero (q≠0) √16=4= 4/1 The number cannot be written in the form p/q Where p, q ∈ Z and q ≠ 0 The numbers √2, √3, 7/√5, √5/16 are irrational numbers. Non-terminating decimal: A non-terminating, non-recurring decimal is a decimal which is neither terminate nor it is recurring .it is nor possible to convert such a decimal into common fraction. Thus, a non-terminating , non-recurring decimal represent as an irrational number (1) 1.4042135……. is an irrational number. (2) 1.719975987….. is an irrational number. (3) 0.123455678……………. is an irrational number. Terminating decimal: A decimal, only a finite number which has only a finite number of digits is called a terminating decimal. Example of terminating decimal : (1) 0 .17 (=17/100) is a rational number terminating. 0,4444 (=4444/1000) is a rational number terminating Recurring terminating decimal: another type of rational number recurring or periodic decimal is a decimal in which one or more digit repeat definitely Example of recurring terminating decimal (1) 0.124124124….. is recurring terminating decimal (2) 4.14561456…… is recurring terminating decimal.
{"url":"https://buaft.com/3-14159-is-rational-or-irrational/","timestamp":"2024-11-06T12:11:14Z","content_type":"text/html","content_length":"70761","record_id":"<urn:uuid:d17855be-e84c-4524-a1af-8e13936d5a9a>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00723.warc.gz"}
Making Sense of Multiplication - UrbanMoms I hated Math in school. It just didn’t make sense. The teacher stood at the front of the room, drew numbers and diagrams on the board and told us to memorize the facts and formulas. For the kids who were able to intuitively "break the code" and understand what was going on, Math was great. But for the rest of us Math remained a constant struggle. Does this sound like you or your kids? Well, I’m here to tell you, it doesn’t have to be that way. But how do you help a child who struggles in Math? Practising the same questions over and over doesn’t help understanding. The key is finding another way to explain it … a way that makes sense to him or her—building on what he/she already knows. With this in mind, let’s take a look at multiplication … What is multiplication? At its very basic level, multiplication is just a fast way of adding groups. For example 3×4 is just another way of saying "3 groups of 4" or "4+4+4" or skip-counting by 4’s three times … 4, 8, 12. If your child is having problems understanding this, it’s time to break out the dried beans or pasta! Have your child organize macaroni into groups of 4’s. Then lead them, "How many noodles are in 3 groups of 4? Let’s count. 1-2-3-4…5-6-7-8…9-10-11-12. So three groups of 4 noodles equals 12 noodles altogether." One way to extend this would be to then be to ask him/her to estimate (an informed guess) how many noodles would be in say 4 groups of 4, then check their answer. The key is to keep it simple, hands-on, and take it slowly–building on success. While there is definitely a place for memorization of these basic facts, it should come after the child really understands what they mean. Memorizing vs. understanding At the next level, students have to learn how to multiply larger number like 6×34. When we went to school, we were taught to put the "6" under the "34" and start by multiplying "6×4". Problems started when we couldn’t put the answer "24" down. We were taught to record the "4" and "carry the 2" to the next column. (now referred to as "regrouping"). Why? Most of us never knew–we just did it, and that’s why we made so many mistakes. We then multiplied 6×3 (18) and added the extra 2 to make 20. We recorded the 20 to the left of the 4 and got the answer 204–hopefully. Unfortunately for most, this method is more about memorizing a process, than understanding what you are really doing with the numbers. Developing number sense The trick here is to help the kids to really get a good sense of the numbers. They need to understand, for example, that 34 is the same is "30+4". Building on that concept you can turn 6×34 into (6×30) + (6×4). Take a look at the example. This time when you multiply 6×4 record the whole 24. The next step is to multiply not 6×3, but 6×30 (what it really means), and record the answer below the 24. Being sure to line up the columns, add the two numbers together. You get the same answer, and hopefully it is a little clearer, reducing the chance of making a mistake. It’s easy when you see how it works. If only a teacher could have shown me other ways of solving Math problems, I wouldn’t have had to wait 20 years before I finally "got it". Help you kids make sense of Math. If they don’t "get it" show them, or ask someone else to show them different ways. About the author As an award-winning educator and Parenting & Youth Coach, Rob Stringer BA, BEd, CPC has spent almost two decades helping kids, teens, and adults meet with success, and live lives they LOVE!. Although based outside of Toronto Ontario, Rob’s coaching practice is global, with clients across Canada, the United States, Australia, and Asia. In addition to Parenting with Intention, he most recently launched, Youth Coach Canada–a non-profit organization dedicated to making affordable professional life coaching services available to youth aged 11-21. Interested in having Rob speak at your child’s school, church, or organization? For more information on speaking engagements, programs, and upcoming workshops for parents and youth, visit www.YouthCoachCanada.com or call 905.515.9822.
{"url":"https://urbanmoms.ca/parenting/education/making_sense_of_multiplication/","timestamp":"2024-11-06T18:27:58Z","content_type":"text/html","content_length":"57366","record_id":"<urn:uuid:28b5b571-4223-4339-b4a4-beaba7115925>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00699.warc.gz"}
Calculate mean aspect for polygons 03-20-2012 11:43 PM HelloI have a dem and a polygon featureclass for which i want to calculate the mean aspect for every single polygon.I found a topic about this problem in the old esri forum , but unfortunaly i do not get a correct result.1) Generate the ASPECT for the dem2) Calculate COS and SIN for the Aspect-raster (COS = ([ASPECT] * 0.01745329)) (SIN([ASPECT] * 0.01745329))3) Zonal Statistics to SUM the SIN and COS raster for every Polygon.4) RASTER = 360 + ATAN2([SUM_COS], [SUM_SIN]) * 180 / pi5) RESULT = MOD([RASTER], 360)The result i get from the calculations seems to be correct for a few areas, but in others it is shifted of 90 or 180 degrees.I don't know what i am doing wrong.Thank you. 03-22-2012 05:19 PM 12-07-2013 06:23 AM 01-03-2018 07:50 PM 05-11-2022 01:00 AM 09-09-2023 09:43 PM 3 weeks ago
{"url":"https://community.esri.com/t5/arcgis-spatial-analyst-questions/calculate-mean-aspect-for-polygons/td-p/67951","timestamp":"2024-11-06T04:46:18Z","content_type":"text/html","content_length":"504656","record_id":"<urn:uuid:dc0e52cb-f745-4b24-8030-d425227f032d>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00298.warc.gz"}
Is this allowed in exams for maths? Hmmm, not too sure. There really isn't a need to use the C3 formulae, the C2 one's more or less suffice. If you used the rule from C3 correctly, and came to the correct answer using that method, I can't see that you'd lose many, if any, marks. I can't imagine them setting a question on a C2 paper though that would require using C3 methods. Or even if you could use C3 methods in the C2 question, I can't see how it'd be quicker/neater/simpler, if that's what you were thinking. Also, most ideas from C3 are just extensions of those from C2. Do you have a particular question in mind that you've come across?
{"url":"https://www.thestudentroom.co.uk/showthread.php?t=168909","timestamp":"2024-11-11T05:21:48Z","content_type":"text/html","content_length":"455972","record_id":"<urn:uuid:799f3b06-ba7e-430e-901d-4a73bf2a704e>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00877.warc.gz"}
7th grade mathematics chart Vow! What a teacher! Thanks for making Algebra easy! The software provides amazing ways to deal with complex problems. Anyone caught up there and finds it hard to solve out must buy a copy. Youll get a great tool at a reasonable price. Matt Canin, IA Thank you and congratulations for your impressive Algebra program which truly it helped me a lot with my math. Billy Hafren, TX I really liked the ability to choose a particular transformation to perform, rather than blindly copying the solution process.. Chuck Jones, LA Although I have always been good at math, I use the Algebra Professor to make sure my algebra homework is correct. I find the software to be very user friendly. Im sure I will be using it when I start college in about one year. Perry Huges, KY Algebra Professor is the best software I've used! I never thought I was going to learn the different formulas and rules used in math, but your software really made it easy. Thank you so much for creating it. Now I don't dread going to my algebra class. Thanks! Catherine, IL Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among Search phrases used on 2014-12-31: • Algebra With Pizzazz Answers objective quadratic equations • equations fractions calculator • matlab solving simultaneous equations using matrix • calculate linear feet • factor worksheets • how to teach adding and subtracting negative numbers • multiply divide fractions worksheet • greatest common factor learn example • pre algebra with pizzazz-double cross • solving proportions sample test • how to square a fraction • greatest common factor 121 • hex ti decimal • cheat on cubes • "Pre-algebra math problems" • different kinds of squae pyramids • hard math equation • solving equations by dividing • equivalent decimals worksheet • were i can get my algebra homework done for free • multiplying and dividing irrational number examples • negative and positive integers worksheets • synthetic division worksheets • polynomial simultaneous equation+matlab • pre algreba examples • Pre-Algebra with pizzazz • inverse log+ti-89 • square root excel formula • factoring online calculator algebra • calculate cube root with TI business calculator • square root problems online • algebra and trigonometry mcdougal littell book 2 answers • 9th grade algebra • saxon algebra 1 study guide • Reading worksheets with answer • "range" "mathematics" "GRE" • fraction equation solve for a calculator • simplifying exponential expressions calculator • similar triangles worksheets GED • Free College Algebra Help • glencoe mcgraw hill algebra 1 answers • graph the equation plotting points • trig calculater downloads • GMAT practise tests • how to do algebar • adding radicals and a whole number • math world problem • math trivial worksheets • good inequalities software for ti 83 • solving equation systems with ti 89 • tool to find LCM • convert decimal worksheet • maths papers for year 7 high level online • indiana pre algebra book cheat sheet • Free Algebra Solver • common denominator converter • algebra with square root • polynom divider • prentice hall chemistry practice book answers • help in maths rational numbers seventh grade • discrete math workbook for 4th/5th graders • how to cube on a ti-83 • square and cubes in real life activities • convert whole number and fraction percentages to a fraction • identify polynomial functions with fraction exponent • sample quiz grade school polynomials • solving systems linear equations matrices word problems • graphing inequalities on a number line • probability answers uop • lesson plans lcm • glencoe pre algebra practice answers • IOWA EXAMINATIONS SAMPLE PAPERS 9th grade • Advanced Algebra Worksheets • evaluating expression worksheet • practicing imperfect square roots • scott foresman addison wesley diamond edition free worksheets • algebra one inequalities worksheet • algebra worksheets year 11 • algebra mixture solver • simplifying exponential fractions • worksheets on geometric patterns • online scientific gcf calculator • polar equation project examples
{"url":"https://algebra-net.com/algebra-net/trinomials/7th-grade-mathematics-chart.html","timestamp":"2024-11-06T07:34:32Z","content_type":"text/html","content_length":"87075","record_id":"<urn:uuid:13dc80e3-1909-4341-9ac9-2e530f056560>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00850.warc.gz"}
Using OpenSCAD to produce drawings for 3D Printing Re: Using OpenSCAD to produce drawings for 3D Printing Finally came back to the shell problem... It's a really interesting one - very tough, very instructive. I've a feeling I'm going to be coming back to it for years, chipping away at improving it with each new trick I learn in OpenSCAD. I certainly haven't cracked it yet, although managed to model each shell separately and pivot them all from a point some way from the origin around which they were Code: Select all // The width of the largest shell maxWidth = 7; // [0:10] // The side length of the longest shell maxSideLength = 4; // [0:0.5:5] // Angle described by each shell shellAngle = 37; // [30:40] // How far from the origin of the shells to the pivot point pivotOffset = 2; // [0:0.5:4] module ShellShape(width, straightlen, angle){ baseThickness = 0.5; shellOverlap = 1/3; //proportion of each shell to overlap widthStepDown = 0.25; // how much to reduce the width by for each successive shell sideLengthStepDown = 0.1; // how much to reduce the side length by for each successive shell mirror( [i, 0, 0] ) difference() { union() { for (i=[0:3]) { No idea how it will print, of course. Potentially some large overhangs, and the base I've added may make it difficult to remove the supports... Re: Using OpenSCAD to produce drawings for 3D Printing SimonWood wrote: ↑Wed Jan 12, 2022 9:00 pm No idea how it will print, of course. Potentially some large overhangs, and the base I've added may make it difficult to remove the supports... I think it would print ok in resin but FDM would be difficult I suspect. Re: Using OpenSCAD to produce drawings for 3D Printing What I can say is they print brilliantly in resin as I have already done some. The picture shows the vents printed and supports broken off, but no cleaning up has been done. If anyone wants the torpedo vent stl file, just let me know. The shell vents were done in openscad using the code provided and tweaked for size, hence the 2 sizes. I also printed a dome, it came out very well on the top circle it, but the flare was very bitty, but workable. The buck stops here ....... Ditton Meadow Light Railway (DMLR) Member of Peterborough and District Association Re: Using OpenSCAD to produce drawings for 3D Printing -steves- wrote: ↑Thu Jan 13, 2022 9:18 am What I can say is they print brilliantly in resin as I have already done some. Excellent! That's fast work - and they look great, better than I thought they'd turn out! Re: Using OpenSCAD to produce drawings for 3D Printing I don't know if anyone has tried it, but Tinkercad also has a code type way of doing things called codeblocks. This is a screenshot of an example one that I was playing around with. The buck stops here ....... Ditton Meadow Light Railway (DMLR) Member of Peterborough and District Association Re: Using OpenSCAD to produce drawings for 3D Printing -steves- wrote: ↑Thu Jan 13, 2022 9:41 am I don't know if anyone has tried it, but Tinkercad also has a code type way of doing things called blocks. Didn't even know about that - that looks really useful, especially given how easy it is to quickly get something together in TinkerCAD, and it looks like this could enable progression to quite precise refinements. Re: Using OpenSCAD to produce drawings for 3D Printing Well done to both of you as a collaborative effort. The prints have turned out pretty much as I thought they would. Re: Using OpenSCAD to produce drawings for 3D Printing This is an example of CodeBlocks working. It has differing speeds, I went from medium to fast after a couple of runs through. The buck stops here ....... Ditton Meadow Light Railway (DMLR) Member of Peterborough and District Association Re: Using OpenSCAD to produce drawings for 3D Printing I meant to update this thread with progress on my headboard, but forgot to do so. I'd got to the point where I could generate a headboard but the letter spacing was causing me a headache: SimonWood wrote: ↑Sat Jan 01, 2022 9:14 am Even after fiddling the parameters you can see the letter spacing is a problem more generally - since really having each letter spaced evenly never works. An 'I' is much narrower than a 'W'. I haven't found anything in OpenSCAD to solve this, although I think from Googling around there may be libraries other people have developed which I can use - I need to look into this more... Well, I found the library to do this, and in the process I also found a new OpenSCAD command which I think is going to be really useful for structuring projects. The library is fontmetrics which includes the function measureText() which returns the width of a string - exactly what I needed! So I can now measure the width of each letter individually and use that to work out the angle to move through before position the next letter. Job done! Code: Select all function angle_subtended_by_arc(radius,arc_length) = arc_length*360/(2*radius*PI); function add_elements(array, to=-1, from=0) = from < len(array) - 1 && (from < to || to == -1) ? array[from] + add_elements(array, to, from + 1) : array[from]; function angles_of_letters(the_text,radius,font_size) = [for (i=[0:len(the_text)-1]) angle_subtended_by_arc(radius,measureText(the_text[i],size=font_size))]; module text_arc(the_text,radius,font_size,center=false) { center_offset = center ? add_elements(angles_of_letters,len(the_text)-1)/2 : 0; for (i = [0:len(the_text)-1]) { rotate(i == 0 ? center_offset : center_offset-add_elements(angles_of_letters,i-1)) But that code won't work yet, because the function measureText() isn't defined in my project, it's in fontmetrics.scad. So first of all I need to download fontmetrics (and its companion fontmetricsdata.scad) and I can then either place it in the same directory (folder) as my project, or in one of the places OpenSCAD looks for libraries - I won't cover the details of that here because it depends which operating system you're on, but it's an option if you think that's neater. Once I've done that, I have to include a line in my project to reference fontmetrics. There are actually two OpenSCAD commands that can do this: include<> and use<>. If you include<file.scad>; it basically pulls in everything in file.scad as if it were in your file. But if you use<file.scad>; it only brings in the modules and functions from file.scad so you can call them from with your project. I prefer this: it means if I define a constant in my project that happens to share the same name as a constant in the file I want to reference, it won't cause a clash. Anyway, having added fontmetrics.scad to the same folder as headboard.scad, all I need to do now is add this line: Code: Select all use <fontmetrics.scad>; Now my function above will be able to use measureText(). The benefit of this command is that I can now break down my projects into multiple files. So if I were drawing a wagon I could, say, have files axlebox.scad and solebar.scad and coupling.scad etc. and then in reference them in wagon.scad with 'use<axlebox.scad>;' etc. Not only will this make it more organised to work on and easier to find the bits of code I'm looking for, but potentially I can reuse bits (for example I might want to use 'axlebox.scad' in 'van.scad'). Admittedly this also increases the potential for complexity... and this has made me think about versioning. I'm experimenting with using GitHub for this - so you can now find the latest working headboard files on GitHub. I've also finally got round to putting the headboard project on Thingiverse. So even if you don't want to play with the OpenSCAD code you can now download it and type your own headboard text in as a parameter to generate an STL file for any headboard you like. Re: Using OpenSCAD to produce drawings for 3D Printing Fascinating stuff, but how the heck do you print those domes and vents on a filament printer. I have a Flashforge AD3 and I use Tinkercad. The Codeblocks looks interesting though. Is it easy enough to use Steve. The OpenSCAD is way beyond me though... Life is so easy when I run my trains. https://gardenrails.org/forum/viewtopic ... 41&t=11364 Re: Using OpenSCAD to produce drawings for 3D Printing FWLR wrote: ↑Tue May 30, 2023 9:26 am Fascinating stuff, but how the heck do you print those domes and vents on a filament printer. I have a Flashforge AD3 and I use Tinkercad. The Codeblocks looks interesting though. Is it easy enough to use Steve. The OpenSCAD is way beyond me though... I don't find codeblocks easy to use, it's about the same as SCAD, it requires some coding knowledge, of which I have zero. As for printing those bits, I doubt a filament printer would do a good job of it, they really require a resin printer to get that kind of finish on small parts like these. The buck stops here ....... Ditton Meadow Light Railway (DMLR) Member of Peterborough and District Association Re: Using OpenSCAD to produce drawings for 3D Printing Thanks Steve, that's good to know. Life is so easy when I run my trains. https://gardenrails.org/forum/viewtopic ... 41&t=11364
{"url":"https://gardenrails.org/viewtopic.php?f=54&t=13348&sid=629c1c9be2aee2fd5a0c18730b4f627b&start=40","timestamp":"2024-11-09T06:14:04Z","content_type":"text/html","content_length":"78372","record_id":"<urn:uuid:a8804d04-3671-47b4-ae23-b8711e236bff>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00498.warc.gz"}
Cryptology - Cryptanalysis, Encryption, Decryption | Britannica Cryptanalysis, as defined at the beginning of this article, is the art of deciphering or even forging communications that are secured by cryptography. History abounds with examples of the seriousness of the cryptographer’s failure and the cryptanalyst’s success. In World War II the Battle of Midway, which marked the turning point of the naval war in the Pacific, was won by the United States largely because cryptanalysis had provided Admiral Chester W. Nimitz with information about the Japanese diversionary attack on the Aleutian Islands and about the Japanese order of attack on Midway. Another famous example of cryptanalytic success was the deciphering by the British during World War I of a telegram from the German foreign minister, Arthur Zimmermann, to the German minister in Mexico City, Heinrich von Eckardt, laying out a plan to reward Mexico for entering the war as an ally of Germany. American newspapers published the text (without mentioning the British role in intercepting and decoding the telegram), and the news stories, combined with German submarine attacks on American ships, accelerated a shift in public sentiment for U.S. entry into the war on the side of the Allies. In 1982, during a debate over the Falkland Islands War, a member of Parliament, in a now-famous gaffe, revealed that the British were reading Argentine diplomatic ciphers with as much ease as Argentine code clerks. Basic aspects While cryptography is clearly a science with well-established analytic and synthetic principles, cryptanalysis in the past was as much an art as it was a science. The reason is that success in cryptanalyzing a cipher is as often as not a product of flashes of inspiration, gamelike intuition, and, most important, recognition by the cryptanalyst of pattern or structure, at almost the subliminal level, in the cipher. It is easy to state and demonstrate the principles on which the scientific part of cryptanalysis depends, but it is nearly impossible to convey an appreciation of the art with which the principles are applied. In present-day cryptanalysis, however, mathematics and enormous amounts of computing power are the mainstays. Cryptanalysis of single-key cryptosystems (described in the section Cryptography: Key cryptosystems) depends on one simple fact—namely, that traces of structure or pattern in the plaintext may survive encryption and be discernible in the ciphertext. Take, for example, the following: in a monoalphabetic substitution cipher (in which each letter is simply replaced by another letter), the frequency with which letters occur in the plaintext alphabet and in the ciphertext alphabet is identical. The cryptanalyst can use this fact in two ways: first, to recognize that he is faced with a monoalphabetic substitution cipher and, second, to aid him in selecting the likeliest equivalences of letters to be tried. The table shows the number of occurrences of each letter in the text of this article, which approximates the raw frequency distribution for most technical material. The following cipher is an encryption of the first sentence of this paragraph (minus the parenthetical clause) using a monoalphabetic substitution: Letter frequency distribution for a sample English text letter number of occurrences frequency letter number of occurrences frequency E 8,915 .127 Y 1,891 .027 T 6,828 .097 U 1,684 .024 I 5,260 .075 M 1,675 .024 A 5,161 .073 F 1,488 .021 O 4,814 .068 B 1,173 .017 N 4,774 .067 G 1,113 .016 S 4,700 .067 W 914 .013 R 4,517 .064 V 597 .008 H 3,452 .049 K 548 .008 C 3,188 .045 X 330 .005 L 2,810 .040 Q 132 .002 D 2,161 .031 Z 65 .001 P 2,082 .030 J 56 .001 W occurs 21 times in the cipher, H occurs 18, and so on. Even the rankest amateur, using the frequency data in the table, should have no difficulty in recovering the plaintext and all but four symbols of the key in this case. It is possible to conceal information about raw frequency of occurrence by providing multiple cipher symbols for each plaintext letter in proportion to the relative frequency of occurrence of the letter—i.e., twice as many symbols for E as for S, and so on. The collection of cipher symbols representing a given plaintext letter are called homophones. If the homophones are chosen randomly and with uniform probability when used, the cipher symbols will all occur (on average) equally often in the ciphertext. The great German mathematician Carl Friedrich Gauss (1777–1855) believed that he had devised an unbreakable cipher by introducing homophones. Unfortunately for Gauss and other cryptographers, such is not the case, since there are many other persistent patterns in the plaintext that may partially or wholly survive encryption. Digraphs, for example, show a strong frequency distribution: TH occurring most often, about 20 times as frequently as HT, and so forth. With the use of tables of digraph frequencies that partially survive even homophonic substitution, it is still an easy matter to cryptanalyze a random substitution cipher, though the amount of ciphertext needed grows to a few hundred instead of a few tens of letters. Types of cryptanalysis There are three generic types of cryptanalysis, characterized by what the cryptanalyst knows: (1) ciphertext only, (2) known ciphertext/plaintext pairs, and (3) chosen plaintext or chosen ciphertext. In the discussion of the preceding paragraphs, the cryptanalyst knows only the ciphertext and general structural information about the plaintext. Often the cryptanalyst either will know some of the plaintext or will be able to guess at, and exploit, a likely element of the text, such as a letter beginning with “Dear Sir” or a computer session starting with “LOG IN.” The last category represents the most favourable situation for the cryptanalyst, in which he can cause either the transmitter to encrypt a plaintext of his choice or the receiver to decrypt a ciphertext that he chose. Of course, for single-key cryptography there is no distinction between chosen plaintext and chosen ciphertext, but in two-key cryptography it is possible for one of the encryption or decryption functions to be secure against chosen input while the other is vulnerable. One measure of the security of a cryptosystem is its resistance to standard cryptanalysis; another is its work function—i.e., the amount of computational effort required to search the key space exhaustively. The first can be thought of as an attempt to find an overlooked back door into the system, the other as a brute-force frontal attack. Assume that the analyst has only ciphertext available and, with no loss of generality, that it is a block cipher (described in the section Cryptography: Block and stream ciphers). He could systematically begin decrypting a block of the cipher with one key after another until a block of meaningful text was output (although it would not necessarily be a block of the original plaintext). He would then try that key on the next block of cipher, very much like the technique devised by Friedrich Kasiski to extend a partially recovered key from the probable plaintext attack on a repeated-key Vigenère cipher. If the cryptanalyst has the time and resources to try every key, he will eventually find the right one. Clearly, no cryptosystem can be more secure than its work function. The 40-bit key cipher systems approved for use in the 1990s were eventually made insecure, as is mentioned in the section Cryptology: Cryptology in private and commercial life. There are 2^40 40-bit keys possible—very close to 10^12—which is the work function of these systems. Most personal computers (PCs) at the end of the 20th century could execute roughly 1,000 MIPS (millions of instructions per second), or 3.6 × 10^12 per hour. Testing a key might involve many instructions, but even so, a single PC at that time could search a 2^40-key space in a matter of hours. Alternatively, the key space could be partitioned and the search carried out by multiple machines, producing a solution in minutes or even seconds. Clearly, by the year 2000, 40-bit keys were not secure by any standard, a situation that brought on the shift to the 128-bit key. Because of its reliance on “hard” mathematical problems as a basis for cryptoalgorithms and because one of the keys is publicly exposed, two-key cryptography has led to a new type of cryptanalysis that is virtually indistinguishable from research in any other area of computational mathematics. Unlike the ciphertext attacks or ciphertext/plaintext pair attacks in single-key cryptosystems, this sort of cryptanalysis is aimed at breaking the cryptosystem by analysis that can be carried out based only on a knowledge of the system itself. Obviously, there is no counterpart to this kind of cryptanalytic attack in single-key systems. Similarly, the RSA cryptoalgorithm (described in the section Cryptography: RSA encryption) is susceptible to a breakthrough in factoring techniques. In 1970 the world record in factoring was 39 digits. In 2009 the record was a 768-digit RSA challenge. That achievement explains why standards in 2011 called for moving beyond the standard 1,024-bit key (310 digits) to a 2,048-bit key (620 digits) in order to be confident of security through approximately 2030. In other words, the security of two-key cryptography depends on well-defined mathematical questions in a way that single-key cryptography generally does not; conversely, it equates cryptanalysis with mathematical research in an atypical way.
{"url":"https://www.britannica.com/topic/cryptology/Cryptanalysis","timestamp":"2024-11-15T03:45:33Z","content_type":"text/html","content_length":"106777","record_id":"<urn:uuid:f73b2509-b5bb-4162-af49-a8b501cc124c>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00741.warc.gz"}
Using Cluster Analysis to Segment Your Data - Image by Pexels Machine Learning (ML for short) is not just about making predictions. There are other unsupervised processes, among which clustering stands out. This article introduces clustering and cluster analysis, highlighting the potential of cluster analysis for segmenting, analyzing, and gaining insights from groups of similar data What is Clustering? In simple terms, clustering is a synonym for grouping together similar data items. This could be like organizing and placing similar fruits and vegetables close to each other in a grocery store. Let’s elaborate on this concept further: clustering is a form of unsupervised learning task: a broad family of machine learning approaches where data are assumed to be unlabeled or uncategorized a priori, and the aim is to discover patterns or insights underlying them. Specifically, the purpose of clustering is to discover groups of data observations with similar characteristics or properties. This is where clustering is positioned within the spectrum of ML techniques: To better grasp the notion of clustering, think about finding segments of customers in a supermarket with similar shopping behavior, or grouping a large body of products in an e-commerce portal into categories or similar items. These are common examples of real-world scenarios involving clustering processes. Common clustering techniques There exist various methods for clustering data. Three of the most popular families of methods are: • Iterative clustering: these algorithms iteratively assign (and sometimes reassign) data points to their respective clusters until they converge towards a “good enough” solution. The most popular iterative clustering algorithm is k-means, which iterates by assigning data points to clusters defined by representative points (cluster centroids) and gradually updates these centroids until convergence is achieved. • Hierarchical clustering: as their name suggests, these algorithms build a hierarchical tree-based structure using a top-down approach (splitting the set of data points until having a desired number of subgroups) or a bottom-up approach (gradually merging similar data points like bubbles into larger and larger groups). AHC (Agglomerative Hierarchical Clustering) is a common example of a bottom-up hierarchical clustering algorithm. • Density-based clustering: these methods identify areas of high density of data points to form clusters. DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a popular algorithm under this category. Are Clustering and Cluster Analysis the Same? The burning question at this point might be: do clustering and clustering analysis refer to the same concept? No doubt both are very closely related, but they are not the same, and there are subtle differences between them. • Clustering is the process of grouping similar data so that any two objects in the same group or cluster are more similar to each other than any two objects in different groups. • Meanwhile, cluster analysis is a broader term that includes not only the process of grouping (clustering) data, but also the analysis, evaluation, and interpretation of clusters obtained, under a specific domain context. The following diagram illustrates the difference and relationship between these two commonly mixed-up terms. Practical Example Let’s focus from now on cluster analysis, by illustrating a practical example that: 1. Segments a set of data. 2. Analyze the segments obtained NOTE: the accompanying code in this example assumes some familiarity with the basics of Python language and libraries like sklearn (for training clustering models), pandas (for data wrangling), and matplotlib (for data visualization). We will illustrate cluster analysis on the Palmer Archipelago Penguins dataset, which contains data observations about penguin specimens classified into three different species: Adelie, Gentoo, and Chinstrap. This dataset is quite popular for training classification models, but it also has a lot to say in terms of finding data clusters in it. All we have to do after loading the dataset file is assume the ‘species’ class attribute is unknown. import pandas as pd penguins = pd.read_csv('penguins_size.csv').dropna() X = penguins.drop('species', axis=1) We will also drop two categorical features from the dataset which describe the penguin’s gender and the island where this specimen was observed, leaving the rest of the numerical features. We also store the known labels (species) in a separate variable y: they will be handy later on to compare clusters obtained against the actual penguins’ classification in the dataset. X = X.drop(['island', 'sex'], axis=1) y = penguins.species.astype("category").cat.codes With the following few lines of code, it is possible to apply the K-means clustering algorithms available in the sklearn library, to find a number k of clusters in our data. All we need to specify is the number of clusters we want to find, in this case, we will group the data into k=3 clusters: from sklearn.cluster import KMeans kmeans = KMeans(n_clusters = 3, n_init=100) X["cluster"] = kmeans.fit_predict(X) The last line in the above code stores the clustering result, namely the id of the cluster assigned to every data instance, in a new attribute named “cluster”. Time to generate some visualizations of our clusters for analyzing and interpreting them! The following code excerpt is a bit long, but it boils down to generating two data visualizations: the first one shows a scatter plot around two data features -culmen length and flipper length- and the cluster each observation belongs to, and the second visualization shows the actual penguin species each data point belongs to. plt.figure (figsize=(12, 4.5)) # Visualize the clusters obtained for two of the data attributes: culmen length and flipper length X[X["cluster"]==0]["flipper_length_mm"], "mo", label="First cluster") X[X["cluster"]==1]["flipper_length_mm"], "ro", label="Second cluster") X[X["cluster"]==2]["flipper_length_mm"], "go", label="Third cluster") plt.plot(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,2], "kD", label="Cluster centroid") plt.xlabel("Culmen length (mm)", fontsize=14) plt.ylabel("Flipper length (mm)", fontsize=14) # Compare against the actual ground-truth class labels (real penguin species) plt.plot(X[y==0]["culmen_length_mm"], X[y==0]["flipper_length_mm"], "mo", label="Adelie") plt.plot(X[y==1]["culmen_length_mm"], X[y==1]["flipper_length_mm"], "ro", label="Chinstrap") plt.plot(X[y==2]["culmen_length_mm"], X[y==2]["flipper_length_mm"], "go", label="Gentoo") plt.xlabel("Culmen length (mm)", fontsize=14) plt.ylabel("Flipper length (mm)", fontsize=14) Here are the visualizations: By observing the clusters we can extract a first piece of insight: • There is a subtle, yet not very clear separation between data points (penguins) allocated to the different clusters, with some gentle overlap between subgroups found. This does not necessarily lead us to conclude that the clustering results are good or bad yet: we have applied the k-means algorithm on several attributes of the dataset, but this visualization shows how data points across clusters are positioned in terms of two attributes only: ‘culmen length’ and ‘flipper length’. There might be other attribute pairs under which clusters are visually represented as more clearly separated from each other. This leads to the question: what if we try visualizing our cluster under any other two variables used for training the model? Let’s try visualizing the penguins’ body mass (grams) and culmen length (mm). X[X["cluster"]==0]["culmen_length_mm"], "mo", label="First cluster") X[X["cluster"]==1]["culmen_length_mm"], "ro", label="Second cluster") X[X["cluster"]==2]["culmen_length_mm"], "go", label="Third cluster") plt.plot(kmeans.cluster_centers_[:,3], kmeans.cluster_centers_[:,0], "kD", label="Cluster centroid") plt.xlabel("Body mass (g)", fontsize=14) plt.ylabel("Culmen length (mm)", fontsize=14) This one seems crystal clear! Now we have our data separated into three distinguishable groups. And we can extract additional insights from them by further analyzing our visualization: • There is a strong relationship between the clusters found and the values of the ‘body mass’ and ‘culmen length’ attributes. From the bottom-left to the top-right corner of the plot, penguins in the first group are characterized by being small due to their low values of ‘body mass’, but they exhibit largely varying bill lengths. Penguins in the second group have medium size and medium to high values of ‘bill length’. Lastly, penguins in the third group are characterized by being larger and having a longer bill. • It can be also observed that there are a few outliers, i.e. data observations with atypical values far from the majority. This is especially noticeable with the dot at the very top of the visualization area, indicating some observed penguins with an overly long bill across all three groups. Wrapping Up This post illustrated the concept and practical application of cluster analysis as the process of finding subgroups of elements with similar characteristics or properties in your data and analyzing these subgroups to extract valuable or actionable insight from them. From marketing to e-commerce to ecology projects, cluster analysis is widely applied in a variety of real-world domains. Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.
{"url":"https://digitalinfowave.com/using-cluster-analysis-to-segment-your-data/","timestamp":"2024-11-14T04:06:52Z","content_type":"text/html","content_length":"101631","record_id":"<urn:uuid:5ba5c80e-5fcd-434e-ba9a-1d79df6d2b0b>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00063.warc.gz"}
Battle stats | Late Legends All legends have specific stats for battle called battle stats. These are calculated using level and core stats and can be increased with the choice of equipment. Each point in level and core stat increase the following battle stats: • Level - 1 power, 10 max health • Strength - 3 Physical block chance, 1/4 Power, 1/2 Physical armor • Intelligence - 3 Magical block chance, 1/4 Power, 1/2 Magical armor • Endurance - 8 Max health, 1 Block bonus • Agility - 3% Dodge chance, 1/3 Movement, 1/2 Bonus true damage • Sensory - 1 Critical bonus, 1/2 bonus shield • Luck - 2% Critical chance, 1 Save height Calculate rounded down Whenever you divide a number in the game, round down if you end up with a fraction, even if the fraction is one-half or greater. For example: when we take a look at the core stats, 3 points of agility are needed to gain 1 movement, this means 2 points is not enough to gain 1 movement. All battle stats explanation and scaling Max health - Determines the maximum amount of health the legend has. A legend start with 50 max health and can be increased by 8 per level and increases by 8 per point in endurance. Movement - Movement is the amount of tiles a legend can move while using an action to move. A legend starts with 3 movement and it is increased by 1 for per 3 points in agility. Range - Range is the amount of tiles you may take to select a target for your attacks. This solely determined by your choice of equipment. For example: A sword has 1m range (1 tile), and a bow has 8m range (8 tiles). Critical chance - Critical chance is used to determine success for critical rolls. A successful roll is if the result of using 1d100 plus the legend’s critical chance is higher than 100. Each legend starts on 10 critical chance and it can be increased by 2 for each point in Luck. Power - Power is used to determine your damage output for attacks and is also used in skills. Power is mainly gained by equipment, bit is also increased by one each level and by spending 4 points in either Strength or Intelligence. Critical bonus - On each successful critical roll the critical bonus is added to the power of the attack or skill. A legend starts with 10 critical bonus and can be increased by spending 1 point on Bonus true damage - All forms of true damage is automatically increased by Bonus true damage, no roll required. A legend can only increase this by spending two points in Agility. Dodge chance - Dodge chance is used to determine success for dodge rolls. A successful roll is if the result of using 1d100 plus the legend’s dodge chance is higher than 100. A Legend increases dodge chance by 3 for each point in Agility. Physical block chance - Physical block chance is used to determine success for block rolls that have a physical damage source. A successful roll is if the result of using 1d100 plus the legend’s physical block chance is higher than 100. Physical block chance is increased by 3 for each point in Strength. Magical block chance - Magical block chance is used to determine success for block rolls that have a magical damage source. A successful roll is if the result of using 1d100 plus the legend’s magical block chance is higher than 100. Magical block chance is increased by 3 for each point in Intelligence. Physical armor - Each time the legend takes damage of a physical type, the incoming damage is reduced by physical armor. Physical armor is mainly gained by equipment, but it can also be increased by 1 per 2 points in strength. Magical armor - Each time the legend takes damage of a magical type, the incoming damage is reduced by magical armor. Magical armor is mainly gained by equipment, but it can also be increased by 1 per 2 points in intelligence. Block bonus - On each successful block roll the block bonus is added to the defender’s armor. A legend starts with 5 block bonus and can be increased by 1 per point in Sensory and by 1 per 2 points in Endurance. Bonus shield - All types of shielding are automatically increased by Bonus Shield, no roll required. A legend can increase this by spending two points in sensory or with specific equipment. Save height - Determines the height a target has to have on saves against your skills. A legend starts with 10 save height and it is increased by 1 per two Levels and by spending a point in Luck.
{"url":"https://latelegends.com/battle-stats/","timestamp":"2024-11-06T08:32:08Z","content_type":"text/html","content_length":"25716","record_id":"<urn:uuid:c475b9eb-8651-4688-bd02-d58cc97b9784>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00121.warc.gz"}
The class Level_interval<FaceHandle> represents intervals for the minimum and maximum value of the z-coordinate of a face of a triangulation. #include <CGAL/Level_interval.h> The value_type of FaceHandle must be Face, which must have a nested type Vertex, which must have a nested type Point, whose Kernel_traits<Point>Kernel must have a nested type FT. These requirements are fulfilled, if one uses a CGAL triangulation and a CGAL kernel. typedef FT Value; The type of the $z$-coordinate of points stored in vertices of faces. Level_interval<FaceHandle> i; Default constructor. Level_interval<FaceHandle> i ( FaceHandle fh); Constructs the interval with smallest and largest z coordinate of the points stored in the vertices of the face fh points to. FaceHandle i.face_handle () returns the face handle. ostream& ostream& os << i Inserts the interval i into the stream os. Precondition: The output operator for *Face_handle is defined. Is Model for the Concepts
{"url":"https://doc.cgal.org/Manual/3.2/doc_html/cgal_manual/Interval_skip_list_ref/Class_Level_interval.html","timestamp":"2024-11-14T07:40:53Z","content_type":"text/html","content_length":"5937","record_id":"<urn:uuid:4bc79bcf-25cd-4822-ab6e-cbd89004984d>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00324.warc.gz"}
Next: MAXIMUM INDEPENDENT SET Up: Subgraphs and Supergraphs Previous: Subgraphs and Supergraphs &nbsp Index • INSTANCE: Graph • SOLUTION: A clique in G, i.e. a subset V' are joined by an edge in E. • MEASURE: Cardinality of the clique, i.e., • Good News: Approximable within 90]. • Bad News: Not approximable within 243]. • Comment: Not approximable within PP [243]. The same problem as MAXIMUM INDEPENDENT SET on the complementary graph. Approximable within 14] and [367]. The same good news hold for the vertex weighted version [224]. • Garey and Johnson: GT19 Viggo Kann
{"url":"https://www.csc.kth.se/~viggo/wwwcompendium/node33.html","timestamp":"2024-11-11T08:16:14Z","content_type":"text/html","content_length":"5031","record_id":"<urn:uuid:c8199915-7bc8-4c91-b2c1-9cfed75b92da>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00831.warc.gz"}
How Much Peat Moss Do I Need- Are You Calculating Right? - Dailyoutdoortips Peat moss improves pH balancing, nutrients of the soil, and proper drainage. But how much peat moss do I need to use in my garden to get all those benefits? A simple calculation can help you get the appropriate amount of peat moss. A bale of 1, 2.2, and 3.8 cubic foot of peat moss can be enough for 24, 50, and 90 square feet area, respectively. Now, all you have to do is to measure the square footage of your garden. To get to know more about accurate calculations, please keep reading. Note: To get the actual idea, use our peat moss calculator. How Much Peat Moss Do I Need Since the mid-1900s, peat moss has revolutionized gardening. But to prevent waste and find the appropriate amounts, a simple mathematical calculation can help us. Peat moss is available in bales (1, 2.2, and 3.8-cubic foot) and in bags (3-cubic foot). And both units will give you an inch of depth over top soil. The foundation of the calculation is how much land surface these different sized bales and bags of peat moss can cover. So, when we know how much land we need to have layered by peat moss, we just have to divide that number by the land area each unit can cover. As we have mentioned, a 1, 2.2, or 3.8 cubic foot peat moss bale can cover 24, 50, or 90 square feet of your garden respectively. For simplicity’s sake, let’s say, your garden is 60 feet long and 40 feet wide. So, your garden has a 2400 square feet area to cover. You can get to the squared measurement of any surface simply by multiplying the length and the width. ❖ Let’s start the calculation by using a one cubic foot bale of peat moss. We need to divide the total footage of the garden by 24. We are using 24 since a 1-cubic foot bale of peat moss can cover 24 square feet of the land surface. So you are going to need 100 bales of 1-cubic foot peat moss. And that will give a one-inch depth to the land covered by peat moss. Let’s see the calculation in numbers: ❖ If you want to use 2.2 cubic foot bales for your garden, you need 48 bales of peat moss. The garden is 2400 square feet and one with 2.2-cubic foot peat moss you can use on 50 square feet area. So, the calculation goes like this: 2400/50 which equals 48. The depth of the peat moss will be one inch. So, to get a 2-inch depth, you will need 96 2.2-inch cubic foot bales of peat moss. ❖ Using the same formula, the number of 3.8 cubic foot bales of peat moss will be 27 (2400/90=26.66, let’s round the fraction just to be on the safe side). And if you want to get the depth to 2 inches, double the number of the bales. That’s the arithmetic for pretty much every space – like I can use the same formula to find how much peat moss do I need for my lawn. Let’s talk about the right amount of peat moss soil mix we are going to need for different kinds of uses. How Much Peat Moss Do I Need For Overseeding Before calculating, we should be careful about some characteristics of peat moss. Peat moss starts expanding when you take them out from the bag or lay it flat from the bale. A skinny layer can swell up in weight and size – after absorbing water. That’s why a mere 1/8th inch of thin peat moss layer is enough for overseeing the lawn. But it is better to aim at a depth of one-fourth of an inch. So, let’s say when the lawn is 300 square feet, we will need either 13, 6, or 4 bales of peat moss. The calculation goes as follows: But if you do peat moss lawn afterward, it is easier to buy peat moss bags. A single 3-cubic foot bag should be enough to have a depth from ¼ to 1/8th of an inch spreading over a 300-square feet lawn. Now measure your lawn and do the calculations using the formula. That’s the way to calculate how much peat moss I need to cover grass seed. How Much Peat Moss Do I Need For Blueberries Using peat moss for blueberries is usually dependent on organic beings as well as the pH of the soil. For blueberries, the best ratio would be to use a single 3.8-cubic foot bale for every 10 bushes. So, if you want to use 1 or 2.2-cubic foot bales, you need 4 or 2 bales respectively. Just to be on safe side, we round the fractions to the next full digit. How Much Peat Moss Do I Need For Vegetable garden Peat moss is great for vegetables which benefit from acidic soil. So, if you are wondering if is peat moss good for tomatoes – now you know the answer. The best depth of peat moss in a vegetable garden should be a 2-3 inch spread incorporated over a 12-inch depth of soil. Using the same 2400 square feet space as an example, we can say that a little more than 2000 cubic feet of soil is necessary for your vegetable garden. Pros Of Peat Moss Frequently Asked Questions (FAQs) How much peat moss do I need for my lawn? If I have a 300 square feet lawn and like to use 3.8-cubic foot bales peat moss, I am going to need 4 bales of peat moss for my lawn. How many sq ft does a bag of peat moss cover? A bag of 1-cubic foot peat moss should comfortably cover 100 square feet with an inch depth. How many cubic feet of peat moss do I need? If I have a 60 by 40 feet garden to cover with 1-inch thick peat moss, I am going to need 100 cubic feet of peat moss. Final Words It is easy to figure out how much peat moss I need for various uses with a few simple calculations. Peat moss enhances the quality of soil by resisting compaction and providing aeration. They can also hold moisture and improve drainage. But they are also expensive, and overuse of releases methane and carbon dioxide into the atmosphere. So, it’s necessary that you do accurate calculations before using and avoid overuse whenever possible.
{"url":"https://dailyoutdoortips.com/how-much-peat-moss-do-i-need-are-you-calculating-right/","timestamp":"2024-11-14T00:15:26Z","content_type":"text/html","content_length":"72330","record_id":"<urn:uuid:78e32548-7ac8-4ba7-af28-dfdefd804291>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00529.warc.gz"}
ADC sampling rate matlab/simulink M-PSK 5 years ago ●14 replies● latest reply 5 years ago 599 views Dear all, I am trying to confirm Nyquist-Shannon theorem through matlab-simulink. More particular I am trying to confirm what would be the minimum achievable sampling rate for a given M-PSK waveform. In a real scenario the detected signal of certain bandwidth Y would be reaching the ADC in its continuous time form.Theoretically the ADC would have to sample at a rate 2xY for that signal to be recovered properly after digitization. In a simulation environment the waveform is being generated "digitally" meaning that is effectively a set of samples at a particular rate.The BW of that signal is being plotted using FFT under the assumption that in the far ends of the spectrum frequencies of half the sampling rate will appear. How will one confirm the Nyquist-Shannon theory given the above? If one tries to "select" some of those samples (try to approximate the behavior of an ADC,with no quantization noise applied) will introduce down-sampling. That will have dissimilar effects to the ones introduced by ADC sampling on a ct waveform (and most likely interpolation will be also necessary on the other end). What is the way usually followed to confirm the theory given the above? Could you please provide some pointers? [ - ] Reply by ●December 30, 2019 "How will one confirm the Nyquist-Shannon theory given the above?" Forgive me if English is not your native language, but there's a terminology issue here. It's not clear what you mean by "confirm". If you mean "prove" using simulation -- no, you can't. Mathematical theorems, like Nyquist-Shannon, are proven using logic, symbolically. A simulation can only cover one case. If you mean "demonstrate", then yes, you can demonstrate specific examples, but you cannot make an overall proof. As a counterexample, consider a signalling system that encodes $n$ bits into $M = 2^n$ symbols, where each symbol is a perfect rectangle $T$ seconds long and takes the value $e^{j 2 \pi k/M}$, where $k \in [0, M)$ and the values of $k$ are randomly chosen. The signal will have infinite bandwidth (its spectrum, if I'm getting the constants right, will be $\frac{sin \omega T}{\omega T}$). But if you sample at the center of each symbol, then in the absence of noise you'll have perfect reception. This counterexample is not the only one -- there are whole families of signals that fall into this general category. There is theory that pertains to this, but it's not the Shannon-Nyquist sampling My suggestion is that if you're ready to get away from cookbook explanations of Nyquist-Shannon (like mine, shameless plug: https://www.wescottdesign.com/articles/Sampling/sampling.pdf) that you either carefully read the modern literature on the subject, or that you look at communication's theory's two founding papers by Shannon. They're references 1 and 2 for this paper: http:// www.hit.bme.hu/~papay/edu/Conv/pdf/origins.pdf and if you search on the titles you'll find them. You'll need to understand the Fourier transform in and out, and how to model sampling as multiplication as a train of dirac impulses, but if you can do that and if you work at it you should be able to follow the proof. Then, having really proved it to yourself, you won't have to depend on inherently unreliable demonstrations. (I can't seem to get MathJax working today. I'm leaving this stand for now, though). [ - ] Reply by ●December 30, 2019 Thank you very much for your answer!Sorry for the confusion that my English has created. I am trying to provide some clarity on what would be the minimum sampling rate for an M-PSK waveform generated in Matlab/simulink.To be more precise random numbers are being "mapped" by a Matlab library to M-PSK complex symbols.Then those symbols are being passed through a RRC interpolation filter.The rate that the random numbers are generated and the interpolation factor are effectively defining the sample rate.However that sequence of symbols is where I need to consider "continuous time" and prove that if I sample Bellow a certain rate I wont be able to recover the information.That is the reason why I have used the name Nyquist.Forgive me if that was not very accurate [ - ] Reply by ●December 30, 2019 You keep using "prove" in the same statements with Matlab. In other words, you're asking how to prove something using experimental mathematics. You can use experimental mathematics to prove that a particular solution to a problem exists (by solving it). But that solution using experimental mathematics only defines one point in a problem space with many, possibly infinite dimensions. You cannot make general proofs using experimental methods in mathematics. And there's a proof for it -- in informal terms, to do so would be the same as filling in a continuous multidimensional space with a finite number of points -- and there's a pretty easy proof of why that's impossible even just on a line segment, much less a multidimensional space. So -- Matlab or real life, if you have a well-formed signal for which the timing is known then the minimum effective sampling rate is exactly once per symbol. This is well known from theory. Life gets more difficult if you need to recover carrier and bit timing -- the typical number quoted is twice the symbol timing, but just off the top of my head I can think of ways to do this with less if you allow me irregular sampling. There's a name for the condition you need to impose on the signal that makes it well-formed enough for that "once per symbol" conclusion to hold, but I can't remember it off the top of my head -- but the important part about it is that it takes half a page to prove mathematically, if you have the background, and you would never, ever be able to prove it experimentally because of that whole "fill a continuous space with a finite number of points" problem. [ - ] Reply by ●December 30, 2019 Very helpful,thank you so much! [ - ] Reply by ●December 30, 2019 At Tx you need two samples per symbol minimum. Why? for shaping because if you send symbols in air without shaping the bandwidth you will get in trouble with law affecting a wide spectrum. That is why you use RRC to shape the bandwidth. In air the information part of your signal is actually every other sample (your symbols). At Rx you will naturally receive that signal. You then exploit the two samples per symbol before downsampling to one sample per symbol. There is no need to prove what is already proven regarding Nyquist rule at ADC. That requires proper time dimension which does not exist in Matalab. [ - ] Reply by ●December 30, 2019 Thank you for your answer! [ - ] Reply by ●December 30, 2019 Is there a compelling reason why you want to prove the theorem with M-PSK signal? It is far easier to prove the theorem using a simple sine wave that is close to half the sample rate, and then interpolate (digitally) down from there to show how the spectrum folds around the Fs/2 [ - ] Reply by ●December 30, 2019 Thank you very much for your repky,yes a M-PSK waveform is the requirement.Since we talk about a communication system that would run using M-PSK. [ - ] Reply by ●December 30, 2019 It will depend very much on the waveform that you are using (or generating). If, for instance, you have square-wavelike behavior, your Fourier transform will present a Gibbs phenomenon at the corners; the same will be true at any discontinuity of the waveform. You can minimize the phenomenon by adding more terms, but never get rid of it. You are also assuming that the ADC is ideal, which is usually not. Any filtering introduced by the ADC will require for some oversampling above the Nyquist frequency. The same is true if the signal is digitally generated. You can try to confirm the Nyquist theorem by using as-clean-as-possible ADC simulators and well behaved signals, in a very long time interval (the idea of reconstructing via FT applies to perfectly periodic signals; once your time interval is bounded the signal can no longer be seen as periodic). [ - ] Reply by ●December 30, 2019 Thank you very much for your reply.Consider simply that I have a Matlab library "mapping" random numbers to M-PSK symbols.Those random numbers are generated at a certain rate which effectively defines the sampling rate of the system.Then those symbols are passed through an interpolation RRC.After that is the point that I would need to prove Nyquist theorem.So effectively, I need to prove that "sampling" that waveform Bellow a certain rate would effectively render it unrecoverable.And that rate I assume that would be the Nyquist rate?The confusing bit is that the waveform is already discrete so we can talk only about downsampling? [ - ] Reply by ●December 30, 2019 If a matched filter is applied and the sample timing is synchronized to the symbols, you only need one sample per symbol to fully recover M-PSK. [ - ] Reply by ●December 30, 2019 Only one sample per symbol even for non BPSK waveforms? [ - ] Reply by ●December 30, 2019 That is correct, any M-PSK or M-QAM or M-ASK signal requires only one (complex) sample per symbol to recover the information. Again, this assumes that a matched filter is applied (e.g., a matching RRC filter), and the sample timing is synchronized to the symbols centers. [ - ] Reply by ●December 30, 2019 Here is a Simulink model that can be used to answer your question. Random numbers between -1 and +1 with a period of 1/100 s are generated from a Random Number block. These can be viewed as in-phase components for an M-PSK or Q-PSK. With the Zero-Order Hold block, will these components scanned at 10000 Hz, so that 100 identical values occur in each period. The in-phase carrier for the modulation must then have a frequency between 0 and 10000/2 = 5000 Hz. A carrier frequency of 1000 Hz was selected in the model The demodulator is simulated with the same carrier. The double frequency is suppressed with the FIR filter from the Block Discrete FIR Filter. The demodulated sequence is shown on the Scope 2 block together with the input signal. On the Spectrum Analyzer blocks (at the top) you can observe the PSD at various points in the model. The average power of the input signal is determined in the model and shown on the display block. In the model, the PSD is also determined from the autocorrelation and displayed in the lower part. With the zoom function, the same PSD can be viewed as shown on the Spectrum Analyzer1. The autocorrelation shown on Array Plot block is: The PSD of the modulated Signal, as it is shown on Spectrum Analyser2 block is: The Simulink model M_PSK_2.slx
{"url":"https://www.dsprelated.com/thread/10114/adc-sampling-rate-matlab-simulink-m-psk","timestamp":"2024-11-02T07:46:46Z","content_type":"text/html","content_length":"62462","record_id":"<urn:uuid:da33e0d7-675c-451f-8a8d-416509be71a8>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00611.warc.gz"}