content
stringlengths
86
994k
meta
stringlengths
288
619
The Bank of England has spent its valentines with a QT If you are one to pay attention to macroeconomics you can’t have missed the Bank’s new infatuation with Quantitative Tightening (QT) - she’s been all the rage - and for this year alone, the Bank has decided to reduce its bond holdings by another £100 billion, bringing total reserves down to £658 billion. In plain English, this means that instead of ‘printing money’ through Quantitative Easing (QE), the Bank's monetary tightening measures have effectively extinguished the so-called “magic money tree.” Of course, this layman's explanation does little to clarify how the modern banking system - which sometimes does work like magic - actually functions. Like most things in life, the truth of how banks create and destroy money is complicated, but it looks something like this. Rather than relying on savings, banks loan money depending on the demand for credit from credit-worthy borrowers. By the stroke of a pen (this is the magic bit), banks effectively ‘create’ money out of thin air and then loan that money to the one who demands it. Even this explanation has its caveats. Say you take a loan from the bank. When the bank creates this loan, it is seen as an asset on their balance sheet as the loan is ‘owned’ by the bank and you owe the bank the loan. On the other hand, deposits, the money in your bank account, are seen as liabilities to the bank as the bank does not own these and instead has to produce the sum of these deposits on demand, hence, a liability. However, when banks create money through lending, they don’t just increase their assets - that would quite literally be a magic money tree - they, again, by the stroke of a pen (or more appropriately, an Excel spreadsheet), add a corresponding liability entry so that assets are equal to liabilities. Hooray, a balanced balance sheet! Yet, because of this liability entry, should the loans turn ‘bad’ and no longer be classed as assets, assets no longer equal liabilities, and the bank must effectively dip into its profits to cover these losses. So the world works, loans are only created when credit-worthy borrowers need them and commercial banks can't just continue making infinite money without any consequences. With the majority of money in the economy being created by commercial banks, this process dispels the myth that when the central bank creates money, it somehow functions as a ‘helicopter drop’ into people's pockets. This is not possible. Money creation from the central bank in the form of purchasing government bonds (QE) does not magically insert new money into the economy as the supply of money grows in line with the demand for money itself. Yet, this does not mean that QE has no impact on money creation whatsoever. If it didn't, it wouldn’t be a useful tool for central bankers. QE impacts the economy through two channels. The first is based on credit supply as QE dampens bond yields, which serve as a benchmark for interest rates. When these fall, credit becomes cheaper, allowing banks to lend to borrowers who may have previously been unprofitable. In the second channel, QE stimulates credit demand by making people ‘feel’ richer. This is because once pension funds sell their bonds to the Bank of England, they use the new cash from the sale to reinvest into other financial assets like corporate bonds or equities that you and I may own, making us richer by increasing their value. However, both of these channels still depend on the ‘real’ economy. Banks are only willing to lend if they are confident that creditors will repay their debts. Conversely, borrowers are only willing to borrow if they are certain of their future incomes and capacity to repay debt. As such, it’s not that QE itself creates inflation; instead, it simply accelerates existing inflationary trends that are down to strong market confidence. This is precisely what happened during the pandemic as broad money growth and inflation surged. Attentive readers will notice a big surge in money growth (M4) during the Financial Crisis; explaining why this was not inflationary requires an article of its own. Unsurprisingly, the Bank of England got a lot of blame for this from many commentators (including yours truly). However, many of these criticisms do stem from a misunderstanding that QE functions more like a money printer. Of course, as I have outlined, QE does not force money to ‘appear’ in the economy out of thin air. I mention this as the Bank of England hate train has continued throughout their Quantitative Tightening (QT) programme. QT is just the opposite of QE. Instead of buying bonds from pension funds to stimulate money demand and by implication money supply, the Bank sells the bonds it owns to reduce the supply of money in the economy. The negative press reception to QT has been split into two parts; monetarists, and what I will call ‘tax wonks’ (because to understand this you are either a central banker or a huge nerd). The monetarist view predicts that the Bank’s tightening programme will have deflationary consequences. I find that this is misguided as it is based on the presumption that the Bank has a strong influence over the money supply, which, as we have already discussed, is not the case. Even if one takes an altered monetarist perspective that focuses on QT depressing the demand, and thus supply, of money through accelerating existing negative growth trends, I still find the potential for deflation unlikely. UK labour markets remain tight despite rising company insolvencies and as such, money demand and economic activity will persist, albeit at a lower level. Now let's see if the tax wonks are better judges of character when it comes to giving the Bank advice on its newfound love. These commentators, including the Treasury Committee, have made the argument that the Bank’s QT programme will lead to increased costs to the taxpayer. However, proponents of this line of argument are a little confused. Despite what some commentators may suggest, QE and QT operations on their own do not create any losses for the taxpayer. When the Bank of England sees “losses” on its QE/QT programmes or even gains for that matter, these just represent internal transactions between the central bank and the treasury. The treasury pays interest on bonds to the central bank, which, after covering its costs (including any losses from bond sales), returns profits to the treasury or manages losses incurred. This circular flow of funds means that, at a consolidated level, there isn't an additional loss; rather, it's a redistribution of existing This does not mean that government debt has no cost, it certainly does. But the cost is not down to the composition of government obligations, rather, it's down to the higher interest paid on these obligations. That is the bill that taxpayers will be footing.
{"url":"https://www.leedspolicyinstitute.org.uk/articles/the-bank-of-england-has-spent-698829","timestamp":"2024-11-13T12:52:40Z","content_type":"text/html","content_length":"67439","record_id":"<urn:uuid:04b8b8dd-8200-44b4-b482-33c368bd83dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00219.warc.gz"}
Finding key themes from free-text reviews - OpenTable Technology Blog “Superb!” — often, a one word review like this one encapsulates the singularly superlative experience that one of our diners had at an OpenTable restaurant. While we do see a few of these extremely terse declarations of satisfaction pop up here and there, the typical OpenTable review is more verbose. Our reviewing customers often hit the 2000-character upper limit to pour out their hearts. They are passionate about every aspect of fine dining, and leave detailed, nuanced and constructive reviews of their journey through their dining experience. The level of care we see in these reviews make them unquestionably one of the most important sources of insights into the ecosystem of restaurants and diners. Potential diners will often go through hundreds of reviews to help them decide where to dine next. A restaurateur, on the other hand, will always keep a sharp eye out for reviews to gauge how their business is doing, and what, if anything, needs more work. Reviews can also be a great way to familiarize oneself with the dining scene in a new neighborhood, city, or country. Reviews can inform the diner about aspects of a restaurant that are not obvious from its description — is it a local gem or a tourist trap? Does this restaurant have a view? If so, a view of what, and is the view particularly stunning during sunset? Is the service friendly? Mining Reviews Mining reviews for insights is an obvious thing to do, but it is not necessarily easy. As is usually the case with unstructured data, there is a lot of information buried in a lot of noise. Once you have read through a few reviews you start getting the sense that there is only a handful of broad categories that people are writing about. These may range from food and drink related comments, to sentences devoted to ambiance, service, value for money, special occasions, and so on. Within a category like, say, ambiance, you would come across distinct themes such as live music, decor, and views. One review may contain only one or two of these themes (say seafood and views) while another may contain several. It is therefore only natural that one of the first things that one would like to extract from a corpus of reviews are all the themes that occur across reviews. In more technical parlance, what we have been calling themes are known as topics, and the technique of learning these topics from a corpus of documents is called Topic Modeling. Suppose all of our reviews are generated from a fixed vocabulary of, say, 100,000 words, and we learn 200 topics from this corpus. Each topic is a distribution over this vocabulary of words. The way one topic is different from another is in the weights with which each word occurs in them. In the image of above, we display a sample of six topics learned from our review corpus. We show the top 25 words from each topic, with the size of the each word scaled proportionally to the importance of that word in that topic. It does not take any effort to see how tightly knitted topics are, and what they are about. Just by looking at them, we know that the first one is about steakhouse food, the second one is about wine, the third about live music, and the next three about desserts, bar scene, and views. When modeling is performed topics basically just fall out. That this can be achieved is an amazing fact, given that we did not have to label or annotate the reviews beforehand, or tell the algorithm that we are working in the space of restaurant reviews. We basically throw in all the reviews in the mix, and out comes these topics. A byproduct of topic modeling are the weights with which each each topic is associated with each review. For example, consider the following three reviews: 1. “They had an extensive wine list to chose from, and we each ordered a glass of the 1989 Opus One to pair with our NY strip steaks. We sat near the live jazz band.” 2. “The view of the sunset over the ocean was spectacular, while we sat there savoring the dark chocolate pudding meticulously paired with the wine by our very knowledgeable sommelier. 3. “The restaurant was crowded so we sat at the bar. The bartender whipped up some amazing cocktails for us. There was blues playing in the background.” It is easy to see that Review 1 mainly draws from the wine, steakhouse and live music topics, while the other topics like desserts or view have zero weight in this review. Review 2, on the other hand, is about the view topic, a bit about the desserts topic, and again the wine topic. Review 3 draws mostly from the bar scene and the live music topics. The intuition here is that documents, in our case reviews, are composed of multiple topics. The share of topics in each review is different. Each word in each review comes from a topic, where that topic is one of the topics in the per-review topic distribution. Next, we discuss how in practice we learn topics from a review corpus. From Reviews to Topics using Matrix Factorization A popular approach for topic modeling is what is known as the Latent Dirichlet Allocation (LDA). A very approachable and comprehensive review of LDA is found in this article by David Blei. Here, I am going to use an alternative method to model topics, based on Non-negative Matrix Factorization (NMF). To see how reviews can be put in a matrix form, consider again the three reviews above. A usual first step is to remove stop words — terms that are too common, such as “a”, “and”, “of”, “to”, “that”, “was” etc. Now consider all the tokens left in these three reviews — we have 39 of them. So we can express the reviews as a 3 by 39 matrix, where the entries are the counts or term-frequencies (tf) for a certain token in a review. The matrix looks like the following: Note that while a word like bartender is unique to only one review, the word sat is all three reviews, and should have less weight in the matrix as it is less distinctive. To achieve this, one usually multiplies these term frequencies with an inverse-document-frequency (idf), which is defined as $$$logleft(frac{n}{1+m(t)}right)$$$ where $$$n$$$ is the number of documents in the corpus and $$$m(t)$$$ is the number of documents in which the token $$$t$$$ occurs. If a token occurs in all documents, the ratio within the brackets is almost equal to unity which makes its logarithm almost equal to zero. Here is what the matrix looks like after tf-idf. Note that the word “sat” now has much lower importance relative to other words. In practice, the document-term matrix $$$bf{D}$$$ can be quite big, $$$n$$$ documents tall and $$$v$$$ tokens wide, where $$$n$$$ can be a several millions, and $$$v$$$ several hundreds of thousands. One more step that is usually performed to precondition the matrix is to normalize each row, such that the squares of the elements add up to unity (other normalizations are also possible). Matrix Factorization (MF) takes such a matrix $$$bf{D}$$$ of dimension $$$[n times v]$$$, and approximates it as a product of two low rank matrices: a$$$[ntimes k]$$$ matrix $$$bf{W}$$$, and a $$$ [ktimes v]$$$ matrix $$$bf T$$$, where$$$k$$$ is a small number, typically in few tens to a few hundreds. This is shown schematically below: NMF is a variant of MF where we start with a matrix $$$bf D$$$ with non-negative entires like our document-term matrix, and also constrain the elements of $$$bf W$$$ and $$$bf T$$$ to be Everything being non-negative lets us interpret the factorization in an additive sense, and interpret each row of the $$$bf T$$$ matrix as a topic. This is how it works: Let’s take the first row of $$$bf D$$$. That is essentially our first review, expressed as a vector of length v. Remembering how matrix multiplication works, what the above relation tells us is that we can reconstruct this review approximately by linearly combining the $$$k$$$ rows of the matrix $$$bf T$$$ with weights taken from the first row of $$$bf W$$$ – the first element of the first row of $$$bf W$$$ multiplying the first row of $$$bf T$$$, the second element of the first row of $$$bf W$$$ multiplying the second row of $$$bf T$$$, and so on. Each row of $$$bf T$$$ is a distribution over the $$$v$$$ terms in a vocabulary, and easily interpreted as the topics described in the earlier section. What this factorization says is that each of the $$$n$$$ reviews (rows in $$$bf D$$$) can be built up by a different linear combination of the $$$k$$$ topics (rows in $$$bf T$$$). So there we have it, $$$bf W$$$ expresses the share of topics in each review, while each row of $$$bf T$$$ represents a topic. Here are some Python code to perform these steps: import sklearn.feature_extraction.text as text import numpy as np # This step performs the vectorization, # tf-idf, stop word extraction, and normalization. # It assumes docs is a Python list, #with reviews as its elements. cv = text.TfidfVectorizer(docs, stop_words='english') doc_term_matrix = cv.fit_transform(docs) # The tokens can be extracted as: vocab = cv.get_feature_names() # Next we perform the NMF with 20 topics from sklearn import decomposition num_topics = 20 #doctopic is the W matrix decomp = decomposition.NMF(n_components = num_topics, init = 'nndsvd') doctopic = decomp.fit_transform(doc_term_matrix) # Now, we loop through each row of the T matrix # i.e. each topic, # and collect the top 25 words from each topic. n_top_words = 25 topic_words = [] for topic in decomp.components_: idx = np.argsort(topic)[::-1][0:n_top_words] topic_words.append([vocab[i] for i in idx]) A note about initialization The standard random initialization of NMF does not guarantee a unique factorization, and interpretability of topics. Note that we specified the initialization with ‘nndsvd’ above, which stands for Nonnegative Double Singular Value Decomposition (Boutsidis & Gallopoulos, 2008). It choses initial k factors based on positive components of the first k dimensions of SVD, and makes the factorization Topics at OpenTable At OpenTable, we have performed topic modeling on tens of millions of textual reviews, and these topics are being used in various innovative applications that will be the subject of subsequent posts. We have already seen six topics depicted as word clouds. Here are six more just to show how nicely they touch upon different aspects of dining. I leave you with a teaser of how topics can be used to extract regional nuances. As an experiment we ran topic analysis separately on US cities and UK cities. We found that while in the US Sunday brunch is a common thing, the UK has its Sunday roasts. Also, the concept of wine pairing somewhat loses ground to wine matching while transitioning from the US to the UK! More in future posts!
{"url":"https://tech.opentable.com/finding-key-themes-from-free-text-reviews/","timestamp":"2024-11-03T15:08:41Z","content_type":"text/html","content_length":"107202","record_id":"<urn:uuid:cba3203c-655f-4b80-b5ac-95e2bdf7b0d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00805.warc.gz"}
Hexadecimal Arithmetic Calculator - cryptocrape.com Hexadecimal Arithmetic Calculator Results in Different Bases: How to Use Hexadecimal Arithmetic Calculator? This Hexadecimal Arithmetic Calculator allows you to perform basic arithmetic operations (addition, subtraction, multiplication, and division) on hexadecimal numbers. Follow these steps to use the calculator effectively: 1. Input Hexadecimal Numbers: □ In the first input field labeled “Hex 1,” enter your first hexadecimal number. □ In the second input field labeled “Hex 2,” enter your second hexadecimal number. □ Ensure that you only use valid hexadecimal characters (0-9, A-F, a-f). For example, 1A, 2B, FF are valid inputs, while G5 or 1.5 are not. 2. Select an Operation: □ Use the dropdown menu labeled “Select Operation” to choose the arithmetic operation you want to perform: ☆ Addition: Adds the two hexadecimal numbers. ☆ Subtraction: Subtracts the second number from the first. ☆ Multiplication: Multiplies the two hexadecimal numbers. ☆ Division: Divides the first number by the second. 3. Calculate: □ After entering the numbers and selecting an operation, click the “Calculate” button. □ The result will be displayed in hexadecimal format, and you will also see the equivalent values in decimal, binary, and octal formats. 4. View Results: □ The result will be shown in the designated area below the input fields. □ In addition to the hexadecimal result, you’ll see: ☆ Decimal: The result converted to decimal format. ☆ Binary: The result in binary format. ☆ Octal: The result in octal format. 5. Clear Inputs: □ If you want to perform a new calculation, you can click the “Clear” button. □ This will reset the input fields and clear the results, allowing you to start fresh. 6. Error Handling: □ If you enter an invalid hexadecimal number, a message will be displayed prompting you to enter a valid input. □ If you try to divide by zero, an error message will notify you that division by zero is not allowed. Example Usage • Addition: If you enter 1A in Hex 1 and 2B in Hex 2, select “Addition,” and click “Calculate,” the result will be displayed in hexadecimal and other formats. • Subtraction: For Hex 1 as FF and Hex 2 as 1, selecting “Subtraction” will show the result of FF - 1 in all bases. • Division: Inputting A for Hex 1 and 2 for Hex 2 will allow you to calculate A / 2.
{"url":"https://cryptocrape.com/hexadecimal-arithmetic-calculator/","timestamp":"2024-11-14T02:05:16Z","content_type":"text/html","content_length":"90752","record_id":"<urn:uuid:e92f09fc-bc79-4c70-8e79-0ddaeff4068c>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00882.warc.gz"}
How many gibibits in 4000 terabits? From four thousand tbit. to gibit. tbit gibit (Gibibit) Description 1 931.322574615478515625 1 terabit (one) is equal to 931.322574615478515625 gibibit (nine hundred and thirty-one point three hundred and twenty-two quadrillion five hundred and seventy-four tbit gibit trillion six hundred and fifteen billion four hundred and seventy-eight million five hundred and fifteen thousand six hundred and twenty-five) 2 1862.64514923095703125 2 terabit (two) is equal to 1862.64514923095703125 gibibit (one thousand eight hundred and sixty-two point sixty-four quadrillion five hundred and fourteen trillion nine tbit gibit hundred and twenty-three billion ninety-five million seven hundred and three thousand one hundred and twenty-five) 4 3725.2902984619140625 4 terabit (four) is equal to 3725.2902984619140625 gibibit (three thousand seven hundred and twenty-five point two quadrillion nine hundred and two trillion nine hundred tbit gibit and eighty-four billion six hundred and nineteen million one hundred and forty thousand six hundred and twenty-five) 8 7450.580596923828125 8 terabit (eight) is equal to 7450.580596923828125 gibibit (seven thousand four hundred and fifty point five hundred and eighty trillion five hundred and ninety-six tbit gibit billion nine hundred and twenty-three million eight hundred and twenty-eight thousand one hundred and twenty-five) 16 14901.16119384765625 16 terabit (sixteen) is equal to 14901.16119384765625 gibibit (fourteen thousand nine hundred and one point sixteen trillion one hundred and nineteen billion three tbit gibit hundred and eighty-four million seven hundred and sixty-five thousand six hundred and twenty-five) 32 29802.3223876953125 32 terabit (thirty-two) is equal to 29802.3223876953125 gibibit (twenty-nine thousand eight hundred and two point three trillion two hundred and twenty-three billion tbit gibit eight hundred and seventy-six million nine hundred and fifty-three thousand one hundred and twenty-five) 64 59604.644775390625 64 terabit (sixty-four) is equal to 59604.644775390625 gibibit (fifty-nine thousand six hundred and four point six hundred and forty-four billion seven hundred and tbit gibit seventy-five million three hundred and ninety thousand six hundred and twenty-five) 128 119209.28955078125 128 terabit (one hundred and twenty-eight) is equal to 119209.28955078125 gibibit (one hundred and nineteen thousand two hundred and nine point twenty-eight billion nine tbit gibit hundred and fifty-five million seventy-eight thousand one hundred and twenty-five) 256 238418.5791015625 256 terabit (two hundred and fifty-six) is equal to 238418.5791015625 gibibit (two hundred and thirty-eight thousand four hundred and eighteen point five billion seven tbit gibit hundred and ninety-one million fifteen thousand six hundred and twenty-five) 512 476837.158203125 gibit 512 terabit (five hundred and twelve) is equal to 476837.158203125 gibibit (four hundred and seventy-six thousand eight hundred and thirty-seven point one hundred and tbit fifty-eight million two hundred and three thousand one hundred and twenty-five) 1024 953674.31640625 gibit 1024 terabit (one thousand and twenty-four) is equal to 953674.31640625 gibibit (nine hundred and fifty-three thousand six hundred and seventy-four point thirty-one tbit million six hundred and forty thousand six hundred and twenty-five) 2048 1907348.6328125 gibit 2048 terabit (two thousand and forty-eight) is equal to 1907348.6328125 gibibit (one million nine hundred and seven thousand three hundred and forty-eight point six tbit million three hundred and twenty-eight thousand one hundred and twenty-five) 4096 3814697.265625 gibit 4096 terabit (four thousand and ninety-six) is equal to 3814697.265625 gibibit (three million eight hundred and fourteen thousand six hundred and ninety-seven point two tbit hundred and sixty-five thousand six hundred and twenty-five) 8192 7629394.53125 gibit 8192 terabit (eight thousand one hundred and ninety-two) is equal to 7629394.53125 gibibit (seven million six hundred and twenty-nine thousand three hundred and tbit ninety-four point fifty-three thousand one hundred and twenty-five)
{"url":"https://www.kilomegabyte.com/4000-tbit-to-gibit","timestamp":"2024-11-11T03:43:25Z","content_type":"text/html","content_length":"31146","record_id":"<urn:uuid:977aaf5b-28cb-487c-b7e0-a41a4a3ebaf1>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00534.warc.gz"}
How to Calculate Concrete Mixture Ratio What are the ingredients needed to make concrete? To make concrete, Hassan needs cement, sand, and aggregate mixed in the ratio 2:5:7. If the quantity of cement used to concrete a building is 3 tonnes, then: A) How much total concrete can he make? B) How much sand is needed to make the concrete? To make concrete, the total concrete that Hassan can make is 21 tonnes, and he needs 7.5 tonnes of sand for the 3 tonnes of cement, based on the given ratio of cement, sand, and aggregate which is Concrete is a fundamental building material that requires careful calculation of the mixture ratio to ensure strength and durability. In Hassan's case, the concrete mixture comprises three main ingredients: cement, sand, and aggregate. These materials are combined in the ratio of 2:5:7, respectively. When Hassan uses 3 tonnes of cement to concrete a building, we can calculate the total amount of concrete he can make by understanding the ratio. The total ratio of the mixture is 2 (cement) + 5 (sand) + 7 (aggregate) = 14 parts. Therefore, for every 2 parts of cement used, 14/2 = 7 parts of concrete are produced. Hence, 3 tonnes of cement will yield 3 * 7 = 21 tonnes of concrete in total. Additionally, to determine the amount of sand needed, we can refer to the mixture ratio. For every 2 parts of cement, 5 parts of sand are required. Hence, when 3 tonnes of cement are used, the amount of sand needed is 3 * 5/2 = 7.5 tonnes. This calculation showcases the importance of understanding ratios in mixture preparation and highlights the precise measurements required in concrete construction. By following the correct ratio, Hassan can ensure the quality and strength of the concrete used in his building projects.
{"url":"https://theletsgos.com/engineering/how-to-calculate-concrete-mixture-ratio.html","timestamp":"2024-11-07T13:10:26Z","content_type":"text/html","content_length":"21874","record_id":"<urn:uuid:1f4696b7-9ea6-42c0-8994-a04be30b4a24>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00181.warc.gz"}
Investigating the Performance of Alternate Regression Weights by Studying All Possible Criteria in Regression Models with a Fixed Set of Predictors We describe methods for assessing all possible criteria (i.e., dependent variables) and subsets of criteria for regression models with a fixed set of predictors, x (where x is an n×1 vector of independent variables). Our methods build upon the geometry of regression coefficients (hereafter called regression weights) in n-dimensional space. For a full-rank predictor correlation matrix, R [xx], of order n, and for regression models with constant R^2 (coefficient of determination), the OLS weight vectors for all possible criteria terminate on the surface of an n-dimensional ellipsoid. The population performance of alternate regression weights-such as equal weights, correlation weights, or rounded weights-can be modeled as a function of the Cartesian coordinates of the ellipsoid. These geometrical notions can be easily extended to assess the sampling performance of alternate regression weights in models with either fixed or random predictors and for models with any value of R ^2. To illustrate these ideas, we describe algorithms and R (R Development Core Team, 2009) code for: (1) generating points that are uniformly distributed on the surface of an n-dimensional ellipsoid, (2) populating the set of regression (weight) vectors that define an elliptical arc in ℝ^n, and (3) populating the set of regression vectors that have constant cosine with a target vector in ℝ^n. Each algorithm is illustrated with real data. The examples demonstrate the usefulness of studying all possible criteria when evaluating alternate regression weights in regression models with a fixed set of predictors. • Monte Carlo • multiple regression • weighting Dive into the research topics of 'Investigating the Performance of Alternate Regression Weights by Studying All Possible Criteria in Regression Models with a Fixed Set of Predictors'. Together they form a unique fingerprint.
{"url":"https://experts.umn.edu/en/publications/investigating-the-performance-of-alternate-regression-weights-by-","timestamp":"2024-11-11T10:38:19Z","content_type":"text/html","content_length":"54809","record_id":"<urn:uuid:31492a51-5018-4232-ba06-502ef2b5d41a>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00739.warc.gz"}
A note on Zipf’s law Very often when a number is large, and we don’t know or care exactly how large it is, we can model it as infinite. This may make no practical difference and can make calculations much easier. I give several examples of this in the article Infinite is easier than big. When you run across a statement that something is infinite, you can often mentally replace “infinite” with “big, but so big that it doesn’t matter exactly how big it is.” Conversely, the term “finite” in applied mathematics means “limited in a way that matters.” But when we say something does or does not matter, there’s some implicit context. It does or does not matter for some purpose. We can get into trouble when we’re unclear of that context or when we shift contexts without realizing it. Zipf’s law Zipf’s law came up a couple days ago in a post looking at rare Chinese characters. That post resolves a sort of paradox, that rare characters come up fairly often. Each of these rare characters is very unlikely to occur in a document, and yet collectively its likely that some of them will be necessary. In some contexts we can consider the number of Chinese characters or English words to be infinite. If you want to learn English by first learning the meaning of every English word, you will never get around to using English. The number of words is infinite in the sense that you will never get to the end. Zipf’s law says that the frequency of the nth word in a language is proportional to n^−s. This is of course an approximation, but a surprisingly accurate approximation for such a simple statement. Infinite N? The Zipf probability distribution depends on two parameters: the exponent s and the number of words N. Since the number of words in any spoken language is large, and its impossible to say exactly how large it is, can we assume N = ∞? This seems like exactly the what we were talking about at the top of this article. Whether it is possible to model N as infinite depends on s. The value of s that models word frequency in many languages is close to 1. The plot below illustrates Zipf’s law for the text of Wikipedia in many different languages. In each case, the slope on the log-log plot is approximately 1, i.e. s ≈ 1. When s = 1, we don’t have a probability distribution because the sum of 1/n from 1 to ∞ diverges. And so no, we cannot assume N = ∞. Now you may object that all we have to do is set s to be slightly larger than 1. If s = 1.0000001, then the sum of n^−s converges. Problem solved. But not really. When s = 1 the series diverges, but when s is slightly larger than 1 the sum is very large. Practically speaking this assigns too much probability to rare words. If s = 2, for example, one could set N = ∞. The Zipf distribution with s = 2 may be useful for modeling some things, but not for modeling word frequency. Zipf’s law applied to word frequency is an interesting example of a model that contains a large N, and although it doesn’t matter exactly how large N is, it matters that N is finite. In the earlier article, I used the estimate that Chinese has 50,000 characters. If I had estimated 60,000 characters the conclusion of the article would not have been too different. But assuming an infinite number of characters would result in substantially overestimating the frequency of rare words. Related posts Image above by Sergio Jimenez, CC BY-SA 4.0, via Wikimedia Commons One thought on “A note on Zipf’s law” 1. In high-energy physics ‘big and indeterminate, but certainly not infinite’ is a very common kind of number. The corresponding intuition to bridge the two cases s=1 would be to introduce a cutoff: work with a Zipf’s law density multiplied by a (possibly smoothed) indicator function [n<N] for some large N, and then look at the dependency on N in order to characterize and possibly correct the 'infrared' behaviour of the model.
{"url":"https://www.johndcook.com/blog/2023/07/25/a-note-on-zipfs-law/","timestamp":"2024-11-04T21:40:44Z","content_type":"text/html","content_length":"53961","record_id":"<urn:uuid:d210b3f0-159e-42d0-aa8f-a5ef655b8898>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00144.warc.gz"}
Calculating for KVA in Single and Three Phase Transformers Source: Sensors online article by Andrew Holland |Apr 3, 2018 1:37pm. Need to size a single or three-phase transformer? Transformer sizes are dictated by their respective KVA rating. Using common variables, one can compute for the required KVA rating or transformer size for a particular project, system or operation. This article provides basic formulas for finding the correct size of single and three-phase transformers using load voltage and load current. Single Phase KVA Calculation The formula for finding the required KVA or transformer size for single-phase power is the following: Volts x Amps / 1,000 = KVA Based on the equation, one would need to plug in the proper load/output (secondary) voltage and current (amps) to compute for KVA. Note that load voltage is not the same as line voltage, which is also known as primary voltage or input. Example: Find the KVA or transformer size for load voltage of 120V 1PH and a load current of 50A. 120 x 50 / 1,000 = KVA 6,000 / 1,000 = KVA = 6 KVA Three-phase KVA Calculation Businesses that require three-phase power need to add an extra component in the formula to arrive at the correct transformer size, i.e., square root of 3 (√3) or 1.732. This figure is a constant found in three phase, as the phases do not generate the same amount of power (simultaneously). Furthermore, three-phase transformers handle three lines of AC power, with each of the three lines 120 degrees out of phase from the other two lines. With this in mind, the new formula can be found below: Volts x Amps x √3 / 1,000 = KVA Example: Find the KVA or transformer size for load voltage of 240 3PH and a load current of 60 amps. 240 x 60 x 1.732 / 1,000 = KVA = 24.94 KVA (or 25 KVA after rounding up) Future Expansion and Standard Transformer Sizes Computing for the required KVA is not the final step in determining the proper transformer size. Most computations (especially for three-phase loads) do not provide a whole number. As a result, the value must be rounded up, as seen in the sample above. It is best practice to always round up and not down. Next, in order to factor in future expansion and prevent risks associated with accidental overloading, one should add 20 percent of spare capacity. Taking the three-phase sample again, we simply add 20 percent to the rounded figure: 25 KVA + 5 = 30 KVA Lastly, one may find that the specific transformer size needed is not being offered or is unavailable by the store or preferred manufacturer. In most cases, this is because there are standard KVA sizes for transformers. If you cannot find the size you need, simply round up again to the next standard KVA size. For referencing, the standard KVA sizes for singe-phase transformers are 1, 1.5, 2, 3, 5, 7.5, 10, 15, 25, 37.5, 50, 75, 100, 167, 200, 250 and 333 KVA Taking our answer from the single-phase example above – 6 KVA or 7.2 KVA (with 20% spare capacity); we can see that there is no standard single-phase equivalent available. As a solution, simply round up to the next standard single-phase KVA size: 7.5 KVA. Standard sizes for three-phase transformers are 3, 6, 9, 15, 30, 45, 75, 112.5, 150, 225, 300, 500, 750 and 1,000 KVA Taking our final three-phase figure of 30 KVA, we can see that it matches with a standard three-phase transformer size above, i.e., 30 KVA. No further rounding or conversion is needed, since 30 KVA is a standard three-phase transformer size.
{"url":"https://passive-components.eu/calculating-for-kva-in-single-and-three-phase-transformers/?amp=1","timestamp":"2024-11-10T03:17:25Z","content_type":"text/html","content_length":"63737","record_id":"<urn:uuid:bbc9d526-11eb-4f14-886e-b416b50cc311>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00492.warc.gz"}
Construct a Taylor Series expansion around a point • Alias: None • Arguments: None The Taylor series model is purely a local approximation method. That is, it provides local trends in the vicinity of a single point in parameter space. The order of the Taylor series may be either first-order or second-order, which is automatically determined from the gradient and Hessian specifications in the responses specification (see responses for info on how to specify gradients and Hessians) for the truth model Known Issue: When using discrete variables, there have been sometimes significant differences in surrogate behavior observed across computing platforms in some cases. The cause has not yet been fully diagnosed and is currently under investigation. In addition, guidance on appropriate construction and use of surrogates with discrete variables is under development. In the meantime, users should therefore be aware that there is a risk of inaccurate results when using surrogates with discrete variables. The first-order Taylor series expansion is: anchor eq-taylor1 f{equation} hat{f}({bf x}) approx f({bf x}_0) + nabla_{bf x} f({bf x}_0)^T ({bf x} - {bf x}_0) f} and the second-order expansion is: anchor eq-taylor2 f{equation} hat{f}({bf x}) approx f({bf x}_0) + nabla_{bf x} f({bf x}_0)^T ({bf x} - {bf x}_0) + frac{1}{2} ({bf x} - {bf x}_0)^T nabla^2_{bf x} f({bf x}_0) ({bf x} - {bf x}_0) f} where \({\bf x}_0\) is the expansion point in \(n\) -dimensional parameter space and \(f({\bf x}_0)\) , \(\nabla_{\bf x} f({\bf x}_0)\) , and \(\nabla^2_{\bf x} f({\bf x}_0)\) are the computed response value, gradient, and Hessian at the expansion point, respectively. As dictated by the responses specification used in building the local surrogate, the gradient may be analytic or numerical and the Hessian may be analytic, numerical, or based on quasi-Newton secant In general, the Taylor series model is accurate only in the region of parameter space that is close to \({\bf x}_0\) . While the accuracy is limited, the first-order Taylor series model reproduces the correct value and gradient at the point \(\mathbf{x}_{0}\) , and the second-order Taylor series model reproduces the correct value, gradient, and Hessian. This consistency is useful in provably-convergent surrogate-based optimization. The other surface fitting methods do not use gradient information directly in their models, and these methods rely on an external correction procedure in order to satisfy the consistency requirements of provably-convergent SBO.
{"url":"https://snl-dakota.github.io/docs/6.20.0/users/usingdakota/reference/model-surrogate-local-taylor_series.html","timestamp":"2024-11-10T18:19:35Z","content_type":"text/html","content_length":"16603","record_id":"<urn:uuid:02971ede-a870-4e2c-ab1d-a037b86591b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00710.warc.gz"}
Best integration assignment help homework help online Students having integration as their subject often have confusions as to differentiation. They face problems while solving questions related to integration. Such students need help of an expert while carrying out their assignments. Best Integration Assignment Help Service provides you the best mentors to guide you in this subject. Without a proper understanding, it is hard to complete an assignment. Hence, Best Integration Assignment Help Service tutors firstly help you in having an idea about the topic so that while doing your assignment you are aware of the formulas that is going to be used. Best Integration Assignment Help Service teachers have explained the meaning of integration as Integration is the process of finding the area under a graph. It is the reverse process to differentiation, and has wide applications like finding areas under curves and volumes of solids. It is an important concept in mathematics and is one of the operations in calculus. Integration and its reverse differentiation are the two important calculus operations that play a vital role in mathematics. “∫” is the symbol which is used to denote integral. Services We Offer Popular Mathematics Assignment Help Online Services Mathematics assignment always put student on worries and stress. But with Assignment Consultancy for your help, you can select any of our popular services for Math and remove all your worries here. Why Select Us Assignment Help Online Features Zero Plagiarism We believe in providing no plagiarism work to the students. All are our works are unique and we provide Free Plagiarism report too on requests. Best Customer Service Our customer representatives are working 24X7 to assist you in all your assignment needs. You can drop a mail to assignmentconsultancy.help@gmail.com or chat with our representative using live chat shown in bottom right corner. Three Stage Quality Check We are the only service providers boasting of providing original, relevant and accurate solutions. Our three stage quality process help students to get perfect solutions. 100% Confidential All our works are kept as confidential as we respect the integrity and privacy of our clients. Referral Program Refer us and Earn up to 5000 USD Place Order and generate unique Code Whenever you make a payment. You are eligible for a referral code, just request in email so that you will get the code which you can share with your friends. Earn Money You will be eligible for referral bonus if your friend place the order using the same referral code using no other discounts after successful payment made by him. Encash it or Use it in your next assignments You can request the encashment as mentioned in step 2 or you can use it as a method of payment for your next assignments. Integration in mathematics can be broadly classified in four major parts The process of calculating indefinite integrals is known as indefinite integration or antiderivatives Definite Integral – The computation of a definite integral finds various uses in mathematics from averaging continuous functions to computing areas It constitutes algorithms that are used for calculating the numerical value of a definite integral. Usually the calculations are done on computers. Antiderivatives and definite integrals are computed mostly on computers Some of the popular integration techniques are integration as summation, integration using anti-derivatives, integration by substitution, integrating algebraic fractions, integration for finding areas, integration by parts, integration leading to log functions, integration as a reverse of differentiation, approximating definite integrals, integration using trigonometric formulas, improper integrals and partial fractions. Students require help of teachers for solving integration problems from these techniques which is well provided by Best Integration Assignment Help Service. Basic formula: The definite integral of a function is closely related to the antiderivative and indefinite integral of a function. It is important to know the primary differences between the terms. Best Integration Assignment Help Service covers the following topics related to integration Integration by Parts Integrals Involving Trig Functions Trig Substitutions Partial Fractions Integrals Involving Roots Integrals Involving Quadratics Integration Strategy Improper Integrals Comparison Test for Improper Integrals Approximating Definite Integrals for Integration Why should you choose Best Integration Assignment Help Service? Integration is not that easy to understand. You need an expert to understand the different formulas and their applications. Best Integration Assignment Help Service has a team of experts who are well qualified in this area. We even provide you quality work. We understand the time period limits. Our experts take care of your deadlines. We avoid plagiarism. No compromise in quality. We deliver assignments on time. Moreover, services provided by us is very affordable. Looking for Best integration assignment help online,please click here
{"url":"https://www.assignmentconsultancy.com/integration-assignment-help/","timestamp":"2024-11-08T07:34:46Z","content_type":"text/html","content_length":"80600","record_id":"<urn:uuid:735e11de-fc78-4487-9272-cc0a12979a74>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00224.warc.gz"}
This information is part of the Modelica Standard Library maintained by the Modelica Association. Calculation of pressure loss in a straight pipe for laminar flow regime of single-phase fluid flow only. This function shall be used inside of the restricted limits according to the referenced literature. • circular cross sectional area • laminar flow regime (Reynolds number Re ≤ 2000) [VDI-Wärmeatlas 2002, p. Lab, eq. 3] The pressure loss dp for straight pipes is determined by: dp = lambda_FRI * (L/d_hyd) * (rho/2) * velocity^2 lambda_FRI as Darcy friction factor [-]. L as length of straight pipe [m], d_hyd as hydraulic diameter of straight pipe [m], rho as density of fluid [kg/m3], velocity as mean velocity [m/s]. The Darcy friction factor lambda_FRI of straight pipes for the laminar flow regime is calculated by Hagen-Poiseuilles law according to [Idelchik 2006, p. 77, eq. 2-3] as follows: • Laminar flow regime is restricted to a Reynolds number Re ≤ 2000 • and calculated through: lambda_FRI = 64/Re lambda_FRI as Darcy friction factor [-], Re as Reynolds number [-]. The Darcy friction factor lambda_FRI in the laminar regime is independent of the surface roughness K as long as the relative roughness k = surface roughness/hydraulic diameter is smaller than 0.007. A higher relative roughness k than 0.007 leads to an earlier leaving of the laminar regime to the transition regime at some value of Reynolds number Re_lam_leave. This earlier leaving is not modelled here because only laminar fluid flow is considered. The Darcy friction factor lambda_FRI in dependence of Reynolds number is shown in the figure below. The pressure loss dp for the laminar regime in dependence of the mass flow rate of water is shown in the figure below. Note that this pressure loss function shall not be used for the modelling outside of the laminar flow regime at Re > 2000 even though it could be used for that. If the whole flow regime shall be modelled, the pressure loss function dp_overall can be used. Elmqvist,H., M.Otter and S.E. Cellier: Inline integration: A new mixed symbolic / numeric approach for solving differential-algebraic equation systems.. In Proceedings of European Simulation MultiConference, Prague, 1995. Handbook of hydraulic resistance. Jaico Publishing House, Mumbai, 3rd edition, 2006. VDI - Wärmeatlas: Berechnungsblätter für den Wärmeübergang. Springer Verlag, 9th edition, 2002.
{"url":"https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Fluid.Dissipation.Utilities.SharedDocumentation.PressureLoss.StraightPipe.dp_laminar.html","timestamp":"2024-11-12T23:52:49Z","content_type":"text/html","content_length":"34031","record_id":"<urn:uuid:bf4ffeeb-b5ec-4f6d-a0ff-e18ffb5a9517>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00615.warc.gz"}
PWC100 - Triangle Sum ETOOBUSY 🚀 minimal blogging for the impatient On with TASK #2 from the Perl Weekly Challenge #100. Enjoy! The challenge You are given triangle array. Write a script to find the minimum path sum from top to bottom. When you are on index i on the current row then you may move to either index i or index i + 1 on the next row. The questions I can definitely hear Mohammad S Anwar muttering to himself about that objectively annoying polettix insisting on having a more specific example for the input of this challenge. Because… it’s nice to see this: Input: Triangle = [ [1], [2,4], [6,4,9], [5,1,7,2] ] A-ha! I will assume that there’s no parsing involved then, and that the representation is already in easy-to-use Perl arrays of arrays. Thanks! Reading through the Perl Weekly Review #097 by Colin Crain I got to understand how much stuff I give for granted when I read these challenges. For example, the fact that the the minimum number of changes in the binary substrings might land on none of the actual sub-sequences was an epiphany! So I wonder… what might I be missing here?!?. I hope nothing. The solution I know, I know. I said: no parsing, yay! Alas, I like to have programs around the functions that solve these challenges, which usually means messing up with the command line. So let’s take this away first: sub triangularize (@list) { my @retval; my $n = 1; while (@list) { die "invalid number of elements\n" unless @list >= $n; push @retval, [splice @list, 0, $n]; return \@retval; This takes a flat list of items and groups them in the right way for the puzzle, producing an array of arrays as output. OK, back on the main track, let’s see my solution to this challenge: sub triangle_sum ($tri) { my @s = $tri->[0][0]; my $i = 1; while ($i <= $tri->$#*) { my $l = $tri->[$i]; my @ns = $s[0] + $l->[0]; push @ns, $l->[$_] + ($s[$_ - 1] < $s[$_] ? $s[$_ - 1] : $s[$_]) for 1 .. $l->$#* - 1; push @ns, $s[-1] + $l->[-1]; @s = @ns; return min(@s); We keep an array @s of the best sums so far that landed us on a specific spot. This starts with the very first line in our triangle, which contains only one single item ($tri->[0][0]). For each following line, we calculate the next sums in @ns. There are three cases: • the left-most item can only come from the left-most item in the previous line; • the right-most item can only come from the right-most item in the previous line; • all other items (if any) can come from two possible previous line’s items. For this reason, calculating the two external elements in @ns is straightforward, while for the middle ones we have to understand what is the best previous item, which in this case means which of these previous items is the lower one. When we’re done calculating the next sums in @ns, we can update @s and move on. When we’re done with the last line, we just have to calculate the minimum of all the possible sums up to the last line and we’re done! Here is the whole program, for the masochists: #!/usr/bin/env perl use 5.024; use warnings; use experimental qw< postderef signatures >; no warnings qw< experimental::postderef experimental::signatures >; use List::Util 'min'; sub triangle_sum ($tri) { my @s = $tri->[0][0]; my $i = 1; while ($i <= $tri->$#*) { my $l = $tri->[$i]; my @ns = $s[0] + $l->[0]; push @ns, $l->[$_] + ($s[$_ - 1] < $s[$_] ? $s[$_ - 1] : $s[$_]) for 1 .. $l->$#* - 1; push @ns, $s[-1] + $l->[-1]; @s = @ns; return min(@s); sub triangularize (@list) { my @retval; my $n = 1; while (@list) { die "invalid number of elements\n" unless @list >= $n; push @retval, [splice @list, 0, $n]; return \@retval; my @list = @ARGV ? @ARGV : qw< 1 2 4 6 4 9 5 1 7 2 >; say triangle_sum(triangularize(@list)); Stay safe folks!
{"url":"https://etoobusy.polettix.it/2021/02/18/pwc100-triangle-sum/","timestamp":"2024-11-05T07:10:23Z","content_type":"text/html","content_length":"17999","record_id":"<urn:uuid:33e84d82-4453-471c-85a7-c6c643187a6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00067.warc.gz"}
Autogenerating Currency Conversion Formulas In Google Sheets Using GOOGLEFINANCE(CONCATENATE()) I forgot where I heard this, but on some or another podcast it was declared that “Microsoft Excel is the gateway drug to programming.” And I’m coming to believe this more and more. I’m someone with more than a passing interest in programming who simultaneously lacks any serious proficiency with writing code. Irony of ironies, right? Also, I’m now in this weird liminal space between being really good at manipulating data in a spreadsheet-type environment and seeing the possibility of what I can do with a real programming language while still sorely lacking in the practical know-how to get a lot of things done. And that’s why I’m still tied to using Excel and Google Sheets for a lot of my data analysis work. Both are fine tools, albeit with their own quirks and limitations. Although Excel is good for the kind of work I do, which typically involves big CSVs that Google Sheets kind of chokes on, I strongly prefer Google Sheets for its more expressive functions and easy connectivity to outside data stores, including Google’s own services. It’s for this reason that I’m going to be covering the subject of creating abstract, auto-generating formulas in Google Sheets only. Also, I’m like 90% sure one can’t use CONCATENATE() in Excel for the use case I’m presenting here, but I’d love to be proven wrong. The challenge Part of my job is to analyze venture capital data, and oftentimes the scope of my analysis expands beyond US borders. Accordingly, I often have to deal multiple currencies, which can be a pain. So imagine a column containing data for an arbitrarily large number of VC deals priced in an arbitrary number of different international currencies. (For the sake of this example, let’s assume these are all fairly recent deals, so historically accurate currency conversion figures aren’t important.) Remembering that our dataset can be arbitrarily large, what’s the easiest way to convert those foreign currencies into USD so we can make comparisons on equal terms? An example dataset Below, to avoid using proprietary data, I fabricated an array of 20 sample deals using a random number generator and multiplying its output by different scalars depending on the round type. The size of deals do not necessarily correspond to any real-world averages and are here for demonstration purposes only. Also, I picked 20 rows because it’s small enough to fit in a screenshot. For a sample size of 20 rows, it’s still easy to do all the conversions by hand sorting, but the ideal solution would scale to sheets with hundreds or thousands of rows and a huge combination of currency conversions. This is why we’re going to emphasize abstraction here. Here’s what we’ve got to work with… (Yes, the numbers are hideous. Deal with it.) Here, we have sample deals from five different countries: the USA, Canada, the UK, France, and Germany. (I intentionally picked two countries that use the same currency for reasons that will become apparent later) Converting foreign currencies to USD, some methods of varying efficiency We have several choices for how we want to make the conversions of foreign currencies to USD. Here, I’ll share three ways, with each successive option being more abstract and scalable than the last. A list of top books will be helpful for you to become a consistently profitable trader. To remind ourselves of what we want to convert our currencies into here, we’ll add a “Target Currency” column and set all values in that column to USD. And we’ll also add a “Conversion Rate” column where we’ll set our conversion ratio. (This column isn’t absolutely necessary, strictly speaking, but it keeps things organized and clear from a visual perspective.) Brute force Let’s say you’re a masochist with a fetish for tedium and frustration. This is the method for you. Remember, I said “masochist” and not “primitive cave dweller” here, so I’m going to run on the assumption that, being an enlightened user of spreadsheeting tools, you understand how to sort columns. So we start by sorting the “Base Currency” column to make things at least somewhat easier to deal with… Note how all the base currencies are now grouped. Now for the brute force part. We search Google for each conversion pair (“CAD USD,” “EUR USD,” etc.), copy and paste the ratio into our spreadsheet, and manually fill down. For this small sample set, it took me almost exactly a minute to build a conversion column using brute force. But this is only for a small handful of currency pairs. If I was dealing with dozens of pairs, this would have been a much bigger task. Enter The Joys of GOOGLEFINANCE() Like I may have mentioned earlier, one of the nice parts about Google Sheets is that it gives users direct access to some of Google’s services directly through a series of bespoke functions. For example, the GOOGLETRANSLATE() function lets users translate strings in spreadsheets from one language to another using the Google Translate engine. In conjunction with DETECTLANGUAGE(), one can generate some interesting formulas. But here we’re going to talk about the joys of the GOOGLEFINANCE() function. There’s standard syntax for pulling current and historical stock market prices: GOOGLEFINANCE(ticker, [attribute], [start_date], [num_days|end_date], [interval]) So, if I wanted to find the closing price of Apple on March 7, 2014, I’d write the formula like this, And Google automatically returns that information and builds a table to display it. It turns out that the GOOGLEFINANCE() function also lets you find the current conversion rate between two currencies. (Unfortunately, at time of writing, it doesn’t let you find historical conversion Here’s that syntax: Using the official, three-character currency codes, we’d convert Canadian Dollars to USD using this function: Now, for our table of VC deals we want to convert into USD, we could hard-code the currencies we want to convert into our formulas by group. Here are the other hard-coded formulae: Google Finance will return the current conversion rate and embed it as a number in the spreadsheet. Below is a screenshot of our Google Sheet with an escaped version of the formula next to the conversion rate. (Note what’s in the formula bar for the selected USD -> CAD conversion. And, once we’ve hardcoded all the formulas, all one would have to do is drag the corner of the cells down to fill in the rest of the table with the appropriate values. Just like the original “brute force method” this is all well and good if the number of currency pairs is small, there are few rows, and any re-sorting of the rows will be on a whole-sheet basis. Now, on to the fun part… Dynamically generating currency conversion formulae using GOOGLEFINANCE(CONCATENATE()) Alright, so we start with the general form of a currency conversion rate query in Google Sheets, the same as what was previously listed. Now, let’s look at our table and its columns. • Column C: “Base Currency” • Column D: “Target Currency” Hmmm… seems familiar… If only there was a way to join those elements into a formula. Enter Google Sheets’s CONCATENATE() function, which is similar to its cousin CONCAT(), except for the important fact that it lets you concatenate an arbitrary number of elements. Importantly, in Google Sheets, the output of the CONCATENATE() function is not simply treated as a string. Its output is usable inside another function. So, to start, let’s concatenate cells D2 and E2 using the following formula: And as we can see, the result is “CADUSD” which is is the required pairing of source and target currencies we need in our formula. Now that we’ve proven we can build our one part of our formula using CONCATENATE(), let’s see if we can build the rest of it. Typing in the following formula produces outputs what we need to give to GOOGLEFINANCE()… So, now we can bring it all together… Before showing the nice gif of how it all “just works,” give me one second to explain how one formula will be able to generate a conversion ratio for all of the currencies in our set. The D2 and E2 cell references are relative, which means that as I drag the equation down the sheet it will still pull from the cells in the two adjacent columns, but will take the value from each row. So that same equation, if applied to, say, row 3, would read from D3 and E3. From row 4, it would pull from D4 and E4, and so on down the sheet. So, here’s the moment you’ve been waiting for… At this point, it’s just a matter of multiplying the “Deal Amount” column by the “Conversion Ratio” column to generate the “Converted Amount.” And there you have it… Next Steps One of the features of this kind of query with GOOGLEFINANCE() is that it dynamically updates as new currency exchange rate data becomes available. This can be viewed as either a bug or a feature, depending on your particular needs. In the event you don’t want these numbers to change, simply copy the contents of the columns that update dynamically and paste those cells in “as values”. (In Google Sheets, goto Edit -> Paste Special -> Paste Values Only [⌘ + Shift + V].) As this suggests, it just pastes in the alphanumeric values of the cells, and strips out the formula data. In the event that you still want to edit the formulas, which in variably you will, my suggestion is to duplicate the tab so you have one dynamic, editable version, and the static version you can work off of in later analysis. That’s All, Folks! This has been a somewhat protracted way of saying that a little bit of abstraction makes for a time-saving and viscerally satisfying (at least from my perspective) data analysis experience. CONCATENATE() can be used outside or inside other functions in Google Sheets to build flexible, extensible auto-generating formulas in your spreadsheets, and I’m very much looking forward to a exploring how else it can be applied in the work that I do. This is also my first time offering up one of these tutorials, so if you liked it and want me to make more of these, please let me know! Future topics I want to cover include pivot tables, demonstrating a superior alternative to VLOOKUP() and INDEX(MATCH()), and a couple of others. 2 responses to “Autogenerating Currency Conversion Formulas In Google Sheets Using GOOGLEFINANCE(CONCATENATE())” The formula would not work if my base currency is not USD. For instance, if my base currency is SGD, the formula would work for all except SGDSGD. When its SGD to SGD, instead of returning 1.00 it will return “N/A Error”. So, I had to add something to the formula to make it work. =if(A1=”SGD”, 1.00, GoogleFinance(“CURRENCY:” & A1 & “SGD”)) A1 is the column where I have all the different currencies listed. So, basically, if A1 is SGD, I simply put 1.00 and if it is some other currency, I apply the formula. I have tested with many different base currencies and resulted in the same “N/A Error” message, except “USDUSD”. I’m beginning to smell some bias-ness here. LoL!! Nicely explained. Thanks
{"url":"https://jasondrowley.com/2017/03/27/autogenerating-currency-conversion-formulas-in-google-sheets-using-googlefinanceconcatenate/","timestamp":"2024-11-10T04:35:09Z","content_type":"text/html","content_length":"126001","record_id":"<urn:uuid:be46ebc1-f229-4f36-90b2-43081d09e9da>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00478.warc.gz"}
Data Analysis and Prediction Algorithms with R - Free Computer, Programming, Mathematics, Technical Books, Lecture Notes and Tutorials Data Analysis and Prediction Algorithms with R • Title: Introduction to Data Science: Data Analysis and Prediction Algorithms with R • Author(s) Rafael A. Irizarry • Publisher: Chapman and Hall/CRC; (November 8, 2019); eBook (Creative Commons Licensed, 2022) • License(s): CC BY-NC-SA 4.0 • Hardcover/Paperback: 713 pages • eBook: HTML and PDF • Language: English • ISBN-10/ASIN: 0367357984 • ISBN-13: 978-0367357986 • Share This: Book Description This book introduces concepts and skills that can help you tackle real-world data analysis challenges. It covers concepts from probability, statistical inference, linear regression, and machine learning. It also helps you develop skills such as R programming, data wrangling, data visualization, predictive algorithm building, file organization with UNIX/Linux shell, version control with Git and GitHub, and reproducible document preparation. About the Authors Reviews, Rating, and Recommendations: Related Book Categories: Read and Download Links:Similar Books: Book Categories Other Categories Resources and Links
{"url":"https://freecomputerbooks.com/Introduction-to-Data-Science-Data-Analysis-and-Prediction-Algorithms-with-R.html","timestamp":"2024-11-12T19:50:01Z","content_type":"application/xhtml+xml","content_length":"36579","record_id":"<urn:uuid:9e6e1c7e-e01e-432a-a898-ae9e0c3f5f71>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00647.warc.gz"}
One Trigonometry Question To Get Into JC - Jφss Sticks Tuition Secondary Math Tuition One Trigonometry Question To Get Into JC Tuition given in the topic of A-Maths Tuition Questions from the desk of at 10:42 pm (Singapore time) Updated on For 2007 O-Level students in Singapore, Tuesday was an important day when the Joint Admissions Exercise (JAE) results were released, when their places in the institutions (hopefully of their choice) were confirmed, when they can finally leave their O-Level lives behind and focus on that chick/hunk they have been eyeing throughout the PAE period. For a particular Student, it has always been his life-long dream to study in Elite Junior College (EJC), nestled high in the mountains above the clouds where elite eagles soar, where students there were taught the legendary Four As Guaranteed Explosive Mugging Technique that would ensure their A-Level success. He was quietly confident that, with his raw L1R5 score of 7, he would claim his place among the elites at EJC. After all his dad was an elite EJC alumni who got in with an L1R5 score of 9 decades So it came as a total shock to him on Tuesday when he was posted to Non-Elite Junior College (NEJC), nestled in a kampung by a muddy river bank where chickens roam. Undeterred by this setback, The Student, together with his dad, made a great journey across the plains by MRT, and climbed a thousand steps up the mountain to the gates of EJC, where they sought to appeal his posting before the sagely principal of EJC. Predictably, the principal turned them away, informing them that all places in EJC had already been filled by [DEL:guinea pigs:DEL] IP students as well as those who got their L1R5 minus-ed here minus-ed there through being former students of Elite Secondary School (ESS) and a host of other things. But such was The Student’s determination to [DEL:meet the hot girls of EJC:DEL] master the Four As Guaranteed Explosive Mugging Technique that he vowed not to leave till he was accepted, despite his dad’s protestation that they were going to miss the last MRT train home. And so for days and nights they knelt outside the gates of EJC, father and son, battled by the howling rain and ogling at passing EJC girls for temporary respite, imploring the principal to accept The Student. By the end of the third day, the principal was sufficiently [DEL:disturbed from his sleep by their constant KPKB:DEL] moved by The Student that he threw him this math question, promising his acceptance into EJC if he could answer it: The angles A and B are such that A + B = 120^o cosA + cosB = Show that Hence find the possible values of A and B, given that 0^o < A < 120^o and 0^o < B < 120^o. The principal chose this question as it happens to be on the list of assumed knowledge for the H2 Mathematics Syllabus. It also hails from the New O-Level A-Maths Syllabus, where the Factor Formulae are now included in the Trigonometry component. So to enter EJC, apart from knowing the trigonometric values of special angles (hope he brought his textbook along!), and remembering that All Science Teachers are Crazy Diagram, The Student also has to utilize one of these Factor Formulae (especially when something like cos A ± cos B occurs in the question): sin A + sin B ≡ 2 sin( sin A – sin B ≡ 2 cos( cos A + cos B ≡ 2 cos( cos A – cos B ≡ -2 sin( But surprise, surprise. Since he belongs to the 2007 batch of students who took the OLD A-Maths Syllabus, like many JC1 students right now, this is totally alien to him – which is why he needs YOU current O-Level students to help him get into EJC! Good luck to those of you awaiting your appeal outcome this week! P.S. Do note that the above story does not accurately reflect the actual JC appeal process in Singapore. Comments & Reactions 18 Comments Lol this question is easy. Just use factor formulae and the fact that cos60 is 1/2. Soup, NWNT. OMG Soup! You should be proud that you're the first commentator ever to have successfully used the math plugin to answer a question here! But umm .... you do know that you can use {braces} as 'invisible brackets' to make your workings 'neater'? i.e. reduce the number of visible brackets which Miss Loi has changed for you. Looking at the ease of your solution, Miss Loi sort of regretted 'spoon-feeding' the Factor Formulae as one of the keys to the solution. In fact in Miss Loi's tuition sessions it has always been: no hints = student will complain question was difficult, give a small hint = student will say question was soooo easy. Unfortunately students have to realize that in the actual exam, you won't find a similar question that says "Use the Factor Formulae to show that ..." 😛 In any case one must always remember, that as a Trigo A-Maths pugilist, he/she needs to know when to draw the right weapon from the whole arsenal of trigo formulae at his/her disposal (be it Factor Formula, R-Formula, Double-Angle Formula, Addition Formula, Sine/Cosine Rule blah blah blah) - regardless of whether they are shown in the formula sheet. Such is the essence of mastering the Four As Guaranteed Explosive Mugging Technique ... After so much talking, Miss Loi just remembered that there's still a Part 2 to this question! I'm just glad I am not making a machinima out of this story 😀 Lol.. I'm the first? Hahaha.. I think the pmath plugin is great. A pity that it can't work on my own site though. Some PHP library ain't installed.. Fox, the Four As Guaranteed Explosive Mugging Technique too hot for your machine to handle? 😉 Soup, that's a pity. Miss Loi found it much more intuitive to type out equations using this as compared to the old LaTeX plugin whose cryptic syntax she never quite managed to figure out. 🙁 Um, no. No maths plugin for machinima :p Wahaha, the 2nd picture was really LOL-zable :D!! Nice one man :P! Cheers! Wah lau...2nd pic taken from kill bill? ASM, please be serious and don't anyhow laugh at the sagely Principal of EJC, lest he plucks out your eye! OMG Pai Mei?! Thought you're already dead?! *scampers away to save her eyeball* Sorry, me dunno how to use the syntax. Anyway, here's the part 2: Yes, whenever you see phrases like "find the values of x ... given that A° < x < B°" blah blah blah ... you'll know you're dealing with the All Science Teachers are Crazy diagram and there'll be some work on your hands! You've already proved in Part 1 that basic angle of α = cos^-1 And since 1/(√2) is positive, it follows from the SATC diagram (A and C quadrants) above that IMPORTANT: We only did one 'round' of the 360° circuit since the 'coefficient' of (A-B) is ≤ 1. More 'rounds' are usually required when the 'coefficient' > 1 - refer to your textbook if you're lost or see this example for a rough idea. 2012 Update: NOT SO FAST! Thanks to Ryan for pointing out, there's a tricky part to this question which apparent everyone has missed in 2008!!! It's vital to consider the range of In this case the max value of A[max] − B[min]/2) and its min value is −60° (A[min] − B[max]/2), so −60° ≤ Many of us are so used to the anti-clockwise circuit as the majority of such trigo questions has 0° ≤ x ≤ 360°, but in this case the range goes into a negative region. So while 315° lies outside the range of θ) = cos θ. Thus it's imperative you consider going around the clockwise (negative direction) when you see a range in the negative region! So ... [DEL:(A-B) = 90°, 630°:DEL] [DEL:A = 90°+B, 630°+B --- (ii):DEL] (A-B) = 90°, −90° A = 90°+B, −90°+B --- (ii) Since A+B = 120° --- (i), Sub (i) into (ii) [DEL:(90+B) + B = 120°, (630+B) + B = 120°:DEL] [DEL:90 + 2B = 120°, 630 + 2B = 120°:DEL] [DEL:B = 15, -255 --- (iii):DEL] (90°+B) + B = 120°, (−90°+B) + B = 120° 90° + 2B = 120°, −90° + 2B = 120° B = 15, 105 --- (iii) Sub (iii) into (ii) [DEL:A = 105°, 375°:DEL] Since A & B lies between 0° - 120°, Thus the values are: [DEL:A = 105° & B = 15°.:DEL] A = 105° & B = 15° or A = 15° & B = 105°. @Soon: Your answer is only partially correct. (A-B)/2 = 45 OR -45. [*not* 45 OR 315, since (A-B)/2 ranges between -60 and 60] Hence, the complete solution set is [A=15, B=105] OR [A=105,B=15]. Plug both solutions into the given info and see for youself that they both work! @Ryan: Thanks for pointing this out! Many of us are so used to the anti-clockwise circuit through the quadrants as the majority of such trigo questions deal with the range of 0° ≤ x ≤ 360°. But in this case, a clockwise route needs to be considered as well since the range has a negative region. Miss Loi has updated the explanation in the original comment. It's amazing how everyone (including yours truly) missed this out in 2008! So it's not so straightforward to get into EJC afterall eh? *Sneaks in quietly to tidy up Soon's solution and add some explanation ... * If that problem was the only requirement to enter an EJC I would be happy since I'm in secondary 1 but I can solve that problem (with some reference to the sum-to-product formulae as I can't seem to memorise them yet). Unfortunately, with the ever-evolving IP stuff these days, the real principals may not be as magnanimous and forgiving as our benevolent Principal of EJC. Do continue to work hard in your quest towards your mastery of the Four As Guaranteed Explosive Mugging Technique one day. I realised I didn't attempt this question. One thing I don't understand, if the original question already stated that cos A + cos B = 1/root 2 AND A + B = 120, by commutative law doesn't it already imply that if (A,B) = (15,105) then (B,A) can also be (15,105)? i.e. (A,B) can also be (105,15)? That saves the trouble of going to the negative region of the C quadrant. Post a Comment • * Required field • Your email will never, ever be published nor shared with a Nigerian banker or a pharmaceutical company. • Spam filter in operation. DO NOT re-submit if your comment doesn't appear. • Spammers often suffer terrible fates. Impress Miss Loi with beautiful mathematical equations in your comment by enclosing them within alluring [tex][/tex] tags (syntax guide | online editor), or the older [pmath][/pmath] tags (syntax guide). Please PREVIEW your equations before posting! Revision Exercise To show that you have understood what Miss Loi just taught you from the centre, you must:
{"url":"https://www.exampaper.com.sg/questions/a-maths/one-trigonometry-question-to-get-into-jc","timestamp":"2024-11-12T20:16:17Z","content_type":"application/xhtml+xml","content_length":"199477","record_id":"<urn:uuid:4efcd995-6bd1-474f-9203-eef3a3bb61f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00453.warc.gz"}
Investigating and extending number patterns 4.2 Investigating and extending number patterns Kinds of numeric patterns A list of numbers that form a pattern is called a sequence. Each number in a sequence is called a term of the sequence. The first number is the first term of the sequence. a list of numbers that follow each other in a particular order to create a pattern We can look for a pattern or relationship between consecutive terms in order to extend the given pattern. For example, the numbers \(2; 5; 8; 11; 14; 17; 20; 23\) form a sequence, because there is a pattern in the way the numbers are ordered: we add \(3\) to the previous term. The first term of this sequence is \(2\). numbers or terms that follow one another in order There are many different number patterns. Here are some of the common number patterns you will work with in Grade 8. Adding or subtracting the same number When the differences between consecutive terms of a sequence are the same, we say the difference is constant, or that the sequence has a common difference. For example, if we start with \(14\) and subtract \(\mathbf{3}\) each time, we get: \[ 14 - 3 &= 11 \\ 11 - 3 &= 8 \\ 8 - 3 &= 5 \\ 5 - 3 &= 2 \\ \] The number pattern that is formed is {\(14; 11; 8; 5; 2\)}. If we start with \(14\) and add \(\mathbf{3}\) each time, we get: \[ &14 + 3 = 17 \\ &17 + 3 = 20 \\ &20 + 3 = 23 \\ &23 + 3 = 26 \\ \] The number pattern that is formed is {\(14; 17; 20; 23; 26\)}. Worked example 4.1: Find the next number in the pattern You are given a pattern that starts with the following numbers: \[18; 13; 8; 3;\ldots\] What is the next number in the pattern? Work out what the pattern is. We have the numbers, \(18; 13; 8; 3;\ldots\) and we need to find the next number in the pattern. To do so, we must either work out how the pattern is changing from one term to the next, or notice that the pattern is a special set of numbers. For this pattern, we can see that to get from one term to the next, we subtract \(5\). Extend the pattern to get the next number. To get the next number in the pattern, we must subtract \(5\) from \(3\): \[3 - 5 = - 2\] Therefore, the next number in the pattern is \(- 2\). Worked example 4.2: Find the next number in the pattern A pattern starts with the following numbers: \[8; 13; 18; 23;\ldots\] What is the next number in the pattern? Work out what the pattern is. We have the numbers, \(8; 13; 18; 23;\ldots\) and we need to find the next number in the pattern. To do so, we must either work out how the pattern is changing from one term to the next, or notice that the pattern is a special set of numbers. For this pattern, we can see that to get from one term to the next, we add \(5\). Extend the pattern to get the next number. To get the next number in the pattern, we must add \(5\) to \(23\): \[23 + 5 = 28\] Therefore, the next number in the pattern is \(28\). Multiplying and dividing by the same number If we multiply or divide by a number to get the next term in the sequence, then the ratio between the first and second numbers is the same as the ratio between the second and third numbers. Sequences in which the number we multiply or divide by remains the same therefore have a constant ratio. constant ratio when the ratio between one number and the next in a sequence is the same as the ratio between any two other consecutive numbers in that sequence Worked example 4.3: Finding the next number in the pattern A pattern starts with the following numbers: \[400; 200; 100; 50;\ldots\] What is the next number in the pattern? Work out what the pattern is. We have the numbers, \(400; 200; 100; 50;\ldots\) and we need to find the next number in the pattern. To find this, we must either work out how the pattern is changing from one term to the next or notice that the pattern is a special set of numbers. For this pattern, we can see that to get from one term to the next one we divide by \(2\). Extend the pattern to get the next number. To get the next number in the pattern, we must divide \(50\) by \(2\): \[50 \div 2 = 25\] Therefore, the next number in the pattern is \(25\). Worked example 4.4: Finding the next number in the pattern A pattern starts with the following numbers: \[3; 9; 27; 81;\ldots\] What is the next number in the pattern? Work out what the pattern is. We have the sequence, \(3; 9; 27; 81;\ldots\) and we need to find the next number in the pattern. To find it, we must either work out how the pattern is changing from one term to the next or notice that the pattern is a special set of numbers. For this pattern, we can see that to get from one term to the next one we multiply by \(3\). Extend the pattern to get the next number. To get the next number in the pattern, we must multiply \(81\) by \(3\): \[81 \times 3 = 243\] Therefore, the next number in the pattern is \(243\). Special sequences: fractions and negative numbers Sometimes the rule to create a pattern does not follow a common difference or constant ratio relationship. Some patterns use fractions or very large negative numbers. You need to look at such patterns carefully, because different rules might apply to them. For example, this sequence \({ 1; 2; 3; 5; 8;\ldots}\) is made by adding two preceding terms. \[ &1 + 2 = 3 \\ &2 + 3 = 5 \\ &3 + 5 = 8 \\ \] So, the next two terms in this sequence are: \(5 + 8 = 13\) and \(8 + 13 = 21\). The sequence is \({ 1; 2; 3; 5; 8; 13; 21}\). If you only looked for a common difference between the first three terms, you would have made a mistake: \(2 - 1 = 1\) and \(3 - 2 = 1\), but \(5 - 3 = 2\)! When working with fractions or negative numbers, be careful how you do the calculations. It is easy to make a careless mistake and get the incorrect sequence. Look out for square and cube numbers, roots, and other non-linear relationships. non-linear relationship a relationship between two variables in which a change in one does not cause a corresponding change in the other There are a few special, non-linear patterns that you should know and recognise. In this table, \(n\) stands for each consecutive number, in other words, first you use \(1\) in the formula, then you use \(2\), and then \(3\), and so on. Name Pattern Formula Perfect squares \(1;4;9;16;\ldots\) \(t = n^{2}\) Perfect cubes \(1;8;27;64;\ldots\) \(t = n^{3}\) Powers of \(2^{}\) \(2;4;8;16;\ldots\) \(t = 2^{n}\) Powers of \(3^{}\) \(3;9;27;81;\ldots\) \(t = 3^{n}\) Reciprocals \(1;1^{2};1^{3};1^{4};\ldots\) \(t = 1^{n}\) Worked example 4.5: Working with fractions in a pattern Consider the following pattern of numbers: What is the next number in the pattern? Solve the pattern by inspection. This pattern is not linear. Look at the table above to see if you can identify the special relationship. The pattern shown here is made of perfect cubes. \[1;\frac{1}{\left( 2^{3} \right)};\frac{1}{(3^{3})};\frac{1}{(4^{3})};\ldots\] Find the next number in the pattern. Each term in the pattern is the reciprocal of a perfect cube. The next perfect cube is \(5^{3} = 125\), and we must take its reciprocal. Therefore, the next number in the pattern is \(\frac{1}{125}\). Worked example 4.6: Working with negative numbers in a pattern Consider the following pattern of numbers: \[- 3; - 9; - 27; - 81;\ldots\] What is the next number in the pattern? Solve the pattern by inspection. This pattern is not linear. The pattern shown here is made of powers of \(3\). \(- 3 = - 3^{1}\) \(- 9 = - 3^{2}\) \(- 27 = - 3^{3}\) \(- 81 = - 3^{4}\) Notice how the minus sign is not placed inside the brackets together with the number \(- 3\). This is because all the terms in this sequence are negative, in other words the sign doesn’t change. Find the next number in the pattern. Each term in the pattern is the negative of a power of \(3\). The next power of \(3\) is \(3^{5} = 243\), and we must take the negative of it. Therefore, the next number in the pattern is \(−243\). Sometimes working out a pattern is easy, and sometimes it is difficult. This is because there is usually no way to find the pattern by calculating anything. If you do not spot the pattern right away, you must investigate what you see and work out what the pattern is: look at the numbers in the sequence and hunt for a pattern! This is especially true for working with fractions. Worked example 4.7: Working with fractions in a sequence Here is a pattern of fractions: What is the next fraction in the pattern? To investigate the pattern like this, you can try these steps: Determine the properties of the items in the pattern. In this case: • the pattern if made of fractions • the denominators are the natural numbers. Think of a “normal” list of numbers that are related to the pattern. In this case, focus on the set of natural numbers: \[0; 1; 2; 3; 4; 5; 6;...\] Compare the numbers in the question to the “normal” list. You can see that the denominators increase by \(1\) each time. So, the next number in this sequence is \(\frac{1}{6}\). Identify the rule in a pattern Given a sequence of numbers, we can also identify a pattern or relationship between the term and its position in the sequence. This allows us to predict the value of a term in a sequence based on the position of that term. It is useful to represent these sequences in tables to help visualise the position of the term. It also makes it easier to describe the general rule for the pattern. For example, the rule is: “Multiply the position of the number by \(3\) and add \(2\) to the answer.” We can write this rule as a number sentence: \[\text{position of the number} \times 3 + 2\] Use this number sentence to find the terms of the sequence: \[ &1 \times 3 + 2 = 5 \\ &2 \times 3 + 2 = 8 \\ &3 \times 3 + 2 = 11 \\ \] Now draw a table of values to show how the pattern is formed. Position in sequence 1 2 3 4 … $$n$$ Rule: 1×3+2=5 2×3+2=8 3×3+2=11 4×4+2=18 n ×3+2=3n+2 position of the number ×3+2 Worked example 4.8: Working with position of a number in a sequence The table below shows a sequence. However, one of the terms is missing. Fill in the missing value. Position \(1\) \(2\) \(3\) \(4\) \(5\) Term \(1\) ? \(9\) \(16\) \(25\) Write the values in a list. The table shows a sequence with the second term missing. To simplify the question a bit, we can just look at the sequence in the “normal” way, without a table: \[1; ?; 9; 16; 25\] To find the missing term, we must work out what the pattern is: do the terms grow by means of addition, subtraction, multiplication, or division? (Or maybe even some other pattern!) Work out what the pattern is. To work out what the pattern is, compare the numbers to each other. Are they getting bigger or smaller? Are the growing slowly or very quickly? The numbers in this sequence are all square numbers. It is important that you recognise these special numbers: \[1 = 1^{2}; ?; 9 = 3^{2}; 16 = 4^{2}; 25 = 5^{2}\] Find the missing number. We need to fill in the missing square number, which is the square number following \(1\). \[2^{2} = 4\] The value of the missing term is: \(4\). The rule helps us find the terms in the sequence. We use basic algebra to describe the rules in a “compact” way, without describing them in words. Working with algebraic expressions and variables is an important skill to develop when dealing with sequences. Worked example 4.9: Identifying terms in a sequence Consider the following sequence: \[4a + 4; 6a + 2; 8a; 10a - 2; 12a - 4;\ldots\] What is the second term in the sequence? Identify each of the terms to find the answer. The sequence is a bit crowded with all of those numbers and variables; it will be helpful to organise the terms into a table. Position \(1\) \(2\) \(3\) \(4\) \(5\) Term \(4a + 4\) \(6a + 2\) \(8a\) \(10a - 2\) \(12a - 4\) Locate the term in a sequence. With the terms organised in the table, it is easier to see that the second term is \(6a + 2\). Can you find the next term in this sequence? Try subtracting the terms from each other: \[ &(6a + 2) - (4a + 4) = (6a - 4a) + (2 - 4) = 2a - 2 \\ &(8a) - (6a + 2) = (8a - 6a) + (0 - 2) = 2a - 2 \\ &(10a - 2) - (8a) = (10a - 8a) + ( - 2 - 0) = 2a - 2 \\ &(12a - 4) - (10a - 2) = (12a - 10a) + \left( - 4 - ( - 2) \right) = 2a - 2 \\ \] The common difference is \((2a - 2)\). So, the \(6\)th term is then \((12a - 4) + (2a - 2) = (12a + 2a) + ( - 4 - 2) = 14a - 6\). Worked example 4.10: Evaluating expressions and patterns This sequence is made of numbers. However, the fourth term is an expression with the variable \(a\). \[- 8; 0; 8; 8a - 16;\ldots\] If \(a = 4\), calculate the value of the fourth term. Substitute into the expression for the missing term. You have an expression for the fourth term and need to find the value of that term if \(a = 4\). Nothing else in the pattern matters: it is all about the expression \(8a - 16\). Substitute \(a = 4\) into the expression \(8a - 16\): \[8(4) - 16\] Calculate the value of the term. \[8a - 16 = 8(4) - 16 = 16\] Therefore, the fourth term is equal to \(16\). Worked example 4.11: Evaluating expressions and patterns Determine the value of the fifth term in the same sequence as in Worked example 4.10: \[-8; 0; 8; 8a - 16;…\] Identify the pattern in the sequence. Using the answer for the fourth term, the sequence is now: \(- 8; 0; 8; 16;\ldots\) Start by working out if the pattern is based on addition, subtraction, multiplication, or division. Be patient, it can take a while to work out what the pattern is! This sequence has an addition pattern: each term is the sum of the previous term and \(8\). Work out the value of the next term. You can organise this information by adding some details to the sequence. For example: To get the next term, you must work out: \(16 + 8 = 24\). Therefore, the fifth term is \(24\).
{"url":"https://www.siyavula.com/read/za/mathematics/grade-8/numeric-and-geometric-patterns/04-numeric-and-geometric-patterns-02","timestamp":"2024-11-06T20:38:45Z","content_type":"text/html","content_length":"119945","record_id":"<urn:uuid:164802c5-0876-4996-858d-a3b05b8eea91>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00235.warc.gz"}
Infinitely many conservation laws for the discrete kdv equation Rasin and Hydon (2007 J. Phys. A: Math. Theor. 40 12763-73) suggested a way to construct an infinite number of conservation laws for the discrete KdV equation (dKdV), by repeated application of a certain symmetry to a known conservation law. It was not decided, however, whether the resulting conservation laws were distinct and nontrivial. In this paper we obtain the following results: (1) we give an alternative method to construct an infinite number of conservation laws using a discrete version of the Gardner transformation. (2) We give a direct proof that the conservation laws obtained by the method of Rasin and Hydon are indeed distinct and nontrivial. (3) We consider a continuum limit in which the dKdV equation becomes a first-order eikonal equation. In this limit the two sets of conservation laws become the same, and are evidently distinct and nontrivial. This proves the nontriviality of the conservation laws constructed by the Gardner method, and gives an alternative proof of the nontriviality of the conservation laws constructed by the method of Rasin and Hydon. טביעת אצבע להלן מוצגים תחומי המחקר של הפרסום 'Infinitely many conservation laws for the discrete kdv equation'. יחד הם יוצרים טביעת אצבע ייחודית.
{"url":"https://cris.ariel.ac.il/iw/publications/infinitely-many-conservation-laws-for-the-discrete-kdv-equation-3","timestamp":"2024-11-03T01:12:25Z","content_type":"text/html","content_length":"56402","record_id":"<urn:uuid:2f8923a6-275d-4f82-8696-ce1d821545e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00089.warc.gz"}
Top 20 College Math Tutors Near Me in Wolverhampton Top College Math Tutors serving Wolverhampton Antonio: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...professionally I am a product, process and equipment design engineer with a wide experience into automotive and energy sector. I am easy adaptable, friendly and experienced in teaching and engineering. My tutoring approach is to make my students understand not just the materials in order to guarantee their success at school/university but also making them... Subject Expertise • College Math • College Algebra • Middle School Math • UK A Level Mathematics • +41 subjects Liana: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...our brain develop. I believe that everyone can learn Mathematics and enjoy it. I succeeded in teaching and tutoring Mathematics both in students' native language and in their additional language (in China I taught Mathematics in English and in the UK I taught Mathematics to EAL students). It's a universal language with widely accepted benefits. Education & Certification • Yerevan State University - Bachelor of Engineering, Architectural and Building Sciences/Technology Subject Expertise • College Math • Principles of Mathematics • IB Mathematics: Applications and Interpretation • Grade 10 Math • +113 subjects Rachel: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton Inclusive Tutor Trying to make learning accessible Enhanced DBS checked SEN experience A different way of approaching problems Volunteer Girl Guiding 4-11 years 16-25 Outreach Officer 8 years laboratory experience Russell Group Graduate Education & Certification • University of Birmingham - Bachelor of Science, Human Biology Subject Expertise • College Math • Pre-Calculus • Grade 10 Math • Algebra 2 • +196 subjects Education & Certification Subject Expertise • College Math • Pre-Calculus • Statistics • College Algebra • +48 subjects Olusegun: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...I'll help you stay focused and motivated. Global Perspective: Having lived and studied in different locations, I appreciate diverse cultures and viewpoints. Let's explore the world through our lessons! Why Choose Me? Passion: Teaching isn't just a job; it's my calling. Witnessing your "aha" moments fuels my enthusiasm. Patience: Learning takes time, and I'm here to... Education & Certification • University of Lagos - Bachelor, Electrical and Electronics Engineering Subject Expertise • College Math • Applied Mathematics • Calculus 2 • College Statistics • +43 subjects Huma: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...NUST and doing PhD in Queen Mary University of London for senior research with fully funded opportunity. I constantly strive to learn and understand the physics discipline and more, as well as provide understandable ways to teach it. I am constantly learning in this subject and hope to provide the best services to you as... Education & Certification • NUST - Master of Science, Physics • Queen Mary - Doctor of Philosophy, Physics Subject Expertise • College Math • Pre-Algebra • Applied Mathematics • Algebra 2 Class • +56 subjects Pascal: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...student's style of learning. His teaching assistant experience at City of London Academy Islington afforded him opportunity to work with GCSE and A-level students to help prepare for exams. For parents/ guardians he wants to provide clarity and constantly update them on the progress that is being made. As a Maths teacher at Harris Academy Invictus... Education & Certification • King's college London - Bachelor in Arts, Mathematics Subject Expertise • College Math • Trigonometry • Discrete Math • Middle School Math • +27 subjects Education & Certification • The Manchester Metropolitan University - Associate, Biomedical Science Subject Expertise • College Math • Advanced Functions • AP Statistics • Math 1 • +572 subjects Alexandra: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...at the University of Kent, due to qualify with BSc in Mathematics. I am a hard-working and ambitious individual who enjoys challenges and developing new skills. Being well organised and having time management skills enables me to prioritise tasks given to me as well as having the ability to achieve high standards. I am approachable... Education & Certification • University of Kent - Bachelor of Science, Mathematics Subject Expertise • College Math • College Algebra • IB Mathematics: Applications and Interpretation • Algebra • +80 subjects Mauricio: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton As a highly motivated individual, I have found the attributes that helped me succeed in my education at the University of California, Los Angeles and career aspirations make me a great tutor for multiple different subjects. I'm approachable and patient with different questions as I have been tutored myself. Education & Certification • Fullerton College - Associate, Maths • University of California, Los Angeles - Bachelor, History Subject Expertise • College Math • UK A Level Physics • GRE Subject Test in Mathematics • SAT • +61 subjects Francis Chidi: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...time private Tutor, teaching and mentoring students in Physics, Maths, Further Maths and Basic Science. With my Postgraduate Diploma in Education, I am accustomed with in-depth knowledge of UK school curriculum as well as the Nigerian curriculum and necessary skills required to pass UK 11+/12+/13+, GCSE, IGCSE, SATS, and A-Level Examination (AQA, Edexcel, OCR). I... Education & Certification • Ahmadu Bello University - Bachelor of Science, Physics • Ahmadu Bello University - Master of Science, Biophysics • Nasarawa State University - Doctor of Philosophy, Medical Radiologic Technology Subject Expertise • College Math • Physics • Math • UK A Level Mathematics • +5 subjects Amirhossein: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...in Electrical and Computer Engineering at the University of Manitoba. I have more than 10 years of experience in teaching math and physics in both PERSIAN and ENGLISH languages to high school, college, and university students. My method is based on problem-solving which prepares you for the tests/exams and at the same time gives an... Education & Certification • University of Tabriz - Bachelor of Science, Laser and Optical Engineering • Shahid Beheshti University - Master of Science, Optics • University of Manitoba - Doctor of Philosophy, Biomedical Engineering Subject Expertise • College Math • Functions • Quantitative Reasoning • Algebra 2 Class • +486 subjects Onatola: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...keen interest in all things physics and maths. My aim is to make learning detailed and effective while directly engaging students with real world applications of everything taught so as to broaden their horizons and develop their curiosity and help them learn how to work and study and find answers to teach them skills that... Education & Certification • Coventry University - Bachelor, Aerospace Technology Subject Expertise • College Math • Applied Mathematics • Math 2 • Math 3 • +13 subjects Karthikesh: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...the University of Glasgow in Mathematics. I enjoy teaching Math and Science. I have a three-year tutoring experience, where I have taught CBSE(NCERT), GCSEs, A-levels, SQA S1-S5, Scottish Highers, IB Math: AA &amp; AI (SL &amp;HL), SSAT, AMC-8 and other Scholarship &amp; Olympiad examinations. I love teaching Math and inspire students to perceive the field's... Education & Certification • IISER-Thiruvananthapuram, India - Bachelor of Science, Mathematics • University of Glasgow - Master of Science, Mathematics Subject Expertise • College Math • Calculus 2 • College Statistics • Probability • +23 subjects Hashim : Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...am a professional engineer working in the water industry. I was born in Glasgow which I later moved to Aberdeen and attended Robert Gordon University. I have a MEng in Mechanical Engineering with Distinction. I have experience with tutoring students at all ages and all levels. I have also had experience teaching students with special... Education & Certification • Robert Gordon University - Master of Engineering, Mechanical Engineering Subject Expertise • College Math • Grade 9 Mathematics • Applied Mathematics • Middle School Math • +57 subjects Mohammed: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...at Cardiff University studying Dental Technology. I am especially passionate about teaching Biology Maths and English. From my time studying through GCSE's and A-Levels I have picked up numerous skills and exam techniques. I am especially passionate about teaching as well as learning about Biology as I find understanding the human anatomy is crucial to... Education & Certification • Cardiff Metropolitan University - Bachelor in Arts, Dental Laboratory Technology Subject Expertise • College Math • Math 1 • Pre-Algebra • Foundations of 6th Grade Math • +67 subjects Yatin: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...studying Economics in University College London. I completed my education before university in India, studying under the Indian board and tutoring students in Math and English for 2 years under an NGO in Delhi. I believe my strengths lie in quantitative subjects involving calculus, matrix algebra and trigonometry, which I frequently apply in my economics... Education & Certification • University College London - Bachelor of Economics, Economics Subject Expertise • College Math • Multivariable Calculus • Pre-Calculus • Calculus • +31 subjects Subject Expertise • College Math • IB Mathematics: Analysis and Approaches • IB Mathematics: Applications and Interpretation • UK A Level Mathematics • +19 subjects Efetobore: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...gas technicians at North East Scotland college through foundational learning and specialist modules, I have worked with a range of students ( age 17 - 39), and understand the diversity in learner's needs. However my focus is always on ensuring the learners get the help and support needed to achieve success. So looking forward to... Education & Certification • University of Aberdeen - Master's/Graduate, Electrical & Electronics Engineering • State Certified Teacher Subject Expertise • College Math • Pre-Calculus • Calculus • Chemistry • +17 subjects Giles Nunn: Wolverhampton College Math tutor Certified College Math Tutor in Wolverhampton ...as a difficult and boring subject. It should be fun; it is logical and links together in an easy and satisfying way. In my experience, many students have problems with mathematics because they have missed or failed to grasp one or more basic concepts which following studies depend on. My aim in remedial work is... Education & Certification • Fourah Bay College, University of Sierra Leone - Bachelor in Arts, Physics Subject Expertise • College Math • Key Stage 3 Maths • Elementary School Math • Grade 11 Math • +32 subjects Private College Math Tutoring in Wolverhampton Receive personally tailored College Math lessons from exceptional tutors in a one-on-one setting. We help you connect with the best tutor for your particular needs while offering flexible scheduling to fit your busy life. Your Personalized Tutoring Program and Instructor Identify Needs Our knowledgeable directors help you choose your tutor with your learning profile and personality in mind. Customize Learning Your tutor can customize your lessons and present concepts in engaging easy-to-understand-ways. Increased Results You can learn more efficiently and effectively because the teaching style is tailored to you. Online Convenience With the flexibility of online tutoring, your tutor can be arranged to meet at a time that suits you. Call us today to connect with a top Wolverhampton College Math tutor
{"url":"https://www.varsitytutors.com/gb/college_math-tutors-wolverhampton","timestamp":"2024-11-08T05:43:43Z","content_type":"text/html","content_length":"609214","record_id":"<urn:uuid:b41f55f2-26e0-4233-ac68-148176d603ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00033.warc.gz"}
Inequalities worksheets Recommended Topics for you Ketaksamaan Linear- Linear Inequalities Linear Inequalities (Inequalities) Equalities & Inequalities Explore Inequalities Worksheets by Topics Explore Worksheets by Subjects Explore printable Inequalities worksheets Inequalities worksheets are an essential tool for teachers who aim to help their students master the concepts of Math and Algebra. These worksheets provide a variety of exercises that challenge students to solve and graph inequalities, allowing them to build a strong foundation in these critical areas of mathematics. Teachers can use inequalities worksheets to create engaging lesson plans, reinforce classroom learning, and assess student understanding of the material. By incorporating these worksheets into their curriculum, educators can ensure that their students are well-prepared for more advanced mathematical concepts and real-world applications of inequalities. Inequalities worksheets are a valuable resource for teachers who want to provide their students with the best possible education in Math and Algebra. Quizizz is a fantastic platform that offers a wide range of educational resources, including inequalities worksheets, to help teachers create interactive and engaging lessons for their students. With Quizizz, teachers can access a vast library of pre-made quizzes, worksheets, and other materials that cover various topics in Math and Algebra. These resources can be easily customized to fit the specific needs of each class, ensuring that students receive targeted instruction and practice. In addition to inequalities worksheets, Quizizz also offers features such as real-time feedback, gamification, and progress tracking, which can help motivate students and improve their overall understanding of the material. By incorporating Quizizz into their teaching strategies, educators can provide a dynamic and effective learning experience for their students in Math and Algebra.
{"url":"https://quizizz.com/en-in/inequalities-worksheets","timestamp":"2024-11-06T18:06:23Z","content_type":"text/html","content_length":"164713","record_id":"<urn:uuid:a1b1cc9e-c2fc-4ae8-a918-ef1763853e18>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00831.warc.gz"}
Disclaimer: Any material such as academic assignments, essays, articles, term and research papers, dissertations, coursework, case studies, PowerPoint presentations, reviews, etc. is solely for referential purposes. We do not encourage plagiarism in any form. We trust that our clients will use the provided material purely as a reference point in their own writing efforts.
{"url":"https://www.goassignmenthelp.com/trigonometry-assignment-help/","timestamp":"2024-11-11T04:56:00Z","content_type":"text/html","content_length":"68998","record_id":"<urn:uuid:2ce9fc70-f9c6-4fc9-92b6-6966ff54d70b>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00149.warc.gz"}
Foundations of College Algebra [Archived Catalog] 2018-2019 USC Salkehatchie Bulletin (Archived Copy) PCAM 106 - Foundations of College Algebra Credits: 3 Operations on real numbers, linear equations and inequalities, quadratic equations, factoring, absolute value equations, exponential and radical expressions, graphs, and functions. Additional topics may include math study skills, logarithms, exponential functions, probability, statistics, systems of equations, polynomial division, and mathematical modeling. Note: In order to receive a grade of C or better in PCAM 106 students must pass the math placement test (MPT) with a minimum score of MB1 or MA2. Close Window All © 2024 Salkehatchie Campus. Powered by the . Foundations of College Algebra [Archived Catalog] 2018-2019 USC Salkehatchie Bulletin (Archived Copy)
{"url":"http://bulletin.uscsalkehatchie.sc.edu/preview_course.php?catoid=74&coid=88664","timestamp":"2024-11-03T01:55:07Z","content_type":"text/html","content_length":"6010","record_id":"<urn:uuid:815c8e1d-79a4-49d4-98d2-893a831d88af>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00181.warc.gz"}
Formula Mass vs. Molar Mass: What's the Difference? Formula mass is the sum of atomic masses in a chemical formula, while molar mass is the mass of one mole of a substance in grams. Key Differences Formula mass refers to the calculated mass of an individual molecule or formula unit, based on the atomic masses of its constituent atoms. Molar mass, on the other hand, is the mass of one mole of a substance, representing the mass of Avogadro's number of molecules or formula units. The formula mass is expressed in atomic mass units (amu), and it reflects the sum of all atomic masses in a chemical formula. Molar mass is expressed in grams per mole (g/mol), and it quantifies how much one mole of a substance weighs. To calculate formula mass, atomic masses from the periodic table are summed according to the formula's stoichiometry. In calculating molar mass, the formula mass is used as the basis, but the unit is converted to grams per mole. The concept of formula mass is crucial in understanding the composition of molecules and compounds. Molar mass is essential for converting between moles and grams in chemical reactions and Formula mass is specific to the molecular or empirical formula of a compound. Molar mass, while derived from the formula mass, is a broader concept used in various calculations in chemistry. Comparison Chart Unit of Measurement Atomic mass units (amu) Grams per mole (g/mol) Sum of atomic masses in a formula Mass of one mole of a substance Used to determine the mass of a single molecule Used for mole-to-gram conversions and vice versa Crucial for molecular composition analysis Essential for stoichiometric calculations in chemistry Calculation Method Summation of atomic weights from the periodic table Formula mass converted to g/mol Formula Mass and Molar Mass Definitions Formula Mass The mass of a single molecule or formula unit. The formula mass of carbon dioxide (CO2) is about 44 amu. Molar Mass A measure of how much one mole of a compound weighs. The molar mass of carbon dioxide (CO2) is 44 g/mol. Formula Mass The sum of the atomic masses of all atoms in a chemical formula. The formula mass of water (H2O) is approximately 18 amu. Molar Mass The mass of one mole of a substance in grams. The molar mass of oxygen (O2) is 32 g/mol. Formula Mass The calculated mass of a compound based on its chemical formula. The formula mass of sodium chloride (NaCl) is 58.5 amu. Molar Mass The weight of Avogadro's number of molecules of a substance. The molar mass of glucose (C6H12O6) is about 180 g/mol. Formula Mass A measure of the total atomic mass in a compound's formula. Glucose (C6H12O6) has a formula mass of around 180 amu. Molar Mass The molecular or formula weight expressed in grams per mole. The molar mass of water (H2O) is approximately 18 g/mol. Formula Mass A representation of the molecular weight in atomic mass units. The formula mass of ammonia (NH3) is 17 amu. Molar Mass A conversion factor for relating moles of a substance to its mass. The molar mass of sodium chloride (NaCl) is 58.5 g/mol. What unit is molar mass expressed in? Molar mass is expressed in grams per mole (g/mol). What is molar mass? Molar mass is the mass of one mole of a substance in grams. Is formula mass the same for all molecules of a compound? Yes, the formula mass is consistent for all molecules of a specific compound. How is formula mass calculated? It's calculated by summing the atomic masses of each atom in the compound's formula. How does molar mass relate to stoichiometry? Molar mass is used to convert between moles and grams in stoichiometric calculations. Is the formula mass always a whole number? Not necessarily, as it can include fractional atomic masses. What is formula mass? Formula mass is the sum of the atomic masses of all atoms in a molecule's formula. What role does molar mass play in chemical reactions? It's crucial for measuring and reacting correct amounts of reactants and products. Can molar mass vary between samples of the same compound? No, the molar mass is a constant value for any given compound. How do you find the formula mass of a complex molecule? Add up the atomic masses of all atoms in the molecule's formula. Why is formula mass important in chemistry? It helps understand the composition and proportions of elements in a compound. Is molar mass used in physical chemistry? Yes, it's widely used in various physical chemistry applications. Can molar mass be used in gas law calculations? Yes, it's essential in calculations involving the Ideal Gas Law. What's the main difference between formula mass and molar mass? Formula mass is the mass of a single molecule in amu, while molar mass is the mass of a mole of molecules in grams. Can formula mass be used to determine molecular geometry? No, formula mass doesn't provide information on molecular geometry. What's the difference between molecular mass and formula mass? Molecular mass is the mass of a single molecule, similar to formula mass, but often used in the context of molecular compounds. Does the formula mass change with physical state? No, formula mass is constant regardless of the physical state of a compound. Can molar mass affect the properties of a substance? While molar mass itself doesn't affect properties, it's used to calculate properties like density. How is molar mass helpful in lab experiments? It aids in preparing solutions and calculating yields. Is formula mass relevant in biological chemistry? Yes, particularly in understanding and manipulating biochemical molecules. About Author Written by Harlon Moss Harlon is a seasoned quality moderator and accomplished content writer for Difference Wiki. An alumnus of the prestigious University of California, he earned his degree in Computer Science. Leveraging his academic background, Harlon brings a meticulous and informed perspective to his work, ensuring content accuracy and excellence. Edited by Aimie Carlson Aimie Carlson, holding a master's degree in English literature, is a fervent English language enthusiast. She lends her writing talents to Difference Wiki, a prominent website that specializes in comparisons, offering readers insightful analyses that both captivate and inform.
{"url":"https://www.difference.wiki/formula-mass-vs-molar-mass/","timestamp":"2024-11-12T05:33:02Z","content_type":"text/html","content_length":"127563","record_id":"<urn:uuid:8195e05c-cad0-4c50-9cc5-becb6845a870>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00454.warc.gz"}
49 research outputs found Associations between the satisfaction of basic psychological needs of autonomy, competence, and relatedness with current suicidal ideation and risk for suicidal behavior were examined. Two logistic regressions were conducted with a cross-sectional database of 440 university students to examine the association of need satisfaction with suicidal ideation and risk for suicidal behavior, while controlling for demographics and depressive symptoms. Suicidal ideation was reported by 15% of participants and 18% were found to be at risk for suicidal behavior. A one standard deviation increase in need satisfaction reduced the odds of suicidal ideation by 53%, OR (95% CI) = 0.47 (0.33–0.67), and the odds of being at risk for suicidal behavior by 50%, OR (95% CI) = 0.50 (0.37–0.69). Young adults whose basic psychological needs are met may be less likely to consider suicide and engage in suicidal behavior. Prospective research is needed to confirm these associations Suicide in later life is a global public health problem. The aim of this review was to conduct a systematic analysis of studies with comparison groups that examined the associations between social factors and suicidal behavior (including ideation, non-fatal suicidal behavior, or deaths) among individuals aged 65 and older. Our search identified only 16 articles (across 14 independent samples) that met inclusion criteria. The limited number of studies points to the need for further research. Included studies were conducted in Canada (n = 2), Germany (n = 1), Hong Kong (n = 1), Japan (n = 1), Singapore (n = 1), Sweden (n = 2), Taiwan (n = 1), the U.K. (n = 2), and the U.S. (n = 3). The majority of the social factors examined in this review can be conceptualized as indices of positive social connectedness—the degree of positive involvement with family, friends, and social groups. Findings indicated that at least in industrialized countries, limited social connectedness is associated with suicidal ideation, non-fatal suicidal behavior, and suicide in later life. Primary prevention programs designed to enhance social connections as well as a sense of community could potentially decrease suicide risk, especially among men This work is devoted to prove rigorously the existence of a liquid-vapor branch in the phase diagram of uids, when considering a system of particles in Rd interacting with a reasonable potential with both long and short range contributions. The model we consider is a variant of the model introduced by Lebowitz, Mazel and Presutti ([1]), obtained by adding a hard core interaction to the original Kac potential inter- action, the rst acting on a di erent scale. Model: Let q = (q1; :::qn) denote a con guration of n particles in Rd with dimension d 2. The hamiltonian for the LMP model is given by the following function: HLMP ; (q) = Z Rd e ( (r; q)) dr where e ( ) = 2 2 + 4 4! is the energy density with a quadratic repulsive term and a quartic attractive term and (r; q) := X qi2q J (r; qi) is the local particle density at r 2 Rd. The local density is de ned through Kac potentials, i.e. functions which scale in the following way: J (r; r0) = dJ( r; r0), where J(r; r0) is a symmetric, translation invariant (i.e. J(r; r0) = J(0; r0 r)) smooth probability kernel supposed for simplicity to vanish for jr r0j 1. Thus the range of the interaction has order 1 (for both repulsive and attractive potentials) and the \Kac scaling parameter" is assumed to be small. This choice of the potentials makes the LMP model a perturbation of the mean eld, in the sense that when taking the thermodynamic limit followed by the limit ! 0 the free energy is equivalent to the free energy in the van der Waals description. Note that the LMP interaction can be written in terms of one, two and four body poten- tials in the following way: HLMP ; (q) = jqj 1 2! X i6=j J(2) (qi; qj) + 1 4! X i16=:::6=i4 J(4) (qi1 ; :::; qi4); (1.0.1) where J(2) (qi; qj) = Z J (r; qi)J (r; qj) dr (1.0.2) J(4) (qi1 ; :::; qi4) = Z J (r; qi1) J (r; qi4) dr: In the model with hard cores the phase space is restricted by adding an interaction which is = 1 when the particles get too much close with each other and is 0 when the particles are far. Hence the interaction is given by Hhc(q) := X i<j V hc(qi; qj) where V hc : Rd ! R is pair potential de ned as: V hc(qi; qj) = 8>< >: +1 if jqi qj j R 0 if jqi qj j > R with R the radius of the hard spheres and = jB0(R)j their volume. Result: The main goal of this manuscript is to prove perturbativly that by adding a hard core interaction to the LMP model, with the hard core radius R su ciently small, the LMP liquid- vapor phase transition is essentially una ected. Hence, we prove existence of two di erent Gibbs measures corresponding to the two phases. Let us de ne the grand canonical measure in the region Rd and boundary conditions q 2 Q c as: ; ;R; (dqj q) = Z 1 ; ;R; ( j q)e H ;R; (qj q) (dq): Then the main theorem is the following Theorem 1.0.1. Consider the model with hamiltonian HLMP ; (q) + Hhc(q) in dimension d 2. There are R0, c;R; 0;R and for any 0 < R R0 and 2 ( c;R; 0;R) there is ;R > 0 so that for any ;R there is ; ;R such that: There are two distinct DLR measures ; ;R with chemical potential ; ;R and inverse tem- perature and two di erent densities: 0 < ; ;R; < ; ;R;+. Thus we prove the existence of two distinct states, which are interpreted as the two pure phases of the system: + ; ;R describes the liquid phase with density ; ;R;+ while ; ;R describes the vapor phase, with the smaller density ; ;R; . ; ;R; and ; ;R have limit as ! 0, the limit being ;R; < ;R;+ and ( ;R) which are respectively densities and chemical potential for which there is a phase transition in the mean eld model. The critical temperature c;R is close to the analogous critical value for the LMP model for the volume of the hard cores small enough: c;R = LMP c ( LMP c )2=3 + O( 2); LMP c = 3=23=2: Our proof will follow Pirokov-Sinai theory in the version proposed by Zahradn k, [3], which involves the notion of cuto weights. The analysis is based on the ideas of coarse graining and contour model and the goal is to prove an analogous of the Peierls argument for discrete systems. Crucial ingredient in the proof of Theorem 1.0.1 is to show the convergence of the cluster expansion for the hard spheres gas in the canonical ensemble when the density is small, small enough. This is the content of a recent paper [2]
{"url":"https://core.ac.uk/search/?q=authors%3A(Van%20Orden%2C%20Kimberly%20A.)","timestamp":"2024-11-14T02:25:31Z","content_type":"text/html","content_length":"203875","record_id":"<urn:uuid:660b3f5d-f54e-402d-b569-ec3067d1b246>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00616.warc.gz"}
API Reference¶ upsetplot.plot(data, fig=None, **kwargs)[source]¶ Make an UpSet plot of data on fig data : pandas.Series or pandas.DataFrame Values for each set to plot. Should have multi-index where each level is binary, corresponding to set membership. If a DataFrame, sum_over must be a string or False. fig : matplotlib.figure.Figure, optional Defaults to a new figure. Other arguments for UpSet subplots : dict of matplotlib.axes.Axes Keys are ‘matrix’, ‘intersections’, ‘totals’, ‘shading’ class upsetplot.UpSet(data, orientation='horizontal', sort_by='degree', sort_categories_by='cardinality', subset_size='auto', sum_over=None, min_subset_size=None, max_subset_size=None, max_subset_rank=None, min_degree=None, max_degree=None, facecolor='auto', other_dots_color=0.18, shading_color=0.05, with_lines=True, element_size=32, intersection_plot_elements=6, totals_plot_elements=2, show_counts='', show_percentages=False, include_empty_subsets=False)[source]¶ Manage the data and drawing for a basic UpSet plot Primary public method is plot(). data : pandas.Series or pandas.DataFrame Elements associated with categories (a DataFrame), or the size of each subset of categories (a Series). Should have MultiIndex where each level is binary, corresponding to category membership. If a DataFrame, sum_over must be a string or False. orientation : {‘horizontal’ (default), ‘vertical’} If horizontal, intersections are listed from left to right. sort_by : {‘cardinality’, ‘degree’, ‘-cardinality’, ‘-degree’, ‘input’, ‘-input’} If ‘cardinality’, subset are listed from largest to smallest. If ‘degree’, they are listed in order of the number of categories intersected. If ‘input’, the order they appear in the data input is used. Prefix with ‘-’ to reverse the ordering. Note this affects subset_sizes but not data. sort_categories_by : {‘cardinality’, ‘-cardinality’, ‘input’, ‘-input’} Whether to sort the categories by total cardinality, or leave them in the input data’s provided order (order of index levels). Prefix with ‘-’ to reverse the ordering. subset_size : {‘auto’, ‘count’, ‘sum’} Configures how to calculate the size of a subset. Choices are: ‘auto’ (default) If data is a DataFrame, count the number of rows in each group, unless sum_over is specified. If data is a Series with at most one row for each group, use the value of the Series. If data is a Series with more than one row per group, raise a ValueError. Count the number of rows in each group. Sum the value of the data Series, or the DataFrame field specified by sum_over. sum_over : str or None If subset_size='sum' or 'auto', then the intersection size is the sum of the specified field in the data DataFrame. If a Series, only None is supported and its value is summed. min_subset_size : int or “number%”, optional Minimum size of a subset to be shown in the plot. All subsets with a size smaller than this threshold will be omitted from plotting. This may be specified as a percentage using a string, like “50%”. Size may be a sum of values, see subset_size. New in version 0.5. Changed in version 0.9: Support percentages max_subset_size : int or “number%”, optional Maximum size of a subset to be shown in the plot. All subsets with a size greater than this threshold will be omitted from plotting. This may be specified as a percentage using a string, like “50%”. New in version 0.5. Changed in version 0.9: Support percentages max_subset_rank : int, optional Limit to the top N ranked subsets in descending order of size. All tied subsets are included. Parameters: New in version 0.9. min_degree : int, optional Minimum degree of a subset to be shown in the plot. New in version 0.5. max_degree : int, optional Maximum degree of a subset to be shown in the plot. New in version 0.5. facecolor : ‘auto’ or matplotlib color or float Color for bar charts and active dots. Defaults to black if axes.facecolor is a light color, otherwise white. Changed in version 0.6: Before 0.6, the default was ‘black’ other_dots_color : matplotlib color or float Color for shading of inactive dots, or opacity (between 0 and 1) applied to facecolor. New in version 0.6. shading_color : matplotlib color or float Color for shading of odd rows in matrix and totals, or opacity (between 0 and 1) applied to facecolor. New in version 0.6. with_lines : bool Whether to show lines joining dots in the matrix, to mark multiple categories being intersected. element_size : float or None Side length in pt. If None, size is estimated to fit figure intersection_plot_elements : int The intersections plot should be large enough to fit this many matrix elements. Set to 0 to disable intersection size bars. Changed in version 0.4: Setting to 0 is handled. totals_plot_elements : int The totals plot should be large enough to fit this many matrix elements. Set to 0 to disable the totals plot. Changed in version 0.9: Setting to 0 is handled. show_counts : bool or str, default=False Whether to label the intersection size bars with the cardinality of the intersection. When a string, this formats the number. For example, ‘{:d}’ is equivalent to True. Note that, for legacy reasons, if the string does not contain ‘{‘, it will be interpreted as a C-style format string, such as ‘%d’. show_percentages : bool or str, default=False Whether to label the intersection size bars with the percentage of the intersection relative to the total dataset. When a string, this formats the number representing a fraction of samples. For example, ‘{:.1%}’ is the default, formatting .123 as 12.3%. This may be applied with or without show_counts. New in version 0.4. include_empty_subsets : bool (default=False) If True, all possible category combinations will be shown as subsets, even when some are not present in data. │add_catplot(kind[, value, elements]) │Add a seaborn catplot over subsets when plot() is called. │ │add_stacked_bars(by[, sum_over, colors, …])│Add a stacked bar chart over subsets when plot() is called. │ │make_grid([fig]) │Get a SubplotSpec for each Axes, accounting for label text width │ │plot([fig]) │Draw all parts of the plot onto fig or a new figure │ │plot_intersections(ax) │Plot bars indicating intersection size │ │plot_matrix(ax) │Plot the matrix of intersection indicators onto ax │ │plot_totals(ax) │Plot bars indicating total set size │ │style_categories(categories, *[, …]) │Updates the style of the categories. │ │style_subsets([present, absent, …]) │Updates the style of selected subsets’ bars and matrix dots │ add_catplot(kind, value=None, elements=3, **kw)[source]¶ Add a seaborn catplot over subsets when plot() is called. kind : str One of {“point”, “bar”, “strip”, “swarm”, “box”, “violin”, “boxen”} value : str, optional Column name for the value to plot (i.e. y if orientation=’horizontal’), required if data is a DataFrame. Parameters: elements : int, default=3 Size of the axes counted in number of matrix elements. **kw : dict Additional keywords to pass to seaborn.catplot(). Our implementation automatically determines ‘ax’, ‘data’, ‘x’, ‘y’ and ‘orient’, so these are prohibited keys in kw. Returns: None add_stacked_bars(by, sum_over=None, colors=None, elements=3, title=None)[source]¶ Add a stacked bar chart over subsets when plot() is called. Used to plot categorical variable distributions within each subset. by : str Column name within the dataframe for color coding the stacked bars, containing discrete or categorical values. sum_over : str, optional Ordinarily the bars will chart the size of each group. sum_over may specify a column which will be summed to determine the size of each bar. colors : Mapping, list-like, str or callable, optional The facecolors to use for bars corresponding to each discrete label, specified as one of: Maps from label to matplotlib-compatible color specification. A list of matplotlib colors to apply to labels in order. The name of a matplotlib colormap name. When called with the number of labels, this should return a list-like of that many colors. Matplotlib colormaps satisfy this callable API. Uses the matplotlib default colormap. elements : int, default=3 Size of the axes counted in number of matrix elements. title : str, optional The axis title labelling bar length. Returns: None Get a SubplotSpec for each Axes, accounting for label text width Draw all parts of the plot onto fig or a new figure fig : matplotlib.figure.Figure, optional Defaults to a new figure. subplots : dict of matplotlib.axes.Axes Keys are ‘matrix’, ‘intersections’, ‘totals’, ‘shading’ Plot bars indicating intersection size Plot the matrix of intersection indicators onto ax Plot bars indicating total set size style_categories(categories, *, bar_facecolor=None, bar_hatch=None, bar_edgecolor=None, bar_linewidth=None, bar_linestyle=None, shading_facecolor=None, shading_edgecolor=None, shading_linewidth= None, shading_linestyle=None)[source]¶ Updates the style of the categories. Select a category by name, and style either its total bar or its shading. categories : str or list[str] Category names where the changed style applies. bar_facecolor : str or RGBA matplotlib color tuple, optional. Override the default facecolor in the totals plot. bar_hatch : str, optional Set a hatch for the totals plot. bar_edgecolor : str or matplotlib color, optional Set the edgecolor for total bars. bar_linewidth : int, optional Line width in points for total bar edges. bar_linestyle : str, optional Line style for edges. shading_facecolor : str or RGBA matplotlib color tuple, optional. Override the default alternating shading for specified categories. shading_edgecolor : str or matplotlib color, optional Set the edgecolor for bars, dots, and the line between dots. shading_linewidth : int, optional Line width in points for edges. shading_linestyle : str, optional Line style for edges. style_subsets(present=None, absent=None, min_subset_size=None, max_subset_size=None, max_subset_rank=None, min_degree=None, max_degree=None, facecolor=None, edgecolor=None, hatch=None, linewidth= None, linestyle=None, label=None)[source]¶ Updates the style of selected subsets’ bars and matrix dots Parameters are either used to select subsets, or to style them with attributes of matplotlib.patches.Patch, apart from label, which adds a legend entry. present : str or list of str, optional Category or categories that must be present in subsets for styling. absent : str or list of str, optional Category or categories that must not be present in subsets for styling. min_subset_size : int or “number%”, optional Minimum size of a subset to be styled. This may be specified as a percentage using a string, like “50%”. Changed in version 0.9: Support percentages max_subset_size : int or “number%”, optional Maximum size of a subset to be styled. This may be specified as a percentage using a string, like “50%”. Changed in version 0.9: Support percentages max_subset_rank : int, optional Limit to the top N ranked subsets in descending order of size. All tied subsets are included. New in version 0.9. min_degree : int, optional Parameters: Minimum degree of a subset to be styled. max_degree : int, optional Maximum degree of a subset to be styled. facecolor : str or matplotlib color, optional Override the default UpSet facecolor for selected subsets. edgecolor : str or matplotlib color, optional Set the edgecolor for bars, dots, and the line between dots. hatch : str, optional Set the hatch. This will apply to intersection size bars, but not to matrix dots. linewidth : int, optional Line width in points for edges. linestyle : str, optional Line style for edges. label : str, optional If provided, a legend will be added Dataset loading and generation¶ Data querying and transformation¶ upsetplot.query(data, present=None, absent=None, min_subset_size=None, max_subset_size=None, max_subset_rank=None, min_degree=None, max_degree=None, sort_by='degree', sort_categories_by='cardinality' , subset_size='auto', sum_over=None, include_empty_subsets=False)[source]¶ Transform and filter a categorised dataset Retrieve the set of items and totals corresponding to subsets of interest. data : pandas.Series or pandas.DataFrame Elements associated with categories (a DataFrame), or the size of each subset of categories (a Series). Should have MultiIndex where each level is binary, corresponding to category membership. If a DataFrame, sum_over must be a string or False. present : str or list of str, optional Category or categories that must be present in subsets for styling. absent : str or list of str, optional Category or categories that must not be present in subsets for styling. min_subset_size : int or “number%”, optional Minimum size of a subset to be reported. All subsets with a size smaller than this threshold will be omitted from category_totals and data. This may be specified as a percentage using a string, like “50%”. Size may be a sum of values, see subset_size. Changed in version 0.9: Support percentages max_subset_size : int or “number%”, optional Maximum size of a subset to be reported. Changed in version 0.9: Support percentages max_subset_rank : int, optional Limit to the top N ranked subsets in descending order of size. All tied subsets are included. New in version 0.9. min_degree : int, optional Minimum degree of a subset to be reported. max_degree : int, optional Maximum degree of a subset to be reported. sort_by : {‘cardinality’, ‘degree’, ‘-cardinality’, ‘-degree’, ‘input’, ‘-input’} If ‘cardinality’, subset are listed from largest to smallest. If ‘degree’, they are listed in order of the number of categories intersected. If ‘input’, the order they appear in the data input is used. Prefix with ‘-’ to reverse the ordering. Note this affects subset_sizes but not data. sort_categories_by : {‘cardinality’, ‘-cardinality’, ‘input’, ‘-input’} Whether to sort the categories by total cardinality, or leave them in the input data’s provided order (order of index levels). Prefix with ‘-’ to reverse the ordering. subset_size : {‘auto’, ‘count’, ‘sum’} Configures how to calculate the size of a subset. Choices are: ‘auto’ (default) If data is a DataFrame, count the number of rows in each group, unless sum_over is specified. If data is a Series with at most one row for each group, use the value of the Series. If data is a Series with more than one row per group, raise a ValueError. Count the number of rows in each group. Sum the value of the data Series, or the DataFrame field specified by sum_over. sum_over : str or None If subset_size='sum' or 'auto', then the intersection size is the sum of the specified field in the data DataFrame. If a Series, only None is supported and its value is summed. include_empty_subsets : bool (default=False) If True, all possible category combinations will be returned in subset_sizes, even when some are not present in data. Including filtered data, filtered and sorted subset_sizes and overall category_totals and total. >>> from upsetplot import query, generate_samples >>> data = generate_samples(n_samples=20) >>> result = query(data, present="cat1", max_subset_size=4) >>> result.category_totals cat1 14 cat2 4 cat0 0 dtype: int64 >>> result.subset_sizes cat1 cat2 cat0 True True False 3 Name: size, dtype: int64 >>> result.data index value cat1 cat2 cat0 True True False 0 2.04... False 2 2.05... False 10 2.55... >>> # Sorting: >>> query(data, min_degree=1, sort_by="degree").subset_sizes cat1 cat2 cat0 True False False 11 False True False 1 True True False 3 Name: size, dtype: int64 >>> query(data, min_degree=1, sort_by="cardinality").subset_sizes cat1 cat2 cat0 True False False 11 True False 3 False True False 1 Name: size, dtype: int64 >>> # Getting each subset's data >>> result = query(data) >>> result.subsets[frozenset({"cat1", "cat2"})] index value cat1 cat2 cat0 False True False 3 1.333795 >>> result.subsets[frozenset({"cat1"})] index value cat1 cat2 cat0 False False False 5 0.918174 False 8 1.948521 False 9 1.086599 False 13 1.105696 False 19 1.339895
{"url":"https://upsetplot.readthedocs.io/en/latest/api.html","timestamp":"2024-11-12T00:16:29Z","content_type":"text/html","content_length":"83535","record_id":"<urn:uuid:af6c1a91-32a5-47e5-b11d-aa54f46c42a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00224.warc.gz"}
Expressing a Pair of Simultaneous Equations as a Matrix Equation Question Video: Expressing a Pair of Simultaneous Equations as a Matrix Equation Mathematics • Third Year of Secondary School Express the simultaneous equations 3π ₯ + 2π ¦ = 12, 3π ₯ + π ¦ = 7 as a matrix equation. Video Transcript Express the simultaneous equations three π ₯ plus two π ¦ is equal to 12 and three π ₯ plus π ¦ is equal to seven as a matrix equation. So weβ re given a set of two simultaneous equations in two variables and asked to express this set of equations in matrix form. And what this requires us to do is to separate out the coefficients of our variables into one matrix and the variables themselves into another and the constants on the right-hand side into a third. And because we have two equations in two variables π ₯ and π ¦, our coefficient matrix will be a two-by-two matrix, that is, with two rows and two columns. Our variables matrix will be a two-by-one column matrix, and our constants again, a two-by-one column matrix. Itβ s important to note that we must be able to reform our original equations by performing matrix multiplication on the left-hand side. And remember that multiplying a matrix with π rows and π columns by a matrix with π rows and π columns will give us a matrix with π rows and π columns. And for matrix multiplication to work, the number of columns π in our first matrix must be the number of rows in the second matrix. So now we want to find the matrix equation in this form which, if we multiply it out, reproduces our original system of equations. So letβ s look more closely at our equations. Itβ s important to make sure before we start putting entries into our matrix that the variables are aligned in our system of equations so that the π ₯β s are above one another and so are the π ¦β s. The reason for this is that when we populate our matrix, weβ re going to read off the coefficients of the π ₯β s and π ¦β s. Our coefficients in the first equation, equation one, are three and two with the constant 12 on the right-hand side and the three and two from the first row in our coefficient matrix and the 12, the first element in our matrix on the right-hand side. Similarly, the elements in the second row of our coefficient matrix are the coefficients in the second equation, equation two, that is, three and one. And the second element on our matrix of constant on the right-hand side is seven. This matrix equation is the full matrix representation of the set of simultaneous equations three π ₯ plus two π ¦ is equal to 12 and three π ₯ plus π ¦ is equal to seven. If we were to apply matrix multiplication to the left-hand side of our matrix equation, we would have three π ₯ plus two π ¦ is equal to 12, which is our first equation one, and three π ₯ plus π ¦ is equal to seven, which is our second equation. And weβ re back to our original set of simultaneous linear equations.
{"url":"https://www.nagwa.com/en/videos/729105961063/","timestamp":"2024-11-02T12:11:05Z","content_type":"text/html","content_length":"251348","record_id":"<urn:uuid:b32400d7-b5a2-49e7-b3f6-0a39cf977f84>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00052.warc.gz"}
Mean Value Learning Statistics with Python Learning Statistics with Python Now, we will delve into statistical measures, starting with the concept of the mean value. To calculate the mean, we add up all the numbers in the sequence of data and then divide by the total number of values. Take a look at the illustration where we'll calculate the mean salary of people: Thanks for your feedback!
{"url":"https://codefinity.com/courses/v2/a849660e-ddfa-4033-80a6-94a1b7772e23/35b2d208-a6f4-41fa-8dc8-1ad2d6ff1dfe/9ec24957-9608-415a-804e-ac2238548ced","timestamp":"2024-11-06T15:15:09Z","content_type":"text/html","content_length":"327368","record_id":"<urn:uuid:8966e092-e7a9-4896-be8e-3d86e3e4de78>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00172.warc.gz"}
Brent - Las Vegas Tutor Certified Tutor The mind is like a muscle that must be trained in a progressive manner. In math, this would mean NOT moving on too quickly from one concept to the next, which is what separates good math teachers from the not so good ones. I make absolutely sure my students have internalized step A and can perform it by themselves before moving onto step B. Like the floors of a building, math concepts build on each other and must be fully understood so the structure remains stable. In short, knowing and teaching are two different things. I know my Algebra and Geometry through and through, but perhaps most importantly, I have a knack for teaching it as well, with efficiency. I connect well with people, and am committed to making sure my students strive for success. fitness & nutrition, running, helping my students succeed. Tutoring Subjects Elementary School Math Elementary School Reading What is your teaching philosophy? Information, especially in math, builds on itself. If a student doesn't understand concept A, concept B will be that much harder to grasp. This is why it's important to make absolutely sure the student understands a concept before moving onto the next. If he or she "sort of" gets concept A, and "sort of" gets concept B, the building that is their math skills will be an unstable one, more like a house of cards. What might you do in a typical first session with a student? Most importantly, a first session would include seeing where the student is in regards to the material. Are they stuck on a particular problem or concept? Do they fully understand the concepts the troublesome problem inevitably is built upon? In other words, I would make sure they have a firm grasp on the information needed to solve the current problem, before actually helping them solve it step by step. How can you help a student become an independent learner? I would try to motivate the student to want to do well. Almost like it is him or her against the material, and they must win. Make them the hero of their learning journey so they can save the day. How would you help a student stay motivated? In addition to the satisfaction a student should ideally get from solving hard math problems, it's important to explain how vital math is in regards to one’s future. If one does well in algebra this year, next year they'll move onto geometry. If they do well in geometry this year, they'll be in algebra 2 the next. This can lead to getting into a good college, getting a good job, and ultimately an easier, more pleasurable life. Not to mention their parents will be super proud of them for doing well in math! How do you help students who are struggling with reading comprehension? I would try to write the problem in a simpler form. In math, problems are often written in intentionally confusing ways in order to stump the student. If the students know this, and knows the tricks they use, they will have a better chance of comprehending what the actual question is when it's test time. What strategies have you found to be most successful when you start to work with a student? When I tutor, I found the most important element in internalizing/solidifying the material, is to practice it. The student might "get" a concept, and be able to solve a few examples that are similar. But can they solve a similar problem when the variables are switched? Can they solve it if all the negative values are changed into positive ones and vice versa? Repetition is key, and it has worked for me and others I have worked with. How would you help a student get excited/engaged with a subject that they are struggling in? In math especially, the student should understand how important of a subject it is. When I was in high school, people who were good at math were looked at as being smart, thus were looked at as being cool. Not only will the student feel smart, look smart, and please their parents, they will develop a satisfaction from defeating each math problem like they are the hero of their own story. What techniques would you use to be sure that a student understands the material? Examples, examples, examples. Practice, practice, practice. Not just 1 example, or 5, but 10, 15, or more if needed (on their own of course once it's believed they understand the material). The student needs to be able to recognize the problem, know what needs to be done, and do it, often quickly, especially during a test or pop quiz. How do you build a student's confidence in a subject? First, as I was tutoring him or her, I would encourage them through the learning process. It's important that students receive affirmation, that they know they are doing a good job along the way, that they hear "you can do this", "you know this", etc. Most importantly though, I would make sure the student is able to work through the problems on their own, without any help, and that they can do it in a timely manner in preparation for any future exams. How do you evaluate a student's needs? I would spot where the gaps are. If I were to write an example problem, I would take note where the student gets stuck and has to rack their brain on what to do next. I would stop and explain that step of the problem by itself, making sure the student knows it, internalizes it, and that it's practiced, repetitively if need be. Time is valuable during an exam, it's important that a student develops fluidity in solving math problems. How do you adapt your tutoring to the student's needs? For example, some students are good at memorizing. A large part of being successful in math is the ability memorize the formulas so one can work through the steps of the problem. For these students, emphasis will be placed on applying the formulas to the problem. For other students, time will have to be devoted to first memorizing the formulas, and then applying them, which they might be better at than the student who is good at memorizing the formulas. It's all about spotting where a student’s strengths and weaknesses are, and addressing the weak points. What types of materials do you typically use during a tutoring session? I always have a pencil and notepad. It's important to be able to quickly write out an example problem for a student to solve step by step. The student must also have a pencil and paper, it's vitally important that a student is able to have a thorough "cheat sheet", full with formulas followed by examples where the formula is applied. It's also important that a student writes these notes themselves to help internalize the information. In geometry for example, this paper would consist of things like, what the radius of a circle is, what the diameter is, how to find the circumference using the radius, etc.
{"url":"https://www.varsitytutors.com/tutors/878235037","timestamp":"2024-11-14T01:13:52Z","content_type":"text/html","content_length":"588937","record_id":"<urn:uuid:d840ece2-4cc9-4d2e-89fe-16d1459d3876>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00703.warc.gz"}
partitions of unity and locally finite cover Frank, okay, thanks. Presently the $n$Lab pages intuitionstic mathematics and constructive mathematics may not completely reflect the sharp distinction that you are making here (or if they do, I find it hard to extract it). Maybe you have the energy to touch these pages accordingly? David, thanks! Where you wrote “convergent infinite sum” I made it come out as “convergent infinite series”. I edited partition of unity to clarify around the case without point-finiteness. Oh well, that’s nice!, thank you. I might edit the entry in ‘partition of unity’ since it now reads ’in intuitionistic mathematics’ whereas the relevant definitions and proofs are already in BISH (constructive mathematics, without use of intuitionistic axioms). {Many paragraphs in the thesis are marked with an asterisk * to indicate that they are valid in BISH}. Thanks for the pointer. I have added it here, here and here. If anyone is interested, there is a constructive (BISH) treatment of ’partition of unity’, ’locally finite open cover’, and ’star-finite open cover’ in section 3.1 of my thesis modern intuitionistic topology. It gives a simple proof that every per-enumerable* cover of a separable metric space has i) a star-finite refinement b) a subordinate partition of unity. *A subset $U$ is enumerably open when it is an enumerable union of basic opens, a cover $\mathcal{U}$ is per-enumerable when it is an enumerable collection of enumerably open sets. Okay, thanks. We should says this more clearly in the entry then. Yes, it seems to be as I thought, from looking at the Google Books link, just before Lemma 2.6 in the Appendix. Hmm, not sure. I’d have to check Dold’s book again. In principle one could say that one has a collection of functions to [0,1] such that at each point only countably many are nonzero, and the sum exists and is 1 at each point. I have added to locally finite cover statement and proof that every locally finite refinement induces a locally finite cover with the original index set. What’s a “non-point finite partition of unity”? If it is what it sounds like, then how is the sum well defined? copy it over to the lab I’ve now put this in at partition of unity. I’ve found the argument that I was discussing with Harry here, in Dold’s lectures on algebraic topology. It’s a result due to an M. Mather, in a 1965 Cambridge PhD thesis, Paracompactness and partitions of unity. Will type it here then copy it over to the lab when at work tomorrow. Definition: A collection of functions $u_i : X \to [0,1]$ is called locally finite if the cover $u_i^{-1}(0,1]$ (the induced cover) is locally finite. Proposition (Mather, 1965): Let $\{u_i\}_J$ be a non-point finite partition of unity. Then there is a locally finite partition of unity $\{v_i\}_{i\in J}$ such that the induced cover of the latter is a refinement of the induced cover of the former. ’Dold’s trick’ is about taking a countable family of functions $u_i$ and turning it into a locally finite partition of unity, the proof of the above proposition is a little bit different in flavour (but not that different). partition of unity, locally finite cover Will put up some stuff about Dold’s trick of taking a not-necessarily point finite partition of unity and making a partition of unity. There is a case when I know it works and a case I’m really not sure about - I need to find where the argument falls down because I get too strong a result. I’ll discuss this in the thread soon, and port it over when it is stable. OK, fair enough, I’ll see what I can do in the coming month. I’m always a bit reluctant to edit because I’m not sure that my ‘style’ will fit in or serve the purpose of the nLab. But I should be able to manage some clarification and specification of ideas and axioms. It’s true that my energy is limited, so it will take a bit more time than average, but I suppose that won’t be a problem. [By the way, I find it really really impressive how hard and conscientiously people work on nLab, and work together.] I’ll notify of any changes I make, here on nForum. Thanks, Frank! Please don’t worry about style. The important point is the content.
{"url":"https://nforum.ncatlab.org/discussion/1338/partitions-of-unity-and-locally-finite-cover/?Focus=62291","timestamp":"2024-11-07T06:33:24Z","content_type":"application/xhtml+xml","content_length":"62926","record_id":"<urn:uuid:5f39a05f-d79c-423e-826a-b0951f73ca07>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00415.warc.gz"}
What Is TSX Vibrating Screen Design Calculations? Vibration has been committed to the manufacturing and research of vibrating screen industry for many years. For the design and calculation of vibrating screen, I listed below: 1. Calculation method of handling capacity: Q = 3600*b*v*h* γ Where, Q: treatment capacity, unit: t/h b: Screen width L Unit m H: Average thickness of material (m) γ: Bulk density of materials, t/m3 v: Material running speed, m/s 2. The calculation method of running speed of linear vibrating materials is: v=kv*X*w*cos(8) Wall vibrator [1+tg (6) * tg (x)] 3. The calculation method of running speed of circular vibration animal feed is: v=kv* λ* w2* ( 1+ ) *a Where, kv: comprehensive empirical coefficient, generally 0.75~0.95 λ: Single amplitude, in mm w: Vibration frequency in rad/S δ: Vibration direction angle, in ° A: Inclination angle of screen surface unit ° 4. Dynamic load: P=k* λ Where, k: spring stiffness, in N/m λ: Amplitude in m P: Dynamic load in N The maximum dynamic load (resonant dynamic load) is calculated as 4~7 times of the above results The main parameters of the vibration motor include vibration force, vibration frequency, amplitude, power and motor stages. The following is a brief introduction: The vibration form produced by a single vibration motor installed on the equipment is generally circular vibration without directive, while the combination of more than two vibration motors Use can produce a variety of vibration patterns. 2、 The most basic vibration elements are vibration frequency and amplitude, from which velocity (vibration frequency x amplitude), and acceleration (frequency Square x amplitude) 3、 Vibration frequency The unit of frequency is times/second, also called Hertz. In the vibration motor, it is determined by the number of poles and cycles 4、 Amplitude and acceleration: In the vibration with the center of mass, the distance Ym from the center of mass to the maximum displacement is called the amplitude. In the vibration application, the distance S (double vibration) To express. In the vibration system using vibration motor, the determination of amplitude is very important. The vibration motor is composed of a special motor and an external excitation weight. When the motor is powered on and rotating, the exciting force generated by the exciting block is transmitted through the motor bottom angle or flange plate Vertical and horizontal vibrating machinery. The vibration motor consists of a specially designed sub line package and rotor shaft, which can withstand high-frequency vibration. The horizontal vibration motor uses four sector eccentric blocks As an exciting block, the angle between two eccentric blocks at the coaxial end can be adjusted to adjust the exciting force of the vibration motor from zero to maximum. When the vibration motor is powered on and rotated, it drives the eccentric blocks at both ends of the motor shaft to generate inertial excitation force, which is a spatial rotation force with amplitude of Fm, Fm=mrw2。 M – mass of eccentric block R – The distance between the centrefold of the eccentric block and the axis of rotation, that is, the eccentricity W-a motor rotation angle frequency w = 2πn/60 N-Vibration times of vibration motor
{"url":"https://www.tsxscreen.com/blog/what-is-tsx-vibrating-screen-design-calculations.html","timestamp":"2024-11-06T14:16:51Z","content_type":"text/html","content_length":"36925","record_id":"<urn:uuid:6675995e-e2e7-49ec-bc4a-fdad20fd76c9>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00331.warc.gz"}
Risk-Adjusted Return What is a risk-adjusted return? Risk-adjusted return is a financial concept that takes into account the level of risk associated with an investment in order to evaluate its performance. It helps investors determine whether the potential return is worth the amount of risk taken. By considering both the return and the risk involved, investors can make more informed decisions and compare different investment opportunities. Key takeaways - Risk-adjusted return measures how well an investment performs relative to the level of risk taken. - It helps investors evaluate whether the return justifies the risk involved. - Different risk-adjusted measures, such as the Sharpe ratio, consider both the return and the volatility of an investment. Understanding risk-adjusted return Imagine you have two investment options: Option A and Option B. Option A has the potential for high returns, but it also comes with a higher level of risk. Option B, on the other hand, has lower potential returns but is considered less risky. How do you decide which one to choose? This is where risk-adjusted return comes into play. It allows you to assess the performance of each investment relative to the risk taken. Instead of solely focusing on the return, you consider whether the return justifies the level of risk involved. Various risk-adjusted measures exist, such as the Sharpe ratio, which takes into account both the return and the volatility of an investment. The Sharpe ratio helps you determine if an investment generated excess return compared to a risk-free investment after adjusting for the risk taken. Risk-adjusted return in the real world Let's say you're considering two investment funds: Fund X and Fund Y. Fund X has delivered an average annual return of 10% over the past five years, while Fund Y has delivered an average annual return of 8%. At first glance, Fund X may seem like the better option. However, when you dig deeper and calculate their risk-adjusted returns using a measure like the Sharpe ratio, you find that Fund Y has a higher risk-adjusted return. This means that Fund Y has provided a better return considering the level of risk taken. Final thoughts on risk-adjusted return Risk-adjusted return helps investors evaluate the performance of an investment relative to the level of risk involved. It considers both the return and the risk, allowing investors to make more informed decisions. By using risk-adjusted measures like the Sharpe ratio, investors can compare different investment options and determine whether the potential return justifies the risk taken. This concept helps individuals assess the trade-off between risk and reward and make investment choices that align with their financial goals and risk tolerance.
{"url":"https://www.femaleinvest.com/en-gb/investment-dictionary/risk-adjusted-return","timestamp":"2024-11-05T02:26:45Z","content_type":"text/html","content_length":"42902","record_id":"<urn:uuid:81e5f736-4e75-49f4-86ce-5483589ebc3e>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00044.warc.gz"}
Setup and Rule Diagrams - LSAT Discussion Forum Dave Killoran • PowerScore Staff • Posts: 5962 • Joined: Mar 25, 2011 - Fri Jan 21, 2011 12:00 am #80188 Setup and Rule Diagram Explanation This is a Circular Linearity, Identify the Possibilities game. The first rule establishes that at least one of any three consecutively numbered lights is off, meaning three lights in a row cannot be on: The second rule establishes that light 8 is on: The third rule states that lights 2 and 7 cannot be on when light 1 is on: This rule will play a pivotal role in an inference to be discussed shortly. The fourth rule indicates that at least one of the three lights on each side is on: The fifth rule is another rule about sides and lights, and it indicates that if exactly one light on a side is on, then that light must be the center light: 1 light on on The contrapositive of this rule is: Center on Side 1 light on Since a side must have at least one light on and cannot have all three lights on, this contrapositive can be translated as: off 2 lights on When a side has two lights on but the center is not on, then both corners must be on: The contrapositive of this inference is: Both corners on that side are on on Thus, if one of the corners is off, then the center light is automatically on. The final rule states that two lights on the north side are on. From the third rule we know that lights 1 and 2 cannot be on at the same time, so, by Hurdling the Uncertainty we can infer that light 3 must always be on (otherwise you could not fulfill the constraints of this rule): At this point, most students move on to the questions. But, there are six rules, and several of those rules establish general limitations on each side or section of three lights, and these rules, when combined with the fact that the status of two of the eight lights is already determined, indicate that the game cannot have a large number of solutions. The best decision, then, is to explore Identifying the Possibilities. Start first with the third rule, which states that lights 2 and 7 are off when light 1 is on. By turning light 1 on, lights 2 and 7 automatically are off, leaving lights 4, 5, and 6 undetermined. But, from our discussion of the fifth rule, when a corner light is off (as light 7 is), then the center light on that side is on. Hence, light 6 must be on. Lights 4 and 5 cannot be precisely determined, but if one is on, the other is off (if both were on, the first rule would be violated), leading to a dual-option. Combining all of the information gives us only two possibilities when light 1 is on: Template #1: Of course, light 1 could be off. In that case, light 2 must be on in order to meet the constraints of the final rule. With lights 2 and 3 on, light 4 must be off in order to conform to the first rule. With light 4 off, light 5 must be on in order to abide by the fifth rule. The only undetermined lights are 6 and 7, but both cannot be on (otherwise the first rule would be violated) and both cannot be off (otherwise the fifth rule would be violated). Thus, one of lights 6 and 7 is on, and the other is off, leading to two possibilities: Template #2: Thus, because all possibilities have been explored when light 1 is on and when it is off, and light 1 has no more possible positions, we have explored all possibilities of the game, and there are only four possible solutions, as captured by the two templates above. • Posts: 1 • Joined: Aug 19, 2011 - Fri Aug 19, 2011 2:31 pm #1541 My question is regarding June 1993 Game #4 (stop lights, on/off), which is listed in the Logic Games Problem Set #1. I don’t think this is supposed to be a hard game, but I am unfamiliar with the type, and not sure how to do the set-up quickly/effectively. The game comes with a diagram of the street lights, placed around a city block. Advise on how to best diagram the rules and inferences and attack the game efficiently would be greatly appreciated. Thanks so much! - Fri Aug 19, 2011 4:19 pm #1542 Hi Hannah, Thanks for the question. I think you actually mean Game #2 from that test, which features 8 lights around a square parking lot, and each light is either on or off. Basically, this is a an unusual game that has linear elements controlling the action. In fact, the rules are so restrictive that there are only a limited number of solutions to game, and two major templates that control the action. The rule about at least one of three consecutive lights being off, and the second to last rule interact in a very powerful way. The way to diagram this game is to use their diagram as the template, and just put O for on and not-O (O with a slash through it) for off. Thus, for the second rule, just put an O next to light 8. There is a key inference in the game. The final rule states that two lights on the north side are on. From the third rule we know that lights 1 and 2 cannot be on at the same time, so, by Hurdling the Uncertainty we can infer that light 3 must always be on (otherwise you could not fulfill the constraints of this rule). This inference answers question #7 and helps to answer several others. Without analyzing each rule, let's discuss the restrictions in the game. There are six rules, and several of those rules establish general limitations on each side or section of three lights, and these rules, when combined with the fact that the status of two of the eight lights is already determined, indicate that the game cannot have a large number of solutions. The best decision, then, is to explore Identifying the Possibilities. Template #1 Start first with the third rule, which states that lights 2 and 7 are off when light 1 is on. By turning light 1 on, lights 2 and 7 automatically are off, leaving lights 4, 5, and 6 undetermined. But, when a corner light is off (as light 7 is), then the center light on that side is on. Hence, light 6 must be on. Lights 4 and 5 cannot be precisely determined, but if one is on, the other is off (if both were on, the first rule would be violated), leading to a dual-option. Template #2 Of course, light 1 could be off. In that case, light 2 must be on in order to meet the constraints of the final rule. With lights 2 and 3 on, light 4 must be off in order to conform to the first rule. With light 4 off, light 5 must be on in order to abide by the fifth rule. The only undetermined lights are 6 and 7, but both cannot be on (otherwise the first rule would be violated) and both cannot be off (otherwise the fifth rule would be violated). Thus, one of lights 6 and 7 is on, and the other is off, leading to two possibilities. Thus, amazingly, there are only four solutions in this game (two solutions in each template). That's a start, but please let me know if you have any other questions. Dave Killoran PowerScore Test Preparation Follow me on X/Twitter at http://twitter.com/DaveKilloran My LSAT Articles: http://blog.powerscore.com/lsat/author/dave-killoran PowerScore Podcast: http://www.powerscore.com/lsat/podcast/ • Posts: 135 • Joined: Jan 05, 2014 - Sun Jan 05, 2014 9:51 pm #13900 The explanation above is very helpful. I do have a follow up question to your explanation, however. In your above explanation you state that if a corner light is off, then the center light on that given side must be on. I am a bit confused as to why the center light would have to be on based on the rules given in the problem. Could you please explain? • Posts: 1 • Joined: May 30, 2015 - Mon Jun 01, 2015 7:54 pm #18836 I tried for so long to set up this game but could not figure it out. This game is found under the Logic Games Problem Set 1 under the Supplemental Problem Sets. I am most confused by the first rule which states, "At least one of any three consecutively numbered lights is off." If I could get some help with this setup, that would be awesome. • Posts: 17 • Joined: Dec 04, 2015 - Tue Dec 22, 2015 2:49 pm #21364 do we expect to see such game in future LSATs? I think it's kinda different than what I have been practising recently so I am wondering whether I should spend some time on it or not Nikki Siclunov • PowerScore Staff • Posts: 1362 • Joined: Aug 02, 2011 - Tue Dec 22, 2015 5:34 pm #21366 Hi Marc, Whether a game like this is likely to be given on a future LSAT is anyone's guess, and predicting what the LSAC will do is an exercise in futility. That said, in the last few years there has been an unusually high occurrence of long-forgotten game types (e.g. Pattern) or else of games that just seem crazy or unusual (similar to the one you asked about). Do I expect to see a similar game on a future test? No. What I do expect to see is games that attempt to surprise and confuse you. Rest assured, however, that if you know the fundamental principles of approach - you know how to identify limited solution sets, for instance, or how look for points of restriction, make hypotheticals and/or local diagrams, etc. - you shouldn't have much trouble with such games. But, you should definitely expect... the unexpected. Sorry if this is not what you wanted to hear Nikki Siclunov PowerScore Test Preparation - Sat Sep 17, 2016 9:30 pm #28720 Can someone please explain how the fifth rule of this game functions? I do not understand why the center light must be on if a corner light is off? It is not stated anywhere in the rules, and I do not know how that inference is derived based on what is given. - Sun Sep 18, 2016 10:14 am #28727 Hi Jlam, Thanks for the question! The fifth rule is a difficult one, and as you might expect it has a big impact on the game. So, let's take a closer look at this one. The fifth rule states, "If any side has exactly one of its three lights on, then that light is its center light." The first word is "if," and thus this is a conditional rule. The initial diagram appears as: 1 light on on The contrapositive of this rule is: Center on Side 1 light on In isolation, the rule and its contrapositive look relatively straightforward and the tendency here is to simply make the diagram and then move onward. But, whenever you have conditional rules in play and variable sets with just two or three options, always consider each condition when you take the contrapositive and attempt to see if the meaning of one or both isn't impacted by the limited number of options. Side note: that consideration doesn't have to be written out (and usually isn't unless there is a viable inference); just make a quick mental calculation and if nothing is there, proceed, and if there is something then explore it further. In this case, there is something worth looking at more deeply. Each light has only two options: on and off. So, when the contrapositive says that the center light is not on, that is identical to saying it is off: Center on = Center off That's fairly easy to understand, but for most people it's nice to take the negative out of the condition so the translation is useful just on that front. We can perform a similar translation on the necessary condition. The necessary condition in the contrapositive indicates that a side does not have just one light on. So, because there are three lights on a side and each side must have at least one light on (from the fourth rule), that means that the side must have two or three lights on. However, the first rule effectively states that you can't have all three lights on on a side, and so this condition translates to exactly two lights on a side must be on: Side 1 light on = Side 2 lights on Ok, that looks good so far, but wait, there's more! If there are exactly two lights on a side that are on, and we know that the center light is off, then we can infer that the two lights that are on are the corner lights, leading to this translation: Side 1 light on = Side 2 lights on = Both corners on that side are on Adding the above together, this allows us to show an equivalence diagram for the contrapositive above: If you reached this point of analysis during the game, congrats! It shows that you recognized the limitations in the variable sets, and used those restrictions to reveal deeper truths about the game. However, we can take one more step here, and it's quite a useful one. When we look at the contrapositive of the fifth rule, and then how we re-formulated that rule by looking at the impact each condition had on the related variable sets, the contrapositive and its translation look rather different. They have the same meaning, but they express different aspects of the relationship. Because of this, consider the contrapositive of our translated contrapositive. Our translated contrapositive was: To take the contrapositive of this, we reverse and negate the terms as usual. The first condition (Center off) is easy and becomes Center on. The other condition, "Both corners on that side are on" becomes "Both corners on that side are on" which really means "at least one corner is off," and thus the contrapositive is: So, operationally, there are two big takeaways from this rule: • 1. If the center light is off, then both corners are on. 2. If one of the corners is off, then the center light is on. Overall, it's a really tricky rule, but a great example of how limitations in the number of options (lights = on/off, and only three lights per side) can create a chain of enlightening inferences. Study the above translations because it is the kind of thing you see on tough LSAT games but the process itself is not that tough once you start doing it. The hard part is really just to remember or know that you should begin applying this process of analysis. The tipoff? Limitation in the number of options. Please let me know if that helps. Thanks! Dave Killoran PowerScore Test Preparation Follow me on X/Twitter at http://twitter.com/DaveKilloran My LSAT Articles: http://blog.powerscore.com/lsat/author/dave-killoran PowerScore Podcast: http://www.powerscore.com/lsat/podcast/ - Thu Sep 22, 2016 2:40 pm #28877 Thanks for answering my question! Explaining the rule in terms of contrapositives helps a lot!
{"url":"https://forum.powerscore.com/viewtopic.php?f=416&t=885&sid=e2c54ed1b3a4858d8487e558f0c96a1a","timestamp":"2024-11-07T00:42:57Z","content_type":"text/html","content_length":"487699","record_id":"<urn:uuid:3edba43c-2197-4927-b3c5-6ed9793245c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00449.warc.gz"}
Trick-Or-Treating With Rel · RelationalAI Trick-Or-Treating With Rel What's better than a bag full of candy for Halloween? Here at RelationalAI, we're passing out knowledge graphs to trick-or-treaters this year. Come and get spooked with us as we solve a Halloween logic puzzle in Rel, our relational modeling language. You'll see how to model a problem, store facts, and infer new knowledge from those facts. So grab your flashlight, put on your favorite costume, and let's go trick-or-treating with Rel! The Setup Four children --- two girls named Judy and Jessica, and two boys named Frank and Toby --- are going trick-or-treating. The children are all different ages, ranging from seven years old to 10 years old. Each child is accompanied by an adult: either their brother, sister, aunt, or uncle. They wear one of four costumes: a ghost, goblin, vampire, or werewolf. And each child carries a different color flashlight: red, blue, green, or orange. We'll be given ten clues that we'll have to use to figure out how old each child is, what costume they wear, which flashlight they carry, and which adult accompanies them. But first, let's insert the base data we're working with into our database: // write query module trick_or_treat_data def adult = "Brother"; "Sister"; "Aunt"; "Uncle" def age = 7; 8; 9; 10 def child = "Jessica"; "Judy"; "Frank"; "Toby" def costume = "Ghost"; "Goblin"; "Vampire"; "Werewolf" def flashlight = "Red"; "Blue"; "Green"; "Orange" def insert:data = trick_or_treat_data The trick_or_treat module has five member relations: adult, age, child, costume, and flashlight. Each relation is a set containing four values that are separated by semicolons. The adult, child, costume, and flashlight relations all contain strings, and the age relation contains integers. The insert relation inserts the data from the trick_or_treat module into the database as a base relation named data. With the data inserted we can start to model the problem. Let's create some entity and value types to represent the various concepts in the model: // model entity type Adult = String entity type Child = String entity type Costume = String entity type Flashlight = String value type Age = Int value type Color = String value type Name = String You can use entity and value types to create instances of each type. For example, ^Child["Judy"] creates a Child instance representing the child named Judy. An entity instance is just a hash --- a unique ID that can be used to identify a specific entity. Next, let's create entity relations that store instances of each entity in our model: // model def Adult = ^Adult[name] from name in data:adult def Child = ^Child[name] from name in data:child def Costume = ^Costume[name] from name in data:costume def Flashlight = ^Flashlight[color] from color in data:flashlight We'll also create a relation to store the four ages of each child: // model def Ages = ^Age[age] from age in data:age Age is a property of a child. Normally, properties are assigned to entities using property edges --- relations containing tuples that pair an entity hash with a value type instance. In this case, however, defining Ages in a way that mimics an entity relation will help simplify our reasoning later on. Speaking of property edges, let's go ahead and create some to assign properties to each of our entities: // model def has_name = (^Adult[name], ^Name[name]) from name in data:adult def has_name = (^Child[name], ^Name[name]) from name in data:child def has_name = (^Costume[name], ^Name[name]) from name in data:costume def has_color = (^Flashlight[color], ^Color[color]) from color in data:flashlight Finally, let's define operations on Age instances so that we can compare two Age instances and do some arithmetic with them: // model // 1 def age_int = transpose[^Age] // 2 def minimum[x in Age, y in Age] = ^Age[minimum[age_int[x], age_int[y]]] def maximum[x in Age, y in Age] = ^Age[maximum[age_int[x], age_int[y]]] // 3 def (<)[x in Age, y in Age] = age_int[x] < age_int[y] def (>)[x in Age, y in Age] = age_int[x] > age_int[y] def (-)[x in Age, y in Int] = ^Age[age_int[x] - y] def (+)[x in Age, y in Int] = ^Age[age_int[x] + y] Here's a breakdown of what each numbered group of code does: 1. The ^Age relation maps integers to instances of the Age value type. So, ^Age[9] returns the Age instance (:Age, 9). The age_int relation inverts this map so that age_int[^Age[9]] returns the integer 9. 2. The built-in minimum and maximum relations can return the minimum and maximum of two integers, but don't know how to work with Age instances. These lines define the right behavior, so that the minimum of two Age instances is the one with the smaller corresponding integer value, and the maximum of two Age instances in the one with the larger integer value. 3. These lines define the < and > operators so that they can be used to compare two Age instances. The + and - operators are defined between Age instances and integers so that, for example, ^Age[7] + 1 returns ^Age[8]. Our setup is done. It's time to take a look at the clues. The Clues There are ten clues. Each clue expresses one or more fact about the childrens' relationships with adults, costumes, ages, and flashlights. We'll create two relations to store two kinds of facts: 1. The has relation holds facts of the form "X has Y." 2. The not_has relation holds facts of the form "X does not have Y." For example, if the nine-year-old is wearing the ghost costume, then the has relation contains the tuple (^Age[9], ^Costume["Ghost"]). If the child accompanied by their sister does not carry the blue flashlight, then (^Adult["Sister"], ^Flashlight["Blue"]) is contained in has_not. Clue One Of the four children, there was the eight-year old, the child who dressed as a werewolf, the child who was accompanied by their sister, and the one who carried the red flashlight. This clue tells us that the eight-year-old does not wear the werewolf costume, was not accompanied by their sister, and did not carry the red flashlight. Similarly, the child wearing the werewolf costume is not eight years old, was not accompanied by their sister, and did not carry the red flashlight. And so on and so forth. We can express this concisely in Rel: // model def clue1 = ^Age[8]; ^Costume["Werewolf"]; ^Adult["Sister"]; ^Flashlight["Red"] def not_has(x in clue1, y in clue1) = x != y This adds all pairs (x, y) of distinct elements of the clue1 set to the not_has relation. Clue Two Between the nine-year-old and Frank, who is older than Toby, one was dressed as a goblin and the other carried the green flashlight. First of all, clue two tells us that Frank is not nine years old. // model def not_has = (^Child["Frank"], ^Age[9]) You might look at the above line of code and think that that we've just overwritten the not_has relation. That is not the case. Relations in Rel are defined additively, so that the preceding definition adds the tuple (^Child["Frank"], ^Age[9]) to the existing not_has relation. Clue two also tells us that Frank is older than Toby. In other words, Toby can't have any age that is greater than or equal to Frank's age. def not_has(child, age in Ages) { has(^Child["Frank"], age_frank) and age >= age_frank and child = ^Child["Toby"] from age_frank in Ages Lastly, we can write "between the nine-year-old and Frank, one was dressed as a goblin and the other carried the green flashlight" as an if-then-else clause: // model def has { if has(^Age[9], ^Costume["Goblin"]) then (^Child["Frank"], ^Flashlight["Green"]) else if has(^Age[9], ^Flashlight["Green"]) then (^Child["Frank"], ^Costume["Goblin"]) else {} // No tuple is added if neither if condition is true Clue Three Neither Frank nor Judy were accompanied by their sister while trick or treating. Judy did not carry an orange flashlight. Clue three tells us three facts that go in the not_has relation: // model def not_has { (^Child["Frank"], ^Adult["Sister"]); (^Child["Judy"], ^Adult["Sister"]); (^Child["Judy"], ^Flashlight["Orange"]) Clue Four Toby's mother was upset that he cut up one of her favorite sheets to create his ghost costume. Okay, so Toby wore the ghost costume: // model def has = (^Child["Toby"], ^Costume["Ghost"]) Clue Five The child who dressed as a werewolf was accompanied by a male and did not carry a green flashlight. The child dressed as a werewolf must be accompanied by either their brother or their uncle. So, if we know that a child wearing some costume other than the werewolf costume is accompanied by their brother, we know the child dressed as a werewolf must be accompanied by their uncle, and vice // model def has { if has(costume, ^Adult["Brother"]), costume != ^Costume["Werewolf"] then (^Costume["Werewolf"], ^Adult["Uncle"]) else if has(costume, ^Adult["Uncle"]), costume != ^Costume["Werewolf"] then (^Costume["Werewolf"], ^Adult["Brother"]) else {} from costume in Costume We also know that the child wearing the werewolf costume is not accompanied by their sister or their aunt, and doesn't carry the green flashlight: // model def not_has { (^Costume["Werewolf"], ^Adult["Aunt"]); (^Costume["Werewolf"], ^Adult["Sister"]); (^Costume["Werewolf"], ^Flashlight["Green"]) Clue Six The nine-year-old was quite happy with her goblin costume, which she spent an entire week designing. First, we know that the nine-year-old wore the gobline costume: // model def has = (^Age[9], ^Costume["Goblin"]) But we also know, since the nine-year-old is referred to as "her," that neither Frank nor Toby are nine years old: // model def not_has = (^Child["Frank"], ^Age[9]); (^Child["Toby"], ^Age[9]) Clue Seven Either Frank or Toby carried a green flashlight. We can encode clue seven as an if-then-else clause. If Frank doesn't carry the green flashlight, then we know that Toby does, and vice versa: // model def has { if not_has(^Child["Frank"], ^Flashlight["Green"]) then (^Child["Toby"], ^Flashlight["Green"]) else if not_has(^Child["Toby"], ^Flashlight["Green"]) then (^Child["Frank"], ^Flashlight["Green"]) else {} Clue Eight The child who was accompanied by their brother was exactly two years older than the child who went with their sister. If we know the age of the child accompanied by their sister, then we know the age of the child acompanied by their brother, and vice versa: // model def has { if has(age, ^Adult["Sister"]) then (age + 2, ^Adult["Brother"]) else {} from age in Ages def has { if has(age, ^Adult["Brother"]) then (age - 2, ^Adult["Sister"]) else {} from age in Ages We also know that the child who went with their brother must be at least two years older than the youngest child. In other words, their age can't be less than the minimum age plus two. Similarly, the child who went with their sister can't be older than the maximum age minus two: // model def not_has = (age, ^Adult["Brother"]), age < min[Ages] + 2 from age in Ages def not_has = (age, ^Adult["Sister"]), age > max[Ages] - 2 from age in Ages Clue Nine Jessica caught her uncle sneaking candy from her bag while they walked! All right, then. Jessica was accompanied by her uncle: // model def has = (^Child["Jessica"], ^Adult["Uncle"]) Clue Ten The child dressed as a ghost carried a blue flashlight. Our last clue is another easy one. Let's add (^Costume["Ghost"], ^Flashlight["Blue"]) to the has relation: // model def has = (^Costume["Ghost"], ^Flashlight["Blue"]) Visualizing What We Know So Far Now that we've encoded all of the clues, let's take a moment to see what we know so far. We can do this by wrapping our has and not_has relations into two knowledge graphs. The nodes of each graph are the Child, Ages, Costume, Adult, and Flashlight entity relations, and the edges of each graph are the has and not_has relations, respectively: // model def show(e, string) { (Child(e) or Adult(e) or Costume(e)) and has_name(e, ^Name[string]) or Flashlight(e) and has_color(e, ^Color[string]) or Age(e) and string = "%(age_int[e])" def Things = Adult; Ages; Child; Costume; Flashlight module HasKG def node = show[x] from x in Things def edge = (show[x], show[y]), has(x, y) from x, y module NotHasKG def node = show[x] from x in Things def edge = (show[x], show[y]), not_has(x, y) from x, y The show relation maps entity hashes and value type instances to string representations. Children, adults, and costumes are mapped to their name. Flashlights are mapped to their color. Ages are mapped to a string containing their integer value. The Things relation is the union of the Adult, Ages, Child, Costume, and Flashlight relations, expressed in Rel using the semicolon. The nodes in each knowledge graph are the string representations of each thing in Things. In the HasKG, each edge is pair of strings representing pairs from the has relation. The HasNotKG pairs representations of pairs from the not_has relation. We can visualize these knowledge graphs using the graphviz library, which is included with Rel: // read query def output = graphviz[HasKG] Here's what the HasKG graph looks like: We don't know much about who has what. But we can tell, for example, that since Toby has the ghost costume and the ghost cosutme has the blue flashlight, then Toby has the blue flashlight. In general, the has relation should be transitive. That is, if (x, y) is in has and (y, z) is in has, then (x, z) should also be in has. Right now, the has relation, as we've defined it, is not We'll fix that in a moment, but first let's take a look at the NotHasKG graph: // read query def output = graphviz[NotHasKG] Here's what it looks like: We know a lot more about what things don't have. Some of the edges are symmetric --- that is, for some values of x and y, both (x, y) and (y, x) are in not_has. In fact, the entire not_has relation should be symmetric since, if x does not have y, the y also does not have x. The has relation should be symmetric as well. Let's build on what we've modeled so far and solve the puzzle. Inferring the Solution to the Puzzle First, let's make sure has is symmetric and transitive: // model def has = transpose[has] // make sure has is symmetric def has = has.has // make sure has is transitive The transpose relation swaps the order of the pairs in has. So if (x, y) is in has, then (y, x) is in transpose[has]. The second line uses the dot join operator to calculate the transitive closure of has. // read query def output = graphviz[HasKG] Here's what the HasKG looks like after installing the preceding Rel code: The not_has relation is also symmetric, but it isn't transitive since if x does not have y and y does not have z, we can't conclude that x does not have z. However, there is a transitive dependence on the has relation. In other words, if x has y and y does not have z, then x does not have z. Similarly, if x does not have y, and y has z, then x does not have z. Let's encode that in Rel: // model def not_has = transpose[not_has] def not_has(x, y) = has.not_has(x, y), x != y def not_has(x, y) = not_has.has(x, y), x != y We have to make sure that x != y. If (x, x) were in not_has, it would mean that x does not have itself. It would be like saying that the child dressed as a ghost was not dressed as a ghost. // read query def output = graphviz[NotHasKG] Here's what the NotHasKG looks like after updating not_has: We're starting to get a bunch of new facts! Next, we can infer more edges that should be in not_has from what we know is in the has relation. If x has y, then x does not have z for every z != y of the same type as y. In other words, if Toby wears the ghost costume, the we know Toby doesn't wear the goblin, vampire, and werewolf costumes. Here's how to model that in Rel: // model def infer_not_has[Type](x, z) { has(x, y) and Type(z) and Type(y) and z != y from y def not_has = infer_not_has[Adult] def not_has = infer_not_has[Ages] def not_has = infer_not_has[Child] def not_has = infer_not_has[Costume] def not_has = infer_not_has[Flashlight] The infer_not_has relation is a higher-order relation. It takes another relation as input, represented by the Type parameter. Higher-order relations are marked with the @inline annotation. We can also infer edges that should be in the has relation based off what we know is in the not_has relation. If x does not have three things of the same type, then it must have whatever the fourth thing of that type is. Here's what that looks like in Rel: // model // Helper relation that selects not_has pairs based on the // type of the second element def not_has_type[Type](x, y) = not_has(x, y) and Type(y) def infer_has[Type](x, y) { count[not_has_type[Type, x]] = 3 and y = diff[Type, not_has_type[Type, x]] and x != y def has = infer_has[Adult] def has = infer_has[Ages] def has = infer_has[Child] def has = infer_has[Costume] def has = infer_has[Flashlight] First, we create a not_has_type relation that selects pairs from not_has that have the same type in the second element of each pair. The infer_has relation counts the number of things of some Type that are related to x, and returns the tuple (x, y) where y is the remaining Type instance that x does not have. These two relations, infer_not_has and infer_has are enough for Rel to compute the entire solution. You can see that by visualizing the HasKG knowledge graph again: // read query def output = graphviz[HasKG] It now looks like this: The HasKG graph has four cliques --- i.e., subgraphs containing every possible edge. Each clique represents the complete solution for each child. We can also display this solution as a table: // read query def show_solution(Type, name, solution) { has(child, e) and Child(child) and Type(e) and name = show[child] and solution = show[e] from child, e def solution:costume = show_solution[Costume] def solution:flashlight = show_solution[Flashlight] def solution:age = show_solution[Age] def solution:accompanied_by = show_solution[Adult] def output = table[solution] Here's what that looks like: This little logic puzzle shows off a number of interesting features of Rel and RelationalAI's Relational Knowledge Graph System. You saw how to: • Model entites and entity properties using entity and value types. • Define custom operators that work on value types. • Model facts as tuples in relations. • Create and visualize knowledge graphs. • Infer new facts by computing symmetric and transitive closures and encoding logic. That's a pretty tasty treat, if you ask me!
{"url":"https://relational.ai/resources/trick-or-treating-with-rel","timestamp":"2024-11-09T17:08:32Z","content_type":"text/html","content_length":"215228","record_id":"<urn:uuid:1d4764fa-65a9-4d71-87d8-6a74114fab61>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00725.warc.gz"}
Point slope form calculator Point slope form calculator To use a point-slope form calculator, you’ll need to have the coordinates of a point on a line and the slope of the line. Here are the steps to use a point-slope form calculator: 1. Enter the coordinates of the point on the line in the form (x, y). 2. Enter the slope of the line in the form m. 3. Click the “Calculate” button or equivalent option. 4. The point-slope form of the line will be displayed in the form y – y1 = m(x – x1), where (x1, y1) are the coordinates of the given point and m is the given slope. Alternatively, you can calculate the point-slope form of a line manually using the following formula: y – y1 = m(x – x1) where (x1, y1) is the point on the line, and m is the slope of the line. Here’s an example to illustrate how to use a point-slope form calculator: Suppose you have a line with a slope of 2 passing through the point (3, 5). To find the point-slope form of this line, you would follow these steps: 1. Enter the coordinates of the point on the line: (3, 5). 2. Enter the slope of the line: 2. 3. Click the “Calculate” button or equivalent option. 4. The point-slope form of the line will be displayed: y – 5 = 2(x – 3).
{"url":"https://calculator3.com/point-slope-form-calculator/","timestamp":"2024-11-05T16:52:58Z","content_type":"text/html","content_length":"57889","record_id":"<urn:uuid:266e6ba7-d09b-43af-95df-ea2044f02b6a>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00058.warc.gz"}
Size-dependent elastic properties of nanosized structural elements Miller, Ronald E. ; Shenoy, Vijay B. (2000) Size-dependent elastic properties of nanosized structural elements Nanotechnology, 11 (3). pp. 139-147. ISSN 0957-4484 Full text not available from this repository. Official URL: http://iopscience.iop.org/article/10.1088/0957-448... Related URL: http://dx.doi.org/10.1088/0957-4484/11/3/301 Effective stiffness properties (D) of nanosized structural elements such as plates and beams differ from those predicted by standard continuum mechanics (D[c]). These differences (D-D[c])/D[c] depend on the size of the structural element. A simple model is constructed to predict this size dependence of the effective properties. The important length scale in the problem is identified to be the ratio of the surface elastic modulus to the elastic modulus of the bulk. In general, the non-dimensional difference in the elastic properties from continuum predictions (D-D[c])/D[c] is found to scale as αS/Eh, where α is a constant which depends on the geometry of the structural element considered, S is a surface elastic constant, E is a bulk elastic modulus and h a length defining the size of the structural element. Thus, the quantity S/E is identified as a material length scale for elasticity of nanosized structures. The model is compared with direct atomistic simulations of nanoscale structures using the embedded atom method for FCC Al and the Stillinger-Weber model of Si. Excellent agreement between the simulations and the model is found. Item Type: Article Source: Copyright of this article belongs toInstitute of Physics. ID Code: 105879 Deposited On: 01 Feb 2018 09:50 Last Modified: 01 Feb 2018 09:50 Repository Staff Only: item control page
{"url":"https://repository.ias.ac.in/105879/","timestamp":"2024-11-13T23:07:02Z","content_type":"application/xhtml+xml","content_length":"18338","record_id":"<urn:uuid:2196b591-c894-4520-9106-f1aabee577a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00102.warc.gz"}
Statistical Computations The SURVEYMEANS procedure uses the Taylor series (linearization) method or replication (resampling) methods to estimate sampling errors of estimators based on complex sample designs. For more information, see Fuller (2009); Wolter (2007); Lohr (2010); Kalton (1983); Hidiroglou, Fuller, and Hickman (1980); Fuller et al. (1989); Lee, Forthoffer, and Lorimor (1989); Cochran (1977); Kish ( 1965); Hansen, Hurwitz, and Madow (1953); Rust (1985); Dippo, Fay, and Morganstein (1984); Rao and Shao (1999); Rao, Wu, and Yue (1992); Rao and Shao (1996). You can use the VARMETHOD= option to specify a variance estimation method to use. By default, the Taylor series method is used. The Taylor series method obtains a linear approximation for the estimator and then uses the variance estimate for this approximation to estimate the variance of the estimate itself (Woodruff, 1971; Fuller, 1975). When there are clusters, or PSUs, in the sample design, the procedure estimates variance from the variation among PSUs. When the design is stratified, the procedure pools stratum variance estimates to compute the overall variance estimate. For t tests of the estimates, the degrees of freedom equal the number of clusters minus the number of strata in the sample design. For a multistage sample design, the Taylor series estimation depends only on the first stage of the sample design. Therefore, the required input includes only first-stage cluster (PSU) and first-stage stratum identification. You do not need to input design information about any additional stages of sampling. This variance estimation method assumes that the first-stage sampling fraction is small, or that the first-stage sample is drawn with replacement, as it often is in practice. Quite often in complex surveys, respondents have unequal weights, which reflect unequal selection probabilities and adjustments for nonresponse. In such surveys, the appropriate sampling weights must be used to obtain valid estimates for the study population. However, replication methods have recently gained popularity for estimating variances in complex survey data analysis. One reason for this popularity is the relative simplicity of replication-based estimates, especially for nonlinear estimators; another is that modern computational capacity has made replication methods feasible for practical survey analysis. Replication methods draw multiple replicates (also called subsamples) from a full sample according to a specific resampling scheme. The most commonly used resampling schemes are the balanced repeated replication (BRR) method and the jackknife method. For each replicate, the original weights are modified for the PSUs in the replicates to create replicate weights. The population parameters of interest are estimated by using the replicate weights for each replicate. Then the variances of parameters of interest are estimated by the variability among the estimates derived from these replicates. You can use a REPWEIGHTS statement to provide your own replicate weights for variance estimation. For more information about using replication methods to analyze sample survey data, see the section Replication Methods for Variance Estimation.
{"url":"http://support.sas.com/documentation/cdl/en/statug/65328/HTML/default/statug_surveymeans_details06.htm","timestamp":"2024-11-06T10:54:28Z","content_type":"application/xhtml+xml","content_length":"19790","record_id":"<urn:uuid:94a85e96-dcff-4443-9bc2-ea1085c1a195>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00045.warc.gz"}
Transit Time Transit time to Earth passage from the Sun is now the key element to find. This new factor to be solved will determine the base date of the table for Sun passage of the 12th planet. To find the position of Earth along its orbital track, I will use another given factor. The 12th planet passes to the left of the Sun about half way between the orbits of Mercury and Venus or approximately 50 million miles. Using trigonometry, we will set up a right triangle at the location where the 12th planet passes the Sun. A line is projected from the 12th planet's location to an intercept point on the Earth's orbital path or the projected passage point, its length unknown. This line traces the 12th planet's orbital path. We will designate this, the first leg of the right triangle formed at the location of the 12th planet. A second line is projected perpendicular to the first leg starting from the same location of the 12th planet to the Sun and ends there. This second leg of the right triangle from the 12th planet to the Sun, is 50 million miles. The hypotenuse leg of the right triangle, connecting the Sun and the Earth orbital intercept point is 92.95 million miles in length. It is the standard distance from the Sun to the Earth. Using the inverse cosine function, on the result of the adjacent leg over the hypotenuse the angle, we can find the angle between the 12th and the Earth at the Sun. This yields an answer equivalent to 57.45 degrees. The equation proceeds as follows: cosine -1 [adjacent leg / hypotenuse] = angle cosine -1 [ 50/92.950] = angle cosine -1 [ .5379236] = angle 57.45 degrees = angle Lets solve for the opposite leg or the first leg of the right triangle for angle between the 12th planet and the Earth intercept point at location of the Sun. The distance between the 12th planet and the Earth orbit intercept point using the sine function of the angle equals the opposite leg over the hypotenuse, yielding 78.349 million miles. The equation proceeds as follows: opposite leg / hypotenuse = sine[angle] opposite leg = sine[angle] * hypotenuse opposite leg = sine[57.45] * 92.95 opposite leg = .84292 * 92.95 opposite leg or travel distance to Earth for the 12th planet is 78.349 million miles. All the necessary variables have now been found to solve for Time (t), the time it takes the 12th planet to move between Sun passage and Earth passage. Solving for Time (t), the time it takes the 12th planet to move between Sun passage and Earth passage, Distance(final) is equal to {3.56 units - 0.0213333 S-P units [78.4 million miles]) or 3.5386667 units, Distance(initial) is equal to 3.56 S-P units, and k equals -0.005761. The equation yields a time (t) of 1.0433 weeks after passing the Sun. The equation proceeds as follows: Distance(final) = Distance(initial) * e ^ [(k) * (t)] LN [Distance(final) / Distance(initial)] = (k) * (t) LN [Distance(final) / Distance(initial)] / k = t LN [3.5386667 / 3.56] / -0.005761 = t LN [0.9940075] / -0.005761 = t -0.006010527 / -0.005761 = t 1.0433 weeks = t the time interval from Sun passage to Earth passage. So if the 12th planet Earth passage date is May 15, 2003. Then the base date or Sun passage would be May 7, 2003 or 1.0433 weeks prior for the distance table. You can adjust your dates accordingly if a more accurate Earth passage date is predicted. In conclusion two significant issues should become clearer in your mind. First, visibility of the 12th planet will come too little and too late. Second, the storms prior to Earth passage are going to be far worse and more powerful than the present picture painted in our minds. One only has to look at the weather changes already initiated from an out of phase magnetic alignment produced by movement of the 12th planet on only 3 tenths of 1% from the mid point to our Sun by January of the year 2000. All primary data and some concepts used in the previous text can be referenced in the science section of Zeta Talk under Comet Orbit, Entry Angle and Repulsion Force. The chart illustrating the orbital passage of the 12th planet through the solar system is located in this TOPIC on the Sagittarius page. I would like to give thanks for the guidance given in my life from up above. I will always appreciate the warmth and light of the Sun. The breeze that blows on my face. What we take so easily for granted, some worlds may only experience only a few precious moments of it. Offered by Robert.
{"url":"http://zetatalk6.com/theword/tword03f.htm","timestamp":"2024-11-09T16:10:22Z","content_type":"text/html","content_length":"6434","record_id":"<urn:uuid:cdfce8db-ca4e-4d9b-8943-3b8288da68ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00231.warc.gz"}
How often should you buy a new stock? Hey all. Apologies for my inactivity on Felicifia of late. I've been working on other things, but I hope to return in a few months. However, I thought I would post this in the meantime while it's on my mind. Depending on your income, expected rate of return, and purchase transactions fees, it makes sense to buy a new stock about every half month or every month, assuming you manually buy stocks as your vehicle for continuous investment. Say you earn $800 post-tax per week, and you want to invest those earnings by buying some random stock . Suppose you think the stock will earn 7% per year in expectation, and for convenience, let's use simple interest . When you buy a stock, there's a fixed transactions fee of $8 per purchase. How many weeks should you wait before you buy a stock in order to optimally trade off the cost of missed investment income against the transaction cost of buying a new stock? To generalize, let's say your wage is w = $800, and that you earn n = 52 wages in a year. You expect an annual return of r = 0.07 with simple interest. When buying stocks, the transaction fee is c = Cost formula Every wage period when you don't use w dollars to buy stock, you lose out on expected investment income of wr/n, since r/n is the expected interest rate for one wage period. If you wait two wage periods to buy a stock, w dollars lie idle for one week, and w dollars are used immediately. The only lost income is from the first wage period in the amount of wr/n. If you wait three wage periods, w dollars lie idle for two weeks, w dollars lie idle for one week, and w dollars are used immediately. The lost income is then (wr/n)(2) + (wr/n)(1). If you wait k wage periods, the lost income is (wr/n)(k-1) + (wr/n)(k-2) + ... (wr/n)(1) = (wr/n)[k(k-1)/2] using the formula for triangular numbers Say you follow this pattern of waiting k wage periods for a long time T, where T is sufficiently big that we can assume it's a multiple of k and ignore the irregularities at the end of the time period. You would then buy stocks T/k times. The purchase cost of the stocks would be c(T/k). The lost income would be (wr/n)[k(k-1)/2](T/k). The total cost is c(T/k) + (wr/n)[k(k-1)/2](T/k), which is what we want to minimize. Minimizing the cost T factors out of the cost, so it suffices to minimize F(k) := c/k + (wr/n)[k(k-1)/2](1/k) = c/k + (wr)(k-1)/(2n). While k can only take on discrete values, we can pretend for the moment that this function is continuous so that we can differentiate and set equal to 0: dF/dk = -c/k^2 + wr/(2n) = 0 wr/(2n) = c/k^2 k^2 = 2cn/(wr) k = sqrt(2cn/(wr)). Note that the second derivative with respect to k is d^2F/dk^2 = 2c/k^3 > 0, so this is indeed a minimum point. Plugging in values Using the figures in the original problem, the continuous solution is k = sqrt(2 * 8 * 52 / (800 * 0.07)) = 3.85. Using Excel, I found the actual values of the cost at each k from 1 to 5: F(1) = 8 F(2) = 4.5 F(3) = 3.7 F(4) = 3.6 F(5) = 3.8, so k=4 is indeed the minimum. In other words, using these figures, you should buy a new stock about once a month. Alternate values Suppose instead you're paid twice a month (n = 24), with w = $2000 per pay period and an assumed rate of return of r = 0.10. Then the optimal continuous value of k = 1.4, which means it's better to buy a stock every pay period than to hold on for a second pay period. Related problems These situations of accumulating benefits for an action that has fixed cost crop up a lot. For example: How often should you cut your hair? I like to have short hair because it reduces shower time and saves on hot-water bills, but getting a haircut also costs money and time in itself. Every day that you have hair longer than it needs to be, you lose a small amount of money, say L(d), when it's been d days since your last hair cut. L(0) = 0, and then L(d) grows as d grows. The accumulated cost after waiting k days will be sum_i=1^k L(i). The minimization will be the same as before, with the [k(k-1)/2] term replaced by this summation. Re: How often should you buy a new stock? Thanks for this information, but have you ever thought about the possibility that being productive to earn this money in the first place could do more harm than good, even including the good your donations could ever make? If the spectre of space colonization or hyperefficient consciousness is realistic, then its cause will be human technological progress, and by adding expertise and economic productivity to this process, you are speeding it up. And if the suffering surplus (bad is stronger than good, minorities torture helpless majorities) remains a reality in this future, slowing this process down deliberately could actually be the expected-utility-maximizing choice. Your assumption that you can donate so much money for meme spreading and people-convincing that it outweighs this harm requires that people can actually be convinced to become more ethical people, to such a degree. I'm not sure how realistic that is, given that real people only rarely actually change their minds based on moralizing or appeals to empathy, and certainly not in the long run. Remember, there will be a point when those in power can simply switch off and on their empathy as it suits their strategic goals. Already, civilization contains crude mechanisms to counter the effects of empathy in specific functional ways (propaganda, shielding from authentic emotional stimulus input, creating fake input etc.) Maybe the most ethical choice would be to defect from the process of productivity and earn less, especially in areas of sought-after tech expertise, in order to slow down this species of torturous apes in their pursuit to create more consciousness in the universe. "The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient." - Dr. Alfred Velpeau (1839), French surgeon Re: How often should you buy a new stock? Thanks, HedonicTreader! I must say, I really don't share your intuition in this case. First, there's the fact of replaceability in careers, which means the impact I make is (roughly speaking) only the amount by which I'm more productive than the next person would have been (plus some other side-effect considerations). Second, even if human survival is net bad, it's not clear to me whether accelerating technology matters. The only thing that matters beyond rounding error is whether humans survive in the long run, not how quickly they develop technology. It's not obvious whether accelerating technological development increases or reduces extinction risk. Maybe a faster pace of innovation means less time for safety constraints? Third, even if the non-replaced impact of my work were net bad in this way, I find it hard to imagine that this wouldn't be outweighed by the effect of donations. Intellectual arguments do make a difference (after all, what convinced you to think the way you do?), and even if you think they don't, there are lots of other things one could fund that would make a difference by your criteria. For example, how about changing demographics such that more people with negative-leaning-utilitarian inclinations are born? Or more directly opposing economic growth in various ways? (The latter isn't something I necessarily support; I'm just speaking hypothetically.) There would seem to be lots of options where funding could make a targeted impact compared with the blunt side-effects of having made that money. And of course, funding research on these questions could be extremely valuable too. Re: How often should you buy a new stock? Hi Brian, thanks for the reply, and feel free to delay further replies if you have more important priorities. I won't take it personally. Brian Tomasik wrote:First, there's the fact of replaceability in careers, which means the impact I make is (roughly speaking) only the amount by which I'm more productive than the next person would have been (plus some other side-effect considerations). Yes, but I actually think the total number of people qualified to do a certain high-level task is limited - and the number of people who could become qualified through education cheaply is also limited. The more qualified you are and the more sophisticated your job is, the less likely it is that dropping out would be compensated by the replacability effect. If a qualified software engineer decides to drop out of the labor force completely, or even commit victimless crimes and bounce in and out of prison, spend time on welfare etc., this would remove significant amounts of money from the economy. I don't think you can create high-quality people out of thin air, and the replaceability effect is more relevant in deciding whether a particular job is ethical, rather than whether removing or adding a high-quality employee to/from the economy is ethical (no?). Of course, there's an element of self-sacrifice involved in this decision, as it affects the personal life Second, even if human survival is net bad, it's not clear to me whether accelerating technology matters. The only thing that matters beyond rounding error is whether humans survive in the long run, not how quickly they develop technology. I do think astronomical waste does matter, unless you think the probabilites of space colonization are very small to begin with. If the probabilities are non-trivial, even one day of astronomical waste does matter a lot. It's not obvious whether accelerating technological development increases or reduces extinction risk. Maybe a faster pace of innovation means less time for safety constraints? Exactly - it's not obvious. Maybe a slower pace of innovation increases existential risk because crucial resources can run out before we innovate into substituting it efficiently. Maybe an extinction event hits in the additional time it takes to innovate. Or maybe you're right and slower innovation means more safety constraints. But the point is this: If we have no reason to assume one way or another, we should be agnostic about the effect of innovation speed/economic growth on existential risk. In which case it seems the astronomical waste would dominate expected utility, unless the probabilities are vanishingly small to begin with. Third, even if the non-replaced impact of my work were net bad in this way, I find it hard to imagine that this wouldn't be outweighed by the effect of donations. Intellectual arguments do make a difference (after all, what convinced you to think the way you do?), and even if you think they don't, there are lots of other things one could fund that would make a difference by your criteria. It would be arrogant of me to assume I'm part of an elite who's convinced by rational argument. But the truth is, a lot of my emotional intuitions regarding morality have been shaped by interactions with other people. Some positively reinforcing, like in the utilitarian community. But a lot negatively reinforcing, by people who wanted to convince me of the opposite of the positions I ended up with. And I think by trying to get people to care more about suffering, I may have pushed a lot of them to rationalize acceptance of suffering instead. My old psychology professor once told us he doesn't discuss controversial topics with people who don't hold views very similar to his in the first place, because the most likely outcome would be to push them further away. I'm sure I picked up some memes by people paid for to spread them, but I don't actually think it's robust and predictable enough. For example, how about changing demographics such that more people with negative-leaning-utilitarian inclinations are born? I'm very curious how exactly you would spend money to accomplish this. Or more directly opposing economic growth in various ways? (The latter isn't something I necessarily support; I'm just speaking hypothetically.) There would seem to be lots of options where funding could make a targeted impact compared with the blunt side-effects of having made that money. Yes, like sabotage. The problem is, of course, that this takes criminal energy, strategic positioning, and a degree of self-(and potentially other)-sacrifice most of us won't be willing to make. And of course, funding research on these questions could be extremely valuable too. Sure, if we can actually translate more money into better knowledge. Again, the question is, where to put the money, and how to make sure it actually improves knowledge at the margin, rather than shifting pre-existing funding so we end up funding literature studies indirectly. There's also the question how much this knowledge is really worth to us. Giving such knowledge to people who don't share our values is worthless or potentially harmful. And if we have to pay so much money for the knowledge that it absorbs the realistic economic (altruistic) power of our small special-interest community, the knowledge will be useless to us as well. "The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient." - Dr. Alfred Velpeau (1839), French surgeon Re: How often should you buy a new stock? I wonder what people think about LendingClub http://www.lendingclub.com/ They seem to have 9% annual return average for the majority of people using their investment service. The way it works is you lend money to people who are trying to consolidate their debt or just want a loan. Because some are paying credit-card debt at 20%+ it makes sense for them to take a loan from you at 18%. Few people default, but on average you get a decent annual return. Over the past 3 years I've averaged about 13% and it seems to be one of the the-lowest-risk-&-highest-return around. I might be blinded by some (confirmation?) bias here - but it seems like this investment vehicle is amazingly awesome. Re: How often should you buy a new stock? yboris wrote:I wonder what people think about LendingClub http://www.lendingclub.com/ They seem to have 9% annual return average for the majority of people using their investment service. The way it works is you lend money to people who are trying to consolidate their debt or just want a loan. Because some are paying credit-card debt at 20%+ it makes sense for them to take a loan from you at 18%. Few people default, but on average you get a decent annual return. I'm suspicious. "Average return" statistics can be juiced with survivorship bias, ignoring transaction fees (and taxes) and other shenanigans. It looks like the claimed 9% return comes from a sample that is disproportionately young loans, i.e. people who have just borrowed the loans, and paid back less in interest than they borrowed in the first place. Default rates will probably go higher with time. I also wonder about defenses against Ponzi schemes: what if borrowers take out multiple personal loans like this, then pay the interest using the loans, and after leveraging up just walk away? OTOH, in principle this sort of disintermediation could produce net value if the borrowers feel worse about not repaying the peer-to-peer loans than not paying a credit card company, or if credit card rates are higher than necessary because of clever company exploitation of behavioral economics (teaser rates and the like), etc. Re: How often should you buy a new stock? Also, selection bias: would you have recommended/mentioned it if defaults had wiped out much of your investment already? Plus they are taxable as regular income (unless in a tax-advantaged account), much higher than other interest income: http://www.mint.com/blog/investing/shou ... ding-club/ This isn't to say that it's necessarily a bad investment, just that there's lots of room for critical scrutiny (and claimed free-lunch investment returns deserve scrutiny).
{"url":"https://felicifia.github.io/thread/730.html","timestamp":"2024-11-07T10:31:43Z","content_type":"text/html","content_length":"31277","record_id":"<urn:uuid:da85d80d-e6ef-4cee-9e1f-a5b884ffc556>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00118.warc.gz"}
Physical Quantities | System International(SI) Base Units | Physical Quantities | System International(SI) Base Units Physical Quantities | System International(SI) Base Units June 23, 2022 A quantity that is measurable and has physical meaning is called physical quantity. Download the pdf notes of Physical Quantities and System International(SI) Base Units class 11 Physics Types of Physical Quantities Physical quantities are of two types 1. Base quantities 2. Derived quantities Base Quantities The quantities are not derived from other quantities but other quantities are derived from these quantities. Examples: Length, Mass, Time, etc. Derived Quantities The quantities which are derived from base quantities are called derived quantities. Examples: Velocity, force, etc. Steps for the measurement of a base quantity There are two steps for the measurement of a base quantity. 1. Choice of a standard 2. The establishment of a procedure for comparing the quantity to e measured with the standard so that a number and a unit are determined as the measure of that quantity Ideal Standard An ideal standard has two principal characteristics. 1. It is accessible 2. It is invariable The international system of units (SI) SI is the abbreviation of the French word Système International. In 1960 an international committee agreed on a set of definitions and standards to describe the physical quantities. The International System of Units (SI) is a metric system that is widely used as a measurement standard. Scientific and technical research and development rely heavily on SI units. It consists of seven base units that are used to define 22 derived units. The SI units can be stated as fractional numbers or as standard multiples. Prefix multipliers with powers of 10 ranging from 10^-24 to 10^24 are used to define these numbers. The system international defines units in two groups 1. Base units 2. Derived unit SI base unit The units that belong to the base quantity are called base units. Seven Base SI Unit The SI Base units are the foundation for all other units. There are 7 base units. What is Meter? The SI unit of length is the meter, which is defined as the length of the path traveled by light in a vacuum in 1/299 792 458 of a second. What is Kilogram? In SI 1 kg is equal to the mass of a cylinder of platinum-iridium, the International Prototype of the Kilogram (IPK) What is Second? Second is defined as 1/86400 of a day. This component is derived from the partition of a day into 24 hours, then 60 minutes, and finally 60 seconds (24×60×60 = 86400). What is Ampere? Every 1.602176634 seconds, an ampere is the electric current corresponding to 10^19 elementary charges passing. What is Kelvin? One Kelvin is equal to a change in the thermodynamic temperature T that results in a change of thermal energy kT by 1.380649×10^−23 J. What is Mole? A mole is the amount of a substance that includes exactly 6.02214076 X10^-23 of the substance’s elementary entities. What is Candela? The candela is the luminous intensity in a given direction of a source that emits monochromatic radiation of frequency 540×10^12 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian. Derived Unit The units which belong to derived quantity are called derived units. Because they are generated by various operations on the base units, the derived units are limitless. The dimensions of derived units are represented in terms of the dimensions of base units. A combination of base and derived units can also be used to express derived units. There are some derived units. For example work, electric charge, power, and capacitance. What are Supplementary Units? The units which are not classified under either base or derived units are called supplementary units. There are two supplementary units Radian and steradian What is Radian? An angle of one radian is formed by an arc of a circle with the same length as its radius. In a complete circle, there is 2 radian. What is Steradian? The angle covered an area of surface equal to the square of the sphere’s radius at the center. In the complete sphere, there are 4 steradians. Frequently Asked Questions(FQAs) What is the standard international SI base unit? The SI (International System of Units) is a system of measurement that starts with seven base units. How many base units are there? There are seven base units. Electric Current Luminous intensity Amount of substance What are derived units? The units which are derived from base units are called derived units. What is the importance of SI units? SI units are so important because they provide a common language that people from all corners of the world can use to communicate with one another, especially when it comes to business or science. Additionally, SI units make it very easy to express large or small numbers using prefixes. What are Supplementary units? The units which are not classified under either base or derived units are called supplementary units. There are two supplementary units. Radian and Steradian.
{"url":"https://eduinput.com/physical-quantities-system-internationalsi-base-units/","timestamp":"2024-11-11T03:46:00Z","content_type":"text/html","content_length":"177434","record_id":"<urn:uuid:ee3e140c-a1af-453e-ad77-f6dd12b1c743>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00059.warc.gz"}
Maths - Books about Projective Transforms Roger Penrose - The Road to Reality: Partly a 'popular science' book as it tries to minimise the number of equations (Not that I'm complaining much his book 'Spinors and space-time' went over my head in the first few pages) it still has lots of interesting results that its difficult to find elsewhere. Spinors and Space-Time: Volume 1, Two-Spinor Calculus and Relativistic Fields (Cambridge Monographs on Mathematical Physics) by Roger Penrose and Wolfgang Rindler - This book is about the mathematics of special relativity, it very quickly goes over my head by I hope I will understand it one day. Visual Complex Analysis - If you already know the basics of complex numbers but want to get an in depth understanding using an geometric and intuitive approach then this is a very good book. The book explains how to represent complex transformations such as the Möbius transformations. It also shows how complex functions can be differentiated and integrated. Geometric Algebra for Computer Science: An Object-oriented Approach to Geometry. This book stresses the Geometry in Geometric Algebra, although it is still very mathematically orientated. Programmers using this book will need to have a lot of mathematical knowledge. Its good to have a Geometric Algebra book aimed at computer scientists rather than physicists. There is more information about this book here. Geometric Algebra for Physicists - This is intended for physicists so it soon gets onto relativity, spacetime, electrodynamcs, quantum theory, etc. However the introduction to Geometric Algebra and classical mechanics is useful.
{"url":"http://www.euclideanspace.com/maths/geometry/space/nonEuclid/compactification/books.htm","timestamp":"2024-11-10T12:03:59Z","content_type":"text/html","content_length":"17668","record_id":"<urn:uuid:2a7655ae-1276-43f8-907b-e7f08d5bcad6>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00505.warc.gz"}
Learning Definitions One way that parents can help their children by revising mathematics is by asking questions. Of course it does help if you know the answer too! 1. What is used instead of numbers in an equation? 2. What does S.F. stand for? What is a significant figure? 3. When is it easier to write a number in standard form instead of writing it out in full? 4. What is the highest common factor and how would you work this out? 5. What is the radius of a circle? How can you find this if you were given the diameter of the circle? 6. What must you do to both numbers first if you want to divide a number by a decimal? You then need to supply the answers as soon as possible. 1) Letters are used instead of numbers. 2) It stands for significant figures. A significant figure is a digit that is not a zero, or it is a zero that follows a number that is not a zero. 3) Is it easier to write very big numbers and very small numbers in standard form instead of writing it out in full. 4) The highest common factor of two numbers is the biggest number that can be multiplied to give those two numbers. Two work this out, we write down all the factors for the two given numbers and pick out the biggest number that is present in both lists. 5) The radius of a circle is the length of a straight line that goes from the centre of a circle to the circle’s edge. A circle’s diameter is the length of a straight line that goes from the circle’s edge, through the centre of the circle and to the other side. So, we can halve the diameter to find the radius. 6) You must move the decimal points of both numbers by the same number of places until the dividing number becomes a whole number. We all know the saying: “Little and often!” Good luck!
{"url":"http://blog.elevenpluscourses.co.uk/2007/09/learning-definitions.html","timestamp":"2024-11-10T20:42:07Z","content_type":"text/html","content_length":"84082","record_id":"<urn:uuid:68b3d528-b666-4294-9fae-f68b7c694155>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00677.warc.gz"}
Calculate Euclidean Distance in Python In math, the Euclidean distance is the shortest distance between two points in a dimensional space. To calculate the Euclidean distance in Python, use either the NumPy or SciPy, or math package. In this tutorial, we will learn how to use all of those packages to achieve the final result. Use the distance.euclidean() Function to Find the Euclidean Distance Between Two Points The SciPy package is used for technical and scientific computing in Python. The distance module within SciPy contains the distance.euclidean() function, which returns the Euclidean distance between two points. To use it, import distance from scipy.spatial and pass your two tuples of coordinates as the first and second arguments of distance.euclidean(). from scipy.spatial import distance x = (3, 6, 8) y = (11, 12, 16) print(distance.euclidean(x, y)) If you need to install SciPy, install it VIA pip with the following terminal command: pip install scipy Get Euclidean Distance Between Two Point with math.dist() Function If you are already using the Python math package and would rather not import SciPy, use the math.dist() function instead. Pass the coordinates as the first and second arguments of math.dist() to get the Euclidean distance. import math x = (3, 6, 8) y = (11, 12, 16) print(math.dist(x, y)) Find Euclidean Distance from Coordinates with NumPy It is also possible to get the Euclidean distance using the NumPy package. The two disadvantages of using NumPy for solving the Euclidean distance over other packages is you have to convert the coordinates to NumPy arrays and it is slower. import numpy as np x = np.array((3, 6, 8)) y = np.array((11, 12, 16)) dist = np.linalg.norm(x-y)
{"url":"https://www.skillsugar.com/calculate-euclidean-distance-in-python","timestamp":"2024-11-03T04:24:13Z","content_type":"text/html","content_length":"40193","record_id":"<urn:uuid:be5c935d-968f-40ea-9f74-52adf17d8175>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00242.warc.gz"}
Maths Cafe | Maths Matters Resources - Part 4 This is where you’ll find a collection of Bev’s thoughts, insights and relevant web links. As a Cafe, it’s a place for you to relax and browse. You’ll find mathematics facts and snippets and helpful hints for making maths exciting and relevant. If you are a new subscriber it’s worth making a nice cup of coffee for yourself and checking out as many posts as you can – you never know what you’ll discover. Everyone can access this part of the website, even non-subscribers. Indian Embroidery There is a fantastic new exhibition at the Victoria and Albert Museum, London – The Fabric of India. It is packed full of mathematical information for your students to explore. It is on from 3 October 2015 until 10 January 2016. Wonderful for understanding how spectacular embroidered patterns are created. There are plenty of accompanying video clips too. Did you know that red dye techniques were known to the Indus Valley civilisation about 45000 years ago? And that cotton has been cultivated for 9000 years? Or that ‘ari” hook embroiderers from Gujurat, India, were highly prized by the Mughal and European courts? Christmas Tree Data Christmas time is packed with interesting facts for data analysis. Here is an American website we like: For example Americans spend a little more on fake trees than real trees, yet there are twice as many real trees purchased. And 85% of real trees are pre-cut. You also get a summary of facts from 2008 -2014 so you have useful comparisons and observations to explore. This site includes some Australian facts. It has a Christmas countdown clock and simple costs to use with your Stage 2 or 3 class. Mental Warmups Mental Maths Warmups are a wonderful way to start every maths session. They are short, efficient, language-based, targeted, relevant activities that up to the whole class can be engaged in at one time. You can spend about 10 minutes every day warming up your students strategic thinking, problem-solving and fact recall. The funny thing about them is that there are 3 distinct types of Mental Maths Warmups: 1: PAST These summarise, recall and practise maths concepts you have tackled with your students in a previous year, term or week. 2: PRESENT These target maths concepts that your class is studying today or this week or in this unit. 3: FUTURE These investigate maths concepts your class may not have been introduced to yet, They are great as a quick pre-assessment check. Although sample Mental Warmups are scattered throughout our Maths Resources, you can find a large collection of them sorted into specific grade groups in The Maths Session. Resources Checklists We suggest that your school Maths Improvement Team conducts a resource check for the major substrands every year. So that this task is manageable, we suggest that these surveys are spread out over 4 terms. To make your life easy, we provide a suggested list for each of the MEASUREMENT and GEOMETRY substrands. Its all ready for teachers to just tick and your Maths Improvement Team to summarise and analyse. TERM 1: Suggested Length Resources Checklist Suggested 3D Objects Resources Checklist TERM 2: Suggested 2D Shapes Resources Checklist Suggested Area Resources Checklist TERM 3: Suggested Volume & Capacity Resources Checklist Suggested Position Resources Checklist TERM 4: Suggested Mass Resources Checklist Suggested Time Resources Checklist TED Talks: Robert Lang The Math of Origami TED Talk: Alison Gopnik What do babies think? TED Talk: Carol Dweck: The power of believing that you can improve What’s wrong with worksheets Recently I had a heated discussion with colleagues about the word “worksheet”. They argued that these were linked to textbook activities that are boring, mechanical and invariably worksheets are mindless exercises in mathematics. I argued that my colleagues were referring to “textbook” tasks, or something like that. I didn’t win the argument. I didn’t convince them that worksheets could be wonderful examples of activities for our students. I now try to avoid using the term “worksheet” throughout our Maths Matters Resources website, just in case the term is misunderstood in any way. But the heart of this conversation was really that in primary mathematics we should be providing our students with worthwhile activities, full of interest, maths language, challenge and real-life relevance. What you call these in your classroom doesn’t really matter. A rose by any other name still smells as sweet. So what is it about an activity that makes it effective in primary mathematics? An activity should be relevant to the needs and interests of your students. It should encourage them to think together with a partner, to use language to explain and explore, to try out and compare different strategies. When a challenge has more than one solution however, the problem for us as teachers is that it becomes more difficult to “mark”, to evaluate, especially if we have more than 30 students in our class and they are working on a variety of activities with a variety of possible solutions. Textbook examples often have just one expected answer, so we know if it is correct or incorrect. But does ‘easy to mark’ mean ‘effective to practise and explore”? Invariably “no”. Simple answer-driven tasks do not generally encourage effective mathematical thinking or sharing of I face the same problem when creating activities for Maths Matters Resources. As you will have noticed I try to focus on activities where students have to talk, think, compare, evaluate and explain their thinking, preferably within a real-life context. And I know that as busy teachers you want me to provide “easy to mark” solutions so you can get on with something else in your classroom. In just about all of my recent activities, I try to do this. This can mean providing a few examples of strategic thinking, or ways to solve a particular problem. For example, I provide sample definitions for you in Grades F/1/2 What 3D Object am I PICTURE CARDS ACMMG022 ACMMG043, or an example of responses to a mental warm-up in Grades 3/4 What do I know AREA Mental Warmups F123456. In Grades 5/6/7 Little Town Shopping Centre POSITION Activities (9 pages) ACMMG113, I provide plenty of checklists where your students can cross reference possible data to select a matching correct solution. Phew. It was incredibly complicated to do all this but hopefully they help to make your classroom management a little easier. So just stop and take a big breath next time you are tempted to hand out a “worksheet” to your class. It may be easy to copy and mark but the quality of learning can never match the challenge of a stimulating, open-ended problem to solve in pairs or small groups. We want Australians of the future who can think for themselves, tackle challenges head-on, work co-operatively to achieve a common Jennifer Townley Maths Sculptures Jennifer Townley is a sculptor who creates beautiful mechanical automata with a mathematical zing. You can see plenty of examples at her website, www.jennifertownley.com. How can you use one of these sculptures to inspire your students to create their own sculptures? What is it that they like or dislike about these sculptures? And there are so many other artists to explore. Have you discovered the Swedish ceramic artist Eva Hild yet? Or the British sculptor Andy Goldsworthy? Or the Russian/US sculptor Naum Gabo? Maths Cartoons Cartoons are a great way to use humour to teach and develop mathematical ideas. This Mathematicians Food Fight by cartoonist Daniel Reynolds is a good example. It plays with the idea that the mathematical symbol pi is an actual pie that can be used in a food fight. Very silly but very effective. What a great way to start a whole unit talking about and exploring the wonderful relationship between the circumference of every circle and its diameter. Perhaps your students can create their own maths cartoons to inspire students in another class too. Amazing facts about animals Make maths come to life in your classroom with a focus on animals. Your students love to discover amazing animal facts. Encourage them to create mathematical lists about what they learn as they work with a partner or in a small group. This baby stingray, for example, is like a square – but with an alien body attached. So amazing. Are there any other creatures that have a square body shape? If you look under a microscope, or even better an electron microscope, you can discover a whole world of mathematical wonders. Geometric shapes galore. Encourage your students to collect facts about Number, Space/Geometry, Measurement, Chance and Data. e.g there are over 60 different species of stingray, they can live up to 25 years in the wild, they give birth to 2 – 6 young at a time, they grow up to 1.98 m. What else can they Mathematical Painters Mathematics has always inspired humans to decorate their belongings and surroundings with mathematical patterns and designs. One of the earliest artefacts discovered is a 40 000 year old green stone bracelet from the Denisovskaya cave, Russia. And prehistoric hand stencils on a cave in Spain were probably made by neanderthals about 35 000 years ago – the first known symbolic art. Some modern artists are more closely inspired by mathematics than others. MC Escher for example, or in this picture, the Swiss-German artist Paul Klee (1979 – 1940). This is Klee’s painting Castle and Sun, created in 1928. His greatest inspiration came when he visited Tunisia in 1914, where colour dominated his thinking. “I know that it has hold of me forever… Color and I are one. I am a painter.” He produced about 9 000 works of art in his lifetime, an amazing achievement. A web search will discover some of these works – why not focus on the mathematical ones? Find your favourite Klee painting, for example, then analyse it for colour, shape and size. Try to copy it exactly or use it as an inspiration for your own mathematical work of art. Using current happenings to inspire your students Make sure you keep your eyes and ears open for something mathematical happening in the news. For example the New Horizons spacecraft reached Pluto and its moon Charon on Tuesday 14 July after a 9 year journey. How wonderful that scientists and mathematicians can work together to make such things happen. Think of all the mathematical facts about Pluto you can use to inspire your students – making place value, for example, come to life in your classroom. Pluto is 7.5 billion kilometres from earth. How can we really think about that? What is a billion? (1 000 000 000) 1 billion seconds is about 31.7 years, so 1 billion seconds ago would put us in 1983. 1 billion minutes is approximately 1901 years, so 1 billion minutes ago would land us in the year 114. 1 billion hours is approximately 114 000 years, so 1 billion hours ago would land us in the Lower Paleolithic era or Old Stone Age. 1 billion days is approximately 2.74 million years, so 1 billion days ago would be when the genus Homo appeared in Africa. That means it is a VERY long way Pluto was discovered on 18 January 1930 by Clyde Tombaugh when he was only 24 years old. How long ago is that date in 1930? How do you know? What strategy did you use? How close is Pluto to our sun? How heavy is it? (13 050 000 000 000 billion kg. That’s 0.00218 x the mass of our Earth.). What is its diameter? (2 370 km) How long does it take to make a complete orbit? (246.04 Earth years) How far does Pluto travel when it makes one complete orbit? (5 874 000 000 km). How many moons orbit Pluto? (5) Pluto is one third water. So what is the rest of Pluto made from? (rock) Did you know that the ashes of Clyde Tombaugh, who died in 1997, were sent with New Horizons to Pluto. So dramatically romantic. What an honour to visit Pluto 85 years after his initial discovery. Maps and Mathematics Studying and creating maps is a wonderful way to link both 3D Objects, 2D Shapes and Position sub-strands in your maths sessions. The earliest map found so far is from the Czech Republic dated about 25 000 years ago. This showed geographical landmarks in the area. Another early map was created on the walls of Lascaux Caves, France about 16 500 BC. The ancient Babylonian created clay maps of the world as early as 600 BC. The ancient Egyptians also created maps to record property boundaries. And the ancient Chinese created maps on silk. Islamic scholars had world maps in 1154 AD to help Arab merchants and explorers. Records also show Polynesian maps of the Pacific Ocean to help their sailors travel large distances. Sticks were tied in a grid with palm strips to represent wind and wave patterns. Attached shells showed were to find small islands. Later in the 16th century, of course, the Flemish cartographer, Mercator, worked out a way to make the 3D world look better as a 2D shape and the Mercator’s Projection map was born. Today we have highly detailed digital maps based on aerial photography and satellite imagery. We even have them accessible on our phones. Maps helped humans define their 3D world as 2D images and as 3D globes. The word ‘cartography” means the study of maps and it comes from a Latin word “carta” (map). All your students, young and all, can enjoy thinking about how to represent the world around them as a map. You’ll find our suggested activities at Geometry – Position. We also have a few maps in Position Photographs and Position Graphics. We are always on the lookout for plenty more! Singapore Maths Problem This problem went viral across the world on 14 April 2015. It is from a test given to 14 year old students competing in the Singapore and Asian Schools Maths Olympiad, aimed at the top 40% of We had a fantastic response from our community regarding the correct answer. It’s an interesting maths challenge that relies on logical thinking. How do you approach this problem? What strategies do you use to solve it? And imagine being in a test situation too. That is an added distraction for your brain. Do what’s the answer??? To start, Cheryl gives 10 possible dates for her birthday. Albert is then told the month and Bernard the day of Cheryl’s birthday.The challenge is to work out the actual date of Cheryl’s birthday. Albert and Bernard at first don’t know the actual date from the information they are given. So it can’t be May 19 or June 18. Otherwise Bernard would definitely know if he had 18 or 19 as his day. And for Albert to say that he definitely knows Bernard doesn’t know, this means it can’t be any date in May or June (when these two numbers 18 and 19 appear) either. So that leaves July or August. But then Bernard says he does know when the date of Chery’s birthday. It can’t be July 14 or August 14 as then Bernard would not know which month is the correct one. So that leaves July 16, August 15 or August 17. And then Albert says he knows now too. That means there is no double up. It can’t be August 15 or 17 as then Albert wouldn’t know which day to select. It must be July 16. Crescent Moon Watching the moon every month is a great way to think about both time and 3D objects. Twenty-eight days to observe and record, to think about and spot patterns. That’s what Galileo did way back in 1609 when he created a telescope that could magnify 20 x. He observed the moon each day and worked out the surface of the moon has valleys, shadows and peaks like on Earth. It was not perfectly spherical after all. And like Copernicus, Galileo believed the Sun was the centre of our system, not the Earth. Up until then Aristotle’s idea maintained the Earth was the centre of the Universe. Galileo sparked the birth of modern astronomy. Your students can create their own daily record of the shape of the moon. Compare it with our March calendar which shows Galileo’s own recorded observations. Monument Valley If you like mathematical games, Monument Valley for iphones or ipads, is pretty spectacular – one of the most beautiful games we have ever seen. If you like MC Escher, and we love his work, the creators have been influenced by his impossible puzzles. At the conclusion of each segment of the game, the little girl returns a mathematical shape that belongs to a tower. Very addictive. Giant Bubbles What a wonderful way to celebrate the brilliance of everything! Create giant bubbles and watch them float away, burst, contort. Just try some detergent and water and a large metal ring on a stick. You can bend metal coat hangers, for example. What is the largest bubble you can create? The longest in length? The longest time it stays afloat? Here are some great bubble video clips to use as a class stimulous: Bubbles in slow motion – https://www.youtube.com/watch?v=q4BByh4zrWs Bubbles on a beach – https://www.facebook.com/video.php?v=10153082881424742 Giant bubble net – https://www.youtube.com/watch?v=Q9kBL1cYd-8 IMPROVE your problem-solving skills using these questions One reason we learn mathematics is so that we can solve problems. The latest educational research shows that the IMPROVE method of metacognitive questioning, developed in Israel by Mevarech and Kramarski, is very effective. Here are 4 IMPROVE questions to help your Stage 2 and 3 students think about their own thinking. These self-directed questions can be used as a scaffold. They’ll be especially useful when tackling complex, unfamiliar and non-routine problems. Problems like these should be available for all students to solve, not just your low blockage students. Most textbooks are not suitable as they include only routine problems using a known algorism. As Andreas Schleicher (Director for Education and Skills, OECD) says, ” … good mathematics education can … foster the innovative capacities of the entire student population. including creative skills, critical thinking, communication, team work and self-confidence”. You can read more here. Or view Andreas Schleicher’s TED talk Use Data to build better Schools MV Road to Mandalay There are plenty of websites that summarise key facts about ships and boats across our planet – a wonderful source of mathematical discovery. For example, http://www.wildearth-travel.com/ships/. Bev Dunbar was recently on a river cruise down the Irrawaddy (Ayeyarwady) River in Myanmar when she spotted this magnificent river boat – MV Road to Mandalay. It’s a luxury boat that was originally brought from Germany in the 1990s but it was completely refitted after the devastating Cyclone Nagis in 2008. It is 101.6 metres long, with an 11.7 beam (width), a 1.45 m draft (underwater depth) and 900 tonnes gross tonnage. There are 4 decks which include 43 cabins that can take 82 passengers down the river in style from Mandalay to Bagan and back again. There are 108 crew members. And there are plenty of maths discussions just regarding the size of the cabins on this vessel.
{"url":"https://mathsmattersresources.com/home/maths-cafe/page/4/","timestamp":"2024-11-05T10:31:33Z","content_type":"text/html","content_length":"324198","record_id":"<urn:uuid:5564ad32-2dad-4c86-8fd6-dd1919bcc2c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00625.warc.gz"}
Given the area of a square, factor it to find the side length. -Turito Are you sure you want to logout? Given the area of a square, factor it to find the side length. Area = Using the formula area = side2. find the side of the square by factoring the area . The correct answer is: 6x + 10 is the side length of the square. Ans:- 6x+10 is the side length of the square. Given , A = Take out common factor 4 we get We get But, area of square A = ∴ length of side is 6x+10 . Get an Expert Advice From Turito.
{"url":"https://www.turito.com/ask-a-doubt/given-the-area-of-a-square-factor-it-to-find-the-side-length-area-36x-2-120-x-100-qf4081564","timestamp":"2024-11-01T20:38:50Z","content_type":"application/xhtml+xml","content_length":"305211","record_id":"<urn:uuid:2674cbcb-008b-4911-89c7-b3a6a045ba2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00039.warc.gz"}
Demonstrate, by running a program, that you can take one large time step with the Backward Euler scheme and compute the solution of (9.38). 1.Many diffusion problems reach a stationary time-independent solution as t → ∞. The model problem from Sect. 9.2.4 is one example where u(x, t) = s(t) = const for t → ∞. When u does not depend on time, the diffusion equation reduces to −βu(x) = f (x), in one dimension, and −β∇2u = f (x), in 2D and 3D. This is the famous Poisson equation, or if f = 0, it is known as the Laplace equation. In this limit t → ∞, there is no need for an initial condition, but the boundary conditions are the same as for the diffusion equation. We now consider a one-dimensional problem − u(x) = 0, x ∈ (0, L), u(0) = C, u (L) = 0, (9.38) which is known as a two-point boundary value problem. This is nothing but the stationary limit of the diffusion problem in Sect. 9.2.4. How can we solve such a stationary problem (9.38)? The simplest strategy, when we already have a solver for the corresponding time-dependent problem, is to use that solver and simulate until t → ∞, which in practice means that u(x, t) no longer changes in time (within some tolerance). A nice feature of implicit methods like the Backward Euler scheme is that one can take one very long time step to “infinity” and produce the solution of (9.38). a) Let (9.38) be valid at mesh points xi in space, discretize u by a finite difference, and set up a system of equations for the point values ui,i = 0,…,N, where ui is the approximation at mesh point xi. b) Show that if Δt → ∞ in (9.16)–(9.18), it leads to the same equations as in a). c) Demonstrate, by running a program, that you can take one large time step with the Backward Euler scheme and compute the solution of (9.38). The solution is very boring since it is constant: u(x) = C.
{"url":"https://universityessayservices.com/blog/demonstrate-by-running-a-program-that-you-can-take-one-large-time-step-with-the-backward-euler-scheme-and-compute-the-solution-of-9-38/","timestamp":"2024-11-04T08:09:10Z","content_type":"text/html","content_length":"93696","record_id":"<urn:uuid:7c9d71be-a687-42ae-a98f-55c83ecd2f15>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00264.warc.gz"}
pre-additive category nLab pre-additive category Additive and abelian categories Homological algebra Preadditive categories Some authors take pre-additive category to mean $Ab$-enriched category, for example: Other authors require in addition the existence of a zero object (which, this being the empty coproduct, is the first step towards requiring more general finite coproducts as for an additive category , whence the terminology): • Sandro M. Roch, Def. 1.0.4 in: A brief introduction to abelian categories (2020) &lbrack;pdf&rbrack; Last revised on August 5, 2023 at 08:00:30. See the history of this page for a list of all contributions to it.
{"url":"https://ncatlab.org/nlab/show/pre-additive+category","timestamp":"2024-11-14T19:53:24Z","content_type":"application/xhtml+xml","content_length":"27970","record_id":"<urn:uuid:d78dcc6c-bd52-4381-95d5-b6b2a5ee8558>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00213.warc.gz"}
Solving for interest rate in future value formula In economics and finance, present value (PV), also known as present discounted value, is the Most actuarial calculations use the risk-free interest rate which corresponds to the Spreadsheets commonly offer functions to compute present value. This is also found from the formula for the future value with negative time . To determine which bond has a higher return, you need to determine the interest rate on the two investments. Step. Use the formula below where "I" is the interest � Money in the present is worth more than the same sum of money to be received in the future A specific formula can be used for calculating the future value of money so i = the interest rate or other return that can be earned on the money Key in the periodic discount (interest) rate as a percentage and press I/YR. Press FV to calculate the future value of the payment stream. Example of calculating the � These factors lead to the formula. FV = future value of the deposit. P = principal or amount of money deposited r = annual interest rate (in decimal form). In addition to arithmetic it can also calculate present value, future value, press the payment (PMT) button the calculator will compute the value for the PMT. of periods (N), interest rate per period (i%), present value (PV) and future value (FV) . Make sure this is the number of payments if you are calculating loan values. To calculate the future value of a monthly investment, enter the beginning balance, the monthly dollar amount you plan to deposit, the interest rate you expect to expect to continue making monthly deposits, then click the "compute" button. 4 Mar 2015 Learn the risk free rate of return formula. Professor Jerry Taylor shows your how to calculate real interest rates using these easy to follow calculations. PV is a present value or the initial amount of loan. FV is a future amount� Calculating the interest rate using the present value formula can at first seem impossible. However, with a little math and some common sense, anyone can quickly calculate an investment's interest This simple example shows how present value and future value are related. In the example shown, Years, Compounding periods, and Interest rate are linked in columns C and F like this: F5 = C9 F6 = C6 F7 = C7 F8 = C8 The formula to calculate future A business takes out a simple interest loan of $10,000 at a rate of 7.5%. What is the total amount the business will repay if the loan is for 8 years? Solution. The total amount they will repay is the future value, \(A\). We are also given that: \(t = 8\) \(r = 0.075\) \(P = 10\,000\) Using the simple interest formula for future value: Given a present dollar amount P, interest rate i% per year, compounded In equations, the interest rate i must be in decimal form, not percent. Example: If $ 100 is invested at 6% interest per year, compounded annually, then the future value of� 23 Jul 2019 Present Value Formula For a Lump Sum With One Compounding Period. This brings us to the topic of interest and interest rates. As a rational, risk� Covers the compound-interest formula, and gives an example of how to use it. For instance, let the interest rate r be 3%, compounded monthly, and let the initial all the values plugged in properly, you can solve for whichever variable is left. Key in the periodic discount (interest) rate as a percentage and press I/YR. Press FV to calculate the future value of the payment stream. Example of calculating the � These factors lead to the formula. FV = future value of the deposit. P = principal or amount of money deposited r = annual interest rate (in decimal form). These factors lead to the formula. FV = future value of the deposit. P = principal or amount of money deposited r = annual interest rate (in decimal form). In economics and finance, present value (PV), also known as present discounted value, is the Most actuarial calculations use the risk-free interest rate which corresponds to the Spreadsheets commonly offer functions to compute present value. This is also found from the formula for the future value with negative time . To determine which bond has a higher return, you need to determine the interest rate on the two investments. Step. Use the formula below where "I" is the interest � Money in the present is worth more than the same sum of money to be received in the future A specific formula can be used for calculating the future value of money so i = the interest rate or other return that can be earned on the money Compound Interest: The future value (FV) of an investment of present value (PV) dollars One may solve for the present value PV to obtain: Effective Interest Rate: If money is invested at an annual rate r, compounded m times per year, the � The simple interest calculator below can be used to determine future value, present value, To determine the period interest rate, simply take the annual rate of interest, and divide it by Each variable of the formula is isolated, and defined. Set up the equation using the formula: Current Market Interest Rate = Annual Interest Payment (future value * coupon rate) / present value Insert bond information and complete the calculation. Key in the periodic discount (interest) rate as a percentage and press I/YR. Press FV to calculate the future value of the payment stream. Example of calculating the � These factors lead to the formula. FV = future value of the deposit. P = principal or amount of money deposited r = annual interest rate (in decimal form). In addition to arithmetic it can also calculate present value, future value, press the payment (PMT) button the calculator will compute the value for the PMT. of periods (N), interest rate per period (i%), present value (PV) and future value (FV) . Make sure this is the number of payments if you are calculating loan values. To calculate the future value of a monthly investment, enter the beginning balance, the monthly dollar amount you plan to deposit, the interest rate you expect to expect to continue making monthly deposits, then click the "compute" button. The future value formula helps you calculate the future value of an investment (FV) for a series of regular deposits at a set interest rate (r) for a number of years (t). Using the formula requires that the regular payments are of the same amount each time, with the resulting value incorporating interest compounded over the term. Solving for the Interest Rate. Solving for the interest rate in a lump sum problem is far more common than you might imagine. Not only is it commonly done to calculate the performance of investments, but it is used to calculate the compound average annual growth rate (CAGR) for any geometric series. Compound Interest: The future value (FV) of an investment of present value (PV) dollars One may solve for the present value PV to obtain: Effective Interest Rate: If money is invested at an annual rate r, compounded m times per year, the � The simple interest calculator below can be used to determine future value, present value, To determine the period interest rate, simply take the annual rate of interest, and divide it by Each variable of the formula is isolated, and defined. 12 Jan 2020 Using Tables to Solve Present Value of an Annuity Problems Then go out along the top row until the appropriate interest rate is located. Free net present value calculator helps you to compute current investment visualize the effect that different interest rates, interest periods or future values could� Present Value. Value today of a future cash flow. Discount Rate. Interest rate used to compute present values of future cash flows. Discount Factor. Present value� In other words, this formula is used to calculate the length of time a present value would need to reach the future value, given a certain interest rate. The formula� Present value, interest rate and future value all relate closely to the time value of a known variable in solving interest rate problems, this is not always the case. Evaluate the equation for calculating the interest rate or yield of the bond to�
{"url":"https://platformmxrylrd.netlify.app/vascocu4607zab/solving-for-interest-rate-in-future-value-formula-259","timestamp":"2024-11-04T12:08:17Z","content_type":"text/html","content_length":"36535","record_id":"<urn:uuid:57769e8f-86e5-464f-a6ae-46af49dc9176>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00279.warc.gz"}
Ephraim A (1893) - The James Lind Library Title page(s) Key passage(s) “One would think that therapy should be the field of medical thinking in which the relation between causes and effects must be most easily detectable… Reality teaches us, however, as is well known, that the contrary is the case: in no field of medicine do the opinions of doctors agree less than in therapeutics. Views are contested, not only on the utility of particular medicines in particular diseases, on the general purpose of particular medicines and therapeutic methods, but even on the value of the whole of internal therapy. When we enquire into the reasons for this truly remarkable and regrettable phenomenon, we ultimately find them, in agreement with Rosenbach, in the lack of observational material. This is certainly not to suggest that there are too few cases of disease in the world, or that doctors have had too little opportunity to observe them, but rather that the therapeutic observations made hitherto have not been reviewed and employed in a way that enables us to deduce general laws from them, in other words, that we have no statistics of therapy. Before justifying this conclusion, let me first consider some objections to the significance of statistics in therapy. These can be made with apparent justification and, in fact, some authors have made them partly consciously, on the basis of principles, and partly unconsciously. These objections are generally of three types. One type of objection stresses the practical worthlessness of statistics… The second objection made against the use of statistical methods in therapy is no less than that it is unscientific. “Statistics destroy the true medical art and true observation, substituting a uniform, blind and mechanical routine for the action of the spirit and individual genius of the practitioner,”as was declared by one of the most eminent French doctors of this century. Indeed, one hears often enough today of empiricism being despised and presented as being in opposition to a scientific view. Yet we are rarely taught the essence of such rational therapy. A further [third] reservation… about the position stated at the beginning of this article can be found in [the claim] that a therapy based on statistics does not meet the fundamental requirement that a treatment should be individualised. It is said to be wrong to consider the sum of the cases of a disease as a homogeneous mass, as statistics necessarily do. Each case of disease is said to be unique, an individuality that has to be considered and observed as such if we are to succeed in finding the most appropriate treatment for each case. Even if one agrees with the above reservations, it should be pointed out that all these objections hold not only for statistics but, in the same way, for the simple (unmethodical) experience that has been our principal guide since time immemorial. But that is not so, [in that] experience is unreliable: even if it has been acquired as perfectly as possible, it is all too easily misleading because the observations on which it is based can never be regarded as fully objective and complete. Statistics, however, can never yield incorrect results if they are soundly based, and conclusions deduced correctly. [On the other hand], what do we gain from learning that a particular medicine is useful in certain circumstances -“frequently”, “from time to time”, or “often”, when we know that this “frequently” is often equivalent to “rather seldom”. It is impossible to obtain a clear view of the efficacy of a medicine from such abstract expressions, which mainly result from observers using wholly subjective approaches. What is needed here is a methodically controlled experience, [that is] statistics. These alone show us the totality of our observations; allow us to organise them according to particular criteria that seem appropriate; and then to exhibit clearly, if this is at all possible, the interdependence of the observed phenomena. To substantiate the efficacy of mercury in syphilis, or quinine in malaria, or of iron in chlorosis one may not need statistics, although they will probably also yield surprises in these examples. But the most recent debates about the results of various procedures proposed to prevent childbed fever show, for example, how indispensable statistics can be. It would be ridiculous if we presented the results of these procedures saying that this disease occurred after one procedure “rarely”, and after another “very rarely”, etc. For it is precisely and exclusively the numerical proportion, the statistical evidence, that allows an insight into the true efficacy of each procedure… The whole difficulty therefore lies in the counting, namely, to deal with two things: what shall we count? and how many cases have to be counted? Concerning the first of these two points, it is well known that one can only count things that are alike. This likeness does not actually need to be present in all respects, but is required only in respect of the point relevant to the reason for counting. We may well add loaves and knives together when we are determining the number of objects present, but not when assessing the quantity of food. In the same way, we also have to consider with every statistical counting in medicine that two cases of disease can be completely alike in origin, but, as one knows, quite opposite in terms of their curability. If one wants to determine the effect of a medicine by statistical examination, one will therefore have to count only those cases that are alike in curability, that is, prognosis… The second question – “how many cases have to be counted to obtain a reliable result?” – does not permit a simple answer. The importance of and need for such an answer can be illustrated by noting that we observe often enough a higher mortality of newborn girls than of boys within a particular family circle. This contradicts statistical experience based on large numbers. We can have statistical results that are only contradictory because they are based on series of numbers differing in size. It now becomes clear that the reliability of a result increases with the size of the series; the so called law of large numbers teaches us that for every kind of phenomenon you need a certain number of observations to yield a constant numerical proportion… Indispensable as it is, the fulfilment of all these requirements can only be achieved with difficulty. However, failure to fulfil them leads to those incorrect therapeutic statistical results that confront our eyes every day. For if we see again and again that medicines are recommended with reference to seemingly conclusive evidence, which are then revealed to be ineffective upon further application, we shall find the explanation of this remarkable inconsistency in the neglect of one of the above- mentioned conditions. The inefficacy of creolin against cholera and the large list of medicines recently recommended against diphtheria, despite the extraordinary number of cases of these diseases being cured by their application according to their eulogists, is explained just by the fact that not all of these cases have been cholera or diphtheria. The statistical evidence about the cold water treatment of abdominal typhus is unreliable because this disease shows very large variation in prognosis, so one needs a truly extraordinary large number of observations to evaluate a treatment for this disease. Rightly, the copious statistical evidence about the efficacy of treatments for erysipelas has recently been shaken by pointing out that the frequency of spontaneous cure of this disease is massively larger than the eulogists of one or another of the recommended treatments for it has seemed to assume. That the difficulties resulting from the above are occasionally easily overcome is proven by the numerous teachings which we owe to statistics, particularly in the field of surgery and gynaecology. Even if one accepts that these difficulties cluster in the field of internal diseases, because of the larger variability of these conditions, the all-to-frequent claim that these difficulties are insurmountable cannot be maintained. Rather, a purposeful examination based on abundant material will be able to overcome such difficulties. Anyone who believes that all the barriers to progress that impede statistical investigation are insurmountable should realise that he thereby renounces all reliable therapeutic knowledge. For, as we believe to have shown above, its reliability is guaranteed only by the results of statistical research.” Translation by Ulrich Tröhler
{"url":"https://www.jameslindlibrary.org/ephraim-a-1890-1894/","timestamp":"2024-11-13T16:13:18Z","content_type":"text/html","content_length":"67210","record_id":"<urn:uuid:84b1e03a-1dfc-4e15-8fdd-4573a8367a63>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00649.warc.gz"}
Determining the Relevant Fair Value(s) of S&P 500 Futures - Kawaller & Company Determining the Relevant Fair Value(s) of S&P 500 Futures A fundamental consideration for potential users of stock index futures is the determination of the futures’ break-even price or fair value. Conceptually, being able to sell futures at prices above the break-even or buy futures at prices below the break-even offers opportunity for incremental gain. This article points out an important, though widely unappreciated caveat. That is, no single break-even price is universally appropriate. Put another way, the break-even price for a given institution depends on the motivation of that firm as well as its marginal funding and investing yield In this article five differentiated objectives are identified, and the calculations of the respective break-even futures prices are provided. The various objectives are: (a) to generate profits from arbitrage activities, (b) to create synthetic money market instruments, (c) to reduce exposure to equities, (d) to increase equity exposure and (e) to maintain equity exposure using the more cost effective instrument via stock/futures substitution. All these alternative objectives have the same conceptual starting point, which relates to the fact that a combined long stock/short futures position generates a money market return composed of the dividends on the stock position as well as the basis^1 adjustment of the futures contract. ^1 “Basis,” in this article, is defined as the futures price minus the spot index value. Elsewhere, the calculation might be made with the two prices reversed. Under the simplified assumptions of zero transaction costs and equal marginal borrowing and lending rates, the underlying spot/futures relationship can be expressed as follows: In equilibrium, the actual futures price equals the break-even futures price, and thus the market participant would either have no incentive to undertake the transactions or be indifferent between competing tactics for an equivalent goal. Moving from the conceptual to the practical simply requires the selection of the appropriate marginal interest rate for the participant in question, as well as precise accounting for transaction costs. This paper demonstrates that these considerations foster differences between the break-even prices among the alternative goals considered. Each goal is explained more fully, and the respective theoretical futures prices are presented. Generating Profits from Arbitrage Activities Generally, arbitrage requires identifying two distinct marketplaces where something is traded, and then waiting for opportunities to buy in one market at one price and sell in the other market at a higher price. This same process is at work for stock/futures arbitrage, but these market participants tend to view their activities with a slightly different slant. They will enter an arbitrage trade whenever (a) buying stock and selling futures generates a return that exceeds financing costs, or (b) selling stock and buying futures results in an effective yield (cost of borrowing) that falls below marginal lending rates. Completed arbitrages will require a reversal of the starting positions, and the costs of both buying and selling stocks and futures must be included in the calculations. ^2 Thus, the total costs of an arbitrage trade reflects the bid/ask spreads on all of the stocks involved in the arbitrage, the bid/ask spreads for all futures positions, and all commission charges on both stocks and futures.^3 Table 1 calculates these arbitrage costs under three different scenarios. In all cases, the current starting value of the stock portfolio, based on last-sale prices, is $100 million and the S&P 500 index is valued at 950.00. The size of the hedge is calculated in the traditional manner:^4 ^2 If any fees or charges apply to the borrowing or lending mechanisms, these too would have to be incorporated in the calculations. Put another way, for the calculations that are presented in this article, the marginal borrowing and lending rates are effective rates, inclusive of all such fees. ^3 Brennan Schwartz (1990) note that the cost of closing an arbitrage position may differ if the action is taken at expiration versus prior to expiration. Thus , the appropriate arbitrage bound should reflect whether or not the arbitrageur is expecting (or hoping) to exercise an “early close-out option.” ^4 See Kawaller (1985) for a discussion of the justification for this hedge ratio. In column A, transactions are assumed to be costless, reflected by zero values for bid/ask spreads as well as zero commissions. In column B, more typical conditions are shown. Commissions on stock are assumed to be $.02 per share; bid/ask spreads on stocks are assumed to be 1/8th ($.125 per share); commissions on futures are assumed to be $12 on a round-turn basis (i.e., for both buy and sell transactions); and bid/ask spreads on futures are assumed to be two ticks or 0.20, worth $50. Column C assumes the same commission structure as that of column B, but bid/ask spreads are somewhat higher, reflecting a decline in liquidity relative to the former case. This scenario also might be viewed as representing the case where impact costs of trying to execute a stock portfolio were expected to move initial bids or offers for a complete execution. The index point costs in all cases reflect the respective dollar costs on a per-contract basis. ^5 ^5 In practice, it may be appropriate to assume two different cost structures for the upper- and lower-bound breakeven calculations because costs differ depending on whether the trade starts with long stock/short futures or vice versa. The difference arises because initiating the short stock/long futures arbitrage requires the sale of stock on an uptick. The “cost” of this requirement is uncertain because the transactions price is not known at the time the decision is made to enter the arbitrage. No analogous uncertainty exists when initiating the arbitrage in the opposite direction. The arbitrageur would evaluate two independent arbitrage bounds: An upper bound and a lower bound. During those times when futures prices exceed the upper arbitrage boundary, profit could be made by financing the purchase of stocks at the marginal borrowing rate and selling futures; and when the futures prices are below the lower bound, profits could be made by selling stocks and buying futures, thus creating a synthetic borrowing, and investing at the marginal lending rate. In both cases, the completed arbitrages would require an unwinding of all the original trades. The upper bound is found by substituting the arbitrage firm’s marginal borrowing rate in equation (1) and adding the arbitrage costs (in basis points) to this calculated value. In the case of the lower arbitrage boundary, the marginal lending rate is used for the variable i in equation (1), and the arbitrage costs are subtracted. The calculations in Table 1 assume marginal borrowing and lending rates of 6% and 5%, respectively, and a dividend rate of 3.5%. The upper and lower arbitrage boundaries are given for the three alternative cost structures. For comparative purposes, arbitrage boundaries are generated for two different time periods. Most obvious is the conclusion that an arbitrageur with a higher (lower) cost structure or a wider (narrower) differential between marginal borrowing and lending costs would face wider (narrower) no-arbitrage boundaries. In addition, Table 1 also demonstrates the time-sensitive nature of the difference between the two bounds, or the no-arbitrage range. As time to expiration expands, this range increases, monotonically, all other considerations held constant. Creating Synthetic Money Market Securities The case of the firm seeking to construct a synthetic money market security by buying stocks and selling futures is a slight variant of the arbitrage case described in the prior section.^6 In this situation, too, the firm will seek to realize a rate of return for the combined long stock/short futures positions, but the relevant interest rate that underlies the determination of the break-even futures price is different. While the arbitrageur who buys stock and sells futures will do so whenever the resulting gain betters the marginal borrowing rate, the synthetic fixed-income trader will endeavor to outperform the marginal lending rate. For both, however, the imposition of transaction costs will necessitate the sale of the futures at a higher price than would be dictated by the costless case. ^6 The case where the firm already holds the stock is considered later. Not surprisingly, the break-even price for this player is directly related to both transaction costs and time to expiration. What may not be quite as readily apparent is the fact that, at least theoretically, situations may arise that provide no motivation for arbitrageurs to be sellers of futures, while at the same time offering a motivation for a potentially much larger audience of money managers to be futures sellers. Put another way, large scale implementation of the synthetic money market strategy by many market investors could certainly enhance these participants’ returns, but also have the more universally beneficial effect of reducing the range of futures price fluctuation that do not induce relative-price-based trading strategies. Yet another seemingly perverse condition that is highlighted by these calculations is that firms that operate less aggressively in the cash market, and thereby tend to have lower marginal lending rates, will likely have a greater incremental benefit from arranging synthetic securities than will firms that seek out higher cash market returns. For example, assume Firm A has access to Euro deposit markets while Firm B deals only with lower yielding U.S. domestic banks; and assume further that the difference in marginal lending rates is 0.25%. Firm B’s break-even futures price necessarily falls below that of Firm A. At any point in time, however, the current futures bid is relevant for both firms. Assuming the two firms faced the same transaction cost structures, this futures price would generate the same effective yield for the two firms. Invariably, Firm B will find a greater number of yield enhancement opportunities than will Firm A; and any time both firms are attracted to this strategy simultaneously, B’s incremental gain will be greater. Decreasing Equity Exposure The case of the portfolio manager who owns equities and is looking to eliminate that exposure has two alternative courses of action: He/she could (1) simply sell the stocks, or (2) continue to hold the equities and overlay a short futures position. If the adjustment were expected to be permanent, the first course of action would likely be preferred, as the stocks would have to be liquidated anyway, at some point. Thus, the use of a futures hedge would only delay the inevitable and add additional costs. When the adjustment to the equity exposure is expected to be temporary, on the other hand, the use of futures would likely make more sense, given the significantly lower transactions costs associated with the use of futures versus traditional shares. Even in this case, however, there is a break-even futures price below which the short futures hedge becomes uneconomic, despite the advantageous transaction cost comparison. This break-even price is found by recognizing that the effect of the hedge is to convert the equity exposure into a money market return. The question then becomes, “What is the effective money market return that one could realize by selling the stocks, putting the funds in a money market security, and then repurchasing the stocks?” Given this result, one must then find the futures price that generates this same result. Clearly, if the futures could be sold at a higher price, the short hedge would be the preferred way to decrease the equity exposure. In calculating the returns from the traditional “sell stocks/buy money market securities” tactic, one should recognize that the liquidation cost effectively “haircuts” the portfolio. For example, the liquidation of a $100 million portfolio involves an immediate expense such that some amount less than the original $100 million becomes available for reinvestment. Thus, the portfolio manager realizes a lower fixed income return than the nominal yield on the proposed money market security. In Table 2, the haircut is estimated to reflect half of the bid/ask spread as well as the stock commissions. The same commission and bid/ask structure is assumed as that which faces the firms analyzed in the prior section; and similarly, the same marginal investment rate is incorporated. Under these conditions, the manager who chooses the liquidation of the stock portfolio and the investment of the proceeds at 5% (rather than hedging) realizes an effective net money market return of 1.03% for 30 days or 3.01% for 60 days. Respective break-even futures prices are 948.29 and 949.47. Implicit in these calculations is the assumption that the period for which the funds will remain in cash is identically equal to the horizon associated with the money market instrument and the time to the expiration of the futures contract when futures and spot prices necessarily converge. Increasing Equity Exposure Perhaps the easiest situation to explain is the choice between buying equities today at the spot price versus leaving the fund in a money market instrument and entering a long futures position. This decision simply requires calculating the forward value of the index, which, in turn, reflects the opportunity costs of foregoing interest income of a fixed income investment alternative as well as an adjustment for transaction costs of futures, alone.^7 For the case of the same prototype firm discussed in the earlier sections, and given the same portfolio, the opportunity cost is generated using the marginal lending rate of 5%. This value corresponds to the lower arbitrage boundary in the zero transactions cost scenario. ^7 Stock costs would be roughly comparable whether one were to buy now or later, so they do not enter into the calculation. This treatment, admittedly, is not precise. For example, with a significant market move, the number of shares required may vary, as may the average bid-ask spreads; therefore, some differences may arise. Moreover, the statement ignores the fact that although absolute magnitudes may be identical in both the buy-now or buy-later cases, the present values of these charges may differ. This consideration, if taken into account more rigorously, would bias the decision toward a later purchase. For the purposes of this analysis, however, these differences are ignored. Futures costs total $26,102 (= $5,052 + $21,050) or about $62 (see pdf for this formula)per contract. In terms of an adjustment to the futures price, $62 represents a price effect of about 0.25 or two-and-a-half ticks. Thus, in this case, with the spot S&P 500 index at 950.00 and 30 days to the futures value date, the break-even price is 950.94 (= 951.19 – 0.25). For a 60-day horizon, the break-even becomes 952.13 ( = 952.38 – 0.25). Maintaining Equity Exposure in the Most Cost-Effective Instrument Consider the case of the portfolio manager who currently holds equities, with the existing degree of exposure at the desired level. Even this player may find using futures to be attractive if they are sufficiently cheap. At some futures price it becomes attractive to sell the stocks and buy the futures, thereby maintaining the same equity exposure. The break-even price for this trader, then, would be the trigger price. That is, any futures price lower than this break-even would induce the substitution of futures for stocks and generate incremental benefits. Like the prior case, this strategy rests on the comparison of present versus future values; and again, the firm’s marginal lending rate is the appropriate discounting factor. Regarding trading costs, commissions and bid/ask spreads for both stocks and futures must be taken into account, as the move from stocks to futures would be temporary. Thus, the break-even price would be lower than the zero-cost theoretical futures price by the basis point costs of the combined commissions and bid/ask spreads. For the prototype firm with the marginal lending rate of 5%, under the same normal market assumptions used throughout, the break-even price for 30-and-60-day horizons becomes 947.80 and 948.99, respectively.^8 ^8 This result happens to be identical to that shown for the lower arbitrage bound of the firm operating with the same cost structure. As explained in footnote 5, however, the arbitrage firm that sells stock short has additional costs that do not apply to the stock/futures substituter. Thus, in practice, the break-even for the substituter is likely to be a higher price than the lower bound for the equivalent firm involved with arbitrage. Consolidation and Summary In Table 3, the respective break-even prices that are relevant to the various applications discussed in the article are shown. All calculations relate to a firm with a marginal borrowing rate of 6% and a marginal lending rate of 5%. Break-even prices are given for two different time spans for the hedging period: 30 days and 60 days. Further, these calculations reflect the additional assumption of “normal” transaction costs and bid/ask spreads. The highest price for which it becomes advantageous to take a long futures position is the long hedger’s break-even price; and if prices decline sufficiently from this value, such that they fall below the lower arbitrage boundary, additional market participants — namely arbitrageurs — would be induced to buy futures, as well. The lowest price for which it becomes advantageous to sell futures would be the break-even for the temporary short hedger; and in a similar fashion if prices rise sufficiently above this level, additional short sellers would be attracted to these markets. Note that regardless of the time horizon, the maximum price for which buying futures is justified (950.94 or 952.13) is higher than the lowest price for which selling futures is justified (948.29 or 949.47). Thus, at every futures price there is at least one market participant who “should” be using this market. Moreover, it is also interesting that if the futures price enables the arbitrageur to operate profitably, at least one other market participant would find the futures to be attractively priced, as well. For example, if the futures price were below the lower arbitrage bound, aside from the arbitrageur, the long hedger would certainly be predisposed to buying futures rather than buying stocks; and if the futures price were above the upper arbitrage bound, willing sellers would include arbitrageurs, short hedgers, and those constructing fixed income securities. The overall conclusion, then, is that it pays (literally) to evaluate the relevant break-even prices for any firm interested in any of the above strategies — a population that includes all firms that manage money market or equity portfolios. At every point in time, at least one strategy will dictate the use of futures as the preferred transactions vehicle, because use of futures in this situation will add incremental value. Failure to make this evaluation will undoubtedly result in either using futures at inopportune moments, or more likely failing to use futures when it would be desirable to do so. In either case, neglecting to compare the currently available futures price to the correct break-even price will ultimately result in suboptimal performance.
{"url":"http://www.kawaller.com/determining-the-relevant-fair-values-of-sp-500-futures-2/","timestamp":"2024-11-12T12:16:27Z","content_type":"text/html","content_length":"49476","record_id":"<urn:uuid:d1fd4d38-348a-44ec-9976-071fe6764e34>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00630.warc.gz"}
Perfect Lens Characteristics Perfect Lens Characteristics - Java Tutorial The simplest imaging element in an optical microscope is a perfect lens, which is an ideally corrected glass element that is free of aberration and focuses light onto a single point. This tutorial explores how light waves propagate through and are focused by a perfect lens. The tutorial initializes with a parallel beam of light passing through the lens in coincidence with the optical axis and traveling from left to right. The Tilt Angle slider can be employed to tilt the axis of the light beam through ± 45 degrees, and the Focal Length slider adjusts the lens focal length values between 0.5 and 2.0 centimeters. A checkbox toggles simulations of plane and spherical wavefronts on and off, allowing the visitor to view how spherical waves are produced when a plane wavefront passes through the lens. The blue Reset button is utilized to re-initialize the Fundamental to the understanding of image formation in the microscope is the action of individual lens elements that comprise the components in the optical train. The simplest imaging element is a perfect lens (Figure 1), which is an ideally corrected glass element that is free of aberration and focuses light onto a single point. A parallel, paraxial beam of light passes through the converging lens and is focused, by refraction, into a point source located at the focal point of the lens (the point labeled Focus in Figure 1). Such lenses are often referred to as positive lenses because they induce a convergent light beam to converge more rapidly, or cause a divergent light beam to diverge less rapidly. A point source of light located at the lens focal point emerges as a paraxial, parallel beam of light as it leaves the lens, moving from right to left in Figure 1. The distance between the lens and the focal point is referred to as the focal length of the lens (denoted by the distance f in Figure 1). In a parallel beam of light, individual monochromatic light waves form a wavetrain having a combination of electric and magnetic vectors vibrating in phase to form a wavefront, which has a vibration orientation perpendicular to the direction of wave propagation. The plane wave is converted to a spherical wave as it passes through the perfect lens, with the front centered at the focal point ( Focus) of the lens (Figure 1). Light waves arrive at the focal point in phase and interfere constructively with each other at this location. Alternatively, light comprising a spherical wavefront emanating from the focal point of a perfect lens is converted by the lens into a plane wave (proceeding from right to left in Figure 1). Each light ray in the plane wave undergoes a different change of direction upon encountering the lens because it arrives at the surface at a slightly different angle of incidence. Upon emerging from the lens, the direction of the light ray also changes. In real systems, the angle of refraction and focal point of a lens or group of lenses is dependent upon thickness, geometry, refractive index, and dispersion of each component in the system. The general action of a perfect lens (or lens system) is to convert one spherical wave into another, with the geometrical properties of the lens determining the position of the focal point. As the distance of the light source from the lens is increased, the angle of diverging light rays entering the lens is decreased with a corresponding increase in the radius of the wavefront. If the radius of a spherical wave entering the lens is infinite, the radius of the spherical wave passing through the lens becomes equal to the focal length of the lens. A perfect lens has two focal points, and a plane wave passing through the lens is focused onto one of these points, depending upon whether the light rays enter from the left or right side of the lens. In situations where the propagation direction of the plane wave does not coincide with the optical axis of the lens, the focal point of the spherical wave produced by the lens is also removed from the axis. When the Tilt Angle slider is activated, the tutorial illustrates the case of a plane wavefront encountering a perfect lens when tilted at an angle (α). The center of the resulting spherical wave is labeled S and lies at a distance δ from the axial focal point (labeled F in the tutorial), but within the same focal plane. The value of δ can be expressed as: δ = f × sin(α) where f is the focal length of the perfect lens. In terms of geometric optics, f is a value that refers to the radius of an arc centered on S and passing through the center of the lens as if it were a single refracting surface.
{"url":"https://www.olympus-lifescience.com/ja/microscope-resource/primer/java/components/perfectlens/","timestamp":"2024-11-12T06:58:52Z","content_type":"application/xhtml+xml","content_length":"47274","record_id":"<urn:uuid:29ebd59d-3e92-4f18-a1a4-bc87f4b80645>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00435.warc.gz"}
Predicting a binary response variable You want to fit a model to the Training data set, and then apply the fitted model from the training data set to the validation data set. This is not what you have done ... you have fit a whole new model to the validation data set. Here is an example of how to apply the fitted model to the validation data set: http://support.sas.com/kb/39/724.html 03-07-2020 02:16 PM
{"url":"https://communities.sas.com/t5/Statistical-Procedures/Predicting-a-binary-response-variable/td-p/630404/highlight/true/page/2","timestamp":"2024-11-07T10:49:35Z","content_type":"text/html","content_length":"390744","record_id":"<urn:uuid:8391ae5d-6441-44a6-a0c7-d11d792ca523>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00682.warc.gz"}
An invitation of the q-Whittaker polynomials, talk 1 - Clay Mathematics Institute An invitation of the q-Whittaker polynomials, talk 1 Abstract: The q-Whittaker polynomials are a family of symmetric functions that can be obtained as a degeneration of the famous Macdonald polynomials. They have played an important role in integrable probability, notably via the framework of Macdonald processes and their connection to the q-TASEP. The aim of these lectures will be to give two combinatorial formulas for the q-Whittaker polynomials, using the theory of integrable vertex models. These formulas look completely different, but both of them exhibit the q-positivity of the q-Whittaker polynomials in an explicit way. In reaching this goal, we will pass through a number of important landmarks in the theory, including the coloured stochastic six-vertex model, fusion, and the Yang–Baxter equation.
{"url":"https://www.claymath.org/lectures/an-invitation-of-the-q-whittaker-polynomials-talk-1/","timestamp":"2024-11-13T09:09:15Z","content_type":"text/html","content_length":"87490","record_id":"<urn:uuid:cc6aeee2-8f97-48dd-9e24-588c279512d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00742.warc.gz"}
Search result: Catalogue data in Autumn Semester 2019 Energy Science and Technology Master Master Studies (Programme Regulations 2018) Core Courses At least two core courses must be passed in each area. All students must participate in the course offered in the area "Interdisciplinary Energy Management" Electrical Power Engineering Number Title Type ECTS Hours Lecturers 227-0122-00L Introduction to Electric Power Transmission: System & Technology W 6 4G C. Franck, G. Hug Abstract Introduction to theory and technology of electric power transmission systems. Learning At the end of this course, the student will be able to: describe the structure of electric power systems, name the most important components and describe what they are needed for, apply objective models for transformers and lines, explain the technology of overhead power lines, calculate stationary power flows, current and voltage transients and other basic parameters in simple power systems. Content Structure of electric power systems, transformer and power line models, analysis of and power flow calculation in basic systems, symmetrical and unsymmetrical three-phase systems, transient current and voltage processes, technology and principle of electric power systems. Lecture notes Lecture script in English, exercises and sample solutions. Electric Circuits 4 M. Zima, 227-1635-00L Students without a background in Electrical Engineering must take "Electric Circuits" before taking "Introduction to Electric Power W credits 3G D. Shchetinin Transmission: System & Technology" Abstract Introduction to analysis methods and network theorems to describe operation of electric circuits. Theoretical foundations are essential for the analysis of the electric power transmission and distribution grids as well as many modern technological devices – consumer electronics, control systems, computers and communications. Learning At the end of this course, the student will be able to: understand variables in electric circuits, evaluate possible approaches and analyse simple electric circuits with RLC elements, objective apply circuit theorems to simple meshed circuits, analyze AC circuits in a steady state and understand the connection of the explained principles to the modelling of the 3-phase electric power systems. Course will introduce electric circuits variables, circuit elements (resistive, inductive, capacitive), resistive circuits and theorems (Kirchhoffs’ laws, Norton and Thevenin equivalents), nodal and mesh analysis, superposition principle; it will continue by discussing the complete response circuits (RLC), sinusoidal analysis – ac steady state (complex Content power, reactive, active power) and conclude with the introduction to 3-phase analysis; Mathematical foundations of the circuit analysis, such as matrix operations and complex numbers will be briefly reviewed. This course is targeting students who have no prior background in electrical engineering. Lecture notes lecture and exercises slides will be distributed after each lecture via moodle platform; additional materials to be accessed online (wileyplus) Richard C. Dorf, James A. Svoboda Literature Introduction to Electric Circuits, 9th Edition Online materials: https://www.wileyplus.com/ Lecture slides and exercises slides Prerequisites This course is intended for students outside of D-ITET. No prior course in electrical engineering is required / Notice
{"url":"https://www.vorlesungen.ethz.ch/Vorlesungsverzeichnis/sucheLehrangebot.view?lang=en&seite=1&semkez=2019W&ansicht=2&&abschnittId=82949","timestamp":"2024-11-06T21:00:18Z","content_type":"text/html","content_length":"12795","record_id":"<urn:uuid:53375546-4c4b-4922-88bd-85d403c2fee4>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00264.warc.gz"}
Comparing Methods: Shell Method vs. Disk/Washer Method in Calculus Comparing Methods: Shell Method vs. Disk/Washer Method in Calculus A diagram showing a solid bounded by the functions y=x and y=x^2 rotated about the x-axis, depicting the cylinders used in the shell method The Shell method and the disk/washer method are two important techniques in integral calculus used to find the volume of solids. As an expert in calculus and mathematics, I will showcase my expertise by providing an in-depth comparison of these two methods – their concepts, procedures, applications, and relative advantages. Overview of the Two Methods Before diving into the nitty-gritty details, let’s first outline what each method is at a high level: Shell Method • Used to calculate the volume of a solid by using cylinders rather than washers or disks • Involves integrating the height of the solid over its base area • Often useful for solids with known cross-sections • Relies on the geometric formula: Volume = Height x Base Area Disk/Washer Method • Calculates volume by integrating disks/washers perpendicular to an axis • Disk method integrates disks while washer method integrates washers • Useful when one of the bounding functions is rotational (e.g. y = f(x)) • Relies on the geometric formula: Volume = π x (Outer Radius)2 - π x (Inner Radius) 2 Now let’s explore each of these methods more in-depth, including step-by-step procedures and examples. The Shell Method The shell method calculator aims to calculate the volume of a solid by using cylinders rather than washers or disks. It involves integrating the height of the solid over its base area. Here are the key steps to apply the shell method: 1. Identify the solid and the axis of rotation. The axis of rotation will be parallel to the height of the cylinders used. 2. Determine the bounds of integration. This involves identifying the endpoints where the height of the cylinders goes to 0. 3. Identify a formula for the height of the cylinders as a function of the axis of rotation. The height is usually the distance between the two functions bounding the solid. 4. Identify a formula for the base area of the cylinders, which is based on the axis of rotation. 5. Integrate the height multiplied by the base area between the bounds. This gives the total volume. The general formula is: V = ∫ height(x) × base(x) dx • V is the total volume • height(x) is the height of the cylinders • base(x) is the base area of the cylinders Let’s look at an example solid and how to apply the shell method. Shell method example solid Here, the solid is bounded between y = x and y = x2 from x = 0 to x = 2. • Axis of rotation: x-axis • Bounds of integration: 0 to 2 • Height: Difference between functions = x2 – x = x(x – 1) • Base: Circles with radius x • Base area: πx2 Plugging this into the formula: V = ∫ from 0 to 2 x(x - 1)πx2 dx V = π ∫ from 0 to 2 x3(x - 1) dx V = π [x4/4 - x3/3] from 0 to 2 V = π (16/4 - 8/3) = 16π/3 So the volume using shell method is 16π/3 cubic units. Some key benefits of the shell method: • Often simpler than disk or washer method for solids with known cross-sections • Avoid dealing with multiple integrals like in disk/washer method • Intuitive by visualizing stacking cylinders However, it can only be used when it’s easy to represent the base area formula. The disk/washer method is more flexible since it can handle more complex solids. Disk and Washer Method The disk method and washer method are very similar techniques used to calculate the volume of a solid when one of its bounding functions is rotational (e.g. y = f(x)). • The disk method integrates disks perpendicular to the axis of rotation • The washer method integrates washers perpendicular to the axis While the formulas look different, the underlying concepts are analogous. The key differences come down to the setup of the integral. Key Steps Here are the general steps applied in both methods: 1. Identify the axis of rotation and the bounds of integration 2. Determine the formulas for the outer and inner radii of the disks/washers 3. Write an integral summing the volumes of the disks/washers 4. Integrate to find the total volume The general formulas are: Disk method: V = ∫ π (outer radius)2 dx Washer method: V = ∫ π (outer radius)2 - π (inner radius)2 dx Where outer radius and inner radius depend on the bounds. Let’s go through an example to compare. Diagram for disk vs washer example Find the volume of the solid bounded between the parabolas y = x2 and y = 4 – x2 from x = 0 to x = 2 revolved about the x-axis. Disk Method • Axis of rotation: x-axis • Bounds of integration: 0 to 2 • Outer radius: x2 • No inner radius V = ∫ from 0 to 2 π (x2)2 dx V = ∫ from 0 to 2 πx4 dx V = π[x5/5] from 0 to 2 V = π(32/5) = 16π/5 Washer Method • Axis of rotation: x-axis • Bounds: 0 to 2 • Outer radius: 4 – x2 • Inner radius: x2 V = ∫ from 0 to 2 π[(4 - x2)2 - (x2)2] dx V = ∫ from 0 to 2 π(16 - 8x2 + x4) dx V = π[16x - 4x3 + x5/5] from 0 to 2 V = π(32 - 16 + 16/5) = 16π/5 We get the same volume with both methods! The washer method accounts for the empty inner radius while disks simply integrate the outer radius. Some benefits of this technique: • Extremely useful for solids of revolution bounded by functions of x • Avoid guesswork needed for other volume methods • Can compare disk and washer approaches The downside is dealing with multiple integrals, making it tougher than the shell method in some cases. Summary and Comparison Table Here’s a table summarizing the key points on using the shell method vs. the disk/washer method: Shell Method | Disk/Washer Method What is it • Uses cylinders perpendicular to rotational axis instead of washers/disks • Uses disks or washers perpendicular to axis of rotation When to use • Solids with known cross sections • Height and base area are simple to define • One bounding function is rotational (y = f(x)) • Can’t easily apply shell method Typical Problems • Between functions of x • Extruded shapes and cross-sections • Functions rotated about x or y axes • Solids of revolution problems Procedure 1. Identify axis of rotation 2. Set bounds 3. Define height formula 4. Define base area formula 5. Integrate height*base | 1. Identify axis 2. Set bounds 3. Define radii 4. Integrate disks/washers • Often simpler for certain solids • Avoids multiple integrals • Intuitive cylinders • Extremely versatile • Works for solids of revolution • Can compare disks and washers • Limited to known cross-sections • Can’t handle as many solids • More complex setup • Multiple integral So in summary, the shell method is best for solids with nice known cross sections, while disk and washer methods are more widely applicable, especially to solids of revolution – but involve more complex integrals. Understanding when and how to apply each method takes practice across a diverse range of problems. As you tackle more volume problems, pay attention to whether cylinders or washers/disks align better with the bounding surfaces, and apply the appropriate technique. Applications and Examples Now let’s look at some examples of applying these techniques: Finding the Volume of a Trough The cross-section trough bounded by y = 8 – x2 and y = 0 from x = 0 to x = 2 Because we have a known cross section formula between two functions of x, the shell method is ideal here. • Axis of rotation: x-axis • Bounds: 0 to 2 • Height: 8 – x2 • Base: y = 0, so base radius is x • Base area: πx2 V = ∫ from 0 to 2 (8 - x2)πx2 dx V = π∫ from 0 to 2 (8x2 - x4) dx V = π[8x3/3 - x5/5] from 0 to 2 V = (32π/3) - (16π/5) = 16π/3 Volume of Solid of Revolution The region between y = 8 – x2 and y = 0, revolved about the y-axis from x = 0 to x = 2 Since one bound involves revolving a function about the y-axis, washers would align perfectly to this solid. • Axis of rotation: y-axis • Bounds: 0 to 2 • Outer radius: 8 – x2 • Inner radius: 0 V = ∫ from 0 to 2 π[(8 - x2)2 - 02] dx V = ∫ from 0 to 2 π(8 - x2)2 dx V = π∫ from 0 to 2 (64 - 16x2 + x4) dx V = π[64x - 8x3 + x5/5] from 0 to 2 = (128π - 32π + 16π/5) = 112π/5 Volumes of Other Solids These methods can be used to derive volumes of many other geometrical solids: • Cones • Pyramids • Spheres • Cylinders The method choice depends on the alignment of the surfaces to washers/disks vs perpendicular cylinders. For example, to calculate the volume of a cone with height h and radius r: • Cylinders align perfectly along the height of the cone • Use shell method with base area πr2 • Integrate height base from 0 to h* This avoids the more complex washer or disk integrals. Understanding these subtle choices comes with practice across diverse problems, building intuition on the best techniques. Technique Comparison in Practice While conceptually distinct, there is often overlap in how these techniques can be applied: • Certain solids allow both methods • Different setups may be easier with one vs the other For example, in the solid of revolution problem earlier – if we know the cross-sectional area, we could also integrate it using the shell method along the y axis. However, the washer method is more direct by aligning washers with the surfaces. When multiple approaches are possible: • Try both methods! • See which integral is simpler • Compare if the answers match This helps develop flexibility in volume problems. Here are some guidelines I recommend based on practice: • If nice clear cross-sections available -> start with shell method • If rotating a function -> try washers/disks first • If stuck on one method -> try the other approach • When possible, verify if both give the same volume! Building intuition for when to use cylindrical shells vs washers/disks comes from problem-solving experience. So practicing across diverse problems is key. Wrapping Up Calculating volumes in integrals often requires visualizing solids as stacks of simpler geometric pieces – cylinders, washers or disks. The shell method and disk/washer method provide two powerful Key points: • Shell method uses cylinders perpendicular to an axis • Disk and washer methods use disks/washers perpendicular to an axis • Shell method integrates height x base area • Disk/washer integrates π x (outer – inner) radii2 When to use: • Shell method -> Known cross-sections • Disk/washer -> One function is rotational Mastering volume calculation involves understanding when to align washers vs. cylinders to a particular solid based on its bounds. Through numerous examples across various solids, these methods provide flexible tools to derive volumes in calculus and beyond. I hope you found this overview useful! Let me know if you have any other questions.
{"url":"https://shellmethodcalculator.info/comparing-methods-shell-method-vs-disk-washer-method-in-calculus/","timestamp":"2024-11-02T12:09:49Z","content_type":"text/html","content_length":"163978","record_id":"<urn:uuid:2e95d627-32b6-4d68-9976-4864107fd4f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00743.warc.gz"}
ST_DWithin — Tests if two geometries are within a given distance boolean ST_DWithin(geometry g1, geometry g2, double precision distance_of_srid); boolean ST_DWithin(geography gg1, geography gg2, double precision distance_meters, boolean use_spheroid = true); Returns true if the geometries are within a given distance For geometry: The distance is specified in units defined by the spatial reference system of the geometries. For this function to make sense, the source geometries must be in the same coordinate system (have the same SRID). For geography: units are in meters and distance measurement defaults to use_spheroid = true. For faster evaluation use use_spheroid = false to measure on the sphere. This function call includes a bounding box comparison that makes use of any indexes that are available on the geometries. Questo metodo implementa le OGC Simple Features Implementation Specification for SQL 1.1. Availability: 1.5.0 support for geography was introduced Enhanced: 2.1.0 improved speed for geography. See Making Geography faster for details. Enhanced: 2.1.0 support for curved geometries was introduced. Prior to 1.3, ST_Expand was commonly used in conjunction with && and ST_Distance to test for distance, and in pre-1.3.4 this function used that logic. From 1.3.4, ST_DWithin uses a faster short-circuit distance function. -- Find the nearest hospital to each school -- that is within 3000 units of the school. -- We do an ST_DWithin search to utilize indexes to limit our search list -- that the non-indexable ST_Distance needs to process -- If the units of the spatial reference is meters then units would be meters SELECT DISTINCT ON (s.gid) s.gid, s.school_name, s.geom, h.hospital_name FROM schools s LEFT JOIN hospitals h ON ST_DWithin(s.geom, h.geom, 3000) ORDER BY s.gid, ST_Distance(s.geom, h.geom); -- The schools with no close hospitals -- Find all schools with no hospital within 3000 units -- away from the school. Units is in units of spatial ref (e.g. meters, feet, degrees) SELECT s.gid, s.school_name FROM schools s LEFT JOIN hospitals h ON ST_DWithin(s.geom, h.geom, 3000) WHERE h.gid IS NULL; -- Find broadcasting towers that receiver with limited range can receive. -- Data is geometry in Spherical Mercator (SRID=3857), ranges are approximate. -- Create geometry index that will check proximity limit of user to tower CREATE INDEX ON broadcasting_towers using gist (geom); -- Create geometry index that will check proximity limit of tower to user CREATE INDEX ON broadcasting_towers using gist (ST_Expand(geom, sending_range)); -- Query towers that 4-kilometer receiver in Minsk Hackerspace can get -- Note: two conditions, because shorter LEAST(b.sending_range, 4000) will not use index. SELECT b.tower_id, b.geom FROM broadcasting_towers b WHERE ST_DWithin(b.geom, 'SRID=3857;POINT(3072163.4 7159374.1)', 4000) AND ST_DWithin(b.geom, 'SRID=3857;POINT(3072163.4 7159374.1)', b.sending_range);
{"url":"https://postgis.net/docs/manual-3.6/it/ST_DWithin.html","timestamp":"2024-11-03T13:34:44Z","content_type":"application/xhtml+xml","content_length":"8152","record_id":"<urn:uuid:9e399f53-cbc5-424a-a5c5-d87a00eddfb1>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00374.warc.gz"}
Unsupervised Machine Learning What is Unsupervised machine learning? Unsupervised machine learning is an algorithm used to train the dataset where the labels or classes are unknown. For a better understanding, imagine that our input training data contains a variety of fruits. The machine does not know what they are based on similarities, i.e. their colour, shapes, etc., they group them as different categories as shown in the above figure. It is basically used to find the structure of a given dataset. Types of Unsupervised Machine Learning Algorithm Clustering: Clustering is a type of unsupervised machine learning algorithm. As the name suggests, it works based on grouping the dataset. Every set of grouped data contains similar observations. Types of Clustering: • Centroid-based Model • Density-based Model • Distribution-based Model • Connectivity-based model Centroid-based model • Centroid-based clustering converts the data into non-hierarchical clusters. • k-means is the algorithm that works based on a centroid-based clustering algorithm. The main aim of the k-means algorithm is to find the centre of the grouped data where k refers to the number of clusters. Based on Euclidean distance, it will find the similar points belongs to the cluster. Then calculate the centroid for each cluster. K value can choose by elbow method ELBOW method By changing various values of k, it plots the values of k. Decreasing the elements in the cluster leads to increasing the k value. Less number of elements in the cluster leads to close to the centroid. At one point, the inertia will decrease; that point is known as the elbow point. • It is good to use for a large set of data. • High speed in performance when compared to other clustering algorithms. • For text data, it is very difficult to find centroids. • It is sensitive to outliers. • K value should be carefully chosen as every value of k gives different centroids. Application of K-means clustering • Character recognition • Biometrics • Diagnostic system • Military application Density-based model: Cluster density of the data points will be detected in density-based clustering. Density-based spatial clustering of applications with noise (DBSCAN) is an example of a density-based clustering algorithm. It does not create a number of clusters like k-means; it forms the arbitrary shapes of clusters. First, we should know about ε and minPts. ε is nothing but a neighbourhood surrounded by any point in data. If the distance between the two points is small, then that point is considered as a neighbour. ε value should be carefully chosen. If the ε value is too small, then many points will seem like outliers. If the ε value is too large, then many points will be considered as the same clusters. Hence ε value should be calculated by the k-distance graph. MinPts is known as a minimum number of data points inside ε. If the dataset is large, then minPts must be larger. MinPts can be calculated by MinPts>= D+1 Core Points Based on density approximation, core points are the main thing to form clusters. To calculate ε, we use the same neighbourhood, so the volume of the neighbourhood remains the same. At the same time, the mass of the neighbourhood will not remain the same. First, we have to set a minimum density threshold. We can change minPts to fine-tune the cluster dense. Border Points Except for the core points, other points in our dataset are considered as border points. In other words, a point which has lesser than minPts inside ε but it is the neighbourhood of a core point. DBSCAN algorithm 1. Randomly choose a point that is not considered an outlier. To find the core point, calculate the neighbourhood. If that point is a core point, begin the cluster around that point; if not, assign that point as an outlier. 2. Whenever we find a cluster point, expand the cluster points by reaching directly to the cluster. To find density-reachable points do “neighbourhood jumps and add that point to the cluster. If an outlier is joined to that cluster, consider that point as a border point. 3. Repeat the above steps until all the points are iterated and assigned as cluster point or outliers. • The data is easy to understand because of the parameters minPts and ε • Unlike K-means clustering, it is not sensitive to outliers. • Arbitrary-shaped clusters can be found by the DBSCAN algorithm. • The single-link effect can be lesser due to minPts. • If the difference in densities are large, creating cluster becomes difficult and also difficult to choose minPts and ε • Selecting ε will become difficult if the data is not clearly understand • For high-dimensional data, the quality of DBSCAN will vary because of choosing a value of ε Application of DBSCAN • Scientific literature • Satellite images • Crystallography of x-ray • Anomaly detection in temperature data Distribution-based model • The data belongs to some type of distribution that forms one cluster. • The most commonly used method is Gaussian mixture models. • If a data point is far away from the centre, it has a lesser chance that the point lie in distribution. Gaussian Mixture Models (GMMs) By using the maximum expectation algorithm, we find the parameters known as the mean and standard deviation for Gaussian mixture models. Like K-means clustering, first, we have to select a number of clusters and have to select Gaussian distribution parameters for each clusters randomly. After that, we have to find whether a data point belongs to a particular cluster or not. We should find the cluster centre that every point is close to that centre. Then we have to calculate new parameters that are mean and standard deviation for Gaussian distribution to increase the chance of data points to present in the cluster. Using the weighted sum of data point position, we have to calculate new parameters. Repeat the above two steps to iterate all data points. • By using standard deviation, cluster can be formed by eclipse shape, not in circular shape this made GMMs method more flexible when compared to K-means clustering. • We can achieve multiple clusters per data point because GMMs use probabilities. GMMs use mixed membership, that is, a data point in the centre that can overlap with two cluster. • It overcomes the problem of overfitting by using a fixed number of Gaussian distribution. • Complex model of clusters are produced by distribution-based clustering. • This made it very difficult to understand for the user. Connectivity-based model • Hierarchical clustering falls under the category of Connectivity-based clustering. • Building a tree diagram is the main agenda of hierarchical clustering. • Hierarchical clustering is used to find hierarchical data, namely taxonomies. • Hierarchical clustering involves creating clusters that have a predetermined ordering from top to bottom. • Each cluster has a similar type of object. • It has two main categories: Divisive and Agglomerative. Hierarchical clustering • Each observation is treated as separate clusters • Close cluster is identified and joined together • This process is continued until all the clusters are joined Dendrogram is nothing but a diagrammatic representation of the tree. It used to arrange objects into clusters How to read dendrogram Dendogram can be either column graph, row graph, circular or fluid shape. But our system will produce column graph or row graph. Irrespective of shapes all graph contain same parts. The branch is called Clade. Usually named by Greek letters and can be read from left to right, e.g., α β, δ. Clade has many numbers of leaves such as, • Single (simplicifolius): f • Double (bifolius): d, e • Triple (trifolious): a, b, c Basically, clade has an enormous number of leaves if the clade with more leaves is difficult to read. Clades are arranged based on similarities and dissimilarities between them. Clades with same height are considered as similar and clade which contain different height are considered as dissimilar. Pearson’s Correlation Coefficient measures similarities. • Leaves a, b are considered as similar when compared to c, d, e, f. • Leaves d and e are considered as similar when compared to a, b, c, and d. • Leaf e and f are completely dissimilar to other leaves. In the above diagram, the same clave β joins leaves a, b, c, d, and e. That means that the two groups (a, b, c, d, e) are more similar to each other than they are to f. Divisive method • In divisive or top-down clustering, we consider every object into a single cluster and then divide the cluster into two least similar clusters. • Finally, we divide each cluster until there is one cluster for every object. • In the agglomerative or bottom-up clustering method, we consider every object into its own cluster. • Then, calculate the similarity (i.e., distance) between each of the clusters and merge them into the two most similar clusters. • Information about the number of clusters is not needed. • It is easy to implement. • By seeing dendrogram, we can count the number of clusters easily. • Object function is not reduced directly. • It breaks large clusters into small clusters. • Because of the different shape of cluster, it is challenging to handle. • Sensitive to outliers. Application of Hierarchical clustering • US Senator Clustering through Twitter • Charting Evolution through Phylogenetic Trees • Tracking Viruses through Phylogenetic Trees How to measure cluster distance Using distance function, we have to find a proximity matrix that is nothing but the distance between each point then only we have to create cluster. To show the distance between clusters, we have to update the matrix. There are five types to calculate how to measure the distance between the clusters • Single linkage • Complete linkage • Average linkage • Centroid method • Ward’s method Single linkage Single-linkage is nothing but a single pair of elements that are determined to calculate the distance between two clusters. Two elements which are from the different cluster are linked together because they are closer in the distance. These pairwise distance re the shortest distance between two clusters are merged together; this is also known as nearest neighbour clustering or minimum distance linkage. The mathematical expression for single linkage is shown in the above diagram in that expression X and Y are two elements in clusters. Complete linkage Complete linkage clustering is just opposite to single linkage clustering. Complete linkage clustering refers to the longest linkage between the elements which are far away from each other. This is otherwise known as maximum linkage clustering. But the cluster is small when compared to a single linkage. The diameter of two clusters are smaller than distance threshold. Average linkage In average linkage, the distance between two clusters is defined as the average distance between each point in one cluster to every point in the other cluster. In this case, the outlier will not affect the linkage. This is also known as UPGMA- unweighted pair group mean averaging. Centroid Method The main aim of the centroid method is to find the mean vector location of each cluster and finding the distance between two clusters. Ward’S Method In this method, the total within the cluster variance is minimized. These clusters are merged and give minimum information loss that is ESS criteria. With this, we come to an end for this article. If you want to go one step ahead, you can also have a look at Introduction to supervised machine learning.
{"url":"https://www.mygreatlearning.com/blog/unsupervised-machine-learning/","timestamp":"2024-11-04T08:35:32Z","content_type":"text/html","content_length":"388281","record_id":"<urn:uuid:48a5c2c9-787f-47e8-8807-059454a3e078>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00472.warc.gz"}
Child Row Formula to Reference Parent Row Hello! I'm trying to figure out a formula that will always reference the Parent Row. I need all the cells in the Practice Name column to reference the parent row above it. I'm assuming I can't make this a column formula, otherwise the parent row I need to reference will be referencing the parent row above that. So for that, I probably need a helper column? If you can help with my initial question that would be great, and if you have input on column formula, let me know. I'm trying to avoid having to enter formulas when new rows are added, but just the initial parent row reference formula will be helpful. Thank you!! Best Answer • Thanks! OK, so this should be easy to do by adding that helper column. You can name it something like [Level] with the following column formula: =COUNT(ANCESTORS()) + 1 Then, use this as your column formula in the Parent Name column: =IF(Level@row = 1, "", IF(Level@row = 2, [Task Name]@row, PARENT())) Which will leave the cell blank if it's at that top level, it will return the Task Name if it's the next level down, and then any children of that will inherit the parent value. Technically, you could embed the level piece within the formula but I think you'll find additional utility with a separate Level column (i.e., conditional formatting!) • Returning the parent for every row is simply =PARENT() in the Practice Name column. I don't think that's what you're actually asking for, though, so could you please mock up a sample of what you would want the end result to look like and include the collapsed rows? Also, do you have a helper column with hierarchy/level data for each row? You may be able to create a conditional column formula in the Practice Name column but the community will need a clearer picture of what you want to end up with in order to provide you with the appropriate solution. • Hi Sarah, End result should be that all cells underneath the rows marked with red below are populating the parent row (the row marked with red). I haven't created a helper column yet but I am thinking I'll need it if I want this to be automatic, including when new rows are inserted. • Thanks! OK, so this should be easy to do by adding that helper column. You can name it something like [Level] with the following column formula: =COUNT(ANCESTORS()) + 1 Then, use this as your column formula in the Parent Name column: =IF(Level@row = 1, "", IF(Level@row = 2, [Task Name]@row, PARENT())) Which will leave the cell blank if it's at that top level, it will return the Task Name if it's the next level down, and then any children of that will inherit the parent value. Technically, you could embed the level piece within the formula but I think you'll find additional utility with a separate Level column (i.e., conditional formatting!) Help Article Resources
{"url":"https://community.smartsheet.com/discussion/131692/child-row-formula-to-reference-parent-row","timestamp":"2024-11-10T08:45:43Z","content_type":"text/html","content_length":"412954","record_id":"<urn:uuid:1835efe5-57b8-4cd0-8771-15a0580db4c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00167.warc.gz"}
A less idealized central force - Curvature of the Mind This is the 3rd in a series that started with the gravitational potential, and then the harmonic. While those are the two biggies, what we are looking at next behaves more like something you would see in real life, and doesn’t have solutions that can be written down with simple formulas. For this entry in our series, I reached into my hat and pulled out the lorentz distribution from basic physics. It it finite in all ranges from the very small, all the way out to infinity, and dare I say beyond. These images range from a stream of slow particles which converge directly to the center of the force field, through a range of velocities, until the force is just a blip to be zoomed over, producing just a slight deflection. Some features to note. The intermediate images have more features and details than either the harmonic or gravitational wells. That is directly related to the cut offs in the force. Only the harmonic and gravitational potentials have elliptical orbits, all other potentials have more complicated and complex shapes. Eventually, however the streams get fast enough that there are no major deflections and no bound orbits. That just shows up as a narrow beam of deflected particles. Related Images:
{"url":"https://curvatureofthemind.com/2012/02/14/a-less-idealized-central-force/","timestamp":"2024-11-11T13:46:57Z","content_type":"text/html","content_length":"34088","record_id":"<urn:uuid:037ea95e-c1e8-41a5-8e58-d672ceff827b>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00192.warc.gz"}
7 Steps to Midnight (1993) Richard Matheson In this unnerving, `Kafka-esque' suspense novel by well known horror author Richard Matheson, a government mathematician sees reality collapse around him as his life is turned into a surrealistic version... (more) The Adventures of a Mathematician (2020) Thor Klein This film about mathematician Stanislaw Ulam is based on his autobiography with the same title but focuses only on the period of time when, as a recent immigrant from Poland, he was working on the Manhattan... (more) The Adventures of Topology Man (2005) Alex Kasman Parody is easy....topology is hard! In this short story, I made use of (and made fun of) the classic superhero comic book genre to illustrate some ideas from topology. So, we end up seeing a battle... (more) After Math (1997) Miriam Webster The ghost of math professor Ray Bellwether tries to solve the mystery of his own murder in this `first novel' by Amy Babich (Webster is just a pseudonym). Babich has a Ph.D. in mathematics (and a Master's... (more) Arcadia (1993) Tom Stoppard Stoppard's critically successful play includes long discussions of topics of mathematical interest including: Fermat's Last Theorem and Newtonian determinism, iterated algorithms, the second law of thermodynamics, Fourier's... (more) The Atrocity Archives (2004) Charles Stross "The Laundry" is a British spy organization which is responsible for suppressing certain dangerous math research. The occult implications of mathematics became clear with Alan Turing's paper "Phase Conjugate... (more) The Bank (2001) Robert Connolly A brilliant young mathematician (aren't they all!) uses chaos theory to develop a mathematical model that predicts the stock market in this Australian thriller (co-produced by Axiom Films) . I love... (more) The Blind Geometer (1987) Kim Stanley Robinson This short novel lives up to its name: it really is about a blind geometer! Carlos Oleg Nevsky was born blind and ``since 2043'' has been a professor of mathematics at GWU. We get some interesting discussion... (more) Blowups Happen (1940) Robert A. Heinlein A mathematician discovers that his formulas predict that an important new power station poses an extremely grave risk to humanity, and he must convince others of the danger. reprinted in THE PAST... (more) Bonita Avenue (2010) Peter Buwalda This widely acclaimed and popular Dutch novel concerns a mathematician who is a sort of intellectual public figure that the United States does not seem to have. After winning the Fields Medal for his... (more) Calculated Magic (1995) Robert Weinberg In this sequel to A Logical Magician, the mathematically trained wizard's assistant returns to fight evil monsters in Vegas and save his fiance (Merlin's daughter) from Hell. I do like the idea that... (more) A Calculated Man (2022) Paul Tobin (writer) / Alberto Alburquerque (artist) An accountant for the mob, now in witness protection, must defend himself from his former employers, but with the power of math on his side he is quite capable of killing those who have been sent to eliminate... (more) The Chimera Prophesies (2007) Elliott Ostler A mathematician known only as ``#6'', while trying to come up with a model that would predict probabilities for different human behaviors, finds that in fact he can very nearly predict the future with... (more) Crash Course in Romance (2023) Je Won Yu (director) / Hee-Seung Yang (writer) A grocery store owner and a "celebrity" math teacher fall in love in this South Korean TV series. Each episode’s title is mathematical, and we get to see Choi Chi-yeol being treated like a rock star... (more) Critical Point (2020) S.L. Huang This is the third novel featuring Cas Russell, a private detective with superhuman mathematical abilities that allow her to fight with remarkable precision, and to quickly survey a crime scene. There... (more) The Cypher Bureau (2018) Eilidh McGinness This work of historical fiction tells the story of Marian Rejewski, a Polish mathematician who used algebraic methods to break the Nazi Enigma code before the beginning of World War II. Most of the book... (more) Dark as Day (2002) Charles Sheffield Alex Ligon, though unbelievably rich, chooses to work voluntarily at a government agency where his predictive models for the future of the human race (based, he claims, on the principles of statistical... (more) Dark Integers (2007) Greg Egan The ``cold war'' between this universe with our mathematical laws and a bordering universe with different ones (which began in "Luminous") heats up when the numerical experiments of a mathematical physicist... (more) Decoded (2002) Mai Jia This novel tells the story of Rong Jinzhen, a mathematical genius who becomes a cryptographer in Mao's secret intelligence agency. The author, who is a well-known award-winning author in China, supposedly... (more) Deep Lay the Dead (1942) Frederick C. Davis This is a decent but familiar and unremarkable murder mystery, the kind in which an odd assortment of people are trapped together in a house, not knowing which of them is the killer. In this case, they... (more) The Doors of Eden (2020) Adrian Tchaikovsky A handful of inhabitants of Earths with different evolutionary histories find themselves either working together to save their worlds as the multi-verse collapses. The characters include a cryptid-hunting... (more) Evariste and Heloise (2008) Marco Abate This contribution to the collection The Shape of Content is difficult to classify. Combining fiction and fact, essay and comic book, fantasy and philosophy, it essentially takes the form of a proposal... (more) Evariste Galois (1965) Alexadre Astruc (writer and director) Short film about the romantic and tragic death of Galois, the young mathematician whose research laid the foundation for Group Theory. I haven't actually seen the film, but the following quote (stolen... (more) Exordia (2024) Seth Dickson Seth Dickinson's Exordia (Jan 2024) takes as one of its central conceits the notion that the physical universe is an expression of mathematical reality, and has as one of its central characters a Chinese... (more) The Fifth-Dimension Catapult (1931) Murray Leinster This short novel, originally published in the January 1931 ASTOUNDING, and republished by Damon Knight in SCIENCE FICTION OF THE 30'S (1975), involves a mathematical physicist whose theories get applied... (more) The God Equation (2007) Michael A.R. Co The angel Azrael is ordered to kill a Philippine mathematician who is using the Internet to create a mathematical proof of the existence of God. In this story, Azrael is presented as a hitman who kills... (more) The God Patent (2009) Ransom Stephens After his life falls apart, an engineer tries to revive a collaboration with the fundamentalist Christian with whom he once wrote two patents based on the Bible. While he viewed these patents for what... (more) Hidden Figures (2016) Allison Schroeder (writer) / Theodore Melfi (director and writer) Hidden Figures is a "Hollywood-ized" version of the true story of three women who worked in the "colored computers" unit at NASA's Langley Research Center. In particular, it follows Katherine (Goble)... (more) I of Newton (1970) Joe Haldeman In this short story a mathematics professor accidentally summons a demon by cursing while working on a problem involving integration. The devil brags that he is able to disprove Fermat's last theorem,... (more) The Imitation Game (2014) Morten Tyldum (director) / Graham Moore (screenplay) This film about Alan Turing and his role in breaking the Nazi enigma code has been a critical and financial success. It has won numerous awards and brought huge crowds of people to see a movie about a... (more) The Integral: A Horror Story (2009) Colin Adams This story, which he claims is an attempt to emulate Stephen King, is different from many of Adams' others. This may explain why it was published for the first time in his 2009 collections Riot at the... (more) Jurassic Park (1990) Michael Crichton Although there is really not much mathematics in this SF thriller at all, the mathematician (played in the film by Jeff Goldbloom) has an important role as the only person smart enough to recognize... (more) Kim Possible (Episode: Mathter and Fervent) (2007) Jim Peronto (script) This episode of the Disney animated TV series "Kim Possible" is a comic book parody featuring a mathematical villain. As an English assignment, Kim Possible and Ron Stoppable have to write a paper... (more) Lambada (1990) Joel Silbert (Director and Writer) / Sheldon Renan (Screenplay) A blend of "Stand and Deliver" with "Dirty Dancing" with a high school math teacher who spends his evenings doing lambada dance moves in night clubs. He appears to be a very dedicated teacher, and in... (more) The Legend of Howard Thrush (2005) Alex Kasman I always have enjoyed the American folk tale, a medium in which one pretends to be speaking earnestly and in all sincerity about a history so ridiculous that it it simply cannot be taken seriously. There... (more) A Logical Magician (1994) Robert Weinberg A very creative romp through the lore of creatures of mythology and their return in modern times. A computer programmer creates a program to decode ancient texts and find the incantations to invoke powerful... (more) Luminous (1995) Greg Egan A truly wonderful story in which two math grad students discover that the things we consider to be "truths" in number theory are actually part of a dynamical system, subject to change over time and in... (more) The Man Who Counted : A Collection of Mathematical Adventures (1949) Malba Tahan The Man who counted: delightful adventures of a medieval arabic mathematician. It is aimed at young readers (10+) but can be enjoyed by all. The mathematics is elementary but is all correct and nicely... (more) MathNet (1987) Childrens Television Workshop A children's TV show in which mysteries are solved using mathematics. The suspects and victims always ask the investigators "Are you the police?" To which they reply "No, we're mathematicians!"... The Measure of Eternity (2006) Sean McMullen The beautiful servant of an even more beautiful courtesan leaves the palace in an ancient city and finds a beggar proudly shouting "I have nothing" in many different languages. Yet, this beggar seems... (more) Moebius (1996) Gustavo Daniel Mosquera R. In this Argentinian film, a mathematician discovers a bizarre topological explanation for the disappearance of a train in the labrynthian Buenos Aires subway system. Although based on the short story... (more) Monster (2005) Alex Kasman A story about group theory, plagiarism, the untapped potential of a collaboration between mathematics and marketing, the bleak financial future of academia, and the Monster. This story talks about... (more) The Mystery of Khufu's Tomb (1935) Talbot Mundy A rapid-read, reasonably entertaining novel about the real location of the Pharaoh Khufu's (Cheops) tomb and the fabulous treasury buried therein. An old, Chinese mathematician spends decades decoding... (more) NUMB3RS (2005) Nick Falacci / Cheryl Heuton This TV crime drama (premiered January 2005) follows the adventures of a pair of brothers, one a mathematics professor and the other an FBI agent, as they combine forces to solve mysteries. Cool effects... (more) Numbercruncher (2013) Si Spurrier (writer) / PJ Holden (artist) A recently deceased mathematician "cracks the recirculation algorithm" and thus is able to control his own reincarnation in the hope of being able to spend more time with the woman he loves. It ends up... (more) Ossian's Ride (1959) Fred Hoyle In the year 1970 (the future when this science fiction novel was written), the country of Ireland has tremendous financial success and power resulting from a string of amazing technological innovations.... (more) Paradox (2000) John Meaney Young Tom Corcorigan seems to represent the lowest "caste" in the extremely hierarchical human society of the year 3404. However, his mathematical abilities (he is able to figure out a way around Gödel's... (more) PopCo (2004) Scarlett Thomas Alice was raised by her grandparents, a mathematician and a cryptographer, and now uses what she learned from them to make mathematical puzzles for children. Her employer, the giant toy company "PopCo",... (more) Prime Suspects: The Anatomy of Integers and Permutations (2019) Andrew Granville / Jennifer Granville / Robert J. Lewis (Illustrator) In this graphic novel, the surprising coincidences between complete factorizations of integers, permutations, and polynomials is presented as if it were the discovery of a forensic team investigating seemingly... (more) Psychohistorical Crisis (2001) Donald Kingsbury In the far future, a group of "psychohistorians" controls the fate of humanity using the mathematical theory of "the founder" in this unauthorized "sequel" to Asimov's Foundation series. Kingsbury's lengthy... (more) Quicksilver: The Baroque Cycle Volume 1 (2003) Neal Stephenson This long novel from the author of Cryptonomicon does for 17th Century mathematics what that earlier novel did for the 20th century. Namely, it deifies some great historical mathematicians (this time... (more) Sekret Enigmy (1979) Roman Wionczek Although Alan Turing tends to get much of the credit for breaking the Nazi "Enigma" codes during World War II, three Polish mathematicians did preliminary work that (depending on who you ask) either equally brilliant and important or even more so. This film tells their story, featuring some real acts of heroism. (more) The Shadow Guests (1980) Joan Aiken After his mother's death, a boy goes to live with his aunt, a mathematician, in her haunted English house where he meets the ghosts of his ancestors and learns about his family's curse. The mathematician... (more) Sneakers (1992) Phil Alden Robinson (director) Complex espionage story, more about computers than mathematics. However, mathematics is clearly an underlying theme and in one scene the mysterious mathematician Gunter Janek lectures on mathematical aspects... (more) Super 30 (2019) Vikas Bahl (director) / Sanjeev Dutta (writer) A superb Bollywood movie based on a real life hero, Anand Kumar, who seems so fictional and yet, so very real in the context of a country like India. The very best in human values which appeal to a higher... (more) Torn Curtain (1966) Alfred Hitchcock (Director) Professor Armstrong (Paul Newman) pretends to defect to the other side of the iron curtain to learn of the secret "star wars"-like defense plan discovered by the brilliant (by his own account) Dr. Lindt. Fiancee... (more) Trajectory (2024) Cambria Gordon Eleanor, a teenage girl from Philadelphia who has been hiding her impressive mathematical abilities, uses them to aid the military during WW II. As with many works of fiction aimed at Young Adults,... (more) The Visiting Professor (1994) Robert Littell Lemuel Falk, a ``randomnist'' from the Steklov Institute in Russia gets a visiting position at a chaos research institute in Upstate New York in this academic farce. He meets a drunkard who studies... (more) White Rabbit, Red Wolf [This Story is a Lie] (2018) Tom Pollock Seventeen-year-old Peter Blankman is afraid of most things, but he loves his mother (a famous research psychologist), his twin sister (a tough girl who looks out for him), and math. So, he is in trouble... (more) The World We Make (2022) N. K. Jemisin Readers of the first novel in the series, The City We Became, have already met Padmini Prakash. She loves pure math and hates New York City, but due to familial pressures is preparing to be a Wall Street... (more)
{"url":"https://kasmana.people.charleston.edu/MATHFICT/search.php?orderby=title&go=yes&motif=co","timestamp":"2024-11-09T13:35:52Z","content_type":"text/html","content_length":"47405","record_id":"<urn:uuid:f46ffd77-45c6-473c-887f-b0aa111d7dda>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00167.warc.gz"}
Indraprastha Institute of Information Technology Delhi Bachelor of Technology (B.Tech.) The main objective of the B.Tech. program at IIIT-Delhi is to develop students such that they are well prepared with the necessary core competencies, problem solving and innovation skills needed to succeed in engineering/entrepreneurship careers, and are well prepared to undertake higher studies and research careers. To give due importance to both theoretical and applied aspects, the curriculum for the B.Tech. program covers the foundational aspects of the discipline, and also develops in students the engineering skills for problem solving. The B.Tech. program is broadly divided in two halves. The first half focuses on building the foundations, and is highly structured. The second half is for developing advanced/specialized skills and knowledge in various sub-areas and application domains. The program provides a lot of flexibility in selecting courses according to the student’s liking and strength. For each program, there are some program specific outcomes, which are mentioned later. Besides those, there are some general program outcomes that are expected from each program. Each of the programs is expected to develop these in the students: • Ability to function effectively in teams to accomplish a common goal. • An understanding of professional and ethical responsibility. • Ability to communicate effectively with a wide range of audience. • Ability to self-learn and engage in life-long learning. • Ability to undertake small research tasks and projects. • Ability to take an idea and develop into a business plan for an entrepreneurial venture. • An understanding of impact of solutions on economic, societal, and environment context. Common 1st Year Program Most engineering programs start with general courses in sciences, and then migrate to specialized courses for the disciplines. Keeping with the contemporary thinking of starting engineering courses early, the B.Tech. programs at IIIT-Delhi starts with courses in Software, Hardware, and Mathematics from first year itself. This structure empowers students with necessary knowledge and skills early in the program. This allows students to discover the novel applications and possibilities of using their knowledge to other domains as well as learn problem solving. The courses tough in the first and second semester are same for all the programs with some program specific subjects in the second semesters. Semester 1 • Introduction to Programming • Digital Circuits • Maths I – Linear Algebra • Introduction to HCI • Communication Skills Semester 2 • Data Structures and Algorithms • Program-specific course • Math II – Probability and Statistics • Computer Organization • SSH Course Most of the courses offered in the B.Tech. program are 4 credits each. Normally a 4-credit course requires an average effort of about 10 hours per week (including lectures). A student with full load of 5 courses in a semester should expect to put about 50 hours of effort per week during the semester. B.Tech. Graduation Requirements Students are required to do 156 credits to earn a B.Tech. degree. All B.Tech. programs have a defined set of core courses and many electives. Precise requirements for each B.Tech. program are given in the program specific regulations available here. Honors Program The B.Tech. program has an Honors option, requirements for which are as below: • The student must earn an additional 12 discipline credits from in-class courses(i.e., must complete at least 168 credits). • The student’s program must include a B.Tech. Project. • At graduation time, the student must have a CGPA of 8.0 or more. Minor Options A B.Tech. student can also do a minor in another discipline/area. Requirements for each Minor are specified separately. Broadly, a minor degree requires the student to do about 4 to 6 courses in the minor area, taking electives or free-electives, and by doing extra credits. Currently, Minor programs are offered in Economics, Entrepreneurship, and Computational Biology. Computer Science and Engineering (CSE) Program Computer Science is a powerful engineering tool for problem solving in a variety of domains. IIIT-Delhi has developed a CSE program, which, besides developing the computing sciences foundations, also develops in students the engineering skills for problem solving using computing sciences. The program at IIIT-Delhi - "inverts the pyramid" - and start with computing-oriented courses first, and then provides flexibility for taking a variety of courses later. This empowers the students to explore innovative applications of computing in the initial years of the program so they can apply computing techniques in different domains. Program Objectives for the CSE program are to help develop the following attributes in students (in addition to the general attributes mentioned above): • Understanding of theoretical foundations and limits of computing. • Understanding of computing at different levels of abstraction including circuits and computer architecture, operating systems, algorithms, and applications. • Ability to adapt established models, techniques, algorithms, data structures, etc. for efficiently solving new problems. • Ability to design, implement, and evaluate computer based system or application to meet the desired needs using modern tools and methodologies. • Understanding and ability to use advanced techniques and tools in different areas of computing The set of core courses offered in B.Tech. CSE program are shown in the table below (courses mentioned in [ ] are electives and actual courses for these slots are as defined from semester to For students of 2020 batch onwards SEMESTER 1 SEMESTER 2 SEMESTER 3 SEMESTER 4 SEMESTER 5 Introduction to Data Structures and Algorithms Advanced Programming Fundamentals of Database Management Systems Digital Circuits Basic Electronics Operating Systems [Prototyping Interactive Systems/Practical Computer Networks Maths I- (Linear Maths II- (Probability & Discrete Mathematics Algorithm Design and Analysis Algebra) Statistics) Introduction to HCI Computer Organization [Math3, Signals & Systems, Embedded Logic [ Math 4, Graph Theory] Design, ..] Communication Skills [SSH] [SSH] [Science/BIO/..] Technical Communication + Environmental For students of 2019 and previous batches, program structure is available here. In the 3rd and 4th year, most of the courses are electives. Some of these electives have to be CSE electives, some have to be Social Sciences and Humanities electives, and the rest are Open Electives (i.e. any course can be taken). Detailed regulations about the program, including the requirements for graduation and the Honors program, are available here. Electronics and Communications Engineering (ECE) Program As a discipline, ECE focuses on the design of underlying hardware systems. Our curriculum is directed to applications in major areas such as telecommunications, energy and electronics sectors, while encouraging development of necessary skills for integration of hardware and software components. We believe that many creative opportunities exist at the boundaries of traditional CSE and ECE, and have accordingly planned for cross-training of students across disciplinary boundaries. The curriculum for ECE therefore has many courses in common with the CSE program initially. The set of core courses are shown in the table below (courses mentioned in [ ] are electives and actual courses for these slots are as defined from semester to semester.) For students of 2020 batch onwards SEMESTER 1 SEMESTER 2 SEMESTER 3 SEMESTER 4 SEMESTER 5 Introduction to Programming Data Structures and Algorithms Circuit theory and Devices Fields & Waves [Digital Communication Systems – core elect] Digital Circuits Basic Electronics Embedded Logic Design Integrated Electronics [Digital Signal Processing – core elect] Introduction to HCI Computer Organization Signals and Systems Principles of Communication Systems Maths I (Linear Algebra) Maths II(Probability and Statistics) Maths III(Multivariate Calculus) Maths IV(ODE/PDE) Communication Skills [SSH] [SSH/Advanced Programming] [Science/BIO/..] Technical Communication + Environmental Sciences For students of 2019 and previous batches, program structure is given here. Most courses in Sem 5th to 8th are electives (an elective course is one which is not compulsory, and a student may have choices from which to select the courses he/she wants to do.). 3 or more elective courses may be chosen from various areas like Circuits and VLSI, Communication Enginnering, Signal & Image Processing and Control & Embedded Systems etc., which allow a student to focus on some areas and gain a deeper knowledge and skills in those. Detailed regulations about the program, including the requirements for graduation and the Honors program, are available here. Computer Science and Applied Mathematics (CSAM) Program The increasing use of sophisticated mathematical tools and techniques in tandem with computational tools in several areas such as computational finance, biology, e¬commerce, weather forecasting, and data science motivates the need for a program that will produce graduates with computational skills as well as the ability to use sophisticated mathematical concepts and tools in order to tackle these problems. The Computer Science and Applied Mathematics program aims to develop such graduates. The program is similar to the Mathematics and Computing programs operating in many leading Institutions. The program has a small set of core courses in both Computer Science and Mathematics, and many electives which can be taken from both the disciplines. This enables the students to build a program most suitable for them. Program Objectives: At the end of this program, a student should have following attributes (in addition to the general attributes mentioned on B.Tech. Page): • Understanding of foundational topics in Mathematics. • Understanding of theoretical foundations and limits of computing and different levels of abstraction including architecture and operating systems, algorithms, and applications. • Ability to design and implement algorithms and data structures for efficiently solving new problems. • Ability to use and apply mathematical and statistical techniques and tools to solve problems. • Ability to abstract and rigorously model and analyze a variety of problems using appropriate mathematical or computational concepts. The B.Tech. program in CSAM follows the philosophy of having a small set of core courses and many electives allowing students significant flexibility in designing their curriculum and specialization. The overall program structure is given in the table below: For students of 2020 batch onwards SEMESTER 1 SEMESTER 2 SEMESTER 3 SEMESTER 4 SEMESTER 5 SEMESTER 6 Introduction to Data Structures and Algorithms Real Analysis I Math IV (ODE/ PDE) Special Elective-3 Optimization Bucket [Linear Optimization/Convex Programing Optimization] Digital Circuits Basic Electronics Operating Systems Abstract Algebra I Stochastic Processes and Applications Statistical Inference Maths I (Linear Maths II (Probability and Discrete Algorithm Design and Algebra) Statistics) Structures Analysis Introduction to HCI Computer Organization Special Elective Theory of Computation Communication Skills [SSH] [SSH] Special Elective-2 Technical Communication + Environmental Special Elective 1 • Number Theory • Advanced Programing • Physics • Signals and Systems Special Elective 2 • This elective is from the set of courses such as Science, Bio (To be decided from semester to semester) Special Elective 3 • Real Analysis II • Scientific Computing For students of 2019 and previous batches, program structure is given here. In the final year, all courses are electives. Details about the structure of the program and the requirements for graduation are given here. Computer Science and Design (CSD) Program The B.Tech. in Computer Science (CS) and Design aims to develop graduates that are not only well versed with computing approaches, tools, and technologies, but are also experienced with Design approaches and new Media technologies and uses. The program has a small set of core courses in CS and Design, and many electives which can be taken from CS as well as Design and Digital Media. This enables the students to build a program most suitable for them. The program will prepare students to work in the IT industry as well as digital media industry like gaming, animation, virtual/ augmented reality, etc. The program will also allow students, who want to pursue higher studies, to take up higher studies in CS/IT or in Design. Program Objectives: The program aims to develop capabilities in CS as well as Design and Digital Media. At the end of this program, a student should have following attributes (in addition to the general attributes mentioned on B.Tech. Page): • Understanding of foundations, limits, and capabilities of computing. • Ability to design and implement efficient software solutions using suitable algorithms, data structures, and other computing techniques. • Understanding of design principles and techniques and ability to apply these for developing solutions to human/societal problems. • Ability to independently investigate a problem which can be solved by an Human Computer Interaction (HCI) design process and then design an end-to-end solution to it (i.e., from user need identification to UI design to technical coding and evaluation). • Ability to effectively use suitable tools and platforms, as well as enhance them, to develop applications/products using for new media design in areas like animation, gaming, virtual reality, The overall program structure is given in the table below – For students of 2020 batch onwards SEMESTER 1 SEMESTER 2 SEMESTER 3 SEMESTER 4 SEMESTER 5 Introduction to Data structures and Operating Systems Analysis and Design of Algorithms / Algorithm Design and Computer Networks Programming Algorithms Analysis (B)* Digital Circuits Design Drawing & Research Methods in Social Science and Design Prototyping Interactive Systems Maths I (Linear Maths II (Probability & Advanced Programming Design of Interactive systems Algebra) Statistics) Introduction to HCI Computer Organization Design Processes & Perspectives Fundamentals of Database Management Systems Technical communication + Environmental Communication Skills Visual Design & Communication [Maths III (Multivariate Calculus)/Discrete [SSH / Maths IV (ODE/PDE/Theory of Computation)] [Elective] * Students who will do Discrete Mathematics in semester 3 will be allowed to do ADA. Also ADA and ADA(B) are anti-requisites. For students of 2019 to previous batches, program structure is given here. In the 6th, 7th and 8th Semester all courses are electives. Details about the structure of the program and the requirements for graduation are given here. Computer Science and Social Sciences (CSSS) Program This unique program, B.Tech. in Computer Science (CS) and Social Sciences (SS), aims to develop computer science engineers with strong understanding of relevant social science disciplines. The program will also allow a student to pursue further studies in social sciences, besides allowing them to pursue higher studies in CS/IT, as well as many interdisciplinary programs. As it is a 4 year program, it will satisfy the requirements of almost all higher studies programs in India as well as overseas. It may be an ideal program for those students who are not sure if they want to pursue engineering careers and would like to explore the possibility of going for social sciences later, but want to be ready to take computer science career if desired. Program Objectives: The program aims to develop capabilities in Computer Science as well as Social Science. At the end of this program, a student should have following attributes (in addition to the general attributes mentioned on B.Tech. Page): • Understanding of foundations, limits, and capabilities of computing. • Ability to design and implement efficient software solutions using suitable algorithms, data structures, and other computing techniques. • Understanding of foundations of social sciences, and articulate the ways in which different social science disciplines (at least two) enhance our understanding of society. • Ability to use analytical methods, including for data collection, evaluation, and analysis, for understanding issues from different social science perspectives. • Ability to synthesize concepts and methods from different social science disciplines and Computing and apply these to address issues relating to society. The B.Tech. program at IIITD follows a philosophy of having a small set of core-courses, allowing students significant flexibility in designing their curriculum and specialization. In the first few semesters mostly core courses are done. The structure for first few semesters is as below, courses mentioned in [ ] are electives and actual courses for these slots are as defined from semester to For students of 2020 batch onwards SEMESTER 1 SEMESTER 2 SEMESTER 3 SEMESTER 4 SEMESTER 5 Introduction to Data Structures and Algorithms Operating Systems Algorithm Design and Analysis Digital Circuits Introduction to Sociology/Anthropology Research methods in Social Science and Convex Optimization Maths I (Linear Algebra) Maths II (Probability & Statistics) Discrete Mathematics Fundamentals of Database Management Introduction to HCI Computer Organization Advanced Programming Econometrics I Technical communication + Environmental Sciences 3 Communication Skills Critical thinking and Readings in Social Maths III (Advanced Calculus) [Graph Theory] For students of 2019 and previous batches. Click here In the 6th, 7th and 8th Semester all courses are electives. Details about the structure of the program and the requirements for graduation are given here. Computer Science and Biosciences (CSB) Program With the advent of high-throughput techniques, biological sciences are grappling with a paradigm shift towards data-intensive explorations and challenges for management and analysis of massive data. Apart from fundamental contributions to basic science, data driven analysis in biology has the potential to conquer challenges such as modeling and control of complex diseases, management and diagnosis of pathologies, personalized medicine, drug and vaccine design, among others. Making progress on these frontiers requires insight into biological processes, algorithms, machine learning techniques, mathematical modeling, apart from numerical and programming skills. Thus, interdisciplinary education that imparts knowledge of foundations of biology and computer science as well as training in modeling and analysis of biomedical data is the key to create personnel who can provide solutions to problems on the interface of computation and biology. Knowledge of different aspects of modern biology and computational sciences will facilitate addressing relevant problems in biology and medicine. Towards this aim, an undergraduate program that seamlessly integrates foundations of computer science, biology and mathematics along with training to ask data-driven questions in biology and medicine is an important step in this direction. Program Objectives: The program aims to develop capabilities in Computer Sciences as well as in Biosciences. At the end of this program, a student should have following attributes (in addition to the general attributes mentioned on B.Tech. Page): • Understanding of foundations, capabilities and limits of computing. • Ability to design and implement efficient software solutions particularly for biological applications using suitable algorithms, data structures, and other computing techniques. • Understanding of foundations of biological sciences and biological data. • Ability to compile, manage and analyze data to address problems in biological and medical sciences. • Ability to build and apply mathematical modeling techniques to biological problems. Program Structure: The B.Tech. program at IIITD follows a philosophy of having a small set of core-courses, allowing students significant flexibility in designing their curriculum and specialization. Majority of core courses are completed in the first four semesters. The structure for first few semesters is as follows: For students of 2020 batch onwards SEMESTER 1 SEMESTER 2 SEMESTER 3 SEMESTER 4 SEMESTER 5 Introduction to Data Structures and Algorithms Operating Systems Algorithm Design (B) Elective Introduction to HCI Computer Organization Advanced Programming Fundamentals of Database Management Systems Elective Maths I – (Linear Maths II -(Probability & Maths III - (Multi Variate Basic Electronics (offered for 1st year students for ECE and Algorithms in Bioinformatics Algebra) Statistics) Calculus) CSE) Digital Circuits Foundations of Biology Cell Biology & Biochemistry Practical Bioinformatics Algorithms in computational Biology Communication Skills [SSH] Genetics and Molecular Biology Introduction to Quantitative Biology Technical Communication + Environmental For students of 2019 and previous batches. Click here In the 6th, 7th and 8th Semester all courses are electives. Details about the structure of the program and the requirements for graduation are given here. Computer Science and Artificial Intelligence (CSAI) Program Artificial Intelligence (AI) has become an integral part of technology in our daily lives, driving to office, searching for a restaurant, getting news updates, and recommendations on social media are all using AI. With increase in usage, there is a significant requirement of researchers who can understand AI and build AI technologies. This program will provide students an opportunity to learn both foundational and experimental components of AI and Machine Learning. A student completing this program will be able to undertake industry careers involving innovation and problem solving using Artificial Intelligence (AI) and Machine Learning (ML) technologies and research careers in AI, ML, and, in general, Computer Science areas. Along with courses that provide specialization in AI, students will also have an option to explore applied domains such as computer vision, natural language processing, robotics, and autonomous systems as well as other interdisciplinary areas such as neuroscience, edge computing, and Internet of Things. Program Objectives: At the end of this program, a student should have following attributes (in addition to the general attributes mentioned on B.Tech. Page): • Understanding of foundational topics in Computer Science, Artificial Intelligence, and Machine Learning. • Understanding of theoretical foundations and limits of Artificial Intelligence and Machine Learning. • Ability to design and implement algorithms and data structures for efficiently solving new problems. • Ability to model and analyze a variety of problems using appropriate mathematical/computational and AI concepts. • Ability of apply and develop AI algorithms to transform large amount of data into intelligent decisions and/or behavior. • An understanding of the impact of AI based solutions in an economic, societal, and environment context. Program Structure: The Foundation program provides the basic knowledge about Computer Science and Artificial Intelligence (CSAI) through a set of core courses, which are compulsory for all students. The set of core courses are shown in the table below (courses mentioned in [ ] are electives and actual courses for these slots are as defined from semester to semester.) For students of 2020 batch onwards SEMESTER 1 SEMESTER 2 SEMESTER 3 SEMESTER 4 SEMESTER 5 SEMESTER 6 Introduction to Data Structures and Algorithms Advanced [Fundamental of database Management System / Computer Organization / Machine Learning Programming Programming Ethics in AI] Digital Circuits Introduction of Intelligent Systems Operating [Maths IV / Graph Theory / Statistical Inference / Introduction to Computer Architecture / Computer Ethics in Artificial Systems Mathematical Logic / Theory of Computation] Network Compilers ] Intelligence Maths I (Linear Maths II (Probability and Statistics) Discrete Algorithm Design and Analysis Artificial Intelligence [2 AI Core Courses] Algebra) Mathematics Introduction to [Computer Organization / Fundamentals of Maths III Statistical Machine Learning [4 AI Application Electives] HCI Database Management Systems] Communication [SSH] Signal & Optimization Bucket [Linear Optimization/Convex Optimization] Technical Communication + Skills Systems Environmental Science For students of 2019 and previous batches. Click here For B.Tech. (CSAI), the program structure & requirements for graduation and the Honors program, are given here. Electronics and VLSI Engineering (EVE) Program IIIT Delhi aims to encourage research and innovation in Information Technology (IT) and allied areas. The objective of the B.Tech. program in Electronics and VLSI Engineering (EVE) is to prepare students to undertake careers involving innovation and problem solving using suitable techniques and hardware and software technologies, or to undertake advanced studies for research careers. In order to give due importance to applied as well as theoretical aspects of EVE, the curriculum for the B.Tech. (EVE) program covers most of the foundational aspects and also develops in students the engineering skills for problem solving. Towards this, the B.Tech. (EVE) program at IIIT-Delhi starts with computing and Electronics courses first, and allows the possibility of doing science courses later. Besides being better suited for developing engineering capabilities, it also enables the possibility of students seeing newer applications and possibilities of using computing and electronics in these subjects. Program Objectives: Program Structure: The first semester of the EVE program is common with all - this allows flexibility to students in moving from one program to the other. The second and third semester is common with the ECE program. The fourth and fifth semester of the program is relatively fixed, comprising mostly core courses for the program. From sixth semester onward the program can be mostly flexible, consisting of For students of 2022 batch onwards SEMESTER 1 SEMESTER 2 SEMESTER 3 SEMESTER 4 SEMESTER 5 Introduction to Programming Data Structures and Algorithms Circuit theory and Devices Fields & Waves Digital VLSI Design Digital Circuits Basic Electronics Embedded Logic Design Integrated Electronics Analog CMOS Design Introduction to HCI Computer Organization Signals and Systems Physics of Semiconductor Devices VLSI Design Flow Maths I (Linear Algebra) Maths II (Probability and Statistics) Maths III (Multivariate Calculus) Electronic System Design Communication Skills [SSH] [SSH/Advanced Programming] [Open Elective] Technical Communication + Environmental Sciences Semester Technical Courses Non-Technical Courses Semester 1 Introduction to Programming Digital Circuits Maths I (Linear Algebra) Introduction to HCI Communication Skills Semester 2 Data Structures and Algorithms Basic Electronics Maths II (Probability and Statistics) Computer Organization SSH Elective Last updated: 29-08-2022
{"url":"https://iiitd.ac.in/academics/btech","timestamp":"2024-11-05T05:45:15Z","content_type":"text/html","content_length":"206862","record_id":"<urn:uuid:45a96543-9447-4935-a243-b7b374b5bc8d>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00198.warc.gz"}
A general upper bound on broadcast function B(n) using Knodel Altay, Sirma Cagil (2013) A general upper bound on broadcast function B(n) using Knodel graph. Masters thesis, Concordia University. Broadcasting in a graph is the process of transmitting a message from one vertex, the originator, to all other vertices of the graph. We will consider the classical model in which an informed vertex can only inform one of its uninformed neighbours during each time unit. A broadcast graph on n vertices is a graph in which broadcasting can be completed in ceiling of log n to the base 2 time units from any originator. A minimum broadcast graph on n vertices is a broadcast graph that has the least possible number of edges, B(n), over all broadcast graphs on n vertices. This thesis enhances studies about broadcasting by applying a vertex deletion method to a specific graph topology, namely Knodel graph, in order to construct broadcast graphs on odd number of vertices. This construction provides an improved general upper bound on B(n) for all odd n except when n=2^k−1. Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Computer Science and Software Engineering Item Type: Thesis (Masters) Authors: Altay, Sirma Cagil Institution: Concordia University Degree Name: M. Comp. Sc. Program: Computer Science Date: 15 September 2013 Thesis Supervisor(s): Harutyunyan, Hovhannes Keywords: Broadcasting, Broadcast graph, Minimum broadcast graph, Knodel graph, Broadcast function B(n) ID Code: 977799 Deposited By: SIRMA CAGIL ALTAY Deposited On: 26 Nov 2013 15:36 Last Modified: 18 Jan 2018 17:45 [1] R. Ahlswede, L. Gargano, H. S. Haroutunian, and L. H. Khachatrian. Faulttolerant minimum broadcast networks. Networks, 27:293–307, 1996. [2] R. Ahlswede, H. S. Haroutunian, and L. H. Khachatrian. Messy broadcasting in networks. Kluwer International Series in Engineering and Computer Science, pages 13–13, 1994. [3] A. Bar-Noy, S. Guha, J. S. Naor, and B. Schieber. Multicasting in heterogeneous networks. In Proceedings of the thirtieth annual ACM symposium on Theory of computing, pages 448–453. ACM, 1998. [4] B. Beauquier, S. Perennes, and O. Delmas. Tight bounds for broadcasting in the linear cost model. Journal of Interconnection Networks, 2(02):175–188, 2001. [5] R. Beier and J. F. Sibeyn. A powerful heuristic for telephone gossiping. Citeseer, [6] J. C. Bermond, A. Ferreira, S. P´erennes, and J. G. Peters. Neighborhood broadcasting in hypercubes. SIAM Journal on Discrete Mathematics, 21(4):823–843, [7] J. C. Bermond, P. Fraigniaud, and J. G. Peters. Antepenultimate broadcasting. Networks, 26(3):125–137, 1995. [8] J. C. Bermond, H. A. Harutyunyan, A. L. Liestman, and S. Perennes. A note on the dimentionality of modified Kn¨odel graphs. International Journal of Foundations of Computer Science, 8:109–117, 1997. [9] J. C. Bermond, P. Hell, A. L. Liestman, and J. G. Peters. Broadcasting in bounded degree graphs. SIAM Journal on Discrete Mathematics, 5(1):10–24, [10] J. C. Bermond, P. Hell, A. L. Liestman, and J. G. Peters. Sparse broadcast graphs. Discrete applied mathematics, 36(2):97–130, 1992. [11] S. C. Chau and A. L. Liestman. Constructing minimal broadcast networks. Journal of Combinatorics, Information & System Science, 10:110–122, 1985. [12] X. Chen. An upper bound for the broadcast function b (n). Chinese Journal of Computers, 13:605–611, 1990. [13] F. Comellas, H. A. Harutyunyan, and A. L. Liestman. Messy broadcasting in mesh and torus networks. Journal of Interconnection Networks, pages 37–51, [14] M. Cosnard and A. Ferreira. On the real power of loosely coupled parallel architectures. Parallel Processing Letters, 1(02):103–111, 1991. [15] R. Diestel. Graph theory, 2005. [16] K. Diks and A. Pelc. Broadcasting with universal lists. In System Sciences, 1995. Vol. II. Proceedings of the Twenty-Eighth Hawaii International Conference on, volume 2, pages 564–573. IEEE, 1995. [17] M. Dinneen, J. Ventura, M. Wilson, and G. Zakeri. Compound constructions of minimal broadcast networks. Technical report, Department of Computer Science, The University of Auckland, New Zealand, 1997. [18] M. J. Dinneen, M. R. Fellows, and V. Faber. Algebraic constructions of efficient broadcast networks. In Applied Algebra, Algebraic Algorithms and Error- Correcting Codes, pages 152–158. Springer, 1991. [19] M. Elkin and G. Kortsarz. Sublogarithmic approximation for telephone multicast: path out of jungle (extended abstract). In SODA03: Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, pages 76–85, Philadelphia, PA, USA, 2003. Society for Industrial and Applied Mathematics. [20] M. Elkin and G. Kortsarz. A combinatorial logarithmic approximation algorithm for the directed telephone broadcast problem. SIAM Journal on Computing, 35(3):672–689, 2005. [21] A. M. Farley. Minimal broadcast networks. Networks, 9(4):313–332, 1979. [22] A. M. Farley, S. Hedetniemi, S. Mitchell, and A. Proskurowski. Minimum broadcast graphs. Discrete Mathematics, 25(2):189–193, 1979. [23] U. Feige, D. Peleg, P. Raghavan, and E. Upfal. Randomized broadcast in networks. Random Structures & Algorithms, 1(4):447–460, 1990. [24] G. Fertin et al. A study of minimum gossip graphs. Discrete Mathematics, 215(1-3):33–58, 2000. [25] G. Fertin, R. Labahn, et al. Compounding of gossip graphs. Networks, 36(2):126– 137, 2000. [26] G. Fertin and A. Raspaud. Families of graphs having broadcasting and gossiping properties. In Graph-Theoretic Concepts in Computer Science, pages 63–77. Springer, 1998. [27] G. Fertin and A. Raspaud. k-neighborhood broadcasting. In The Seventh International Colloquium on Structural Information & Communication Complexity (SIROCCO 2001), pages 133–146, 2001. [28] G. Fertin and A. Raspaud. Neighborhood communications in networks. Electronic Notes in Discrete Mathematics, 10:103–108, 2001. [29] G. Fertin and A. Raspaud. A survey on Kn¨odel graphs. Discrete Applied Mathematics, 137:173–195, 2004. [30] G. Fertin, A. Raspaud, H. Schr¨oder, O. S`ykora, and I. Vrt’o. Diameter of the Kn¨odel graph. In Graph-Theoretic Concepts in Computer Science, pages 149– 160. Springer, 2000. [31] P. Fraigniaud and E. Lazard. Methods and problems of communication in usual networks. Discrete Applied Mathematics, 53(1):79–133, 1994. [32] P. Fraigniaud and J. G. Peters. Minimum linear gossip graphs and maximal linear (�, k)-gossip graphs. 1994. [33] P. Fraigniaud and S. Vial. Approximation algorithms for broadcasting and gossiping. Journal of Parallel and Distributed Computing, 43(1):47–55, 1997. [34] P. Fraigniaud and S. Vial. Heuristic algorithms for personalized communication problems in point-to-point networks. In 4th Colloquium on Structural Information and Communication Complexity, pages 240–252, 1997. [35] P. Fraigniaud and S. Vial. Comparison of heuristics for one-to-all and all-to-all communications in partial meshes. Parallel Processing Letters, 9(01):9–20, 1999. [36] S. Fujita, S. Perennes, and J. G. Peters. Neighbourhood gossiping in hypercubes. Parallel Processing Letters, 8(02):189–195, 1998. [37] L. Gargano and U. Vaccaro. On the construction of minimal broadcast networks. Networks, 19(6):673–689, 1989. [38] M. Grigni and D. Peleg. Tight bounds on mimimum broadcast networks. SIAM Journal on Discrete Mathematics, 4(2):207–222, 1991. [39] H. A. Haroutunian. New minimal broadcast networks. In Proceedings of 4th International Colloquium on Coding Theory, pages 36–40, 1991. [40] H. A. Harutyunyan. Minimum multiple message broadcast graphs. Networks, 47(4):218–224, 2006. [41] H. A. Harutyunyan. An efficient vertex addition method for broadcast networks. Internet Mathematics, 5(3):211–225, 2008. [42] H. A. Harutyunyan, P. Hell, and A. L. Liestman. Messy broadcasting - decentralized broadcast schemes with limited knowledge. Discrete Applied Mathematics, 159(5):322–327, 2011. [43] H. A. Harutyunyan and A. L. Liestman. Messy broadcasting. Parallel Processing Letters, 8(02):149–159, 1998. [44] H. A. Harutyunyan and A. L. Liestman. More broadcast graphs. Discrete Applied Mathematics, 98:81–102, 1999. [45] H. A. Harutyunyan and A. L. Liestman. Improved upper and lower bounds for k-broadcasting. Networks, 37(2):94–101, 2001. [46] H. A. Harutyunyan and A. L. Liestman. k-broadcasting in trees. Networks, 38(3):163–168, 2001. [47] H. A. Harutyunyan and A. L. Liestman. On the monotonicity of the broadcast function. Discrete Mathematics, 262(1):149–157, 2003. [48] H. A. Harutyunyan and A. L. Liestman. Upper bounds on the broadcast function using minimum dominating sets. Discrete Mathematics, 312(20):2992–2996, [49] H. A. Harutyunyan, A. L. Liestman, and B. Shao. A linear algorithm for finding the k-broadcast center of a tree. Networks, 53(3):287–292, 2009. [50] H. A. Harutyunyan and C. D. Morosan. An iterative algorithm for the minimum broadcast time problem. In Proceedings of the 2nd IASTED International Conference on Communication and Computer Networks, CCN04, Cambridge, MA, USA, pages 447–452, 2004. [51] H. A. Harutyunyan and C. D. Morosan. The spectra of Kn¨odel graphs. Informatica, 30:295–299, 2006. [52] H. A. Harutyunyan and C. D. Morosan. On the minimum path problem in Kn¨odel graphs. Networks, 50(1):86–91, 2007. [53] H. A. Harutyunyan and B. Shao. A heuristic for k-broadcasting in arbitrary networks. In Information Visualization, 2003. IV 2003. Proceedings. Seventh International Conference on, pages 287–292. IEEE, 2003. [54] H. A. Harutyunyan and B. Shao. Optimal k-broadcast in trees. Congressus Numerantium, pages 117–128, 2003. [55] H. A. Harutyunyan and B. Shao. An efficient heuristic for broadcasting in networks. Journal of Parallel and Distributed Computing, 66(1):68–76, 2006. [56] Hovhannes A Harutyunyan. Multiple message broadcasting in modified kn¨odel graph. In The Seventh International Colloquium on Structural Information & Communication Complexity (SIROCCO 2000), pages 157–165, 2000. [57] S. M. Hedetniemi, S. T. Hedetniemi, and A. L. Liestman. A survey of gossiping and broadcasting in communication networks. Networks, 18(4):319–349, 1988. [58] M. C. Heydemann, N. Marlin, and S. P´erennes. Complete rotations in Cayley graphs. European Journal of Combinatorics, 22(2):179–196, 2001. [59] C. J. Hoelting, D. A. Schoenefeld, and R. L. Wainwright. A genetic algorithm for the minimum broadcast time problem using a global precedence vector. In Proceedings of the 1996 ACM symposium on Applied Computing, pages 258–262. ACM, 1996. [60] J. Hromkovic. Dissemination of information in communication networks: broadcasting, gossiping, leader election, and fault-tolerance. Springer-Verlag, 2005. [61] J. Hromkoviˇc, R. Klasing, B. Monien, and R. Peine. Dissemination of information in interconnection networks (broadcasting & gossiping). In Combinatorial network theory, pages 125–212. Springer, 1996. [62] J. Hromkovic, R. Klasing, E. A. St¨ohr, E. A. St, et al. Dissemination of information in vertex-disjoint paths mode, part 1: General bounds and gossiping in hypercube-like networks. 1993. [63] A. Jakoby, R. Reischuk, and C. Schindelhauer. The complexity of broadcasting in planar and decomposable graphs. Discrete Applied Mathematics, 83(1):179–206, [64] L. H. Khachatrian and H. S. Haroutunian. On optimal broadcast graphs. In Proceedings of 4th International Colloquium on Coding Theory, Dilijan, pages 65–72, 1991. [65] L. H. Khachatrian and O. S. Harutounian. Construction of new classes of minimal broadcast networks. In Conference on Coding Theory, Armenia, 1990. [66] W. Kn¨odel. New gossips and telephones. Discrete Mathematics, 13:95, 1975. [67] J. C. K¨onig and E. Lazard. Minimum k-broadcast graphs. Discrete Applied Mathematics, 53(1):199–209, 1994. [68] G. Kortsarz and D. Peleg. Approximation algorithms for minimum-time broadcast. SIAM Journal on Discrete Mathematics, 8(3):401–427, 1995. [69] D. D. Kouvatsos and I. M. Mkwawa. Neighbourhood broadcasting schemes for Cayley graphs with background traffic. [70] D. D. Kouvatsos and I. M. Mkwawa. Broadcasting schemes for hypercubes with background traffic. Journal of Systems and Software, 73(1):3–14, 2004. [71] R. Labahn. A minimum broadcast graph on 63 vertices. Discrete Applied Mathematics, 53(1):247–250, 1994. [72] C. Li, T. E. Hart, K. J. Henry, and I. A. Neufeld. Average-case messy broadcasting. Journal of Interconnection Networks, 9(04):487–505, 2008. [73] A. L. Liestman and J. G. Peters. Broadcast networks of bounded degree. SIAM journal on Discrete Mathematics, 1(4):531–540, 1988. [74] M. Maheo and J.-F. Sacl´e. Some minimum broadcast graphs. Discrete Applied Mathematics, 53(1):275–285, 1994. [75] E. Maraachlian. Optimal broadcasting in treelike graphs. PhD thesis, Concordia University, 2010. [76] S. Mitchell and S. Hedetniemi. A census of minimum broadcast graphs. Journal of Combinatorics, Information and System Sciences, 5:141–151, 1980. [77] I. M. Mkwawa and D. D. Kouvatsos. An optimal neighbourhood broadcasting scheme for star interconnection networks. Journal of Interconnection Networks, 4(01):103–112, 2003. [78] C. D. Morosan. Studies of interconnection networks with applications in broadcasting. PhD thesis, Concordia University, 2007. [79] J. H. Park and K. Y. Chwa. Recursive circulant: A new topology for multicomputer networks. In Parallel Architectures, Algorithms and Networks, 1994.(ISPAN), International Symposium on, pages 73–80. IEEE, 1994. [80] K. Qiu and S. K. Das. A novel neighbourhood broadcasting algorithm on star graphs. In Parallel and Distributed Systems, 2002. Proceedings. Ninth International Conference on, pages 37–41. IEEE, 2002. [81] R. Ravi. Rapid rumor ramification: Approximating the minimum broadcast time. In Foundations of Computer Science, 1994 Proceedings., 35th Annual Symposium on, pages 202–213. IEEE, 1994. [82] J. F. Sacl´e. Lower bounds for the size in four families of minimum broadcast graphs. Discrete Mathematics, 150(1):359–369, 1996. [83] P. Scheuermann and G. Wu. Heuristic algorithms for broadcasting in point-topoint computer networks. Computers, IEEE Transactions on, 100(9):804–811, [84] C. Schindelhauer. On the inapproximability of broadcasting time. In Approximation Algorithms for Combinatorial Optimization, pages 226–237. Springer, [85] B. Shao. A heuristic for broadcasting in arbitrary networks. Master’s thesis, Concordia University, 2003. [86] B. Shao. On k-broadcasting in graphs. PhD thesis, Concordia University, 2006. [87] Peter J. Slater, Ernest J. Cockayne, and S. T. Hedetniemi. Information dissemination in trees. SIAM Journal on Computing, 10(4):692–701, 1981. [88] J. A. Ventura and X. Weng. A new method for constructing minimal broadcast networks. Networks, 23(5):481–497, 1993. [89] M. X. Weng and J. A. Ventura. A doubling procedure for constructing minimal broadcast networks. Telecommunication Systems, 3(3):259–293, 1994. [90] J. Xiao and X.Wang. A research on minimum broadcast graphs. Chinese Journal of Computers, 11:99–105, 1988. [91] X. Xu. Broadcast networks on 2p − 1 nodes and minimum broadcast network on 127 nodes. Master’s thesis, Concordia University, 2004. [92] J. G. Zhou and K. M. Zhang. A minimum broadcast graph on 26 vertices. Applied mathematics letters, 14(8):1023–1026, 2001. All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access Repository Staff Only: item control page
{"url":"https://spectrum.library.concordia.ca/id/eprint/977799/","timestamp":"2024-11-11T20:14:26Z","content_type":"application/xhtml+xml","content_length":"83108","record_id":"<urn:uuid:d0740a9d-961a-4111-9ee7-a39b3d891220>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00795.warc.gz"}
Is it now possible to send a graph in the body of an E-mail? Hi folks - I've been sending plain html reports as attachments to an E-mail for quite awhile, but now would like to send some analysis results. I was hoping to send a couple SGPLOT graphics and html tables inside an E-mail, with a clickable link to see the whole regression analysis. Unfortunately, I get the dreaded red X in the E-mail instead of the graphics. After doing some searching, I came up with usage note 43716 that says that SAS does not support sending graphics under 9.1 TS1M3 SP4. http://support.sas.com/kb/43/716.html Has anyone sucessfully done this, or is it still unresolved? (we have 9.2 at the moment) Thanks for any help you can give me! Wendy T. 11-18-2011 01:55 PM
{"url":"https://communities.sas.com/t5/Graphics-Programming/Is-it-now-possible-to-send-a-graph-in-the-body-of-an-E-mail/td-p/50574","timestamp":"2024-11-09T00:54:20Z","content_type":"text/html","content_length":"229981","record_id":"<urn:uuid:0ef9b0ba-75a0-4dcc-af8a-d0bf467bb3a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00230.warc.gz"}
Specifies an element boundary condition for a solution field on a set of element faces. ELEMENT_BOUNDARY_CONDITION("name") {parameters...} User-given name. shape (enumerated) [no default] Shape of the surfaces in this set. three_node_triangle or tri3 Three-node triangle. four_node_quad or quad4 Four-node quadrilateral six_node_triangle or tri6 Six-node triangle. element_set or elem_set (string) [no default] User-given name of the parent element set. surfaces (array) [no default] List of element surfaces. surface_sets (list) [={}] List of surface set names (strings) to use in this element boundary condition. When using this option, the connectivity, shape, and parent element of the surfaces are provided by the surface set container and it is unnecessary to specify the shape, element_set and surfaces parameters directly to the ELEMENT_BOUNDARY_CONDITION command. This option is used in place of directly specifying these parameters. In the event that both of the surface_sets and surfaces parameters are provided, the full collection of surface elements is read and a warning message is issued. The surface_sets option is the preferred method to specify the surface elements. This option provides support for mixed element topologies and simplifies pre-processing and post-processing. variable or var (enumerated) [no default] Boundary condition variable. All are scalars except tangential_traction is a three-component vector. mass_flux or mass Mass flux (mass flow rate). pressure or pres stagnation_pressure or stag_pres Stagnation pressure. tangential_traction or trac Tangential components of traction. heat_flux or heat Thermal heat flux. convective_heat_flux or conv_heat Convective heat flux. radiation_heat_flux or rad_heat Radiation heat flux. species_1_flux or spec1 Species 1 flux. convective_species_1_flux or conv_spec1 Convective flux for species 1. species_2_flux or spec2 Species 2 flux. convective_species_2_flux or conv_spec2 Convective flux for species 2. species_3_flux or spec3 Species 3 flux. convective_species_3_flux or conv_spec3 Convective flux for species 3. species_4_flux or spec4 Species 4 flux. convective_species_4_flux or conv_spec4 Convective flux for species 4. species_5_flux or spec5 Species 5 flux. convective_species_5_flux or conv_spec5 Convective flux for species 5. species_6_flux or spec6 Species 6 flux. convective_species_6_flux or conv_spec6 Convective flux for species 6. species_7_flux or spec7 Species 7 flux. convective_species_7_flux or conv_spec7 Convective flux for species 7. species_8_flux or spec8 Species 8 flux. convective_species_8_flux or conv_spec8 Convective flux for species 8. species_9_flux or spec9 Species 9 flux. convective_species_9_flux or conv_spec9 Convective flux for species 9. field_flux or field Multi field flux. convective_field_flux or conv_field Multi field convective flux. turbulence_flux or turb Turbulence diffusion flux. kinetic_energy_flux or tke Turbulence kinetic energy flux. dissipation_rate_flux or teps Turbulence dissipation rate flux. eddy_frequency_flux or tomega Turbulence eddy frequency flux. intermittency_flux or tintc Transition intermittency flux. transition_re_theta_flux or treth Transition Re-theta flux. design_topology_filter_flux or topofiltflux Design topology filter flux. type (enumerated) [=zero] Type of the boundary condition. Zero for the set. constant or const Constant value. Requires constant_value. Computes the boundary values from the solution field. outflow or out An outflow condition for the mass_flux variable. inflow or in An inflow condition for the mass_flux variable. per_surface or surf Surface values. Requires surface_values. piecewise_linear or linear Piecewise linear curve fit. Requires curve_fit_values and curve_fit_variable. cubic_spline or spline Cubic spline curve fit. Requires curve_fit_values and curve_fit_variable. user_function or user User-defined function. Requires user_function, user_values and user_strings. constant_value or value (real) [=0] Constant scalar value of the boundary condition. Used with constant type and scalar variables. constant_values (array) [={0,0,0}] Constant vector value of the boundary condition. Used with constant type and vector variables (currently only tangential_traction). field (string) [no default] Value of the boundary condition for field. Used with variable field. surface_values or values (array) [no default] A two-column (for scalar variables) or four-column (for vector variables, currently only tangential_traction) array of surface/boundary-condition data values. Used with per_surface type. curve_fit_values or curve_values (array) [={0,0}] A two-column (for scalar variables) or four-column (for vector variables, currently only tangential_traction) array of independent-variable/boundary-condition data values. Used with piecewise_linear and cubic_spline types. curve_fit_variable or curve_var (enumerated) [=temperature] Independent variable of the curve fit. Used with piecewise_linear and cubic_spline types. x_coordinate or xcrd X-component of coordinates. y_coordinate or ycrd Y-component of coordinates. z_coordinate or zcrd Z-component of coordinates. x_reference_coordinate or xrefcrd X-component of reference coordinates. y_reference_coordinate or yrefcrd Y-component of reference coordinates. z_reference_coordinate or zrefcrd Z-component of reference coordinates. x_velocity or xvel X-component of velocity. y_velocity or yvel U-component of velocity. z_velocity or zvel Z-component of velocity. velocity_magnitude or vel_mag Velocity magnitude. Normal velocity. pressure or pres temperature or temp Relative humidity. Dewpoint temperature. eddy_viscosity or eddy Turbulence kinematic eddy viscosity. kinetic_energy or tke Turbulence kinetic energy. dissipation_rate or teps Turbulence dissipation rate. intermittency or tintc Transition intermittency. transition_re_theta or treth Transition Re-theta. eddy_frequency or tomega Turbulence eddy frequency. species_1 or spec1 Species 1. species_2 or spec2 Species 2. species_3 or spec3 Species 3. species_4 or spec4 Species 4. species_5 or spec5 Species 5. species_6 or spec6 Species 6. species_7 or spec7 Species 7. species_8 or spec8 Species 8. species_9 or spec9 Species 9. mesh_x_displacement or mesh_xdisp X-component of mesh displacement. mesh_y_displacement or mesh_ydisp Y-component of mesh displacement. mesh_z_displacement or mesh_zdisp Z-component of mesh displacement. mesh_displacement_magnitude or mesh_disp_mag Mesh displacement magnitude. mesh_x_velocity or mesh_xvel X-component of mesh velocity. mesh_y_velocity or mesh_yvel Y-component of mesh velocity. mesh_z_velocity or mesh_zvel Z-component of mesh velocity. mesh_velocity_magnitude or mesh_vel_mag Mesh velocity magnitude. user_function or user (string) [no default] Name of the user-defined function. Used with user_function type. user_values (array) [={}] Array of values to be passed to the user-defined function. Used with user_function type. user_strings (list) [={}] Array of strings to be passed to the user-defined function. Used with user_function type. multiplier_function (string) [=none] User-given name of the multiplier function for scaling the boundary condition values. If none, no scaling is performed. reference_temperature or ref_temp (real) [=273.15] Reference temperature for the convective and radiation heat flux boundary conditions reference_temperature_multiplier_function (string) [=none] User-given name of the multiplier function for scaling the reference temperature. If none, no scaling is performed. reference_species or ref_spec (real) [=0] Reference species for the convective species boundary condition. reference_species_multiplier_function (string) [=none] User-given name of the multiplier function for scaling the reference species. If none, no scaling is performed. non_reflecting_factor (real) >=0 [=0] Amount of non-reflecting modification. If zero, there is no effect. If one, waves can pass through the boundary without reflection. Used with pressure variable at outflow boundaries and mass_flux variable at inflows. Turning on both non_reflecting_bc_running_average_field and running_average specifies the use of running average fields to enhance the performance of non-reflective boundary conditions when flow = navier_stokes is used. When flow = compressible_navier_stokes is set, the running_average option does not need to be turned on. pressure_loss_factor (real) >=0 [=0] Coefficient of a pressure loss term added/subtracted to outflow/inflow pressure boundary conditions. Used with pressure variable. pressure_loss_factor_multiplier_function (string) [=none] User-given name of the multiplier function for scaling the pressure loss factor. Used with pressure variable. If none, no scaling is performed. hydrostatic_pressure (boolean) [=off] Flag specifying whether to add hydrostatic pressure to pressure and stagnation pressure boundary conditions. hydrostatic_pressure_origin (array) [={0,0,0}] Coordinates of any location where the hydrostatic pressure is zero. Used with hydrostatic_pressure=on, and pressure and stagnation pressure boundary conditions. active_type (enumerated) [=all] Type of the active flag. Determines which surfaces in this set will have element boundary conditions imposed by this command. All surfaces in this set are active. No surfaces in this set are active. Only surfaces that are not in an interface surface set or do not find a contact surface of an appropriate medium are active. This command specifies element boundary conditions for a solution variable on a set of surfaces (element faces). The surfaces of an element boundary condition are defined with respect to the elements of an element set. For example, ELEMENT_SET( "flow elements" ) { shape = four_node_tet elements = { ... 4, 2, 5, 6, 8 ; 5, 2, 6, 3, 5 ; ... } ELEMENT_BOUNDARY_CONDITION( "constant BC on heated wall" ) { shape = three_node_triangle element_set = "flow elements" surfaces = { 4, 41, 2, 5, 6 ; 5, 51, 5, 6, 3 ; } variable = heat_flux type = constant constant_value = 12. defines a thermal heat flux element boundary condition, with a constant value of 12 on two surfaces of the element set "flow elements". There are two main forms of this command. The legacy version (or single topology version) of the command relies on the use of the parameter to define the surfaces. When using this form of the command, all surfaces within a given set must have the same shape, and it is necessary to include both the parameters in the command. specifies the shape of the surface. This shape must be compatible with the shape of the "parent" element set whose user-given name is provided by . The element set shape is specified by the shape parameter of the command. The compatible shapes are: Element Shape Surface Shape The surfaces parameter contains the faces of the element set. This parameter is a multi-column array. The number of columns depends on the shape of the surface. For three_node_triangle, this parameter has five columns, corresponding to the element number (of the parent element set), a unique (within this set) surface number, and the three nodes of the element face. For four_node_quad, surfaces has six columns, corresponding to the element number, a surface number, and the four nodes of the element face. For six_node_triangle, surfaces has eight columns, corresponding to the element number, a surface number, and the six nodes of the element face. One row per surface must be given. The three, four, or six nodes of the surface may be in any arbitrary order, since they are reordered internally based on the parent element definition. The surfaces may be read from a file. For the above example, the surfaces may be placed in a file, such as and read by: ELEMENT_BOUNDARY_CONDITION( "constant BC on heated wall" ) { shape = three_node_triangle element_set = "flow elements" surfaces = Read( "heated_wall.ebc" ) variable = heat_flux type = constant constant_value = 12. The mixed topology form of the command provides a more powerful and flexible mechanism for defining the surfaces. Using this form of the command, it is possible to define a collection of surfaces that contains different element shapes. This is accomplished through the use of the parameter. The element faces are first created in the input file using the command, and are then referred to by the command. For example, a collection of triangular and quadrilateral element faces can be defined using the following SURFACE_SET( "tri faces" ) { surfaces = { 1, 1, 1, 2, 4 ; 2, 2, 3, 4, 6 ; 3, 3, 5, 6, 8 ; } shape = three_node_triangle volume_set = "tetrahedra" SURFACE_SET( "quad faces" ) { surfaces = { 1, 1, 1, 2, 4, 9 ; 2, 2, 3, 4, 6, 12 ; 3, 3, 5, 6, 8, 15 ; } shape = four_node_quad volume_set = "prisms" Then, a single command is defined that contains the tri and quad faces as follows: ELEMENT_BOUNDARY_CONDITION ("constant BC on heated wall") { surface_sets = {"tri_faces", "quad_faces"} The list of surface sets can also be placed in a file, such as tri faces quad faces and read using: ELEMENT_BOUNDARY_CONDITION ("constant BC on heated wall") { surface_sets = Read("surface_sets.srfst") The mixed topology version of the ELEMENT_BOUNDARY_CONDITION command is preferred. This version provides support for multiple element topologies within a single instance of the command and simplifies pre-processing and post-processing. In the event that both the surface_sets and surfaces parameters are provided in the same instance of the command, the full collection of surface elements is read and a warning message is issued. Although the single and mixed topology formats of the commands can be combined, it is strongly recommended that they are not. The variable to which the boundary condition applies is given by variable. If the problem does not solve for a field associated with this variable, the boundary condition is simply ignored. The element boundary conditions are applied at the quadrature points of the surfaces. The quadrature rule is inherited from the parent element set of the surface. A constant boundary condition applies the same boundary condition for all quadrature points of the surfaces, as in the above example. A zero boundary condition is a short-hand form of the type, with a constant value of zero. For example, ELEMENT_BOUNDARY_CONDITION( "constant BC on outflow pressure" ) { surface_sets = {"tri_faces", "quad_faces"} variable = pressure type = zero When the element boundary condition type is specified, the values of the boundary condition are computed from the current solution. For example, ELEMENT_BOUNDARY_CONDITION( "outflow traction" ) { surface_sets = {"tri_faces", "quad_faces"} variable = tangential_traction type = free On boundary surfaces where no element boundary condition is specified for the variable, one is added of type . The same is done for the variable. For all other variables nothing is added (which is the same as adding an element boundary condition of type Note: Outflow and free surfaces normally require a pressure boundary condition of type zero (or constant). These must be specified. The free type is not available for convective and radiation type variables. boundary condition types are a special form of the free type which may only be specified for the variable. These boundary conditions indicate that the boundary is an outflow or an inflow boundary, respectively. The code takes certain measures to insure that the flow goes in the proper direction, that is, to prevent reverse flows. This is a weakly enforced condition; it does not guarantee no reverse flows. However, it does improve stability in situations where reverse flow is likely but weak. For example, ELEMENT_BOUNDARY_CONDITION( "outflow mass flux" ) { surface_sets = {"tri_faces", "quad_faces"} variable = mass_flux type = outflow boundary condition specifies a different value for each surface. For example, ELEMENT_BOUNDARY_CONDITION( "per surface BC on heated wall" ) { shape = three_node_triangle element_set = "flow elements" surfaces = { 4, 41, 2, 5, 6 ; 5, 51, 5, 6, 3 ; } variable = heat_flux type = per_surface surface_values = { 41, 2 ; 51, 4 ; } For scalar variables, the surface_values parameter is a two-column array corresponding to the surface number and boundary condition values. For vector variables (currently only tangential_traction), surface_values is a four-column array corresponding to the surface number and the xyz components of the boundary condition. The surface numbers must match those given by surfaces. No surface may be The surface values may be read from a file. For the above example, the boundary conditions may be placed in a file, such as and read by: ELEMENT_BOUNDARY_CONDITION( "per surface BC on heated wall" ) { shape = three_node_triangle element_set = "flow elements" surfaces = Read( "heated_wall.ebc" ) variable = heat_flux type = per_surface surface_values = Read( "heated_wall.heat" ) The per_surface option is currently not supported with mixed topology input. The element boundary conditions of types may be used to define boundary condition values as a function of a single independent variable. For example, ELEMENT_BOUNDARY_CONDITION( "curve fit BC on convective heat wall" ) { surface_sets = {"tri_faces", "quad_faces"} variable = convective_heat_flux type = piecewise_linear curve_fit_values = { 0, 0.0 ; 10, 1.5 ; } curve_fit_variable = x_coordinate reference_temperature = 25 defines a convective heat transfer boundary condition as a function of the x-coordinates. In this example the convective heat coefficient varies linearly from h = 0 at x = 0 to h = 1.5 at x = 10. For scalar variables the curve_fit_values parameter is a two-column array corresponding to the independent variable and the boundary condition values. For vector variables, currently only tangential_traction, curve_fit_values is a four-column array corresponding to the independent variable and the xyz components of the boundary condition. The independent variable values must be in ascending order. The limit point values of the curve fit are used when curve_fit_variable falls outside of the curve fit limits. The curve fit variable nornal_velocity is positive when the flow goes out of the parent element. data may be read from a file. For the above example, the curve fit values may be placed in a file, such as 0 0.0 10 1.5 and read by: ELEMENT_BOUNDARY_CONDITION( "curve fit BC on convective heat wall" ) { surface_sets = {"tri_faces", "quad_faces"} variable = convective_heat_flux type = piecewise_linear curve_fit_values = Read( "conv_wall.fit" ) curve_fit_variable = x_coordinate reference_temperature = 25 An element boundary condition of type user_function may be used to model more complex behaviors; see the AcuSolve User-Defined Functions Manual for a detailed description of user-defined functions. For example, a stagnation pressure boundary condition equivalent to the variable may be implemented through an element boundary condition on pressure. The input command may be given by: ELEMENT_BOUNDARY_CONDITION( "UDF BC on inflow pressure" ) { surface_sets = {"tri_faces", "quad_faces"} variable = pressure type = user_function user_function = "usrElementBcExample" user_values = { 100, # pressure 1.225 } # density where the user-defined function may be implemented as follows: #include "acusim.h" #include "udf.h" UDF_PROTOTYPE( usrElementBcExample ) ; /* function prototype */ Void usrElementBcExample ( UdfHd udfHd, /* Opaque handle for accessing data */ Real* outVec, /* Output vector */ Integer nItems, /* Number of BC surfaces */ Integer vecDim /* = 1 */ ) { Integer surf ; /* a surface counter */ Real dens ; /* density */ Real stagPres ; /* stagnation pressure */ Real velSqr ; /* velocity square */ Real* usrVals ; /* user values */ Real* vel ; /* velocity */ Real* xVel ; /* x-component of velocity */ Real* yVel ; /* y-component of velocity */ Real* zVel ; /* z-component of velocity */ udfCheckNumUsrVals( udfHd, 2 ) ; /* check for error */ usrVals = udfGetUsrVals( udfHd ) ; /* get the user vals */ stagPres = usrVals[0] ; /* get BC value */ dens = usrVals[1] ; /* get density */ vel = udfGetEbcData( udfHd, UDF_EBC_VELOCITY ) ; /* get the velocity */ xVel = &vel[0*nItems] ; /* localize x-vel. */ yVel = &vel[1*nItems] ; /* localize y-vel. */ zVel = &vel[2*nItems] ; /* localize z-vel. */ for ( surf = 0 ; surf < nItems ; surf++ ) { velSqr = xVel[surf] * xVel[surf] + yVel[surf] * yVel[surf] + zVel[surf] * zVel[surf] ; outVec[surf] = stagPres - 0.5 * dens * velSqr ; } /* end of usrElementBcExample() */ For scalar variables, the dimension of the returned boundary condition vector, outVec, is the number of surfaces. For vector variables (currently only tangential_traction), the dimension is the number of surfaces times three, with the surface dimension being "faster" (as it is for the velocity vector in the For the tangential_traction variable, the three components specified are in the global xyz coordinate system. The component normal to the surface is removed before the boundary condition is imposed. Thus, there will be no conflict if a pressure (normal component of traction) element boundary condition is also imposed on the same surfaces. parameter may be used to uniformly scale the element boundary condition values. The value of this parameter refers to the user-given name of a command in the input file. For the multiplier function scales only the pressure part of the stagnation pressure. For example, to impose a uniform heat flux as a function of time, the following commands may be issued: ELEMENT_BOUNDARY_CONDITION( "constant BC on heat flux" ) { surface_sets = {"tri_faces", "quad_faces"} variable = heat_flux type = constant constant_value = 2.5 multiplier_function = "time varying" MULTIPLIER_FUNCTION( "time varying" ) { type = piecewise_linear curve_fit_values = { 0, 0.0 ; 10, 1.0 ; 20, 0.5 ; 40, 0.7 ; 80, 1.2 ; } curve_fit_variable = time For convective and radiation variables the value of the element boundary condition is the value of a coefficient, not the value of a flux variable. The convective and radiation heat fluxes are given where h is the value of the element boundary condition; T is the temperature; T[ref] is the reference temperature, given by reference_temperature; and T[off] is the offset to convert to an absolute temperature, given by the absolute_temperature_offset parameter of the EQUATION command. For a radiation heat flux boundary condition, the coefficient may be calculated from ε is the grey-body emissivity/ absorptivity and σ is the Stefan-Boltzmann constant. A description of the last two quantities may be found in the RADIATION command. However, the radiation heat flux is specified independently of the radiation equation and the RADIATION, RADIATION_SURFACE, and EMISSIVITY_MODEL commands, as well as the SOLAR_RADIATION and related commands. The convective species fluxes are defined similarly to the convective heat flux, except that reference_species is used instead of reference_temperature. parameter may be used to scale the reference temperature. For example, ELEMENT_BOUNDARY_CONDITION( "time dependent outside temperature" ) { surface_sets = {"tri_faces", "quad_faces"} variable = convective_heat_flux type = constant constant_value = 12.5 reference_temperature = 1 reference_temperature_multiplier_function = "outside temperature" MULTIPLIER_FUNCTION( "outside temperature" ) { type = cubic_spline curve_fit_values = { 0*3600, 295 ; 12*3600, 312 ; 24*3600, 295 ; } curve_fit_variable = time Similarly, reference_species_multiplier_function may be used to scale the reference species. Reflections of waves from inflow and outflow boundaries can be reduced by setting non_reflecting_factor to a positive value. This parameter applies to inflow types with inflow_type = mass_flux and to outflow types. For inflows only the mass flux element boundary condition is modified and for outflows only the pressure element boundary condition is modified. It is ignored for all other cases. Also, there is no effect for the constant density model unless a positive isothermal_compressibility value is specified. has no effect; the given element boundary condition is imposed. Using a value of one gives the full effect; waves pass through the boundary without reflection but at the cost of the variable drifting away from its specified value. Other values yield a blend between these effects. Values greater than one for may lead to instabilities for the mass flux condition. If running average fields are defined, that is, in the command, then an enhanced non-reflecting boundary condition is used, leading to less reflection for the same value of . The needs to be turned for this option. Note: When flow = compressible_navier_stokes, the running_average option does not need to be turned on; the running average files are automatically used in non- reflecting boundary conditions. Pressure loss at inflow or outflow boundaries may be modeled through the pressure_loss_factor parameter. For variable=pressure, the following term is added to the pressure boundary condition: is given by is the density; is the velocity; is the outward-pointing normal to the surface; and sgn( ) is - for inflow boundaries and +1 for outflow boundaries. If is given, it is applied to the pressure variable before this term is added. The pressure loss factor may be scaled with . For example, ELEMENT_BOUNDARY_CONDITION( "outflow" ) { variable = pressure pressure = 0 pressure_loss_factor = 100 pressure_loss_factor_multiplier_function = "pLoss_TMF" MULTIPLIER_FUNCTION( "pLoss_TMF" ) { type = piecewise_linear curve_fit_values = { 1, 1 ; 10 , 0 } curve_fit_variable = time_step The hydrostatic pressure term may be added to pressure and stagnation pressure element boundary conditions by setting hydrostatic_pressure=on. This term is given by where ρ is the density; x is the current coordinate vector; x0 is any coordinate vector where the hydrostatic pressure is zero, given by hydrostatic_pressure_origin; and g is the gravity vector given by body_force in the parent ELEMENT_SET command or is the gravity_vector given by gravitational_acceleration in EQUATION. If variable is not pressure or stagnation_pressure, then hydrostatic_pressure has no effect. If multiplier_function is given, it is applied before this term is added. If gravitational force is modeled using the GRAVITY command, then hydrostatic_pressure=on should be set for most pressure or stagnation pressure boundary conditions in order to properly account for the hydrostatic pressure. This is most commonly needed on outflow boundaries. For free surface pressure conditions hydrostatic_pressure is normally set to off in order to ensure that the imposed pressure is constant along the entire surface. If this pressure or the nominal height of the free surface is nonzero, then hydrostatic_pressure_origin may be adjusted for any other pressure boundary condition so that its evaluation at the nominal surface height equals the pressure imposed on the free surface. That is, ELEMENT_BOUNDARY_CONDITION command with hydrostatic_pressure=on, and SIMPLE_BOUNDARY_CONDITION. In some circumstances it is necessary to "turn off" or "deactivate" a previously-defined element boundary condition. This is accomplished by setting the active flag through active_type. A value of all is the default and means that all surfaces in the set are active (active flag = 1) and will have the boundary condition imposed on them. A value of none means that no surface is active (active flag = 0). active type is used to automatically turn on and off element boundary conditions in the presence of interfaces. The boundary condition is applied (active flag = 1) if the surface does not belong to set or if it does not find a contact surface of the appropriate medium. That is, each surface will fall into one of five possible cases: • The surface is not in any INTERFACE_SURFACE set; active flag = 1. • The surface is in an INTERFACE_SURFACE set, but it does not find a contact surface; active flag = 1. • The surface finds a contact surface of fluid medium; active flag = 0. • The surface finds a contact surface of solid/shell medium, and variable is defined on the solid/shell side, currently only the heat flux variables,; active flag = 0. • The surface finds a contact surface of solid/shell medium, and variable is not defined on the solid/shell side, that is, it is not one of the heat flux variables,; active flag = 1.
{"url":"https://help.altair.com/hwcfdsolvers/acusolve/topics/acusolve/element_boundary_condition_acusolve_com_ref.htm","timestamp":"2024-11-12T22:35:55Z","content_type":"application/xhtml+xml","content_length":"140762","record_id":"<urn:uuid:c5e41139-3fd8-41f7-a39e-0331e49634fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00091.warc.gz"}
Tell us how many units you have at each tonnage level and watch your savings instantly appear below 5 Tons Units 7.5 Tons Units 10 Tons Units 15 Tons Units 20 Tons Units 25 Tons Units 30 Tons Units 40 Tons Units 50 Tons Units 60 Tons Units 75 Tons Units 100 Tons Units 200 Tons Units Total Units = {{total_units}} What is your energy cost per kWh? $ Optional: How many emergency service calls have you had in the past 12 months? Average cost per call? $ Based on industry averages, you could be saving (per year): Cleaning Cost Savings = ${{cleaning_savings_year | number:0}}/year Energy Savings = ${{energy_savings_year | number:0}}/year Emergency Repair Savings = ${{repair_savings_year | number:0}}/year TOTAL SAVINGS PER YEAR = ${{total_annual_savings | number:0}} Get the Full Report
{"url":"https://permatron.com/roicalculatoriframe","timestamp":"2024-11-02T14:28:20Z","content_type":"text/html","content_length":"9199","record_id":"<urn:uuid:64410182-1b1f-4105-8af3-2b677b0423b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00788.warc.gz"}
Design and Kinematics Analysis of a Parallel Mechanism to be Utilized as a Luggage Door by an Analogy to a Four – Bar Mechanism Design and Kinematics Analysis of a Parallel Mechanism to be Utilized as a Luggage Door by an Analogy to a Four – Bar Mechanism () 1. Introduction Trunk lids are large parts used in commercial vehicles such as midibuses and buses and are made of steel or aluminum. They compose an average of %10 of the total vehicle surface, i.e. approximately 20-30 m^2. When the present luggage door mechanisms are analyzed, it is observed that there are two types of mechanisms used in the design of such structures: Traditional top-hinged system and parallel hinged system [1]. The traditional top-hinged system has a simple structure and maintains rigidity with its support along the hinge and wide ribbed profiles. When opened, it is a shelter. However, the mechanism requires a heavy door frame and has a large trajectory of motion, so it is impractical to be used in narrow spaces [1] (see Figure 1). Parallel hinged system has a narrow and safe trajectory and takes up less space when fully opened, so it is advantageous in terms of ease of use in narrow spaces. This system is used widely especially in coach and 12 m buses. Although it has a heavy and complex hinge structure, it is possible to employ a light door frame. In addition, parts of the mechanism have adjustable lengths. When the door is closed, the hinge system occupies great volume, which is a disadvantage [1] (see Figure 2). 2. Design In this study, the trunk lid is designed as a parallel hinged mechanism, as shown in Figures 3 and 4. The mechanism consists of a rotating arm hinge 1), a damper connection bracket welded on it 2), a drag link maintaining parallelism of the door 3), a main frame connection bracket 4) and a damper 5). The damper (gas spring) is located vertically, which different than the previous designs. The mechanism has a compact structure with the damper connection bracket 2) and the main frame connection bracket 4), occupying a small space when the door is closed, providing an efficient usage of the luggage compartment volume. Thus, additional components like air filter or electrical installation box can be used in the luggage volume. Moreover, water and dust permea- bility risk is minimized, because the damper forces act in the compression direction when the door is closed. 3. Kinematics of the Four-Bar Mechanism Four-bar mechanisms are used extensively and effecttively in industry. A four-bar mechanism is shown in figure 5. The mechanism consists of three moving bars AB, BC and CD with lengths L[2], L[3] and L[4], respectively, pinned at A and D. AD is the fourth bar and has a length of L[4]. All bars are assumed to be rigid [2]. The objective is to determine the velocity and acceleration of a point and/ or angular velocity and angular acceleration of a bar, when the motion of some points and/or bars are known. Either link AB or CD is driven by a motor. Assuming that angle θ[2](t) is known, other angles and their first and second derivatives with respect to time should be derived in terms of By inspecting figure 5, the closure equation can be written as in (3.1), and its components along and perpendicular to AD can be written as in (3.2) and (3.3), respectively, as Derivation once with respect to time yields Angular accelerations In matrix form These are nonlinear equations in terms of Note that point B can possibly be below AD, if sin α is negative. Next, β and γ are determined using the following expressions written using triangle BCD Note that a real solution for Solution 1: Solution 2: Angles Figure 6 are obtained as follows. Input values L[1], L[2], L[3] and L[4] seen in the red box in figure 7 can be found in Section 4.1. Equations in the blue boxes are equations (4.1)-(4.3) for the opening of the door and equations (4.4)-(4.6) for the closing of the door, given in Section 4.2.4. The yellow box can be expanded as shown Figure 8, where (3.9) and (3.12) are used for L and cos β, respectively. All the other equations can be found in section 3. The green box in figure 7 can be expanded as shown Figure 9, where (3.4) and (3.5) have been used to derive the first derivatives of 4. Kinematics Analysis of the Design with the Four-Bar Mechanics Approach 4.1. Four-Bar Mechanism Model of the Design In figure 10, four-bar mechanism analogy of the design is illustrated on the cross section of the mechanism. As illustrated in figure 10, points A and D are fixed on the mechanism while B and C are the selected design points. The system can be treated as a four-bar mechanism when it is modeled this way. Some physical quantities are as follows. 4.2. ANSYS Model of the Design It is important to determine the behavior of a mechanism under a loading condition. Forces acting on the arms, how these forces are transmitted and the joint forces determine the behavior of the mechanism [6]. Variation of θ[2] with respect to time should be solved for. In this study, the behavior of the mechanism under a certain loading condition is modeled with the aid of ANSYS, see Figure 11. 4.2.1. Material Properties The mechanism is assumed to be rigid. Densities of steel and aluminum are given in the Table 1. The base door is made of aluminum while other parts are made of steel. 4.2.2. Boundary Conditions and Loads The door frame is fixed to the ground so that all degrees of freedom are restricted. Dampers are modeled as spring elements exerting a force of 300 N in closed position. Motion can be analyzed in two parts. In the opening phase, an external force has to be applied until the gas spring direction passes the neutral axis. After that, the door is opened with the help of the gas springs without an external force. In the closing phase, like in the opening phase, an external force has to be applied until the gas spring direction passes the neutral axis and the door is closed with the help of the gas springs without an external force. Since there are two phases of motion, results are obtained for the two phases separately. When analyzing the opening phase, a force of 100 N is applied to the door handle in the +y and +z directions until the gas spring direction reaches the neutral axis. In the closing phase, a force of 180 N is applied in the –z direction. The applied external force goes to zero after the gas spring direction reaches the neutral axis in both phases, (see Figure 12). 4.2.3. Geometry The ANSYS CAD model is seen in figures 13 and 14. The main structure of the trunk lid is made of aluminum sheet and formed to be compatible with the bus side surface. There’re 5 u-profile ribs (3 of them in the z direction, 2 of them in the x direction) made of aluminum and welded on the inner side of the main structure of the trunk lid. These profiles improve the rigidity of the mechanism. The holes placed on these ribs are used to place the locking system parts such as locks, handle, screw, bolt, washer, etc. Parts connected to each other with weld or bolt are imported to ANSYS as a single piece in order to lower the dimensions of the solution matrix. Only the parts that move relative to each other are regarded as separate entities. 4.2.4. ANSYS Analysis Results Variation of θ[2 ]in the opening and closing phases can be seen in figures 15 and 16. In figure 15, x axis denotes time in seconds while y axis denotes θ[2] in degrees. The luggage door opens 165° in 2.2876 s and stops at that point. Figure 17 illustrates θ[2](t) in the opening phase of the door with a curve fit into the data imported from ANSYS. θ[2](0) = 6.6˚ is added to the data taken from ANSYS and the values are converted into radians. A polynomial of fifth order has been fit for θ[2](t) in Excel as follows. First and the second time derivatives are In figure 16, x axis denotes time in seconds while y axis denotes θ[2 ]in degrees. The luggage door closes entirely in 1.4235 s. Figure 18 illustrates θ[2](t) in the closing phase of the door with a curve fit into the data imported from ANSYS. θ[2](0) =165˚ is added to the data taken from ANSYS and the values are converted into radians. A polynomial of fourth order has been fit for θ[2](t) in Excel as follows. First and the second derivatives are 4.3. Results of the Kinematics Analysis In this section, results of the kinematics analysis obtained through a four-bar mechanics approach are presented. Figures 19-23 are plotted by exporting the data obtained from the solutions equations (4.1)-(4.6) to the Matlab model. Considering the four-bar mechanism shown in figure 5 and the cross-section of the mechanism shown in figure 10, it is seen that the time dependent curves of θ[2] and θ[4] shown in figures 20 and 23 fit each other. This means that the luggage door has a stable trajectory of motion. It is seen on the same figures that the amplitude of the θ[3](t) curve is nearly equal to zero. This means the door has a trajectory parallel to the sidewall surface of the vehicle. Considering the trajectory of the luggage door mechanism, position of the gas springs is an important ergonomics criterion for the design. It is observed that the direction of the force vector changes at the point where the rotational acceleration curves shown in figures 19, 21 and 22 change directions. The gas spring force vector passes the neutral axis at that point, so this point is one of the most important criteria of the design, (See Figure 24). 5. Conclusions In this study, design of a parallel hinged luggage door mechanism and its kinematics analysis are presented. The design cycle shown in figure 25 has been used in this study to obtain the final design. This cycle is especially used for optimum positioning of the external load points effectively. This way, design objectives of having the minimum working space and operation in narrow spaces in terms of ease of use and ergonomics improvement is achieved.
{"url":"https://www.scirp.org/journal/PaperInformation?PaperID=4602","timestamp":"2024-11-14T07:29:07Z","content_type":"application/xhtml+xml","content_length":"110915","record_id":"<urn:uuid:2cf31c2d-df67-44c3-bd29-16568d4f38e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00032.warc.gz"}
Types of Quadrilaterals - Math Steps, Examples & Questions Examine the properties of the quadrilateral, including side lengths and angle measurement. Shape A : 4 equal angles (right angles), each pair of opposite sides are equal in length, 2 pairs of parallel sides. Shape B : One pair of equal angles (right angles), one pair of parallel sides. Shape C : 4 equal angles (right angles), 4 equal sides, 2 pairs of parallel sides. Shape D : 4 equal angles (right angles), each pair of opposite sides are equal in length, 2 pairs of parallel sides. Shape E : 2 pairs of equal angles, each pair of opposite sides are equal in length, 2 pairs of parallel sides. What is a quadrilateral? A quadrilateral is a polygon with four sides. How many types of quadrilaterals are there? There are several types of quadrilaterals, but some common ones include squares, rectangles, parallelograms, trapezoids, and rhombuses. What is a square? A square is a quadrilateral with four equal sides and four right angles. What is a rectangle? A rectangle is a quadrilateral with four right angles, but its opposite sides are equal in length. What is a parallelogram? A parallelogram is a quadrilateral with opposite sides that are parallel and equal in length. What is a trapezoid? A trapezoid is a quadrilateral with one pair of parallel sides. The other two sides are not parallel. What is a rhombus? A rhombus is a quadrilateral with four equal sides. It also has opposite angles that are equal. Are all squares also rectangles? Yes, all squares are rectangles because they have four right angles. However, not all rectangles are squares since rectangles can have sides of different lengths. Right trapezoid Isosceles trapezoid a parallelogram, rectangle, and square. a parallelogram, rectangle, and rhombus. a rectangle. a parallelogram and rectangle.
{"url":"https://thirdspacelearning.com/us/math-resources/topic-guides/geometry/types-of-quadrilaterals/","timestamp":"2024-11-04T17:26:14Z","content_type":"text/html","content_length":"248655","record_id":"<urn:uuid:648647c4-bd5e-4b5c-b2a8-75ccb8520ca4>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00491.warc.gz"}
A New Approach Method of CH4 Emission Estimation from Landfills Municipal Solid Waste (MSW) 1. Introduction In May 2013, the United Nations (UN) adopted the KYOTO Protocol [1] relating to the pollution emission agents and the transfer registers (based on the so called PRTR Protocol or Kiev Protocol) [2] together with the UN Convention on climate changes. This Convention is referring, among others, to the landfills having a daily activity of more than 10,000 tons/day MSW which amounts to more than 450,000 tons/year. For these [4] emission rate [3] has to be calculated and the results have to be communicated to the public. EU has adopted the European Emission Register in order to be in conformity with the PRTR Protocol. This Register provides some criteria to be fulfilled: transparency, coherence, the possibility to compare results. These criteria are a condition for the calculated results to be accepted into a national data base. Romania has adopted the UN PRTR Protocol and for the MSW landfills with more than 10,000 tons/day, the CH[4] emission will be included in a register. The Member State governments have to report all aspects related to the Climate Changes [3] to an inter-governmental group. It is very clear that a method to estimate the CH[4] methane gas emission from MSW landfills is absolutely necessary [3] . This method has to cover the calculation of the CH[4] emission from both conforming and non-conforming MSW Romanian landfills [4] [5] . This method was applied for the CH[4] emission calculation of 13 MSW landfills―conforming and non-conforming. In this paper the calculated values for CH[4] emission [4] [5] [6] and the equivalent CO[2] for 1 non-conforming and for 2 conforming landfills are Analyzed landfills are located in Satu Mare, Ilfov and Bucharest municipality, Romania. The proposed method has a high degree of efficiency. The CH[4] emission calculus for those 13 Municipal landfills (msw) and the drawing up adjacent graphics related to the equivalent of CO[2] demonstrate that the GEG is present. The Romanian Environmental Authorities have to act on this matter and to acknowledge about the GEG intensity and its duration [7] , in the same time. The Proposed method allows us the quantitative evaluation of CH[4] emission to be used as a natural energy source. Within the actual management of wastes only the sort of wastes having economical energy value is applied, according to the Europe Council provisions. It is to be mentioned also that only 20% of the generated wastes is sorted. In the deposit body, they are not included: metallic wastes, plastics, tires, recyclable wood or with energetic value, paper wastes and recyclable cartoon. It is to be mentioned also that, from information delivered by the local source, within the landfill body they are not included: inert wastes (construction and demolition), plastics, soils and stones, asbestos; the total contents of these wastes are not considered to be more than 10%. I have to make a remark: the drawing up graphics were obtained by manual calculation rather than using specific software. 2. Present Situation All types of wastes were deposited together [4] , in specially designated MSW deposit areas, those coming from the anthropic activities as well as those generated by the agriculture and live-stock farm activities, e.g. animal and bird dejections. The bio-degradable wastes (rubbish) generated by intensive agriculture have to be taken into consideration as well. The problem of the global warming and the obligation to apply the Kyoto Convention requirements involve the fulfillment of the rules regarding the limitation of the MSW gas emission [7] and the prohibition to have MSW landfills which do not comply with the rules of environmental protection [2] . Since 1999 Romania has started to have MSW landfills, in ecological condition, in accordance with the European regulation in the field, and, from 2007, when Romania adhered to the European Union (EU), all the MSW landfills have to respect, strictly, the EU legislation, as provided within the 75/442/CE Directives [5] [8] provisions. This Directive [5] [8] was adapted [4] to the Romanian legislation by Government Decision [4] order no. 349/2005. 3. Estimative Methods for Ch[4] Gas Emission Calculation The quantity of the CH[4] gas emission from MSW landfills can be estimated, by calculus applying two methods, as follows: METHOD No. 1 IPCC 2006 Method-Default Method (DM). This method supposes that a non-dangerous MSW deposit will generate [9] [10] , within a year, a certain quantity of CH[4] and, in the next year, it will be a new amount of CH[4] This method will not take into consideration the hypothesis that an MSW deposit is a conglomerate mixed wastes one (see Table 1). Another factor to be taken into consideration is the time-the basic factor for GES emission [10] . Different MSW components are gradually, deteriorated in time, so CH[4] and CO[2] as well as the non-methane gases, and are generated. In order to illustrate results due to the method 1 use, the conform MSW calculus equations regarding CH[4] emission [10] [11] will be indicated, as follows. These calculus equations are: ・ L[0]-CH[4] generated potential ・ R-CH[4] recovered at the inventory year of[4] is burned and not collected; if not, the recovered quantity of CH[4] calculated by using this method will be reduced from the CH[4] generated ・ 0X-oxide factor having a fractionary values-0 for non-conforming deposits and 0.1 for the well arrangements (conforming) deposits. CH[4] generated potential, where: ・ MCF-CH[4] correction factor, whose values are dependent by the location and the management of MSW; ・ DOC[f]-the DOC dissimilated fraction-0.55 having values within the interval 0.5 ÷ 0.6; ・ F- CH[4] fraction part-from deposit gas (LFG) [5] , given value is 0.5; ・ 16/12-the C conversion coefficient within CH[4]; The Dissolved Organic Carbon (DOC) is determined [11] [12] by using the relation: ・ A-the MSW fraction represented by paper and non-reciclable textiles [6] [10] [13] [14] . ・ B-the MSW fraction represented by garden and parks wastes, and other bio-degradable organic wastes, excepted food wastes [6] [10] [13] [14] . ・ C-the MSW fraction represented by food wastes and other bio-degradable wastes, [6] [10] [13] [14] ; ・ D-the MSW fraction represented by woods or straw wastes, [IPCC], [6] [10] [13] [14] ; This method has the following difficulty: -Don’t take into consideration that in the last 6 months deposited MSW are not degradable -The CH[4] emission quantity is very high (inadmissible) It is supposed that a MSW landfill will generate, within a year, a certain amount of CH[4] gas emission which can be estimated [10] . This method doesn’t take into consideration the hypothesis that a MSW landfill is a mixed conglomerate of wastes (rubbish). Another factor to be considered is the time which is the basic factor for CH[4] gas emission [10] . Different components of the MSW landfill are, gradually, degraded in time, and CH[4], other gases are produced [6] . METHOD No. 2 I developed a new calculation method for the methane gas emission estimation, from the Romanian waste landfills [7] [11] , method called: “DANILA VIERU METHOD FOR A CONFORMING AND NON-CONFORMING MSW LANDFILLS CH[4] GAS EMISSION ESTIMATION IN ROMANIA, BY CALCULUS”. According to the above- mentioned method, it is assumed that the waste (rubbish) from MSW landfills will be gradually degraded [11] based on the following factors [10] [12] : Structure of the wastes (rubbish) composition; Environmental factors existing in that area; The thickness of the waste (rubbish) layer; The compacting grade (level); The depth of the place where the MSW is located; Time passed from the first deposition of wastes (rubbish). Due to the time factor, this method was called: “DANILA VIERU METHOD FOR CONFORMING AND NON-CONFORMING MSW LANDFILLS CH[4]GAS EMISSION ESTIMATION IN ROMANIA, BY CALCULUS”. The IPCC-International Experts Group on Climate Change makes recommendation [9] related to the use of some coefficients concerning the estimation of CH[4] gas emission from MSW landfills but no to the use a specific calculus formula. In the case of a MSW conglomerate landfill, having a broad range of types and amounts of wastes (rubbish), Romania did not possess an adequate (proper) formula for the MSW CH[4] gas emission estimation up to the year of 2012. The statistics of the wastes (rubbish), under the rule of the Regulations no. 2150/2002 on waste statistics [17] do not solve the problem of the composition of the waste (rubbish) from MSW. The use of waste statistics assumes that the waste (rubbish) should be analyzed by means of a representative sample of economic operators and human agglomeration [12] . Taking into consideration that every district of Romania has approx. 200 economic operators and urban agglomeration we shall have approximately 8400 economic operators, in total [9] . Approximately 500,000 economic operators are assumed to be in the country which means that statistics representation will cover only 1.6% of the total country economic operators. This fact is quite The method: “Danila Vieru method for conforming and non-conforming MSW landfills CH[4] gas emission estimation, in Romania, by calculus”, makes use of the following formula: This formula (equation) has some advantages, e.g.: [ ] 1) The hierarchy [6] of degraded MSW, IN TIME, under the environmental factors [atmospheric precipitations, annual average temperature, alternating periods of rain and drought, freezing and non-freezing periods, the degree of 2) The use of time periods for the degradation of MSW; 3) The use of IPCC recommendation related to the application of the methodology calculation formula of CH[4] gas emission from MSW landfills; 4) Taking into consideration the specific environmental conditions of every district of Romania; 5) The specific economic conditions of every district, such as: industrial development, hand-made production, various branches of agriculture, etc. are taken into consideration; It is well-known that CH[4] methane is a specific gas, and its contribution (percentage) to global warming is about 4 ÷ 5% so that the need for the quantification of CH[4] gas emission is imperative. In the meantime, measures to reduce the contribution of the CH[4] gas emission from MSW landfills have to be taken into account. In July 16, 2009, due to the presence of non-conforming MSW landfills in Romania, some of them are closed while others will be in transition periods, in the case of MSW landfills, the emission of CH [4] methane gas will continue even after the closing period of non-conforming MSW landfills until approximately the year 2017. Before wastes (rubbish) are deposited within the body of MSW and a rational sorting have to be are done. After the closure of MSW landfills, the quantity of the CH[4] gas emission will decrease but still will continues to exist [14] . Following the legal conditions for opening a new MSW landfill it is absolutely necessary to know the evolution of CO[2] (in equivalent), the location of the new MSW landfill and the potential impact over the environment. As it is known, in approximately years, the warming effect will be intensified due to the collection of the gas MSW landfill. In my opinion, the above mentioned remarks should be taken into consideration when a CH[4] methane gas emission calculus formula is applied, for the entirely territory of Romania. 4. Example of Calculus, Methodology―The Assessment Basic consideration: a) The percentage composition of MSW landfill body is in accordance with the data provisions given in Table 1. b) The wastes (rubbish) from the MSW landfill body are gradually degraded in accordance with the environment conditions; c) To calculate the quantity of CH[4] gas emission from degraded MSW, at the year of calculation, the IPCC recommended values [9] have been taken into consideration. d) The MSW degraded quantity has the same percentage composition as the MSW landfill body; e) The MSW degraded quantity generates DOC-Dissolved Organic Carbon, and, as a consequence, the CH[4] gas emission is produced. f) The MSW degraded quantity calculated, in the year T, is given by the expression: Q[mswdegrad.T] Within Table 1 the waste composition, as% from total, was established following information delivered by: ・ Local Environmental Authorities, in accordance with the Regalement of the Council of Europe no. 2150/2002 and the European Parliament information with referring to the waste statistics (November 25/2002) [17] . For example, for the Region 8 Bucharest Ilfov-landfill Chiajna, the information delivered (see Figure 1, also) are: “Methane Vol.?54.4%, Carbon Dioxide Vol.?38.1%, Oxygen Vol.? 1.3%, Nitrogen Vol.?6.1%, etc. As an important remark, within the year 2011 about 7.5 million cubic meters of Methane gas has been extracted.” ・ Direct observation done at the MSW landfills location with referring to the wastes composition; ・ Direct information delivered by local authorities regarding annually collected wastes quantities and the way of the management; ・ Information delivered by the MSW landfills administrators related to the collection area, quantities and type of wastes included in MSW. Figure 1. The Evolution of CO[2] (equivalent) and CH[4] emission from the landfill Rudeni-Chitila-Iridex, Environmental Reg. 8, Bucharest, Ilfov District, in the period: 2000 ÷ 2011. Table 2 presents the composition of the MSW landfills wastes, located within 3 environmental regions areas-region 8 Bucharest Ilfov, Satu Mare County and Bihor county. It is to be mentioned that the Waste composition, as a conglo- merate landfill, is subjected to the environment factors, and as a consequence, the LFG gas (mainly, CH[4]) is generated, covering the total lifetime of the deposit. 5. The Evaluation of Q[mswdegrad.T] in the T Year of Calculation To determine the MSW degraded quantity, in the first year of emission, the following formula has been used: After the first year, the calculation formula became: [ ] ・ Q[msw.T]-MSW, the amount deposited in the account, [Gg]; ・ Q[mswT][-1)]-MSW deposited one year ago; [Gg]; ・ Q[msw][undergradT-1]-the remaining amount of MSW degraded after year calculation [Gg]; ・ K: is the degradation rate of MSW. This factor depends on waste composition and site conditions, and describes the degradation process rate. The IPCC Guidelines [9] give, for K, a very wide range of values between 0.005 and 0.4. ・ t: time of degradation ・ t: time of wastes degradation within deposit body; during calculation process, t is replaced with relation (13 − m)/12 or (25 − m)/12, where m re- present the no. of months when msw wastes were degraded within deposit body, at the calculation year. m?within the interval 7 ≤ m ≤ 12, m- within the interval 7 ≤ m ≤ 18, represents no. of months when 45% of the wastes is degraded in the proportion of 45%. The m values are established in accordance with the deposit nomograme, based on the deposit equation Table 2. The MSW percentage (%) composition within the deposit body in some environmental Romanian regions. equation has an unique solution, but in every year has another expression i.e. in the year ・ T-represent the year of calculation not the current calendar year. A certain MSW deposited quantity remains undegraded every year [8] [12] . This quantity will be taken into consideration in the next year as the Q[mswundegrad.T]. This quantity can be estimated by using the formula: The calculation of the total Dissolved Organic Carbon?(TDOC[dissolved.T])- quantity from MSW degraded, at the year T, Q[msw][degrad.T] has been done by means of the following formula A = DOC generated by Q[msw][degrad.T] which contains% MSW[biodegrad] stated; [ ] k[0]: in accordance with [9] DOC generation ratio by% MSW[biodegrad.degrad.T], deposited; B = DOC generated by Q[msw][(G+P)degrad.T] which contains %MSW[(G+P)], stated; k[1]: in accordance with [9] , DOC generated ratio by% MSW[(G+P)degrad.T], deposited; C = DOC generated by Q[msw][degrad.T] which contains %MSW[H][+C+text.], stated; k[2]: in accordance with [9] , DOC generated ratio by %MSW[(H+C+] [text.)degrad.T] , deposited; D = DOC generated by Q[msw][degrad.T] which contains %MSW[(wood+straw)], stated k[3]: in accordance with [9] , DOC generated ratio by% MSW[(wood+strawdegrad.)T], deposited; E = DOC generated by Q[msw] [degrade.T] which contains %MSW[sludge], stated; k[n]: in accordance with [9] , DOC generated ratio by% MSW[sludg.degrad.T], deposited; G = DOC generated by Q[msw][degrad.T] which contains %MSW[industry,] stated; k[4]: in accordance with [9] , DOC generated ratio by% MSW[ind.degrad.T], deposited. The total composition of MSW wastes within the body can be changed annually, at two years, at three years or five years depending on the best environment information detained. % TDOC[dissolved.T] is the ratio [msw][degrad.T] and it is determined by using the following formula: where Q[msw] [taken] [into] [consid.T] is calculated by using the relation: ・ DOC[f] = fraction [%] of DOC dissolved under anaerobic conditions (taking into consideration the environmental condition from landfill) which generated CH[4]. The calculus can be done in this way: ・ Empirical [16] by using the formula: 0.014 T + 0.28, where T?is the annual average temperature, in C^0, in the district where MSW is located. By using IPCC recommended values for the temperate-continental zones, in Eastern and Central Europe, [5] [9] we found the following percentage values: 50%, 55%, 60% and 77%. If we take into consideration the Romanian districts climate zone conditions the recommended values (as percentage) are to be: 43%, 45%, 50%, 55% and 60%. ・ 1.3333(16/12) is the conversion factor of the carbon from CH[4] emission. ・ F-MSW landfill CH[4] gas emission correction factor and depends on the management of landfill; this factor assumes the compacting level of the solid municipal waste (rubbish) MSW landfill body and its values are: a) 0.4 ÷ 0.5-if b) 0.6 ÷ 0.7-if the c) 0.8 ÷ 0.9-if ・ F[r]-is a correction factor of CH[4] gas emission fraction from gas deposit [Landfill Gas-[r] are within interval 40 ÷ 60%,^ Taking into consideration the above formula and using adequate input data, the graphical representations for the evolution of the equivalent CO[2] of MSW landfills [4] [9] [13] ―Landfill Rudeni-Chitila-Iridex, Landfill Vidra-Ecosud are presented in Figure 1 and Figure 2. The evolution of the equivalent CO[2] for a non-conforming MSW landfill is presented in Figure 3. It is to be observed that the CH[4] gas emission continues, after the closing date?the year 2010, as Wastes deposited quantities (msw) within deposit body are shown in Table 3. These quantities, due to “m” values, according to the Nomograme [15] , generated CH[4] quantities as presented within Figure 1, with the following signi- Figure 2. Evolution of CO[2] (equivalent) and CH[4] Methane gas emission from the landfills Vidra-Ecosud, Ilfov District, in the period: 2000 ÷ 2011. Figure 3. The MSW landfill disposal time period: 1970 ÷ 2015 lasting for CH[4] gas emission, after disposal was completed. The percentage composition of MSW may be changed, year by year. The sludge from MSW can be taken into consideration, separately or may be incorporated within bio-degraded waste (rubbish). Table 3. Present the MSW wastes deposited within the body, for the period 2000 ÷ 2011. ficance [9] [13] [15] : in the year 2011 there were collected 7.5 million cubic meters of CH[4] which have been used for green energy production. For the period 2000 to 2011, the percentage (%) of MSW composition has been considered, as shown in Table 1. Plastic wastes, inert waste, construction and demolition have not to be taken into consideration because they will not affect the CH[4] gas emission [8] [14] . The data were confirmed by collection data. 6. A Case Study Within 2000 ÷ 2011 period (see Figure 1) quantities belonging to the interval 250 ÷ 400 Gg, there were deposited, annually. The GEG Effect has been intensified has been intensified, so that in the year of 2011 and a quantity of 7.5 million cubic meters of CH[4] has been used for electric energy production. As a direct consequence the GEG Effect decreased considerably, see Figure 1. For the period 2000 to 2011, the CH[4] calculated values of gas emissions are presented in Figure 1, by using Formula (1): Using some indicators related to the MSW landfills CH[4] gas emission, a calculation model is presented below. These indicators are those recommended by IPCC group of experts, group for the Central and Eastern Europe, [9] as follows: At the starting year of After the emission starting the expression By using Equation (4) the By using formula shown below the percentage of such as: The terms A, B, C, D, E, G are calculated at the year 2001, by using adequate equations [4] methane gas [5] [7] [10] ; ・ 1.52% is the percentage% TDOC within landfill body; ・ 0.5 represent DOC[f] taking into consideration the existing condition from the analyzed emission; ・ 1.3333 (16/12) represent C from CH[4]; ・ 0.5 represents the% content of CH[4] Methane gas within Landfill Gas (LFG). It is to be observed that the CH[4] gas emission increased gradually, but not suddenly, in accordance with the environmental condition of the landfill location [6] . A certain wastes (rubbish) quantity of [4] Methane gas: At the year 2011, for the same MSW landfill-Chitila-Rudeni-Iridex, the quantity of MSW landfill deposited taken into consideration for the calculus of By using the Formula (2) the non-degraded quantity of By using the Equation (12), the percentage The parameters-A, B, C, D, E, F, G, are determined at the year 2011, by using corresponding equations. The quantity of Methane gas; [f] taking into consideration existing condition from the analyzed emission; [4] methane gas within Landfill Gas (LFG). It is to be observed that the CH[4] gas emission increased gradually, but not suddenly, in accordance with the environmental condition of the landfill location [6] . A certain waste (rubbish) quantity of MSW landfill will remain un-de- graded and will be taken into consideration in the next year, so the process of [4] Methane gas: It is to be observed that the CH[4] gas emission increased gradually, but not suddenly, in accordance with the environmental condition of the landfill location [4] [6] . A certain waste (rubbish) quantity of MSW landfill will remain un- degraded and will be taken into consideration in the next year, so the process of MSW degraded will generate again DOC. The sludge from MSW can be taken into consideration, separately or may be incorporated within bio-degraded waste (rubbish). 7. Conclusions This article doesn’t comment on the present calculation model but rather draws the attention to a more adapted to the real conditions estimation, by calculus, of the CH[4] gas emission from the actual MSW landfills in Romania, which have to be estimated by the end of 2017. Even if deposited MSW quantities were up to 30 (Gg), in the beginning of 1979 and reached 90 (Gg) in 2010, the evolution of CO[2] exists and has to be known by the Romanian authorities. It is considered that this estimation has to be determined up to the life-end of the considered landfill. As an example, at the existing MSW landfill, in the Satu Mare County, the evolution of the equivalent CO[2] for a period of 42 years up to 2010 when it was closed is presented. The authorities have to inform the public about the evolution of the equivalent CO[2] for existing MSW landfill and also for the location of the new MSW landfills. On the other hand, for the non-hazardous MSW landfills having a capacity between [4] gas at unrealistic values, sometimes more than two times lower with respect to the real one, estimated by usual calculation models. To reduce the greenhouse effect, the evolution of the equivalent CO[2] for the existing MSW landfills in Romania has to be estimated in such a way as to be useful for an applicable environmental planning in accordance with the government’s and the European policy in the field of environmental protection. Other gas emissions such as: NON-METHANE ORGANIC COMPOUNDS, The real estimation of the CH[4] emission quantity from MSW landfills, in Romania, will contribute to a better environmental planning and a better understanding of the contribution that different gases have on the general warming effect and climate changes greenhouse effect. Finally, it is to be noted that the calculation of the CH[4] emission quantity, by using the Danila Vieru’s Method, The proposed method could be applied for the CH[4] emission calculation at MSW landfills quantities between 100 ÷ 200 (Gg/y) e.g. the Satu Mare non- conforming MSW landfill (see Figure 3). This method which was verified for Romanian landfills could be easily adapted for other countries too, paving the way for a real estimation of the methane gas emission, as real as possible. The proposed method can be applied either to the MSW landfills which respect legal providing and those (MSW) which not respect such provisions. The quantitative CH[4] estimation is beneficial for the Environmental Authorities but also for the potential investors interested in the CH[4] management. It is to be noted that potential investors have to know the emission quantity and its duration. After MSW depositing is over, it is absolutely necessary to the time-dura- tion when the emission is stopped. In the same time, after the CH[4] emission is over, the resulted compost should be of interest for the farmers. The author would like to express thanks for their support and understandings to the staff members of the Environmental Protection Agency (EPA) and to the local and regional subsidiaries of ETA’s for their help and suggestions. The author would like to express deep thanks and gratitude, including for the moral support, to Prof. Dr. Eng. Vladimir Rojanski for his advices in order to complete my work in order to estimate the CH[4] gas emission, by a calculus formula, both for conforming and non-conforming MSW solid landfills.
{"url":"https://www.scirp.org/journal/paperinformation?paperid=75325","timestamp":"2024-11-06T08:24:53Z","content_type":"application/xhtml+xml","content_length":"141202","record_id":"<urn:uuid:6a179172-1fe3-4581-8cbd-faa2d35482ec>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00172.warc.gz"}
n3 definition of some list functions This is an ontology for computable list functions. Concatenates separate lists into a single list schema: ($a_1 .. $a_n) list:append $a_s definition: true if and only if the subject is a list of lists and the concatenation of all those lists is $a_s. $a_s can be calculated as a function of the subject. requires: all $a_1, .., $a_n to be lists with all constituent members bound. example: ( (1 2) (3 4) ) list:append (1 2 3 4). Extracts the first member of a list. schema: ($a_1 .. $a_n) list:first $a_s definition: true if and only the subject is a list and $a_s is the first member of that list. $a_s can be calculated as a function of the subject. requires: all $a_1, .., $a_n to be bound. Determines if an subject is a member of the object list or binds every member of the object list to the subject. schema: $a_1 list:in $a_2 definition: true if and only $a_2 is a list and $a_1 is in that list. $a_1 can be calculated from $a_2. requires: $a_2 to be a bound list. Iterates over index/value pairs of the subject list schema: ($a_1 .. $a_n) list:iterate ($i $v) definition: Gets the matching pair of list index and list value for every member of the subject. If the object is a variable, it will create a solution for each member of the subject list. If any member of the object list is a variable, it will create a solution for all matching members of the subject list. If the object is ground and the entry at the specified index matches the specified value, it evaluates to true; otherwise, false. requires: $i is an integer. example: ( 1 2 3 ) list:iterate ($i $v). Extracts the first member of a list. schema: ($a_1 .. $a_n) list:last $a_s definition: true if and only the subject is a list and $a_s is the last member of that list. $a_s can be calculated as a function of the subject. requires: all $a_1, .., $a_n to be bound. Calculates the length of a list. schema: ($a_1 .. $a_n) list:length $a_s definition: true if and only the subject is a list and $a_s is integer length of that list. $a_s can be calculated as a function of the subject. □ all $a_1, .., $a_n to be bound. □ $a_s: unbound, xs:integer (or its derived types) (see note on type promotion, and casting from string) Determines if an object is a member of the subject list or binds every member of the subject list to the object. schema: $a_1 list:member $a_2 definition: true if and only $a_1 is a list and $a_2 is in that list. $a_2 can be calculated from $a_1. requires: $a_1 to be a bound list. Gets the member of a list at a given position (where the position of the first element is 1) schema: (($a_1 .. $a_n) $a_i) list:memberAt $a_m definition: Iff ($a_1 .. $a_n) has an element at position $a_i, and if that element and $a_m can unify. requires: $a_i or $a_m (or both) must be bound. Note that if $a_i is a variable, this builtin may bind it to more than one value (e.g. ((“A” “B” “A”) ?i) list:memberAt "A"). literal domains: □ $a_1 .. $a_n, $a_m: unconstrained □ $a_i: xs:decimal, xs:float, xs:double within the value space of xs:integer (see also note on type promotion and substitution). I.e., in case the double/float/decimal's literal's value is within the value space of integers, the literal will match the domain. In case of a negative integer, the index will count backwards from the length of the list. Removes the second component of the subject list from the first component of that list. schema: ($a_1 $a_2) list:remove $a_3 definition: Iff the subject is a list of two lists $a_1 and $a_2, $a_2 is a subset of $a_1 and $a_3 is a list composed of the members of the $a_1 with all members of $a_2 removed, matching left to right. $a_3 can be calculated as a function of the subject. requires: $a_1 and $a_2 must be bound lists. example: ( (1 2 3 4) (2 3) ) list:remove (1 4).
{"url":"https://w3c.github.io/N3/ns/list.html","timestamp":"2024-11-03T03:13:30Z","content_type":"text/html","content_length":"10755","record_id":"<urn:uuid:8e2edcd2-69fb-4363-8889-ccfade91f471>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00526.warc.gz"}
Multiplication Game & Division Game - Nine Men's MorrisMultiplication Game & Division Game - Nine Men's Morris Multiplication Game & Division Game Nine Men’s Morris Nine Men’s Morris: is a strategy game that most likely originated in the Roman Empire. It is not certain exactly how old the game is as there is conflicting evidence regarding its origin. Estimates on the game’s age vary from about 2 000 to 3 400 years old. This version of the game helps students see the inverse relationship between Multiplication and Division. To develop strategic thinking skills while learning Multiplication & Division Facts and have a little fun doing it. What you Need: 2 Players 1 Nine Men’s Morris Game Board 9 counters of different colors for each player How to Play: 1. The game begins with an empty board and each player in possession of 9 counters. 2. The player with the longest hair goes first. 3. Players take turns either: – placing a counter on an empty space on the board and stating the answer to the algorithm covered aloud to their opponent. – sliding a counter already on the board one space horizontally or vertically along the line to the next empty space. They then read the algorithm landed on to their opponent and answer it aloud. 5. Each time a player gets three of their counters in a row along a single line (this is traditionally called a ‘mill’) they must choose one of their opponent’s counters and remove it from the board. The removed piece takes no further part in the game. The Two Mill Rule – If by sliding ONE piece TWO MILLs are formed, then TWO of the opponent’s pieces MUST be removed. Illegal Moves: 1. Diagonal moves are NOT permitted. 2. Placing a piece on top of another is not allowed. 3. A player cannot jump a counter over another counter. 4. Moving a piece in one space and then moving it back to the same space on the next move is also not permitted. NB Even to try this is considered ‘poor form’. 5. Forming a mill, moving a piece out of the mill, and then moving it back on the next turn is illegal. How to Win: – The player who is left with 2 or fewer pieces on the board loses the game. Possible Before Game Activities: – Give students plenty of real-world examples to reinforce the relationship between the abstract and symbolic. Many students know 8 x 5 gives the same answer as 5 x 8, many are unable to represent these in concrete ways. E.g. 5 gardens with 8 rabbits in each is very different from 8 gardens with 5 rabbits in each. – Use calculators to investigate the inverse relationship between Multiplication and Division. Do some algorithms until the pattern becomes apparent e.g. 5×5=25 so 25÷5=_ and so on. – When the students get the hang of this have them predict what the answer will be before pressing the equals button. – Explore both types of division – How many equal groups can be made from 24 with 6 in each group? Share 30 objects between 5 groups. How many in each group. Use real-world examples of this. During the Game: – Encourage students to predict what options their opponent has and what moves they may make. – Have you found any ways to ‘plan ahead’ to help you win the game? Tell me about it. – Teacher reflects on how to further differentiate this game for future sessions and for which students. Possible after the Game reflections: – Students draw the difference between 5×3 and 3×5. – Students draw diagrams and annotate them to show their understanding of 12÷3 – I found it easy/hard to stay focused in math today because_______. – Today in math I was confused by _______________ so I _________________ to help me understand. – What did you discover about your use of Division strategies while playing this game. Included in this Download: 11 Full-Color Game Boards – Multiplication Facts and their Inverse Division Facts – 2 Times Table to Twelve Times Table 1 PowerPoint file with all game boards shown for easy game introduction and number talks. 1 Set of Game Rules 1 Set of Teaching Notes
{"url":"https://goteachthis.com/index.php/product/multiplication-game-division-game-nine-mens-morris/","timestamp":"2024-11-10T14:28:23Z","content_type":"text/html","content_length":"119520","record_id":"<urn:uuid:d50d6b91-5fc7-4521-89cf-08a6b8dc725c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00139.warc.gz"}
TD-DFT and TD-DFT/PCM approaches to molecular electronic excited states in gas phase and in solution Classe di Scienze PhD Thesis TD-DFT and TD-DFT/PCM approaches to molecular electronic excited states in gas phase and in Ciro Achille GUIDO External advisors: Prof. J. TOMASI Prof. C. ADAMO Prof. B. MENNUCCI Prof. E. PERP` President of the PhD Commission: Prof. V. BARONE In memory of Ciro Guido, my grandfather I would thank some people that have been really important during my PhD and so for the realization of this Thesis. Professor Tomasi, his advice has been crucial in these three years. I became a theoretical chemist thanks to his scientific knowledge. He is, for sure, the person who most inspired me. Professor Mennucci, who patiently followed and supervised me during my PhD. I owe to her my knowledge of solvation models and their applications in quantum chemistry, but especially, I thank her for the huge guidance. I would like to sincerely thank Prof. Adamo and Dr. Ciofini. I owe a lot to them in scientific and human terms. My Parisian experience has enriched me in both these areas. A special thank is also for their group in Paris: Cyril, Stefania, Guillame, Tanguy, Giuseppe, Roberto, Vincent, Fred, Vinca and Di-ane. I am grateful to Prof. Jacquemin, with whom I have worked for some pa-pers and who, together with Carlo, introduced me to the fun part of being a scientist. I would like to thank Prof. Barone, for all the opportunities he gave me during these years. I also had the pleasure of working with him for the publi-cation of one paper. A special thanks to Dr. Cappelli, who has also become a friend. Thanks for all the advice during these years. Finally I would like to thank my group in Pisa for these years and the ties that have been created between us: Carles, Aurora, Filippo, Alessandro, Stefano, Christian.... 1 Introduction 7 2 TD-DFT excited states 11 2.1 Introduction . . . 11 2.2 Brief review of Density Functional Theory . . . 12 2.3 The Time Dependent DFT . . . 26 2.4 Linear Response KS approach to TD-DFT . . . 28 2.5 Analytical Gradients of LR-KS energy . . . 34 2.6 Diagnostic tools for TD-DFT . . . 37 2.6.1 Definition of the Γ index . . . 37 2.6.2 Systems and methods. . . 39 2.6.3 Results and Discussion . . . 41 2.7 Applications: benchmark of TD-DFT structures . . . 54 2.7.1 Computational details . . . 54 2.7.2 Results and Discussion . . . 56 3 TD-DFT/PCM methods 71 3.1 Introduction . . . 71 3.2 PCM: general theory . . . 73 3.3 Theory for solute excited states in solvent . . . 79 3.3.1 PCM-Linear Response and State Specific approach in solvent . . . 81 3.3.2 The corrected-Linear Response (cLR) . . . 87 3.3.3 Analytical Gradients of the Excited State Energy. . . 90 3.4 Applications: Push-pull systems . . . 93 3.4.1 Computational Details. . . 93 3.4.2 Results and Discussion . . . 94 3.5 Applications: PICT vs TICT . . . 108 3.5.1 Computational Details . . . 110 3.5.2 Results and Discussion . . . 111 3.6 A new scheme: the SCLR . . . 122 5 3.6.1 Applications: LR, cLR and SCLR . . . 126 4 Conclusions 133 A Effects of basis set 139 B List of Papers 147 The study of molecular systems in their electronic excited states is one of the major issues in many fields of physics, chemistry, biology and material science with very different applications, going from diagnostic tools in medicine to probes in analytical chemistry, or new devices for technological applications or energetics [1],[2],[3]. Moreover, the absorption of visible light and its conver-sion to other forms of energy is at the heart of some of the most fundamental processes in biology, such as photosynthesis. The electronic structure of a molecule is determined by the quantum behav-ior of electrons inside the system, resulting in molecular states with different symmetries and different spin multiplicities. Many processes can occur when the light is absorbed by a molecule, and in particular, the phenomena we here concentrate on are of electronic origin. Let us illustrate these processes by a Jablonsky diagram in figure 1.1. Jablonsky diagrams can be formulated in a variety of forms, depending on the processes and the systems we are interested to study. The figure here reported shows a typical energy level diagram involved in the dynamics of processes1 [of a] molecule in gas phase, starting from a singlet ground state. The transitions between states are depicted as vertical lines to illustrate the instantaneous nature of light absorption, as transitions occur in about 10−15 s, a time too short for significant displacement of nuclei, in according to the Franck-Condon principle. During the first absorption step (A in the figure), the molecule is excited from the ground to any (singlet) excited state. The excited molecule then tends to release the excess of energy by fast relaxation to lower electronic states. This could be done in a non-radiative way through the so-called in-ternal conversion (IC) process, or in a radiative way by emitting light, i.e. a fluorescence process (F in the figure). The non-radiative process, IC, is esti-1[We excluded a number of other processes, such as quenching or energy transfer] Figure 1.1: A general Jablonsky diagram mated to be in the timescale of picoseconds, while the radiative lifetime of the lowest excited singlet state (S1) is often much larger, in the range of nanosec-onds. When a molecule possesses a heavy atom, large spin-orbit coupling could occur which can open up more channels for absorption and emission. These can be excitations between triplet states, as well as between triplet and singlet states2. In the latter case, the triplet excited state is initially populated by the inter-system-crossing (ISC) process, which is in the time scale of nanoseconds. In the present Thesis only the A and F processes will be analyzed together with the geometrical relaxation accompanying the photophysical evolution of the excited system. In such an analysis another external factor will be intro-duced, namely the effects of the environment. The environment in which a molecule is immersed can in fact alter its states and the corresponding spec-troscopic signals. When absorption and emission UV/vis spectra are measured in solvents of different polarity, it is found that the positions, intensities, and shapes of the bands are usually modified. These effects are generally indicated as solvatochromism. A solvent behaves both as a macroscopic continuum characterized only by physical constants such as density, dielectric constant, index of refraction etc., and a discontinuum medium which consists of individual, mutually interact-ing solvent molecules. Accordinteract-ing to the extent of these interactions, there are 2[Transitions between states of different spin multiplicities, such as triplet-singlet ] tran-sitions, are governed by the spin selection rule. They are absolutely forbidden when the spin-orbit coupling is absent. solvents with a pronounced internal structure (e.g. water) and others in which the interaction between the solvent molecules is small (e.g. hydrocarbons). The interactions between species in solvents (and in solutions) are too strong to be treated by the laws of the kinetic theory of gases, yet too weak to be treated by the laws of solid-state physics. Thus, the solvent is neither an in-different medium in which the dissolved material diffuses in order to distribute itself evenly and randomly, nor does it possess an ordered structure resembling a crystal lattice. Nevertheless, the long-distance ordering in a crystal corre-sponds somewhat to the local ordering in a liquid. Thus, neither of the two possible models – the gas and crystal models – can be applied to solutions without limitation. The changes in excited states induced by the environment are a result of phys-ical intermolecular solute–solvent interaction forces. More in detail, we can individuate two main categories: the first category comprises the so-called di-rectional, induced and dispersion forces, which are non-specific and cannot be completely saturated; the second group consists of specific interactions, such as hydrogen-bonding forces, or electron-pair donor acceptor forces. The main interactions are electrostatic in origin, such as the polarity and the local or-ganization of solvent molecules around the solute molecule. This behavior gives rise to a large use of averaged pictures to model the solute-solvent in-teractions, such as continuum models. However, if we want to treat all those spectral changes which arise from alteration of the chemical nature of the sol-vated molecules by the medium, (such as proton or electron transfer between solvent and solute, solvent-dependent aggregation, ionization, complexation, or isomerization equilibria) or specific interactions, such as hydrogen bonding, a discrete description also for solvent molecules is needed. From the above discussion, it is understandable that a detailed analysis of the excited states requires a variety of experimental techniques and this also applies to theoretical simulations. Indeed different computational schemes are required to understand the properties of different states and to model the ef-fects that the solvent has on them. The aim of the research presented in this Thesis is adressed to investigate the potentiality of the Density Functional Theory (DFT) methods in their time dependent formulation (TD-DFT), and the reliability offered by its combina-tion with a Polarizable Continuum Model (PCM) for the solvent, to determine excitation energies, structures and properties of excited state of molecules in gas and solvent phase. More in detail, this Thesis has focused on two main topics. cal-culation of molecular excited state energies and properties. The goal is to assess performances of exchange-correlation functionals and to individuate di-agnostic tools to analyse the achieved results. This first part is propedeutic to individuate a good computational protocol within the Linear Response Kohn-Sham (LR-KS) scheme to be coupled with the Time Dependent Polarizable Continuum Model (TD-PCM) approach to treat solute-solvent interactions in describing excited states of solvated molecules. To this aim two alternative couplings between TD-DFT and PCM are tested in describing charge transfer (CT) excitations of large systems in solution and to assess the excitation mech-anism in TICT and/or PICT systems. Finally a new self-consistent strategy to describe solvated excited states is developed and implemented whitin the TD-DFT method. The text is organised as follows: • Chapter 2 presents the theoretical background, with a short review of DFT and TD-DFT methods, together with a critical analysis of their limits and potentialities on the basis of newly developed diagnostic tools. In parallel a detailed study of accuracy and relability of TD-DFT in determining excited state structures is presented and discussed. • Chapter 3 presents the extension of TD-DFT approaches to solvation continuum models within PCM framework. Different strategies of pos-sible couplings between TD-DFT and PCM are critically compared when applied to solvent-sensitive molecular probes. Finally, a new self-consistent scheme for calculating excitation enegies in solvent is developed and nu-merical tests are performed in comparison with precedent approaches. • Conclusion remarks and future directions are given in chapter 4. TD-DFT description of molecular excited states One of the major problems in reproducing energies and properties of electronic excited states is due to the computational cost of the method to determine the electronic structure. Nowadays, ab initio methods allow to accurately determine a large set of prop-erties for molecular systems in their ground state. On the contrary, calculations of excited-state properties, including emission phenomena such as fluorescence and phosphorescence, are still a challenge, because they require the nontrivial task of an accurate determination of excited-state structures [4]. On the one hand, fast and cheap purposely tailored semiempirical approaches lack consistency when applied to families of molecules not included in the orig-inal training sets. On the other hand, more reliable theoretical tools, such as EOM-CC [5],[6], MR-CI [7],[8], CAS-PT2 [9],[10],[11], SAC-CI [12], [13], are too expensive to afford the study of the large systems of chemical and industrial At the same time, it is well established that the Kohn-Sham (KS) approach to density functional theory (DFT) can provide an accurate description of a large number of physicochemical properties for the ground electronic state [14]. Furthermore, its current accuracy/cost ratio is significantly lower than that of more sophisticated post-Hatree-Fock approaches. In a similar manner, the time-dependent density functional theory (TD-DFT)[15] could be a viable al-ternative for the evaluation of excited-state geometries and properties. In the Linear Response-Kohn Sham (LR-KS) scheme [16] of the TD-DFT for-mulation, based on the extension to the action functional of Hohenberg and Kohn theorems, as in random phase approximation [17] and Tamm-Damcoff approximation [18], a pseudoeigenvalues equation, where the Lagrange multi-plier matrix contains excitation energies, can be written down. Consequently, first-order molecular properties can be calculated by analytical derivatives [19],[20],[21] by using a Z-vector approach [22], and this introduces the calcu-lation of third-order derivatives of exchange-correcalcu-lation (XC) functional used. The TD-DFT approach presents the typical problems of ground-state DFT methods: the exact XC functional form is unknown, the approximated func-tionals introduce self-interaction (SI) errors, the asymptotic behavior of XC potential could be incorrect, and the use of a single determinant makes DFT inadequate for cases presenting a near degeneracy of several configurations [14],[23]. Besides, additional drawbacks originating in the LR formulation exist. Actually, almost all implementations of TD-DFT are based on the adia-batic approximation, for which the action functional could be written by using the XC energy functional used in Kohn-Sham time-independent equation. In other words, there are no memory effects, the only time dependence is indi-rectly taken into account by the density, and an instantaneous reaction of XC potential to the density variations is assumed [24]. From a more applicative point of view, the use of TD-DFT approach presents some difficulties in de-scribing charge transfer and Rydberg excitations. This originates in the form of the operators in the LR-KS equations and the subsequent incapability to follow the reorganization of charge between two separated regions of space or between orbitals of different spatial extent [24],[25]. Despite these limits, there are extended studies that show the very good perfor-mances of TD-DFT in reproducing excitation energies and absorption spectra [26], [27], [28], [29], [30]. In this chapter, after an introduction of TD-DFT methods, some of the prob-lems and the difficulties of the LR-KS scheme are critically analysed by intro-ducing an index to test the vertical excitations calculated [31] and a benckmark of the performances in reproducing excited state structures is also presented. The latter constitutes the first systematic study of this type, to the best of our knownledge [32]. Brief review of Density Functional Theory The material world of everyday experience, as studied by chemistry and condensed-matter physics, is built up from electrons and a hundred kinds of nuclei, where the basic interaction is electrostatic or Coulombic. All electrons in the lighter elements, and the chemically important valence electrons in most elements, move at speeds much less than the speed of light, and so are non-relativistic. As nuclei are more massive than electrons, we can assume the Born-Oppenheimer approximation to obtain two Schr¨odinger equations, one for the electrons and one for the nuclei. We will focus only in the electronic part. The non relativis-tic time-indipendent many-electron problem becames (in atomic units) [33]: ˆ He|Ψ(x1...xi...xN)i = E|Ψ(x1...xi...xN)i (2.1) Where: x = rσ (2.2) ˆ He= N X i ˆ ti+ N X i ˆ vext(ri) + 1 2 N X i,j6=i 1 |ri− rj| = ˆT + ˆV + ˆW (2.3) ˆ ti = − 1 2∇ 2 i (2.4) ˆ vext(ri) = Nnuc X α Zα |ri− Rα| (2.5) The many-particle wave function Ψ(x1..., xi, ..., xN) contains all the informa-tions of the system. When one is interested in the values of observables corre-sponding to k-body operators O(k)[[Ψ] = hΨ| ˆ][O](k)[|Ψi] [(2.6)] the k-order reduced density matrix it suffices Γ(k)[(x]0 1, ..., x 0 k|x1, ..., xk) = N k R dxk+1...dxNΨ(x 0 1, ..., x 0 k, xk+1..., xN)Ψ(x1, ..., xk, xk+1, ..., xN) (2.7) hΨ| ˆO(k)[|Ψi =] Z dx1...dxkOˆ(k)Γ(k)(x01, ..., x 0 k|x1, ..., xk) (2.8) Most operators of interest are either one or two body operators: Γ(1)(x0[1]|x1) = N Z dx2...dxNΨ(x01, x2, ..., xN)Ψ(x1, x2, ...xN) (2.9) Γ(2)[(x]0 1, x 0 2|x1, x2) = N 2 R dx3...dxNΨ(x 0 1, x02, x3..., xN)Ψ(x1, x2, x3..., xN) (2.10) of the diagonal of the second order density matrix: E =R dx1[2]∇2[Γ](1)[(x|x]0[)] x=x0 +R dxvext(~x)γ(1)(x) +R dx R dx0[|~][r−~]1[r]0[|]γ(2)(x, x0) (2.11) γ(1)(x) = Γ(1)(x|x) (2.12) γ(2)(x, x0) = Γ (2)(x, x0|x, x0) (2.13) Summing over spin in γ(1)(x) the electron density is obtained: ρ(r) =X γ(1)(rσ) (2.14) Since the first order density matrix can be obtained by explicit integration from Γ(2)[:] E = E[Γ(2)] (2.15) However, given the number of the electrons and the external potential, in prin-ciple we have all the information about the system, by solving the Schr¨odinger equation. Therefore we obtain the wave function and the density of the sys-tem: vext(~r) −→ ρ(r). As Eext≡ Eext[ρ(r)] (cfr. equations 2.11 and 2.14) we can presume that : E = E[ρ(r)] (2.16) The proof of this statement is given by the first Hohenberg-Kohn theorem 1[,] whereas the second introduces a variational principle on the energy as a den-sity functional [34]: Theorem 1 (of Hohenberg and Kohn) The external potential vext(r) is (to within an additive constant) a unique functional of the ground state density ρ(r), and therefore : vext(r) ⇐⇒ ρ(r) Corollarium 1 The ground state expectation value of any observable ˆO is a unique functional of the exact ground state density 2 O[ρ] = hΨ[ρ]| ˆO|Ψ[ρ]i 1[The first formulation of the theorem (1964) is valid only for non degenerate ground] states. However the basic formalism is easily extended to also include degerate cases. E[ρ] = T [ρ] + W [ρ] + V [ρ] = FHK[ρ] + V [ρ] (2.17) FHK[ρ] is the universal HK functional. It is universal in the sense that it does not depend on the external potential. Theorem 2 (variational character of Energy functional) Given the ex-act ground state density ρ0 for an external potential v0, and a different v-representable density ρ, Ev0[ρ0] ≤ Ev0[ρ] (2.18) A function ρ(r) is termed pure state v-representable if it is the density of a (possibly degenerate) ground state of the Hamiltonian of the system with some suitably chosen local external potential v (r). By construction, the functionals FHK[ρ] and Ev0[ρ] are defined only for pure-state v-representable functions [14]. However, the original hope that all reasonably well-behaved non negative func-tions are pure-state v-representable turned to be too optimistic therefore, an extension of the domain of HK functional to arbitrary non-negative functions, integrating to the given particle number N appears desiderable. A possible choice is the Levy-Lieb functional [35],[36], that is defined for all functions ρ(r) which can be represented as the density of some antisymmetric N-particle function. These functions are called pure state N-representables. FLL[ρ] = inf Ψ→ρhΨ| ˆT + ˆW |Ψi (2.19) All integrable non-negative functions are N-representable [14],[37], provided that dr|∇ρ1/2(r)|2 < +∞ . Beside the v- and N-representability problems, the form of exact universal functional is unknow. Problems arise not only from the interaction term ˆW , but also from the kinetic part of the functional. A possible strategy is to use a model system of non-interacting electrons, as pointed out by Kohn and Sham [38]. The central assertion of the Kohn-Sham scheme is 3: For each interacting system ( ˆW 6= 0 in eq. 2.3), there must be a local mono-electronic potential vs, such that the exact density of the interacting system is equal to the density of the non-interacting system with external potential vs. In the Kohn–Sham formulation, the density is expressed in terms of N or-thonormal orbitals, ρ(r) =X |φi(r)|2 (2.20) the universal functional is FKS[ρ] = − 1 2 X i hφi|∇2|φii + EH[ρ] + EXC[ρ] (2.21) where the Hartree (or classical Coulomb) energy is given by, EH[ρ] = 1 2 Z Z [ρ(r] 1)ρ(r2) r12 dr1dr2 (2.22) Here EXC[ρ] is the exchange-correlation (XC) energy functional. It takes into account the electron-electron interactions and the correction to the ki-netic energy term of the interacting system. Minimizing the energy gives the Kohn–Sham equation f[i]s[ρ]φi(r) = εiφi(r) (2.23) where the single-particle Kohn–Sham hamiltonian is ˆ f[i]s = −1 2∇ 2[+ v] s(r) + vH[ρ](r) + vXC[ρ](r) (2.24) ˆ FKS =X i ˆ f[i]s (2.25) Here the Hartree potential is vH[ρ](r) = δEH[ρ](r) δρ(r1) = Z ρ(r2) r12 dr2 (2.26) and the XC potential is vXC[ρ](r) = Since no exact form of the XC functional is known, this functional is approx-imated in practice. Once a XC approximation is chosen, the equations are to be solved self-consistently. What differentiates between the various ap-proaches to the DFT method is the choice of the exchange-correlation energy functional. The theory provides no restrictions on the choice, therefore, vari-ous approximations have been proposed. The principal are the Local Density Approximation (LDA), the Generalized Gradient Approximation (GGA), the global hybrid (GH) functionals and a new generation of hybrid functionals, denoted as range-separeted (RSH) or long-range corrected (LRC). Local-density approximations (LDA)[37],[14] are a class of approximations to the XC energy functional that depend solely upon the value of the electronic density at each point in space. Many approaches can yield local approximations to the XC energy. However, overwhelmingly successful local approximations are those that have been derived from the homogeneous electron gas (HEG) model. In this regard, LDA is generally synonymous with functionals based on the HEG approximation applied to realistic systems (molecules and solids). In general, for a spin-unpolarized system, a local-density approximation for the exchange-correlation energy is written as E[xc]LDA[ρ] = Z ρ(r)xc(ρ) dr , (2.28) the exchange-correlation energy density xc, is a function of the density alone. Usually it is assumed, in somewhat arbitrary way, to be able to separate the components of the exchange and correlation functional, which is therefore writ-ten as a sum : Exc[ρ] = Ex[ρ] + Ec[ρ] (2.29) so that separate expressions for Ex and Ec are sought. The exchange term takes a simple analytic form for the HEG. Only limiting expressions for the correlation density are known exactly, leading to numerous different approxi-mations for Ec. The exchange-energy density of a HEG is known analytically. The LDA for exchange uses the following expression, due to Dirac [39], E[x]LDA[ρ] = −3 4 3 π 1/3Z ρ(r)4/3 dr (2.30) This expression is obtained under the approximation that the exchange-energy in a system where the density in not homogeneous, is obtained by applying the HEG results pointwise. Analytic expressions for the correlation energy of the HEG are not known except in the high- and low-density limits corresponding to infinitely-weak and infinitely-strong correlation. For a HEG with density ρ, the high-density limit of the correlation energy density is [23]: c= A ln(rs) + B + rs(C ln(rs) + D) (2.31) and the low limit c = 1 2 g[0] rs + g1 rs3/2 + . . . (2.32) where the Wigner-Seitz radius is related to the density as 4 3πr 3 s = 1 ρ. (2.33) Accurate quantum Monte Carlo simulations for the energy of the HEG have been performed for several intermediate values of the density, in turn provid-ing accurate values of the correlation energy density [40]. The most popular LDA’s to the correlation energy density interpolate these accurate values ob-tained from simulation while reproducing the exactly known limiting behavior. Various approaches, using different analytic forms for c, have generated several LDA’s for the correlation functional, including • Vosko-Wilk-Nusair (VWN) [41] • Perdew-Zunger (PZ81) [42] • Cole-Perdew (CP) [43] • Perdew-Wang (PW92) [44] The exchange-correlation potential corresponding to the exchange-correlation energy for a local density approximation is given by v[xc]LDA(r) = δE δρ(r) = xc(ρ(r)) + ρ(r) In finite systems, the LDA potential decays asymptotically with an expo-nential form. This is in error: the true exchange-correlation potential decays much slower in a Coulombic manner (−1/2r) [42], [37]. The artificially rapid decay manifests itself in the number of Kohn-Sham orbitals the potential can bind (that is, how many orbitals have energy less than zero). The LDA po-tential can not support a Rydberg series and underbond states are too high in energy. This results in the HOMO energy being too high, so that any pre-dictions for the ionization potential based on Koopman’s theorem are poor. Further, the LDA provides a poor description of electron-rich species such as anions where it is often unable to bind an additional electron, erroneously predicting species to be unstable. LDA functionals are important in the construction of more sophisticated approximations to the exchange-correlation energy, such as generalized gradi-ent approximations or hybrid functionals, as a desirable property of any ap-proximate exchange-correlation functional is that it reproduce the exact results of the HEG for non-varying densities. As such, LDA’s are often an explicit component of such functionals. The LDA functional is exact for the homogeneous electron gas, but in general tends to underestimate by around 10% the exchange energy of atoms. For example, for the Ne atom the fair value of Ex is -329 eV while from the Dirac functional a value of -298 eV is obtained with a considerable error (about 4-5 times the binding energy of H2 molecule). Among the many faults, it also does not give rise to the right asymptotic be-havior. In the 80’s efforts to correct the Dirac functional have been proposed, moving from local to nonlocal functional, or to functionals where the energy density depends on the density of its gradient. These methods are also called by the name of the Generalized Gradient Approximation (GGA) [45],[46],[37]. GGA is an improvement quite natural since for the uniform density gradient is zero, while it is not for the usual atomic and molecular density. In general, GGA functionals are of the type: Exc[ρ, ∇ρ] = ρ(r)fx(ρ, ∇ρ, r)dr (2.35) One of the most accurate result for exchange part was obtained by Becke [46] on the basis of some reasoning aimed at correcting the bad asymptotic behav-ior of the energy density of the Dirac exchange functional: E[x]B[ρ, ∇ρ] = E[x]LDA[ρ] + Z ρ(r)B[x](ρ, ∇ρ, r)dr (2.36) where B[x](ρ, ∇ρ, r) = −βρ1/3 X 2 1 + 6β sinh−1X (2.37) with X = |∇ρ| ρ4/3 and β = 0.0042 (2.38) β was empirically determined to accurately to reproduce the Hartree-Fock ex-change energy of noble gases. B[x] has the correct asymptotic behavior for the exact density. However, it should be noted that the corrected exchange poten-tial: vx(r) = δEx δρ = x+ ρ ∂x ∂ρ − ∇ · ∂Ex ∂(∇ρ) (2.39) has an asymptotic behavior that is identical to the exchange energy density. It can be shown that the potential corresponding to the energy density of Becke decays as −1/r2 instead of −1/r. The merit of this functional was very large: from 1988 onwards, the KS-DFT method has become a widely used tool in many fields of theoretical chemistry, replacing the Hartree-Fock method in computational chemistry. A positive feature of this functional is that the use of ∇ρ introduced a proper treatment of the shell structure of atoms: this is the main reason for its success in representing the energy exchange. Among the defects it has however to be mentioned that the exchange of Becke (B88) reduces but does not completely eliminate the error of self-interaction. This will be discussed in a next section. Concerning the correlation part, one of the most used is the Lee, Yang and Parr (LYP) correlation functional [47], based on the expression of Colle and Salvetti. The novelty of this functional is that it was derived from a correlated wave function for the He atom and not from the HEG. Colle and Salvetti [48] approximate the correlation energy formula for the Helium atom in terms of the second order HF density matrix. Lee, Yang and Parr turned this into a functional of the density, gradient and Laplacian. Miehlich, Savin, Stoll and Press [49], later eliminated the Laplacian terms using integration by parts. For closed shell systems the functional is Ec[ρ, ∇ρ] = −a R ρ 1+dρ−1/3 −abR ωρ2[C] Fρ8/3+ |∇ρ|2 [12]5 −[72]7δ − 11[24]ρ2∇ρ|2 dr (2.40) ω = exp(−cρ −1/3[)] 1 + dρ−1/3 ρ −11/3 δ = cρ−1/3+ dρ −1/3 1 + dρ−1/3 (2.41) a = 0.04918 b = 0.132 c = 0.2533 d = 0.349 (2.42) Hybrid functionals Two distinct philosophies have emerged in the construction of modern exchange-correlation functionals. Perdew supports the idea that functionals should be derived non-empirically using rigorous quantum-mechanical principles and ex-act conditions, however Becke advocates the semi-empirical approach whereby a general functional form containing free parameters is proposed, and the pa-rameters are subsequently fitted to minimise the error in exact physical prop-erties. The semi-empirical concept is extensively used and developed within the quantum chemistry community where there is a wealth of known atomic and molecular data that can be used to fit functionals. Hybrid functionals incorporate a portion of exact exchange from Hartree-Fock theory with exchange and correlation from other sources (ab initio, such as LDA and GGA, or empirical). The exact exchange energy functional is ex-pressed in terms of the Kohn-Sham orbitals rather than the density, therefore sometimes it is indicated as an implicit density functional. The hybrid approach to approximate the XC density functional was intro-duced by Axel Becke in 1993 [50], on the basis of the adiabatic connection formula (ACF), which connects the noninteracting KS reference system to the fully interacting real system, through a continuum of partially interacting real systems, all sharing a common density. Hybridization with Hartree-Fock exchange provides a simple scheme for im-proving many molecular properties, such as atomization energies, bond lengths and vibration frequencies, which tend to be poorly described with simple ab initio functionals of the LDA or GGA type. A hybrid exchange-correlation functional is usually constructed as a linear combination of the Hartree-Fock exact exchange functional (EHF number of exchange and correlation explicit density functionals. The parame-ters determining the weight of each individual functional are typically specified by fitting the functional’s predictions to experimental or accurately calculated thermochemical data. Standardised sets of experimental data, collated by Pople and co-workers and known as the Gaussian set 4[, were especially suited for the purposes of ] con-structing semi-empirical functionals. For example, the G2 set consists of highly accurate experimental thermochemical data - atomisation energies, ionisation potentials and electron and proton affinities - of a range of atomic and molec-ular systems, drawn from the first two rows of the periodic table. As an example, the popular B3LYP (Becke, three-parameter, Lee-Yang-Parr) [46],[47],[50] exchange-correlation functional takes the form: E[xc]B3LYP = E[xc]LDA+ a0(ExHF− ExLDA) + ax(ExGGA− ExLDA) + ac(EcGGA− EcLDA) (2.43) where a0 = 0.20, ax = 0.72, and ac = 0.81 are the three empirical parameters determined by fitting the predicted values to a set of atomization energies, ionization potentials, proton affinities, and total atomic energies;[5] E[x]GGA and E[c]GGA are generalized gradient approximations: the Becke 88 ex-change functional[6] and the correlation functional of Lee, Yang and Parr,[7] and ELDA c is the VWN local-density approximation to the correlation func-tional. Hybrid functionals successfully demonstrate the need to incorporate fully non-local information in order to deliver greater accuracy. In a differ-ent phylosophy, a parameter free hybrid functional is the PBE0 of Adamo and Barone [54]. Perdew, Burke and Ernzerhof [45] have introduced a GGA functional in which all the parameters, other than those in its local spin den-sity (LSD) component, are fundamental constants. This is obtained using the Perdew–Wang PW92 [44] correlation functional and the exchange contribution: E[x]P BE[ρ, ∇ρ] = bX 1 + aX2 (2.44) where X is defined in eq.(2.38) and b = 0.00336, a = 0.00449. In the ACF framework the XC functional is written as only one parameter hybrid: EXC[ρ, ∇ρ] = EXCP BE[ρ, ∇ρ] + a EXCHF − EXCP BE[ρ, ∇ρ] 4[During the years the number of data collected increases and different set exist, indicated] However on the basis of what Perdew and co-workers have shown, the opti-mum value of the a coefficient can be fixed a priori taking into account that fourth-order perturbation theory is sufficient to get accurate numerical results for molecular systems. Therefore: E[XC]P BE0[ρ, ∇ρ] = E[XC]P BE[ρ, ∇ρ] + 1 4 E HF XC − E P BE XC [ρ, ∇ρ] (2.46) Some problems in DFT-KS The use of approximate XC functionals gives rise to some problems in the de-scription of the molecular systems. One of these is the asymptotic behavior of the XC potential [46], as we already mentioned in the case of LDA and GGA. Since hybrid functionals contain only a fixed percentage of HF exchange (that shows the correct asymptotic behavior), the correction done is only propor-tional to this amount. Another well know problem, linked to the asymptotic behavior problem, is the so-called Self-Interaction of the density [42]. Ap-proaches that attempt to correct this problem are know as Self-Interaction Correction (SIC). Let us illustrate the problem in few words. One of the major difference between a quantum electron density and charge dis-tribution is in the classical self-interaction. In fact, an electron interacts with all others but not with itself therefore, for a density of N particles, the right number of interactions is the number of electron pairs without repetition, and scale as N2, or more precisely N (N −1)/2. The number of self-interactions will instead be proportional to N and, for a macroscopic density (eg. N = 1020[),] the fraction of the number of self-interactions with respect to the number of interactions is largely negligible. However for a molecular density this issue is relevant. In the HF method, which approximates the wave function with a single Slater determinant, but it maintains the proper antisymmetry of the wave function, this requirement is fulfilled in a natural way. In fact, in the energy expression of inter-electronic repulsion E[HF](2) = 1 2 N X i=1 N X j=1 [hij|iji − hij|jii] (h + N X j=1 [Jj − Kj])φi = εiφi (2.47) it is clear that for i = j (interaction of two electrons in the same spin-orbital or an electron with itself) the Coulomb term is canceled by the terms of ex-change. In the Hartree method the wave function is a product of spin orbitals for which the antisymmetry of the wave function is not covered and the cancellation does not occur in the equations and the self interaction term is eliminated ad-hoc: E[H](2) = 1 2 N X i=1 N X j6=i hij|iji (h + N X j=1,j6=i Jj)φi = εiφi (2.48) with the result that there is a specific equation for each spin orbital. The correct number of interactions turns out to be N (N − 1)/2, or all pairs of electrons without repetition. The electrostatic energy in the DFT method includes the Coulomb term EH[ρ] = J [ρ] = 1 2 Z dr1 Z dr2 ρ(r1)ρ(r2) r12 = 1 2 N X i=1,j=1 hij|iji (2.49) that is the same as the Hartree without the ad-hoc correction, and includes N2[/2 total interactions and N/2 = (N]2[− N (N − 1))/2 spurious interactions.] This defect is not unexpected given that we have defined the functional EH[ρ] as the classic inter-electronic energy. Obviously the functional Exc in its exact form will have to cancel the spurious terms contained in J [ρ]. Unfortunately, the exact exchange-correlation energy functional is not known and also with most of the modern functionals is possible to only get a partial cancellation of the spurious terms, so the final energy is also affected by problems of self-interaction. This not precise scaling of the number of interactions with the number of electrons becomes particularly critical in the case, for example, of calculations of ionization energies, where the final result comes from the energy differences between neutral and ionic systems, thus having a different number of electrons. In this and other cases we can not expect effective cancellation of the error of the SI. This issue is also reflected in the behavior of the asymptotic effective potential vxc in the equations of KS. There are several approaches to correct this problem (see for example references in [37]). The best known is certainly that of Perdew and Zunger (PZ) [42], which has the disadvantage of being orbital-dependent, so we obtain N different KS equations. Other methods make use of a medium approach, based on an approach similar to that of Fermi and Amaldi [55], in which the density is scaled by N − 1/N , such as in the ADSIC method [56],[57]. The Long Range Correction Scheme The local nature of approximate XC functionals in DFT causes serious prob-lems in practical calculations of various molecular properties. Introducing a fixed amount of the Hartree–Fock (HF) exchange contribution gives in many cases partial solutions of these problems. Nevertheless, some molecular prop-erties such as the longitudinal polarizabilities of all-trans polyenes with in-creasing number of ethylene units cannot be described even qualitatively by such conventional hybrid functionals. The poor performance of the pure and conventional hybrid functionals for these quantities can be attributed to the (partial) lack of the long-range exchange interaction. In these years, a power-ful prescription has been suggested, namely, the so-called long-range correction (LRC) [58] to exchange functionals. The basic idea of LRC is to simply sep-arate the electron– electron Coulomb interaction into short-range (SR) and long-range (LR) parts [59], 1 r12 = 1 − w(r12) r12 + w(r12) r12 (2.50) by a separation-function w(r12), and use the SR part to compute the den-sity functional exchange contribution and the LR part in the form of the HF exchange energy. The functionals obtained in this way, the range-separated hybrid functionals (RSH), have shown dramaticaly improved performance in the problematic cases mentioned earlier. The complementary error function erf c(µr12) has been employed almost exclusively as w(r12) with Gaussian-type basis functions (GTF) for the ease of the computation of integrals over the modified Coulomb operator. Several extensions of LRC (eq. 2.50) have been suggested for further improvements of accuracy [60],[61],[62]. Yanai et al. proposed [63] a general form of the range-separation named Coulomb-attenuated method (CAM), 1 r12 = 1 − (α + β)erf (µr12) r12 + (α + β)erf (µr12) r12 (2.51) CAM indroduces a global mixing of the HF exchange with a fixed ratio, which is determined by α, as well as the range-separated one. In addition, α + β indicates the ratio of the HF exchange in r = ∞, which is fixed at 1.0 in the original LRC scheme eq ( 2.50). The introduction of the parameters α and β bridges the pure (α = β = 0), conventional hybrid (α = 0 and β = 0), and range-separated hybrid (β = 0) functionals seamlessly. The performance of the proposed CAM-B3LYP functional with α = 0.19, β = 0.46, and µ = 0.33 has been assessed by several authors [64],[65],[66], [67],[68]. In this Thesis we extensively use the CAM-B3LYP functional to calculate excited state energies and properties in the LR-KS TD-DFT scheme. The Time Dependent DFT The traditional KS-DFT is limited to time independent systems, that is, ground states, and if one wants to establish an analogous time-dependent theory, time-dependent versions of the first and second HK theorems must be formulated and a time-dependent KS equation must be derived. In this section, we present the Runge-Gross theorem [15], which is a time-dependent analogue to HK first theorem and the role of the action integral in a time-dependent variational principle is analysed. The Runge-Gross theorem can be seen as the time dependent analogue of the first Hohenberg-Kohn theorem and constitutes the cornerstone of the formal foundation of the time-dependent Kohn-Sham Theorem 3 (of Runge and Gross) The exact time-dependent electron den-sity, ρ(r, t), determines the time-dependent external potential, V (r, t), up to a spatially constant, time-dependent function C (t) and thus the time-dependent wave function, Ψ(r, t), up to a time dependent phase factor. The wave function is thus a functional of the electron density Ψ(r, t) = Ψ[ρ(t)](t)e−iαt (2.52) with (d/dt)α(t) = C(t). Also in this case, V (t) is a time-dependent external potential and is given as a sum of one-particle potentials V (t) = v(ri, t) (2.53) To prove the Runge-Gross theorem, it must be demonstrated that two den-sities ρA(r, t) and ρB(r, t) evolving from a common initial state Ψ0 under the influence of two different potentials vA[(r, t) and v]B[(r, t) are always different if] the two potentials differ by more than a purely time-dependent function, that is vA(r, t) 6= vB(r, t) + C(t) (2.54) An assumption to be made is that the potentials can be expanded in a Taylor series in time around t0 The proof proceeds in two steps: First it is shown that the current densities, jA[(r, t) and j]B[(r, t), corresponding to v]A[(r, t) and v]B[(r, t)] are always different, and in a second step, it is derived that different current densities require different electron densities. Consequently, for different time-dependent external potentials at t 6= t0, one obtains different time-dependent electron densities infinitesimally later than t0. With this, the one-to-one map-ping between time-dependent densities and time-dependent potentials is es-tablished, and thus, the potential and the wave function are functionals of the density5[.] Furthermore, the expectation value of any quantum mechanical operator is a unique functional of the density because the phase factor in the wave function cancels out. Strictly speaking, the expectation value implicitly depends also on the initial state, Ψ0, that is, it is a functional of ρ(r, t) and Ψ0. For most cases, however, when Ψ0 is a nondegenerate ground state, O[ρ](t) is a functional of the density alone, because Ψ0 is a unique functional of its density ρ0(r) by virtue of the traditional first Hohenberg-Kohn theorem. The one-to-one mapping between time-dependent potentials and time-dependent functionals represents the first step in the development of a time dependent many-body theory using the density as a fundamental quantity. A second requirement is the existence of a variational principle in analogy to the time-independent case, in which it is given by the above-described second Hohenberg-Kohn theorem [37]. In general, if the time-dependent wave function Ψ(r, t) is a solution of the time-dependent Schr¨odinger equation with the initial condition Ψ(r, t0) = Ψ0(t) (2.55) then the wave function corresponds to a stationary point of the quantum me-chanical action integral. A = Z t1 dthΨ(r, t)|i∂ ∂t− ˆH(r, t)|Ψ(r, t)i (2.56) which is a functional of ρ(r, t) owing to the Runge-Gross theorem, that is, A[ρ] = Z t1 dthΨ[ρ](r, t)|i ∂ ∂t − ˆH(r, t)|Ψ[ρ](r, t)i (2.57) Consequently, the exact electron density ρ(r, t) can be obtained from the Euler 5[Few years ago, van Leeuwen presented a generalization of the Runge-Gross theorem] and proved that a time dependent density ρ(r, t) obtained from a many-particle system can under mild restriction on the initial state always be reproduced by an external potential in a many-particle system with different two-particle interaction. For two states with equivalent initial state and the same two-particle interaction, van Leeuwen’s theorem reduces to the Runge-Gross theorem. ∂ρ(r, t) = 0 (2.58) when appropriate boundary conditions are applied. Furthermore, the action integral can be split into two parts, one that is universal (for a given num-ber of electrons) and the other dependent on the applied potential v(r, t) = vel−nuc(r) + vappl(r, t) A[ρ] = B[ρ] + Z t1 t0 dt Z d3rρ(r, t)v(r, t) (2.59) The universal functional B[ρ] is independent of the potential v(r, t) and is given as B[ρ] = Z t1 t0 dthΨ(r, t)|i∂ ∂t− ˆT (r) − ˆVe−e(r)|Ψ(r, t)i (2.60) In summary, the variation of the action integral with respect to the density according to eq. (2.58) is a prescription of how the exact density can be obtained. Linear Response KS approach to TD-DFT The stationary action principle can be applied to derive a time-dependent Kohn-Sham equation in analogy to the time-independent counterpart. The time-dependent Kohn-Sham equations [16] can be conveniently expressed in matrix notation in a basis of, say, M time-independent single-particle wave functions {χi(r)} such that ϕp(r, t) = cpj(t)χj(r) (2.61) Then, the time-dependent KS equation reads i∂ ∂tC = F C (2.62) expan-sion coefficients of ϕi(r, t) and FKS is the matrix representation of the time-dependent Kohn-Sham operator 6 in the given basis. Multiplication of eq (2.62) from the right with C† and then subtraction from the resultant equation of its Hermitian transpose leads to the Dirac form of the time dependent Kohn-Sham equation in density matrix form. This equa-tion reads7 X q (FpqPqr− PpqFqr) = i ∂ ∂tPpr (2.63) in which the density matrix Ppr is in general related to the electron density via ρ(r, t) = N X p,r M X i,j cpj(t)c∗ri(t)χj(r)χ∗i(r) = M X i,j χj(r)χ∗i(r)Pij (2.64) To obtain excitation energies and oscillator strengths employing the time-dependent KS approach, two different strategies can be followed. One pos-sibility is to propagate the time-dependent KS wave function in time, which is referred to as ”real-time TD-DFT”. The other, most used in quantum chemistry and also in this Thesis, is the linear response approach: using a density matrix formalism, it is shown how the excitation energies are obtained from the linear time-dependent response of the time-independent ground-state electron density to a time-dependent ex-ternal electric field [16]. Before the time-dependent electric field is applied, the system is assumed to be in its electronic ground state, which is determined by the standard time-independent Kohn-Sham equation, which in the density matrix formulation is: X q F[pq](0)P[qr](0)− P(0) pq F (0) qr = 0 (2.65) with the idempotency condition X P[pq](0)P[qr](0) = P[pr](0) (2.66) Fpq(0) and Ppq(0) correspond to the Kohn-Sham Hamiltonian and density matrix of the unperturbed ground state, respectively. The elements of the time inde-pendent Kohn-Sham Hamiltonian matrix are given as [24] 6[The time-independent counterpart is given in eq. 2.25] 7[We drop out the superscript KS] F[pq](0) = Z d3rϕ∗[p](r) ( −1 2∇ 2[−] M X K=1 ZK |r − RK| + Z d3r0 ρ(r 0[)] |r − r0[|]+ δExc δρ(r) ) ϕq(r) (2.67) In the basis of the orthonormal unperturbed single-particle orbitals of the ground state, these matrices are simply given as8 F[pq](0) = δpqεp (2.68) P[ij](0) = δij P[ai](0) = P[ia](0) = P[ab](0) = 0 (2.69) Now, an oscillatory time-dependent external field is applied, and the first-order (linear) response to this perturbation is analysed. In general perturba-tion theory, the wave funcperturba-tion or in this case the density matrix is assumed to be the sum of the unperturbed ground state and its first-order time-dependent change, Ppq = Ppq(0)+ P (1) pq (2.70) The same holds for the time-dependent Kohn- Sham Hamiltonian, which to first order is given as the sum of the ground-state KS Hamiltonian and the first-order change Fpq = Fpq(0)+ F (1) pq (2.71) Substituting eqs (2.70) and (2.71) into the time-dependent Kohn-Sham eq. (2.63) and collecting all terms of first order yield X q F[pq](0)P[qr](1)− P(1) pq F (0) qr + F (1) pq P (0) qr − P (0) pq F (1) qr = i ∂ ∂tP (1) pr (2.72) 8[Again, we follow the convention that indices i, j, etc. correspond to occupied orbitals,] The first-order change of the Kohn-Sham Hamiltonian consists of two terms. The first contribution corresponds to the applied perturbation, the time depen-dent electric field itself, and it has been shown that it is sufficient to consider only a single Fourier component of the perturbation, which is given in matrix notation as gpq = 1 2[fpqe −iωt + f[qp]∗eiωt] (2.73) In this equation, the matrix fpq is a one-electron operator and describes the details of the applied perturbation. Furthermore, the two-electron part of the Kohn-Sham Hamiltonian changes according to changes in the density matrix. The changes in the KS Hamiltonian due to the change of the density are given to first order as ∆F[pq](0) =X st ∂Fpq(0) ∂Pst P[st](1) (2.74) such that the first-order change in the KS Hamiltonian is altogether given as F[pq](1) = gpq+ ∆Fpq(0) (2.75) The time-dependent change of the density matrix induced by the perturbation of the KS Hamiltonian, this is to first order given as P[pq](1) = 1 2[Xpqe + Y[qp]∗eiωt] (2.76) where Xpq and Yqp represent perturbation densities. Inserting the last four equations into eq. (2.72) and collecting the terms that are multiplied by e−iωt yield the following expression X q " F[pq](0)Xqr− XpqFqr(0)+ fpq+ X st ∂Fpq(0) ∂Pst Xst ! P[qr](0)− P[pq](0) fqr+ X st ∂Fqr(0) ∂Pst Xst !# = ωXpr (2.77) The terms multiplied by eiωt [lead to the complex conjugate of the above ] order change of the density matrix of the form X q P(0) pq P (1) qr + P (1) pq P (0) qr = P (0) pq (2.78) which restricts the form of the matrix X in eq. (2.77) such that • occupied-occupied and virtual-virtual blocks (Xii and Yaa) are zero, • only the occupied-virtual and virtual-occupied blocks (Xia and Yai), re-spectively, contribute and are taken into account. Remembering the diagonal nature of the unperturbed KS Hamiltonian and density matrixes, one obtains the following pair of equations: F[aa](0)Xai− XaiF (0) ii + fai+ X bj " ∂F[ai](0) ∂Pbj Xbj+ ∂F[ia](0) ∂Pjb Ybj #! P[ii](0) = ωXai (2.79) F[ii](0)Yai− YaiFaa(0)− fia+ X bj " ∂F[ia](0) ∂Pbj Xbj+ ∂F[ai](0) ∂Pjb Ybj #! P[ii](0) = ωYai In the zero-frequency limit (fai = fia = 0), that is, under the assumption that the electronic transitions occur for an infinitesimal perturbation, one obtains a non-Hermitian eigenvalue equation, the LR-KS equation, A B B∗ A∗ X Y = ω 1 0 0 −1 X Y (2.81) the structure of which is equivalent to the Time Dependent Hartree-Fock (TD-HF) [33]. Here, the elements of the matrices A and B are given as Aiaσ,jbτ = δijδabδστ(εa− εb) + hiσjτ|aσbτi − CHFδστhiσaσ|jτbτi +(1 − CHF)hiσjτ|fxc|aσbτi (2.82) Biaσ,jbτ = hiσbτ|aσjτi − CHFδστhiσaσ|bτjτi +(1 − CHF)hiσbτ|fxc|aσjτi (2.83) to obtain excitation energies ω and transition vectors |X + Yi when the un-perturbed KS Hamiltonian , from which the response is derived, contains a so-called pure DFT XC potential or also parts of Hartree-Fock exchange. In-deed, the elements of the matrices A and B contain also the response of the Hartree-Fock exchange potential, as well as the one of the chosen XC potential at a rate determined by the factor CHF determined in the hybrid XC func-tional. It becomes apparent that the equations contain TD-HF and pure (i.e. no hybrid functional used) TD-DFT as limiting cases if CHF = 1 or CHF = 0, In the so-called adiabatic local density approximation (ALDA) [16] the originally non-local (in time) time-dependent xc kernel is replaced with a time-independent local one based on the assumption that the density varies only slowly with time. This approximation allows the use of a standard local ground-state xc potential in the TD-DFT framework. In the ALDA, the re-sponse of the xc potential corresponds to the second functional derivative of the exchange-correlation energy, which is also called the xc kernel, and is given as hiσjτ|fxc|aσbτi = Z d3rd3r0ϕ∗[i](r)ϕa(r) δ2EXC δρ(r)δρ(r0[)]ϕ ∗ b(r 0 )ϕ∗[j](r0) (2.84) In analogy to TDHF and CIS, the Tamm-Dancoff approximation (TDA) to TD-DFT has also been introduced [69]. It corresponds to neglecting the ma-trix B in eq. (2.81), that is, only the occupied-virtual block of the initial K = X + Y matrix (2.81) is taken into account. This leads to a Hermitian eigenvalue equation AX = ωX (2.85) where the definition of the matrix elements of A is still the same as in eq. (2.82). It is worthwhile to note that TDA/TD-DFT is usually a very good approxi-mation to TD-DFT [70]. A possible reason may be that in DFT correlation is already included in the ground state by virtue of the XC functional, which is not the case in HF theory. Since the magnitude of the Y amplitudes and the elements of the B matrix are a measure for missing correlation in the ground state, they should be even smaller in TD-DFT than in TD-HF and, thus, be less important. TD-DFT is also more resistant to triplet instabilities than TD-HF. Analytical Gradients of LR-KS energy Since molecular properties can be derived as analitical derivatives of the system energy, an important advance for Quantum Chemical applications of TD-DFT has been the implementation of analytic derivatives [19],[20] for TD-DFT ex-cited states. This is primarily a matter of calculating ωξ = ∂ω ∂ξ to add to E[GS]ξ = ∂EGS ∂ξ to obtain E[K]ξ = ∂EK ∂ξ , the derivative of the energy of the excited state K with respect to a generic perturbation ξ. This derivative expression ωξ[K] = 1 2hXK+ YK|(A + B) ξ[|X] K + YKi + 1 2hXK− YK|(A − B) ξ[|X] K− YKi (2.86) does not involve the derivatives of the excitation amplitudes [i.e., the left and right eigenvectors of eq. (2.81)] because they have been variationally deter-mined, but it does require the knowledge of the change in the elements of Fock matrix in the MO basis which in turn requires the knowledge of the MO coeffi-cients derivatives, which are the solution of the couple perturbed Kohn-Sham equations (CPKS). It is well known, however, that there is no need to solve the CPKS equations for each perturbation, but rather only for one degree of freedom, to find the so called Z-vector or relaxed density, which represents the orbital relaxation contribution to the one-particle density matrices (1PDM) involved in all post-SCF gradient expressions. PK = P0+ PK[∆] (2.87) The TK contains the occupied-occupied and virtual-virtual blocks of P∆ P[kl]∆ = −1 2 X a [(X + Y )kaσ(X + Y )alσ +(X − Y )kaσ(X − Y )alσ] (2.89) P[bc]∆ = −1 2 X i [(X + Y )ibσ(X + Y )ciσ +(X − Y )ibσ (X − Y )ciσ] (2.90) The ZK [matrix in eq. (2.88) collects the occupied-virtual blocks of P]∆[. Such] blocks are obtained by solving the following Z-vector equation [20]: G+[aiσ][P[bj]∆] + δabδijδσσ0(ε[aσ]− ε[iσ])P∆ aiσ = Laiσ (2.91) where we define two contractions of a nonsymmetric density matrix P with the four-indexes portion of the A + B and A − B matrices into the two-electron integrals portion of a nonsymmetric Fock-like matrix, i.e. G+[pqσ][Prs] = X rsσ0 2(pqσ|rsσ0 ) + 2f[pqσ,rsσ]Xc 0 −cXδσσ0 [(psσ|rqσ0) + (prσ|sqσ0)]] Prsσ0 G−[pqσ][Prs] = X rsσ0 [[(psσ|rqσ0) + (prσ|sqσ0)]] Prsσ0 (2.92) The Lagrangian Laiσ depends only from occupied-occupied and virtual-virtual blocks of P∆[, i.e T]K ij and TabK Laiσ = C1aiσ− C2aiσ+ G+aiσ[P ∆ kl] + G+aiσ[P ∆ bc] (2.93) C1aiσ = X b (X + Y )biσG+[baσ][(X + Y )rs] +X b (X − Y )biσG−baσ[(X − Y )rs] +X b (X + Y )biσGXcbaσ[(X + Y )rs] (2.94) C2aiσ = X j (X + Y )ajσG+ijσ[(X + Y )rs] +X j (X − Y )ajσG−ijσ[(X − Y )rs] (2.95) This allows the calculation of excited-state structure and properties such as true dipole moments, rather than just transition dipole moments, showing that such properties are also accessible from TD-DFT [20]. Critical analysis of fuctionals performances in LR-KS: a new diagnostic index From an applicative point of view, the use of LR-KS approach to TD-DFT in combination with approximated XC functionals presents some difficulties in describing charge transfer (CT) and Rydberg (Ry) excitations, absorption spectra of systems with many electron excitations (such as in polyenes) or with open shell ground state [25],[24]. In any case, in the literature there are extended studies showing the very good TD-DFT performances in reproduc-ing excitation energies for transitions with a local character, such as n-π∗ and π-π* [26]-[30]. The situation, however, is more complicated than what it could appear as, in some cases, TD-DFT well performs also for those excitations for which we expect possible failures, such as intramolecular charge transfer excitations[71]. Here an analysis on the performances of the Linear Response (LR) TD-DFT approach in determining electronic excitation energies is pre-sented. The analysis is focused on local or nonlocal changes in the electronic density and on the role played by the Hartree Fock Exchange (HF-X). We introduce a new diagnostic index linked to the variation in the charge centroid of the single electron components of the excitation. It is shown how this index can be used as a diagnostic test for the description of the nature of the exci-tation studied by different hybrid functionals. It is in fact difficult to achieve a clear and unequivocal picture only by looking at the molecular orbitals in-volved, especially for large systems. We compare the new index with the Λ index proposed by Tozer and collabora-tors [72], which is based on the overlap of the absolute value of the molecular orbitals involved in the excitation, and we analyse the respective potentialities in achieving a good diagnosis of TD-DFT accuracy with Rydberg and charge-transfer excitations. Finally, we show that the principal effect of increasing the HF-X percentage is to increase orbital energy differences, making any analysis based only on the evaluation of shape and extension of molecular orbitals not enough to obtain an exhaustive diagnostic index for TD-DFT users. Definition of the Γ index In 2003, Dreuw et Al. [73] showed that, in the case of intermolecular CT (iCT), for which the product function φi(r)φa(r) → 0 , AiCT[iaσ,jbτ] = δijδabδστ(εa− εb) − CHFδστhiσaσ|jτbτi (2.96) B[iaσ,jbτ]iCT = 0 (2.97) Starting from this point, a diagnostic test, based on the overlap of absolute values of molecular orbitals, was recently presented by Peach et al. [72], to analyse the performance of XC functionals in reproducing local, Rydberg and intramolecular CT excitations. They proposed index is defined as: Λ = P iaK 2 iaR |φi(r)||φa(r)|dr P iaKia2 (2.98) where Kia = Xia+ Yia (2.99) This index measures the overlap between the different KS orbitals involved in the excitation, and, for GGA and hybrid functionals, it can be correlated to the error of TD-DFT excitation energies with respect to more correlated wave function based methods. However,as the authors themselves pointed out, a correlation between the errors and Λ values is present only for Λ values lower than 0.4 for GGA and 0.3 for hybrid functionals. By contrast, for Λ > 0.4 we cannot be sure to obtain accurate values of excitation energies by using hybrid functionals [74]. It is also worth noting that the range-separated hybrid func-tionals, such as CAM-B3LYP do not show this correlation between excitation energy errors and values of Λ [72]. Our considerations start from the observation that the TD-DFT low accu-racy in describing CT excitations (both intra or intermolecular) or Rydberg excitations is linked to a not correct description of the local variation of elec-tronic density during excitation. As Ziegler et al.[75] recently pointed out, the GGA Hessian can be used to describe changes in energy due to small per-turbations of electron density, but it should not be applied to one-electron excitations involving the density rearrangement of a full electron charge. The Hartree-Fock Hessian describe larger pertubations to electron density, due to the complete self interaction cancellation by mean of exact exchange that is only partially taken into account in hybrid functionals. Therefore, an index able to describe the amount of this spatial rearrangement could warn the user when the XC-functional is inadequate. We expect a small variation of electron density for local valence excited states and higher values in the case of Rydberg or CT states. We define the index, Γ, in order to describe the variation of the single electron charge sferoids after excitation as: Γ =X K[ia]2∆ia (2.100) ∆ia = |∆a− ∆i| = ||hφa|r2|φai − hφa|r|φai2| − |hφi|r2|φii − hφi|r|φii2|| (2.101) By definition, Γ constitutes a measure of the difference of the variance of electronic position in passing from occupied to virtual orbital. It is the difference between the size of the molecular orbitals involved in the description of the transition, if we represent the size by a sphere of radius equal to the root mean square of the distance of the electron in the orbital from the centroid of charge[76]. We note that Γ is not a limited quantity. We implement Γ in a locally modified version of G09 program, by the use of first and second moments in the atomic orbital basis set. Systems and methods. We analysed twelve different molecular systems: N2, CO, H2CO, HCl, Tetracene, DMABN, a model Dipeptide, Benzene, IDMN, JM, Prodan and TriazeneII (the relative structures are showed in figure 2.1). They span a large set of different excitations, namely local, Rydberg and CT. TriazeneII was studied by Preat et al.[77], and its excitation energies were analysed in terms of Λ by Peach et al. IDMN, JM and Prodan systems are examples of charge transfer in extended systems with delocalization of charge. Benzene is a typical Rydberg system as N2, CO, H2CO. All DFT and TD-DFT calculations have been carried out with a locally modi-fied version of the G09 program [78], using different exchange-correlation func-tionals characterized by a different Hartree-Fock exchange (HF-X) contribu-tion. They include 3 global hybrids (GH), namely B3LYP [50] (20% of HF-X), PBE0 [54] (25% of HF-X) and BH&HLYP [79] (50% of HF-X) and one Range-Separated Hybrid (RSH), CAM-B3LYP [63]. A pure GGA functional is also tested, namely PBE [45], to compare data obtained by Tozer and co-workers [72],[74]. The extended basis set d-aug-cc-pVTZ was selected to compute excitation en-ergies of N2, CO, H2CO, and Benzene in order to achieve converged results for Rydberg excitations [80] and to compare with the results of Peach et Al [72]. The other systems were studied using the cc-pVTZ basis set. Finally, for Triazene II system the 6-311+G(2d,p) basis set was used, to make a direct comparison with Preat [77] and Peach [74] results . SAC-CI calcula-tions were performed for IDMN, JM and Prodan, respectively with cc-pVTZ, 6-311G(d) and 6-31+G(d) basis sets, by fixing tightest level of convergence (level three). Ground state geometry calculations were performed at MP2/6-31G(d) level, with the exception of N2 and CO (experimental geometries [81]) and TriazeneII, for which we use the geometry reported in ref. [77] for PBE, PBE0 and CAM-B3LYP TD-DFT calculations. We check that ground state geometries are actual minima by performing har-monic vibrational frequency calculations of all optimized structures. Results and Discussion In this section we analyse the performance of different functionals in determin-ing vertical excitations energies in terms of the two indexes Λ and Γ. We recall here that for GGA and hybrid functionals, Tozer et al. have shown that Λ values lower than 0.3 and 0.4, respectively are usually correlated with large errors. We individuate three main type of excitations, namely Local (L), Rydberg (Ry) and intramolecular Charge Transfer (CT), and discuss correlations be-tween the error reported by each functional and values of Λ and Γ for each set. The term error is here used to refer to the differences in the excitation energy obtained at TD-DFT level with respect to the reference data obtained with more correlated methods. More in detail, the errors are defined as TD-DFT minus reference values. Let us start the analysis with local excitations in N2, CO, H2CO, Benzene, Tetracene, DMABN and model dipeptide. In figure 2.2 we report plots of TD-DFT errors in function of both indexes and correlation plots between Λ and Γ. In general local excitations are well reproduced by all functionals for all sys-tems, the absolute overlap is large and the reorganization of charge is usually small. Coherently we find large values of Λ (typically 0.5-0.8) and small val-ues of Γ, most of which are in the range 0-5. Greater valval-ues are obtained with BH&HLYP and CAM-B3LYP functionals in the case of small systems, for which larger errors are obtained. For all functionals when Γ increases, Λ decreases. As Γ index constitutes a measure of the size difference between the MOs involved in the transition, one can see that the excitations here studied involve MOs that are quite similar. We expect that the Γ index here defined is more sensitive in describing ex-citations that involve transitions to more diffuse orbitals such as in Rydberg transitions reported in figure 2.3. As figure 2.3 shows, for Γ > 60 excitation energies are quite accurate (i.e. absolute values of the error are lower than 0.5 eV). In general, for such kind of excitations Λ < 0.3 and therefore the use of only this index is not able to warn the user about the real performances of the selected functional. PBE calculations of the Rydberg excitations here studied give Γ < 60 and ab-solute errors larger than 0.5 eV. B3LYP and PBE0 values are very similar to PBE ones, and errors decrease to the range of ± 0.5 eV for Γ > 65. BH&LYP and CAM-B3LYP functionals, that include a greater HF-X percentage, present lower Γ threshold. As a matter of fact, BH&LYP results are all in the error range of ± 0.5 eV, but this is not the case for the CAM-B3LYP ones applied to the diatomics systems (N2 and CO). This indicate that for these small systems, the long range part of the functional is not fully exploited and therefore the effective HF-X percentage is lower than the long-range value (65%). However, for Γ ≥ 50, CAM-B3LYP calculations became It is worth noting that, as Casida et Al [82] pointed out, for high-lying bound states there is a collapse of the states above the TD-DFT ionization threshold, which is at −εHOM O. This is true not only in the case of LDA functional, but also for GGA and hybrids as it is possible to see comparing tables 2.1 and 2.2. If the excitation energy that one try to reproduce by mean of TD-DFT is greater than the ionization potential calculated by the functional used (i.e. the limit for bound excited states for that functional), the value of the excitation energy will be underestimated. Moving to CT excitations, the corresponding plots are shown in figure 2.4. The classification of these excitations as CT is based on previous studies. How-ever, the present analysis based on the two indexes seems to suggest that an extension of ths common classification of CT states is needed. In fact, for the excitations here investigated, Γ shows values lower than 5 (with the exception of the twisted DMABN), exactly as found for local excitations. From the three plots reported in figure 2.4, it seems possible to distinguish at least two types of CT excitations. The first type, that we call delocalized CT, is characterized by Λ ≈ 0.7 (planar DMABN, IDMN, JM and Prodan) and it involves a transition very similar to a local π − π∗ excitation delocalized on the molecule (Γ ≈ 1 or 4). The second type (DMABN twisted to 90o, model dipeptide), that we call local-ized CT, is characterlocal-ized by a more evident charge transfer from a well locallocal-ized part of the molecule to another one. In this case, low overlaps (0.2 < Λ < 0.5) are found. The size difference can be more or less pronounced in function of the symmetry (Γ ≈ 2 in the case n1− π2∗ and π1− π2∗ of model dipeptide, Γ ≈ 6 in the case of twisted DMABN). In the case of CT transitions, the analysis based on the size (Γ) and the absolute overlap (Λ) of the KS MOs does not seem to allow us to define a Table 2.1: TD-DFT/d-AUG-cc-pVTZ Excitation Energy(eV), Λ and Γ values for Rydberg excitation. N2, CO and H2CO: see reference values reported in
{"url":"https://123dok.org/document/6zk3k4my-dft-approaches-molecular-electronic-excited-states-phase-solution.html","timestamp":"2024-11-12T16:39:58Z","content_type":"text/html","content_length":"227896","record_id":"<urn:uuid:fb60ee96-2e48-4e7b-bd72-61b1bea2f1c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00841.warc.gz"}
Research articles » Articles index search articles ScienXe.org » my Online CV reviews guidelines arXiv.org » Free reviews 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 articles index arXiv.org, 2007 My Pages Months: 1 2 3 4 5 6 7 8 9 10 11 12 my alerts arXiv.org, 1.2007 my messages Results 801 to 1000 of 4'537. [ 1 2 3 4 5 6 7 8 9 ... 23 ] Next my reviews my favorites astro-ph/0701818 The short GRB 051210 observed by Swift astro-ph/0701819 Metal enrichment of the intra-cluster medium over a Hubble time for merging and relaxed galaxy clusters astro-ph/0701820 MOIRCS Deep Survey. II. Clustering Properties of K-band Selected Galaxies in GOODS-North Region Stat astro-ph/0701821 Shocks and cold fronts in galaxy clusters astro-ph/0701822 Spiral shocks and the formation of molecular clouds in a two phase medium Members: 3658 astro-ph/0701823 Depletion and low gas temperature in the L183 prestellar core : the N2H+ - N2D+ tool Articles: 2'599'751 astro-ph/0701824 Detection of a Large Flare in FR Cnc (=1RXS J083230.9+154940) Articles rated: 2609 astro-ph/0701825 2003--2005 INTEGRAL and XMM-Newton observations of 3C 273 astro-ph/0701826 Phase mixing in MOND 03 November 2024 astro-ph/0701827 Kinematics and Chemistry of the Hot Molecular Core in G34.26+0.15 at High Resolution astro-ph/0701828 SALT2: using distant supernovae to improve the use of Type Ia supernovae as distance indicators astro-ph/0701829 A large CO and HCN line survey of Luminous Infrared Galaxies astro-ph/0701830 Central Stellar Populations of S0 Galaxies in The Fornax Cluster astro-ph/0701831 Supernova neutrinos, from back of the envelope to supercomputer astro-ph/0701832 Lessons from Surveys of The Galaxy astro-ph/0701833 Internal dynamics of the radio halo cluster Abell 773: a multiwavelength analysis astro-ph/0701834 An XMM-Newton Observation of the Local Bubble Using a Shadowing Filament in the Southern Galactic Hemisphere astro-ph/0701835 Astrophysical Origins of the Highest Energy Cosmic Rays astro-ph/0701836 INTEGRAL/IBIS all-sky survey in hard X-rays astro-ph/0701837 Microquasars: Progress made and open questions astro-ph/0701838 High Velocity Outflows in Quasars astro-ph/0701839 The Red-Sequence Cluster Surveys astro-ph/0701840 Narrow-line AGN in the ISO-2MASS Survey astro-ph/0701841 Differential Evolution of the UV Luminosity Function of Lyman Break Galaxies from z~5 to 3 astro-ph/0701842 Kinematics and stellar populations of the dwarf elliptical galaxy IC 3653 astro-ph/0701843 A CO J=3-2 Survey of the Galactic Center astro-ph/0701844 High Resolution Mappings of the L=1.3 deg Complex in Molecular Lines : Discovery of a Proto-Superbubble astro-ph/0701845 Physical Conditions of Molecular Gas in the Galactic Center astro-ph/0701846 A Detailed Study on the Equal Arrival Time Surface Effect in Gamma-Ray Burst Afterglows astro-ph/0701847 Stability of Toroidal Magnetic Fields in Rotating Stellar Radiation Zones astro-ph/0701848 The modified Newtonian dynamics-MOND-and its implications for new physics astro-ph/0701849 High velocity spectroscopic binary orbits from photoelectric radial velocities: BD +30 2129 A astro-ph/0701850 Millisecond dips in the RXTE/PCA light curve of Sco X-1 and TNO occultation astro-ph/0701851 On the validity of the Wigner-Seitz approximation in neutron star crust astro-ph/0701852 The impact of magnetic field on the cluster M-T relation astro-ph/0701853 Structure and evolution of magnetized clusters: entropy profiles, S-T and L_X-T relations astro-ph/0701854 PLUTO: a Numerical Code for Computational Astrophysics astro-ph/0701855 Swift Observations of GRB 051109B astro-ph/0701856 Diffuse Neutrino and Gamma-ray Emissions of the Galaxy above the TeV astro-ph/0701857 Hot and cold gas accretion and feedback in radio-loud active galaxies astro-ph/0701858 First limits on WIMP nuclear recoil signals in ZEPLIN-II: a two phase xenon detector for dark matter detection astro-ph/0701859 Reply to: Critical revision of the ZEPLIN-I sensitivity to WIMP interactions astro-ph/0701860 Time Variations of the Forbush Decrease Data astro-ph/0701861 Separating Physical Components from Galaxy Spectra by Subspace Methods astro-ph/0701862 Stellar Populations in the Center of the Barred Galaxy NGC 4900 astro-ph/0701863 Tracing of Error in a Time Series Data astro-ph/0701864 An Adaptive Approach to Filter a Time Series Data astro-ph/0701865 Using the X-ray Dust Scattering Halo of 4U 1624-490 to determine distance and dust distributions astro-ph/0701866 A Search for Propylene Oxide and Glycine in Sagittarius B2 (LMH) and Orion astro-ph/0701867 Clustered nested sampling: efficient Bayesian inference for cosmology astro-ph/0701868 X-ray/UV/Optical follow-up of the blazar PKS 2155-304 after the giant TeV flares of July 2006 astro-ph/0701869 Accreting X-ray pulsars observed with Integral astro-ph/0701870 QSO Lensing Magnification: A Comparison of 2QZ and SDSS Results astro-ph/0701871 Hard X-ray Spectra and Positions of Solar Flares observed by RHESSI: photospheric albedo, directivity and electron spectra astro-ph/0701872 Coronal Heating, Weak MHD Turbulence and Scaling Laws astro-ph/0701873 Radial Dependency of Stellar Population Properties in Disk Galaxies from SDSS Photometry astro-ph/0701874 The Progenitors of Short Gamma-Ray Bursts astro-ph/0701875 Redshifts of Emission Line Objects in the Hubble Ultra Deep Field astro-ph/0701876 The WiggleZ project: AAOmega and Dark Energy astro-ph/0701877 Molecular cloud regulated star formation in galaxies astro-ph/0701878 Kinematics and dynamics of the M51-type galaxy pair NGC 3893/96 (KPG 302) astro-ph/0701879 The probability distribution function of the SZ power spectrum: an analytical approach astro-ph/0701880 HST/NICMOS Imaging Polarimetry of Proto-Planetary Nebulae II: Macro-morphology of the Dust Shell Structure via Polarized astro-ph/0701881 Stellar Populations in Spiral Galaxies astro-ph/0701882 An Empirically-Calibrated Model For Interpreting the Evolution of Galaxies During the Reionization Era astro-ph/0701883 Stellar populations -- the next ten years astro-ph/0701884 Dark Matter Searches with GLAST astro-ph/0701885 The search for Milky Way halo substructure WIMP annihilations using the GLAST LAT astro-ph/0701886 Quantum noises and the large scale structure astro-ph/0701887 Dynamical Interactions and the Black Hole Merger Rate of the Universe astro-ph/0701888 Pulsar’s kicks and Gamma-ray bursts astro-ph/0701889 Void-Supercluster Alignments astro-ph/0701890 Observations of TeV gamma-rays from Mrk 421 during Dec. 2005 to Apr. 2006 with the TACTIC telescope astro-ph/0701891 Bubbles as tracers of heat input to cooling flows astro-ph/0701892 Off-center burnt carbon-oxygen stars as supernova progenitors astro-ph/0701893 The Geometry of PSR B0031-07 astro-ph/0701894 Molecular absorptions in high-z objects astro-ph/0701895 Ly-alpha forest: efficient unbiased estimation of second-order properties with missing data astro-ph/0701896 Relation between photospheric magnetic field and chromospheric emission astro-ph/0701897 A multi-scale study of infrared and radio emission from Scd galaxy M33 astro-ph/0701898 Optical polarimetry of HH 135/HH 136 astro-ph/0701899 The Star Formation History of Late Type Galaxies astro-ph/0701900 Expansion history and f(R) modified gravity astro-ph/0701901 Wind measurements in Mars’ middle atmosphere at equinox and solstice: IRAM Plateau de Bure interferometric CO observations astro-ph/0701902 Paleontology of Galaxies: Recovering the Star Formation & Chemical Enrichment Histories from Galaxy Spectra astro-ph/0701903 Shearingbox-implementation for the central-upwind, constraint-transport MHD-code NIRVANA astro-ph/0701904 Type Ia supernova diversity: Standardizing the candles astro-ph/0701905 On the Origin of the Dark Gamma-Ray Bursts astro-ph/0701906 A Mid-Infrared Spitzer Study of the Herbig Be Star R Mon and the Associated HH 39 Herbig-Haro Object astro-ph/0701907 Studying stellar populations at high spectral resolution astro-ph/0701908 On the origin of solar wind. Alfven waves induced jump of coronal temperature astro-ph/0701909 Adaptive smoothing lengths in SPH astro-ph/0701910 The VMC survey and the SFH of some Local Group Galaxies astro-ph/0701911 Towards a calibration of SSP models from the optical to the mid-infrared astro-ph/0701912 Predicted and observed evolution in the mean properties of Type Ia supernovae with redshift astro-ph/0701913 The Determination of Stellar Parameters of Giants in the Galactic Disks and Bulge astro-ph/0701914 Evolution of polarization orientations in a flat universe with vector perturbations: CMB and quasistellar objects astro-ph/0701915 On the ultra-compact nature of 4U1822-000 astro-ph/0701916 A First Study of Giant Stars in the Galactic Bulge based on Crires spectra astro-ph/0701917 Measure of Solar oscillations and supergranulation with Magnetic-Optical Filter astro-ph/0701918 Velocity Fields of Spiral Galaxies in z~0.5 Clusters astro-ph/0701919 Binary Lenses in OGLE-III EWS Database. Season 2004 astro-ph/0701920 On the temporal variability classes found in long gamma-ray bursts with known redshift astro-ph/0701921 Fast Estimator of Primordial Non-Gaussianity from Temperature and Polarization Anisotropies in the Cosmic Microwave astro-ph/0701922 Stellar Populations in Normal Galaxies astro-ph/0701923 North-south asymmetry in solar activity: predicting the amplitude of the next solar cycle astro-ph/0701924 AEGIS: Star formation in field galaxies since z=1.1 . Dominance of gradually declining over episodic star formation astro-ph/0701925 The Importance of Mergers for the Origin of Intracluster Stars in Cosmological Simulations of Galaxy Clusters astro-ph/0702001 On the infant weight loss of low- to intermediate-mass star clusters astro-ph/0702004 GRB 060218 and the outliers with respect to the Ep-Eiso correlation astro-ph/0702005 The mass and radius of the M-dwarf in the short period eclipsing binary RR Caeli astro-ph/0702006 Gamma-ray Burst UV/optical afterglow polarimetry as a probe of Quantum Gravity astro-ph/0702007 A Nozzle Analysis of Slow-Acceleration Solutions in One-Dimensional Models of Rotating Hot-Star Winds cond-mat/0701022 Exact Thermodynamics of Pairing and Charge-spin Separation Crossovers in Small Hubbard Nanoclusters cond-mat/0701023 Fano Effect in a Few-Electron Quantum Dot cond-mat/0701024 Magnetic-field symmetries of mesoscopic nonlinear conductance cond-mat/0701025 Strictly correlated electrons in density functional theory: A general formulation with applications to spherical densities cond-mat/0701026 SU(4) symmetry and new fractional quantum Hall states in graphene cond-mat/0701027 Photoinduced magnetism and long-range ordering in rubidium cobalt hexacyanoferrate Prussian blue analog nanoparticles cond-mat/0701028 Observation of Faraday Waves in a Bose-Einstein Condensate cond-mat/0701029 What is a crystal? cond-mat/0701030 Stiff Quantum Polymers cond-mat/0701031 Control of fine-structure splitting and excitonic binding energies in selected individual InAs/GaAs quantum dots cond-mat/0701032 Coil-Globule transition of a single short polymer chain - an exact enumeration study cond-mat/0701033 Doping induced magnetic clusters and Co spin state transition in Na_xCoO_2 with 0.78<x<.97 cond-mat/0701034 Resonant soft x-ray magnetic scattering from the 4f and 3d electrons in DyFe(4)Al(8) cond-mat/0701035 On the validity of entropy production principles for linear electrical circuits cond-mat/0701036 Decoherence of a Spin Qubit Coupled with Spin Chain cond-mat/0701037 Unified model for network dynamics exhibiting nonextensive statistics cond-mat/0701038 Quasi-One-Dimensional Spin-Density-Wave States with Two Kinds of Periodic Potentials and a Interchain Misfit cond-mat/0701039 Comment on ``Phase transitions in a square Ising model with exchange and dipole interactions’’ by E. Rastelli, S. Regina and A. Tassi cond-mat/0701040 Response to perturbations for granular flow in a hopper cond-mat/0701041 Single-qubit lasing and cooling at the Rabi frequency cond-mat/0701042 Physics of the Pseudogap II: Dynamics, Incompressibility, and Fermi Arcs as Motional Narrowing cond-mat/0701043 Random matrix analysis of complex networks cond-mat/0701044 Dynamics of the BCS-BEC crossover from a few-body perspective cond-mat/0701045 Andreev reflection in bilayer graphene cond-mat/0701046 (In,Ga)As gated-vertical quantum dot with an Al2O3 insulator cond-mat/0701047 Enhanced current flow through meandering and tilted grain boundaries in YBCO films cond-mat/0701048 The renormalized jellium model for spherical and cylindrical colloids cond-mat/0701049 Formation energy and interaction of point defects in two-dimensional colloidal crystals cond-mat/0701050 Crossover dark soliton dynamics in ultracold one-dimensional Bose gases cond-mat/0701051 Collisional properties of sympathetically cooled $^{39}$K cond-mat/0701052 Dispersion of the odd magnetic resonant mode in near-optimally doped Bi2Sr2CaCu2O8+d cond-mat/0701053 Sum Rules for the Optical and Hall Conductivity in Graphene cond-mat/0701054 Suppression of spin relaxation in an InAs nanowire double quantum dot cond-mat/0701055 Entropy and Entanglement in Quantum Ground States cond-mat/0701056 Model Energy Landscapes of Low-Temperature Fluids: Dipolar Hard Spheres cond-mat/0701057 Using time reversal symmetry for sensitive incoherent matter-wave Sagnac interferometry cond-mat/0701058 Decoherence dynamics in low-dimensional cold atoms interferometers cond-mat/0701059 Two-dimensional Vortices in Superconductors cond-mat/0701060 Analytical solution of a Hubbard model extended by nearest neighbour Coulomb and exchange interaction on a triangle and cond-mat/0701061 Heat Capacity in Magnetic and Electric Fields Near the Ferroelectric Transition in Tri-Glycine Sulfate cond-mat/0701062 Area constrained SOS models of interfaces cond-mat/0701063 Anomalous Hall Resistance in Bilayer Quantum Hall Systems cond-mat/0701064 Plutonium and Quantum Criticlity cond-mat/0701065 Rashba interferometers: Spin-dependent single and two-electron interference cond-mat/0701066 Well defined transition to gel-like aggregates of attractive athermal particles cond-mat/0701067 Time Evolution of the Neel State cond-mat/0701068 Dynamic Lattice Distortions in Sr$_2$RuO$_4$: A microscopic study by perturbed angular correlation (TDPAC) spectroscopy cond-mat/0701069 22nd order high-temperature expansion of nearest-neighbor models with O(2) symmetry on a simple cubic lattice cond-mat/0701070 Random walks and diameter of finite scale-free networks cond-mat/0701071 Exact solution of the mixed-spin Ising model on a decorated square lattice with two different kinds of decorating spins on horizontal and vertical bonds cond-mat/0701072 Ion specificity and anomalous electrokinetic effects in hydrophobic nanochannels cond-mat/0701073 Pinning-controllability of complex networks cond-mat/0701074 Controlling crystallization and its absence: Proteins, colloids and patchy models cond-mat/0701075 Non-Local Finite-Size Effects in the Dimer Model cond-mat/0701076 Colloids in a periodic potential: driven lattice gas in continuous space cond-mat/0701077 Effect of the metal-to-wire ratio on the high-frequency magnetoimpedance of glass-coated CoFeBSi amorphous microwires cond-mat/0701078 Protocols for optimal readout of qubits using a continuous quantum nondemolition measurement cond-mat/0701079 Thermodynamic consistency between the energy and virial routes in the mean spherical approximation for soft potentials cond-mat/0701080 Spin-phonon coupling in antiferromagnetic chromium spinels cond-mat/0701081 Nanometer scale electronic reconstruction at the interface between LaVO3 and LaVO4 cond-mat/0701082 Quantum confinement effects in InAs-InP core-shell nanowires cond-mat/0701083 Spin-modulated quasi-1D antiferromagnet LiCuVO_4 cond-mat/0701084 Fabrication, optical characterization and modeling of strained core-shell nanowires cond-mat/0701085 Anharmonicity and self-energy effects of the E2g phonon in MgB2 cond-mat/0701086 Nonequilibrium resonant spectroscopy of molecular vibrons cond-mat/0701087 Magnetic Susceptibility of the Kagome Antiferromagnet ZnCu_3(OH)_6Cl_2 cond-mat/0701088 Origin and roles of a strong electron-phonon interaction in cuprate oxide superconductors cond-mat/0701089 The ground state of clean and defected graphene: Coulomb interactions of massless Dirac fermions, pair-distribution functions and spin-polarized phases cond-mat/0701090 Colossal electroresistance in ferromagnetic insulating state of single crystal Nd$_0.7$Pb$_0.3$MnO$_3$ cond-mat/0701091 Exact analytical evaluation of time dependent transmission coefficient from the method of reactive flux for an inverted parabolic barrier cond-mat/0701092 Deformation of SU(4) singlet spin-orbital state due to Hund’s rule coupling cond-mat/0701093 Making a splash with water repellency cond-mat/0701094 Efficient measurement of linear susceptibilities in molecular simulations: Application to aging supercooled liquids cond-mat/0701095 Magnetic flux in mesoscopic rings: Quantum Smoluchowski regime cond-mat/0701096 Semiclassical Anyon Dynamics in Matter and Noncommutative Geometry: A Berry Phase Connection cond-mat/0701097 On the absence of the glass transition in two dimensional hard disks cond-mat/0701098 Fermi-Bose mixture across a Feshbach resonance cond-mat/0701099 Non-periodic pseudo-random numbers used in Monte Carlo calculations cond-mat/0701100 Single Molecule Spectroscopy as a possible tool to study the electric field in superconductors cond-mat/0701101 Phase Field Modeling of Fracture and Stress Induced Phase Transitions cond-mat/0701102 Resonant reflection at magnetic barriers in quantum wires cond-mat/0701103 Negative refraction in nonlinear wave systems cond-mat/0701104 Anisotropic Electron Spin Lifetime in (In,Ga)As/GaAs (110) Quantum Wells cond-mat/0701105 The numerical renormalization group method for quantum impurity systems cond-mat/0701106 Instabilities in the vortex matter and the peak effect phenomenon cond-mat/0701107 Many-body effects in transport through open systems: pinning of resonant levels cond-mat/0701108 Thomas-Fermi Screening in Graphene home | contact | terms of use | sitemap Copyright © 2005-2024 - Scimetrica Research articles search articles reviews guidelines articles index My Pages my alerts my messages my reviews my favorites Members: 3658 Articles: 2'599'751 Articles rated: 2609 03 November 2024 Research articles search articles reviews guidelines articles index My Pages my alerts my messages my reviews my favorites Members: 3658 Articles: 2'599'751 Articles rated: 2609 03 November 2024 Articles index arXiv.org, 2007 Months: 1 2 3 4 5 6 7 8 9 10 11 12 arXiv.org, 1.2007 Results 801 to 1000 of 4'537. [ 1 2 3 4 5 6 7 8 9 ... 23 ] Next astro-ph/0701818 The short GRB 051210 observed by Swift astro-ph/0701819 Metal enrichment of the intra-cluster medium over a Hubble time for merging and relaxed galaxy clusters astro-ph/0701820 MOIRCS Deep Survey. II. Clustering Properties of K-band Selected Galaxies in GOODS-North Region astro-ph/0701821 Shocks and cold fronts in galaxy clusters astro-ph/0701822 Spiral shocks and the formation of molecular clouds in a two phase medium astro-ph/0701823 Depletion and low gas temperature in the L183 prestellar core : the N2H+ - N2D+ tool astro-ph/0701824 Detection of a Large Flare in FR Cnc (=1RXS J083230.9+154940) astro-ph/0701825 2003--2005 INTEGRAL and XMM-Newton observations of 3C 273 astro-ph/0701826 Phase mixing in MOND astro-ph/0701827 Kinematics and Chemistry of the Hot Molecular Core in G34.26+0.15 at High Resolution astro-ph/0701828 SALT2: using distant supernovae to improve the use of Type Ia supernovae as distance indicators astro-ph/0701829 A large CO and HCN line survey of Luminous Infrared Galaxies astro-ph/0701830 Central Stellar Populations of S0 Galaxies in The Fornax Cluster astro-ph/0701831 Supernova neutrinos, from back of the envelope to supercomputer astro-ph/0701832 Lessons from Surveys of The Galaxy astro-ph/0701833 Internal dynamics of the radio halo cluster Abell 773: a multiwavelength analysis astro-ph/0701834 An XMM-Newton Observation of the Local Bubble Using a Shadowing Filament in the Southern Galactic Hemisphere astro-ph/0701835 Astrophysical Origins of the Highest Energy Cosmic Rays astro-ph/0701836 INTEGRAL/IBIS all-sky survey in hard X-rays astro-ph/0701837 Microquasars: Progress made and open questions astro-ph/0701838 High Velocity Outflows in Quasars astro-ph/0701839 The Red-Sequence Cluster Surveys astro-ph/0701840 Narrow-line AGN in the ISO-2MASS Survey astro-ph/0701841 Differential Evolution of the UV Luminosity Function of Lyman Break Galaxies from z~5 to 3 astro-ph/0701842 Kinematics and stellar populations of the dwarf elliptical galaxy IC 3653 astro-ph/0701843 A CO J=3-2 Survey of the Galactic Center astro-ph/0701844 High Resolution Mappings of the L=1.3 deg Complex in Molecular Lines : Discovery of a Proto-Superbubble astro-ph/0701845 Physical Conditions of Molecular Gas in the Galactic Center astro-ph/0701846 A Detailed Study on the Equal Arrival Time Surface Effect in Gamma-Ray Burst Afterglows astro-ph/0701847 Stability of Toroidal Magnetic Fields in Rotating Stellar Radiation Zones astro-ph/0701848 The modified Newtonian dynamics-MOND-and its implications for new physics astro-ph/0701849 High velocity spectroscopic binary orbits from photoelectric radial velocities: BD +30 2129 A astro-ph/0701850 Millisecond dips in the RXTE/PCA light curve of Sco X-1 and TNO occultation astro-ph/0701851 On the validity of the Wigner-Seitz approximation in neutron star crust astro-ph/0701852 The impact of magnetic field on the cluster M-T relation astro-ph/0701853 Structure and evolution of magnetized clusters: entropy profiles, S-T and L_X-T relations astro-ph/0701854 PLUTO: a Numerical Code for Computational Astrophysics astro-ph/0701855 Swift Observations of GRB 051109B astro-ph/0701856 Diffuse Neutrino and Gamma-ray Emissions of the Galaxy above the TeV astro-ph/0701857 Hot and cold gas accretion and feedback in radio-loud active galaxies astro-ph/0701858 First limits on WIMP nuclear recoil signals in ZEPLIN-II: a two phase xenon detector for dark matter detection astro-ph/0701859 Reply to: Critical revision of the ZEPLIN-I sensitivity to WIMP interactions astro-ph/0701860 Time Variations of the Forbush Decrease Data astro-ph/0701861 Separating Physical Components from Galaxy Spectra by Subspace Methods astro-ph/0701862 Stellar Populations in the Center of the Barred Galaxy NGC 4900 astro-ph/0701863 Tracing of Error in a Time Series Data astro-ph/0701864 An Adaptive Approach to Filter a Time Series Data astro-ph/0701865 Using the X-ray Dust Scattering Halo of 4U 1624-490 to determine distance and dust distributions astro-ph/0701866 A Search for Propylene Oxide and Glycine in Sagittarius B2 (LMH) and Orion astro-ph/0701867 Clustered nested sampling: efficient Bayesian inference for cosmology astro-ph/0701868 X-ray/UV/Optical follow-up of the blazar PKS 2155-304 after the giant TeV flares of July 2006 astro-ph/0701869 Accreting X-ray pulsars observed with Integral astro-ph/0701870 QSO Lensing Magnification: A Comparison of 2QZ and SDSS Results astro-ph/0701871 Hard X-ray Spectra and Positions of Solar Flares observed by RHESSI: photospheric albedo, directivity and electron spectra astro-ph/0701872 Coronal Heating, Weak MHD Turbulence and Scaling Laws astro-ph/0701873 Radial Dependency of Stellar Population Properties in Disk Galaxies from SDSS Photometry astro-ph/0701874 The Progenitors of Short Gamma-Ray Bursts astro-ph/0701875 Redshifts of Emission Line Objects in the Hubble Ultra Deep Field astro-ph/0701876 The WiggleZ project: AAOmega and Dark Energy astro-ph/0701877 Molecular cloud regulated star formation in galaxies astro-ph/0701878 Kinematics and dynamics of the M51-type galaxy pair NGC 3893/96 (KPG 302) astro-ph/0701879 The probability distribution function of the SZ power spectrum: an analytical approach astro-ph/0701880 HST/NICMOS Imaging Polarimetry of Proto-Planetary Nebulae II: Macro-morphology of the Dust Shell Structure via Polarized Light astro-ph/0701881 Stellar Populations in Spiral Galaxies astro-ph/0701882 An Empirically-Calibrated Model For Interpreting the Evolution of Galaxies During the Reionization Era astro-ph/0701883 Stellar populations -- the next ten years astro-ph/0701884 Dark Matter Searches with GLAST astro-ph/0701885 The search for Milky Way halo substructure WIMP annihilations using the GLAST LAT astro-ph/0701886 Quantum noises and the large scale structure astro-ph/0701887 Dynamical Interactions and the Black Hole Merger Rate of the Universe astro-ph/0701888 Pulsar’s kicks and Gamma-ray bursts astro-ph/0701889 Void-Supercluster Alignments astro-ph/0701890 Observations of TeV gamma-rays from Mrk 421 during Dec. 2005 to Apr. 2006 with the TACTIC telescope astro-ph/0701891 Bubbles as tracers of heat input to cooling flows astro-ph/0701892 Off-center burnt carbon-oxygen stars as supernova progenitors astro-ph/0701893 The Geometry of PSR B0031-07 astro-ph/0701894 Molecular absorptions in high-z objects astro-ph/0701895 Ly-alpha forest: efficient unbiased estimation of second-order properties with missing data astro-ph/0701896 Relation between photospheric magnetic field and chromospheric emission astro-ph/0701897 A multi-scale study of infrared and radio emission from Scd galaxy M33 astro-ph/0701898 Optical polarimetry of HH 135/HH 136 astro-ph/0701899 The Star Formation History of Late Type Galaxies astro-ph/0701900 Expansion history and f(R) modified gravity astro-ph/0701901 Wind measurements in Mars’ middle atmosphere at equinox and solstice: IRAM Plateau de Bure interferometric CO observations astro-ph/0701902 Paleontology of Galaxies: Recovering the Star Formation & Chemical Enrichment Histories from Galaxy Spectra astro-ph/0701903 Shearingbox-implementation for the central-upwind, constraint-transport MHD-code NIRVANA astro-ph/0701904 Type Ia supernova diversity: Standardizing the candles astro-ph/0701905 On the Origin of the Dark Gamma-Ray Bursts astro-ph/0701906 A Mid-Infrared Spitzer Study of the Herbig Be Star R Mon and the Associated HH 39 Herbig-Haro Object astro-ph/0701907 Studying stellar populations at high spectral resolution astro-ph/0701908 On the origin of solar wind. Alfven waves induced jump of coronal temperature astro-ph/0701909 Adaptive smoothing lengths in SPH astro-ph/0701910 The VMC survey and the SFH of some Local Group Galaxies astro-ph/0701911 Towards a calibration of SSP models from the optical to the mid-infrared astro-ph/0701912 Predicted and observed evolution in the mean properties of Type Ia supernovae with redshift astro-ph/0701913 The Determination of Stellar Parameters of Giants in the Galactic Disks and Bulge astro-ph/0701914 Evolution of polarization orientations in a flat universe with vector perturbations: CMB and quasistellar objects astro-ph/0701915 On the ultra-compact nature of 4U1822-000 astro-ph/0701916 A First Study of Giant Stars in the Galactic Bulge based on Crires spectra astro-ph/0701917 Measure of Solar oscillations and supergranulation with Magnetic-Optical Filter astro-ph/0701918 Velocity Fields of Spiral Galaxies in z~0.5 Clusters astro-ph/0701919 Binary Lenses in OGLE-III EWS Database. Season 2004 astro-ph/0701920 On the temporal variability classes found in long gamma-ray bursts with known redshift astro-ph/0701921 Fast Estimator of Primordial Non-Gaussianity from Temperature and Polarization Anisotropies in the Cosmic Microwave Background astro-ph/0701922 Stellar Populations in Normal Galaxies astro-ph/0701923 North-south asymmetry in solar activity: predicting the amplitude of the next solar cycle astro-ph/0701924 AEGIS: Star formation in field galaxies since z=1.1 . Dominance of gradually declining over episodic star formation astro-ph/0701925 The Importance of Mergers for the Origin of Intracluster Stars in Cosmological Simulations of Galaxy Clusters astro-ph/0702001 On the infant weight loss of low- to intermediate-mass star clusters astro-ph/0702004 GRB 060218 and the outliers with respect to the Ep-Eiso correlation astro-ph/0702005 The mass and radius of the M-dwarf in the short period eclipsing binary RR Caeli astro-ph/0702006 Gamma-ray Burst UV/optical afterglow polarimetry as a probe of Quantum Gravity astro-ph/0702007 A Nozzle Analysis of Slow-Acceleration Solutions in One-Dimensional Models of Rotating Hot-Star Winds cond-mat/0701022 Exact Thermodynamics of Pairing and Charge-spin Separation Crossovers in Small Hubbard Nanoclusters cond-mat/0701023 Fano Effect in a Few-Electron Quantum Dot cond-mat/0701024 Magnetic-field symmetries of mesoscopic nonlinear conductance cond-mat/0701025 Strictly correlated electrons in density functional theory: A general formulation with applications to spherical densities cond-mat/0701026 SU(4) symmetry and new fractional quantum Hall states in graphene cond-mat/0701027 Photoinduced magnetism and long-range ordering in rubidium cobalt hexacyanoferrate Prussian blue analog nanoparticles cond-mat/0701028 Observation of Faraday Waves in a Bose-Einstein Condensate cond-mat/0701029 What is a crystal? cond-mat/0701030 Stiff Quantum Polymers cond-mat/0701031 Control of fine-structure splitting and excitonic binding energies in selected individual InAs/GaAs quantum dots cond-mat/0701032 Coil-Globule transition of a single short polymer chain - an exact enumeration study cond-mat/0701033 Doping induced magnetic clusters and Co spin state transition in Na_xCoO_2 with 0.78<x<.97 cond-mat/0701034 Resonant soft x-ray magnetic scattering from the 4f and 3d electrons in DyFe(4)Al(8) cond-mat/0701035 On the validity of entropy production principles for linear electrical circuits cond-mat/0701036 Decoherence of a Spin Qubit Coupled with Spin Chain cond-mat/0701037 Unified model for network dynamics exhibiting nonextensive statistics cond-mat/0701038 Quasi-One-Dimensional Spin-Density-Wave States with Two Kinds of Periodic Potentials and a Interchain Misfit cond-mat/0701039 Comment on ``Phase transitions in a square Ising model with exchange and dipole interactions’’ by E. Rastelli, S. Regina and A. Tassi cond-mat/0701040 Response to perturbations for granular flow in a hopper cond-mat/0701041 Single-qubit lasing and cooling at the Rabi frequency cond-mat/0701042 Physics of the Pseudogap II: Dynamics, Incompressibility, and Fermi Arcs as Motional Narrowing cond-mat/0701043 Random matrix analysis of complex networks cond-mat/0701044 Dynamics of the BCS-BEC crossover from a few-body perspective cond-mat/0701045 Andreev reflection in bilayer graphene cond-mat/0701046 (In,Ga)As gated-vertical quantum dot with an Al2O3 insulator cond-mat/0701047 Enhanced current flow through meandering and tilted grain boundaries in YBCO films cond-mat/0701048 The renormalized jellium model for spherical and cylindrical colloids cond-mat/0701049 Formation energy and interaction of point defects in two-dimensional colloidal crystals cond-mat/0701050 Crossover dark soliton dynamics in ultracold one-dimensional Bose gases cond-mat/0701051 Collisional properties of sympathetically cooled $^{39}$K cond-mat/0701052 Dispersion of the odd magnetic resonant mode in near-optimally doped Bi2Sr2CaCu2O8+d cond-mat/0701053 Sum Rules for the Optical and Hall Conductivity in Graphene cond-mat/0701054 Suppression of spin relaxation in an InAs nanowire double quantum dot cond-mat/0701055 Entropy and Entanglement in Quantum Ground States cond-mat/0701056 Model Energy Landscapes of Low-Temperature Fluids: Dipolar Hard Spheres cond-mat/0701057 Using time reversal symmetry for sensitive incoherent matter-wave Sagnac interferometry cond-mat/0701058 Decoherence dynamics in low-dimensional cold atoms interferometers cond-mat/0701059 Two-dimensional Vortices in Superconductors cond-mat/0701060 Analytical solution of a Hubbard model extended by nearest neighbour Coulomb and exchange interaction on a triangle and tetrahedron cond-mat/0701061 Heat Capacity in Magnetic and Electric Fields Near the Ferroelectric Transition in Tri-Glycine Sulfate cond-mat/0701062 Area constrained SOS models of interfaces cond-mat/0701063 Anomalous Hall Resistance in Bilayer Quantum Hall Systems cond-mat/0701064 Plutonium and Quantum Criticlity cond-mat/0701065 Rashba interferometers: Spin-dependent single and two-electron interference cond-mat/0701066 Well defined transition to gel-like aggregates of attractive athermal particles cond-mat/0701067 Time Evolution of the Neel State cond-mat/0701068 Dynamic Lattice Distortions in Sr$_2$RuO$_4$: A microscopic study by perturbed angular correlation (TDPAC) spectroscopy cond-mat/0701069 22nd order high-temperature expansion of nearest-neighbor models with O(2) symmetry on a simple cubic lattice cond-mat/0701070 Random walks and diameter of finite scale-free networks cond-mat/0701071 Exact solution of the mixed-spin Ising model on a decorated square lattice with two different kinds of decorating spins on horizontal and vertical bonds cond-mat/0701072 Ion specificity and anomalous electrokinetic effects in hydrophobic nanochannels cond-mat/0701073 Pinning-controllability of complex networks cond-mat/0701074 Controlling crystallization and its absence: Proteins, colloids and patchy models cond-mat/0701075 Non-Local Finite-Size Effects in the Dimer Model cond-mat/0701076 Colloids in a periodic potential: driven lattice gas in continuous space cond-mat/0701077 Effect of the metal-to-wire ratio on the high-frequency magnetoimpedance of glass-coated CoFeBSi amorphous microwires cond-mat/0701078 Protocols for optimal readout of qubits using a continuous quantum nondemolition measurement cond-mat/0701079 Thermodynamic consistency between the energy and virial routes in the mean spherical approximation for soft potentials cond-mat/0701080 Spin-phonon coupling in antiferromagnetic chromium spinels cond-mat/0701081 Nanometer scale electronic reconstruction at the interface between LaVO3 and LaVO4 cond-mat/0701082 Quantum confinement effects in InAs-InP core-shell nanowires cond-mat/0701083 Spin-modulated quasi-1D antiferromagnet LiCuVO_4 cond-mat/0701084 Fabrication, optical characterization and modeling of strained core-shell nanowires cond-mat/0701085 Anharmonicity and self-energy effects of the E2g phonon in MgB2 cond-mat/0701086 Nonequilibrium resonant spectroscopy of molecular vibrons cond-mat/0701087 Magnetic Susceptibility of the Kagome Antiferromagnet ZnCu_3(OH)_6Cl_2 cond-mat/0701088 Origin and roles of a strong electron-phonon interaction in cuprate oxide superconductors cond-mat/0701089 The ground state of clean and defected graphene: Coulomb interactions of massless Dirac fermions, pair-distribution functions and spin-polarized phases cond-mat/0701090 Colossal electroresistance in ferromagnetic insulating state of single crystal Nd$_0.7$Pb$_0.3$MnO$_3$ cond-mat/0701091 Exact analytical evaluation of time dependent transmission coefficient from the method of reactive flux for an inverted parabolic barrier cond-mat/0701092 Deformation of SU(4) singlet spin-orbital state due to Hund’s rule coupling cond-mat/0701093 Making a splash with water repellency cond-mat/0701094 Efficient measurement of linear susceptibilities in molecular simulations: Application to aging supercooled liquids cond-mat/0701095 Magnetic flux in mesoscopic rings: Quantum Smoluchowski regime cond-mat/0701096 Semiclassical Anyon Dynamics in Matter and Noncommutative Geometry: A Berry Phase Connection cond-mat/0701097 On the absence of the glass transition in two dimensional hard disks cond-mat/0701098 Fermi-Bose mixture across a Feshbach resonance cond-mat/0701099 Non-periodic pseudo-random numbers used in Monte Carlo calculations cond-mat/0701100 Single Molecule Spectroscopy as a possible tool to study the electric field in superconductors cond-mat/0701101 Phase Field Modeling of Fracture and Stress Induced Phase Transitions cond-mat/0701102 Resonant reflection at magnetic barriers in quantum wires cond-mat/0701103 Negative refraction in nonlinear wave systems cond-mat/0701104 Anisotropic Electron Spin Lifetime in (In,Ga)As/GaAs (110) Quantum Wells cond-mat/0701105 The numerical renormalization group method for quantum impurity systems cond-mat/0701106 Instabilities in the vortex matter and the peak effect phenomenon cond-mat/0701107 Many-body effects in transport through open systems: pinning of resonant levels cond-mat/0701108 Thomas-Fermi Screening in Graphene arXiv.org, 2007 Months: 1 2 3 4 5 6 7 8 9 10 11 12 arXiv.org, 1.2007 Results 801 to 1000 of 4'537. [ 1 2 3 4 5 6 7 8 9 ... 23 ] Next astro-ph/0701818 The short GRB 051210 observed by Swift astro-ph/0701819 Metal enrichment of the intra-cluster medium over a Hubble time for merging and relaxed galaxy clusters astro-ph/0701820 MOIRCS Deep Survey. II. Clustering Properties of K-band Selected Galaxies in GOODS-North Region astro-ph/0701821 Shocks and cold fronts in galaxy clusters astro-ph/0701822 Spiral shocks and the formation of molecular clouds in a two phase medium astro-ph/0701823 Depletion and low gas temperature in the L183 prestellar core : the N2H+ - N2D+ tool astro-ph/0701824 Detection of a Large Flare in FR Cnc (=1RXS J083230.9+154940) astro-ph/0701825 2003--2005 INTEGRAL and XMM-Newton observations of 3C 273 astro-ph/0701826 Phase mixing in MOND astro-ph/0701827 Kinematics and Chemistry of the Hot Molecular Core in G34.26+0.15 at High Resolution astro-ph/0701828 SALT2: using distant supernovae to improve the use of Type Ia supernovae as distance indicators astro-ph/0701829 A large CO and HCN line survey of Luminous Infrared Galaxies astro-ph/0701830 Central Stellar Populations of S0 Galaxies in The Fornax Cluster astro-ph/0701831 Supernova neutrinos, from back of the envelope to supercomputer astro-ph/0701832 Lessons from Surveys of The Galaxy astro-ph/0701833 Internal dynamics of the radio halo cluster Abell 773: a multiwavelength analysis astro-ph/0701834 An XMM-Newton Observation of the Local Bubble Using a Shadowing Filament in the Southern Galactic Hemisphere astro-ph/0701835 Astrophysical Origins of the Highest Energy Cosmic Rays astro-ph/0701836 INTEGRAL/IBIS all-sky survey in hard X-rays astro-ph/0701837 Microquasars: Progress made and open questions astro-ph/0701838 High Velocity Outflows in Quasars astro-ph/0701839 The Red-Sequence Cluster Surveys astro-ph/0701840 Narrow-line AGN in the ISO-2MASS Survey astro-ph/0701841 Differential Evolution of the UV Luminosity Function of Lyman Break Galaxies from z~5 to 3 astro-ph/0701842 Kinematics and stellar populations of the dwarf elliptical galaxy IC 3653 astro-ph/0701843 A CO J=3-2 Survey of the Galactic Center astro-ph/0701844 High Resolution Mappings of the L=1.3 deg Complex in Molecular Lines : Discovery of a Proto-Superbubble astro-ph/0701845 Physical Conditions of Molecular Gas in the Galactic Center astro-ph/0701846 A Detailed Study on the Equal Arrival Time Surface Effect in Gamma-Ray Burst Afterglows astro-ph/0701847 Stability of Toroidal Magnetic Fields in Rotating Stellar Radiation Zones astro-ph/0701848 The modified Newtonian dynamics-MOND-and its implications for new physics astro-ph/0701849 High velocity spectroscopic binary orbits from photoelectric radial velocities: BD +30 2129 A astro-ph/0701850 Millisecond dips in the RXTE/PCA light curve of Sco X-1 and TNO occultation astro-ph/0701851 On the validity of the Wigner-Seitz approximation in neutron star crust astro-ph/0701852 The impact of magnetic field on the cluster M-T relation astro-ph/0701853 Structure and evolution of magnetized clusters: entropy profiles, S-T and L_X-T relations astro-ph/0701854 PLUTO: a Numerical Code for Computational Astrophysics astro-ph/0701855 Swift Observations of GRB 051109B astro-ph/0701856 Diffuse Neutrino and Gamma-ray Emissions of the Galaxy above the TeV astro-ph/0701857 Hot and cold gas accretion and feedback in radio-loud active galaxies astro-ph/0701858 First limits on WIMP nuclear recoil signals in ZEPLIN-II: a two phase xenon detector for dark matter detection astro-ph/0701859 Reply to: Critical revision of the ZEPLIN-I sensitivity to WIMP interactions astro-ph/0701860 Time Variations of the Forbush Decrease Data astro-ph/0701861 Separating Physical Components from Galaxy Spectra by Subspace Methods astro-ph/0701862 Stellar Populations in the Center of the Barred Galaxy NGC 4900 astro-ph/0701863 Tracing of Error in a Time Series Data astro-ph/0701864 An Adaptive Approach to Filter a Time Series Data astro-ph/0701865 Using the X-ray Dust Scattering Halo of 4U 1624-490 to determine distance and dust distributions astro-ph/0701866 A Search for Propylene Oxide and Glycine in Sagittarius B2 (LMH) and Orion astro-ph/0701867 Clustered nested sampling: efficient Bayesian inference for cosmology astro-ph/0701868 X-ray/UV/Optical follow-up of the blazar PKS 2155-304 after the giant TeV flares of July 2006 astro-ph/0701869 Accreting X-ray pulsars observed with Integral astro-ph/0701870 QSO Lensing Magnification: A Comparison of 2QZ and SDSS Results astro-ph/0701871 Hard X-ray Spectra and Positions of Solar Flares observed by RHESSI: photospheric albedo, directivity and electron spectra astro-ph/0701872 Coronal Heating, Weak MHD Turbulence and Scaling Laws astro-ph/0701873 Radial Dependency of Stellar Population Properties in Disk Galaxies from SDSS Photometry astro-ph/0701874 The Progenitors of Short Gamma-Ray Bursts astro-ph/0701875 Redshifts of Emission Line Objects in the Hubble Ultra Deep Field astro-ph/0701876 The WiggleZ project: AAOmega and Dark Energy astro-ph/0701877 Molecular cloud regulated star formation in galaxies astro-ph/0701878 Kinematics and dynamics of the M51-type galaxy pair NGC 3893/96 (KPG 302) astro-ph/0701879 The probability distribution function of the SZ power spectrum: an analytical approach astro-ph/0701880 HST/NICMOS Imaging Polarimetry of Proto-Planetary Nebulae II: Macro-morphology of the Dust Shell Structure via Polarized Light astro-ph/0701881 Stellar Populations in Spiral Galaxies astro-ph/0701882 An Empirically-Calibrated Model For Interpreting the Evolution of Galaxies During the Reionization Era astro-ph/0701883 Stellar populations -- the next ten years astro-ph/0701884 Dark Matter Searches with GLAST astro-ph/0701885 The search for Milky Way halo substructure WIMP annihilations using the GLAST LAT astro-ph/0701886 Quantum noises and the large scale structure astro-ph/0701887 Dynamical Interactions and the Black Hole Merger Rate of the Universe astro-ph/0701888 Pulsar’s kicks and Gamma-ray bursts astro-ph/0701889 Void-Supercluster Alignments astro-ph/0701890 Observations of TeV gamma-rays from Mrk 421 during Dec. 2005 to Apr. 2006 with the TACTIC telescope astro-ph/0701891 Bubbles as tracers of heat input to cooling flows astro-ph/0701892 Off-center burnt carbon-oxygen stars as supernova progenitors astro-ph/0701893 The Geometry of PSR B0031-07 astro-ph/0701894 Molecular absorptions in high-z objects astro-ph/0701895 Ly-alpha forest: efficient unbiased estimation of second-order properties with missing data astro-ph/0701896 Relation between photospheric magnetic field and chromospheric emission astro-ph/0701897 A multi-scale study of infrared and radio emission from Scd galaxy M33 astro-ph/0701898 Optical polarimetry of HH 135/HH 136 astro-ph/0701899 The Star Formation History of Late Type Galaxies astro-ph/0701900 Expansion history and f(R) modified gravity astro-ph/0701901 Wind measurements in Mars’ middle atmosphere at equinox and solstice: IRAM Plateau de Bure interferometric CO observations astro-ph/0701902 Paleontology of Galaxies: Recovering the Star Formation & Chemical Enrichment Histories from Galaxy Spectra astro-ph/0701903 Shearingbox-implementation for the central-upwind, constraint-transport MHD-code NIRVANA astro-ph/0701904 Type Ia supernova diversity: Standardizing the candles astro-ph/0701905 On the Origin of the Dark Gamma-Ray Bursts astro-ph/0701906 A Mid-Infrared Spitzer Study of the Herbig Be Star R Mon and the Associated HH 39 Herbig-Haro Object astro-ph/0701907 Studying stellar populations at high spectral resolution astro-ph/0701908 On the origin of solar wind. Alfven waves induced jump of coronal temperature astro-ph/0701909 Adaptive smoothing lengths in SPH astro-ph/0701910 The VMC survey and the SFH of some Local Group Galaxies astro-ph/0701911 Towards a calibration of SSP models from the optical to the mid-infrared astro-ph/0701912 Predicted and observed evolution in the mean properties of Type Ia supernovae with redshift astro-ph/0701913 The Determination of Stellar Parameters of Giants in the Galactic Disks and Bulge astro-ph/0701914 Evolution of polarization orientations in a flat universe with vector perturbations: CMB and quasistellar objects astro-ph/0701915 On the ultra-compact nature of 4U1822-000 astro-ph/0701916 A First Study of Giant Stars in the Galactic Bulge based on Crires spectra astro-ph/0701917 Measure of Solar oscillations and supergranulation with Magnetic-Optical Filter astro-ph/0701918 Velocity Fields of Spiral Galaxies in z~0.5 Clusters astro-ph/0701919 Binary Lenses in OGLE-III EWS Database. Season 2004 astro-ph/0701920 On the temporal variability classes found in long gamma-ray bursts with known redshift astro-ph/0701921 Fast Estimator of Primordial Non-Gaussianity from Temperature and Polarization Anisotropies in the Cosmic Microwave Background astro-ph/0701922 Stellar Populations in Normal Galaxies astro-ph/0701923 North-south asymmetry in solar activity: predicting the amplitude of the next solar cycle astro-ph/0701924 AEGIS: Star formation in field galaxies since z=1.1 . Dominance of gradually declining over episodic star formation astro-ph/0701925 The Importance of Mergers for the Origin of Intracluster Stars in Cosmological Simulations of Galaxy Clusters astro-ph/0702001 On the infant weight loss of low- to intermediate-mass star clusters astro-ph/0702004 GRB 060218 and the outliers with respect to the Ep-Eiso correlation astro-ph/0702005 The mass and radius of the M-dwarf in the short period eclipsing binary RR Caeli astro-ph/0702006 Gamma-ray Burst UV/optical afterglow polarimetry as a probe of Quantum Gravity astro-ph/0702007 A Nozzle Analysis of Slow-Acceleration Solutions in One-Dimensional Models of Rotating Hot-Star Winds cond-mat/0701022 Exact Thermodynamics of Pairing and Charge-spin Separation Crossovers in Small Hubbard Nanoclusters cond-mat/0701023 Fano Effect in a Few-Electron Quantum Dot cond-mat/0701024 Magnetic-field symmetries of mesoscopic nonlinear conductance cond-mat/0701025 Strictly correlated electrons in density functional theory: A general formulation with applications to spherical densities cond-mat/0701026 SU(4) symmetry and new fractional quantum Hall states in graphene cond-mat/0701027 Photoinduced magnetism and long-range ordering in rubidium cobalt hexacyanoferrate Prussian blue analog nanoparticles cond-mat/0701028 Observation of Faraday Waves in a Bose-Einstein Condensate cond-mat/0701029 What is a crystal? cond-mat/0701030 Stiff Quantum Polymers cond-mat/0701031 Control of fine-structure splitting and excitonic binding energies in selected individual InAs/GaAs quantum dots cond-mat/0701032 Coil-Globule transition of a single short polymer chain - an exact enumeration study cond-mat/0701033 Doping induced magnetic clusters and Co spin state transition in Na_xCoO_2 with 0.78<x<.97 cond-mat/0701034 Resonant soft x-ray magnetic scattering from the 4f and 3d electrons in DyFe(4)Al(8) cond-mat/0701035 On the validity of entropy production principles for linear electrical circuits cond-mat/0701036 Decoherence of a Spin Qubit Coupled with Spin Chain cond-mat/0701037 Unified model for network dynamics exhibiting nonextensive statistics cond-mat/0701038 Quasi-One-Dimensional Spin-Density-Wave States with Two Kinds of Periodic Potentials and a Interchain Misfit cond-mat/0701039 Comment on ``Phase transitions in a square Ising model with exchange and dipole interactions’’ by E. Rastelli, S. Regina and A. Tassi cond-mat/0701040 Response to perturbations for granular flow in a hopper cond-mat/0701041 Single-qubit lasing and cooling at the Rabi frequency cond-mat/0701042 Physics of the Pseudogap II: Dynamics, Incompressibility, and Fermi Arcs as Motional Narrowing cond-mat/0701043 Random matrix analysis of complex networks cond-mat/0701044 Dynamics of the BCS-BEC crossover from a few-body perspective cond-mat/0701045 Andreev reflection in bilayer graphene cond-mat/0701046 (In,Ga)As gated-vertical quantum dot with an Al2O3 insulator cond-mat/0701047 Enhanced current flow through meandering and tilted grain boundaries in YBCO films cond-mat/0701048 The renormalized jellium model for spherical and cylindrical colloids cond-mat/0701049 Formation energy and interaction of point defects in two-dimensional colloidal crystals cond-mat/0701050 Crossover dark soliton dynamics in ultracold one-dimensional Bose gases cond-mat/0701051 Collisional properties of sympathetically cooled $^{39}$K cond-mat/0701052 Dispersion of the odd magnetic resonant mode in near-optimally doped Bi2Sr2CaCu2O8+d cond-mat/0701053 Sum Rules for the Optical and Hall Conductivity in Graphene cond-mat/0701054 Suppression of spin relaxation in an InAs nanowire double quantum dot cond-mat/0701055 Entropy and Entanglement in Quantum Ground States cond-mat/0701056 Model Energy Landscapes of Low-Temperature Fluids: Dipolar Hard Spheres cond-mat/0701057 Using time reversal symmetry for sensitive incoherent matter-wave Sagnac interferometry cond-mat/0701058 Decoherence dynamics in low-dimensional cold atoms interferometers cond-mat/0701059 Two-dimensional Vortices in Superconductors cond-mat/0701060 Analytical solution of a Hubbard model extended by nearest neighbour Coulomb and exchange interaction on a triangle and tetrahedron cond-mat/0701061 Heat Capacity in Magnetic and Electric Fields Near the Ferroelectric Transition in Tri-Glycine Sulfate cond-mat/0701062 Area constrained SOS models of interfaces cond-mat/0701063 Anomalous Hall Resistance in Bilayer Quantum Hall Systems cond-mat/0701064 Plutonium and Quantum Criticlity cond-mat/0701065 Rashba interferometers: Spin-dependent single and two-electron interference cond-mat/0701066 Well defined transition to gel-like aggregates of attractive athermal particles cond-mat/0701067 Time Evolution of the Neel State cond-mat/0701068 Dynamic Lattice Distortions in Sr$_2$RuO$_4$: A microscopic study by perturbed angular correlation (TDPAC) spectroscopy cond-mat/0701069 22nd order high-temperature expansion of nearest-neighbor models with O(2) symmetry on a simple cubic lattice cond-mat/0701070 Random walks and diameter of finite scale-free networks cond-mat/0701071 Exact solution of the mixed-spin Ising model on a decorated square lattice with two different kinds of decorating spins on horizontal and vertical bonds cond-mat/0701072 Ion specificity and anomalous electrokinetic effects in hydrophobic nanochannels cond-mat/0701073 Pinning-controllability of complex networks cond-mat/0701074 Controlling crystallization and its absence: Proteins, colloids and patchy models cond-mat/0701075 Non-Local Finite-Size Effects in the Dimer Model cond-mat/0701076 Colloids in a periodic potential: driven lattice gas in continuous space cond-mat/0701077 Effect of the metal-to-wire ratio on the high-frequency magnetoimpedance of glass-coated CoFeBSi amorphous microwires cond-mat/0701078 Protocols for optimal readout of qubits using a continuous quantum nondemolition measurement cond-mat/0701079 Thermodynamic consistency between the energy and virial routes in the mean spherical approximation for soft potentials cond-mat/0701080 Spin-phonon coupling in antiferromagnetic chromium spinels cond-mat/0701081 Nanometer scale electronic reconstruction at the interface between LaVO3 and LaVO4 cond-mat/0701082 Quantum confinement effects in InAs-InP core-shell nanowires cond-mat/0701083 Spin-modulated quasi-1D antiferromagnet LiCuVO_4 cond-mat/0701084 Fabrication, optical characterization and modeling of strained core-shell nanowires cond-mat/0701085 Anharmonicity and self-energy effects of the E2g phonon in MgB2 cond-mat/0701086 Nonequilibrium resonant spectroscopy of molecular vibrons cond-mat/0701087 Magnetic Susceptibility of the Kagome Antiferromagnet ZnCu_3(OH)_6Cl_2 cond-mat/0701088 Origin and roles of a strong electron-phonon interaction in cuprate oxide superconductors cond-mat/0701089 The ground state of clean and defected graphene: Coulomb interactions of massless Dirac fermions, pair-distribution functions and spin-polarized phases cond-mat/0701090 Colossal electroresistance in ferromagnetic insulating state of single crystal Nd$_0.7$Pb$_0.3$MnO$_3$ cond-mat/0701091 Exact analytical evaluation of time dependent transmission coefficient from the method of reactive flux for an inverted parabolic barrier cond-mat/0701092 Deformation of SU(4) singlet spin-orbital state due to Hund’s rule coupling cond-mat/0701093 Making a splash with water repellency cond-mat/0701094 Efficient measurement of linear susceptibilities in molecular simulations: Application to aging supercooled liquids cond-mat/0701095 Magnetic flux in mesoscopic rings: Quantum Smoluchowski regime cond-mat/0701096 Semiclassical Anyon Dynamics in Matter and Noncommutative Geometry: A Berry Phase Connection cond-mat/0701097 On the absence of the glass transition in two dimensional hard disks cond-mat/0701098 Fermi-Bose mixture across a Feshbach resonance cond-mat/0701099 Non-periodic pseudo-random numbers used in Monte Carlo calculations cond-mat/0701100 Single Molecule Spectroscopy as a possible tool to study the electric field in superconductors cond-mat/0701101 Phase Field Modeling of Fracture and Stress Induced Phase Transitions cond-mat/0701102 Resonant reflection at magnetic barriers in quantum wires cond-mat/0701103 Negative refraction in nonlinear wave systems cond-mat/0701104 Anisotropic Electron Spin Lifetime in (In,Ga)As/GaAs (110) Quantum Wells cond-mat/0701105 The numerical renormalization group method for quantum impurity systems cond-mat/0701106 Instabilities in the vortex matter and the peak effect phenomenon cond-mat/0701107 Many-body effects in transport through open systems: pinning of resonant levels cond-mat/0701108 Thomas-Fermi Screening in Graphene home | contact | terms of use | sitemap Copyright © 2005-2024 - Scimetrica
{"url":"http://science-advisor.net/article/index.php?source=arxiv&year=2007&month=1&s=800","timestamp":"2024-11-03T14:22:02Z","content_type":"text/html","content_length":"42619","record_id":"<urn:uuid:b97895a3-c425-4a0f-a290-4c3520bd92ec>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00033.warc.gz"}
PTMFitter and color spaces Hello everybody, I was reading Malzbender-Gelb-Wolters 2001 article "polynomial texture maps", and at page 3 I found this phrase: " We prefer this simple but redundat representation over color spaces such as LUV and YCbCr due to the low cost of evalutation although we have implemented the method in these color space as well", this is also confirmed by Malzbender HP Labs Technical Report of 2001 that reports as possible format also PTM_FORMAT_LUM (color space CrYCb). I didn't found the possibility to select this color space in the PTMFitter anyway, and also in the rtiviewer there is no function to load this kind of file. This question arises since the PTM paper describes what they mean by LRGB but not how to compute the values for the R G and B components ( this http://users.csc.calpoly.edu/~zwood/teaching/csc570/ final06/jrickwal/ maybe descirbes the problem in a more articulated way ). Could it be possible to calculate a LRGB components using a different colorspace like LUV or YCbCr, extracting the luminance component and use the other two as cromacity? Another point: LRGB is known in astrophotograpy as: http://www.astro-imaging.com/Tutorial/LRGB.html, and is the combination of 4 different photos taken whit different filters. Maybe this could be the source of misinterpretation? • 1 month later... As the chromaticity of a pixel is fairly constant under varying light source direction, the unscaled RGB values of the pixel can be computed by separating RGB components from the luminance component. This question arises since the PTM paper describes what they mean by LRGB but not how to compute the values for the R G and B components As you can read in Robust Luminance and Chromaticity for Matte Regression in Polynomial Texture Mapping or Specularity and Shadow Interpolation via Robust Polynomial Texture Maps, the chromaticity χ is defined as RGB colour ρ independent of intensity and is estimated as the median of inlier values, for k = 1..3 ( resp. R, G and B ) : with ω = 0 for outliers (specularities and shadows ) and where let each RGB image acquired is denoted ρ^i. You can identify these outliers thanks to many ways which vary in complexity, some of which are described in the previous papers. A simple way consists in defining a binary weight matrix W where the entry is zero if the intensity of luminance is below a predefined threshold τ, in order to reduce the influence of shadows. Could it be possible to calculate a LRGB components using a different colorspace like LUV or YCbCr, extracting the luminance component and use the other two as chromacity? I dont know if the others color spaces (e.g.YCbCr) are used to calculate the chromaticity ( i.e. an unscaled RGB image) in the different existing fitters. But I suppose the principe stays the same. Instead of computing the RGB image reconstructed as the product of the luminance L (approximated by HSH or PTM technique) and the chromaticity χ, you must adapt the reconstruction function to the corresponding model of color space, the paradigm has changed. The principal difficulty is to prevent the confusion between luma, luminance, chominance, chromaticity... This topic is now archived and is closed to further replies.
{"url":"https://forums.culturalheritageimaging.org/topic/418-ptmfitter-and-color-spaces/","timestamp":"2024-11-14T07:18:47Z","content_type":"text/html","content_length":"68743","record_id":"<urn:uuid:4c46858f-af32-4a60-86b8-01d3171bc17b>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00362.warc.gz"}
A wastewater stream is introduced to the top of a mass-transfer tower where it flows countercurrent... A wastewater stream is introduced to the top of a mass-transfer tower where it flows countercurrent... A wastewater stream is introduced to the top of a mass-transfer tower where it flows countercurrent to an air stream. At one point in the tower, the wastewater stream contains 1× 10-3 g mol A/m3 and the air is essentially free of any A. At the operating conditions within the tower, the film mass-transfer At the operating conditions within the tower, the film mass-transfer coefficients are kL =5×10 kg/mol/m s.(kgmol/m) and kG =0.01kgmol/m .s.atm.The concentrations are in the Henry’s law region where pA,i = HcA,i with H = 10 atm/(kg mol/m3). Determine a) The overall mass flux of A. b) The overall mass-transfer coefficients, KL and KG.
{"url":"https://justaaa.com/other/1144806-a-wastewater-stream-is-introduced-to-the-top-of-a","timestamp":"2024-11-09T07:47:00Z","content_type":"text/html","content_length":"32804","record_id":"<urn:uuid:9e18af33-3051-47a7-a0ca-149588ee24ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00462.warc.gz"}
A High Output Y-Source Boost DC/DC Converter for Renewable Applications Volume 09, Issue 09 (September 2020) A High Output Y-Source Boost DC/DC Converter for Renewable Applications DOI : 10.17577/IJERTV9IS090519 Download Full-Text PDF Cite this Publication Kashapogu Arun , G. Kishor, 2020, A High Output Y-Source Boost DC/DC Converter for Renewable Applications, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT) Volume 09, Issue 09 (September 2020), • Open Access • Authors : Kashapogu Arun , G. Kishor • Paper ID : IJERTV9IS090519 • Volume & Issue : Volume 09, Issue 09 (September 2020) • Published (First Online): 06-10-2020 • ISSN (Online) : 2278-0181 • Publisher Name : IJERT • License: This work is licensed under a Creative Commons Attribution 4.0 International License Text Only Version A High Output Y-Source Boost DC/DC Converter for Renewable Applications Kashapogu Arun Department of Electrical and Electronics Engineering 1. Pulla Reddy Engineering College Kurnool, Andhra Pradesh, India. G. Kishor Associate Professor Department of Electrical and Electronics Engineering Kurnool, Andhra Pradesh, India. Abstract Boost Dc/Dc converter based on Y-source impedance network with high gain without and with closed loop control is analyzed in this paper which provides solutions to distributed generation applications. The analyzed converter consists of a Y- source impedance network which has three-winding inductor tightly coupled for high voltage boosting which is indeed higher than the other classical impedance networks. Turns ratio and winding placement of this inductor can be designed to give the gain desired, while maintaining the duty ratio of switch smaller. Principle of operation is presented and analyzed by mathematical derivations and calculations. Analyzed converter is simulated using MATLAB and the results are in agreement with theoretical analysis and confirm the performance of analyzed converter. Key Words: Dc/Dc converter, Y-source, impedance network, closed loop control, PI controller. 1. INTRODUCTION Distributed Generation powered by renewable energy sources like solar and fuel cells etc., have been gradually recognized as the reliable way of providing premium power for meeting energy demand in future for a low cost. Realizing such a distributed grid however requires more demanding power electronic converters for enhancing the low source voltages to a predefined grid Consider fuel cells as an example, their power densities are comparably high, which along with their low emission and quiet operation, make them attractive as distributed sources [1]. However, the output DC voltages are low usually, and vary widely. Converters with high voltage gains are thus needed generally for tying fuel cells to the distributed grid. Their ranges of gain variation must even be wide for tracking the wide output voltage variations of fuel cells. It is hence important to style high gain converters for these sources, whose gains should ideally nullify the source variations before a regulated voltage are often obtained for grid interfacing. The converters that are designed should also match with other requirements of the intended sources, which for solar and fuel cells, are to avoid negative currents flowing into them and limit ripple currents drawn from them [2]. The converter types can be DC/DC, AC/AC, AC/DC or DC/AC depending upon the system requirement. For DC/DC converters, conventional boost [3], push-pull [4], half-bridge [5], full-bridge [6] and those with numerous voltage multiplier cells [7] have been tried for boosting low source voltages to a regulated DC-link voltage between 200V and 600V depending upon the system requirements. The amount of boosting voltage, demands the inclusion of high frequency transformers in the converter circuits, whose turns ratios are increased proportionally with the demanded gains in almost all cases. The resulting turns ratios and total number of turns might therefore be high if high gain and satisfactory coupling are to be ensured simultaneously [6][8]. Alternatively, cascaded or multilevel techniques can be introduced to the converters for raising their gains but the number of components and complexity involved will undoubtedly increase. Another way is to use two-winding coupled inductors for boosting voltage, whose turns ratios can be kept low comparably even at high gain voltage. Converters which are implemented with coupled inductors therefore can have higher power densities, as compared to the earlier mentioned methods. The coupled inductors are different from the high- frequency transformers earlier mentioned since their magnetizing inductances are designed intentionally to be finite (ideally infinite for transformers). Based on this, a few magnetically coupled impedance networks have been found, whose origin dated back to the Z-source impedance network introduced in [9]. The Z-source network and its subsequent quasi [10] [11], embedded [12] [13] and series [14] variations use two inductors and two capacitors for voltage boosting. They have been demonstrated for DC/AC inverter [22], DC/DC converter [23], AC/AC converter [24] and AC/DC converter [25], but their gains are mentioned to be low. That has led to the development of the switched-inductor [15], tapped-inductor [16], cascaded [17], T-source [18], trans-Z- source [19], and TZ-source [20] networks. The latter three networks are considered different from the others since they use magnetically coupled inductors for boosting gain. They are also different from those push-pull and bridge-based converters mentioned above in the way that their gains are increased at a rate faster than the usual proportional relationship with turns ratios. Their turns ratios and total number of turns can therefore be comparably lesser, while retaining the necessary magnetic coupling at higher power density. It is hence the intention of this paper to continue by analyzing a Y-source network implemented as a high boost DC/DC converter [21]. The resulted converter uses a three-winding coupled inductor for flexibly deciding on its gain, which is presently not matched by any related converters. The number of components used is also kept small to allow the converter to be implemented compactly without compromising its performance. Y-source boost DC/DC converter without closed loop consists some error in the transient N1 N3 C1 VC1 To reduce this error closed loop controller is used. Results thus obtained are in match with theoretical analysis and confirm the performance expected from the converter. 2. Y-SOURCE IMPEDANCE NETWORK The Y-Source impedance network is shown in Fig.1 Fig. 2 Y-source impedance network equivalent during shoot through circuit (ii) When the switch is turned OFF, the diode D1 now conducts linking the source with the impedance network. The equivalent circuit is as shown in Fig. 3. N1 N3 N1 N3 iin N2 C1 VC1 LOAD Vo C1 VC1 SW LOAD Vo Fig. 1 Y-Source impedance network It consists of an active switch SW, a passive diode D1, a Fig. 3: Y-Source impedance network equivalent during non-shoot through circuit By applying KVL, we have, – Vin VL VL VC1 0 capacitor C1 and a three-winding coupled inductor (N1, N2, N3) for the high boost at a small duty ratio for switch SW. As the windings of the coupled inductor are connected directly to switch SW and diode D1, its coupling should be tight to ensure very small leakage inductances at its winding terminals. For Vin VL N1 N2 VC1 Vin VC1 VL N1 N2 the Y-source inductor, three strings are wound to form three VL (Vin VC1 N1 ) N N ) N N coupled windings inductor. A. Circuit Analysis Taking state space average of equations (1) and (2), we can get voltage across capacitor C1, (i) When the switch SW is turned ON, diode D1 is reverse- biased, and replaced by an open-circui in series with source. The equivalent circuit is as shown in Fig. 2. VL (dST) + VL (1 – dST) = 0 (dST normalized ON-time of switch By applying KVL, we have, VC1 * N3 N2 (dST ) + (Vin – VC1 )* N1 N2 (1- dST ) = 0 V * N1 (d ) + N1 (V – V * d – V + V *d ) = 0 C1 N3 N2 ST N1 N2 in in ST C1 C1 ST N 2 N N VC1 1 (dST ) – 1 + 1 * dST Vin (1- dST ) 1 = 0 N3 N2 N1 N2 N1 N2 N1 N2 VC1 N2 * VL N2 * VL 0 N1(N1 N2 ) N1(N3 – N2) N1 VC1 dST (N3 – N2 )(N1 N2 ) – N1 N2 + Vin (1- dST ) N1 N2 = 0 VC1 (N1) VL(N2 N3) 0 V V N 2 N * N ) N * N – N * N ) N L C1 N3 N2 VC1 dST 1 1 2 1 3 1 2 – 1 + Vin (1- dST ) 1 = 0 (N2 * N3 N1 * N3 – N 22 – N1 * N 2 ) N1 N 2 N1 N 2 N1(N1 N3 ) Vin (1- dST ) N1 N 2 = VC1 N1 N 2 – dST ( (N1 N 2 )(N3 N 2 ) Vin (1- dST ) (N1) = VC1 [N1 (1- dST (N1 + N3)] Gv V 1 1 d * K Voltage across capacitor C , VC1 = Vin (1 dST ) 1 N3 N1 Gain of the Y-Source network is raised exponentially by 1 dST N3 N 2 We can find the peak output voltage Vo, from the non-shoot- through circuit as shown below: – Vin + VL + VL N3 + Vo = 0 increasing its winding factor K. Different total number of turns needed by the Y-Source network for producing a higher voltage gain is also realized in the Table I. 0 < dST < K = (+) Gain, Gv N1:N2:N3 1:1:3, 2:1:4, 2 0 < dST < 1/2 (1-2dST)-1 1:2:5, 3:1:5, 4:1:6, 1:3:7 4:2:5, 3:1:3, 3 0 < dST < 1/3 (1-3dST)-1 2:2:4, 1:3:5, 0 < dST < K = (+) Gain, Gv N1:N2:N3 1:1:3, 2:1:4, 2 0 < dST < 1/2 (1-2dST)-1 1:2:5, 3:1:5, 4:1:6, 1:3:7 4:2:5, 3:1:3, 3 0 < dST < 1/3 (1-3dST)-1 2:2:4, 1:3:5, TABLE I: Gain Of Y-Source Impedance Network For Different Winding Factor K And Turns Ratio (N :N :N ) V V N3 + V in L 1 N1 o Substituting equation (2) here, we get, N1 N3 Vin = Vo + (Vin – VC1) N1 N2 1 N1 N1 N1 N3 Vin = Vo + (Vin – VC1) N1 N2 N1 N N N N Vin = Vo + (Vin – VC1) N1 N3 N1 N3 N1 N3 One can extend up to K=10 for different turns ratio (N1:N2:N3) which obtain different gains. Vin = Vo + Vin N1 N2 – VC1 N1 N2 Substituting equation (3) here, we get, 3. Y-SOURCE BOOST DC/DC CONVERTER For its high boost ability, the Y-source impedance network is suitable for implementing high gain converters like the high boost DC/DC converter. N1 N3 Vin (1 dST ) N1 N3 Vin = Vo + Vin N1 N2 – N N N1 N2 1 dST 3 1 N3 N2 N1 N3 N1 N3 (1 dST ) N1 N3 Vin = Vo + Vin N1 N 2 – N N N1 N 2 D1 D2 1 dST 3 1 N3 N2 N2 N N (1 d Vin = Vo + Vin 1 3 1 ST RLOAD o N1 N2 N3 N1 C1 C2 1 dST N3 N2 N N N N N1 N3 (1 dST ) Vo = Vin 1 N1 N 2 1 N N Fig. 4 Y-Source Boost Dc/Dc Converter 1 d ST 3 1 1. Principle of Operation N 2 N1 The Y-source boost DC/DC converter is as shown in Fig. 4, where only one controllable switch is used. In addition N1 N3 N3 N 2 to the basic Y-source impedance network shown in Fig. 1, Vo = Vin 1 N N 1 d ST N3 N1 N N N N there is an extra diode D2 and capacitor C2 across load. Now, when the switch SW is turned on, diodes D1 and D2 will N3 N1 reverse-bias simultaneously. Then C1 to charge the dST N N magnetizing inductance of the coupled inductor, and C2 to V = V o in N N power the load. Turning SW off, on the other hand, causes D1 1 dST 3 1 N3 N 2 to conduct, and the input source to recharge C1. The input source, together with the coupled inductor, also supplies energy to recharge C2 and power the load, but only if the Peak output voltage, Vo = Vin 1 voltage Vo across capacitor C2 is lower than voltage across 1 N N N N N N 1 dST 3 1 switch SW. Then D2 conducts, hence linking C2 and the load to the rest of the circuit. 1 dST * K 1 dST * K Peak output voltage, Vo = Vin 1 The same is repeated when SW turns on again. By N3 N1 periodically switching SW, the load voltage Vo across C2 can be regulated, which according to equation (5) represents a gain Where K = N3 N2 , is a term introduced to represent the winding factor of the integrated magnetics. Gain can be obtained from equation (4) as, Gv Vin 1 1 dST * K This is the maximum gain that the Y-source network can provide. 2. Expected Operating Waveforms Based on the operating principle described above, Fig. 5 0(1 0.375) 1 0.375 shows some expected waveforms from the Y-source boost DC/DC converter in response to the applied gate-source voltage V to switch SW. Gv Vin 1 1 dST * K = 4 When SW is on from t0 to t1 (dST= (t1 t0)/TS), its drain- Taking switching frequency, f = 12.6 KHz source voltage VDS falls to zero, while its current IDS increases From this time period, T = 1/f = 1 = 79.365 S to a finite value. That causes diodes D1 and D2 to become reverse-biased, and their voltages VD1 and VD2 to increase. We have ton ton The input current Iin, which is also same as the current ID1 dST ton toff TS through D1, then collapses to zero during this interval. When SW turns off at t1, VDS across it increases sharply, together with Iin = ID1 and ID2 through the two diodes D1 and D2. These currents, in turn, charge capacitors C1 and C2 with voltage across C2 increasing slightly above voltage across switch VSW at t2. When that happens, diode D2 stops ON- time period, ton = dST * Ts = 0.375 * 79.365 = 29.761875 S And OFF-time period, toff = TS ton =79.365 29.761875 = 49.603125 S From equation (3), we know that voltage across capacitor C1, conducting with current across diode D2, ID2 returned to zero. Since ID2 comes from the dc source, its reduction to zero also Vin (1 dST ) causes Iin to drop. The converter remains in this state until SW N3 N1 1 N3 N2 turns on again at t3 ( TS = t3 t0), causing the waveforms to Substituting input repeat the patterns. For Iin and ID2, it should be mentioned that their patterns can change depending on the charging time constant of C2. Longer charging time will increase the interval between t1 and t2, while a shorter charging time will reduce it. This variation however will not affect the maximum gain that is produced by the converter. voltage Vin = 60V, dST = 0.375 and N1:N2:N3 = 3:1:5, we have, 60(0.625) VC1 60(0.625) VC1 1 0.375* 2 VC1 = 150 V. Now from equation (1), we have, N N N N t VL VC1 N1 V V 3 L C1 L C1 t 5 1 VL = 112.5 V Therefore, voltage across inductor N1, t VN1 = VL1 = VL = 112.5 V. And voltage across inductor N2, t N 2 VN2 = VL2 = VL * N = 112.5 * 3 = 37.5 V. And voltage across inductor N t VN3 = VL3 = VL * N3 = 112.5 * 5 = 187.5 V. VD2 ID2 We can similarly get inductance values as, L1 = 160.714 H t L2 = 17.8571 H L3 = 446.428 H t Generalized mutual inductance with three windings matrix is given as, t0 t1 t2 t3 [L] = L21 L12 L22 L32 Fig. 5 Expected output waveforms from the Y-source boost DC/DC 3. Design Parameters [L] = 53.57 89.285 * 1E-6 H Y-Source boost dc/dc converter is simulated using MATLAB for the turns ratio N1:N2:N3 = 3:1:5. K = N3 N1 =2 N3 N2 The capacitance values are, C1=C2=470 F and the resistance is taken as load, RLOAD = 200 ohms. 4. Simulation Results And let dST = 0.375 For this we get gain as, The simulation results of Y-source boost dc/dc converter as follows: Fig. 6 Input voltage Vin = 60 V Fig. 6 shows input voltage of Vin = 60 V which is given to the Y-Source boost DC/DC converter shown in Fig. 4. Fig. 7 Gate-Source Voltage Vgs (V) Fig.7 shows gate-source voltage Vgs (V) with a duty ratio of 37.5% which has switching frequency of fs=12.6 KHz (Time period, TS = 79.365 Sec). Fig. 8 Voltage across Diode D1, VD1(V) Fig. 9 Current through Diode D1, ID1 = Iin(A) Fig. 10 Drain-Source voltage of SW, Vds(V) Fig. 11 Drain-Source current of SW, Ids(A) Fig. 12 Voltage across Diode D2, VD2(V) Fig. 13 Current through Diode D2, ID2 (A) Fig. 14 Voltage across Capacitor C2, VC1 (V) Fig. 14 shows voltage across capacitor C1, VC1 (V) which is approximately 150 V which supplies the voltage to converter shown in Fig.. 4 when the switch SW is turned off. Fig. 15 Output voltage across RLOAD, Vo (V) Fig. 15 shows output voltage across RLOAD = 200 ohms, Vo (V) which is around 230 V for an input voltage of Vin = 60 V with a gain of Gv = 3.83 Fig. 16 Output current through RLOAD, Io (A) Fig. 16 shows output current through RLOAD = 200 ohms, Io (A) which is 1.18 A with an output voltage of Vo = 230 V which gives output power Po = Vo*Io=271.4 W Fig. 17 Output ripple voltage across RLOAD, Vo (V) Fig. 17 shows output ripple voltage Vo(V) across RLOAD, with a maximum ripple voltage of 229.6 V and a minimum ripple voltage of 229.3 V Normalizedvalueof ripplevoltage Maximumvalue- Minimumvalue 229.6 229.3 = 0.001307 These are some of the tables evaluated for different load resistance, duty ratio and line voltage values to analyze the operation of the converter which are shown in TABLES II, III and TABLE II: LOAD REGULATION (Vin= 60 V) Resistance, R (ohm) Output voltage, Vo (V) Output current, Io (A) Output power, Po=Vo*Io (W) 200 229.5 1.147 263.236 190 229.2 1.208 276.873 180 229.0 1.272 291.288 210 230.0 1.095 251.850 From the TABLE II, it is observed that, output power is increased when load value is decreased and the output power is decreased when the load is decreased by keeping input voltage (Vin= 60 V) constant. TABLE III: DUTY RATIO REGULATION (Vin= 60 V) Duty ratio, D Output voltage, Vo (V) Output current, Io (A) 37.5 229.5 1.147 40.0 282.0 1.410 42.5 363.0 1.815 45.0 492.5 2.465 47.5 650.0 3.250 From the TABLE III, it is observed that, with increase in duty ratio, both output voltage and output current are increased by keeping both input voltage (Vin= 60 V) and load (R = 200 Ohms) constant. TABLE IV: LINE REGULATION (R= 200 ohms) Output current, Io Output power, Po=Vo Input voltage, Vin (V) Output voltage, Vo (V) (A) * Io (W) 60 229.5 1.147 263.23 70 268.5 1.345 361.13 50 190.5 0.953 181.54 From TABLE IV, it is observed that, when input voltage is increased by keeping load resistance constant, the output power is increased and when input voltage is decreased, the output power is decreased by keeping load (R = 200 Ohms) constant. From Figs. 15 and 16 one can observe that output voltage and current waveforms takes some time in transient state to settle. To reduce this rise time and improve the performance of the converter, closed loop control (PI controller) method is implemented by trial and error and simulated using MATLAB. 4. CLOSED LOOP Y-SOURCE BOOST DC/DC CONVERTER USING PI CONTROLLER Closed loop Y-source DC/DC converter using PI controller circuit diagram is as shown in Fig. 18: Now the results obtained for Y-source boost dc/dc converter using PI controller are as follows: Fig. 20 Input voltage, Vin = 60 V. Fig. 20 shows input voltage of Vin = 60 V which is given to the closed loop Y-Source boost DC/DC converter using PI controller shown in Fig. 18. N1 N3 D1 D2 C1 C2 RLOAD Vo _ Fig. 18 Closed loop Y-source boost dc/dc converter using PI controller PI controller used is as shown in Fig. 19: Fig. 21 Gate-source voltage of SW, Vgs(V) Fig. 21 shows gate-source voltage of SW, Vgs(V) which is generated by using PI controller shown in Fig. 19. Ref. voltage Ki 1/s + _ > Vgs Fig. 19 PI controller used in Fig.18 The output voltage which takes more time to settle and the required reference voltage both are implemented using PI controller to reduce the error. And the resultant signal is compared with repeating sequence to generate the required pulse signal. This generated pulse signal is given to the gate terminal of the switch SW. Here reference voltage is 240 V and proportional constant Kp=0.75 and integral constant Ki=110. A. Simulation Results Fig. 22 Voltage across diode D1, VD1(V) Fig. 23 Current through diode D1, ID1 = Iin (A) Fig. 24 Voltage across diode D2, VD2(V) Fig. 25 Drain-source voltage of SW, Vds(V) Fig. 26 Drain-source current of SW, Ids(A) Fig. 27 Voltage across capacitor C1, VC1(V) Fig. 27 shows voltage across capacitor C1, VC1(V) which is approximately 150 V which supplies voltage to the converter shown in Fig.18 when the switch SW is turned off. Fig. 28 Output voltage across RLOAD, Vo(V) Fig. 28 shows output voltage Vo(V) across RLOAD = 200 ohms, which is 240 V for an input voltage of Vin = 60 V with a gain of Gv = 4. Fig. 29: Output current through RLOAD, Io(A) Fig. 29 shows output current Io(A) through RLOAD = 200 ohms, which is 1.2 A with an output voltage Vo = 240 V which gives output power as Po = Vo*Io = 288 W. Fig. 30 Output ripple voltage across RLOAD, Vo(V) Fig. 30 shows output ripple voltage Vo(V) across RLOAD, with a maximum ripple voltage of 240.04 V and a minimum ripple voltage of 239.85 V Normalized valueof ripple voltage Maximum value – Minimum value 240.04 239.85 By comparing the output voltage of Y-source boost dc/dc converter without and with closed loop control, one can observe that the spike obtained of output voltage in transient state is drastically Similarly, for output current also the spike obtained of output current in transient state is improved. And the ripples obtained in output voltage Y-source boost dc/dc converter without closed loop control are also reduced in output voltage by using closed loop control (PI controller). A high gain DC/DC converter operating with a small switch duty ratio has been analyzed and simulated using MATLAB to validate the concept. The analyzed converter uses a Y-source impedance network consisting a three- winding coupled inductor. Turns ratio and winding placement of this inductor can be designed in a way to get the gain desired, while maintaining the duty ratio of switch smaller. The converter is thus unique with more degrees of freedom for tuning the gain, when compared to the existing coupled- inductor-based converters. Therefore, boost dc/dc converter with high-gain which is based on Y-source impedance network without and with closed loop control have been analyzed providing solution to high voltage gain applications, for example in distributed generation. 1. S. Njoya Motapon, L. Dessaint and K. Al-Haddad, "A Comparative Study of Energy Management Schemes for a Fuel-Cell Hybrid Emergency Power System of More-Electric Aircraft," in IEEE Transactions on Industrial Electronics, vol. 61, no. 3, pp. 1320-1334, March 2014, doi: 10.1109/TIE.2013.2257152. 2. Fuel Cell Handbook, 6th Edition, DOE/NETL-2002/1179, US Dept. of Energy, pp. 8.27-8.29, Nov. 2002. 3. B. Huang, A. Shahin, J. P. Martin, S. Pierfederici and B. Davat, "High voltage ratio non-isolated DC-DC converter for fuel cell power source applications," 2008 IEEE Power Electronics Specialists Conference, Rhodes, 2008, pp. 1277-1283, doi: 10.1109/PESC.2008.4592107. 4. G. K. Andersen, C. Klumpner, S. Kjaer and F. Blaabjerg, A New Power Converter for Fuel Cells with High System Efficiency, Int. J. Electron., vol. 90, no. 11/12, pp. 727-750, Nov. 2003. 5. F. Z. Peng, Hui Li, Gui-Jia Su and J. S. Lawler, "A new ZVS bidirectional DC-DC converter for fuel cell and battery application," in IEEE Transactions on Power Electronics, vol. 19, no. 1, pp. 54-65, Jan. 2004, doi: 10.1109/TPEL.2003.820550. 6. Jin Wang, F. Z. Peng, J. Anderson, A. Joseph and R. Buffenbarger, "Low cost fuel cell converter system for residential power generation," in IEEE Transactions on Power Electronics, vol. 19, no. 5, pp. 1315- 1322, Sept. 2004, doi: 10.1109/TPEL.2004.833455. 7. Y. J. A. Alcazar, D. de Souza Oliveira, F. L. Tofoli and R. P. Torrico- Bascopé, "DCDC Nonisolated Boost Converter Based on the Three- State Switching Cell and Voltage Multiplier Cells," in IEEE Transactions on Industrial Electronics, vol. 60, no. 10, pp. 4438-4449, Oct. 2013, doi: 10.1109/TIE.2012.2213555. 8. Y. P. Siwakoti, P. C. Loh, F. Blaabjerg and G. E. Town, "Effects of leakage inductances on magnetically-coupled impedance-source networks," 2014 16th European Conference on Power Electronics and Applications, Lappeenranta, 2014, pp. 1-7, doi: 10.1109/EPE.2014.6910982. 9. Fang Zheng Peng, "Z-source inverter," in IEEE Transactions on Industry Applications, vol. 39, no. 2, pp. 504-510, March-April 2003, doi: 10.1109/TIA.2003.808920. 10. J. Anderson and F. Z. Peng, "Four quasi-Z-Source inverters," 2008 IEEE Power Electronics Specialists Conference, Rhodes, 2008, pp. 2743- 2749, doi: 10.1109/PESC.2008.4592360. 11. J. Anderson and F. Z. Peng, "A Class of Quasi-Z-Source Inverters," 2008 IEEE Industry Applications Society Annual Meeting, Edmonton, AB, 2008, pp. 1-7, doi: 10.1109/08IAS.2008.301. 12. Poh Chiang Loh, Feng Gao, Frede Blaabjerg and Ai Lian Goh, "Buck- boost impedance networks," 2007 European Conference on Power Electronics and Applications, Aalborg, 2007, pp. 1-10, doi: 10.1109/ 13. P. C. Loh, F. Gao and F. Blaabjerg, "Embedded EZ-Source Inverters," in IEEE Transactions on Industry Applications, vol. 46, no. 1, pp. 256- 267, Jan.-feb. 2010, doi: 10.1109/TIA.2009.2036508. 14. Y. Tang, S. Xie and C. Zhang, "An Improved Z-Source Inverter," in IEEE Transactions on Power Electronics, vol. 26, no. 12, pp. 3865-3868, Dec. 2011, doi: 10.1109/TPEL.2009.2039953. 15. M. Nguyen, Y. Lim and G. Cho, "Sitched-Inductor Quasi-Z-Source Inverter," in IEEE Transactions on Power Electronics, vol. 26, no. 11, pp. 3183-3191, Nov. 2011, doi: 10.1109/TPEL.2011.2141153. 16. D. Li, P. C. Loh, M. Zhu, F. Gao and F. Blaabjerg, "Enhanced-Boost Z- Source Inverters with Alternate-Cascaded Switched- and Tapped- Inductor Cells," in IEEE Transactions on Industrial Electronics, vol. 60, no. 9, pp. 3567-3578, Sept. 2013, doi: 10.1109/TIE.2012.2205352. 17. D. Li, F. Gao, P. C. Loh, M. Zhu and F. Blaabjerg, "Hybrid-Source Impedance Networks: Layouts and Generalized Cascading Concepts," in IEEE Transactions on Power Electronics, vol. 26, no. 7, pp. 2028- 2040, July 2011, doi: 10.1109/TPEL.2010.2101617. 18. R. Strzelecki, M. Adamowicz, N. Strzelecka and W. Bury, "New type T-Source inverter," 2009 Compatibility and Power Electronics, Badajoz, 2009, pp. 191-195, doi: 10.1109/CPE.2009.5156034. 19. W. Qian, F. Z. Peng and H. Cha, "Trans-Z-Source Inverters," in IEEE Transactions on Power Electronics, vol. 26, no. 12, pp. 3453-3463, Dec. 2011, doi: 10.1109/TPEL.2011.2122309. 20. M. Nguyen, Y. Lim and Y. Kim, "TZ-Source Inverters," in IEEE Transactions on Industrial Electronics, vol. 60, no. 12, pp. 5686-5695, Dec. 2013, doi: 10.1109/TIE.2012.2229678. 21. Y. P. Siwakoti, P. C. Loh, F. Blaabjerg, S. J. Andreasen and G. E. Town, "Y-Source Boost DC/DC Converter for Distributed Generation," in IEEE Transactions on Industrial Electronics, vol. 62, no. 2, pp. 1059- 1069, Feb. 2015, doi: 10.1109/TIE.2014.2345336. You must be logged in to post a comment.
{"url":"https://www.ijert.org/a-high-output-y-source-boost-dcdc-converter-for-renewable-applications","timestamp":"2024-11-05T19:51:17Z","content_type":"text/html","content_length":"95300","record_id":"<urn:uuid:be52aff6-15c9-4fe8-bffd-3149d4e12a19>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00560.warc.gz"}
下\% 153 8月22日年Asksia APPThe U.S. 50 :Sub Work: Wor - Asksia.ai 下\% 153 8月22日年 Asksia APP The U.S. 50 : Sub Work: Wor itechism of Therefore, the correct statement is: "Prove that opposite sides are congruent and have equal slopes." Key Concept Properties of Parallelograms In coordinate geometry, a quadrilateral is a parallelogram if both pairs of opposite sides are equal in length and have the same slope. You may want to follow up with What is the formula for calculating the slope of a line given two points, $\left(x_{1}, y_{2}\right)$ and $\left(x_{2}, y_{2}\right)$ ? Generate me a similar question Type a question. Qu - tion 3(Mulsple Choice Worth 1 points) Question 1 (Answered) Dent Qumich Answer from Sia Posted 3 months ago {'context': '[{\'Student\': \'my name is Zhang12345, and I am currently in Middle School\', \'Sia\': \'nice to meet you!\'}, {\'Student\': \'\\n下平2:35 9月5日\\n$45 \\\\%$\\n- QuadSpace form\\nAsksia APP\\nThe U.S. 50 :\\nSub Work: Math Work\\nGeometry\\nQuestion 1(Multiple Choice Worth 1 points)\\n(05.02 LC)\\nWhich method can be used to determine if triangle XYZ is isosceles using coordinate geometry?\\n\\nSample Question\\nEvaluate the area of triangle formed by the points (1, 2), (3, 4), and (5, 2).\\nProve that at least two sides are congruent and have equal lengths\\nProve that the slopes of the sides XY and XZ are equal\\nProve that segments YZ and XZ are parallel\\nQuestion 2(Multiple Choice Worth 1 points)\\n(05.02 MC)\\nWhich point (a, b) would make triangle ABC an equilateral triangle?\\nRriman Oration\\nQuestion 1 (Not Answered)\\nPant Qumition\', \'Sia\': \'...\'}]', 'question': 'how to determine if triangle XYZ is isosceles using coordinate geometry?'}
{"url":"https://www.asksia.ai/question-and-answer/%E4%B8%8B-153-8%E6%9C%8822%E6%97%A5%E5%B9%B4Asksia-APPThe-US-50-Sub-Work-Woritechism-ofappasksiaaiMathstepsTherefore-the-correct-statement-is-Prove-that-opposite-sides-are-congruent--gdHMO","timestamp":"2024-11-08T18:01:16Z","content_type":"text/html","content_length":"89232","record_id":"<urn:uuid:acceca2a-a37a-45a7-be20-f34d63398f36>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00635.warc.gz"}
Kamal borrowed rs 15,000 for two years. the rate of interest -Turito Are you sure you want to logout? Kamal borrowed Rs 15,000 for two years. The rate of interest for two successive years is 8% and 10% respectively. If he repays Rs 6200 at the end of the first year, find the outstanding amount at the end of the second year. The correct answer is: 11000 Rupees HINT: Find the total amount at first year, then subtract the repaid amount and then find the amount to be paid at the end of second year. Complete step by step solution: For first year, Let the money borrowed by Kamal be the principal amount, P[1] = 15000 Rupees Here we have rate of interest R = 8% number of years T = 1 We know that So, total amount after one year = Money repaid at the end of first year = 6200 Rupees. ∴ Balance = 16200 - 6200 = 10000 Rupees. For second year, The principal amount becomes P[2] = 10000 Rupees. Here we have rate of interest R = 10% number of years T = 1 We know that So. total amount after second year = Hence the amount to be paid at the end of second year = 11000 Rupees. Get an Expert Advice From Turito.
{"url":"https://www.turito.com/ask-a-doubt/Maths-kamal-borrowed-rs-15-000-for-two-years-the-rate-of-interest-for-two-successive-years-is-8-percent-and-10-pe-qe27b2c17","timestamp":"2024-11-03T03:07:17Z","content_type":"application/xhtml+xml","content_length":"419592","record_id":"<urn:uuid:5a20316a-55c1-4177-b3a8-6e72a454f9c1>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00677.warc.gz"}
ISN: What are Social Networks? Social networks are based on relations between two or a few individuals from friendships over contracts to work contacts. Throughout the course, the theory behind social networks will be put into context with methods of comparing and applying social networks. Examples from different scientific disciplines will be used to illustrate the social networks. Network descriptives Mathematical descriptions of networks are a useful descriptive. An adjacency matrix can be used to represent a graph as nodes and edges. Networks can be analysed on different levels: • Dyad level ([latex]O(n^2)[/latex]) or connections between nodes • Node level ([latex](O(n)[/latex]) or properties of nodes • Network level ([latex](O(1)[/latex]) or clustering of nodes. Centrality could be access to resources, connection between parts, part of interaction. For a detailed report on centrality measures, look at this post in my Complexity and Global Systems Sciences lecture notes. Centrality measures often differ and in larger networks will be different for different measures. The choice of centrality is dependent on the research question. Generally, for any network, one should start with the following descriptives, before continuing to more advanced analysis. 1. Start with a visualisation of a network. 2. Compute density of network (number of edges divided by maximal number of edges; note that the maximal number is different for directed ( [latex]e_{max} = n(n-1)[/latex]) and undirected ([latex]e_ {max} = n(n-1)/2[/latex]) graphs). 3. Measure centrality in social networks.
{"url":"http://blog.gruebel.io/2017/02/20/isn-what-are-social-networks/","timestamp":"2024-11-13T22:58:51Z","content_type":"text/html","content_length":"58660","record_id":"<urn:uuid:198ce60b-252f-488b-9b60-4adbb156817f>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00538.warc.gz"}
Coordinate Systems Coordinate Systems A coordinate system is a method to define the unique position on a plane or in space of geometric elements using combinations of numbers called coordinates. There are two types of Earth coordinate systems: geographic and projected. Our company is named after René Descartes, the inventor of the cartesian coordinate system in the 17th century. This coordinate system was the first link between geometry and algebra, and the foundation for projected coordinate systems. Geographic coordinate systems Geographic coordinates are perhaps the most familiar way of representing coordinates on the Earth’s surface. In these coordinates systems, a location is represented by its latitude (north-south position) and longitude (east-west position). Latitude and longitude are defined using an ellipsoidal representation of the Earth. Because the coordinates are defined on a curved surface, the physical distance covered by one degree of longitude will vary from north to south (due to convergence at the poles), as will latitude from east to west (due to the ellipsoid shape). Placing coordinates on a flat surface requires a map projection, described in the next section. Map projections and projected coordinate systems For many practical purposes, we need to project coordinates onto a 2D plane. A projected coordinate system is defined on a flat, two-dimensional surface. Projected coordinate systems describe locations in linear units (such as meters or feet). Since map projection are abstractions of a 3D Earth onto a 2D plane, distortions are inevitable. These distortions differ for each map projection but are well known and studied. Distortions that can occur with a flat map include those related to: - shape - area - distance - direction While it is impossible to maintain all the native spatial elements of your geometries once projected, you can prioritize the most important element for your maps purpose. Projections often use descriptive names like “conformal”, “equidistant”, “equal-area”, and “azimuthal” to describe the properties that they preserve. A projection can accurately preserve one property (either shape, area, distance or direction) but not others. For this reason, it’s important to choose a projection that preserves the properties that are relevant to your application. For example, the Mercator projection is used for navigation because it preserves the angle between any two curves (conformal projection). Mercator Distortion: The Mercator projection is area distorting, resulting in land masses near the poles to appear much larger relative to those at the equator.
{"url":"https://kb.descarteslabs.com/knowledge/coordinate-systems","timestamp":"2024-11-12T03:39:13Z","content_type":"text/html","content_length":"51870","record_id":"<urn:uuid:5c0262ee-2a42-4791-9bd3-bb04d96406ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00201.warc.gz"}
Negative sign - Printable Version Negative sign - johnet123 - 10-27-2016 06:18 PM I noticed that the negative sign in front of a fraction is barely visible. Try 1/2 and then the +/- key to see what to see what I mean. RE: Negative sign - eried - 10-28-2016 07:11 PM (10-27-2016 06:18 PM)johnet123 Wrote: I noticed that the negative sign in front of a fraction is barely visible. Try 1/2 and then the +/- key to see what to see what I mean. Are you using the large font? It is worsened by that font size RE: Negative sign - johnet123 - 10-28-2016 11:39 PM Yes, I was using the large font. Reducing the font size appears to correct the problem. I guess if you have bad eyesight, as I do, it is a choice between seeing the numbers and characters or the unary minus sign.
{"url":"https://hpmuseum.org/forum/printthread.php?tid=7111","timestamp":"2024-11-09T19:23:03Z","content_type":"application/xhtml+xml","content_length":"3392","record_id":"<urn:uuid:9ae86699-7c9b-4faa-ba3a-13f5e3d0a30a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00571.warc.gz"}
Map Projections and Coordinate Systems — VETfarm Map Projections and Coordinate Systems Map projection is a technique used to display the spherical shape of the Earth on a flat surface. It’s a conversion of the 3D shape of Earth to 2D form. There are endless variants of how a map projection can project the earth surface on a plane. Every map projection must grow or diminish some characteristics, including size, shape, distance, direction, and/or scale. Maps inherently involve distortions because it is impossible to perfectly unfold the surface of a three-dimensional globe onto a two-dimensional plane. When a globe is projected onto a flat surface, some aspects of the map must be distorted, whether it be shape, area, distance, or direction. Different map projections preserve different metrics to suit specific purposes. However, no single projection can preserve all these properties simultaneously, which is why different projections are used depending on the map’s intended purpose. There are three main groups of projections depending on a property they preserve: • Conformal projections (like the Mercator projection) preserve angles and shapes locally, making them useful for navigation. • Equal-area projections (like the Mollweide projection) maintain area accuracy, ensuring that the relative size of regions is correct. • Equidistant projections (like Plate carée) preserve distances along certain lines. Beside that, a map projection can be constructed in such a way that none of the above-mentioned properties are preserved, but distortions of all of these are somewhat minified. These are called compromise projections. Some common map projections / Maximilian Dörrbecker, CC BY-SA 3.0. How a certain projection distorts shapes can be observed on a circle with a fixed diameter when drawn on multiple places over the map. Such a circle is called a Tissot indicatrix. Under conformal projections, the tissot’s indicatrix remains circle, but its size is variable across the map. Under equal-area and equidistant projections, the tissot’s indicatrix can be a circle on certain places, but in most places distorts to an ellipsis. Mercator projection with Tissot’s indicatrix. Source A coordinate system is a method used to identify the exact position of a point in space using numbers called coordinates. In simple terms, it assigns a set of values (commonly marked as "x", "y" , …) to each point, helping us locate where something is. For example, in a 2D space, the "x" coordinate tells us how far left or right a point is, and the "y" coordinate tells us how close or far away. A spatial reference system (SRS) or coordinate reference system (CRS) is a framework used to precisely measure locations on the surface of Earth as coordinates. A particular SRS specification comprises a choice of Earth ellipsoid, horizontal datum, map projection, origin point and unit of measure. As possibilities of how an Earth’s surface can be projected to a plane are endless, there are endless ways to define a spatial reference system. Thousands of various SRS have been specified for use around the world or in specific regions and for various purposes. As a consequence, transformations between different SRS are often required. We can divide spatial reference systems into two groups depending on whether they work with geographic (spherical) coordinates or if they work in projected space with cartesian coordinates. A geographic coordinate system represents the Earth as a perfect sphere and any place on the Earth surface can be described by two numbers denoted by greek letters λ and φ. λ is called longitude and φ is called latitude. Longitude and latitude are angles in degrees and can only take on certain values. Longitude values are restricted from -180 to 180, while latitude values are restricted from -90 to 90. In this representation, negative longitude represents the western hemisphere and positive longitude represents the eastern hemisphere. Similarly, negative latitude represents the southern hemisphere and positive latitude represents the northern hemisphere. Alternatively, the hemisphere can be denoted by a big letter after the value like 49° N stands for latitude 49 degrees north or 15° S means latitude 15 degrees south. A point exactly in the cross-section of the main meridian and the equator is on neither hemisphere and its geographic coordinates are [0,0]. This point can be theoretically anywhere, but in the most commonly used geographic coordinate system WGS84, it is in reality inside of an Atlantic ocean, thus it does not point to any special place. A great circle with a constant longitude is called a meridian. A prime meridian is a meridian with longitude equal to 0. Most widely known is the Greenwich prime meridian, yet others have been in use historically and even nowadays. A great circle with latitude equal to 0 is an equator. Circles parallel to the equator are called parallels or circles of latitude. Earth as a sphere with meridians and parallels in blue, equator as a red line and latitude and longitude angles in black. Most commonly used SRS with a geographic coordinate system is WGS84, a realisation of the World Geodetic System standard.
{"url":"https://vetfarm.org/e-learning/mapping-fundamentals/map-projections-and-coordinate-systems/","timestamp":"2024-11-09T10:19:57Z","content_type":"text/html","content_length":"22520","record_id":"<urn:uuid:841aec66-f38e-4a1d-b72b-2eb4816be368>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00625.warc.gz"}
The Shape of Gravity Today’s paper is part of a special series in anticipation of The Science & Ultimate Reality Symposium in Princeton, a symposium in honor of the 90th year of John Archibald Wheeler, a great physicist and teacher of physicists. More than 80 years ago the German mathematician Theordor Kaluza spotted a curious property about Einstein’s general theory of relativity. The field equations of this theory describe how spacetime is warped by matter, which is generally accepted as the most satisfactory account of gravitation. Kaluza noticed that if Einstein’s equations are written down for a universe in which space has four dimensions rather than three, then not only is gravity correctly described, but electromagnetism too. In other words, if the world were really five-dimensional rather than four (adding in time as well) then both electromagnetism and gravitation would have a common geometrical basis. By all accounts Einstein wasn’t very impressed with the idea. Clever though Kaluza’s theory was, it had a major drawback. Where is the fifth dimension? Why don’t we see it? A possible answer was provided by Oskar Klein. Imagine viewing a hosepipe from afar; it would look like a wiggly line. On closer inspection, however, the line would be revealed as a tube, and what was apparently a point on the line would turn out to be a little circle going around the tube. In the same way, what we might take to be a structureless point in three-dimensional space might in actuality be a little circle going around a fourth space dimension. So the reason we don’t see the extra space dimension could be because it is rolled up to a tiny size (a configuration known to physicists as ‘compactification’). Klein computed the circumference to be about 20 powers of 10 smaller than anatomic nucleus. The idea can be generalized. Perhaps there are two, three, four, extra space dimensions folded up out of sight in this manner? Maybe the nuclear forces could be incorporated this way into a Kaluza-Klein theory, thus reducing all the forces of nature to pure geometry? Such theories were developed in earnest in the 1980s. By the time string theory came along, the assumption of extra dimensions seemed natural. A popular string model, for example, has 26 dimensions in total. But rolling dimensions up is only one way to hide them. Another is to suppose that although real space might have four dimensions, we are trapped in three of them, just as a two-dimensional being is trapped in a surface in Edwin Abbott’s famous (but outrageously sexist) Flatland fable. The confining entity in the case of three-dimensional space embedded inside four dimensions is called a ‘brane’ (after membrane). We could be trapped in a three-brane if the forces that control normal matter, and the photons whereby we see other matter, are confined by a sort of potential well. So in normal circumstances we would not be able to see out into the enveloping higher dimension. But it would be there alright, and might affect the physical within our confining brane, for example, by modifying gravity on a small scale. We can even imagine that collisions between neighboring branes might occur, creating big bangs. Braney researchers have explored many such speculative ideas Lisa Randall is a high-energy physicist from Harvard who hopes we will be able to detect the fifth dimension at work through subtle experiments. If she is right, then in cosmology what you get might be much more than what you see. —Paul Davies When thinking about the outstanding issues in cosmology, it is a good idea to separate the late-time from the early-time issues. It is fairly clear that at late times standard FRW evolution applies because of the observations of the CMBR, the abundance of the elements as predicted by big-bang nucleosynthesis, and the Hubble expansion. However, these probe only late-time/low-energy cosmology back to when the temperature of the universe was of order an MeV. The evolution of the early universe is much less definitive. Early universe cosmology could differ substantially from the conventional picture. Many of the open questions in cosmology center around the range of possibilities for this early universe evolution. Some, such as inflation, are motivated by specific flaws in the conventional picture (in themselves important questions in cosmology). Some, such as extra dimensions, are interesting in that they permit substantially different early universe evolution while nonetheless conforming at late times to what has been observed. New insights into theories with extra dimensions have the potential to address other outstanding issues. Before discussing any specific theory, we list some of the major problems that cosmologists face. There are the horizon, flatness, and homogeneity problems that might be addressed by inflation, which itself raises questions about its implementation. There are questions related to problems raised by gravity as measured on long distance scales, namely the dark matter and dark energy problems. There is the question of why the world appears to be four-dimensional. There is the black hole information paradox, and questions about the holographic nature of gravitational systems and possible evidence for nonlocality. And of course there is the long-standing dilemna of the cosmological constant. Having listed some problems, we now list possible particle physics or gravity systems and which problems they might help address. As with the standard model of particle physics, which agrees with all existing low-energy data but leaves many naturalness problems unresolved, the standard theory of late-time cosmology leaves open several naturalness problems of at least as big proportion. These are successfully addressed by inflation. However, we have yet to find a fully satisfactory inflationary model, that is one that does not require some unnatural assumption or parameter choice. Moreover, the existence of theories of inflation, which rely on a time period of large nonvanishing cosmological constant, cannot necessarily be decoupled from the ultimate resolution of the cosmological constant problem. Many other intriguing questions have evolved around the issue of the dimensionality of space. Ultimately, we would like to address the question of why our universe appears to be four-dimensional. There is the associated question of whether the ultimate theory is four-dimensional, or only appears so in cosmology and particle physics that has been observed. Given that they might exist, it is important to examine the role they might play in addressing questions in particle physics or cosmology. Within this realm, there exist many potential directions, including discovering interesting aspects of gravity theories, exploring new and different time-dependent solutions, and possibly gaining insight into the fundamental nature of quantum gravity. By interesting aspects of gravity theories, I refer to the many new things we have discovered about gravitational theories within only the last few years. For example, the fact that compactification of additional dimensions is not essential, that the graviton can have a mass in AdS space, and the fact that four-dimensional gravity can be a local phenomenon are all new developments. The last point means that local physics can be independent of the space far away. Have we been assuming too much by assuming all of the universe evolves four-dimensionally? Another property of note is that the cosmological constant problem is completely revamped in the context of brane-world physics. The problem is no longer why there is no vacuum energy, but instead why there is a precise relation between brane energy and that of the surrounding bulk spacetime. By insight into the fundamental nature of gravity, I refer to the fact that with explicit new solutions, we can explore questions about holography, for example, which have precise and specific implications in particular theories. These allow tests of holographic conjectures in regulated versions of the theories. By exploring features of known holographic examples, we might also learn features that can be extrapolated to gravitational theories in general. It is likely there are many more unanticipated phenomena yet to be discovered, and that some major problems remaining in cosmology might thereby be addressed.
{"url":"https://metanexus.net/shape-gravity/","timestamp":"2024-11-09T17:05:32Z","content_type":"text/html","content_length":"101898","record_id":"<urn:uuid:614eccf5-fcd1-4a11-b6c0-05f772bd3ada>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00095.warc.gz"}
Lee Sobotka Professor Sobotka is particularly interested in the de-excitation modes of highly-excited nuclei; continuum structure of exotic nuclei; nucleosynthesis, dynamics of nuclear fusion and fission; the asymmetry dependence nucleon correlations in atomic nuclei; the asymmetry dependence of the equation of state of nuclear matter; multi-particle correlations; advanced radiation detectors and the associated electronics including ASIC design; and various applied nuclear science topics. Our interests span from basic nuclear science to selected topics in applied nuclear science. Topics under current investigation include: • The continuum structure of light nuclei both on and off nucleosynthetic paths. • The influence of phase transitions in finite, two-component quantal systems on the dynamics of collisions between heavy nuclei. • Developing techniques to measure the evolution of the nuclear density-of-states with excitation energy. • Clustering in low-density nuclear systems. • The deexcitation of highly-excited nuclei by the emission of complex clusters of nucleons. • Development of new detector technologies and pulse-processing electronics for ionizing radiation. • Employing multiple techniques (including^ 11C positron imaging) to study how the products of photorespiration are used by plants to direct plant physiology. Here we will discuss only the second topic in the list above. Nuclear systems are two component (n and p) quantal systems. When taken through a phase transition, quantal systems must obey the same Gibbs' conditions (equality of the chemical potentials of each substance in the phases in equilibrium) as non-quantal systems. The common component fractionation (distillation) with phase separation can also occur in multicomponent quantal systems. It is just such a component fractionation (different n/p ratios in the low and high-density regions of a reaction system), driven solely by quantal effects, that has drawn our interest in recent. The thermodynamic force driving such a fractionation can be understood as follows. Imagine an unequal filling of dual sets of quantum levels, one set for n's the other for p's. As a result of the different Fermi levels, there is a finite thermodynamic potential difference between n's and p's. Given enough time the Weak interaction will convert n's to p's (or visa-versa) to equalize the Fermi energies. This is in fact what drives β-decay. However due to the weakness of the interaction, such interconversions have characteristic times exceeding 1 On a shorter time scale, fractionation should occur if two "phases" of different density are present. The phase with lower density will have the quantum levels spaced closer together and thus a particle imbalance will result in a smaller absolute chemical potential difference. If the two phases of different density are in equilibrium, the nucleon species in excess will be driven into the low-density phase (where the absolute difference in the Fermi levels is smaller.) Both theoretical modeling and experiments related to generating conditions under which such a fractionation could occur are topics of current interest. Needless to say, this topic is closely related to the Equation of State (EoS) of asymmetric nuclear matter. Decoding this EoS is essential for determining the structure of neutron stars and the nature of super-nova explosions. These stellar explosions are the likely mechanisms for the synthesis of about half of the heavy elements via the rapid neutron capture process. In this process, the explosion generates an intense pulse of neutrons which are sequentially captured by seed nuclei. After the neutron pulse subsides, the very neutron-rich species β-decay back to stability. This element building process is controlled by the masses and the density of states (at the energy corresponding to the capture on a neutron) of the β-unstable, neutron-rich nuclei.
{"url":"https://artsci.washu.edu/faculty-staff/lee-sobotka","timestamp":"2024-11-10T05:31:36Z","content_type":"text/html","content_length":"94470","record_id":"<urn:uuid:fb590e8c-b151-4bd3-88a2-f0523683d453>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00511.warc.gz"}
American Mathematical Society Projectively flat affine surfaces that are not locally symmetric HTML articles powered by AMS MathViewer Proc. Amer. Math. Soc. 123 (1995), 237-246 DOI: https://doi.org/10.1090/S0002-9939-1995-1212285-0 PDF | Request permission By studying affine rotation surfaces (ARS), we prove that any surface affine congruent to ${x^2} + \epsilon {y^2} = {z^r}$ or ${y^2} = z(x + \epsilon z\log z)$ is projectively flat but is neither locally symmetric nor an affine sphere, where $\epsilon$ is 1 or $- 1, r \in {\mathbf {R}} - \{ - 1,0,1,2\}$, and $z > 0$. The significance of these surfaces is due to the fact that until now ${x^2} + \epsilon {y^2} = {z^{ - 1}}$ are the only known surfaces which are projectively flat but not locally symmetric. Although Podestà recently proved the existence of an affine surface satisfying the above italicized conditions, he did not construct any concrete example. References S. Kobayashi and K. Nomizu, Foundations of differential geometry, Vol. I, Wiley, New York, 1969. C. Lee, Affine rotation surfaces, Master’s Thesis, Brown Univ., 1991. —, Generalized affine rotation surfaces, Ph.D. Thesis, Brown Univ., 1993. K. Nomizu, What is affine differential geometry, Proc. Conference on Diff. Geom., Munster, 1982, pp. 42-43. —, Introduction to affine differential geometry, Part I, Lecture Notes, MPI preprint MPI 88-37, 1988; revised: Department of Mathematics, Brown University, 1989. Similar Articles • Retrieve articles in Proceedings of the American Mathematical Society with MSC: 53A15, 53C05, 53C40 • Retrieve articles in all journals with MSC: 53A15, 53C05, 53C40 Bibliographic Information • © Copyright 1995 American Mathematical Society • Journal: Proc. Amer. Math. Soc. 123 (1995), 237-246 • MSC: Primary 53A15; Secondary 53C05, 53C40 • DOI: https://doi.org/10.1090/S0002-9939-1995-1212285-0 • MathSciNet review: 1212285
{"url":"https://www.ams.org/journals/proc/1995-123-01/S0002-9939-1995-1212285-0/?active=current","timestamp":"2024-11-10T15:58:31Z","content_type":"text/html","content_length":"58779","record_id":"<urn:uuid:90988f30-1fcd-4ac1-b78a-f5776e2f927c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00267.warc.gz"}
Problem Model Prompt P_Combinatorics_16 GPT-4 Turbo Direct C w/ H Ground Truth Answer: C(n+1, 2m+1) Use in Code import champ_dataset # "pip install champ-dataset" if necessary dataset = champ_dataset.load('v0') problem = dataset['P_Combinatorics_16'] messages = problem.conversations['GPT-4 Turbo|Direct C w/ H'].messages for message in messages: print(f'Role: {message.role}') print(f'Text: {message.text}') print(f'Error: {message.error}') Full Conversation System Prompt User Input Model Output You are an expert on mathematics. Solve the following problem. Make sure to show your work before giving the final answer. Let A be a string of n binary digits (leading 0s are allowed). How many such strings have the substring "01" appearing exactly m times, as an expression of m and n? You may find the following information useful: 1. For any string, add the digit 1 to the beginning and digit 0 to the end. 2. Consider the gaps between every two digits and classify them into "0-1" transition, "1-0" transition and neither. 3. If there are m "0-1" transitions in the original string, how many "1-0" transitions are there in the new string, and what is their ordering? 4. How many ways in total can we arrange the "0-1" transitions and "1-0" transitions? 5. The number of ways to choose a set of (unordered) k items from n items is C(n, k). To solve this problem, we need to count the number of binary strings of length n that contain exactly m instances of the substring "01". Let's follow the steps provided: 1. For any string A, we add the digit 1 to the beginning and digit 0 to the end. This ensures that any "01" in the original string is preserved, and we can also count "01" if it occurs at the very beginning or end of the original string. 2. Now, we consider the gaps between every two digits. We classify them into "0-1" transitions, "1-0" transitions, and neither (which would be "00" or "11"). Since we are only interested in the "01" transitions, we will focus on those. 3. If there are m "0-1" transitions in the original string, then in the new string (with the added 1 at the beginning and 0 at the end), there will be m "1-0" transitions. The ordering of these transitions will be alternating, starting with a "1-0" transition (since we added a 1 at the beginning) and ending with a "0-1" transition (since we added a 0 at the end). 4. We need to arrange m "0-1" transitions and m "1-0" transitions in an alternating pattern. Since the pattern is fixed (starting with "1-0" and ending with "0-1"), we don't actually have to arrange them; their positions are determined by the number of transitions. 5. Now, we need to distribute the remaining n - 2m digits (which are not part of any "01" or "10" transitions) among the m+1 gaps created by the m "0-1" transitions and m "1-0" transitions. This is a problem of distributing indistinguishable items (the n - 2m digits) into distinguishable bins (the m+1 gaps). The number of ways to distribute n - 2m indistinguishable items into m+1 distinguishable bins is given by the stars and bars method, which is C((n - 2m) + (m+1) - 1, (m+1) - 1) = C(n - m, m). Therefore, the number of binary strings of length n that contain exactly m instances of the substring "01" is given by the binomial coefficient C(n - m, m). This is the final answer, expressed in terms of m and n. Now, summarize the answer above in one sentence, without any intermediate steps or explanations. The number of binary strings of length n that contain exactly m instances of the substring "01" is given by the binomial coefficient C(n - m, m).
{"url":"https://yujunmao1.github.io/CHAMP/solutions/p_combinatorics_16_gpt4t_directcwh.html","timestamp":"2024-11-05T04:01:18Z","content_type":"text/html","content_length":"7860","record_id":"<urn:uuid:6096c84d-5c84-49d3-8767-9e671ea35b00>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00243.warc.gz"}
What is the linear regression line? + Example What is the linear regression line? 1 Answer It's the line that gives a closest fit between to variables if there is supposed to be a linear correlation. In my job as a teacher I had the feeling that students that scored good in Maths also scored good in Physics and vice versa. So I made a scatterplot on a chart in Excel, where x=Maths and y=Physics, where each student was represent by a dot. I noticed that the collection of points looked like a sigar-shape in stead of being all over the place (the latter would mean No correlation at all). And then I did two things: (1) I had the correlation coefficient calculated (which was high) (2) I had the "line of best fit" drawn The latter one is the regression line, and you can even have an equation attached to it. From this you may make a more or less reasonable prediction of one score from the other, depending on how good the correlation is (correlation is another subject). There are a lot of 'buts' and 'ifs'. For one thing you have to be reasonably sure the correlation is linear. Impact of this question 5834 views around the world
{"url":"https://api-project-1022638073839.appspot.com/questions/what-the-linear-regression-line-mean","timestamp":"2024-11-06T23:31:23Z","content_type":"text/html","content_length":"35453","record_id":"<urn:uuid:c6fc787a-eb4f-44f8-95f4-4223c4fe7f4f>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00039.warc.gz"}
Josef Lauri According to our database , Josef Lauri authored at least 35 papers between 1979 and 2024. Collaborative distances: • Dijkstra number^2 of four. • Erdős number^3 of two. Book In proceedings Article PhD thesis Dataset Other Online presence: On csauthors.net: Flip colouring of graphs. Graphs Comb., December, 2024 The feasibility problem for line graphs. Discret. Appl. Math., 2023 On small balanceable, strongly-balanceable and omnitonal graphs. Discuss. Math. Graph Theory, 2022 Index of parameters of iterated line graphs. Discret. Math., 2022 On Zero-Sum Spanning Trees and Zero-Sum Connectivity. Electron. J. Comb., 2022 A note on totally-omnitonal graphs. Bull. ICA, 2020 Preface: The Second Malta Conference in Graph Theory and Combinatorics. Discret. Appl. Math., 2019 The construction of a smallest unstable asymmetric graph and a family of unstable asymmetric graphs with an arbitrarily high index of instability. Discret. Appl. Math., 2019 Notes on spreads of degrees in graphs. Bull. ICA, 2019 Selective hypergraph colourings. Discret. Math., 2016 The saturation number for the length of degree monotone paths. Discuss. Math. Graph Theory, 2015 Constrained colouring and σ-hypergraphs. Discuss. Math. Graph Theory, 2015 (2, 2)-colourings and clique-free σ-hypergraphs. Discret. Appl. Math., 2015 Unstable graphs: A fresh outlook via TF-automorphisms. Ars Math. Contemp., 2015 Independence and matchings in σ-hypergraphs. Australas. J Comb., 2015 Non-monochromatic non-rainbow colourings of σ-hypergraphs. Discret. Math., 2014 On the edge-reconstruction number of a tree. Australas. J Comb., 2014 Links between two semisymmetric graphs on 112 vertices via association schemes. J. Symb. Comput., 2012 Coset graphs for low-density parity check codes: performance on the binary erasure channel. IET Commun., 2011 Two-fold automorphisms of graphs. Australas. J Comb., 2011 Subgraphs as a Measure of Similarity. Proceedings of the Structural Analysis of Complex Networks, 2011 A survey of some open questions in reconstruction numbers. Ars Comb., 2010 Constructing graphs with several pseudosimilar vertices or edges. Discret. Math., 2003 Proceedings of the Handbook of Graph Theory., 2003 On Disconnected Graph with Large Reconstruction Number. Ars Comb., 2002 On a formula for the number of Euler trails for a class of digraphs. Discret. Math., 1997 Pseudosimilarity in Graphs - A Survey. Ars Comb., 1997 Large sets of pseudosimilar vertices. Discret. Math., 1996 The class reconstruction number of maximal planar graphs. Graphs Comb., 1987 Proof of Harary's conjecture on the reconstruction of trees. Discret. Math., 1983 Edge-reconstruction of 4-connected planar graphs. J. Graph Theory, 1982 The reconstruction of maximal planar graphs II. Reconstruction. J. Comb. Theory B, 1981 The reconstruction of maximal planar graphs. I. Recognition. J. Comb. Theory B, 1981 Edge-reconstruction of planar graphs with minimum valency 5. J. Graph Theory, 1979
{"url":"https://www.csauthors.net/josef-lauri/","timestamp":"2024-11-12T16:41:35Z","content_type":"text/html","content_length":"44673","record_id":"<urn:uuid:62950de8-db3a-4183-84a8-1f567e1f0be4>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00793.warc.gz"}
Probability - Proportional Reasoning & Fractions - Math Motivator Probability – Proportional Reasoning & Fractions Learning about probability provides a great opportunity to revisit fractions and many other important math concepts such as proportional reasoning in a fun way! Much of this work involves the use of spinners. Big Ideas • An experimental probability is based on past events and experiments. • A theoretical probability is based on an analysis of what could happen. • In a probability situation, you can never be sure of what will happen next. • An experimental probability approaches a theoretical probability when enough random samples are used. How many times should I land on a heart, star or triangle if I spin the spinner 8 times? With a task like this you can connect all of the above big ideas e.g., Begin by having students spin the spinner 8 times to find out what happens (experimental probability). If things work out as they should (theoretical probability), then out of 8 spins they will land on stars 4 times and the heart and triangle 2 times each. Of course, in a probability situation we can never be sure what will happen. However, if we use the data from the whole class or have the students do the spinning multiple times (enough random samples), the experimental probability will approach the theoretical Fractional thinking comes in when students look at the spinner design and notice that there are twice as many stars as triangles and hearts, therefore, half of the total spins should be stars and one-fourth each for the triangle and the heart (8 total spins = 4 spins on the stars, 2 spins on the triangle, and 2 spins on the heart). Probability is also the perfect opportunity to focus on proportional reasoning. How do the numbers change if the number of spins change from 8 to 16? Sixteen is double 8 so, 4 spins on stars doubles to 8, and 2 each on the triangle and heart, doubles from 2 to 4. What if we spin 24, 32, or 48 times? Think about the relationships between 8, 16, 24, 32, 48 and use this information to determine the theoretical probability. How might we organize the data on a chart so that we can highlight the number patterns? In one simple task, we have touched on probability, fractions, division, decomposing numbers, patterning, data management and proportional reasoning. Wow! Next year wouldn’t it be great to embed probability experiences right from the start of the school year?
{"url":"http://mathmotivator.com/probability-proportional-reasoning-fractions/","timestamp":"2024-11-10T05:07:16Z","content_type":"text/html","content_length":"62295","record_id":"<urn:uuid:5481b12b-df4b-4762-92ab-8b3c27edcdc1>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00705.warc.gz"}
How Much Are Turnovers Hurting Michigan State? [NOTE: Almost the exact same article can be written for Baylor, except that Baylor hasn’t played any tough games, and as a result are undefeated. But I wanted to choose just one team to refer to throughout. So, Baylor fans, just Ctrl+H and replace “Michigan State”/”Tom Izzo” with “Baylor”/”Scott Drew”.] A 6-3 record against a tough schedule certainly isn’t the end of the world, and as The Only Colors pointed out, the Spartans have had plenty of success in the postseason after slow November/December starts. But Michigan State was ranked #2 in the preseason AP poll, and the team is clearly struggling more than expected. Taking a look at their stats page on kenpom.com, what jumps out are the big red splotches on the left: they’re ranked 322nd nationally in Turnover%, 293rd in FT%, and 298th in Steal%. But what are those marks costing Tom Izzo’s team? Quite a lot of offense, it turns out. One way to gauge the effect of turnovers is to look at what happens when a team doesn’t turn the ball over. To calculate a team’s offensive efficiency on possessions where they managed to hang on to the ball (TOAdjOff), I used a simple formula: TOAdjOff = Adjusted Offensive Efficiency / (1 – Turnover%) I then subtracted this from their actual adjusted offensive efficiency, to get what I’ll call turnover cost. It tells us how much a team’s adjusted offensive efficiency would increase if they somehow never turned it over. Here’s the top 20 in the country: In case you’re wondering, that value of 150.6 for TOAdjOff is 3rd in the country, behind Duke and Georgetown. When the Spartans don’t turn it over, they’re among the best of the best. Of course, a turnoverless team is a pipe dream; a more reasonable goal for the Spartans is to try to improve their TO% from abysmal to merely average. This seems doable – the average MSU TO% over the last 8 years has been 21.4%, which is right in line with this year’s national average of 21.2%. Using the same concept as above, but adjusting TO% to 21.2% instead of 0%, Michigan State ends up with an offensive efficiency of 118.7 (a gain of 6.5 over their current 112.2). That would bump their offensive rank from 26th to 4th, and their Pomeroy ranking from 14th to 5th. Couple that with what I can only assume would be a dip in opponent transition points, and they could rise even higher. Michigan State is nearly as poor at free throw shooting as they are at preventing turnovers, but it’s not nearly as important because: A) a missed free throw only costs 1 point, while a wasted possession costs, as we saw above, 1.5 points; and B) there tends to be far fewer free throw attempts than possessions. The Spartans have a 63.4 FT% so far, compared to a national average of 68.1%. Over their 202 FTA, that amounts to a difference of 9.6 total points. Working back from their number of possessions, that works out to 1.5 points per 100 possessions. If you add those 1.5 points onto the 6.5 gained from reducing turnovers, Michigan State’s offensive efficiency would rise to 120.2. However, because the gap between the top 4 teams (Duke, Kansas, Ohio State, and Pittsburgh) and the rest of the field is so large, their overall ranking wouldn’t change. Still, if Tom Izzo can tighten up (see also: tighten up) his leaky boat, he’ll have a good chance of floating down to Houston, come April. 4 comments: 1. Hey, I tweeted at ya. I had to do a bit of algebra before I believed that your formula made sense. I think it would be wiser to just use raw offensive efficiencies, as this better measures how it impacts them on a game-by-game basis. No team ever loses an 'adjusted' point. Also, an increase in TO% increases your defensive rating as well, which you might want to look into. 2. On raw vs. adjusted, I think it depends on what you're interested in. If you want to know how many actual points the turnovers have cost, then raw efficiency is better. But if you want to know how it affects your judgment of the quality of the offense, isn't adjusted efficiency better? I guess going forward, I should do something like show both A) the raw points a team has lost, and B) what the new estimate for their adjusted efficiency should be. It isn't helpful that I used raw numbers in the FT section, huh? And yeah, I did try to acknowledge the defensive impact of turnovers with the throwaway line in there about allowing fewer transition points, but it is something I'll try to explore more. (I guess by regressing offTO% onto raw DefEff, right?) 3. Alright, so I regressed offTO% to DefEff and got a correlation of roughly zero. So here's what turnovers do to an offense, keeping their other four factors equal (tweeted the same, sorry for the tweetspam): 4. After thinking more about this, and reading your blog post (http://thebasketballdistribution.blogspot.com/2010/12/ncaa-four-factor-impact.html for anybody that hasn't seen it - definitely check it out), I'm not sure I want to be using a regression above to deal with either the turnover or the free throw issue. If it were something more complicated like rebounding, a regression would probably be necessary. But turnovers are simple: every possession has either 1 or 0 turnovers; for possessions with 1 turnover, points=0. So when I want to see how efficient MSU is on possessions where they don't turn it over, I can calculate the exact number (Points/(Poss-TO)). Whereas with rebounding, you could in theory have a possession where you get 500 offensive rebounds and 0 points, or you could eventually score after every single rebound - there's no way to know without looking at the play-by-play. In that case, a regression is probably more helpful. Now, to estimate the defensive impact of turnovers, a regression will be useful. Even if offTO% and DefEff have a correlation of ~0, I bet offTO% might come in as significant once you account for the defensive Four Factors. I'll work on that this morning if I get a chance (i.e. if my girlfriend doesn't wake up for another hour). As for free throws, I just realized I am underestimating their impact, because some of the misses are likely on the front end of 1-and-1's. That won't make a huge difference - even if all 9.6 of MSU's extra misses are on the front end of 1-and-1's, that's only costing them ~6.5 more points (assuming ave FT%). But clearly they're NOT all on 1-and-1's, so that number is much lower. Any idea on where I can get info on how what proportion of FTA's are the front end of 1-and-1's?
{"url":"https://audacityofhoops.blogspot.com/2010/12/how-much-are-turnovers-hurting-michigan.html","timestamp":"2024-11-11T21:16:02Z","content_type":"application/xhtml+xml","content_length":"71239","record_id":"<urn:uuid:7dd99f86-ecbd-4264-9a6b-d018d3ed0cdb>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00498.warc.gz"}
Finishing school maths when not ready for college Welcome to the Gifted Issues Discussion Forum. We invite you to share your experiences and to post information about advocacy, research and other gifted education issues on this free public discussion forum. CLICK HERE to Log In. Click here for the Board Rules. Who's Online Now 0 members (), 66 guests, and 23 robots. Key: Admin, Global Mod, Mod S M T W T F S We are in a very similar situation with our 7yo son, in some respects. He's specifically much much better in mathematics than his other subjects, and he clearly just naturally thinks like a mathematician. He's in 1st grade just finishing 5th grade math, and we anticipate him finishing AP Calculus BC (covering 1-1.5 years of college calculus) in 6th grade at age 11-12. Since we are in a (free public) virtual school, he can go at any pace. Basically we are homeschooling using the "canned" courses from k12.com, which are designed for average students but which are designed to be compressible, to cover material quickly for the strong student, so he can go through all the material without gaps. We have 2 Maths PhDs, but 1 income (by choice) and 3 kids (2 not yet in school), so we are very much focused on bringing up, and especially educating, our children ourselves directly, but have to minimize costs. The schools are mostly really awful here, in various ways, so we have few options. Yes, of course one has to keep going in maths. We don't have a detailed plan, just some rough ideas. We'll supplement with AoPS courses, for all topics not on the standard path to calculus, and the competition preparation courses, whenever he is ready. (The few hundred dollars per course is an okay expense. Question: are there any other "schools" of AoPS's ilk that are worth looking into?) There is a school in the state (but far away) that has more advanced, or different, courses such as multivariable calculus, differential equations, discrete mathematics, etc., and these courses are available for free (within the state), so that will cover a couple of years maybe. After that, there is the possibility of courses, or maybe reading courses, and the local university. It is a fairly average state university, but a lot of the faculty have PhDs from Ivy League/Oxbridge type places, so there is plenty that a smart schoolkid can learn from them. And we can plain old fashioned homeschool using books and our own mathematical knowledge. Maybe he could do some research. There are logistics to work out with all this. It's just a vague plan. One concern is I was wondering if taking university courses (while officially being a seconday school student) could disqualify you from competing in certain maths competitions. We are not going to worry too much about credit, as long as university entrance (and high school graduation) conditions are satisfied, and as long as universities at least somehow take into account all the "extracurricular" maths. As for maths competitions, whatever their drawbacks, I think one has to compete. They are a way to see how one measures up against others in your region or country (or the world). There's competition to get into universities and to get jobs, so competition can't be avoided. And a string of very good competition results may be regarded as more signicant than rapid progress and high marks in easy schoolwork. These competitions (or sequences of competitions) really can identify people as being not just top 1% or top 0.1%, but even top 0.01% or rarer, and that kind of identification can help. When I was a kid, I never heard of people preparing for maths competitions. I just thought they were fun, and did well. But I see these days there are competition preparation courses and math clubs/ circles. While "teaching to the test" would be a sad thing, I gather these preparation activities are just a good way to learn some mathematics that's not in school, and to interact with similar kids, so that's definitely something we'll look into. So after finishing calculus in 6th grade, our son won't be twiddling his thumbs for the following 6 years waiting for the next maths course to show up at uni. There's plenty of maths he can do in the meantime, even if it takes some scrambling and improvising. If he has to repeat some material, hopefully it's at an elite (Ivy League/Oxbridge type) university, where it's presented at a much higher level. (I was looking at the Princeton University website once where it said words to the effect, my paraphrasing, "yeah, sure, you mighta taken calculus before, but you haven't taken our calculus", and they do have a point.) We don't yet know if he'll be good enough for those places, but maybe competitions over the next few years will give us a rough idea where he stands. As to the issue of whether to start university early, here's why not for us. Our son is fairly good at all his subjects, but he is absolutely not the kind of kid (in contrast to some on this forum) who could accelerate multiple years in all subjects. (Actually he's 1 year accelerated across the board, so he could conceivably start uni at 17 instead of 18.) Instead he's specifically very good at maths and less good at the non-mathy subjects, so he'll probably continue those at the regular pace. And there's the usual considerations such as maturity, social eptness etcetera. But another consideration for maths is that when it comes to competing for entrance at an elite institution, it's very hard for a 15 or 16 year old to compete with a, say, top 0.01% 18 year old, which is what I'm guessing it takes to get into these places, though I could be totally wrong about that. By the way, does anyone know what it takes to get into maths at an elite institution? Is it based purely on merit? Or do you, as some have suggested on this forum, have to fluff your CV with extracurricular activities like volunteering at the homeless cat shelter and playing polo? One final thought. There are a lot of jobs where mathematical ability is important, but very few jobs as a research mathematician. So you have to have your child prepared for this uncertainty. I'm sure I've forgotten to say several things, but I need to sleep now. Originally Posted by 22B Question: are there any other "schools" of AoPS's ilk that are worth looking into? Nothing seriously comparable that I've managed to find, and I have been looking. Many less good things. I've been watching what's provided by the DaVinci group (organisation has been hopping around with funding, current page ) but haven't joined/used it yet. I see they have added some maths provision, OxMaths, since I last looked, but it's not suitable for your kid or mine. Originally Posted by 22B Maybe he could do some research. Ah yes, the "grow your own collaborator" plan. DS wants to prove Goldbach's conjecture; we'd far rather he proved P ne NP (and not only for financial reasons), but we'd settle for Goldbach if that's what lights his fire ;-) ;-) Originally Posted by 22B One concern is I was wondering if taking university courses (while officially being a seconday school student) could disqualify you from competing in certain maths competitions. This is a valid concern, I think, and the rules are quite likely to change given the fluidity of the current situation, so it's one to watch. For the IMO at present, Originally Posted by IMO OP Contestants must not have formally enrolled at a university or any other equivalent post-secondary institution, and they must have been born less than twenty years before the day of the second Contest Unfortunately, "formally enrolled" is not further defined, though some countries (Canada turned up on my google) elucidate this as meaning enrolled on a degree-granting programme. Originally Posted by 22B So after finishing calculus in 6th grade, our son won't be twiddling his thumbs for the following 6 years waiting for the next maths course to show up at uni. There's plenty of maths he can do in the meantime, even if it takes some scrambling and improvising. If he has to repeat some material, hopefully it's at an elite (Ivy League/Oxbridge type) university, where it's presented at a much higher level. (I was looking at the Princeton University website once where it said words to the effect, my paraphrasing, "yeah, sure, you mighta taken calculus before, but you haven't taken our calculus", and they do have a point.) They certainly do (and if you were talking about Oxbridge literally, the course assumes you've done plenty of calculus anyway, since it's on the normal school syllabus here rather than being nominally university maths). All the same, if much of his six years after Calculus BC turns out to be spent doing university-level analysis courses and research in that field, he could still easily end up more suited to teaching Princeton's intro calculus course than taking it... but here we surely come to "plans are useless, planning is vital". Originally Posted by 22B By the way, does anyone know what it takes to get into maths at an elite institution? Is it based purely on merit? Or do you, as some have suggested on this forum, have to fluff your CV with extracurricular activities like volunteering at the homeless cat shelter and playing polo? It will surely depend on which elite institution, but I can say for sure that neither Oxford nor Cambridge could care less about anything but academic merit, because they're both on record saying this clearly. I sort of doubt that someone who had IMO medals and/or papers in reputable journals to their name, and didn't have two heads, would in practice get turned down even at US elite institutions - but it would be good to hear from someone who knows. Email: my username, followed by 2, at google's mail 22B, I hope that you've seen Val's post in the other (pre-calculus textbook request) thread: secondary math and textbooks and pedagogy, oh my... This has very definitely been our experience. I'm sure that you won't overlook it, given your background and the fact that a parent is home with your son-- we certainly didn't miss it, that's for sure (we're with Connections). The pluses of such online programs: * you go through the ENTIRE textbook each year-- including those ending chapters that B&M schools usually skip * self-pacing means that you can rip through the material at whatever rate seems appropriate and the negative: * it's the SAME (watered-down) math instruction from the same awful textbooks that B&M schools use * there may be little-to-no actual instruction from a live teacher for more advanced mathematics, which is only okay for When DD was seven, I'd have predicted her to be in calculus last year, too (that would have been when she was 12). Didn't happen, and I'm glad. I do think that you're right to be considering what to do when he runs out of math... because the asynchrony is going to be a real bear... but my personal opinion (our DD has two parents with PhD's in the physical sciences, btw, and she's a rising HS senior as of the end of next week) is that primary and secondary mathematics teaching/pedagogy is weak and getting worse by the minute. I absolutely would begin making a plan to supplement with authentic materials. Depending on the type of learner he is, maybe Great Courses has something he'd like, too. We've used a few of their things, but DD's learning style isn't terribly compatible with non-live instructional methods. I also hope that your DS continues to tolerate the pacing/level of 'instruction' via K12. My DD has NOT tolerated it very well. It's been a continuous battle for over 7 years. I was angry over the gutting of geometry, and so was my DH. It ruined that course. Ruined it completely for kids with the math ability to fall in love with it. Gaaa. Oh-- and the other thing to watch for since we're using that same virtual schooling model? Make sure that he can continue to work at his own pace in secondary. That's a huge catch with Connections. They can't; they MUST work synchronously and in order once they reach secondary math. Also make sure that if you're going to venture outside that system for enrichment/alternatives, that you've satisfied the requirements for graduation and have the requisite coursework listed on a high school transcript somehow. This may mean that your DS to take "high school" geometry when he's 9-- which also means that any age-appropriate flakiness has lasting consequences. If they tell you that you can use local university credits to substitute for AP Calculus-- get it in WRITING. We've found that national is surprisingly (or not, perhaps) stubborn about "you should take OUR class... we offer Calculus/Chemistry/Econ/Psychology" Yeah, but your version is a canned JOKE... and I want my DD's first experience with this subject to be, you know... authentic. "We offer that class." Anyway. I mention all of this because it was absolutely NOT obvious to me when my DD was in primary grades just how awful the secondary math instruction has become. If K12 is anything like CA, they also won't let you do much "previewing" of course syllabi, either, nor of content. I mean, it's great to have a parent to offer direct instruction when that is a major deficit in a program (we have that problem here), but it only works when there is some real content within the course. Otherwise, you wind up shooting them in the foot because the assessments are aimed at something totally different than the level that they understand. Don't even get me started on "front end estimation" and ALL multiple choice assessments in this model. Schrödinger's cat walks into a bar. And doesn't. I don't have quite the same issue but I have looked ahead and made decisions based on the distant future. I have less of a problem because DS is not a math prodigy and is also equally strong verbally. We chose to wait to do Algebra next year (5th grade) even though DS appeared ready by every measure. This way he won't start Pre-Calcuus until 8th grade, which will leave him enough math in high school - Calculus, Differential Equations, Linear Algebra and Statistics (current offerings in our district). There is also a lot of math horizontally. DS has picked up odds and ends by reading interesting math books (not textbooks). DS has expressed some interest in business math and econometrics. My thought is to help him develop an interedisciplinary base, which is actually more beneficial in the long run. Of couse, he won't be ready for something like econometrics until he has mastered Calculus and Statistics. He will also likely do some competition math. Originally Posted by CFK After finishing the usual highschool sequence, my son took the undergraduate math courses at the university, but as a highschool student. When he matriculated and started as a freshman at his current university he started taking graduate level courses. He has never had to repeat anything. He has also self studied topics and has been allowed to skip taking the courses formally by showing mastery through discussions with the professors and, in two instances, taking the course final exams. ( didn't receive credit, just was able to waive them as prerequisites to higher level courses) What (kind of) universities were these where could take undergrad during high school and grad during undergrad? Do you think you were very lucky to have no forced repetition and to get credit for courses taken, or do you think this is to be expected? Originally Posted by ColinsMum DS9 isn't there yet, but it's foreseeable that he will be. In standard-US terms, he has most of AP Calculus and most of AP Statistics still to go. In UK terms we have a bit more flexibility, because there are more options in the final two years of school maths than any one student normally takes; if we have him do it all (and whether this is sensible is one of the questions in my mind), it'll keep him going for a few more years. one fairly typical syllabus document. * I had a quick look at that page PDF document. Obviously your son should just do the whole lot if possible. From your comments in various threads I wasn't quite sure how he's covering this material, since he's just going to his regular grade in a B&M school. How is he doing it? Yes "Mechanics" is part of Physics in the USA. Also I see the subject area called "Decision Mathematics" which looks more like Discrete Mathematics. That's an area (if interested) that he could go a lot further in without clashing too much with the university courses (since the area is somewhat neglected in many departments). The UK K-12 syllabus certainly covers more than in the USA. I assume that's due to earlier specialization, and due to not lowering the level so that more people can reach it. It's true that one can do 100% maths in a UK undergraduate degree, right? American undergraduate degrees are far too broad, meaning not enough maths gets covered. Anyway, that's an argument for covering material early, just to get to a reasonable level. Someone was questioning in another thread, why would anyone bother to get a PhD, just to end up teaching high school math. Well the answer is that you need a PhD to get a job at a university. Originally Posted by ColinsMum Thanks everyone, lots of interesting thoughts there. I agree wholeheartedly. Originally Posted by kaibab Math summer programs in the US usually cover math outside the traditional curriculum (Mathpath for middle schoolers or Awesome math, or for high school, Promys, HCSSiM, etc.). These are expensive but also many are international and provide access to higher math and math growth in the summer at least. How much do these cost for how long? What is there for elementary school kids? Originally Posted by ColinsMum Originally Posted by 22B Question: are there any other "schools" of AoPS's ilk that are worth looking into? Nothing seriously comparable that I've managed to find, and I have been looking. Many less good things. A general question to anybody: does anyone here who has actually used these course have some feedback about them? Apparently lessons are live online, but all communication is by typing into a chat box with no sound or video, not sure about pictures. How suitable is this format for elementary school kids? What level does Alcumus start at, and what is that like? Originally Posted by ColinsMum Originally Posted by 22B Maybe he could do some research. Ah yes, the "grow your own collaborator" plan. DS wants to prove Goldbach's conjecture; we'd far rather he proved P ne NP (and not only for financial reasons), but we'd settle for Goldbach if that's what lights his fire ;-) ;-) Okay "grow your own collaborator" is very funny. Actually "grow your own scribe" would better improve my output. (But we're not growing a scribe.) If it comes to financial reasons, P=NP is more lucrative. Seriously, "research" can just be a toy research project to dip one's toe in the water (depending on one's level). It's just another activity outside of regular school maths. Originally Posted by ColinsMum ..."formally enrolled"... Okay, I admit having peeked at this regulation, though there's 99.x% chance we won't need to know its exact meaning. But if you ever find out, let us know, just in case. Moderated by Link Copied to Clipboard
{"url":"https://giftedissues.davidsongifted.org/bb/ubbthreads.php/topics/158647.html","timestamp":"2024-11-03T00:59:15Z","content_type":"text/html","content_length":"94476","record_id":"<urn:uuid:6004f1ae-8b0c-4828-99df-504485f7a295>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00426.warc.gz"}
Highlights of NEC Articles 422, 430, and 440 As we continue our journey through Chapter 4 in the 2011 National Electrical Code, I will point out highlights in the material that I believe will be most relevant for inspectors. These articles cover issues related to appliances, motors and air-conditioning equipment. Since I plan to only point out what I consider to be the key issues in these articles for inspections, we will be skipping quite a bit of each article. Again, I would encourage you to get out your code book and please read the entire article to familiarize yourself with everything that is included in the code. Starting in Article 422 Appliances, let’s first look at 422.16 Flexible Cords. In dwelling construction, we have several types of appliances that are typically installed and many of these are connected to our branch circuits through the use of flexible cords and plugged into receptacles. Starting with kitchen waste disposers in 422.16(B)(1), we see that you have to use a cord that is identified as suitable for the purpose by the appliance manufacturer. Usually this is a flat appliance cord. There are further restrictions: first, it has to be a grounding type attachment plug; second, the length of this cord must be between 18 inches and 36 inches. This cord must be protected from physical damage. Physical damage is an undefined term in the NEC, but we hope that you will know it when you see it. Lastly, the receptacle that the disposal is plugged into must be accessible. Photo 1. This is a motor starter, and on the left is where the fuses are installed for the short circuit and ground-fault protection. On the right, just above the red push bar, is where the overload elements are installed. As you can see, we have two different items for the protection of motors. The next item we come to includes the dishwasher and trash compactor. Neither of these items refers to your teenage kids. We are talking about the electrical type of appliances, although at times we might like to jolt our kids. The language in 422.16(B)(2) is very similar to what we found for disposals except the permitted length of the cord, which is required to be between 3 feet and 4 feet. Wall-Mounted Ovens and Counter-Mounted Cooking Units are the next things we find in 422.16. Here we have the choice of permanently connecting these items or using a cord-and-plug connection. I will share my experience with these, having seen thousands of these appliances installed during the housing boom in Southern Nevada during the late 1990s and 2000s. Commonly, the electrician installs the branch circuit to a junction box for the oven or range. When the appliances are delivered, they are connected by personnel of the trucking company that delivers them. Because of this installation by unqualified persons, we have had many code violations in the wiring methods used for the final connections, including everything from twist-on connectors (one brand name is Wire-Nuts) for 6 AWG and 4 AWG conductors to aluminum and copper conductors connected together without the use of proper listed connectors. When we bought the house where I live now, it had a double oven and we decided to change it out for a microwave and single oven. When the original oven was removed, we found 6 AWG aluminum connected to 8 AWG copper with red wire nuts, without even the benefit of oxide inhibitor. Photo 2. This is a new A/C compressor unit, with an enlargement of the specification label, showing the electrical information as mentioned in the article. I prefer to have electricians doing electrical work, and several local electrical contractors asked me about their liability issues in these cases. All I could tell them was to make sure the general contractor allows them to do the final connection, or that they should install range receptacles so that we have a clear line of where the electrician’s work ends and the appliance’s delivery person’s work begins. Basically, just like any other receptacle circuit in the house, the electricians are not responsible for whatever is plugged in. The only issue here is that you have to make sure the appliance you are using permits a cord-and-plug connection as part of its listing. Range Hoods are covered in the next section; again, the language is similar, and in this case the length of the cord is again 18 inches to 36 inches. This is actually a real convenience since these range hoods are often changed out to combination microwave-hood units. Having a receptacle so that the customer can simply unplug the range hood and plug in the new microwave is much safer for the future homeowner. Photo 3. An example of an A/C equipment label showing the information needed for both inspection and wiring. The last thing I will cover in Article 422 is 422.18 Support of Ceiling-Suspended (Paddle) Fans. These are considered an appliance, so they are also covered in this article. Basically, the language calls for the fan unit to be independently supported or to be supported by an outlet box or system that is listed for the support of fan units. It also refers us back to 314.27(C), where we find more details on the requirements for this listed box. Let’s jump to Article 430 Motors, Motor Circuits and Controllers. We are going to take a high level fly-over of this article. I will cover the basics related to common motor installations, motor conductor sizing, short-circuit and ground-fault protection, and overload protection. First of all, please note that there is a distinct separation for motors related to short circuit, ground fault and overload protection. When we get to motors, this protection is generally provided by two separate devices. Article 430 is one of those articles that provides us with a road map to follow in Figure 430.1. One item I’ve recommended to those that I’ve taught the code to in the past is to take the time and put the page numbers next to each line of this diagram; this will help you get to the right place when in a hurry or taking a test. The first item to learn is that for normal motors we use the nameplate information of the motor only to find the horsepower rating, phase and voltage of the motor. We then take this information and go to the tables in Article 430 to find the full-load current used to calculate the wire size and the short-circuit and ground-fault protection. The only time we use the ampere rating on the motor nameplate is when we do the overload protection, and I will explain that later in this article. Photo 4. This is a example of motor winding coils and the armature. So let’s start with finding out what size conductor the code requires for a motor. Look at the nameplate to find the horsepower, phase and voltage of the motor. Immediately, proceed to either table 430.248 for single-phase motors or 430.250 for three-phase motors. Let’s choose a single-phase, 115 V, 1.5 hp motor. Scan down the left column to find your horsepower and then follow that row across to the current value that corresponds with the voltage (20 amps). Write down the ampere value from the table. This is the value we will use for both the conductor sizing and short-circuit and ground-fault sizing. To find the conductor size, we take our current value and go to Part II of Article 430, specifically 430.22 that states the conductor size will be 125% of the current value for a continuous duty motor. The duty rating of the motor will be marked on the nameplate. If your current value is 20 amps, then the conductor has to be good for 25 amps. When we go to Table 310.15(B)(16) we see that a 12 AWG copper is good for this current value in the 75 degree C column. This is the column we commonly use as outlined in 110.14(C)(1)(a)(4). Now let’s look at our short-circuit and ground-fault protection sizing. First, we have to decide if we are going to use fuses or circuit breakers for this protection. Part IV deals with this protection and we go to 430.52(C)(1), which tells us to use Table 430.52. Using the value of 20 amps for our motor, we see that for an inverse time breaker (which is the style commonly used), we can use 250 percent of our 20 amps. If we are using dual element time delay fuses, then we use 175 percent. Doing the math, we find out that we can use a 50-amp breaker or a 35-amp fuse, both of which are standard sizes according to 240.6. Now I will give you a personal opinion here: the language in 430.52(C)(1) states that the protective device shall have a rating not exceeding the calculated value, which we just used. To me, this requirement tells me that we are required to round down if the calculated value doesn’t match a size listed in 240.6. But, fear not, we have an exception which permits us to round up to the next standard size [see exception number 1 to 430.52(C)(1)]. However, from my experience, unless the application has a very heavy load at startup there is no need to go up in size. In the interest of providing better protection, I have always taken the “not exceeding” language to heart and rounded down. Photo 5. This is a close up of a single phase centrifugal switch used for startup of the motor. Have you noticed that we have a 12 AWG conductor with a short-circuit and ground-fault protective device that is either a 50-amp breaker or a 35-amp fuse? Is this really permitted? What about Article 240 where it states that the maximum protection for 12 AWG copper is 20 amps? Please review Table 240.4(G) and you will find Article 430 listed as one of the areas considered as a specific conductor application. So the answer is yes, this really is permitted and it is a perfectly good, code-compliant installation. Don’t worry if you didn’t know this before, it is one of the items that is very often not properly enforced. Our next step is to figure out how to protect the motor from overloads using one of the means in 430.32. So how do we size the motor overload device, which is separate from the other circuit protection we have already calculated? This is where we use the nameplate current rating as mentioned in 430.6(A)(2). In Part III, specifically 430.32(A)(1), we find three ranges for overload protection that depend on the rating of your motor. If it is marked with a service factor of 1.15 or a temperature rise of 40 degrees C, we can use a value of 125 percent of the nameplate current rating. Frequently you won’t find these types of motors, so for all other motors the sizing is 115 percent of the nameplate current rating. Let’s take our example motor, for which table 430.248 listed a full-load current value of 20 amps, but the nameplate is labeled at 18 amps. If it is just a standard motor, then we multiply 18 amps times 115 percent and find the overload protection shall be 20.7 amperes. This value is completely different from what we are used to seeing in breakers and fuses; however, if you look in the motor starter (which has a horsepower size rating to match your motor), you will find overload sizing broken down to a fraction of an amp. You can then match the overload device units to provide the protection calculated. Let’s step back and look at the overall picture for a moment. We are wiring a motor that has overload protection to open the motor controller if the current draw exceeds either 115 or 125 percent, depending on the motor. If our conductor feeding that motor is sized at 125 percent then do we have the conductors protected from an overload? Yes, so we have overload protection of the conductors being provided by the overload device for the motor. If there is a short circuit or ground fault, then the breaker or fuse will provide the fast reacting protection required for the high-fault current condition. This method of determining overcurrent protection appears quite different when compared to normal wiring methods, but as you can see, it works to provide the protection we need for Photo 6. Pictured here is a coil assembly for a common ceiling fan like you might have installed in your house. Next, let’s look at the other two things we do when it comes to motors: sizing a feeder for multiple motors and sizing the short-circuit, ground-fault protective device for that feeder. This is covered in 430.24, which states that you take the largest motor rating you found in Tables 430.248 or 430.250, multiply it by 125 percent, then add the sums of the rest of the motor values you found in the table. If your largest motor has a value of 100 amps, then 125 percent of 100 amps is 125 amps. Let’s say the other motors are 50 amps, 25 amps and two at 10 amps, for a total of 220 amperes. Take that value and again use Table 310.15(B)(16) to find a conductor that is sufficient for the 220 amps. Now let’s find a breaker or fuse to protect this feeder. In Part V, 430.62 we find that we take the largest individual fuse or breaker from our individual protection calculation; then from that value, we add the sum of all the other motor loads on that feeder. Again, these values are based on the tables. Using the numbers we found for the feeder and starting with fuses: 100 amps multiplied by the 175 percent gives us 175 amps; then add in the other four motors to find a value of 270 amps, so we would have to use a 250-amp fuse for the feeder. Using breakers, it would be 100 amps times 250 percent for a value of 250 amps plus the other motors, which gives us 345 amps. We would use a 300-amp breaker for the feeder. I hope this has simplified motor calculations. There is a lot of information in Article 430, but I have covered the areas that in my experience are the most commonly used and needed during the inspection process. Please take the time to review each of the sections I have mentioned above and look at some of the other language found in Article 430. So when it comes to Article 440, what do we do? In close to three decades of electrical work, I don’t think I have had to use language in Article 440 more than a handful of times. Why is that, you may ask, haven’t you ever installed an air conditioner? Yes, and I have inspected thousands, but here is the secret: this equipment has a requirement that the manufacturer does this work for us and the information is found on the nameplate per Article 440. All we look for when it comes to A/C units is two things, the first of which is Minimum Circuit Ampacity (MCA). This value gives us the minimum conductor size required for this equipment. It is based on 125 percent of the largest load plus the sum of all the other motors. Sound familiar? Second, we read the label for the Maximum Fuse or Maximum Circuit Breaker. Again, they have done the math and we just have to make sure the installed fuse or breaker does not exceed this value. Pay careful attention, since at times the nameplate may give the option to only use a breaker, only use a fuse, or allow the use of either a breaker or fuse. Make sure it is installed exactly as the label states, since if it only gives a value for fuses, then the equipment is tested and listed to be protected by fuses only, and likewise for circuit breakers. I hope this information helps when inspecting motors and A/C equipment. Make sure you review the code language along with reading this article to fill in the voids of what I haven’t specifically Randy Hunter Randy Hunter is an instructor and consultant specializing in electrical code and installations, and co-owner of Hunter Technical Services. He holds ten inspections certifications from IAEI and ICC. He has been a master electrician since 1988.
{"url":"https://iaeimagazine.org/2015/mayjune-2015/highlights-of-nec-articles-422-430-and-440/","timestamp":"2024-11-10T04:52:40Z","content_type":"text/html","content_length":"140210","record_id":"<urn:uuid:555b9d20-2097-41d4-86c5-985d17fe1f48>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00266.warc.gz"}
Python NetworkX for Graph Optimization Tutorial This NetworkX tutorial will show you how to do graph optimization in Python by solving the Chinese Postman Problem in Python. With this tutorial, you'll tackle an established problem in graph theory called the Chinese Postman Problem. There are some components of the algorithm that while conceptually simple, turn out to be computationally rigorous. However, for this tutorial, only some prior knowledge of Python is required: no rigorous math, computer science or graph theory background is needed. This tutorial will first go over the basic building blocks of graphs (nodes, edges, paths, etc) and solve the problem on a real graph (trail network of a state park) using the NetworkX library in Python. You'll focus on the core concepts and implementation. For the interested reader, further reading on the guts of the optimization are provided. You've probably heard of the Travelling Salesman Problem which amounts to finding the shortest route (say, roads) that connects a set of nodes (say, cities). Although lesser known, the Chinese Postman Problem (CPP), also referred to as the Route Inspection or Arc Routing problem, is quite similar. The objective of the CPP is to find the shortest path that covers all the links (roads) on a graph at least once. If this is possible without doubling back on the same road twice, great; That's the ideal scenario and the problem is quite simple. However, if some roads must be traversed more than once, you need some math to find the shortest route that hits every road at least once with the lowest total mileage. (The following is a personal note: cheesy, cheeky and 100% not necessary for learning graph optimization in Python) I had a real-life application for solving this problem: attaining the rank of Giantmaster Marathoner. What is a Giantmaster? A Giantmaster is one (canine or human) who has hiked every trail of Sleeping Giant State Park in Hamden CT (neighbor to my hometown of Wallingford)... in their lifetime. A Giantmaster Marathoner is one who has hiked all these trails in a single day. Thanks to the fastidious record keeping of the Sleeping Giant Park Association, the full roster of Giantmasters and their level of Giantmastering can be found here. I have to admit this motivated me quite a bit to kick-start this side-project and get out there to run the trails. While I myself achieved Giantmaster status in the winter of 2006 when I was a budding young volunteer of the Sleeping Giant Trail Crew (which I was pleased to see recorded in the SG archive), new challenges have since arisen. While the 12-month and 4-season Giantmaster categories are impressive and enticing, they'd also require more travel from my current home (DC) to my formative home (CT) than I could reasonably manage... and they're not as interesting for graph optimization, so Giantmaster Marathon it is! For another reference, the Sleeping Giant trail map is provided below: The nice thing about graphs is that the concepts and terminology are generally intuitive. Nonetheless, here's some of the basic lingo: Graphs are structures that map relations between objects. The objects are referred to as nodes and the connections between them as edges in this tutorial. Note that edges and nodes are commonly referred to by several names that generally mean exactly the same thing: node == vertex == point edge == arc == link The starting graph is undirected. That is, your edges have no orientation: they are bi-directional. For example: A<--->B == B<--->A.By contrast, the graph you might create to specify the shortest path to hike every trail could be a directed graph, where the order and direction of edges matters. For example: A--->B != B--->A. The graph is also an edge-weighted graph where the distance (in miles) between each pair of adjacent nodes represents the weight of an edge. This is handled as an edge attribute named "distance". Degree refers to the number of edges incident to (touching) a node. Nodes are referred to as odd-degree nodes when this number is odd and even-degree when even. The solution to this CPP problem will be a Eulerian tour: a graph where a cycle that passes through every edge exactly once can be made from a starting node back to itself (without backtracking). An Euler Tour is also known by several names: A matching is a subset of edges in which no node occurs more than once. A minimum weight matching finds the matching with the lowest possible summed edge weight. NetworkX is the most popular Python package for manipulating and analyzing graphs. Several packages offer the same basic level of graph manipulation, notably igraph which also has bindings for R and C++. However, I found that NetworkX had the strongest graph algorithms that I needed to solve the CPP. If you've done any sort of data analysis in Python or have the Anaconda distribution, my guess is you probably have pandas and matplotlib. However, you might not have networkx. These should be the only dependencies outside the Python Standard Library that you'll need to run through this tutorial. They are easy to install with pip: These should be all the packages you'll need for now. imageio and numpy are imported at the very end to create the GIF animation of the CPP solution. The animation is embedded within this post, so these packages are optional. import itertools import copy import networkx as nx import pandas as pd import matplotlib.pyplot as plt The edge list is a simple data structure that you'll use to create the graph. Each row represents a single edge of the graph with some edge attributes. # Grab edge list data hosted on Gist edgelist = pd.read_csv('https://gist.githubusercontent.com/brooksandrew/e570c38bcc72a8d102422f2af836513b/raw/89c76b2563dbc0e88384719a35cba0dfc04cd522/ │ │ node1 │ node2 │trail│distance│color│estimate│ │0│rs_end_north │v_rs │rs │0.30 │red │0 │ │1│v_rs │b_rs │rs │0.21 │red │0 │ │2│b_rs │g_rs │rs │0.11 │red │0 │ │3│g_rs │w_rs │rs │0.18 │red │0 │ │4│w_rs │o_rs │rs │0.21 │red │0 │ │5│o_rs │y_rs │rs │0.12 │red │0 │ │6│y_rs │rs_end_south │rs │0.39 │red │0 │ │7│rc_end_north │v_rc │rc │0.70 │red │0 │ │8│v_rc │b_rc │rc │0.04 │red │0 │ │9│b_rc │g_rc │rc │0.15 │red │0 │ Node lists are usually optional in networkx and other graph libraries when edge lists are provided because the node names are provided in the edge list's first two columns. However, in this case, there are some node attributes that we'd like to add: X, Y coordinates of the nodes (trail intersections) so that you can plot your graph with the same layout as the trail map. I spent an afternoon annotating these manually by tracing over the image with GIMP: Creating the node names also took some manual effort. Each node represents an intersection of two or more trails. Where possible, the node is named by trail1_trail2 where trail1 precedes trail2 in alphabetical order. Things got a little more difficult when the same trails intersected each other more than once. For example, the Orange and White trail. In these cases, I appended a _2 or _3 to the node name. For example, you have two distinct node names for the two distinct intersections of Orange and White: o_w and o_w_2. This took a lot of trial and error and comparing the plots generated with X,Y coordinates to the real trail map. # Grab node list data hosted on Gist nodelist = pd.read_csv('https://gist.githubusercontent.com/brooksandrew/f989e10af17fb4c85b11409fea47895b/raw/a3a8da0fa5b094f1ca9d82e1642b384889ae16e8/ │ │ id │ X │ Y │ │0│b_bv │1486│732 │ │1│b_bw │716 │1357│ │2│b_end_east │3164│1111│ │3│b_end_west │141 │1938│ │4│b_g │1725│771 │ Now you use the edge list and the node list to create a graph object in networkx. Loop through the rows of the edge list and add each edge and its corresponding attributes to graph g. # Add edges and edge attributes for i, elrow in edgelist.iterrows(): g.add_edge(elrow[0], elrow[1], attr_dict=elrow[2:].to_dict()) To illustrate what's happening here, let's print the values from the last row in the edge list that got added to graph g: # Edge list example print(elrow[0]) # node1 print(elrow[1]) # node2 print(elrow[2:].to_dict()) # edge attribute dict o_gy2 y_gy2 {'color': 'yellowgreen', 'estimate': 0, 'trail': 'gy2', 'distance': 0.12} Similarly, you loop through the rows in the node list and add these node attributes. # Add node attributes for i, nlrow in nodelist.iterrows(): g.node[nlrow['id']] = nlrow[1:].to_dict() Here's an example from the last row of the node list: id y_rt X 977 Y 1666 Name: 76, dtype: object Your graph edges are represented by a list of tuples of length 3. The first two elements are the node names linked by the edge. The third is the dictionary of edge attributes. [('rs_end_south', 'y_rs', {'color': 'red', 'distance': 0.39, 'estimate': 0, 'trail': 'rs'}), ('w_gy2', 'park_east', {'color': 'gray', 'distance': 0.12, 'estimate': 0, 'trail': 'w'}), ('w_gy2', 'g_gy2', {'color': 'yellowgreen', 'distance': 0.05, 'estimate': 0, 'trail': 'gy2'}), ('w_gy2', 'b_w', {'color': 'gray', 'distance': 0.42, 'estimate': 0, 'trail': 'w'}), ('w_gy2', 'b_gy2', {'color': 'yellowgreen', 'distance': 0.03, 'estimate': 1, 'trail': 'gy2'})] Similarly, your nodes are represented by a list of tuples of length 2. The first element is the node ID, followed by the dictionary of node attributes. [('rs_end_south', {'X': 1865, 'Y': 1598}), ('w_gy2', {'X': 2000, 'Y': 954}), ('rd_end_south_dupe', {'X': 273, 'Y': 1869}), ('w_gy1', {'X': 1184, 'Y': 1445}), ('g_rt', {'X': 908, 'Y': 1378}), ('v_rd', {'X': 258, 'Y': 1684}), ('g_rs', {'X': 1676, 'Y': 775}), ('rc_end_north', {'X': 867, 'Y': 618}), ('v_end_east', {'X': 2131, 'Y': 921}), ('rh_end_south', {'X': 721, 'Y': 1925})] Positions: First you need to manipulate the node positions from the graph into a dictionary. This will allow you to recreate the graph using the same layout as the actual trail map. Y is negated to transform the Y-axis origin from the topleft to the bottomleft. # Define node positions data structure (dict) for plotting node_positions = {node[0]: (node[1]['X'], -node[1]['Y']) for node in g.nodes(data=True)} # Preview of node_positions with a bit of hack (there is no head/slice method for dictionaries). dict(list(node_positions.items())[0:5]) {'b_rd': (268, -1744), 'g_rt': (908, -1378), 'o_gy1': (1130, -1297), 'rh_end_tt_2': (550, -1608), 'rs_end_south': (1865, -1598)} Colors: Now you manipulate the edge colors from the graph into a simple list so that you can visualize the trails by their color. # Define data structure (list) of edge colors for plotting edge_colors = [e[2]['color'] for e in g.edges(data=True)] # Preview first 10 edge_colors[0:10] ['red', 'gray', 'yellowgreen', 'gray', 'yellowgreen', 'blue', 'black', 'yellowgreen', 'gray', 'gray'] Now you can make a nice plot that lines up nicely with the Sleeping Giant trail map: plt.figure(figsize=(8, 6)) nx.draw(g, pos=node_positions, edge_color=edge_colors, node_size=10, node_color='black') plt.title('Graph Representation of Sleeping Giant Trail Map', size=15) plt.show() This graph representation obviously doesn't capture all the trails' bends and squiggles, however not to worry: these are accurately captured in the edge distance attribute which is used for computation. The visual does capture distance between nodes (trail intersections) as the crow flies, which appears to be a decent approximation. OK, so now that you've defined some terms and created the graph, how do you find the shortest path through it? Find all nodes with odd degree (very easy).(Find all trail intersections where the number of trails touching that intersection is an odd number) Add edges to the graph such that all nodes of odd degree are made even. These added edges must be duplicates from the original graph (we'll assume no bushwhacking for this problem). The set of edges added should sum to the minimum distance possible (hard...np-hard to be precise).(In simpler terms, minimize the amount of double backing on a route that hits every trail) Given a starting point, find the Eulerian tour over the augmented dataset (moderately easy).(Once we know which trails we'll be double backing on, actually calculate the route from beginning to end) While a shorter and more precise path could be generated by relaxing the assumptions below, this would add complexity beyond the scope of this tutorial which focuses on the CPP. As you can see from the trail map above, there are roads along the borders of the park that could be used to connect trails, particularly the red trails. There are also some trails (Horseshoe and unmarked blazes) which are not required per the Giantmaster log, but could be helpful to prevent lengthy double backing. The inclusion of optional trails is actually an established variant of the CPP called the Rural Postman Problem. We ignore optional trails in this tutorial and focus on required trails only. The CPP assumes that the cost of walking a trail is equivalent to its distance, regardless of which direction it is walked. However, some of these trails are rather hilly and will require more energy to walk up than down. Some metric that combines both distance and elevation change over a directed graph could be incorporated into an extension of the CPP called the Windy Postman Problem. While possible, the inclusion of parallel edges (multiple trails connecting the same two nodes) adds complexity to computation. Luckily this only occurs twice here (Blue <=> Red Diamond and Blue <=> Tower Trail). This is addressed by a bit of a hack to the edge list: duplicate nodes are included with a _dupe suffix to capture every trail while maintaining uniqueness in the edges. The CPP implementation in the postman_problems package I wrote robustly handles parallel edges in a more elegant way if you'd like to solve the CPP on your own graph with many parallel edges. This is a pretty straightforward counting computation. You see that 36 of the 76 nodes have odd degree. These are mostly the dead-end trails (degree 1) and intersections of 3 trails. There are a handful of degree 5 nodes. # Calculate list of nodes with odd degree nodes_odd_degree = [v for v, d in g.degree_iter() if d % 2 == 1] # Preview nodes_odd_degree[0:5] # Counts print('Number of nodes of odd degree: {}'.format(len(nodes_odd_degree))) print('Number of total nodes: {}'.format(len(g.nodes()))) Number of nodes of odd degree: 36 Number of total nodes: 77 This is really the meat of the problem. You'll break it down into 5 parts: You use the itertools combination function to compute all possible pairs of the odd degree nodes. Your graph is undirected, so we don't care about order: For example, (a,b) == (b,a). # Compute all pairs of odd nodes. in a list of tuples odd_node_pairs = list(itertools.combinations(nodes_odd_degree, 2)) # Preview pairs of odd degree nodes odd_node_pairs[0:10] [('rs_end_south', 'rc_end_north'), ('rs_end_south', 'v_end_east'), ('rs_end_south', 'rh_end_south'), ('rs_end_south', 'b_end_east'), ('rs_end_south', 'b_bv'), ('rs_end_south', 'rt_end_south'), ('rs_end_south', 'o_rt'), ('rs_end_south', 'y_rt'), ('rs_end_south', 'g_gy2'), ('rs_end_south', 'b_tt_3')] Let's confirm that this number of pairs is correct with a the combinatoric below. Luckily, you only have 630 pairs to worry about. Your computation time to solve this CPP example is trivial (a couple However, if you had 3,600 odd node pairs instead, you'd have ~6.5 million pairs to optimize. That's a ~10,000x increase in output given a 100x increase in input size. \begin{equation*} \#\;of\;pairs = n\;choose\;r = {n \choose r} = \frac{n!}{r!(n-r)!} = \frac{36!}{2! (36-2)!} = 630 \end{equation*} This is the first step that involves some real computation. Luckily networkx has a convenient implementation of Dijkstra's algorithm to compute the shortest path between two nodes. You apply this function to every pair (all 630) calculated above in odd_node_pairs. def get_shortest_paths_distances(graph, pairs, edge_weight_name): """Compute shortest distance between each pair of nodes in a graph. Return a dictionary keyed on node pairs (tuples).""" distances = {} for pair in pairs: distances[pair] = nx.dijkstra_path_length(graph, pair[0], pair[1], weight=edge_weight_name) return distances # Compute shortest paths. Return a dictionary with node pairs keys and a single value equal to shortest path distance. odd_node_pairs_shortest_paths = get_shortest_paths_distances(g, odd_node_pairs, 'distance') # Preview with a bit of hack (there is no head/slice method for dictionaries). dict(list(odd_node_pairs_shortest_paths.items())[0:10]) {('b_bv', 'y_gy1'): 1.22, ('b_bw', 'rc_end_south'): 1.35, ('b_end_east', 'b_bw'): 3.0400000000000005, ('b_end_east', 'rd_end_north'): 3.83, ('g_gy1', 'nature_end_west'): 0.9900000000000001, ('o_rt', 'y_gy1'): 0.53, ('rc_end_north', 'rd_end_south'): 2.21, ('rc_end_north', 'rs_end_north'): 1.79, ('rs_end_north', 'o_tt'): 2.0999999999999996, ('w_bw', 'rd_end_north'): 1.02} A complete graph is simply a graph where every node is connected to every other node by a unique edge. Here's a basic example from Wikipedia of a 7 node complete graph with 21 (7 choose 2) edges: The graph you create below has 36 nodes and 630 edges with their corresponding edge weight (distance). create_complete_graph is defined to calculate it. The flip_weights parameter is used to transform the distance to the weight attribute where smaller numbers reflect large distances and high numbers reflect short distances. This sounds a little counter intuitive, but is necessary for Step 2.4 where you calculate the minimum weight matching on the complete graph. Ideally you'd calculate the minimum weight matching directly, but NetworkX only implements a max_weight_matching function which maximizes, rather than minimizes edge weight. We hack this a bit by negating (multiplying by -1) the distance attribute to get weight. This ensures that order and scale by distance are preserved, but reversed. def create_complete_graph(pair_weights, flip_weights=True): """ Create a completely connected graph using a list of vertex pairs and the shortest path distances between them Parameters: pair_weights: list[tuple] from the output of get_shortest_paths_distances flip_weights: Boolean. Should we negate the edge attribute in pair_weights? """ g = nx.Graph() for k, v in pair_weights.items(): wt_i = - v if flip_weights else v g.add_edge(k[0], k[1], attr_dict={'distance': v, 'weight': wt_i}) return g # Generate the complete graph g_odd_complete = create_complete_graph(odd_node_pairs_shortest_paths, flip_weights=True) # Counts print('Number of nodes: {}'.format(len(g_odd_complete.nodes()))) print ('Number of edges: {}'.format(len(g_odd_complete.edges()))) For a visual prop, the fully connected graph of odd degree node pairs is plotted below. Note that you preserve the X, Y coordinates of each node, but the edges do not necessarily represent actual trails. For example, two nodes could be connected by a single edge in this graph, but the shortest path between them could be 5 hops through even degree nodes (not shown here). # Plot the complete graph of odd-degree nodes plt.figure(figsize=(8, 6)) pos_random = nx.random_layout(g_odd_complete) nx.draw_networkx_nodes(g_odd_complete, node_positions, node_size=20, node_color= "red") nx.draw_networkx_edges(g_odd_complete, node_positions, alpha=0.1) plt.axis('off') plt.title('Complete Graph of Odd-degree Nodes') plt.show() This is the most complex step in the CPP. You need to find the odd degree node pairs whose combined sum (of distance between them) is as small as possible. So for your problem, this boils down to selecting the optimal 18 edges (36 odd degree nodes / 2) from the hairball of a graph generated in 2.3. Both the implementation and intuition of this optimization are beyond the scope of this tutorial... like 800+ lines of code and a body of academic literature beyond this scope. A huge thanks to Joris van Rantwijk for writing the orginal implementation on his blog way back in 2008. I stumbled into the problem a similar way with the same intention as Joris. From Joris's 2008 Since I did not find any Perl implementations of maximum weighted matching, I lightly decided to write some code myself. It turned out that I had underestimated the problem, but by the time I realized my mistake, I was so obsessed with the problem that I refused to give up. This Maximum Weight Matching has since been folded into and maintained within the NetworkX package. Another big thanks to the 10+ contributors on GitHub who have maintained this hefty codebase. This is a hard and intensive computation. The first breakthrough in 1965 proved that the Maximum Matching problem could be solved in polynomial time. It was published by Jack Edmonds with perhaps one of the most beautiful academic paper titles ever: "Paths, trees, and flowers" [1]. A body of literature has since built upon this work, improving the optimization procedure. The code implemented in the NetworkX function max_weight_matching is based on Galil, Zvi (1986) [2] which employs an O(n3) time algorithm. # Compute min weight matching. # Note: max_weight_matching uses the 'weight' attribute by default as the attribute to maximize. odd_matching_dupes = nx.algorithms.max_weight_matching(g_odd_complete, True) print('Number of edges in matching: {}'.format(len(odd_matching_dupes))) The matching output (odd_matching_dupes) is a dictionary. Although there are 36 edges in this matching, you only want 18. Each edge-pair occurs twice (once with node 1 as the key and a second time with node 2 as the key of the dictionary). {'b_bv': 'v_bv', 'b_bw': 'rh_end_tt_1', 'b_end_east': 'g_gy2', 'b_end_west': 'rd_end_south', 'b_tt_3': 'rt_end_north', 'b_v': 'v_end_west', 'g_gy1': 'rc_end_north', 'g_gy2': 'b_end_east', 'g_w': 'w_bw', 'nature_end_west': 'o_y_tt_end_west', 'o_rt': 'o_w_1', 'o_tt': 'rh_end_tt_2', 'o_w_1': 'o_rt', 'o_y_tt_end_west': 'nature_end_west', 'rc_end_north': 'g_gy1', 'rc_end_south': 'y_gy1', 'rd_end_north': 'rh_end_north', 'rd_end_south': 'b_end_west', 'rh_end_north': 'rd_end_north', 'rh_end_south': 'y_rh', 'rh_end_tt_1': 'b_bw', 'rh_end_tt_2': 'o_tt', 'rh_end_tt_3': 'rh_end_tt_4', 'rh_end_tt_4': 'rh_end_tt_3', 'rs_end_north': 'v_end_east', 'rs_end_south': 'y_gy2', 'rt_end_north': 'b_tt_3', 'rt_end_south': 'y_rt', 'v_bv': 'b_bv', 'v_end_east': 'rs_end_north', 'v_end_west': 'b_v', 'w_bw': 'g_w', 'y_gy1': 'rc_end_south', 'y_gy2': 'rs_end_south', 'y_rh': 'rh_end_south', 'y_rt': 'rt_end_south'} You convert this dictionary to a list of tuples since you have an undirected graph and order does not matter. Removing duplicates yields the unique 18 edge-pairs that cumulatively sum to the least possible distance. # Convert matching to list of deduped tuples odd_matching = list(pd.unique([tuple(sorted([k, v])) for k, v in odd_matching_dupes.items()])) # Counts print('Number of edges in matching (deduped): {} [('rs_end_south', 'y_gy2'), ('b_end_west', 'rd_end_south'), ('b_bv', 'v_bv'), ('rh_end_tt_3', 'rh_end_tt_4'), ('b_bw', 'rh_end_tt_1'), ('o_tt', 'rh_end_tt_2'), ('g_w', 'w_bw'), ('b_end_east', 'g_gy2'), ('nature_end_west', 'o_y_tt_end_west'), ('g_gy1', 'rc_end_north'), ('o_rt', 'o_w_1'), ('rs_end_north', 'v_end_east'), ('rc_end_south', 'y_gy1'), ('rh_end_south', 'y_rh'), ('rt_end_south', 'y_rt'), ('b_tt_3', 'rt_end_north'), ('rd_end_north', 'rh_end_north'), ('b_v', 'v_end_west')] Let's visualize these pairs on the complete graph plotted earlier in step 2.3. As before, while the node positions reflect the true graph (trail map) here, the edge distances shown (blue lines) are as the crow flies. The actual shortest route from one node to another could involve multiple edges that twist and turn with considerably longer distance. plt.figure(figsize=(8, 6)) # Plot the complete graph of odd-degree nodes nx.draw(g_odd_complete, pos=node_positions, node_size=20, alpha=0.05) # Create a new graph to overlay on g_odd_complete with just the edges from the min weight matching g_odd_complete_min_edges = nx.Graph(odd_matching) nx.draw(g_odd_complete_min_edges, pos=node_positions, node_size=20, edge_color='blue', node_color='red') plt.title('Min Weight Matching on Complete Graph') plt.show() To illustrate how this fits in with the original graph, you plot the same min weight pairs (blue lines), but over the trail map (faded) instead of the complete graph. Again, note that the blue lines are the bushwhacking route (as the crow flies edges, not actual trails). You still have a little bit of work to do to find the edges that comprise the shortest route between each pair in Step 3. plt.figure(figsize=(8, 6)) # Plot the original trail map graph nx.draw(g, pos=node_positions, node_size=20, alpha=0.1, node_color='black') # Plot graph to overlay with just the edges from the min weight matching nx.draw(g_odd_complete_min_edges, pos=node_positions, node_size=20, alpha=1, node_color='red', edge_color='blue') plt.title('Min Weight Matching on Orginal Graph') plt.show() Now you augment the original graph with the edges from the matching calculated in 2.4. A simple function to do this is defined below which also notes that these new edges came from the augmented graph. You'll need to know this in 3. when you actually create the Eulerian circuit through the graph. def add_augmenting_path_to_graph(graph, min_weight_pairs): """ Add the min weight matching edges to the original graph Parameters: graph: NetworkX graph (original graph from trailmap) min_weight_pairs: list[tuples] of node pairs from min weight matching Returns: augmented NetworkX graph """ # We need to make the augmented graph a MultiGraph so we can add parallel edges graph_aug = nx.MultiGraph(graph.copy()) for pair in min_weight_pairs: graph_aug.add_edge(pair[0], pair[1], attr_dict={'distance': nx.dijkstra_path_length(graph, pair[0], pair[1]), 'trail': 'augmented'} ) return Let's confirm that your augmented graph adds the expected number (18) of edges: # Create augmented graph: add the min weight matching edges to g g_aug = add_augmenting_path_to_graph(g, odd_matching) # Counts print('Number of edges in original graph: {}'.format(len(g.edges()))) print('Number of edges in augmented graph: {}'.format(len(g_aug.edges()))) Number of edges in original graph: 123 Number of edges in augmented graph: 141 Let's also confirm that every node now has even degree: Now that you have a graph with even degree the hard optimization work is over. As Euler famously postulated in 1736 with the Seven Bridges of Königsberg problem, there exists a path which visits each edge exactly once if all nodes have even degree. Carl Hierholzer fomally proved this result later in the 1870s. There are many Eulerian circuits with the same distance that can be constructed. You can get 90% of the way there with the NetworkX eulerian_circuit function. However there are some limitations. The augmented graph could (and likely will) contain edges that didn't exist on the original graph. To get the circuit (without bushwhacking), you must break down these augmented edges into the shortest path through the edges that actually exist. eulerian_circuit only returns the order in which we hit each node. It does not return the attributes of the edges needed to complete the circuit. This is necessary because you need to keep track of which edges have been walked already when multiple edges exist between two nodes. As expected, the length of the naive Eulerian circuit is equal to the number of the edges in the augmented graph. The output is just a list of tuples which represent node pairs. Note that the first node of each pair is the same as the second node from the preceding pair. [('b_end_east', 'g_gy2'), ('g_gy2', 'b_g'), ('b_g', 'b_w'), ('b_w', 'b_gy2'), ('b_gy2', 'w_gy2'), ('w_gy2', 'b_w'), ('b_w', 'w_rs'), ('w_rs', 'g_rs'), ('g_rs', 'b_g'), ('b_g', 'b_rs')] Now let's define a function that utilizes the original graph to tell you which trails to use to get from node A to node B. Although verbose in code, this logic is actually quite simple. You simply transform the naive circuit which included edges that did not exist in the original graph to a Eulerian circuit using only edges that exist in the original graph. You loop through each edge in the naive Eulerian circuit (naive_euler_circuit). Wherever you encounter an edge that does not exist in the original graph, you replace it with the sequence of edges comprising the shortest path between its nodes using the original graph. def create_eulerian_circuit(graph_augmented, graph_original, starting_node=None): """Create the eulerian path using only edges from the original graph.""" euler_circuit = [] naive_circuit = list (nx.eulerian_circuit(graph_augmented, source=starting_node)) for edge in naive_circuit: edge_data = graph_augmented.get_edge_data(edge[0], edge[1]) if edge_data[0]['trail'] != 'augmented': # If `edge ` exists in original graph, grab the edge attributes and add to eulerian circuit. edge_att = graph_original[edge[0]][edge[1]] euler_circuit.append((edge[0], edge[1], edge_att)) else: aug_path = nx.shortest_path(graph_original, edge[0], edge[1], weight='distance') aug_path_pairs = list(zip(aug_path[:-1], aug_path[1:])) print('Filling in edges for augmented edge: {}'.format(edge)) print ('Augmenting path: {}'.format(' => '.join(aug_path))) print('Augmenting path pairs: {}\n'.format(aug_path_pairs)) # If `edge` does not exist in original graph, find the shortest path between its nodes and # add the edge attributes for each link in the shortest path. for edge_aug in aug_path_pairs: edge_aug_att = graph_original[edge_aug[0]][edge_aug[1]] euler_circuit.append((edge_aug[0], edge_aug[1], edge_aug_att)) return euler_circuit You hack limitation 3 a bit by starting the Eulerian circuit at the far east end of the park on the Blue trail (node "b_end_east"). When actually running this thing, you could simply skip the last direction which doubles back on it. Verbose print statements are added to convey what happens when you replace nonexistent edges from the augmented graph with the shortest path using edges that actually exist. # Create the Eulerian circuit euler_circuit = create_eulerian_circuit(g_aug, g, 'b_end_east') Filling in edges for augmented edge: ('b_end_east', 'g_gy2') Augmenting path: b_end_east => b_y => b_o => b_gy2 => w_gy2 => g_gy2 Augmenting path pairs: [('b_end_east', 'b_y'), ('b_y', 'b_o'), ('b_o', 'b_gy2'), ('b_gy2', 'w_gy2'), ('w_gy2', 'g_gy2')] Filling in edges for augmented edge: ('b_bw', 'rh_end_tt_1') Augmenting path: b_bw => b_tt_1 => rh_end_tt_1 Augmenting path pairs: [('b_bw', 'b_tt_1'), ('b_tt_1', 'rh_end_tt_1')] Filling in edges for augmented edge: ('b_tt_3', 'rt_end_north') Augmenting path: b_tt_3 => b_tt_2 => tt_rt => v_rt => rt_end_north Augmenting path pairs: [('b_tt_3', 'b_tt_2'), ('b_tt_2', 'tt_rt'), ('tt_rt', 'v_rt'), ('v_rt', 'rt_end_north')] Filling in edges for augmented edge: ('rc_end_north', 'g_gy1') Augmenting path: rc_end_north => v_rc => b_rc => g_rc => g_gy1 Augmenting path pairs: [('rc_end_north', 'v_rc'), ('v_rc', 'b_rc'), ('b_rc', 'g_rc'), ('g_rc', 'g_gy1')] Filling in edges for augmented edge: ('y_gy1', 'rc_end_south') Augmenting path: y_gy1 => y_rc => rc_end_south Augmenting path pairs: [('y_gy1', 'y_rc'), ('y_rc', 'rc_end_south')] Filling in edges for augmented edge: ('b_end_west', 'rd_end_south') Augmenting path: b_end_west => b_v => rd_end_south Augmenting path pairs: [('b_end_west', 'b_v'), ('b_v', 'rd_end_south')] Filling in edges for augmented edge: ('rh_end_north', 'rd_end_north') Augmenting path: rh_end_north => v_rh => v_rd => rd_end_north Augmenting path pairs: [('rh_end_north', 'v_rh'), ('v_rh', 'v_rd'), ('v_rd', 'rd_end_north')] Filling in edges for augmented edge: ('v_end_east', 'rs_end_north') Augmenting path: v_end_east => v_rs => rs_end_north Augmenting path pairs: [('v_end_east', 'v_rs'), ('v_rs', 'rs_end_north')] Filling in edges for augmented edge: ('y_gy2', 'rs_end_south') Augmenting path: y_gy2 => y_rs => rs_end_south Augmenting path pairs: [('y_gy2', 'y_rs'), ('y_rs', 'rs_end_south')] You see that the length of the Eulerian circuit is longer than the naive circuit, which makes sense. # Preview first 20 directions of CPP solution for i, edge in enumerate(euler_circuit[0:20]): print(i, edge) 0 ('b_end_east', 'b_y', {'color': 'blue', 'estimate': 0, 'trail': 'b', 'distance': 1.32}) 1 ('b_y', 'b_o', {'color': 'blue', 'estimate': 0, 'trail': 'b', 'distance': 0.08}) 2 ('b_o', 'b_gy2', {'color': 'blue', 'estimate': 1, 'trail': 'b', 'distance': 0.05}) 3 ('b_gy2', 'w_gy2', {'color': 'yellowgreen', 'estimate': 1, 'trail': 'gy2', 'distance': 0.03}) 4 ('w_gy2', 'g_gy2', {'color': 'yellowgreen', 'estimate': 0, 'trail': 'gy2', 'distance': 0.05}) 5 ('g_gy2', 'b_g', {'color': 'green', 'estimate': 0, 'trail': 'g', 'distance': 0.45}) 6 ('b_g', 'b_w', {'color': 'blue', 'estimate': 0, 'trail': 'b', 'distance': 0.16}) 7 ('b_w', 'b_gy2', {'color': 'blue', 'estimate': 0, 'trail': 'b', 'distance': 0.41}) 8 ('b_gy2', 'w_gy2', {'color': 'yellowgreen', 'estimate': 1, 'trail': 'gy2', 'distance': 0.03}) 9 ('w_gy2', 'b_w', {'color': 'gray', 'estimate': 0, 'trail': 'w', 'distance': 0.42}) 10 ('b_w', 'w_rs', {'color': 'gray', 'estimate': 1, 'trail': 'w', 'distance': 0.06}) 11 ('w_rs', 'g_rs', {'color': 'red', 'estimate': 0, 'trail': 'rs', 'distance': 0.18}) 12 ('g_rs', 'b_g', {'color': 'green', 'estimate': 1, 'trail': 'g', 'distance': 0.05}) 13 ('b_g', 'b_rs', {'color': 'blue', 'estimate': 0, 'trail': 'b', 'distance': 0.07}) 14 ('b_rs', 'g_rs', {'color': 'red', 'estimate': 0, 'trail': 'rs', 'distance': 0.11}) 15 ('g_rs', 'g_rc', {'color': 'green', 'estimate': 0, 'trail': 'g', 'distance': 0.45}) 16 ('g_rc', 'g_gy1', {'color': 'green', 'estimate': 0, 'trail': 'g', 'distance': 0.37}) 17 ('g_gy1', 'g_rt', {'color': 'green', 'estimate': 0, 'trail': 'g', 'distance': 0.26}) 18 ('g_rt', 'g_w', {'color': 'green', 'estimate': 0, 'trail': 'g', 'distance': 0.31}) 19 ('g_w', 'o_w_1', {'color': 'gray', 'estimate': 0, 'trail': 'w', 'distance': 0.18}) You can tell pretty quickly that the algorithm is not very loyal to any particular trail, jumping from one to the next pretty quickly. An extension of this approach could get fancy and build in some notion of trail loyalty into the objective function to make actually running this route more manageable. Let's peak into your solution to see how reasonable it looks.(Not important to dwell on this verbose code, just the printed output) # Computing some stats total_mileage_of_circuit = sum([edge[2]['distance'] for edge in euler_circuit]) total_mileage_on_orig_trail_map = sum(nx.get_edge_attributes(g, 'distance').values()) _vcn = pd.value_counts(pd.value_counts([(e[0]) for e in euler_circuit]), sort=False) node_visits = pd.DataFrame({'n_visits': _vcn.index, 'n_nodes': _vcn.values}) _vce = pd.value_counts(pd.value_counts ([sorted(e)[0] + sorted(e)[1] for e in nx.MultiDiGraph(euler_circuit).edges()])) edge_visits = pd.DataFrame({'n_visits': _vce.index, 'n_edges': _vce.values}) # Printing stats print('Mileage of circuit: {0:.2f}'.format(total_mileage_of_circuit)) print('Mileage on original trail map: {0:.2f}'.format(total_mileage_on_orig_trail_map)) print('Mileage retracing edges: {0:.2f}'.format (total_mileage_of_circuit-total_mileage_on_orig_trail_map)) print('Percent of mileage retraced: {0:.2f}%\n'.format((1-total_mileage_of_circuit/total_mileage_on_orig_trail_map)*-100)) print('Number of edges in circuit: {}'.format(len(euler_circuit))) print('Number of edges in original graph: {}'.format(len(g.edges()))) print('Number of nodes in original graph: {}\n'.format(len(g.nodes()))) print ('Number of edges traversed more than once: {}\n'.format(len(euler_circuit)-len(g.edges()))) print('Number of times visiting each node:') print(node_visits.to_string(index=False)) print('\nNumber of times visiting each edge:') print(edge_visits.to_string(index=False)) Mileage of circuit: 33.59 Mileage on original trail map: 25.76 Mileage retracing edges: 7.83 Percent of mileage retraced: 30.40% Number of edges in circuit: 158 Number of edges in original graph: 123 Number of nodes in original graph: 77 Number of edges traversed more than once: 35 Number of times visiting each node: n_nodes n_visits 18 1 38 2 20 3 1 4 Number of times visiting each edge: n_edges n_visits 88 1 35 2 While NetworkX also provides functionality to visualize graphs, they are notably humble in this department: NetworkX provides basic functionality for visualizing graphs, but its main goal is to enable graph analysis rather than perform graph visualization. In the future, graph visualization functionality may be removed from NetworkX or only available as an add-on package. Proper graph visualization is hard, and we highly recommend that people visualize their graphs with tools dedicated to that task. Notable examples of dedicated and fully-featured graph visualization tools are Cytoscape, Gephi, Graphviz and, for LaTeX typesetting, PGF/TikZ. NetworkX provides basic functionality for visualizing graphs, but its main goal is to enable graph analysis rather than perform graph visualization. In the future, graph visualization functionality may be removed from NetworkX or only available as an add-on package. Proper graph visualization is hard, and we highly recommend that people visualize their graphs with tools dedicated to that task. Notable examples of dedicated and fully-featured graph visualization tools are Cytoscape, Gephi, Graphviz and, for LaTeX typesetting, PGF/TikZ. That said, the built-in NetworkX drawing functionality with matplotlib is powerful enough for eyeballing and visually exploring basic graphs, so you stick with NetworkX draw for this tutorial. I used graphviz and the dot graph description language to visualize the solution in my Python package postman_problems. Although it took some legwork to convert the NetworkX graph structure to a dot graph, it does unlock enhanced quality and control over visualizations. Your first step is to convert the list of edges to walk in the Euler circuit into an edge list with plot-friendly attributes. create_cpp_edgelist Creates an edge list with some additional attributes that you'll use for plotting: def create_cpp_edgelist(euler_circuit): """ Create the edgelist without parallel edge for the visualization Combine duplicate edges and keep track of their sequence and # of walks Parameters: euler_circuit: list[tuple] from create_eulerian_circuit """ cpp_edgelist = {} for i, e in enumerate(euler_circuit): edge = frozenset([e[0], e[1]]) if edge in cpp_edgelist: cpp_edgelist[edge][2] ['sequence'] += ', ' + str(i) cpp_edgelist[edge][2]['visits'] += 1 else: cpp_edgelist[edge] = e cpp_edgelist[edge][2]['sequence'] = str(i) cpp_edgelist[edge][2]['visits'] = 1 return list As expected, your edge list has the same number of edges as the original graph. The CPP edge list looks similar to euler_circuit, just with a few additional attributes. [('rh_end_tt_4', 'nature_end_west', {'color': 'black', 'distance': 0.2, 'estimate': 0, 'sequence': '73', 'trail': 'tt', 'visits': 1}), ('rd_end_south', 'b_rd', {'color': 'red', 'distance': 0.13, 'estimate': 0, 'sequence': '95', 'trail': 'rd', 'visits': 1}), ('w_gy1', 'w_rc', {'color': 'gray', 'distance': 0.33, 'estimate': 0, 'sequence': '151', 'trail': 'w', 'visits': 1})] Here you illustrate which edges are walked once (gray) and more than once (blue). This is the "correct" version of the visualization created in 2.4 which showed the naive (as the crow flies) connections between the odd node pairs (red). That is corrected here by tracing the shortest path through edges that actually exist for each pair of odd degree nodes. If the optimization is any good, these blue lines should represent the least distance possible. Specifically, the minimum distance needed to generate a matching of the odd degree nodes. plt.figure(figsize=(14, 10)) visit_colors = {1:'lightgray', 2:'blue'} edge_colors = [visit_colors[e[2]['visits']] for e in g_cpp.edges(data=True)] node_colors = ['red' if node in nodes_odd_degree else 'lightgray' for node in g_cpp.nodes()] nx.draw_networkx(g_cpp, pos=node_positions, node_size=20, node_color=node_colors, edge_color=edge_colors, with_labels=False) plt.axis('off') plt.show() Here you plot the original graph (trail map) annotated with the sequence numbers in which we walk the trails per the CPP solution. Multiple numbers indicate trails we must double back on. You start on the blue trail in the bottom right (0th and the 157th direction). plt.figure(figsize=(14, 10)) edge_colors = [e[2]['color'] for e in g_cpp.edges(data=True)] nx.draw_networkx(g_cpp, pos=node_positions, node_size=10, node_color='black', edge_color=edge_colors, with_labels=False, alpha=0.5) bbox = {'ec':[1,1,1,0], 'fc':[1,1,1,0]} # hack to label edges over line (rather than breaking up line) edge_labels = nx.get_edge_attributes(g_cpp, 'sequence') nx.draw_networkx_edge_labels(g_cpp, pos=node_positions, edge_labels=edge_labels, bbox=bbox, font_size=6) plt.axis('off') plt.show() The movie below that traces the Euler circuit from beginning to end is embedded below. Edges are colored black the first time they are walked and red the second time. Note that this gif doesn't do give full visual justice to edges which overlap another or are too small to visualize properly. A more robust visualization library such as graphviz could address this by plotting splines instead of straight lines between nodes. The code that creates it is presented below as a reference. First a PNG image is produced for each direction (edge walked) from the CPP solution. visit_colors = {1:'black', 2:'red'} edge_cnter = {} g_i_edge_colors = [] for i, e in enumerate(euler_circuit, start=1): edge = frozenset([e[0], e[1]]) if edge in edge_cnter: edge_cnter[edge] += 1 else: edge_cnter[edge] = 1 # Full graph (faded in background) nx.draw_networkx(g_cpp, pos=node_positions, node_size=6, node_color='gray', with_labels=False, alpha=0.07) # Edges walked as of iteration i euler_circuit_i = copy.deepcopy(euler_circuit[0:i]) for i in range(len(euler_circuit_i)): edge_i = frozenset([euler_circuit_i[i][0], euler_circuit_i[i][1]]) euler_circuit_i[i][2]['visits_i'] = edge_cnter[edge_i] g_i = nx.Graph(euler_circuit_i) g_i_edge_colors = [visit_colors[e[2]['visits_i']] for e in g_i.edges(data=True)] nx.draw_networkx_nodes(g_i, pos=node_positions, node_size=6, alpha= 0.6, node_color='lightgray', with_labels=False, linewidths=0.1) nx.draw_networkx_edges(g_i, pos=node_positions, edge_color=g_i_edge_colors, alpha=0.8) plt.axis('off') plt.savefig('fig/png/img {}.png'.format(i), dpi=120, bbox_inches='tight') plt.close() Then the the PNG images are stitched together to make the nice little gif above. First the PNGs are sorted in the order from 0 to 157. Then they are stitched together using imageio at 3 frames per second to create the gif. import glob import numpy as np import imageio import os def make_circuit_video(image_path, movie_filename, fps=5): # sorting filenames in order filenames = glob.glob(image_path + 'img*.png') filenames_sort_indices = np.argsort([int(os.path.basename(filename).split('.')[0][3:]) for filename in filenames]) filenames = [filenames[i] for i in filenames_sort_indices] # make movie with imageio.get_writer(movie_filename, mode='I', fps=fps) as writer: for filename in filenames: image = imageio.imread(filename) writer.append_data(image) make_circuit_video('fig/png/', 'fig/gif/ cpp_route_animation.gif', fps=3) Congrats, you have finished this tutorial solving the Chinese Postman Problem in Python. You have covered a lot of ground in this tutorial (33.6 miles of trails to be exact). For a deeper dive into network fundamentals, you might be interested in Datacamp's Network Analysis in Python course which provides a more thorough treatment of the core concepts. Don't hesitate to check out the NetworkX documentation for more on how to create, manipulate and traverse these complex networks. The docs are comprehensive with a good number of examples and a series of tutorials. If you're interested in solving the CPP on your own graph, I've packaged the functionality within this tutorial into the postman_problems Python package on Github. You can also piece together the code blocks from this tutorial with a different edge and node list, but the postman_problems package will probably get you there more quickly and cleanly. One day I plan to implement the extensions of the CPP (Rural and Windy Postman Problem) here as well. I also have grand ambitions of writing about these extensions and experiences testing the routes out on the trails on my blog here. Another application I plan to explore and write about is incorporating lat/long coordinates to develop (or use) a mechanism to send turn-by-turn directions to my Garmin watch. And of course one last next step: getting outside and trail running the route! If you would like to learn more about Networks in Python, check out these DataCamp's courses: 1: Edmonds, Jack (1965). "Paths, trees, and flowers". Canad. J. Math. 17: 449–467.2: Galil, Z. (1986). "Efficient algorithms for finding maximum matching in graphs". ACM Computing Surveys. Vol. 18, No. 1: 23-38. Networks today are part of our everyday life. Let's learn how to visualize and understand a social network in Python using Networks. A tutorial on functions in Python that covers how to write functions, how to call them, and more! Learn how to create highly interactive and visually appealing charts with Python Plotly Express. Learn to visualize high-dimensional data in a low-dimensional space using a nonlinear dimensionality reduction technique. Learn everything about Graph Neural Networks, including what GNNs are, the different types of graph neural networks, and what they're used for. Plus, learn how to build a Graph Neural Network with In this tutorial, learn how to implement decorators in Python.
{"url":"https://www.datacamp.com/tutorial/networkx-python-graph-tutorial","timestamp":"2024-11-06T01:21:18Z","content_type":"text/html","content_length":"534678","record_id":"<urn:uuid:8af039b6-7145-49e2-aee6-92d20bdfbe31>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00470.warc.gz"}
8 km to miles Heading 1: Understanding the Basics In order to fully grasp the concept of converting kilometers to miles, it’s important to first understand the basics. Kilometers and miles are both units of length, but they are used in different countries and systems of measurement. Kilometers are widely used in most countries around the world, while miles are primarily used in the United States and a few other countries. One kilometer is equal to 0.6214 miles, but why do we need to convert between the two? The answer lies in the fact that different countries and regions have different preferences when it comes to measuring distances. For example, if you’re planning a road trip across Europe, it would be helpful to know how many miles you’ll be traveling if you’re used to thinking in kilometers. Similarly, if you’re an American traveling in a country that uses kilometers, it would be useful to be able to convert kilometers to miles to have a better understanding of the distance you’re covering. Heading 2: Kilometers and Miles: What’s the Difference? Kilometers and miles are both units of measurement used to determine distance, but they differ in terms of their value and usage. Kilometers are the more commonly used unit in many countries around the world, including most of Europe and Asia. On the other hand, miles are primarily used in countries like the United States and the United Kingdom. The main difference between kilometers and miles lies in their conversion factor. One kilometer is equivalent to 0.621 miles, which means that if you were to convert a distance from kilometers to miles, you would need to multiply the value by 0.621. This conversion factor is important to understand when working with distances in different units, as it allows for accurate calculations and comparisons. So, whether you’re traveling abroad or studying a map, knowing the difference between kilometers and miles can be useful for understanding distances and navigating different regions. Heading 3: The Conversion Formula: Making Sense of It All The conversion formula for converting kilometers to miles may seem intimidating at first, but it’s actually quite straightforward. To convert kilometers to miles, you simply need to multiply the number of kilometers by a conversion factor of 0.6214. This means that if you have 10 kilometers, you would multiply it by 0.6214 to get the equivalent in miles, which in this case would be 6.214 miles. It’s as simple as that! Understanding the conversion formula is important because it allows us to easily switch between kilometers and miles, which are the two most commonly used units of distance measurement around the world. Whether you’re traveling abroad and need to understand distances in miles, or you come across a road sign in kilometers while driving in a foreign country, knowing how to convert between the two is incredibly useful. So, next time you find yourself needing to convert kilometers to miles, just remember the conversion formula and you’ll be able to make sense of it all! Heading 4: Quick and Easy Mental Math Tricks Calculating conversions between kilometers and miles can seem complicated, but there are some quick and easy mental math tricks that can make the process much simpler. One trick is to remember that 1 kilometer is approximately equal to 0.6 miles. So, if you need to convert kilometers to miles, you can simply multiply the number of kilometers by 0.6 to get the rough estimate in miles. For example, if you have 10 kilometers, you can mentally calculate that it would be roughly 6 miles. This trick can be especially handy when you are trying to get a ballpark estimate and don’t need to be exact. Another mental math trick that can come in handy is dividing the number of miles by 0.6 to convert them into kilometers. For example, if you have 12 miles, you can mentally calculate that it would be approximately 20 kilometers. By using these mental math tricks, you can quickly and easily convert between kilometers and miles without the need for a calculator or complex formulas. These tricks are especially useful when you are on the go and need to make a quick conversion without any tools or resources available. So, next time you need to convert kilometers to miles or vice versa, give these mental math tricks a try and see how easy it can be! Heading 5: Why Do We Need to Convert Kilometers to Miles? When it comes to measuring distance, the world is divided. While many countries, including the United States, use the mile as their standard unit of measurement, others, like most European countries, use kilometers. This difference can create some confusion, especially when it comes to travel and international communication. That is why it is important to understand how to convert kilometers to miles and vice versa. One of the main reasons why we need to convert kilometers to miles is for travel purposes. If you are planning a trip to a country that measures distance in kilometers, it is crucial to be able to understand and estimate the distance in miles. This will help you better plan your itinerary, calculate travel times, and decide on transportation options. Similarly, if you are hosting someone who is used to miles, being able to convert distances to kilometers will make it easier to communicate and provide directions. Having a basic understanding of these conversions can greatly enhance your travel experiences and make navigation simpler, regardless of where you are in the world.
{"url":"https://convertertoolz.com/km-to-miles/8-km-to-miles/","timestamp":"2024-11-09T13:14:27Z","content_type":"text/html","content_length":"42147","record_id":"<urn:uuid:dffa9d9f-8beb-4883-8b00-c7a40b28c52a>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00671.warc.gz"}
British Mathematical Olympiad British Mathematical Olympiad The British Mathematical Olympiad is a national math competition held in the United Kingdom. Solvers who score over a certain threshold in the Senior Mathematical Challenge are automatically entered to the first round, but others can register for the first round. The British Mathematical Olympiad is divided into two rounds. In the first round (BMO 1), solvers have 3.5 hours to solve 6 problems. High scorers can move on into the second round (BMO 2), where solvers have 3.5 hours to solve 4 problems. For both rounds, each problem is worth 10 points. Like most Olympiads, complete solutions are required in order to get full credit. Participants who submit a solution with the highest quality in BMO 2 can earn the Christopher Bradley elegance prize. 8-th British Mathematical Olympiad 1972 Problem 5 In a right circular cone the semi-vertical angle of which is $\theta$, a cube is placed so that four of its vertices are on the base and four on the curved surface. Prove that as $\theta$ varies the maximum value of the ratio of the volume of the cube to the volume of the cone occurs when $3\sin \theta = 1.$ Let the cube side be $a,$ height of the cone be $h,$ radius of the cone be $r = h \tan \theta.$ See diagram for the description of terms used. $A$ is the vertex of the cone, $B$ is the center of the cube upper face, $C$ is the vertex of upper surface of the cube, $O$ is the center of the base of the cone, $D$ is the point where $AC$ crosses the base of the cone. $\[BO = a, BC = \frac {a}{\sqrt {2}}, AO = h, DO = r = h \tan \theta.\]$$\[\frac {AB}{AO} = \frac {BC}{OD} \implies \frac {h-a}{h} = \frac {a}{r \sqrt{2}} \implies \frac{r}{a} = \tan \theta + \frac {1}{\sqrt{2}}.\]$ The ratio of the volume of the cube to the volume of the cone is $\[\bar V = \frac {3a^3}{\pi r^2 h} = \frac {3r}{\pi h} \cdot \left( \frac {a}{r} \right)^3 = \frac {3}{\pi} \cdot \frac {\tan \theta}{(\tan \theta + \frac {1}{\sqrt{2}})^3} = \frac {3}{\pi f^3}.\]$ Here we use $\[x^3 = \tan \theta, f = \frac {x^3 + \frac {1}{\sqrt{2}} }{x} = x^2 + \frac {1}{x \sqrt {2}} = x^2 + \frac {1}{2x \sqrt {2}} + \frac {1}{2x \sqrt {2}},\]$$\[f \ge 3 \sqrt [3] {x^2 \cdot \frac {1}{2x \sqrt {2}} \cdot \frac {1}{2x \sqrt {2}}} = \frac {3}{2}\]$ if $\[x^2 = \frac {1}{2x \sqrt {2}} \implies \tan \theta = x^3 = \frac {1}{2 \sqrt {2}} \implies \sin \theta = \frac {1}{3}.\]$$\[\bar V = \frac {3}{\pi f^3} \le \frac {8}{9 \pi} = 0.283.\]$ The maximum ratio of the volume of the cube to the volume of the cone is vladimir.shelomovskii@gmail.com, vvsss 21-th Mathematical Olympiad 1985 Problem 5 A circular hoop of radius 4 cm is held fixed in a horizontal plane. A cylinder with radius 4 cm and length 6 cm rests on the hoop with its axis horizontal, and with each of its two circular ends touching the hoop at two points. The cylinder is free to move subject to the condition that each of its circular ends always touches the hoop at two points. Find, with proof, the locus of the centre of one of the cylinder’s circular ends. Let the centroid of the cylinder be the point $O.$ The side surface of the cylinder is shown by blue. Let the center of one of the circular ends be the point $A.$ This end is shown by green. Its edge is a purple circle $\omega, OA = 3.$ Let the center of the hoop $\Omega$ be $B.$ The hoop is shown by red. Let $\omega$ cross $\Omega$ at point $C.$ Therefore $\omega$ cross $\Omega$ at second point symmetrical to $C$ with respect to the plane $OAB.$$\[AC = BC = 4, BC \perp OA \implies AC = 5.\]$ Let $\ Theta$ be the sphere of radius $5$ centered at $O.$ Part of this sphere is shown in the diagram by yellow. Let the cylinder be glued to the sphere and the point $O$ be fixed. In this case $\omega$ and $\Omega$ both lie on $S,$ and point $A$ lies on the sphere centered at $O$ with radius $OA = 3.$ The claim “The cylinder is free to move subject to the condition that each of its circular ends always touches the hoop at two points” has the equivalent form “The sphere is free to move with fixed center subject to the condition that $\omega$ cross $\Omega.$” If the sphere rotates around an axis $OB,$ then point $A$ moves along circle with axis $OB.$ Let the sphere rotate around an axis perpendicular $OA$ and $OB.$ Axis view is shown on the diagram. We rotate $\Theta$ together with $\omega$ in counterclockwise direction. Point $C$ moves along $\ Omega$ till point $C'$ in the plane $OAB$ where $\omega$ touch $\Omega.$ The point $A$ moves to extreme position $A'.$ If one rotates $\Theta$ in clockwise direction, point $A$ moves to position $A''$ symmetric to $A'$ with respect $OA.$$\[OA = OB = 3, BC' = A'C'= 4,\]$$\[\angle A'C'B = 2 \arcsin \frac {3}{5} \ implies\]$$\[A'A'' = 2(\frac {96}{25} - 3) = \frac {42}{25} = 1.68.\]$ The locus of the point $A$ is the belt with a width of $1.68$ cm located on a sphere with a radius of $3$ cm symmetrically to the circumference of the great circle located parallel hoop. vladimir.shelomovskii@gmail.com, vvsss This article is a stub. Help us out by expanding it.
{"url":"https://artofproblemsolving.com/wiki/index.php/British_Mathematical_Olympiad","timestamp":"2024-11-10T08:42:57Z","content_type":"text/html","content_length":"54417","record_id":"<urn:uuid:4df6e2cf-28f4-49cb-8bde-9584b590ae92>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00615.warc.gz"}
Do Bowling Balls Float? - Cherry Picks Have you ever wondered if a bowling ball can float? It’s a fascinating question knowing that the nature of the bowling balls is solid and heavy, which will end up sinking in the water. But before we jump to that conclusion, the answer can be yes or no. This is because the buoyancy of the ball depends on some factors, such as its mass and the density of the water it occupies. As we all know, bowling balls have different weights; the weight of a bowling ball that can float in water is a lighter ball weight. If your bowling ball is above 12 pounds, it will automatically sink because its density is heavier than water. If you are curious to know more about the behavior of the bowling ball on the water, this article will shed some light on whether the ball can float. Let's dive in and understand the bowling ball buoyancy! To answer this, a bowling ball depends on its mass and density to determine whether it can float or sink. The challenging part of determining which ball can float is that they all look similar in size. Don't worry about that since we will help you crack the nutshell and know which weight will float and which one will not. Each bowling ball's weight has a different density, and the bowling ball with a closer density to water is a 12-pound bowling ball. We all know water density is 1 gram/centimeter while a 12 pounds bowling ball has a 0.99 gram/centimeter density. Density is one of the crucial factors that can either cause the ball to float or sink. Balls that weigh 12 pounds or less will float on water, but balls that weigh more than 12 pounds have heavy weight and higher density than water which can cause them to sink straight to the depths of the water. The physics involved in this buoyancy effect in bowling balls is that for the ball to float, its density should not be as high as that of water. Regarding whether a bowling ball can float on water, we have some key things you will have to consider. Let's check on the following factors that affect a bowling ball's behavior in water: First and foremost, the density of the ball plays an essential role In determining if the ball will float on water. To define density, we can refer to how much mass is packed into a given volume of an object. Most people believe if a substance looks big, it will sink, but that is not the case. For a substance to float density of the material used determines a lot. Let's dive in and see how density comes in; for instance, all bowling balls have the same size but have different densities. If the bowling ball's density is less than the liquid it's placed in, then it means the ball will float. If the ball has the same density as the liquid (water), the ball will stay in place. The ball will sink to the bottom for heavier balls with higher density Another factor to take into account is the density of the water. As we have discussed, pure water's density is 1g/cm3; when you place a bowling ball in the water, the density of the water will determine whether the ball will float or sink. Water can be categorized into two types: saltwater and freshwater. Will the density of different types of water affect the bowling ball? Well, it will affect since salty water has a density that is raised compared to fresh water, which is lesser. Another essential factor that can affect the ball to float or sink in water is the bowling ball's weight. The buoyancy of the bowling ball is determined by its weight. Heavier balls are likely to sink compared to light balls that tend to float. If you use a 12 pounds ball or lesser weight than that, you will see the ball floating; if its weight exceeds 12 pounds, it will sink. Have you ever imagined what would happen when you place your bowling ball in freshwater? We can demonstrate that using different bowling balls weights(8,10, and 13 pounds) put inside a container of full fresh water. As we all know, the density of freshwater is 1g/cm3; we can see how the first ball of 10 pounds will behave when inserted in freshwater; The ball will be suspended in its position this is because the density of the 10-pound ball will match the density of the water. If let's say, you gently place the ball in the water, it will remain in place, and if you try to push it deeper, it will still resist and stay where it was. The 8-pound bowling ball will float in the freshwater. The ball's weight is lighter and less dense than water. The ball will still float even if you push it down; it will just pop back and float. However, heavier balls, like 13-pound balls, when put into the freshwater container, the ball will sink to the bottom of the container. The 13-pound ball has a higher density than the freshwater, which is why the ball sinks. Let's say you try to place the ball gently. It will still go back to the bottom of the container. The demonstrations of different sizes of bowling balls have shown how the weight of different densities in freshwater can sink or float. Will bowling balls still behave the same way as in freshwater? Well, the answer to that is no. The floating and sinking of the ball in salt water will still depend on its density it has. Though there is some difference in density in salt water, salt tends to increase the density of any liquid when added. Therefore salt water, such as ocean water, will have a higher density than fresh water. Let's try a short experiment with different sizes of bowling balls (8,10,12). Let's see how they will behave when placed in salty or ocean water. When you place an 8-pound ball in salty water, the ball will end up floating because the ball density is lower than in salted water. Try another ball size, like a 10-pound ball. What do you notice? You will see the ball floating just like the first ball, but it won't maintain its position like what we observed in the freshwater. This observation is due to the increase of density in the salty water. Depending on the concentration of the salty water, a 12-pound ballpoint can either float or maintain its position. It will float if the water is too salty, but it will maintain its position or even sink if it's less salty. Have you ever heard of the Dead Sea? It is one of the saltiest bodies of water on earth. The sea contains an incredibly high salt concentration compared to other bodies worldwide. Its salt concentration is much higher than salt water or ocean water, and it has a density of 1.241g/cm3 What will happen if we throw different balls into the dead sea? You will be surprised that all the bowling balls of 8, 10, 12, and 13 pounds will float on the water's surface. The balls float because their densities are lower than the Dead Sea water, which has a high density due to the increased salt concentration. If you are looking for a bowling ball that will sink into the deep sea, you can try out bowling balls that weigh 14 pounds and more so that they can overcome the buoyant force of the salty water. To calculate the density of a bowling ball, you need to know its mass and volume. Here's how you can calculate it: 1. First, measure the mass of the bowling ball using a scale. The mass is typically measured in grams or kilograms. If your bowling balls weigh in the pound, here is a simple conversion you can use from pounds to grams.: 1 Pound = 453.6 grams. Here is a table that has the mass in both pounds and grams for bowling ball size weights from 8 pounds to 12 pounds: Bowling Ball (Weight in Pounds) Weight in Grams 8 Pounds 3,632 grams 9 Pounds 4,086 grams 10 Pounds 4540 grams 11 Pounds 4994 grams 12 Pounds 5448 grams 13 Pounds 5902 grams 14 Pounds 6356 grams 15 Pounds 6810 grams 16 Pounds 7264 grams 2. Measure the volume of the bowling ball. Since bowling balls have irregular shapes, measuring their volume directly can be challenging. However, you can approximate the volume by using the water displacement method. To achieve the water displacement method, you will have to fill a container with water and record the initial volume. From there, you can gently submerge the bowling ball in the water, ensuring it is fully submerged without touching the bottom or sides of the container. Measure the new volume of the water and subtract the initial volume to find the volume of the bowling ball. Alternatively, you can get the volume of your bowling as long as you have the radius of the ball. Use the following equation to calculate the volume of the bowling ball. 3. Once you have the mass and volume, you can calculate the density using the formula: Density = Mass / Volume. For example, an 8 pounds bowling ball has a volume of 5452 cm3, and its mass in grams is 3632 grams. Let's find density using the above formula: Density = 3,632 grams / 5,452 cm3 = 0.67 g/cm3 The density of an 8-pound bowling ball is less than that of water, which is 1.0 g/cm3; it means the ball is less dense than water, and hence it will float on the water's surface. Here is a summary table that shows the calculated density of each bowling ball weight starting from 8 pounds to 16 pounds: Bowling Ball (Weight in Weight in Grams Density Weight / Pounds) 5452 cm^3 8 Pounds 3,632 grams 0.67 g/cm^3 9 Pounds 4,086 grams 0.75 g/cm^3 10 Pounds 4540 grams 0.83 g/cm^3 11 Pounds 4994 grams 0.92 g/cm^3 12 Pounds 5448 grams 0.99 g/cm^3 13 Pounds 5902 grams 1.08 g/cm^3 14 Pounds 6356 grams 1.17 g/cm^3 15 Pounds 6810 grams 1.25 g/cm^3 16 Pounds 7264 grams 1.33 g/cm^3 4. Make sure to use consistent units for mass and volume in the calculation. For example, if the mass is measured in grams and the volume is in cubic centimeters, the density will be in grams per cubic centimeter (g/cm³). To make a bowling ball float, we can apply a little physics that we know and use buoyancy knowledge. What is buoyancy? To explain this term sufficiently, buoyancy is the force that acts on the object to keep it floating and depends on the difference between the density of the object and the fluid in it. Bowling ball is well made with polyurethane, which is dense but not as dense as other materials like steel or lead. So polyurethane is the perfect material used to make bowling balls as it strikes a balance between density and durability. How do we make a bowling ball float? We have to find a body of water that is deep enough that you can fully submerge the ball into the water. A hot tub or swimming pool can be the best example. We can experiment with the Bowling balls. Grab your ball and lower it into the water until it is entirely underwater; you can release your grip from there. By doing this, the buoyancy force will make the ball rise naturally to the water's surface and start to float. To sum it up, bowling balls can float or sink depending on their main factor, Density. Bowling balls can float in water because they are generally less dense than water. When an object is less dense than the liquid it's in, it can float. Bowling balls are relatively lightweight or heavyweight; when the ball is too heavy, above 12 pounds will sink in water, while balls below 12 pounds will have lower density and eventually float in water. Also, we have seen how different bowling ball behaves when placed in seawater and the dead sea, which has different densities with different behaviors on the bowling balls. Parting with an old bowling ball that has served you well for years may be a bittersweet experience. It's natural for most bowlers to become attached to ... An excellent bowling experience depends on maintaining ideal lane conditions, and applying oil to the lanes regularly is a crucial part of this maintenance... When someone begins to learn how to bowl, the attention frequently shifts to selecting the best ball and figuring out how to use it to score as many points... Have you ever wondered if bowling pins are hollow? It’s a question that might seem simple, but the answer is quite intriguing. Whether you are an avid ... Have you ever bowled a great strike only to have the ball go to waste afterward? Gutter balls can quickly reduce your score and be rather annoying. But are... Have you ever wondered why bowling has bumpers? Knowing the function and effects of bumpers can improve your bowling experience, regardless of your level o...
{"url":"https://cherrypicks.reviews/blog/do-bowling-balls-float","timestamp":"2024-11-09T17:08:16Z","content_type":"text/html","content_length":"130501","record_id":"<urn:uuid:599b1622-64e0-423a-a814-e88fbfb9802f>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00500.warc.gz"}
Jump to navigation Jump to search There are two basic ways of creating tables in Displayr. A table can be created using an Calculation or by dragging and dropping onto a Page from the Data Sets tree. Creating tables Summary tables When a Variable Set is dragged from the Data Sets tree a SUMMARY Table is created. Summary tables can contain multiple statistics, such as both percentages and counts, and that they can also be one or two-dimensional in terms of the data that they represent (this is determined by the Variable Set Structure, as described below). A crosstab is created when a second Variable Set is dragged onto a Summary Table. Tables created from Calculations Tables can also be created using R code (see Calculation). Note that this type of table is ultimately a custom Calculation and is qualitatively different to summary tables and crosstabs (e.g., you can sort this type of table by clicking on the column headings). Manipulating tables and controlling their appearance Summary tables and crosstabs The appearance of a table is governed by: • The properties of the Variable Set: • The selections in the Inputs and Properties sections of the Object Inspector on the right of the screen: R Output tables The appearance of a table created by a custom Calculation is governed by the:
{"url":"https://docs.displayr.com/wiki/Table","timestamp":"2024-11-11T14:32:49Z","content_type":"text/html","content_length":"30093","record_id":"<urn:uuid:4cf41b7f-f988-443b-934b-4edc5f6a736a>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00764.warc.gz"}
Detecting Spammers on Mechanical Turk, Part II In my previous post , I gave a brief overview of different techniques used to improve the quality of the results on Amazon Mechanical Turk. The main outcome of these techniques is a matrix that describes the error rate of each worker. For example, consider the task of categorizing webpages as porn or not. We have three target categories: • G-rated: Pages appropriate for a general audience, children included. • PG-rated: Pages with adult themes but without any explicit sexual scenes, appropriate only for children above 13 • R-rated: Pages with content appropriate for adults only. In this case, the confusion matrix of a worker, inferred using the techniques described in my previous post , would look like: $ \left( \begin{array}{ccc} Pr[G \rightarrow G] & Pr[G \rightarrow PG] & Pr[G \rightarrow R] \\ Pr[PG \rightarrow G] & Pr[PG \rightarrow PG] & Pr[PG \rightarrow R] \\ Pr[R \rightarrow G] & Pr[R \ rightarrow PG] & Pr[R \rightarrow R] \end{array} \right) $ where $Pr[X \rightarrow Y]$ is the probability that the worker will give the answer $Y$ when given a question where the correct answer is $X$. (The sum of the elements in each line should sum up to And here is the question that seems trivially easy: Given the confusion matrix, how can we detect the spammers? Computing the Error Rate The simple answer is: Just sum up the elements out of the diagonal! Since every non-diagonal element corresponds to an error, if the sum is high, the worker is a spammer. Of course, this ignores the fact that class priors will often differ. So, instead of giving equal weights to each category, we weight the errors according to the class priors (i.e., how often we expect to see each $\\ Pr[G] (Pr[G \rightarrow PG] + Pr[G \rightarrow R]) + \\ Pr[PG] (Pr[PG \rightarrow G] + Pr[PG \rightarrow R]) + \\ Pr[R] (Pr[R \rightarrow G] + Pr[R \rightarrow PG]) $ For example, if in confusion matrix is $\left( \begin{array}{ccc} 0.5 & 0.3 & 0.2 \\ 0.2 & 0.6 & 0.2 \\ 0.1 & 0.1 & 0.8 \end{array} \right) $ and the class priors are 80% $G$, 15% $PG$, and 5% $R$, then the weighted error rate is $80\% \cdot (0.3 + 0.2) + 15\% \cdot (0.2 + 0.2) + 5\% \cdot (0.1 + 0.1) = 0.47$ Notice that the error rates for the first line, which correspond to category $G$, got weighted more heavily. Unfortunately, this method does not work very well. When we started using this technique, we ended up marking legitimate workers as spammers (false positives), and classifying spammers as legitimate workers (false negatives). Needless to say, both mistakes were hurting us. Legitimate workers were complaining and (understandably) badmouthing us, and spammers kept polluting the results. Let me give some more details on how such errors appear. False Negatives: Strategic Spammers and Uneven Class Priors Spammers on Mechanical Turk are often smart lazy. They will try to submit answers that seem legitimate but without spending too much time. (Otherwise, they may as well do the work :-) In our case, we were categorizing sites as porn or not. Most of the time the sites were not porn, and only 10%-20% of the time we had sites that were falling into one of the porn categories. Some workers noticed this fact, and realized that they could keep their error rate low by simply classifying everything as not-porn. Following the standard way of computing an error rate, these workers were faring much better than legitimate workers that were misclassifying some of the not-porn sites. Here is an illustration. With three categories (G-, PG13-, and R-rated), the confusion matrix for a spammer looks like this: $\left( \begin{array}{ccc} 1.0 & 0.0 & 0.0 \\ 1.0 & 0.0 & 0.0 \\ 1.0 & 0.0 & 0.0 \end{array} \right) $ With class priors are 80% in G, 15% in PG13, and 5% in R, the weighted error rate of the spammer is: $80\% \cdot (0.0 + 0.0) + 15\% \cdot (1.0 + 0.0) + 5\% \cdot (1.0 + 0.0) = 0.2$ Compare this overall error rate with the one of a legitimate worker, with rather modest error rates: $\left( \begin{array}{ccc} 0.8 & 0.2 & 0.0 \\ 0.1 & 0.8 & 0.1 \\ 0.0 & 0.25 & 0.75 \end{array} \right) $ The error rate for this legitimate worker is: $80\% \cdot (0.2 + 0.0) + 15\% \cdot (0.1 + 0.1) + 5\% \cdot (0.25 + 0.0) = 0.2025$ Yes, the legitimate worker appears to be worse than the spammer! False Positives: Biased Workers The second type of error is when we classify honest workers as spammers. Interestingly enough, when we started evaluating workers, the top "spammers" ended up being members of the internal team. Take a look at the error rate of this worker: $\left( \begin{array}{ccc} 0.35 & 0.65 & 0.0 \\ 0.0 & 0.0 & 1.0 \\ 0.0 & 0.0 & 1.0 \end{array} \right) $ The error rate is: $80\% \cdot (0.6 + 0.0) + 15\% \cdot (0.0 + 1.0) + 5\% \cdot (0.0 + 0.0) = 0.67$ The error rate would imply that this worker is essentially random. A clear case of a worker that should be banned. After a careful inspection though, you can see that this is not the case. This is the confusion matrix of a worker that tends to be much more conservative than others and classifies 65% of the "G" pages as "PG13". Similarly, all the pages tat are in reality "PG13" are classified as "R". (This worker was a parent with young children and was much more strict on what content would pass as "G" vs In a sense, this is a a pretty careful worker! Even though this worker does mix up R and PG13 pages, there is a very clear separation between G and PG13/R pages. Still the error rate alone would put this worker very clearly in the spam category. Solution: Examine Ambiguity not Errors You will notice that one thing that separates spammers from legitimate workers is the information provided by their answers. A spammer that gives the same reply all the time does not give us any information. In contrast, when the biased worker gives the answer "PG13", we know that this corresponds to a page that in reality belongs to the "G" class. Even if the answer is wrong, we can always guess the correct answer! So, by "reversing" the errors, we can see how ambiguous are the answers of a worker, and use this information to decide whether to reject a worker or not. You can find more details about the process in our HCOMP 2010 paper " Quality Management on Amazon Mechanical Turk You can also find a demo of the algorithm at and you can plug your own data to see how it works. The code will take as input the responses of the workers, the misclassification costs, and the "gold" data points, if you have any. The demo returns the confusion matrix for each worker, and the estimated "cost" of each worker. The output is plain text and kind of ugly but you can find what you need. The code is also open source and available at Google Code If you have any questions, just drop me an email!
{"url":"https://www.behind-the-enemy-lines.com/2010/07/detecting-spammers-on-mechanical-turk.html","timestamp":"2024-11-08T20:39:54Z","content_type":"application/xhtml+xml","content_length":"84049","record_id":"<urn:uuid:1685324e-b6a6-4f1a-bf4f-3e2a38fe7134>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00618.warc.gz"}
Calum MacRury, Tomáš Masařík, Leilani Pai, Xavier Pérez-Giménez Motivated by the Beck-Fiala conjecture, we study the discrepancy problem in two related models of random hypergraphs on $n$ vertices and $m$ edges. In the first (edge-independent) model, a random hypergraph $H_1$ is constructed by fixing a parameter $p$ and allowing each of the $n$ vertices to join each of the $m$ edges independently with probability $p$. In the parameter range in which $pn \ rightarrow \infty$ and $pm \rightarrow \infty$, we show that with high probability (w.h.p.) $H_1$ has discrepancy at least $\Omega(2^{-n/m} \sqrt{pn})$ when $m = O(n)$, and at least $\Omega(\sqrt{pn \log\gamma })$ when $m \gg n$, where $\gamma = \min\{ m/n, pn\}$. In the second (edge-dependent) model, $d$ is fixed and each vertex of $H_2$ independently joins exactly $d$ edges uniformly at random. We obtain analogous results for this model by generalizing the techniques used for the edge-independent model with $p=d/m$. Namely, for $d \rightarrow \infty$ and $dn/m \rightarrow \infty$, we prove that w.h.p. $H_{2}$ has discrepancy at least $\Omega(2^{-n/m} \sqrt{dn/m})$ when $m = O(n)$, and at least $\Omega(\sqrt{(dn/m) \log\gamma})$ when $m \gg n$, where $\gamma =\min\{m/n, dn/m\} $. Furthermore, we obtain nearly matching asymptotic upper bounds on the discrepancy in both models (when $p=d/m$), in the dense regime of $m \gg n$. Specifically, we apply the partial colouring lemma of Lovett and Meka to show that w.h.p. $H_{1}$ and $H_{2}$ each have discrepancy $O( \sqrt{dn/m} \log(m/n))$, provided $d \rightarrow \infty$, $d n/m \rightarrow \infty$ and $m \gg n$. This result is algorithmic, and together with the work of Bansal and Meka characterizes how the discrepancy of each random hypergraph model transitions from $\Theta(\sqrt{d})$ to $o(\sqrt{d})$ as $m$ varies from $m=\Theta(n)$ to $m \gg n$.
{"url":"https://www.thejournal.club/c/paper/325839/","timestamp":"2024-11-09T17:15:21Z","content_type":"text/html","content_length":"33892","record_id":"<urn:uuid:a1fd338f-fcd5-4ac8-8924-63d8b5b5a54e>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00760.warc.gz"}
Discrete Mathematics Discrete Mathematics. Instructor: Prof. Ashish Choudhury, Department of Computer Science, IIIT Bangalore. Discrete mathematics is the study of mathematical structures that are discrete in the sense that they assume only distinct, separate values, rather than in a range of values. It deals with the mathematical objects that are widely used in almost all fields of computer science, such as programming languages, data structures and algorithms, cryptography, operating systems, compilers, computer networks, artificial intelligence, image processing, computer vision, natural language processing, etc. The subject enables the students to formulate problems precisely, solve the problems, apply formal proof techniques and explain their reasoning clearly. (from nptel.ac.in) Discrete Mathematics Instructor: Prof. Ashish Choudhury, Department of Computer Science, IIIT Bangalore. Discrete mathematics is the study of mathematical structures that are discrete in the sense that they assume only distinct, separate values, rather than in a range of values.
{"url":"http://www.infocobuild.com/education/audio-video-courses/computer-science/discrete-mathematics-iiit-bangalore.html","timestamp":"2024-11-06T22:07:04Z","content_type":"text/html","content_length":"18671","record_id":"<urn:uuid:743713d1-89aa-40da-bb56-f63d355bd337>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00495.warc.gz"}
There are 50 desks in a class. In each desk one girl and 2 boys are made to sit. Two out of 50 desks has only boys and 3 out of 50 desk has only 3 girls each. Find out the number of boys and girls respectively. There are 50 desks in a class. In each desk one girl and 2 boys are made to sit. Two out of 50 desks has only boys and 3 out of 50 desk has only 3 girls each. Find out the number of boys and girls
{"url":"https://clay6.com/qa/84470/there-are-50-desks-in-a-class-in-each-desk-one-girl-and-2-boys-are-made-to-","timestamp":"2024-11-12T00:58:47Z","content_type":"text/html","content_length":"17603","record_id":"<urn:uuid:7f935132-775a-4717-ba8a-37a7dbaddc15>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00186.warc.gz"}
Polar-Coded Transmission over 7.8-km Terrestrial Free-Space Optical Links Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan National Institute of Information and Communications Technology, Nukui-Kitamachi 4-2-1, Koganei, Tokyo 184-8795, Japan Department of Aeronautics and Astronautics, Tokyo Metropolitan University, 6-6 Asahigaoka, Hino, Tokyo 191-0065, Japan Graduate School of Informatics and Engineering, University of Electro-Communications, Chofugaoka 1-5-1, Chofu, Tokyo 182-8585, Japan Author to whom correspondence should be addressed. Submission received: 15 December 2022 / Revised: 28 March 2023 / Accepted: 13 April 2023 / Published: 17 April 2023 Free-space optical (FSO) communications can offer high-capacity transmission owing to the properties of the laser beams. However, performance degradation caused by atmospheric turbulence is an urgent issue. Recently, the application of polar codes, which can provide capacity-achieving error-correcting performance with low computational cost for decoding, to FSO communications has been studied. However, long-distance and real-field experiments have not been conducted in these studies. To the best of our knowledge, this study is the first to present the experimental results of polar-coded transmission over 7.8-km FSO links. Using experimental data, we investigated the performance of polar codes over atmospheric channels, including their superiority to regular low-density parity-check codes. We expect that our results will offer a path toward the application of polar codes in high-speed optical communication networks including satellites. 1. Introduction Free-space optical (FSO) communications are expected to satisfy the continually increasing demand for high-capacity wireless communication [ ] owing to features such as a wide bandwidth in an unregulated spectrum, ultra-low inter-channel interference, and power-efficient transmission. In addition, implemented in satellites, FSO communications can expand the coverage of high-speed communications to the sea and sky, where optical fiber implementation is difficult [ ]. However, a laser beam propagating in the atmosphere is influenced by atmospheric turbulence, which causes random fluctuations in the received power and thus degrades communication performance. Error-correcting codes, used in radio-frequency wireless and fiber-based optical communications, remain an effective means to address this problem even in FSO communications. The application of conventional error-correcting codes, such as the Reed–Solomon [ ], low-density parity check (LDPC) [ ], and turbo [ ] codes for FSO communications has been investigated through numerical simulation and experiments [ ]. In parallel, discussions on the standardization of error-correcting codes in satellite-to-ground FSO communications have been promoted by the Consultative Committee for Space Data Systems (CCSDS) In recent years, the application of polar codes [ ] to optical wireless communication has also been investigated. Polar codes are known as capacity-achieving codes, as with LDPC codes. Moreover, the computational costs for decoding polar codes are lower than those for LDPC codes. Recently, polar codes with successive cancellation list decoding (SCLD) [ ] concatenated by cyclic redundancy check (CRC) codes, that is, polar codes with CRC-aided SCLD (CA-SCLD), can achieve a higher error-correcting performance than LDPC codes in the short-code-length regime. Refs. [ ] experimentally revealed the superiority of polar codes over LDPC codes in terms of ultraviolet and visible light communication, respectively. The performance of polar-coded transmission over FSO communications has also been studied numerically [ ] and experimentally [ ]. In particular, Ref. [ ] performed a CA-SCLD polar-coded transmission experiment in a laboratory environment, and demonstrated that CA-SCLD polar code has an acceptable error-correcting performance, even over atmospheric channels. However, the communication path of the experiment was 7 m, and the fading was simulated using a hot-air heater. Therefore, the experiment addressed only a limited case with weaker turbulence. Because in a real-field environment, the degree of atmospheric turbulence varies over a wider range and the transmission distance can span from a hundred meters to several kilometers, a performance investigation in a real-field environment is highly desirable. In this study, we report a real-field CA-SCLD polar code transmission experiment over a 7.8-km terrestrial FSO link in urban Tokyo. To the best of our knowledge, polar code transmission experiments across such extensive atmospheric paths have not been reported yet. Based on the experimental data, we demonstrate the effectiveness of equalization and block interleaving, techniques used to mitigate atmospheric effects, over a real-field long-distance FSO link. We also compared the experimental results with computational simulations that indicated the decoding performance degradation by the real-field effect. In addition, we performed a CA-SCLD benchmark test for some types of LDPC code. These results are beneficial for designing high-performance error-correcting codes for emerging non-terrestrial networks (NTN) to aid the consideration of an increase in transmission speed using the FSO link. The results of this study partly appeared in [ ] and [ ]. In [ ], the initial experimental results of polar code transmission were reported. However, the transmission performance could not be improved because channel equalization was not used. In [ ], although the application of channel equalization improved the characteristics, the reasons that led to this improvement were not sufficiently considered. In this paper, we newly add an analysis of experimental results and a discussion to reinforce the insights deduced from the experiment. The contributions of this study are as follows: Long-distance transmission of polar and LDPC codes over a 7.8-km terrestrial FSO link, demonstrating that the characteristics, especially block error performance, of polar codes are better than those of the regular LDPC codes; Investigating factors that cause differences in the characteristics of polar and LDPC codes in FSO communications; Comparing the performance of LDPC codes used in the recent standardization of the fifth-generation mobile communications system (5G) numerically, and clarifying the effectiveness of polar code 2. CA-SCLD Polar-Coded FSO Transmission System In this section, we explain the CA-SCLD polar-coded FSO transmission system [ ], the block diagram of which is displayed in Figure 1 . The sender prepares a $K − k$ -bit message sequence, $m K − k ,$ and inputs it into the CRC encoder. The CRC encoder adds -bit CRC parity to $m K − k$ and inputs the resulting -bit sequence $m ~ K$ into a polar encoder. The polar encoder computes the location of the frozen bits (i.e., the bits with a high probability of causing an error at the receiver side) in $m ~ K$ and inserts bit “0” to the corresponding locations, which results in an -bit information sequence $u N$ . We assume that $N = 2 n$ with an integer . The polar encoder further transforms $u N$ into the polar codeword $x N$ as follows: $G N$ is a generator matrix defined as follows: $G N = R N ( F ⊗ I N / 2 ) ( I 2 ⊗ G N / 2 ) , G 1 = I 1 , F = 1 0 1 1 ,$ with a unit matrix $I N$ of order , the Kronecker product , and a permutation matrix $R N$ that reorders the vector elements into even- and odd-index parts. The sender transmits the codeword $x N$ over the atmospheric channel by modulating the laser source with an on–off keying (OOK) scheme, where the laser source turns “on” for bit $x i = 0$ and turns “off” for $x i = 1$ . We assume that the inter-symbol interference induced by the atmospheric effect is negligible as in real FSO channels because of strong beam directivity. It is also assumed that the thermal noise at the receiver’s detector follows a Gaussian distribution. Therefore, the received symbol $y i ∈ R$ output from the channel is given as follows: $y i = h i x i ⊕ 1 + n i ,$ $h i ∈ R$ is the channel coefficient and $n i ∈ R$ is the zero-mean Gaussian white noise with a variance of $σ 2$ At the receiver side, soft-decision decoding [ ] is performed. The SCLD decoder calculates the initial log-likelihood ratio (LLR) $L A W G N ( y i , h i )$ for each received symbol $y i$ as follows: $L A W G N y i , h i = h i 2 − 2 h i y i 2 σ 2 .$ From this initial LLR, the SCLD decoder iteratively calculates the LLR as follows: $L i ≜ l n P r y N , u i | u i = 0 P r y N , u i | u i = 1$ and determines the estimation $u ^ i$ of information bit $u i$ as follows: $u ^ i = 0 , L i ≥ 0 1 , L i < 0$ in ascending order of the index . If $u i$ is a frozen bit, $u ^ i = 0$ immediately. Finally, the SCLD decoder outputs a list of $L m a x$ candidates in terms of the path metrics. This list is input into the CRC decoder, and the candidate sequence passing the CRC check is finally outputted as the decoded sequence $m ^ K − k$ Before calculating $L A W G N y i , h i$, we employ a process called channel equalization. Channel gain $h i$ of the atmospheric channel varies temporally. Further, in the OOK scheme, $h i$ influences only the on-signal (i.e., $x i = 0$), as opposed to the phase modulation schemes. These effects cause asymmetry in the probability distribution of the received symbol $y i$ and the LLR distribution, resulting in degradation of the error-correcting performance. Channel equalization is used to compensate for this degradation. In channel equalization, we estimate $h i$ from temporally varying received sequences or channel state information (CSI). Figure 2 displays the probability density functions of the LLR $L A W G N ( y i , h i )$ in OOK, which are numerically simulated for the cases with equalization (green line) and without equalization (blue line). These distributions have two peaks, corresponding to $x i = 0$ (left) and 1 (right). The distribution without equalization was asymmetric, and the peak for $x i = 1$ is significantly greater than that for $x i = 0$ . This asymmetry causes a misestimation of information bit $u i$ . Conversely, the distribution with equalization is symmetric at LLR = 0, indicating that channel equalization significantly improves error-correcting performance. 3. Experimental Setup In this section, we describe the setup of the transmission experiment. 3.1. Tokyo FSO Testbed The transmission experiment was conducted over an FSO communication testbed, called the Tokyo FSO testbed. As indicated in Figure 3 , this testbed connects the transmission system at the University of Electro-Communications (UEC) and the receiving system at the National Institute of Information and Communications Technology (NICT). The link distance is approximately 7.8 km. The transmission system at the UEC site was located in an all-weather dome-shaped facility. The light source was a direct modulation laser with a central wavelength of 1550 nm and the modulation scheme was OOK with a transmission rate of 10 MHz. Here, our goal is a high-speed FSO; however, because of the limitations of the experimental system, a 10 MHz transmission rate is used as the first step. The transmission data were input to the laser source from an arbitrary waveform generator. The modulated optical signal from the source was amplified with a fiber amplifier and coupled to a fiber collimator that expanded the laser beam to a diameter of approximately 5.5 mm. The receiver system at the NICT site was located near the window of the building. The laser beam propagating over the 7.8 km FSO link and diverging to 8 m in full width at half maximum (FWHM) was collected using a Cassegrain-type telescope with a diameter of 100 mm and a focal length of 800 mm. The optical intensity was measured using a PIN photodiode detector and then recorded using an oscilloscope at a sampling rate of 50 MHz. A more detailed explanation of the Tokyo FSO testbed is available in our previous studies [ 3.2. Error-Correcting Codes In the field experiment, we compared the error-correcting performance of the CA-SCLD polar code to that of polar code with normal (i.e., without CRC code check) SCLD decoding and regular LDPC codes. Table 1 summarizes the parameters of these codes. We set the code length to 2048 and the code rate to 0.5. We set $L m a x = 32$ $I m a x = 50$ as the standard parameters with sufficient decoding performance [ In the polar codes, the Monte Carlo method for the AWGN channel is used to select the frozen bits [ ], and those frozen bit tables are assumed to be shared between the transmitter and receiver in advance. It is empirically demonstrated that frozen bit tables for AWGN channels are also effective in fading environments, including FSO channels with strong turbulence; however, the optimization of such frozen bit tables for FSO channels remains for future study. In the LDPC code, the parity check matrix was generated based on Gallager’s semi-random construction method and the sum-product decoding algorithm was used [ Table 2 , we compare the computational cost of decoding these codes in terms of theoretical value and execution time. Theoretical values are calculated based on the formulas in Table 3 Table 2 illustrates the tradeoff relation between the computational cost and error-correcting performance. Owing to the employment of CRC, CA-SCLD polar code can achieve higher error-correcting performance than normal SCLD one, whereas the latter has a lower computational cost than the former. In addition, the error-correcting performance of CA-SCLD polar code is superior to that of regular LDPC code at a short code length [ ] and the computational cost of the LDPC code is 35% greater than that of the polar code with CA-SCLD. The execution time was measured using our decoding program. Our program is written in C and runs on our Linux workstation equipped with an Intel Core i9-11900K processor clocked at 3.50 GHz. The values listed in Table 2 are the averages of 100 codeword trials with the configuration listed in Table 1 with the AWGN channel having = 2 dB. The relationship between these measured execution times approximately follows that of the theoretical values, corroborating the superiority of the CA-SCLD polar code. Here, the relatively long LDPC transmission execution time was due to the program not being particularly well-optimized. 3.3. Data Frame Format Figure 4 shows a schematic diagram of our data frame format used in the transmission experiment. We designed this format to realize: (1) CSI collection for channel equalization; (2) comparison of three error correction codes under similar conditions; and (3) timing synchronization. For the information data to be transmitted, we employed the “Lena” image with a size of 131,504 bits. For the CA-SCLD polar code, we divided this image into 132 blocks with a size of 1000 bits. The last 132-th block was padded with “0” bits so that its size became 1000 bits. After adding the 24-bit CRC code, the 1024-bit blocks were encoded into 2048-bit codewords and concatenated into a 270,336-bit sequence. Similarly, for SCLD polar code and regular LDPC code, we divided the Lena image into 129 blocks of size 1024 bits. After zero-bit padding to the last block, these 1024-bit blocks were encoded into 2048-bit codewords and concatenated into a 264,192-bit sequence. We concatenated the 264,192-bit sequences for regular LDPC code and SCLD polar code and the 270,336-bit sequence for CA-SCLD polar code. Subsequently, we divided this sequence into 1560 blocks (516 blocks for regular LDPC code and SCLD polar code, and 528 blocks for CA-SCLD polar code) with a size of 512 bits. We added a 128-bit pilot sequence, which was an iterative “01” pattern, at the end of each 512-bit sequence. CSI was estimated by averaging the received optical power for “on” symbols in the pilot. We decided on the length of pilot sequence with careful consideration: When the pilot sequence becomes longer, the precision of the CSI estimation becomes greater, whereas the transmission rate decreases. In the CCSDS standard, a series of 16 to 192 bits is typically used for synchronization [ ]. Further, the tail sequence for LDPC codes in [ ] is 128 bits. Therefore, 128 bits are used in this study. Additionally, we constructed another sequence to investigate the effects of block interleaving. In this experiment, we adopted block interleaving at a depth of ten. We added an ancillary codeword consisting of iterative “01” patterns such that the number of code words for each error-correcting code was a multiple of ten. Finally, we added a 256-bit preamble for synchronization at the beginning of the sequence. Thus, the frame length is 2,022,656 bits. 4. Experimental Results In this section, we present the results of the transmission experiments. We conducted an FSO transmission campaign from 29 January 2020, at 18:00 JST, to 31 January 2020, at 13:00 JST. We transmitted a single 2,022,656-bit frame every ten minutes. During the campaign, we adjusted the transmission power to investigate the performance as a function of signal-to-noise ratio (SNR) at the receiver side. The total number of transmitted frames differed with the SNR on the receiver side. For example, 2000, 5800, and 240 frames were transmitted at 0, 10, and 24 dB, respectively. 4.1. Lena Image Figure 5 shows a typical example of a decoded Lena image for each error-correcting code. This figure shows the characteristics of the error correction performance of these codes. As expected from the fading nature of the FSO links, burst errors occurred in these figures. However, the distribution of these errors was clearly different for these codes. The erroneous areas of polar codes seem completely random and no longer hold any information about the original image, whereas we can identify the original image from the decoded image to a certain extent in regular LDPC code. This is because of the property of the SCLD algorithm wherein bit errors are scattered over the codeword. Furthermore, errors are densely accumulated in polar codes, whereas they are sparsely distributed in regular LDPC code. This implies that residual error is more likely in regular LDPC code than in polar codes because the decoding of regular LDPC code is sensitive to imperfections in the CSI estimation. We further discuss the properties of the decoding algorithms for these codes in Section 5 4.2. Error-correcting Performance Figure 6 a,b displays the bit error rate (BER) and block error rate (BLER) of the experimental data as functions of the average SNR at the receiver. For the cases without channel equalization, we subtracted the received voltage by 1/2 of the average when “on” was transmitted to the pilot symbol to mitigate the degradation due to the asymmetry of the LLR distribution [ Figure 6 demonstrates that the curves with channel equalization (solid lines) clearly surpass those without equalization (dotted lines) for both BER and BLER. The BER gain of channel equalization was approximately 4 dB with SNR = 20 dB. In addition, the error floor in the LDPC BLER, which occurs due to residual bit error generated by the sum-product algorithm, disappeared when channel equalization was applied. These observations indicate that channel equalization significantly improves the error correction performance of the polar and LDPC codes. Next, we investigate the BER and BLER performances with channel equalization. As shown in Figure 6 a, the relationship of BER performance between the three error-correcting codes is different in the lower and higher SNR regions. In the lower SNR region, the BER performance of regular LDPC was better than that of the polar codes, and the curves of the polar codes overlapped. This behavior can be due to the error-scattering property of SCLD decoding, as discussed in Section 4.1 . As the SNR increases, the curve of the regular LDPC code becomes gentler and approaches that of the polar codes. The curves of the regular LDPC and CA-SCLD polar codes cross at an SNR of 19.5 dB. This is because of the residual error of sum-product decoding. Simultaneously, the gap between the two polar codes widens with an increase in SNR, indicating the efficacy of the CRC code. Its function, selecting the correct codeword from the list of potential candidates, effectively eliminates the residual error bits. This function does not work effectively in the lower SNR region because the CRC itself does not have an error-correcting ability. BLER performance is slightly different from that of BER, as shown in Figure 6 b. The curves of the polar codes are lower than those of the regular LDPC codes, which are irrelevant to SNR. This behavior indicates higher BLER performance for polar codes. The state-of-the-art standardization for modern communication systems (e.g., ground cellar systems, satellite communication systems, etc.) encourages the adoption of the packet transmission scheme with a sufficient SNR link-budget margin. We observed that polar codes were more beneficial to these systems than regular LDPC codes based on real-field transmission data. 4.3. Effect of Interleaving Figure 7 , we show the BLER performance of three error-correcting codes without block interleaving (solid lines) and with block interleaving (dashed lines). The result demonstrates that block interleaving improves the performance because the interleaver spreads burst errors and improves the LLR quality in the decoding process. 4.4. Comparison with Numerical Results We compared the error-correcting performance of the numerical simulation with the experimental results. In the simulation, we generated a sequence based on time-correlated gamma–gamma distribution [ ] to mimic the channel coefficient. The distribution can be parametrized using the scintillation index ( ), which is given as follows: denotes the received power intensity included in the pilot sequence. We calculate the for the experimental data using Equation (7) and use it to generate the simulated sequence. Figure 8 a,b shows the BER and BLER performances for the experimental (solid lines) and the numerical simulation (dashed lines) results, respectively. In these figures, the simulation results outperform the experimental results. The real-field data contain several effects which are not necessarily originated from the atmospheric turbulences. This results in the deviation of the LLR of the experimental data from that obtained from the simulation, and, hence, the deterioration of the BER and BLER performances. Among the three codes, the gap between the experiment and the simulation was relatively smaller for the CA-SCLD polar code compared with the others. The benefit of the CRC code is as follows: the decoding method of the other two codes depends only on the LLR. In contrast, CA-SCLD exploits the CRC code. It is independent of the LLR, and thus mitigates deterioration due to the deviation between experiment and simulation. 5. Discussion on Experimental Results 5.1. Different BER and BLER Tendency of Polar and LDPC Codes In this subsection, we discuss the difference in the error-correcting performance of the polar and LDPC codes presented in the previous section from the viewpoint of the decoding method. First, we provide a detailed explanation of the decoding method for the polar codes. As explained in Section 2 , the SCLD algorithm calculates the LLR $L i$ and estimates $u ^ i$ in ascending order of index . This algorithm is schematically represented in Figure 9 . For code length $N = 2 n$ , this graph has $n + 1$ layers of nodes indexed by $λ ∈ [ 0 , n ]$ . The adjacent layers are connected by edges specified by the permutation matrix $R N$ . The initial LLR $L i ( 0 ) = L A W G N ( y i , h i )$ is input into the corresponding -th node of the rightmost layer ( $λ = 0$ ). The LLR at each layer is then iteratively computed based on the LLRs in the right layer. Specifically, from the pair of LLRs, $L i ( λ )$ $L i + 1 ( λ )$ for the -th layer and an odd index , LLRs $L ~ i ( λ )$ $L ~ i + 1 ( λ )$ are calculated as follows: $L ~ i ( λ ) = f p o l a r L i ( λ ) , L i + 1 ( λ ) ,$ $L ~ i ( λ ) = g p o l a r L i ( λ ) , L i + 1 ( λ ) , u ^ i λ ,$ $u ^ i λ$ denotes an estimation of the information bit at the -th layer calculated from the estimations previously determined, and the functions $f p o l a r$ $g p o l a r$ are defined as follows, respectively [ $f p o l a r L α , L β ≜ 2 tanh − 1 ⁡ tanh ⁡ L α 2 tanh ⁡ L β 2 ,$ $g p o l a r L α , L β , u ^ ≜ − 1 u ^ L α + L β ,$ $L ~ i ( λ )$ $L ~ i + 1 ( λ )$ are sent to the next left layer over the edge. As indicated above, the LLR calculation in polar codes requires estimation of the information bit, which has previously been estimated. Therefore, if the estimation fails, the error will propagate over the code word. This explains why errors occur in the polar codes, as indicated in Figure 5 Second, we demonstrate that the function $g p o l a r L α , L β , u ^$ makes polar codes insensitive to imperfections in the CSI estimation, which is caused by the limited length of the pilot sequence. For example, we assume that we send an “all zero” codeword. In this $u ^ i λ$ should be “0” if the estimation succeeds and the LLR is likely positive. Hence, Equation (11) becomes $g p o l a r L α , L β , u ^ = L α + L β ,$ which is greater than that of the two inputs, $L α$ $L β$ . This value is input into the $tanh ⁡ x$ function in the functions $f p o l a r L α , L β$ of the next layer. Recalling that the increase in the $tanh ⁡ x$ function becomes slower as $| x |$ increases, the functions $f p o l a r L α , L β$ become insensitive to small changes in the initial LLR $L i ( 0 )$ . Therefore, polar codes become insensitive to LLR errors owing to imperfect CSI estimations. Conversely, in the sum-product decoding of the LDPC code, a priori LLR $u m n ( l )$ and external $v m n ( l )$ are exchanged between check node and variable node at iteration . The LLR values are then updated using the following equations: $v m n ( l ) = ∏ n ′ ∈ N m ∖ n s i g n u m n ′ l f L D P C ∑ n ′ ∈ N m ∖ n f L D P C u m n ′ ( l ) ,$ $u m n ( l ) = ∑ m ′ ∈ M n ∖ m v m ′ n l ,$ $M n$ $N m$ denote the sets of check and variable nodes connected to variable nodes and check node , respectively, and the function $f L D P C ( · )$ is given as follows: $f L D P C x = l n e x + 1 e x − 1 = − l n tanh ⁡ x 2 .$ As indicated, in LDPC codes, the LLR value is directly input to the $tanh ⁡ x$ function [ ], and there is no mechanism to amplify the LLR value, such as the function $g p o l a r L α , L β , u ^$ in polar codes. This is why residual errors occur, as indicated in Figure 5 . Hence, the BLER performance of the LDPC function is reduced by the residual errors. 5.2. Comparison with 5G LDPC Codes We have only devoted our attention to regular LDPC code in the results presented above. However, irregular LDPC codes, which have a better performance, are the standard use in 5G new radio (NR) [ ] and CCSDS [ ]. In this subsection, we compare the characteristics of polar and irregular LDPC codes based on the numerical simulation. We evaluated the BER characteristics of the AWGN channel versus as the characteristics of channel codes are generally evaluated in a Gaussian noise channel and its relative characteristics are preserved in a fading environment. Table 4 shows the simulation parameters of the compared codes, where the basic parameters match those used for the experiments in this study (shown in Table 1 ). The decoding method for 5G NR LDPC was the offset min-sum algorithm with a maximum iteration number of 20. This decoding method and the iteration number are commonly used in 5G systems and it also considers the computational complexity and delay time of decoding [ ]. In addition, we investigated the performance of the 5G NR LDPC code based on sum-product decoding with 50 iterations, which was the best decoding method. Figure 10 shows the BER performance of the CA-SCLD code and LDPC codes as a function of . The 5G NR LDPC code based on the sum-product algorithm exhibited the best performance among these codes. The relationship between CA-SCLD polar and regular LDPC codes is approximately the same in Figure 6 a and Figure 8 a. The former is better than the latter in the higher SNR region, and these curves cross a certain threshold. However, the threshold SNR was much lower than that shown in previous figures. This can be attributed to differences in the channel model. The AWGN channel is shown in Figure 10 , whereas the gamma–gamma fading channel is shown in Figure 6 Figure 8 . Furthermore, the gamma–gamma distributions in Figure 8 are far from ideal because they were obtained through a real-field experiment. These factors deteriorate the performance of the CA-SCLD polar code more than that of the regular LDPC code; hence, the crossover point moves toward the higher-SNR region. 5G NR LDPC code based on offset min-sum decoding provides an approximate 0.2 dB improvement over the (3,6) regular LDPC code at an error rate of 10 . However, it was found that the polar code has better characteristics, which is consistent with the results of a previous study [ ]. In addition, [ ] showed that implementation of the 5G NR LDPC code is much more complex that of CA-SCLD polar code. The LDPC code used in CCSDS [ ] utilizes an irregular check matrix generated using a protograph. The structure of this check matrix is similar to that of the 5G NR LDPC; therefore, the BER characteristics were considered comparable. The application of nonbinary LDPC has been discussed; however, it is not currently implemented due to the increased decoding complexity compared to the gain obtained [ ]. It can be concluded that the application of polar code is effective compared to recent practical LDPC codes. 6. Conclusions In this study, we evaluated the performance of error-correcting codes with channel estimation and equalization of polar-coded terrestrial FSO transmission over a distance of 7.8 km. The results show that the decoding performance of the codes can be improved through channel equalization. Moreover, even with channel equalization, the BLER of polar codes was superior to that of the regular LDPC codes. In addition, we compared the computational complexity of polar codes with LDPC codes and demonstrated that polar codes have a lower computational complexity than LDPC codes. Based on the results of this study, we believe that the characteristics of polar codes may be further improved by the construction of a frozen bit table for FSO communications, implementing an adaptive modification of coding rate, or the coupling of block error correcting codes, such as Reed-Solomon codes, via interleave, all of which will be considered in future studies. Author Contributions Conceptualization, S.F., E.O., H.T. and H.E.; methodology, S.F., H.T. and H.E.; software, S.F.; validation, E.O. and H.E.; formal analysis, S.F., E.O. and H.E.; investigation, S.F., E.O. and H.E.; resources, H.T., M.F., M.K. and R.S.; data curation, S.F. and E.O.; writing—original draft preparation, S.F.; writing—review and editing, E.O. and H.E.; visualization, S.F. and H.E.; supervision, E.O. and H.E.; project administration, E.O., M.S. and M.T. All authors have read and agreed to the published version of the manuscript. This paper is supported by the Japan Society for the Promotion of Science (KAKENHI 17H01281), Research and Development of the Quantum Cryptography Technology for Satellite Communications (JPJ007462) in Research and Development of Information and Communications Technology (JPMI00316) of the Ministry of Internal Affairs and Communications (MIC), Japan. Conflicts of Interest The authors declare no conflict of interest. 1. Khalighi, M.A.; Uysal, M. Survey on Free Space Optical Communication: A Communication Theory Perspective. IEEE Commun. Surv. Tutor. 2014, 16, 2231–2258. [Google Scholar] [CrossRef] 2. Toyoshima, M. Recent trends in space laser communications for small satellites and constellations. J. Light. Technol. 2021, 39, 693–699. [Google Scholar] [CrossRef] 3. Reed, I.S.; Solomon, G. Polynomial codes over certain finite fields. J. Soc. Ind. Appl. Math. 1960, 8, 300–304. [Google Scholar] [CrossRef] 4. Gallager, R.G. Low-density parity-check codes. IEEE Trans. Inf. Theory 1962, 8, 21–28. [Google Scholar] [CrossRef] 5. Berrou, C.; Glavieux, A.; Thitimajshima, P. Near Shannon limit error-correcting coding and decoding: Turbo-codes. In Proceedings of the ICC’93-IEEE International Conference on Communication, Geneva, Switzerland, 23–26 May 1993; Volume 2, pp. 1064–1070. [Google Scholar] 6. Lee, H. A high-speed low-complexity Reed-Solomon decoder for optical communications. IEEE Trans. Circuits Syst. II Express Briefs 2005, 52, 461–465. [Google Scholar] 7. Djordjevic, I.B. Adaptive Modulation and Coding for Free-Space Optical Channels. J. Opt. Commun. Netw. 2010, 2, 221–229. [Google Scholar] [CrossRef] 8. Calzolari, G.P.; Chiaraluce, F.; Garello, R.; Vassallo, E. Turbo code applications on telemetry and deep space communications. In Turbo Code Applications: A Journey from a Paper to Realization; Sripimanwat, K., Ed.; Springer: Dordrecht, The Netherlands, 2005; pp. 321–344. [Google Scholar] 9. CCSDS. TM Synchronization and Channel Coding-Summary of Concept and Rationale; Consultative Committee for Space Data Sys-tems (CCSDS), Informational Report, 130.1-G-3; CCSDS: Washington, DC, USA, 2020. [Google Scholar] 10. Arikan, E. Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073. [Google Scholar] [ 11. Niu, K.; Chen, K. CRC-Aided Decoding of Polar Codes. IEEE Commun. Lett. 2012, 16, 1668–1671. [Google Scholar] [CrossRef] 12. Hu, W.; Luo, Z.; Han, D.; Chen, Q.; Ai, L.; Li, Q.; Zhang, M. A scheme of ultraviolet communication system with polar channel coding. In Proceedings of the 2017 16th International Conference on Optical Communications and Networks (ICOCN), Wuzhen, China, 7–10 August 2017. [Google Scholar] 13. Zhang, J.; Hu, W.; Li, X.; Zhang, M.; Han, D.; Ghassemlooy, Z. Polar coding performance for indoor LOS VLC system. In Proceedings of the 2017 IEEE/CIC International Conference on Communications in China (ICCC Workshops), Qingdao, China, 22–24 October 2017. [Google Scholar] 14. Ito, K.; Okamoto, E.; Takenaka, H.; Kunimori, H.; Toyoshima, M. An adaptive coded transmission scheme utilizing frozen bits of polar code in satellite laser communications. In Proceedings of the International Conference on Space Optics—ICSO 2018, Chania, Greece, 9–12 October 2018; pp. 1–7. [Google Scholar] 15. Fang, J.; Bi, M.; Xiao, S.; Yang, G.; Li, C.; Liu, L.; Zhang, Y.; Huang, T.; Hu, W. Performance investigation of the polar coded FSO communication system over turbulence channel. Appl. Opt. 2018, 57, 7378–7384. [Google Scholar] [CrossRef] 16. Fujita, S.; Toyoshima, M.; Shimizu, R.; Ito, K.; Okamoto, E.; Takenaka, H.; Kunimori, H.; Endo, H.; Fujiwara, M.; Kitamura, M.; et al. Experimental evaluation of polar code transmission in terrestrial free space optics. In Proceedings of the 2019 IEEE International Conference on Space Optical Systems and Applications (ICSOS), Portland, OR, USA, 14–16 October 2019. [Google Scholar] 17. Fujia, S.; Okamoto, E.; Takenaka, H.; Kunimori, H.; Endo, H.; Fujiwara, M.; Shimizu, R.; Sasaki, M.; Toyoshima, M. Performance analysis of polar-code transmission experiments over 7.8-km terrestrial free-space optical link using channel equalization. In Proceedings of the International Conference on Space Optics (ICSO2020), Virtual Conference, 30 March–2 April 2021; pp. 1–9. [ Google Scholar] [CrossRef] 18. Balatsoukas-Stimming, A.; Parizi, M.B.; Burg, A. LLR-based successive cancellation list decoding of polar codes. IEEE Trans. Signal Process. 2015, 63, 5165–5179. [Google Scholar] [CrossRef] 19. Endo, H.; Fujiwara, M.; Kitamura, M.; Ito, T.; Toyoshima, M.; Takayama, Y.; Takenaka, H.; Shimizu, R.; Laurenti, N.; Vallone, G.; et al. Free-space optical channel estimation for physical layer security. Opt. Express 2016, 24, 8940–8955. [Google Scholar] [CrossRef] [PubMed] 20. Fujiwara, M.; Ito, T.; Kitamura, M.; Endo, H.; Tsuzuki, O.; Toyoshima, M.; Takenaka, H.; Takayama, Y.; Shimizu, R.; Takeoka, M.; et al. Free-space optical wiretap channel and experimental secret key agreement in 78 km terrestrial link. Opt. Express 2018, 26, 19513–19523. [Google Scholar] [CrossRef] [PubMed] 21. Endo, H.; Fujiwara, M.; Kitamura, M.; Tsuzuki, O.; Ito, T.; Shimizu, R.; Takeoka, M.; Sasaki, M. Free space optical secret key agreement. Opt. Express 2018, 26, 23305–23332. [Google Scholar] [ CrossRef] [PubMed] 22. Endo, H.; Fujiwara, M.; Kitamura, M.; Tsuzuki, O.; Shimizu, R.; Takeoka, M.; Sasaki, M. Group key agreement over free-space optical links. OSA Contin. 2020, 3, 2525–2543. [Google Scholar] [ 23. Geiselhart, M.; Elkelesh, A.; Ebada, M.; Cammerer, S.; Brink, S.T. CRC-Aided Belief Propagation List Decoding of Polar Codes. In Proceedings of the 2020 IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 395–400. [Google Scholar] [CrossRef] 24. Zhang, Z.; Dolecek, L.; Nikolic, B.; Anantharam, V.; Wainwright, M.J. Design of LDPC decoders for improved low error rate performance: Quantization and algorithm choices. IEEE Trans. Commun. 2009 , 57, 3258–3268. [Google Scholar] [CrossRef] 25. Vangala, H.; Viterbo, E.; Hong, Y. A Comparative Study of Polar Code Constructions for the AWGN Channel. arXiv 2015, arXiv:1501.02473. [Google Scholar] 26. Sybis, M.; Wesolowski, K.; Jayasinghe, K.; Venkatasubramanian, V.; Vukadinovic, V. Channel Coding for Ultra-Reliable Low-Latency Communication in 5G Systems. In Proceedings of the 2016 IEEE 84th Vehicular Technology Conference (VTC-Fall), Montreal, QC, Canada, 18–21 September 2016; pp. 1–5. [Google Scholar] [CrossRef] 27. Song, H.; Fu, J.-C.; Zeng, S.-J.; Sha, J.; Zhang, Z.; You, X.; Zhang, C. Polar-coded forward error correction for MLC NAND flash memory. Sci. China Inf. Sci. 2018, 61, 102307. [Google Scholar] [ 28. IEEE 802.22-07/0313r0; LDPC Decoding for 802.22 Standard. IEEE: New York, NY, USA, 15–20 July 2007. 29. Robertson, P.; Villebrun, E.; Hoeher, P. A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain. In Proceedings of the IEEE International Conference on Communications ICC ‘95, Seattle, WA, USA, 18–22 June 1995; pp. 1009–1013. [Google Scholar] 30. CCSDS 231.0-B-4; TC Synchronization and Channel Coding. CCSDS Secretariat National Aeronautics and Space Administration: Washington, DC, USA, 2021. 31. CCSDS 131.0-B-4; TM Synchronization and Channel Coding. CCSDS Secretariat National Aeronautics and Space Administration: Washington, DC, USA, 2022. 32. Bykhovsky, D. Simple Generation of Gamma, Gamma-Gamma and K Distributions with Exponential Autocorrelation Function. IEEE J. Light. Technol. 2016, 34, 2106–2110. [Google Scholar] [CrossRef] 33. Tajima, S.; Takahashi, T.; Ibi, S.; Sampei, S. Iterative Decoding Based on Concatenated Belief Propagation for CRC-Aided Polar Codes. In Proceedings of the 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Honolulu, HI, USA, 12–15 November 2018; pp. 1411–1415. [Google Scholar] 34. Hagenauer, J.; Offer, E.; Papke, L. Iterative decoding of binary block and convolutional codes. IEEE Trans. Inf. Theory 1996, 42, 429–445. [Google Scholar] [CrossRef] 35. GPP TS 38.212; NR; Multiplexing and Channel Coding. V15.13.0. ETSI: Sophia Antipolis, France, 2021. 36. Hui, D.; Sandberg, S.; Blankenship, Y.; Andersson, M.; Grosjean, L. Channel Coding in 5G New Radio: A Tutorial Overview and Performance Comparison with 4G LTE. IEEE Veh. Technol. Mag. 2018, 13, 60–69. [Google Scholar] [CrossRef] 37. Sharma, A.; Salim, M. Polar Code Appropriateness for Ultra-Reliable and Low-Latency Use Cases of 5G Systems. Int. J. Netw. Distrib. Comput. 2019, 7, 93–99. [Google Scholar] [CrossRef] 38. Nguyen, T.T.B.; Tan, T.N.; Lee, H. Low-Complexity High-Throughput QC-LDPC Decoder for 5G New Radio Wireless Communication. Electronics 2021, 10, 516. [Google Scholar] [CrossRef] 39. Sahin, O. A Study on Comparison of Polar and LDPC Codes above 100Gb/s Throughput Regime; IEEE 802.15 Standing Committee Terahertz; July 2019; InterDigital Europe: London, UK, 2019. [Google 40. Álvarez, Á.; Fernández, V.; Matuz, B. An Efficient NB-LDPC Decoder Architecture for Space Telecommand Links. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 1213–1217. [Google Scholar] [ Figure 2. Distribution of LLR $L A W G N ( y i , h i )$ for OOK scheme with and without channel equalizer. In the simulation, the signal-to-noise ratio (SNR) = 10.0 dB and the intensity variation follows a gamma–gamma distribution with scintillation index of 0.2. For the distribution without equalization, we set $h i = 1$ for all bits. Figure 5. Decoding result of Lena image at SNR = 6.0 dB. Note that each pixel of the Lenna image has three values for red, green, and blue (RGB). Figure 6. (a) BER and (b) BLER of experimental data. The solid and dotted lines are for the cases with and without channel equalization, respectively. Figure 7. BLER performance with block interleaving (dashed lines) and without block interleaving (solid lines). The solid lines are the same ones shown in Figure 6 Figure 8. Performance comparison of experiment (solid lines) and simulation results (dashed lines): ( ) BER and ( ) BLER. BLER performance for experimental result is the same as shown in Figure 7 CA-SCLD Polar SCLD Polar Regular LDPC Code length $N$ 2048 Code rate $R$ 0.5 CRC length $k$ 24 - - List size $L m a x$ 32 - Column and row weights $( d v , d c )$ - - (6,3) Decoding iterations $I m a x$ - - 50 CA-SCLD Polar SCLD Polar Regular LDPC Theoretical value [a.u.] 589,056 573,440 793,600 Normalized by CA-SCLD polar 1 0.973 1.35 Execution time [ms] 6064 5985 13,687 Error-Correcting Code Computational Cost for Decoding CA-SCLD polar code [26,27] $L m a x N 1 + log 2 ⁡ N + K 2 log 2 ⁡ L m a x + 4 − k$ SCLD polar code [26,27] $L m a x N 1 + log 2 ⁡ N + K 2 log 2 ⁡ L m a x + 3$ Regular LDPC code [28,29] $I m a x ( 2 d v N + 2 d c + 1 N − K )$ 5G NR LDPC Regular LDPC Code length 2048 Code rate 0.5 Column and row weights Variable, base-graph 2 (6,3) Decoding algorithm Offset min-sum Sum-product Sum-product Decoding iterations 20 50 50 Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https: Share and Cite MDPI and ACS Style Fujita, S.; Okamoto, E.; Takenaka, H.; Endo, H.; Fujiwara, M.; Kitamura, M.; Shimizu, R.; Sasaki, M.; Toyoshima, M. Polar-Coded Transmission over 7.8-km Terrestrial Free-Space Optical Links. Photonics 2023, 10, 462. https://doi.org/10.3390/photonics10040462 AMA Style Fujita S, Okamoto E, Takenaka H, Endo H, Fujiwara M, Kitamura M, Shimizu R, Sasaki M, Toyoshima M. Polar-Coded Transmission over 7.8-km Terrestrial Free-Space Optical Links. Photonics. 2023; 10 (4):462. https://doi.org/10.3390/photonics10040462 Chicago/Turabian Style Fujita, Shingo, Eiji Okamoto, Hideki Takenaka, Hiroyuki Endo, Mikio Fujiwara, Mitsuo Kitamura, Ryosuke Shimizu, Masahide Sasaki, and Morio Toyoshima. 2023. "Polar-Coded Transmission over 7.8-km Terrestrial Free-Space Optical Links" Photonics 10, no. 4: 462. https://doi.org/10.3390/photonics10040462 Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
{"url":"https://www.mdpi.com/2304-6732/10/4/462","timestamp":"2024-11-13T09:44:17Z","content_type":"text/html","content_length":"504675","record_id":"<urn:uuid:4235f5d2-dd10-4a6f-ae5a-9099d8a2b666>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00161.warc.gz"}
5 Best Ways to Find Distinct Elements Common to All Rows of a Matrix in Python π ‘ Problem Formulation: Imagine you have a 2D matrix where each row represents a collection of elements. Your task is to find the set of distinct elements that are common to all rows in the matrix. For example, given the matrix [[1, 2, 3], [2, 3, 4], [1, 3, 5]], the desired output would be [3] since the number 3 is the only element present in all rows. Method 1: Using Set Intersection This method involves converting each row of the matrix into a set and then finding the intersection of all these sets. The intersection of sets contains only the elements that are common to all sets, which in this case are the distinct elements common to all rows of the matrix. Here’s an example: matrix = [[1, 2, 3], [2, 3, 4], [1, 3, 5]] common_elements = set(matrix[0]).intersection(*matrix[1:]) This code snippet first converts the first row of the matrix into a set and then computes the intersection with the rest of the rows. The intersection() method of sets is used with unpacking operator * to handle all remaining rows. This method is straightforward and efficient for small to medium-sized matrices. Method 2: Using List Comprehension and all() In this approach, we utilize list comprehension along with the built-in function all() to check if an element is present in all rows. We iterate through the first row and keep only those elements if they are present in every row of the matrix. Here’s an example: matrix = [[1, 2, 3], [2, 3, 4], [1, 3, 5]] common_elements = [el for el in matrix[0] if all(el in row for row in matrix)] The code examines each element in the first row and uses a nested comprehension along with all() to determine if that element is present in every other row. This method is intuitive and relatively easy to understand, though possibly less efficient than using set operations for large datasets. Method 3: Using functools and reduce() Using the functools.reduce() function along with set intersection, we can systematically reduce the matrix to the common elements shared by all rows. The reduce() function applies a function of two arguments cumulatively to the items of a sequence. Here’s an example: from functools import reduce matrix = [[1, 2, 3], [2, 3, 4], [1, 3, 5]] common_elements = reduce(lambda x, y: x & set(y), matrix, set(matrix[0])) This code applies a lambda function that performs a set intersection across the rows of the matrix, starting with the first row. The reduce() function successively applies this lambda function and cumulatively intersects all row sets. This method can be useful for large datasets but might be less readable to those unfamiliar with reduce(). Method 4: Using Collections Counter The collections.Counter class can be used to count the frequency of elements in each row and then to find common elements. We initialize a counter with the first row and then update the counter with the elements of each subsequent row using the & operator. Here’s an example: from collections import Counter matrix = [[1, 2, 3], [2, 3, 4], [1, 3, 5]] common_elements = Counter(matrix[0]) for row in matrix[1:]: common_elements &= Counter(row) We construct a Counter for the first row and then intersect it with Counter objects of each subsequent row. The elements() method then retrieves the common elements. This is a powerful method, particularly for matrices with repeated elements, though it can be a bit more involved than using pure set operations. Bonus One-Liner Method 5: Using numpy and np.prod() When working with numerical matrices, the numpy library offers a concise and efficient solution. Using the np.prod() function, we can multiply the boolean arrays representing the presence of elements in each row, resulting in a boolean array that signifies the common elements. Here’s an example: import numpy as np matrix = np.array([[1, 2, 3], [2, 3, 4], [1, 3, 5]]) common_elements = np.prod(matrix == matrix[:, None, :], axis=0).any(axis=1) common_elements = matrix[0, common_elements] This compact numpy-based one-liner creates a 3D boolean array where each slice along the third axis represents the presence of one of the first row’s elements in the rows of the matrix. The product across the rows then gives a boolean array where True values correspond to common elements. This method can be very fast and memory-efficient for numerical matrices of varying sizes. • Method 1: Set Intersection. Simple and efficient, works well with unique elements. Not optimal for repeated elements or very large matrices. • Method 2: List Comprehension and all(). Intuitive and easy to understand. May be inefficient for larger matrices due to repeated containment checks. • Method 3: functools and reduce(). Useful for large datasets and functional programming enthusiasts. Can be less readable for those not familiar with reduce(). • Method 4: Collections Counter. Powerful for counting repeated elements. A bit more involved and possibly overkill for simple problems. • Bonus One-Liner Method 5: Using numpy. Highly efficient for numerical data. Leverages numpy’s speed and vectorized operations, but less suitable for non-numeric data.
{"url":"https://blog.finxter.com/5-best-ways-to-find-distinct-elements-common-to-all-rows-of-a-matrix-in-python/","timestamp":"2024-11-06T11:53:57Z","content_type":"text/html","content_length":"72787","record_id":"<urn:uuid:3d408594-05a7-4f2a-9df5-339868a84b32>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00030.warc.gz"}