Using the Slope Calculator

This slope calculator takes two points, uses the slope formula to calculate the slope of the line defined by those points, and then finds the y intercept. The slope and the intercept are then combined to provide the equation of the line in slope intercept form ("y = mx + b"). A graph of the line is drawn on a coordinate plane, along with the slope intercept equation. The slope calculator updates the graph and the equation automatically when you enter new values for the points.

The calculator also allows direct entry of the rise or run values, or a decimal value for m. The calculator will automatically create the correct decimal or fraction components for whatever you enter. You may also calculate the equation for a line by changing the slope independently (either as a slope fraction or a slope decimal), or by entering a new y intercept. If a new slope is entered, the slope calculator will move one of the points so that the equation matches the new line. If a new y intercept is entered, the slope will remain the same but the calculator will move the two points to shift the line to match the new y intercept.

What is the Slope of a Line?

The slope of a line is a mathematical measurement of how steep a line drawn on a graph appears, and this value is usually shown as the variable m in an equation in slope intercept form, y = mx + b. Slope is defined as the ratio of vertical (y-axis) change over a given amount of horizontal (x-axis) change, often remembered more simply as a fraction describing rise over run, or as the rate of change. This slope calculator provides this ratio both as a fraction and as a decimal, but shows the slope as a fraction in the calculator graph.

If a line slopes up and to the right, it is rising as you look left-to-right across the x-axis. The rise in this case is positive, and such a line has a positive slope. If a line slopes down and to the right, it is falling as you look left-to-right across the x-axis. The rise in this case is negative (the line is "falling"), and such a line has a negative slope.

The Slope Intercept Form

When we're dealing with an equation that describes a line (i.e., a linear equation), we typically put the equation into a form called slope intercept form, which looks like this:

y = mx + b

An equation in this form describes how the y coordinate for a point on the line is calculated given an x coordinate. The slope calculator takes the points you provide, then calculates the slope and the y intercept as described below. These values are combined and the equation of the line is shown in the calculator graph area.

How to Find the Slope of a Line

If you have two points, they define a line on a Cartesian coordinate plane, and you can use those points to calculate the slope of the line. This slope calculator does exactly that, using the formula below:

m = (y2 − y1) / (x2 − x1)

Starting with two points (x1, y1) and (x2, y2), the slope calculator substitutes the values into this equation to calculate the "rise" on the top and the "run" on the bottom. Given your two points, it doesn't matter which point is used as (x1, y1) and which as (x2, y2), but it is very important that you consistently use the coordinates from each point. For example, if you choose one point such as (5, 6), be sure to use the y coordinate 6 in the subtraction on the top of the equation and the x coordinate 5 in the subtraction on the bottom, keeping that point in the same position (first term or second term) in both subtractions.
Mixing the individual coordinates between points, or thinking that there's some specific reason to choose one point as (x1, y1), are common mistakes when calculating slope. When in doubt, check your answer with this slope calculator and you'll see it's a lot easier than it seems.

What is the Slope of a Horizontal Line?

The slope of a horizontal line is equal to zero. In the slope formula above, the top component of the slope ratio shows the vertical change between two points on the line. Because every point on a horizontal line has the same y-axis coordinate, the numerator in this slope fraction will always be zero, and therefore the calculated slope will also always be zero. The slope calculator will calculate the equation of the line without the first term, effectively reducing the y = mx + b equation to the form y = b, reflecting that the calculated y coordinate is constant for any given x coordinate.

What is the Slope of a Vertical Line?

Like the slope of a horizontal line, the slope of a vertical line is special. Again referring to the slope equation, consider the way coordinates change as you travel up and down a vertical line. On a vertical line, the x-axis coordinate never changes for any given y-axis coordinate. Because of this, the change in x represented by the bottom component of the slope ratio is zero. There's a problem here: the slope equation divides by this change-in-x result, and division by zero is not allowed. As a result, the slope of a vertical line is undefined, and you can readily see that you cannot calculate y values in terms of x values using an equation in the y = mx + b slope intercept form, because the m value for slope is undefined, making the whole equation undefined. Simply put, there is no equivalent slope intercept form equation for a vertical line, so we need something else. The equation for a vertical line is transformed by the slope calculator to the form x = c, where c represents a constant x value that defines the line for every possible y coordinate.

How to Find the Y Intercept of a Line

Once you have the equation of a line in slope intercept form, finding the y intercept is easy, but understanding why the equation highlights the intercept is as important as simply being able to read it from the equation's final term. The y intercept is the point where the line crosses the y-axis. Because every point on the y-axis has an x coordinate value of zero, the line's slope intercept equation can be used to solve for y given an x value of zero. This calculates the value where the line crosses the y-axis. The y intercept is formally a coordinate pair, but because the x coordinate is by definition zero, the y intercept is often identified by a single value (the y coordinate). This y-axis value appears alone as the b variable in the y = mx + b slope intercept equation. In fact, when a line is described by a slope intercept equation, the y intercept value can be read directly from the last term in the equation. But what if you don't have the line's equation and you're just starting from the points? Then you can rearrange the slope intercept equation so that it takes on the following form:

b = y − mx

This formula calculates the intercept from the slope and one point on the line. The slope calculator uses this same formula to find the intercept after determining the actual slope as described above.
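Both formulas are simple enough to script. Here is a minimal Python sketch of the same calculation; it is illustrative only, not the calculator's actual implementation, and the function name line_through is my own:

```python
from fractions import Fraction

def line_through(p1, p2):
    """Return the slope-intercept equation of the line through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        # Vertical line: the slope is undefined, so fall back to the x = c form.
        return f"x = {x1}"
    m = Fraction(y2 - y1, x2 - x1)   # rise over run: (y2 - y1) / (x2 - x1)
    b = y1 - m * x1                  # b = y - mx, using either point
    return f"y = {m}x + {b}"

print(line_through((2, 3), (4, 7)))  # y = 2x + -1
print(line_through((5, 1), (5, 9)))  # x = 5
```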
How to Find the Equation of a Line

There are two scenarios for finding the equation of a line, both of which are used internally by this slope calculator depending on which values you change. The discussion above shows how the calculator finds the slope from two points using the slope formula (the "rise over run" calculation). Given the slope and at least one point, the b = y − mx equation is used to find the intercept. With the intercept and slope calculated, all the parts necessary to create an equation in slope intercept form are present. The slope intercept equation for the line is shown in the calculator graph in one of the quadrants not intersected by the line.

Slope Calculator Updates

04/16/2019 – Initial version of the slope calculator.
04/21/2019 – Corrected error messages in slope calculator graph.
04/22/2019 – Enhanced calculator instructions.
04/24/2019 – Added rise over run scale to slope calculator graph.
04/29/2019 – Added images to calculator description.
All quantitative aptitude examinations will have questions based on percentages. Most of the questions in this category involve a percentage change or successive percentage changes, for example: "In a shop, a laptop marked at $1000 was discounted 20% for Christmas Eve and further discounted 30% for New Year's Eve. What is the price of the laptop now?" (As we will see below, the answer is $1000 × 0.8 × 0.7 = $560.) Here we have provided a set of basic concepts, tips and shortcuts on how to solve percentage problems easily and quickly.

Percent means "out of one hundred" or "per 100" and is one way of expressing a part-to-whole relationship. "Part-to-whole" is just the math expression for how many parts or portions you have out of a whole thing. For example, two glasses of juice out of eight glasses would be 2/8, which also equals 25/100, or 25%. Using a percentage allows us to express this part-to-whole relationship as a whole number instead of as a fraction or decimal; for example, "25% of the crowd" means we are talking about 25 out of every 100 people in the crowd. In decimal form, this number would be 0.25 and in fraction form it would be 25/100. All three forms tell us the same piece of information.

You can also try
- Tricks to crack aptitude questions on Numbers
- Important formulas and Tips to solve Cyclicity of Numbers aptitude questions

Basic Concepts of Percentage & Conversion Type Questions

To solve questions of the form "What is x% of y?", every word or symbol in the sentence needs to be translated into math. "What" always represents a variable; let's use x. The verb (is, was, are) always represents an equals sign. For any percentage, we write a fraction with the given number over 100. "Of" always represents multiplication.

- Let's solve the problem "What is 30% of 80?" We can write x = 30/100 × 80 and do the math to solve for x. (The answer is 24.)

Let's try a slightly more complicated problem:

- x% of y is 50 and y% of 18 is 27. What is x?

Now we have multiple variables for our percentages. Can we still use our word translation method here? Sure! We have x/100 × y = 50 and y/100 × 18 = 27. Let's solve the second equation first, since it has only one variable. We get y = 150 (remember – not 150%!), so we plug y = 150 into the first equation and get x = 33 1/3 (again, with no percentage sign).

Next, we move on to quick math on percentages and converting among percents, fractions and decimals.

- Given 100% of a number, it is very easy to calculate 50%, 10%, 5% and 1% of that number. These four building blocks can then be used to calculate or estimate any whole-number percentage in a very short time. We'll learn this method by example. Let's start with the number 120. First, we create a quick chart as follows:

12 = 10%
1.2 = 1%
60 = 50%
6 = 5%

Now we can solve problems using these base numbers. Example: 6% of 120 = 5% + 1% = 6 + 1.2 = 7.2

One other type of calculation you must be adept at is converting among fractions, decimals and percents.

- Percent to Decimal: move the decimal point two places to the left. For example, 42% = 0.42.
- Percent to Fraction: place the percent number in the numerator and 100 in the denominator; simplify. For example, 42% = 42/100 = 21/50.
- Decimal to Percent: move the decimal point two places to the right. For example, 1.6 = 160%.
- Fraction to Percent: first convert the fraction to a decimal, then follow the directions to convert from decimal to percent. For example, 5/6 = 0.8333... = 83 1/3 %.

A quick computational check of these translations and conversions appears below.
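The word-translation method maps directly onto arithmetic, so answers are easy to verify with a few lines of code. This is a small illustrative Python sketch of my own (the helper name percent_of is not from the article):

```python
from fractions import Fraction

def percent_of(p, y):
    """Translate "what is p% of y" into math: x = p/100 * y."""
    return p / 100 * y

print(percent_of(30, 80))     # 24.0

# The two-variable example: y% of 18 is 27, then x% of y is 50.
y = 27 / 18 * 100             # solve y/100 * 18 = 27  =>  y = 150
x = 50 / y * 100              # solve x/100 * y  = 50  =>  x = 33.33...
print(y, x)                   # 150.0 33.333...

# Conversions: percent <-> decimal <-> fraction.
print(42 / 100)               # percent to decimal: 0.42
print(Fraction(42, 100))      # percent to fraction: 21/50
print(1.6 * 100)              # decimal to percent: 160.0
print(5 / 6 * 100)            # fraction to percent: 83.33...
```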
For example, 0.54 = 54% by moving the decimal point two places, which is nothing but 54/100; and 45% = 0.45, which is nothing but 45/100.

Remember the following to improve your calculation speed:

1/3 = 33.33%
1/4 = 25%
1/5 = 20%; 2/5 = 2 × (20%) = 40% and so on…
1/6 = 16.66%
1/7 = 14.28%
1/8 = 12.5%
1/9 = 11.11%
1/11 = 9.09%

and their multiples.

Percent Change Type of Questions

Percent increase or decrease is one way to represent a change in a given number. (Note that here we are talking about a single change. Multiple changes are covered below in the section on successive percentage change.) Percent increase is the percentage by which the original number increases, and percent decrease is the percentage by which the original number decreases. We can use a very simple formula for either type of problem:

Increase (or Decrease) % = (Change / Original) × 100

Let's try it out.

- You've had your eye on a $100 trouser at the store, but you think it's too expensive. Finally, it goes on sale for $60. What is the percent decrease? The change is always the difference between our starting and ending points. In this case, it's 100 − 60 = 40. The "original" is our starting point; in this case, it's 100. So the percent decrease is (40/100) × 100 = (0.4) × 100 = 40%.

Always remember that your denominator is the original number, your starting point. The most common mistake made on this type of problem is using the smaller number for percent decrease or the larger number for percent increase. This is exactly the opposite of what you want to do! Percent decrease means you're going from a larger number to a smaller one, so the larger number is your starting point. And percent increase, of course, means you're going from a smaller number to a larger one, so the smaller number is your starting point. Always think of the denominator as your starting-point number and you won't get mixed up.

Let's try another one.

- Kelvin makes $60 a week from his job. He earns a raise and now makes $70 a week. What is the percent increase? The change is 70 − 60 = 10 and the original is 60, so the increase is (10/60) × 100 = 16.67%.

If there is a 1/x fractional increase, then a 1/(x+1) fractional decrease returns you to the original value, and vice versa.

- Example: If the price of an article is increased by 33.33%, by what percent does it need to be decreased to return it to the original price? Since 33.33% = 1/3, the required decrease is 1/(3+1) = 1/4 = 25%.

Multiple Percent Change Type of Questions

What about when we have multiple percentage changes happening in one problem? These are called successive percentage change problems, and our process is almost exactly the same.

- Two years ago, the population of a street in Los Angeles was 250. Last year, the population increased by 20%, and this year the population is expected to increase by another 10%. How many residents is the street expected to have at the end of this year?

Let's look at the right way and the wrong way to do this problem.

- First, the right way: the street starts out with a population of 250. In the first year, the population increases by 20%, so we add 50 people (practice fast math here: 10% + 10% = 20%, so 25 + 25 = 50). Our new population is 250 + 50 = 300. This year, the street will add 10%, but this time the 10% is based on the new population figure of 300, not the old figure of 250. This year we add 30 people, so our population at the end of the year is expected to be 300 + 30 = 330.

- Now, the wrong way: if we just add 20% and 10% for an increase of 30%, we would base the population on a 30% increase of the 250 figure, or 75 (10% + 10% + 10% = 25 + 25 + 25 = 75). Our final answer would be 325.
You can expect this wrong number to show up in the answer choices, so if you make this mistake you will not realize it. The reason we must do each step separately in a successive change problem is that the starting point for each step is a different number – it's based on the number you just calculated in the preceding step. Just remember that you must do these problems step by step to get them right. You will never get the right answer if you just add or subtract the percents and do the math all at once.

Important Formulas for Percentage Calculation

Formulas for Percentage of Population
- Population after n years = P × (1 + R/100)^n
- Population n years ago = P / (1 + R/100)^n

Formulas for Percentage Increase/Decrease
- If the price of an object increases by R%, the reduction in consumption so as not to increase the expenditure = [R/(100 + R)] × 100 %
- If the price of an object decreases by R%, the increase in consumption so as not to decrease the expenditure = [R/(100 − R)] × 100 %

Formulas for Depreciation
- Value of a car after n years = P × (1 − R/100)^n
- Value of a car n years ago = P / (1 − R/100)^n

Sample Questions and Answers on Percentages

Last but not least, practice as many questions as you can to gain a better understanding of these percentage concepts and to improve your speed in solving problems related to them. Here is a sample set of solved quantitative aptitude questions based on percentages. Take the test and strengthen your understanding.
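The step-by-step successive-change rule and the compound formulas above are the same idea, as this short Python sketch of mine (the helper name apply_changes is illustrative, not from the article) shows:

```python
def apply_changes(start, *percent_changes):
    """Apply successive percentage changes step by step,
    each one based on the value from the preceding step."""
    value = start
    for p in percent_changes:
        value *= (1 + p / 100)
    return value

# The Los Angeles street example: 250, +20%, then +10%.
print(apply_changes(250, 20, 10))        # 330.0 (not 325!)

# The laptop example from the introduction: $1000, -20%, then -30%.
print(apply_changes(1000, -20, -30))     # 560.0

# "Population after n years" is the same idea with a repeated rate:
print(apply_changes(250, *[10] * 3))     # three years of 10% growth
print(250 * (1 + 10 / 100) ** 3)         # same result via P*(1 + R/100)^n: 332.75
```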
Bank of England — the UK's central bank, responsible for lending money. (More)

Bond/Gilt — a loan made to the government that it promises to pay back after a certain length of time, plus a fixed amount of regular interest (typically paid twice a year). It is one way that a government can borrow money. The price of a bond may change depending on the interest rate, and on the time to maturity (when the government buys back the bond). (More at Investopedia)

Central bank — the institution that manages a country's money. It is often named after the country (e.g. Bank of England, Wikipedia). In the U.S. it is called the Federal Reserve (Wikipedia); in the European Union it is called the European Central Bank (Wikipedia).

Chartalism — a theory that defines money as something created by the government that gets its value from its legal status, and so has no intrinsic value. (ref) See Fiat money and Metalism.

Debt — an amount of money owed. The National Debt is the total amount that the Government has spent into the economy and not taxed back yet.

Deficit — the difference between the amount of money spent and the amount received, typically measured over the course of a year. (More at Investopedia)

Deflation — the continuous fall in prices. High deflation means prices are falling a lot; low deflation means prices are falling more slowly. The opposite is inflation.

Economics — the social science that studies the production, distribution, and consumption of goods and services. It is often split into two main categories: microeconomics and macroeconomics (see below).

Endogenous money — money that is created by banks or government without the requirement of having the money in an account as savings. Banks do not lend out other customers' savings, but literally use computer keystrokes to create a liability and loan account. Likewise, the government does not rely on spending "taxpayers'" money; the UK Parliament determines spending in the Budget and instructs the Bank of England to create an account, and that money may later be removed from the economy with taxes. (See also Exogenous money)

Energy sovereignty — refers to countries that produce, and are self-sufficient in, their own energy, without having to rely on another country.

Exogenous money — the idea that money is derived by banks and government from existing savings, or taxpayers' money; it assumes that the money supply is fixed. (See also Endogenous money)

Federal Reserve — the central bank of the U.S.A., responsible for lending money. (More)

Fiat money — money issued by a government that is not backed by gold or silver. (ref) See also Chartalism and Metalism.

Fiscal deficit — when a government has spent more into the economy than it has received in taxes. Fiscal deficits stimulate the economy. The total amount over all time is the National Debt.

Fiscal policy — the action a government can take to influence the economy, including government spending and taxation. See also Monetary policy.

Fiscal surplus — when a government has spent less into the economy than it has received back in taxes. Fiscal surpluses hinder the economy and may lead to austerity. The total amount over all time is the National Debt.

Heterodox — not orthodox, i.e. not conforming to standard or conventional wisdom. E.g. MMT is a heterodox economic theory.

Inflation — the continuous rise in prices. High inflation means prices are increasing a lot; low inflation means prices are increasing more slowly. The opposite is deflation. It is typically measured over the last 12 months.
Job guarantee — an MMT policy that encourages the government to become the Employer of Last Resort and guarantee a job, at a living wage, to anyone who wants one.

Macroeconomics — large-scale economics, such as that applied to a country. May include factors such as interest rates, national productivity and taxes.

Metalism — a theory that defines the value of money as derived from the commodity it is based on, such as a gold standard. (ref) See Chartalism and Fiat money.

Microeconomics — studies individuals and business decisions, focusing on supply and demand.

Modern Money Theory (MMT) — a description of the way money really works in a country. This suggests some differences from how many people and economists think the economy works.

Monetary policy — the action a country's central bank can take to influence the inflation rate and the amount of money in the economy. Actions include changing the "base" interest rate, which affects saving and loans, and buying and selling government bonds (sometimes through quantitative easing (QE)). (More) See also Fiscal policy.

Monetary sovereignty — refers to countries that issue and control their own currency. Many European countries do not have monetary sovereignty because the European Central Bank controls their currency, the Euro.

Money (currency) — the means by which the government officially keeps an account of its finances. This may include currency (notes and coins) and a digital form that is recognised and managed by a money account (i.e. a bank account).

NAIRU — "Non-Accelerating Inflation Rate of Unemployment", a theoretical level of unemployment below which inflation would be expected to rise. It was introduced in 1975 as an improvement over the "natural rate of unemployment" concept. NAIRU suggests that inflation can be controlled through unemployment. It is contested.

National Debt — the total amount of money that the Government thinks it owes (to itself?). From an MMT point of view, it is the amount invested in the country; it is your saving, and money in your pocket.

Neoliberalism — an ideology that uses microeconomics to make markets "more efficient", by such means as curbing the powers of the unions, reducing job protections, reducing wages and other entitlements, and reducing income support schemes. (ref)

Sovereignty — absolute power and monopoly of the state. For example: monetary sovereignty, energy sovereignty, food sovereignty.

Tax — the means by which a government drives its currency and gives it value, as paying it is a statutory and legal requirement. It removes money from the economy; once paid, it is deleted and no longer available.

TINA — "There Is No Alternative", used as a political slogan by Margaret Thatcher in response to critics of her monetary policy.
Addition Polymers

Addition polymers are synthetically produced by adding together unsaturated monomers without the elimination of any atoms.

- An initiator molecule breaks the C=C bond in an alkene (an addition reaction). The new substance is a monomer radical, i.e. it has one free electron.
- The monomer radical breaks the C=C bond in another monomer and bonds to it. This, in turn, leaves an unbonded electron across the broken double bond.
- The polymer propagates in this repeating fashion until an inhibitor molecule bonds to the end of a chain, de-radicalising the polymer and preventing it from elongating further.

Structure of Polymers

Vinyl chloride (chloroethene) is the monomer of polyvinyl chloride (PVC).

Uses of Addition Polymers

Low-density polyethylene (LDPE) — branched chains that cannot pack closely together; low melting point (~80 °C). Uses: squeezy sauce bottles, vacuum cleaner tubes.

High-density polyethylene (HDPE) — unbranched, linear chains that pack tightly; harder and more rigid than LDPE, less flexible. Uses: garbage bins and buckets.

Polyvinyl chloride (PVC) — resistant to chemical corrosion. Uses: piping, siding, gutters; waste water pipes; electrical wire insulation.

Polystyrene — can be expanded to form styrofoam (a low-density insulator). Uses: CD cases and packaging, plastic wine glasses, foam coffee cups.

Polytetrafluoroethylene (PTFE) — high melting point (327 °C), high chemical resistance, low coefficient of friction. Uses: non-stick coatings for cooking pans; anti-corrosion coatings for containers, pipes and medical equipment; sliding applications, e.g. bearings.

Explanation of Properties

Polyethylene — LDPE has a high degree of branching, which means it has a low degree of crystallinity and relatively weak dispersion forces. It therefore has a low melting point, is low density and flexible. HDPE has a low degree of branching, so a high degree of crystallinity and stronger dispersion forces. This gives it a higher melting point, higher density and rigidity.

Polyvinyl chloride — the bulky chlorine side group increases intermolecular forces and reduces chain flexibility, resulting in hardness and rigidity. Plasticisers may be added to the PVC chains to increase flexibility by weakening these forces.

Polystyrene — the bulky benzene substituent increases physical rigidity and strength due to increased dispersion forces.

Polytetrafluoroethylene — the fluorine side groups repel one another, locking the polymer into a linear, elongated helix. Chains align closely to form a crystalline structure, making PTFE hard and rigid due to dispersion forces.
CHAPTER 3

Counting

It may seem peculiar that a college-level text has a chapter on counting. At its most basic level, counting is a process of pointing to each object in a collection and calling off "one, two, three, ..." until the quantity of objects is determined. How complex could that be? Actually, counting can become quite subtle, and in this chapter we explore some of its more sophisticated aspects. Our goal is still to answer the question "How many?" but we introduce mathematical techniques that bypass the actual process of counting individual objects. Almost every branch of mathematics uses some form of this "sophisticated counting." Many such counting problems can be modeled with the idea of a list, so we start there.

3.1 Counting Lists

A list is an ordered sequence of objects. A list is denoted by an opening parenthesis, followed by the objects, separated by commas, followed by a closing parenthesis. For example (a, b, c, d, e) is a list consisting of the first five letters of the English alphabet, in order. The objects a, b, c, d, e are called the entries of the list; the first entry is a, the second is b, and so on. If the entries are rearranged we get a different list, so, for instance, (a, b, c, d, e) ≠ (b, a, c, d, e).

A list is somewhat like a set, but instead of being a mere collection of objects, the entries of a list have a definite order. Note that for sets we have {a, b, c, d, e} = {b, a, c, d, e}, but—as noted above—the analogous equality for lists does not hold. Unlike sets, lists are allowed to have repeated entries. For example (5, 3, 5, 4, 3, 3) is a perfectly acceptable list, as is (S, O, S). The number of entries in a list is called its length. Thus (5, 3, 5, 4, 3, 3) has length six, and (S, O, S) has length three.

Occasionally we may get sloppy and write lists without parentheses and commas; for instance, we may express (S, O, S) as SOS if there is no danger of confusion. But be alert that doing this can lead to ambiguity. Is it reasonable that (9, 10, 11) should be the same as 91011? If so, then (9, 10, 11) = 91011 = (9, 1, 0, 1, 1), which makes no sense. We will thus almost always adhere to the parenthesis/comma notation for lists.

Lists are important because many real-world phenomena can be described and understood in terms of them. For example, your phone number (with area code) can be identified as a list of ten digits. Order is essential, for rearranging the digits can produce a different phone number. A byte is another important example of a list. A byte is simply a length-eight list of 0's and 1's. The world of information technology revolves around bytes.

To continue our examples of lists, (a, 15) is a list of length two. Likewise (0, (0, 1, 1)) is a list of length two whose second entry is a list of length three. The list (N, Z, R) has length three, and each of its entries is a set. We emphasize that for two lists to be equal, they must have exactly the same entries in exactly the same order. Consequently if two lists are equal, then they must have the same length. Said differently, if two lists have different lengths, then they are not equal. For example, (0, 0, 0, 0, 0, 0) ≠ (0, 0, 0, 0, 0). For another example note that (g, r, o, c, e, r, y, l, i, s, t) ≠ (bread milk eggs mustard coffee) because the list on the left has length eleven but the list on the right has just one entry (a piece of paper with some words on it). There is one very special list which has no entries at all.
It is called the empty list, and is denoted (). It is the only list whose length is zero.

One often needs to count up the number of possible lists that satisfy some condition or property. For example, suppose we need to make a list of length three having the property that the first entry must be an element of the set {a, b, c}, the second entry must be in {5, 7} and the third entry must be in {a, x}. Thus (a, 5, a) and (b, 5, a) are two such lists. How many such lists are there all together? To answer this question, imagine making the list by selecting the first element, then the second and finally the third. This is described in Figure 3.1. The choices for the first list entry are a, b or c, and the left of the diagram branches out in three directions, one for each choice. Once this choice is made there are two choices (5 or 7) for the second entry, and this is described graphically by two branches from each of the three choices for the first entry. This pattern continues for the choice for the third entry, which is either a or x. Thus, in the diagram there are 3 · 2 · 2 = 12 paths from left to right, each corresponding to a particular choice for each entry in the list. The corresponding lists are tallied at the far-right end of each path. So, to answer our original question, there are 12 possible lists with the stated properties.

[Figure 3.1. Constructing lists of length 3: a tree diagram branching through the choices a, b, c for the first entry, 5, 7 for the second, and a, x for the third, whose twelve paths end at the lists (a, 5, a), (a, 5, x), (a, 7, a), (a, 7, x), (b, 5, a), (b, 5, x), (b, 7, a), (b, 7, x), (c, 5, a), (c, 5, x), (c, 7, a), (c, 7, x).]

We summarize the type of reasoning used above in an important fact called the multiplication principle.

Fact 3.1 (Multiplication Principle) Suppose in making a list of length n there are a₁ possible choices for the first entry, a₂ possible choices for the second entry, a₃ possible choices for the third entry and so on. Then the total number of different lists that can be made this way is the product a₁ · a₂ · a₃ · ⋯ · aₙ.

So, for instance, in the above example we had a₁ = 3, a₂ = 2 and a₃ = 2, so the total number of lists was a₁ · a₂ · a₃ = 3 · 2 · 2 = 12. Now let's look at some additional examples of how the multiplication principle can be used.

Example 3.1 A standard license plate consists of three letters followed by four numbers. For example, JRB-4412 and MMX-8901 are two standard license plates. (Vanity plates such as LV2COUNT are not included among the standard plates.) How many different standard license plates are possible?

To answer this question, note that any standard license plate such as JRB-4412 corresponds to a length-7 list (J, R, B, 4, 4, 1, 2), so the question can be answered by counting how many such lists are possible. We use the multiplication principle. There are a₁ = 26 possibilities (one for each letter of the alphabet) for the first entry of the list. Similarly, there are a₂ = 26 possibilities for the second entry and a₃ = 26 possibilities for the third entry. There are a₄ = 10 possibilities for the fourth entry, and likewise a₅ = a₆ = a₇ = 10. Therefore there are a total of a₁ · a₂ · a₃ · a₄ · a₅ · a₆ · a₇ = 26 · 26 · 26 · 10 · 10 · 10 · 10 = 175,760,000 possible standard license plates.
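For small cases, the multiplication principle can be checked by brute force. The following Python snippet is not part of the book; it simply enumerates the length-three lists from the example above and confirms the counts:

```python
from itertools import product

# Enumerate all lists whose first entry is in {a, b, c},
# second entry is in {5, 7}, and third entry is in {a, x}.
lists = list(product(['a', 'b', 'c'], [5, 7], ['a', 'x']))
print(len(lists))       # 12, matching 3 * 2 * 2
print(lists[:3])        # [('a', 5, 'a'), ('a', 5, 'x'), ('a', 7, 'a')]

# Example 3.1: license plates are too numerous to enumerate,
# but the multiplication principle gives the count directly.
print(26**3 * 10**4)    # 175760000
```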
There are two types of list-counting problems. On one hand, there are situations in which the same symbol or symbols may appear multiple times in different entries of the list. For example, license plates or telephone numbers can have repeated symbols. The sequence CCX-4144 is a perfectly valid license plate in which the symbols C and 4 appear more than once. On the other hand, for some lists repeated symbols do not make sense or are not allowed. For instance, imagine drawing 5 cards from a standard 52-card deck and laying them in a row. Since no 2 cards in the deck are identical, this list has no repeated entries. We say that repetition is allowed in the first type of list and repetition is not allowed in the second kind of list. (Often we call a list in which repetition is not allowed a non-repetitive list.) The following example illustrates the difference.

Example 3.2 Consider making lists from the symbols A, B, C, D, E, F, G.
(a) How many length-4 lists are possible if repetition is allowed?
(b) How many length-4 lists are possible if repetition is not allowed?
(c) How many length-4 lists are possible if repetition is not allowed and the list must contain an E?
(d) How many length-4 lists are possible if repetition is allowed and the list must contain an E?

Solutions:

(a) Imagine the list as containing four boxes that we fill with selections from the letters A, B, C, D, E, F and G:

( _ , _ , _ , _ )   with 7 choices for each box.

There are seven possibilities for the contents of each box, so the total number of lists that can be made this way is 7 · 7 · 7 · 7 = 2401.

(b) This problem is the same as the previous one except that repetition is not allowed. We have seven choices for the first box, but once it is filled we can no longer use the symbol that was placed in it. Hence there are only six possibilities for the second box. Once the second box has been filled we have used up two of our letters, and there are only five left to choose from in filling the third box. Finally, when the third box is filled we have only four possible letters for the last box:

( _ , _ , _ , _ )   with 7, 6, 5 and 4 choices, respectively.

Thus the answer to our question is that there are 7 · 6 · 5 · 4 = 840 lists in which repetition does not occur.

(c) We are asked to count the length-4 lists in which repetition is not allowed and the symbol E must appear somewhere in the list. Thus E occurs once and only once in each such list. Let us divide these lists into four categories depending on whether the E occurs as the first, second, third or fourth entry:

Type 1: (E, _, _, _)   Type 2: (_, E, _, _)   Type 3: (_, _, E, _)   Type 4: (_, _, _, E)

Consider lists of the first type, in which the E appears in the first entry. We have six remaining choices (A, B, C, D, F or G) for the second entry, five choices for the third entry and four choices for the fourth entry. Hence there are 6 · 5 · 4 = 120 lists having an E in the first entry. Likewise, there are 6 · 5 · 4 = 120 lists having an E in the second, third or fourth entry. Thus there are 120 + 120 + 120 + 120 = 480 such lists all together.

(d) Now we must find the number of length-four lists where repetition is allowed and the list must contain an E. Our strategy is as follows. By Part (a) of this exercise there are 7 · 7 · 7 · 7 = 7⁴ = 2401 lists where repetition is allowed. Obviously this is not the answer to our current question, for many of these lists contain no E. We will subtract from 2401 the number of lists that do not contain an E. In making a list that does not contain an E, we have six choices for each list entry (because we can choose any one of the six letters A, B, C, D, F or G). Thus there are 6 · 6 · 6 · 6 = 6⁴ = 1296 lists that do not have an E. Therefore the final answer to our question is that there are 2401 − 1296 = 1105 lists with repetition allowed that contain at least one E.
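Counts over such a small alphabet are easy to verify computationally. This short Python check (mine, not from the book) enumerates the lists directly and reproduces all four answers of Example 3.2:

```python
from itertools import product, permutations

letters = "ABCDEFG"

# (a) Repetition allowed: 7^4 = 2401.
a = sum(1 for t in product(letters, repeat=4))

# (b) No repetition: 7 * 6 * 5 * 4 = 840.
b = sum(1 for t in permutations(letters, 4))

# (c) No repetition, must contain an E: 480.
c = sum(1 for t in permutations(letters, 4) if "E" in t)

# (d) Repetition allowed, must contain an E: 2401 - 1296 = 1105.
d = sum(1 for t in product(letters, repeat=4) if "E" in t)

print(a, b, c, d)   # 2401 840 480 1105
```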
Perhaps you wondered if Part (d) of Example 3.2 could be solved with a setup similar to that of Part (c). Let's try doing it that way. We want to count the length-4 lists (with repetition allowed) that contain at least one E. The following diagram is adapted from Part (c), the only difference being that there are now seven choices in each slot because we are allowed to repeat any of the seven letters:

Type 1: (E, _, _, _)   Type 2: (_, E, _, _)   Type 3: (_, _, E, _)   Type 4: (_, _, _, E)   with 7 choices in each open slot.

This gives a total of 7³ + 7³ + 7³ + 7³ = 1372 lists, an answer that is substantially larger than the (correct) value of 1105 that we got in our solution to Part (d) above. It is not hard to see what went wrong. The list (E, E, A, B) is of type 1 and type 2, so it got counted twice. Similarly (E, E, C, E) is of type 1, 3 and 4, so it got counted three times. In fact, you can find many similar lists that were counted multiple times. In solving counting problems, we must always be careful to avoid this kind of double-counting or triple-counting, or worse.

Exercises for Section 3.1

Note: A calculator may be helpful for some of the exercises in this chapter. This is the only chapter for which a calculator may be helpful. (As for the exercises in the other chapters, a calculator makes them harder.)

1. Consider lists made from the letters T, H, E, O, R, Y, with repetition allowed.
(a) How many length-4 lists are there?
(b) How many length-4 lists are there that begin with T?
(c) How many length-4 lists are there that do not begin with T?

2. Airports are identified with 3-letter codes. For example, the Richmond, Virginia airport has the code RIC, and Portland, Oregon has PDX. How many different 3-letter codes are possible?

3. How many lists of length 3 can be made from the symbols A, B, C, D, E, F if...
(a) ... repetition is allowed.
(b) ... repetition is not allowed.
(c) ... repetition is not allowed and the list must contain the letter A.
(d) ... repetition is allowed and the list must contain the letter A.

4. Five cards are dealt off of a standard 52-card deck and lined up in a row. How many such line-ups are there in which all 5 cards are of the same suit?

5. Five cards are dealt off of a standard 52-card deck and lined up in a row. How many such line-ups are there in which all 5 cards are of the same color (i.e., all black or all red)?

6. Five cards are dealt off of a standard 52-card deck and lined up in a row. How many such line-ups are there in which exactly one of the 5 cards is a queen?

7. This problem involves 8-digit binary strings such as 10011011 or 00001010 (i.e., 8-digit numbers composed of 0's and 1's).
(a) How many such strings are there?
(b) How many such strings end in 0?
(c) How many such strings have the property that their second and fourth digits are 1's?
(d) How many such strings have the property that their second or fourth digits are 1's?

8. This problem concerns lists made from the symbols A, B, C, D, E.
(a) How many such length-5 lists have at least one letter repeated?
(b) How many such length-6 lists have at least one letter repeated?

9. This problem concerns 4-letter codes made from the letters A, B, C, D, ..., Z.
(a) How many such codes can be made?
(b) How many such codes have no two consecutive letters the same?

10. This problem concerns lists made from the letters A, B, C, D, E, F, G, H, I, J.
(a) How many length-5 lists can be made from these letters if repetition is not allowed and the list must begin with a vowel?
(b) How many length-5 lists can be made from these letters if repetition is not allowed and the list must begin and end with a vowel?
(c) How many length-5 lists can be made from these letters if repetition is not allowed and the list must contain exactly one A?

11. This problem concerns lists of length 6 made from the letters A, B, C, D, E, F, G, H. How many such lists are possible if repetition is not allowed and the list contains two consecutive vowels?

12. Consider the lists of length six made with the symbols P, R, O, F, S, where repetition is allowed. (For example, the following is such a list: (P, R, O, O, F, S).) How many such lists can be made if the list must end in an S and the symbol O is used more than once?

3.2 Factorials

In working the examples from Section 3.1, you may have noticed that often we need to count the number of non-repetitive lists of length n that are made from n symbols. In fact, this particular problem occurs with such frequency that a special idea, called a factorial, is introduced to handle it.

n | Symbols | Non-repetitive lists of length n made from the symbols | n!
0 | {} | () | 1
1 | {A} | (A) | 1
2 | {A, B} | (A, B), (B, A) | 2
3 | {A, B, C} | (A, B, C), (A, C, B), (B, C, A), (B, A, C), (C, A, B), (C, B, A) | 6
4 | {A, B, C, D} | (A,B,C,D), (A,B,D,C), (A,C,B,D), (A,C,D,B), (A,D,B,C), (A,D,C,B), (B,A,C,D), (B,A,D,C), (B,C,A,D), (B,C,D,A), (B,D,A,C), (B,D,C,A), (C,A,B,D), (C,A,D,B), (C,B,A,D), (C,B,D,A), (C,D,A,B), (C,D,B,A), (D,A,B,C), (D,A,C,B), (D,B,A,C), (D,B,C,A), (D,C,A,B), (D,C,B,A) | 24

The above table motivates this idea. The first column contains successive integer values n (beginning with 0) and the second column contains a set {A, B, ...} of n symbols. The third column contains all the possible non-repetitive lists of length n which can be made from these symbols. Finally, the last column tallies up how many lists there are of that type. Notice that when n = 0 there is only one list of length 0 that can be made from 0 symbols, namely the empty list (). Thus the value 1 is entered in the last column of that row.

For n > 0, the number that appears in the last column can be computed using the multiplication principle. The number of non-repetitive lists of length n that can be made from n symbols is n(n − 1)(n − 2) ⋯ 3 · 2 · 1. Thus, for instance, the number in the last column of the row for n = 4 is 4 · 3 · 2 · 1 = 24.

The number that appears in the last column of Row n is called the factorial of n. It is denoted as n! (read "n factorial"). Here is the definition:

Definition 3.1 If n is a non-negative integer, then the factorial of n, denoted n!, is the number of non-repetitive lists of length n that can be made from n symbols. Thus 0! = 1 and 1! = 1. If n > 1, then n! = n(n − 1)(n − 2) ⋯ 3 · 2 · 1.

It follows that

0! = 1
1! = 1
2! = 2 · 1 = 2
3! = 3 · 2 · 1 = 6
4! = 4 · 3 · 2 · 1 = 24
5! = 5 · 4 · 3 · 2 · 1 = 120
6! = 6 · 5 · 4 · 3 · 2 · 1 = 720,

and so on.
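As a quick sanity check on Definition 3.1 (this snippet is mine, not the book's), we can generate the non-repetitive lists directly and compare the tally against Python's built-in factorial:

```python
from itertools import permutations
from math import factorial

# The number of non-repetitive length-n lists made from n symbols is n!.
for n in range(7):
    symbols = "ABCDEFG"[:n]
    count = sum(1 for p in permutations(symbols, n))
    print(n, count, factorial(n))   # the two counts agree, including 0! = 1
```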
Students are often tempted to say 0! = 0, but this is wrong. The correct value is 0! = 1, as the above definition and table tell us. Here is another way to see that 0! must equal 1: Notice that 5! = 5 · 4 · 3 · 2 · 1 = 5 · (4 · 3 · 2 · 1) = 5 · 4!. Also 4! = 4 · 3 · 2 · 1 = 4 · (3 · 2 · 1) = 4 · 3!. Generalizing this reasoning, we have the following formula:

n! = n · (n − 1)!    (3.1)

Plugging in n = 1 gives 1! = 1 · (1 − 1)! = 1 · 0!. If we mistakenly thought 0! were 0, this would give the incorrect result 1! = 0.

We round out our discussion of factorials with an example.

Example 3.3 This problem involves making lists of length seven from the symbols 0, 1, 2, 3, 4, 5 and 6.
(a) How many such lists are there if repetition is not allowed?
(b) How many such lists are there if repetition is not allowed and the first three entries must be odd?
(c) How many such lists are there in which repetition is allowed, and the list must contain at least one repeated number?

To answer the first question, note that there are seven symbols, so the number of lists is 7! = 5040. To answer the second question, notice that the set {0, 1, 2, 3, 4, 5, 6} contains three odd numbers and four even numbers. Thus in making the list the first three entries must be filled by odd numbers and the final four must be filled with even numbers. By the multiplication principle, the number of such lists is 3 · 2 · 1 · 4 · 3 · 2 · 1 = 3!·4! = 144. To answer the third question, notice that there are 7⁷ = 823,543 lists in which repetition is allowed. The set of all such lists includes lists that are non-repetitive (e.g., (0, 6, 1, 2, 4, 3, 5)) as well as lists that have some repetition (e.g., (6, 3, 6, 2, 0, 0, 0)). We want to compute the number of lists that have at least one repeated number. To find the answer we can subtract the number of non-repetitive lists of length seven from the total number of possible lists of length seven. Therefore the answer is 7⁷ − 7! = 823,543 − 5040 = 818,503.

We close this section with a formula that combines the ideas of the first and second sections of the present chapter. One of the main problems of Section 3.1 was as follows: Given n symbols, how many non-repetitive lists of length k can be made from the n symbols? We learned how to apply the multiplication principle to obtain the answer n(n − 1)(n − 2) ⋯ (n − k + 1). Notice that by cancellation this value can also be written as

$$ \frac{n(n-1)(n-2)\cdots(n-k+1)(n-k)(n-k-1)\cdots 3\cdot 2\cdot 1}{(n-k)(n-k-1)\cdots 3\cdot 2\cdot 1} = \frac{n!}{(n-k)!}. $$

We summarize this as follows:

Fact 3.2 The number of non-repetitive lists of length k whose entries are chosen from a set of n possible entries is $\frac{n!}{(n-k)!}$.

For example, consider finding the number of non-repetitive lists of length five that can be made from the symbols 1, 2, 3, 4, 5, 6, 7, 8. We will do this two ways. By the multiplication principle, the answer is 8 · 7 · 6 · 5 · 4 = 6720. Using the formula from Fact 3.2, the answer is $\frac{8!}{(8-5)!} = \frac{8!}{3!} = \frac{40{,}320}{6} = 6720$. The new formula isn't really necessary, but it is a nice repackaging of an old idea and will prove convenient in the next section.

Exercises for Section 3.2

1. What is the smallest n for which n! has more than 10 digits?
2. For which values of n does n! have n or fewer digits?
3. How many 5-digit positive integers are there in which there are no repeated digits and all digits are odd?
4. Using only pencil and paper, find the value of 100!/95!.
5. Using only pencil and paper, find the value of 120!/118!.
6. There are two 0's at the end of 10! = 3,628,800.
Using only pencil and paper, determine how many 0's are at the end of the number 100!.

7. Compute how many 9-digit numbers can be made from the digits 1, 2, 3, 4, 5, 6, 7, 8, 9 if repetition is not allowed and all the odd digits occur first (on the left) followed by all the even digits (i.e. as in 137598264, but not 123456789).

8. Compute how many 7-digit numbers can be made from the digits 1, 2, 3, 4, 5, 6, 7 if there is no repetition and the odd digits must appear in an unbroken sequence. (Examples: 3571264 or 2413576 or 2467531, etc., but not 7234615.)

9. There is a very interesting function Γ : [0, ∞) → ℝ called the gamma function. It is defined as $\Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\,dt$. It has the remarkable property that if x ∈ ℕ, then Γ(x) = (x − 1)!. Check that this is true for x = 1, 2, 3, 4. Notice that this function provides a way of extending factorials to numbers other than integers. Since Γ(n) = (n − 1)! for all n ∈ ℕ, we have the formula n! = Γ(n + 1). But Γ can be evaluated at any number in [0, ∞), not just at integers, so we have a formula for n! for any n ∈ [0, ∞). Extra credit: Compute π!.

10. There is another significant function called Stirling's formula that provides an approximation to factorials. It states that $n! \approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$. It is an approximation to n! in the sense that $\frac{n!}{\sqrt{2\pi n}\,(n/e)^n}$ approaches 1 as n approaches ∞. Use Stirling's formula to find approximations to 5!, 10!, 20! and 50!.

3.3 Counting Subsets

The previous two sections were concerned with counting the number of lists that can be made by selecting k entries from a set of n possible entries. We turn now to a related question: How many subsets can be made by selecting k elements from a set with n elements? To highlight the differences between these two problems, look at the set A = {a, b, c, d, e}. First, think of the non-repetitive lists that can be made from selecting two entries from A. By Fact 3.2 (on the previous page), there are $\frac{5!}{(5-2)!} = \frac{5!}{3!} = \frac{120}{6} = 20$ such lists. They are as follows.

(a, b), (a, c), (a, d), (a, e), (b, c), (b, d), (b, e), (c, d), (c, e), (d, e),
(b, a), (c, a), (d, a), (e, a), (c, b), (d, b), (e, b), (d, c), (e, c), (e, d)

Next consider the subsets of A that can be made from selecting two elements from A. There are only ten such subsets, as follows.

{a, b}, {a, c}, {a, d}, {a, e}, {b, c}, {b, d}, {b, e}, {c, d}, {c, e}, {d, e}.

The reason that there are more lists than subsets is that changing the order of the entries of a list produces a different list, but changing the order of the elements of a set does not change the set. Using elements a, b ∈ A, we can make two lists (a, b) and (b, a), but only one subset {a, b}.

In this section we are concerned not with counting lists, but with counting subsets. As was noted above, the basic question is this: How many subsets can be made by choosing k elements from an n-element set? We begin with some notation that gives a name to the answer to this question.

Definition 3.2 If n and k are integers, then $\binom{n}{k}$ denotes the number of subsets that can be made by choosing k elements from a set with n elements. The symbol $\binom{n}{k}$ is read "n choose k." (Some textbooks write C(n, k) instead of $\binom{n}{k}$.)
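Definition 3.2 can be checked directly by enumeration for small n. A minimal sketch (mine, not the book's), using only Python's standard library:

```python
from itertools import combinations
from math import comb

# "n choose k" counts the k-element subsets of an n-element set.
elements = {"a", "b", "c", "d", "e"}
subsets = list(combinations(elements, 2))
print(len(subsets))   # 10 two-element subsets of a 5-element set
print(comb(5, 2))     # 10, the same value from the formula derived below
```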
To illustrate this definition, the following table computes the values of $\binom{4}{k}$ for various values of k by actually listing all the subsets of the 4-element set A = {a, b, c, d} that have cardinality k. The values of k appear in the far-left column. To the right of each k are all of the subsets (if any) of A of size k. For example, when k = 1, set A has four subsets of size k, namely {a}, {b}, {c} and {d}. Therefore $\binom{4}{1} = 4$. Similarly, when k = 2 there are six subsets of size k, so $\binom{4}{2} = 6$.

k | k-element subsets of {a, b, c, d} | $\binom{4}{k}$
−1 | (none) | $\binom{4}{-1} = 0$
0 | ∅ | $\binom{4}{0} = 1$
1 | {a}, {b}, {c}, {d} | $\binom{4}{1} = 4$
2 | {a, b}, {a, c}, {a, d}, {b, c}, {b, d}, {c, d} | $\binom{4}{2} = 6$
3 | {a, b, c}, {a, b, d}, {a, c, d}, {b, c, d} | $\binom{4}{3} = 4$
4 | {a, b, c, d} | $\binom{4}{4} = 1$
5 | (none) | $\binom{4}{5} = 0$
6 | (none) | $\binom{4}{6} = 0$

When k = 0, there is only one subset of A that has cardinality k, namely the empty set, ∅. Therefore $\binom{4}{0} = 1$. Notice that if k is negative or greater than |A|, then A has no subsets of cardinality k, so $\binom{4}{k} = 0$ in these cases. In general $\binom{n}{k} = 0$ whenever k < 0 or k > n. In particular this means $\binom{n}{k} = 0$ if n is negative.

Although it was not hard to work out the values of $\binom{4}{k}$ by writing out subsets in the above table, this method of actually listing sets would not be practical for computing $\binom{n}{k}$ when n and k are large. We need a formula. To find one, we will now carefully work out the value of $\binom{5}{3}$ in such a way that a pattern will emerge that points the way to a formula for any $\binom{n}{k}$.

To begin, note that $\binom{5}{3}$ is the number of 3-element subsets of {a, b, c, d, e}. These are listed in the following table. We see that in fact $\binom{5}{3} = 10$.

{a,b,c}, {a,b,d}, {a,b,e}, {a,c,d}, {a,c,e}, {a,d,e}, {b,c,d}, {b,c,e}, {b,d,e}, {c,d,e}

The formula will emerge when we expand this table as follows. Taking any one of the ten 3-element sets above, we can make 3! different non-repetitive lists from its elements. For example, consider the first set {a, b, c}. The first column of the following table tallies the 3! = 6 different lists that can be made from the letters a, b, c. The second column tallies the lists that can be made from {a, b, d}, and so on.

abc abd abe acd ace ade bcd bce bde cde
acb adb aeb adc aec aed bdc bec bed ced
bac bad bae cad cae dae cbd cbe dbe dce
bca bda bea cda cea dea cdb ceb deb dec
cba dba eba dca eca eda dcb ecb edb edc
cab dab eab dac eac ead dbc ebc ebd ecd

The final table has $\binom{5}{3}$ columns and 3! rows, so it has a total of $3!\binom{5}{3}$ lists. But notice also that the table consists of every non-repetitive length-3 list that can be made from the symbols a, b, c, d, e. We know from Fact 3.2 that there are $\frac{5!}{(5-3)!}$ such lists. Thus the total number of lists in the table is $3!\binom{5}{3} = \frac{5!}{(5-3)!}$. Dividing both sides of this equation by 3!, we get

$$ \binom{5}{3} = \frac{5!}{3!(5-3)!}. $$

Working this out, you will find that it does give the correct value of 10. But there was nothing special about the values 5 and 3. We could do the above analysis for any $\binom{n}{k}$ instead of $\binom{5}{3}$. The table would have $\binom{n}{k}$ columns and k! rows. We would get

$$ \binom{n}{k} = \frac{n!}{k!(n-k)!}. $$

We summarize this as follows:

Fact 3.3 If n, k ∈ ℤ and 0 ≤ k ≤ n, then $\binom{n}{k} = \frac{n!}{k!(n-k)!}$. Otherwise $\binom{n}{k} = 0$.

Let's now use our new knowledge to work some exercises.

Example 3.4 How many 4-element subsets does {1, 2, 3, 4, 5, 6, 7, 8, 9} have? The answer is $\binom{9}{4} = \frac{9!}{4!(9-4)!} = \frac{9\cdot 8\cdot 7\cdot 6\cdot 5!}{4!\,5!} = \frac{9\cdot 8\cdot 7\cdot 6}{24} = 126$.

Example 3.5 A single 5-card hand is dealt off of a standard 52-card deck. How many different 5-card hands are possible? To answer this, think of the deck as being a set D of 52 cards. Then a 5-card hand is just a 5-element subset of D.
For example, here is one of many different 5-card hands that might be dealt from the deck:

{7♣, 2♣, 3♥, A♠, 5♦}

The total number of possible hands equals the number of 5-element subsets of D, that is

$$ \binom{52}{5} = \frac{52!}{5!\,47!} = \frac{52\cdot 51\cdot 50\cdot 49\cdot 48\cdot 47!}{5!\,47!} = \frac{52\cdot 51\cdot 50\cdot 49\cdot 48}{5!} = 2{,}598{,}960. $$

Thus the answer to our question is that there are 2,598,960 different five-card hands that can be dealt from a deck of 52 cards.

Example 3.6 This problem concerns 5-card hands that can be dealt off of a 52-card deck. How many such hands are there in which two of the cards are clubs and three are hearts?

Solution: Think of such a hand as being described by a list of length two of the form ({∗♣, ∗♣}, {∗♥, ∗♥, ∗♥}), where the first entry is a 2-element subset of the set of 13 club cards, and the second entry is a 3-element subset of the set of 13 heart cards. There are $\binom{13}{2}$ choices for the first entry and $\binom{13}{3}$ choices for the second entry, so by the multiplication principle there are $\binom{13}{2}\binom{13}{3} = \frac{13!}{2!\,11!}\cdot\frac{13!}{3!\,10!} = 22{,}308$ such lists. Answer: There are 22,308 possible 5-card hands with two clubs and three hearts.

Example 3.7 Imagine a lottery that works as follows. A bucket contains 36 balls numbered 1, 2, 3, 4, ..., 36. Six of these balls will be drawn randomly. For $1 you buy a ticket that has six blanks: □ □ □ □ □ □. You fill in the blanks with six different numbers between 1 and 36. You win $1,000,000 if you chose the same numbers that are drawn, regardless of order. What are your chances of winning?

Solution: In filling out the ticket you are choosing six numbers from a set of 36 numbers. Thus there are $\binom{36}{6} = \frac{36!}{6!(36-6)!} = 1{,}947{,}792$ different combinations of numbers you might write. Only one of these will be a winner. Your chances of winning are one in 1,947,792.

Exercises for Section 3.3

1. Suppose a set A has 37 elements. How many subsets of A have 10 elements? How many subsets have 30 elements? How many have 0 elements?
2. Suppose A is a set for which |A| = 100. How many subsets of A have 5 elements? How many subsets have 10 elements? How many have 99 elements?
3. A set X has exactly 56 subsets with 3 elements. What is the cardinality of X?
4. Suppose a set B has the property that |{X : X ∈ P(B), |X| = 6}| = 28. Find |B|.
5. How many 16-digit binary strings contain exactly seven 1's? (Examples of such strings include 0111000011110000 and 0011001100110010, etc.)
6. |{X ∈ P({0, 1, 2, 3, 4, 5, 6, 7, 8, 9}) : |X| = 4}| =
7. |{X ∈ P({0, 1, 2, 3, 4, 5, 6, 7, 8, 9}) : |X| < 4}| =
8. This problem concerns lists made from the symbols A, B, C, D, E, F, G, H, I.
(a) How many length-5 lists can be made if repetition is not allowed and the list is in alphabetical order? (Example: BDEFI or ABCGH, but not BACGH.)
(b) How many length-5 lists can be made if repetition is not allowed and the list is not in alphabetical order?
9. This problem concerns lists of length 6 made from the letters A, B, C, D, E, F, without repetition. How many such lists have the property that the D occurs before the A?
10. A department consists of 5 men and 7 women. From this department you select a committee with 3 men and 2 women. In how many ways can you do this?
11. How many positive 10-digit integers contain no 0's and exactly three 6's?
12. Twenty-one people are to be divided into two teams, the Red Team and the Blue Team. There will be 10 people on Red Team and 11 people on Blue Team. In how many ways can this be done?
Exercises for Section 3.3
1. Suppose a set A has 37 elements. How many subsets of A have 10 elements? How many subsets have 30 elements? How many have 0 elements?
2. Suppose A is a set for which |A| = 100. How many subsets of A have 5 elements? How many subsets have 10 elements? How many have 99 elements?
3. A set X has exactly 56 subsets with 3 elements. What is the cardinality of X?
4. Suppose a set B has the property that |{X : X ∈ P(B), |X| = 6}| = 28. Find |B|.
5. How many 16-digit binary strings contain exactly seven 1's? (Examples of such strings include 0111000011110000 and 0011001100110010, etc.)
6. |{X ∈ P({0,1,2,3,4,5,6,7,8,9}) : |X| = 4}| =
7. |{X ∈ P({0,1,2,3,4,5,6,7,8,9}) : |X| < 4}| =
8. This problem concerns lists made from the symbols A, B, C, D, E, F, G, H, I.
(a) How many length-5 lists can be made if repetition is not allowed and the list is in alphabetical order? (Example: BDEFI or ABCGH, but not BACGH.)
(b) How many length-5 lists can be made if repetition is not allowed and the list is not in alphabetical order?
9. This problem concerns lists of length 6 made from the letters A, B, C, D, E, F, without repetition. How many such lists have the property that the D occurs before the A?
10. A department consists of 5 men and 7 women. From this department you select a committee with 3 men and 2 women. In how many ways can you do this?
11. How many positive 10-digit integers contain no 0's and exactly three 6's?
12. Twenty-one people are to be divided into two teams, the Red Team and the Blue Team. There will be 10 people on Red Team and 11 people on Blue Team. In how many ways can this be done?
13. Suppose n and k are integers for which 0 ≤ k ≤ n. Use the formula $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ to show that $\binom{n}{k} = \binom{n}{n-k}$.
14. Suppose n, k ∈ Z, and 0 ≤ k ≤ n. Use Definition 3.2 alone (without using Fact 3.3) to show that $\binom{n}{k} = \binom{n}{n-k}$.

3.4 Pascal's Triangle and the Binomial Theorem

There are some beautiful and significant patterns among the numbers $\binom{n}{k}$. This section investigates a pattern based on one equation in particular. It happens that

$$\binom{n+1}{k} = \binom{n}{k-1} + \binom{n}{k} \qquad (3.2)$$

for any integers n and k with 1 ≤ k ≤ n.

To see why this is true, recall that $\binom{n+1}{k}$ equals the number of k-element subsets of a set with n + 1 elements. Now, the set A = {0, 1, 2, 3, ..., n} has n + 1 elements, so $\binom{n+1}{k}$ equals the number of k-element subsets of A. Such subsets can be divided into two types: those that contain 0 and those that do not contain 0. To make a k-element subset that contains 0 we can start with {0} and then append to this set an additional k − 1 numbers selected from {1, 2, 3, ..., n}. There are $\binom{n}{k-1}$ ways to make this selection, so there are $\binom{n}{k-1}$ k-element subsets of A that contain 0. Concerning the k-element subsets of A that do not contain 0, there are $\binom{n}{k}$ of these sets, for we can form them by selecting k elements from the n-element set {1, 2, 3, ..., n}. In light of all this, Equation (3.2) just expresses the obvious fact that the number of k-element subsets of A equals the number of k-element subsets that contain 0 plus the number of k-element subsets that do not contain 0.

Now that we have seen why Equation (3.2) is true, we are going to arrange the numbers $\binom{n}{k}$ in a triangular pattern that highlights various relationships among them. The left-hand side of Figure 3.2 shows the numbers $\binom{n}{k}$ arranged in a pyramid with $\binom{0}{0}$ at the apex, just above a row containing $\binom{1}{k}$ for k = 0 and k = 1. Below this is a row listing the values of $\binom{2}{k}$ for k = 0, 1, 2. In general, each row listing the numbers $\binom{n}{k}$ is just above a row listing the numbers $\binom{n+1}{k}$. On the right of Figure 3.2 each $\binom{n}{k}$ is worked out:

1
1  1
1  2  1
1  3  3  1
1  4  6  4  1
1  5  10  10  5  1
1  6  15  20  15  6  1
1  7  21  35  35  21  7  1

Figure 3.2. Pascal's triangle

Any number $\binom{n+1}{k}$ for 0 < k < n in this pyramid is immediately below and between the two numbers $\binom{n}{k-1}$ and $\binom{n}{k}$ in the previous row. But Equation (3.2) says $\binom{n+1}{k} = \binom{n}{k-1} + \binom{n}{k}$, and therefore any number (other than 1) in the pyramid is the sum of the two numbers immediately above it. This pattern is especially evident on the right of Figure 3.2. Notice how 21 is the sum of the numbers 6 and 15 above it. Similarly, 5 is the sum of the 1 and 4 above it, and so on.

The arrangement on the right of Figure 3.2 is called Pascal's triangle. (It is named after Blaise Pascal, 1623–1662, a French mathematician and philosopher who discovered many of its properties.) Although we have written only the first eight rows of Pascal's triangle (beginning with Row 0 at the apex), it obviously could be extended downward indefinitely. We could add an additional row at the bottom by placing a 1 at each end and obtaining each remaining number by adding the two numbers above its position. Doing this would give the following row:

1  8  28  56  70  56  28  8  1

This row consists of the numbers $\binom{8}{k}$ for 0 ≤ k ≤ 8, and we have computed them without the formula $\binom{8}{k} = \frac{8!}{k!(8-k)!}$. Any $\binom{n}{k}$ can be computed this way.
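The add-the-two-above rule is all a program needs. Here is a minimal Python sketch (my own illustration, under the convention that Row 0 is [1]) that grows the triangle from Equation (3.2) alone, with no factorials anywhere.

def pascal_rows(last_row):
    # Build rows of Pascal's triangle from Equation (3.2): put a 1 at
    # each end, and make every interior entry the sum of the two
    # entries above it.
    row = [1]                       # Row 0
    for n in range(last_row + 1):
        yield row
        row = [1] + [row[k - 1] + row[k] for k in range(1, n + 1)] + [1]

for r in pascal_rows(8):
    print(r)
# ...ends with Row 8: [1, 8, 28, 56, 70, 56, 28, 8, 1]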
The very top row (containing only 1) is called Row 0. Row 1 is the next down, followed by Row 2, then Row 3, etc. With this labeling, Row n consists of the numbers $\binom{n}{k}$ for 0 ≤ k ≤ n.

Notice that Row n appears to be a list of the coefficients of (x + y)^n. For example (x + y)^2 = 1x^2 + 2xy + 1y^2, and Row 2 lists the coefficients 1 2 1. Similarly (x + y)^3 = 1x^3 + 3x^2y + 3xy^2 + 1y^3, and Row 3 is 1 3 3 1. Pascal's triangle is shown on the left of Figure 3.3, and on the right are the expansions of (x + y)^n for 0 ≤ n ≤ 5:

(x + y)^0 = 1
(x + y)^1 = 1x + 1y
(x + y)^2 = 1x^2 + 2xy + 1y^2
(x + y)^3 = 1x^3 + 3x^2y + 3xy^2 + 1y^3
(x + y)^4 = 1x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + 1y^4
(x + y)^5 = 1x^5 + 5x^4y + 10x^3y^2 + 10x^2y^3 + 5xy^4 + 1y^5

Figure 3.3. The nth row of Pascal's triangle lists the coefficients of (x + y)^n

In every case (at least as far as you care to check) the numbers in Row n match up with the coefficients of (x + y)^n. In fact this turns out to be true for every n. This result is known as the binomial theorem, and it is worth mentioning here. It tells how to raise a binomial x + y to a non-negative integer power n.

Theorem 3.1 (Binomial Theorem)  If n is a non-negative integer, then

$$(x + y)^n = \binom{n}{0}x^n + \binom{n}{1}x^{n-1}y + \binom{n}{2}x^{n-2}y^2 + \binom{n}{3}x^{n-3}y^3 + \cdots + \binom{n}{n-1}xy^{n-1} + \binom{n}{n}y^n.$$

For now we will be content to accept the binomial theorem without proof. (You will be asked to prove it in an exercise in Chapter 10.) You may find it useful from time to time. For instance, you can apply it if you ever need to expand an expression such as (x + y)^7. To do this, look at Row 7 of Pascal's triangle in Figure 3.2 and apply the binomial theorem to get

(x + y)^7 = x^7 + 7x^6y + 21x^5y^2 + 35x^4y^3 + 35x^3y^4 + 21x^2y^5 + 7xy^6 + y^7.

For another example,

(2a − b)^4 = ((2a) + (−b))^4
= (2a)^4 + 4(2a)^3(−b) + 6(2a)^2(−b)^2 + 4(2a)(−b)^3 + (−b)^4
= 16a^4 − 32a^3b + 24a^2b^2 − 8ab^3 + b^4.
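The theorem translates directly into code. The sketch below (my own, standard library only, assuming Python 3.8+ for math.comb) lists the Row n coefficients and then specializes Theorem 3.1 to (2a − b)^4 by substituting x = 2a and y = −b into each term's coefficient.

from math import comb

def row_coefficients(n):
    # Row n of Pascal's triangle = coefficients of (x + y)^n (Theorem 3.1).
    return [comb(n, k) for k in range(n + 1)]

print(row_coefficients(7))
# [1, 7, 21, 35, 35, 21, 7, 1], matching the (x + y)^7 expansion above

# (2a - b)^4: substituting x = 2a and y = -b makes the coefficient of
# a^(4-k) b^k equal to comb(4, k) * 2^(4-k) * (-1)^k.
print([comb(4, k) * 2**(4 - k) * (-1)**k for k in range(5)])
# [16, -32, 24, -8, 1], matching 16a^4 - 32a^3b + 24a^2b^2 - 8ab^3 + b^4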
Exercises for Section 3.4
1. Write out Row 11 of Pascal's triangle.
2. Use the binomial theorem to find the coefficient of x^8y^5 in (x + y)^13.
3. Use the binomial theorem to find the coefficient of x^8 in (x + 2)^13.
4. Use the binomial theorem to find the coefficient of x^6y^3 in (3x − 2y)^9.
5. Use the binomial theorem to show $\sum_{k=0}^{n} \binom{n}{k} = 2^n$.
6. Use Definition 3.2 (page 74) and Fact 1.3 (page 12) to show $\sum_{k=0}^{n} \binom{n}{k} = 2^n$.
7. Use the binomial theorem to show $\sum_{k=0}^{n} 3^k \binom{n}{k} = 4^n$.
8. Use Fact 3.3 (page 76) to derive Equation 3.2 (page 78).
9. Use the binomial theorem to show $\binom{n}{0} - \binom{n}{1} + \binom{n}{2} - \binom{n}{3} + \binom{n}{4} - \cdots + (-1)^n\binom{n}{n} = 0$.
10. Show that the formula $k\binom{n}{k} = n\binom{n-1}{k-1}$ is true for all integers n, k with 0 ≤ k ≤ n.
11. Use the binomial theorem to show $9^n = \sum_{k=0}^{n} (-1)^k \binom{n}{k} 10^{n-k}$.
12. Show that $\binom{n}{k}\binom{k}{m} = \binom{n}{m}\binom{n-m}{k-m}$.
13. Show that $\binom{n}{3} = \binom{2}{2} + \binom{3}{2} + \binom{4}{2} + \binom{5}{2} + \cdots + \binom{n-1}{2}$.
14. The first five rows of Pascal's triangle appear in the digits of powers of 11: 11^0 = 1, 11^1 = 11, 11^2 = 121, 11^3 = 1331 and 11^4 = 14641. Why is this so? Why does the pattern not continue with 11^5?

3.5 Inclusion-Exclusion

Many counting problems involve computing the cardinality of a union A ∪ B of two finite sets. We examine this kind of problem now. First we develop a formula for |A ∪ B|. It is tempting to say that |A ∪ B| must equal |A| + |B|, but that is not quite right. If we count the elements of A and then count the elements of B and add the two figures together, we get |A| + |B|. But if A and B have some elements in common, then we have counted each element in A ∩ B twice. (Picture A and B as two overlapping regions in a Venn diagram.) Therefore |A| + |B| exceeds |A ∪ B| by |A ∩ B|, and consequently |A ∪ B| = |A| + |B| − |A ∩ B|. This can be a useful equation.

|A ∪ B| = |A| + |B| − |A ∩ B|   (3.3)

Notice that the sets A, B and A ∩ B are all generally smaller than A ∪ B, so Equation (3.3) has the potential of reducing the problem of determining |A ∪ B| to three simpler counting problems. It is sometimes called an inclusion-exclusion formula because elements in A ∩ B are included (twice) in |A| + |B|, then excluded when |A ∩ B| is subtracted. Notice that if A ∩ B = ∅, then we do in fact get |A ∪ B| = |A| + |B|; conversely if |A ∪ B| = |A| + |B|, then it must be that A ∩ B = ∅.

Example 3.8  A 3-card hand is dealt off of a standard 52-card deck. How many different such hands are there for which all 3 cards are red or all three cards are face cards?

Solution: Let A be the set of 3-card hands where all three cards are red (i.e., either ♥ or ♦). Let B be the set of 3-card hands in which all three cards are face cards (i.e., J, K or Q of any suit). These sets are illustrated below.

A = { {5♥, K♦, 2♥}, {K♥, J♥, Q♥}, {A♦, 6♦, 6♥}, ... }   (red cards)
B = { {K♠, K♦, J♣}, {K♥, J♥, Q♥}, {Q♦, Q♣, Q♥}, ... }   (face cards)

We seek the number of 3-card hands that are all red or all face cards, and this number is |A ∪ B|. By Formula (3.3), |A ∪ B| = |A| + |B| − |A ∩ B|. Let's examine |A|, |B| and |A ∩ B| separately. Any hand in A is formed by selecting three cards from the 26 red cards in the deck, so $|A| = \binom{26}{3}$. Similarly, any hand in B is formed by selecting three cards from the 12 face cards in the deck, so $|B| = \binom{12}{3}$. Now think about A ∩ B. It contains all the 3-card hands made up of cards that are red face cards.

A ∩ B = { {K♥, K♦, J♥}, {K♥, J♥, Q♥}, {Q♦, J♦, Q♥}, ... }   (red face cards)

The deck has only 6 red face cards, so $|A ∩ B| = \binom{6}{3}$. Now we can answer our question. The number of 3-card hands that are all red or all face cards is

$$|A ∪ B| = |A| + |B| − |A ∩ B| = \binom{26}{3} + \binom{12}{3} − \binom{6}{3} = 2600 + 220 − 20 = 2800.$$
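Example 3.8 is small enough to verify exhaustively. The sketch below (my own check, not the book's) models the deck as rank/suit pairs and counts the qualifying hands directly, agreeing with the inclusion-exclusion answer.

from itertools import combinations

# Brute-force check of Example 3.8 over all C(52, 3) = 22100 hands.
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = [(r, s) for r in ranks for s in suits]

def all_red(hand):
    return all(s in ('hearts', 'diamonds') for _, s in hand)

def all_face(hand):
    return all(r in ('J', 'Q', 'K') for r, _ in hand)

print(sum(1 for hand in combinations(deck, 3) if all_red(hand) or all_face(hand)))
# 2800, matching |A| + |B| - |A ∩ B| = 2600 + 220 - 20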
There is an analogue to Equation (3.3) that involves three sets. Consider three sets A, B and C, represented as three mutually overlapping regions in a Venn diagram. Using the same kind of reasoning that resulted in Equation (3.3), you can convince yourself that

|A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |A ∩ C| − |B ∩ C| + |A ∩ B ∩ C|.   (3.4)

There's probably not much harm in ignoring this one for now, but if you find this kind of thing intriguing you should definitely take a course in combinatorics. (Ask your instructor!)

As we've noted, Equation (3.3) becomes |A ∪ B| = |A| + |B| if it happens that A ∩ B = ∅. Also, in Equation (3.4), note that if A ∩ B = ∅, A ∩ C = ∅ and B ∩ C = ∅, we get the simple formula |A ∪ B ∪ C| = |A| + |B| + |C|. In general, we have the following formula for n sets, none of which overlap. It is sometimes called the addition principle.

Fact 3.4 (Addition Principle)  If $A_1, A_2, \ldots, A_n$ are sets with $A_i \cap A_j = \emptyset$ whenever $i \neq j$, then $|A_1 \cup A_2 \cup \cdots \cup A_n| = |A_1| + |A_2| + \cdots + |A_n|$.

Example 3.9  How many 7-digit binary strings (0010100, 1101011, etc.) have an odd number of 1's?

Solution: Let A be the set of all 7-digit binary strings with an odd number of 1's, so the answer to the question will be |A|. To compute |A|, we break A up into smaller parts. Notice any string in A will have either one, three, five or seven 1's. Let $A_1$ be the set of 7-digit binary strings with only one 1. Let $A_3$ be the set of 7-digit binary strings with three 1's. Let $A_5$ be the set of 7-digit binary strings with five 1's, and let $A_7$ be the set of 7-digit binary strings with seven 1's. Therefore $A = A_1 \cup A_3 \cup A_5 \cup A_7$. Notice that any two of the sets $A_i$ have empty intersection, so Fact 3.4 gives $|A| = |A_1| + |A_3| + |A_5| + |A_7|$.

Now the problem is to find the values of the individual terms of this sum. For instance take $A_3$, the set of 7-digit binary strings with three 1's. Such a string can be formed by selecting three out of seven positions for the 1's and putting 0's in the other spaces. Therefore $|A_3| = \binom{7}{3}$. Similarly $|A_1| = \binom{7}{1}$, $|A_5| = \binom{7}{5}$, and $|A_7| = \binom{7}{7}$. Finally the answer to our question is

$$|A| = |A_1| + |A_3| + |A_5| + |A_7| = \binom{7}{1} + \binom{7}{3} + \binom{7}{5} + \binom{7}{7} = 7 + 35 + 21 + 1 = 64.$$

There are 64 seven-digit binary strings with an odd number of 1's.

You may already have been using the addition principle intuitively, without thinking of it as a free-standing result. For instance, we used it in Example 3.2(c) when we divided lists into four types and computed the number of lists of each type.

Exercises for Section 3.5
1. At a certain university 523 of the seniors are history majors or math majors (or both). There are 100 senior math majors, and 33 seniors are majoring in both history and math. How many seniors are majoring in history?
2. How many 4-digit positive integers are there for which there are no repeated digits, or for which there may be repeated digits, but all are odd?
3. How many 4-digit positive integers are there that are even or contain no 0's?
4. This problem involves lists made from the letters T, H, E, O, R, Y, with repetition allowed.
(a) How many 4-letter lists are there that don't begin with T, or don't end in Y?
(b) How many 4-letter lists are there in which the sequence of letters T, H, E appears consecutively?
(c) How many 5-letter lists are there in which the sequence of letters T, H, E appears consecutively?
5. How many 7-digit binary strings begin in 1 or end in 1 or have exactly four 1's?
6. Is the following statement true or false? Explain. If $A_1 \cap A_2 \cap A_3 = \emptyset$, then $|A_1 \cup A_2 \cup A_3| = |A_1| + |A_2| + |A_3|$.
7. This problem concerns 4-card hands dealt off of a standard 52-card deck. How many 4-card hands are there for which all 4 cards are of the same suit or all 4 cards are red?
8. This problem concerns 4-card hands dealt off of a standard 52-card deck. How many 4-card hands are there for which all 4 cards are of different suits or all 4 cards are red?
9. A 4-letter list is made from the letters L, I, S, T, E, D according to the following rule: Repetition is allowed, and the first two letters on the list are vowels or the list ends in D. How many such lists are possible?
10. A 5-card poker hand is called a flush if all cards are the same suit. How many different flushes are there?
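As a closing check on these counting sections, here is one more small sketch (mine, not the text's) that confirms Example 3.9 two ways: by the addition-principle sum of binomial coefficients, and by brute force over all 2^7 binary strings.

from math import comb

by_formula = sum(comb(7, k) for k in (1, 3, 5, 7))   # Fact 3.4 applied to A1, A3, A5, A7
by_force = sum(1 for n in range(2 ** 7) if bin(n).count('1') % 2 == 1)
print(by_formula, by_force)   # 64 64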
Section I Use of English

Directions: Read the following text. Choose the best word(s) for each numbered blank and mark A, B, C or D on ANSWER SHEET 1. (10 points)

By 1830 the former Spanish and Portuguese colonies had become independent nations. The roughly 20 million __1__ of these nations looked __2__ to the future. Born in the crisis of the old regime and Iberian colonialism, many of the leaders of independence __3__ the ideals of representative government, careers __4__ to talent, freedom of commerce and trade, the __5__ to private property, and a belief in the individual as the basis of society. __6__ there was a belief that the new nations should be sovereign and independent states, large enough to be economically viable and integrated by a __7__ set of laws.

On the issue of __8__ of religion and the position of the church, __9__, there was less agreement __10__ the leadership. Roman Catholicism had been the state religion and the only one __11__ by the Spanish crown. __12__ most leaders sought to maintain Catholicism __13__ the official religion of the new states, some sought to end the __14__ of other faiths. The defense of the Church became a rallying __15__ for the conservative forces.

The ideals of the early leaders of independence were often egalitarian, valuing equality of everything. Bolivar had received aid from Haiti and had __16__ in return to abolish slavery in the areas he liberated. By 1854 slavery had been abolished everywhere except Spain's __17__ colonies. Early promises to end Indian tribute and taxes on people of mixed origin came much __18__ because the new nations still needed the revenue such policies __19__. Egalitarian sentiments were often tempered by fears that the mass of the population was __20__ self-rule and democracy.

1. [A] natives [B] inhabitants [C] peoples [D] individuals
2. [A] confusedly [B] cheerfully [C] worriedly [D] hopefully
3. [A] shared [B] forgot [C] attained [D] rejected
4. [A] related [B] close [C] open [D] devoted
5. [A] access [B] succession [C] right [D] return
6. [A] Presumably [B] Incidentally [C] Obviously [D] Generally
7. [A] unique [B] common [C] particular [D] typical
8. [A] freedom [B] origin [C] impact [D] reform
9. [A] therefore [B] however [C] indeed [D] moreover
10. [A] with [B] about [C] among [D] by
11. [A] allowed [B] preached [C] granted [D] funded
12. [A] Since [B] If [C] Unless [D] While
13. [A] as [B] for [C] under [D] against
14. [A] spread [B] interference [C] exclusion [D] influence
15. [A] support [B] cry [C] plea [D] wish
16. [A] urged [B] intended [C] expected [D] promised
17. [A] controlling [B] former [C] remaining [D] original
18. [A] slower [B] faster [C] easier [D] tougher
19. [A] created [B] produced [C] contributed [D] preferred
20. [A] puzzled by [B] hostile to [C] pessimistic about [D] unprepared for

Section II Reading Comprehension

Read the following four texts. Answer the questions below each text by choosing A, B, C or D. Mark your answers on ANSWER SHEET 1. (40 points)

Text 1 [410 words]

If you were to examine the birth certificates of every soccer player in 2006's World Cup tournament, you would most likely find a noteworthy quirk: elite soccer players are more likely to have been born in the earlier months of the year than in the later months. If you then examined the European national youth teams that feed the World Cup and professional ranks, you would find this strange phenomenon to be even more pronounced.

What might account for this strange phenomenon? Here are a few guesses:
a) certain astrological signs confer superior soccer skills.
b) winter-born babies tend to have higher oxygen capacity, which increases soccer stamina.
c) soccer-mad parents are more likely to conceive children in springtime, at the annual peak of soccer mania.
d) none of the above.

Anders Ericsson, a 58-year-old psychology professor at Florida State University, says he believes strongly in "none of the above." Ericsson grew up in Sweden, and studied nuclear engineering until he realized he would have more opportunity to conduct his own research if he switched to psychology. His first experiment, nearly 30 years ago, involved memory: training a person to hear and then repeat a random series of numbers. "With the first subject, after about 20 hours of training his digit span had risen from 7 to 20," Ericsson recalls. "He kept improving, and after about 200 hours of training he had risen to over 80 numbers."

This success, coupled with later research showing that memory itself is not genetically determined, led Ericsson to conclude that the act of memorizing is more of a cognitive exercise than an intuitive one. In other words, whatever inborn differences two people may exhibit in their abilities to memorize, those differences are swamped by how well each person "encodes" the information. And the best way to learn how to encode information meaningfully, Ericsson determined, was a process known as deliberate practice. Deliberate practice entails more than simply repeating a task. Rather, it involves setting specific goals, obtaining immediate feedback and concentrating as much on technique as on outcome.

Ericsson and his colleagues have thus taken to studying expert performers in a wide range of pursuits, including soccer. They gather all the data they can, not just performance statistics and biographical details but also the results of their own laboratory experiments with high achievers. Their work makes a rather startling assertion: the trait we commonly call talent is highly overrated. Or, put another way, expert performers, whether in memory or surgery, ballet or computer programming, are nearly always made, not born.

21. The birthday phenomenon found among soccer players is mentioned to
[A] stress the importance of professional training.
[B] spotlight the soccer superstars in the World Cup.
[C] introduce the topic of what makes expert performance.
[D] explain why some soccer teams play better than others.

22. The word "mania" (Line 4, Paragraph 2) most probably means

23. According to Ericsson, good memory
[A] depends on meaningful processing of information.
[B] results from intuitive rather than cognitive exercises.
[C] is determined by genetic rather than psychological factors.
[D] requires immediate feedback and a high degree of concentration.

24. Ericsson and his colleagues believe that
[A] talent is a dominating factor for professional success.
[B] biographical data provide the key to excellent performance.
[C] the role of talent tends to be overlooked.
[D] high achievers owe their success mostly to nurture.

25. Which of the following proverbs is closest to the message the text tries to convey?
[A] "Faith will move mountains."
[B] "One reaps what one sows."
[C] "Practice makes perfect."
[D] "Like father, like son."

Text 2 [451 words]

For the past several years, the Sunday newspaper supplement Parade has featured a column called "Ask Marilyn." People are invited to query Marilyn vos Savant, who at age 10 had tested at a mental level of someone about 23 years old; that gave her an IQ of 228, the highest score ever recorded.
IQ tests ask you to complete verbal and visual analogies, to envision paper after it has been folded and cut, and to deduce numerical sequences, among other similar tasks. So it is a bit confusing when vos Savant fields such queries from the average Joe (whose IQ is 100) as, What's the difference between love and fondness? Or what is the nature of luck and coincidence? It's not obvious how the capacity to visualize objects and to figure out numerical patterns suits one to answer questions that have eluded some of the best poets and philosophers.

Clearly, intelligence encompasses more than a score on a test. Just what does it mean to be smart? How much of intelligence can be specified, and how much can we learn about it from neurology, genetics, computer science and other fields?

The defining term of intelligence in humans still seems to be the IQ score, even though IQ tests are not given as often as they used to be. The test comes primarily in two forms: the Stanford-Binet Intelligence Scale and the Wechsler Intelligence Scales (both come in adult and children's versions). Generally costing several hundred dollars, they are usually given only by psychologists, although variations of them populate bookstores and the World Wide Web. Superhigh scores like vos Savant's are no longer possible, because scoring is now based on a statistical population distribution among age peers, rather than simply dividing the mental age by the chronological age and multiplying by 100. Other standardized tests, such as the Scholastic Assessment Test (SAT) and the Graduate Record Exam (GRE), capture the main aspects of IQ tests.

Such standardized tests may not assess all the important elements necessary to succeed in school and in life, argues Robert J. Sternberg. In his article "How Intelligent Is Intelligence Testing?", Sternberg notes that traditional tests best assess analytical and verbal skills but fail to measure creativity and practical knowledge, components also critical to problem solving and life success. Moreover, IQ tests do not necessarily predict so well once populations or situations change. Research has found that IQ predicted leadership skills when the tests were given under low-stress conditions, but under high-stress conditions, IQ was negatively correlated with leadership; that is, it predicted the opposite. Anyone who has toiled through the SAT will testify that test-taking skill also matters, whether it's knowing when to guess or what questions to skip.

26. Which of the following may be required in an intelligence test?
[A] Answering philosophical questions.
[B] Folding or cutting paper into different shapes.
[C] Telling the differences between certain concepts.
[D] Choosing words or graphs similar to the given ones.

27. What can be inferred about intelligence testing from Paragraph 3?
[A] People no longer use IQ scores as an indicator of intelligence.
[B] More versions of IQ tests are now available on the Internet.
[C] The test contents and formats for adults and children may be different.
[D] Scientists have defined the important elements of human intelligence.

28. People nowadays can no longer achieve IQ scores as high as vos Savant's because
[A] the scores are obtained through different computational procedures.
[B] creativity rather than analytical skills is emphasized now.
[C] vos Savant's case is an extreme one that will not repeat.
[D] the defining characteristic of IQ tests has changed.

29.
We can conclude from the last paragraph that
[A] test scores may not be reliable indicators of one's ability.
[B] IQ scores and SAT results are highly correlated.
[C] testing involves a lot of guesswork.
[D] traditional tests are out of date.

30. What is the author's attitude towards IQ tests?

Text 3 [421 words]

During the past generation, the American middle-class family that once could count on hard work and fair play to keep itself financially secure has been transformed by economic risk and new realities. Now a pink slip, a bad diagnosis, or a disappearing spouse can reduce a family from solidly middle class to newly poor in a few months.

In just one generation, millions of mothers have gone to work, transforming basic family economics. Scholars, policymakers, and critics of all stripes have debated the social implications of these changes, but few have looked at the side effect: family risk has risen as well. Today's families have budgeted to the limits of their new two-paycheck status. As a result, they have lost the parachute they once had in times of financial setback: a back-up earner (usually Mom) who could go into the workforce if the primary earner got laid off or fell sick. This "added-worker effect" could support the safety net offered by unemployment insurance or disability insurance to help families weather bad times. But today, a disruption to family fortunes can no longer be made up with extra income from an otherwise-stay-at-home partner.

During the same period, families have been asked to absorb much more risk in their retirement income. Steelworkers, airline employees, and now those in the auto industry are joining millions of families who must worry about interest rates, stock market fluctuation, and the harsh reality that they may outlive their retirement money. For much of the past year, President Bush campaigned to move Social Security to a savings-account model, with retirees trading much or all of their guaranteed payments for payments depending on investment returns. For younger families, the picture is not any better. Both the absolute cost of healthcare and the share of it borne by families have risen, and newly fashionable health-savings plans are spreading from legislative halls to Wal-Mart workers, with much higher deductibles and a large new dose of investment risk for families' future healthcare. Even demographics are working against the middle-class family, as the odds of having a weak elderly parent, and all the attendant need for physical and financial assistance, have jumped eightfold in just one generation.

From the middle-class family perspective, much of this, understandably, looks far less like an opportunity to exercise more financial responsibility, and a good deal more like a frightening acceleration of the wholesale shift of financial risk onto their already overburdened shoulders. The financial fallout has begun, and the political fallout may not be far behind.

31. Today's double-income families are at greater financial risk in that
[A] the safety net they used to enjoy has disappeared.
[B] their chances of being laid off have greatly increased.
[C] they are more vulnerable to changes in family economics.
[D] they are deprived of unemployment or disability insurance.

32. As a result of President Bush's reform, retired people may have
[A] a higher sense of security.
[B] less secured payments.
[C] less chance to invest.
[D] a guaranteed future.

33. According to the author, health-savings plans will
[A] help reduce the cost of healthcare.
[B] popularize among the middle class.
[C] compensate for the reduced pensions.
[D] increase the families' investment risk.

34. It can be inferred from the last paragraph that
[A] financial risks tend to outweigh political risks.
[B] the middle class may face greater political challenges.
[C] financial problems may bring about political problems.
[D] financial responsibility is an indicator of political status.

35. Which of the following is the best title for this text?
[A] The Middle Class on the Alert
[B] The Middle Class on the Cliff
[C] The Middle Class in Conflict
[D] The Middle Class in Ruins

Text 4 [416 words]

It never rains but it pours. Just as bosses and boards have finally sorted out their worst accounting and compliance troubles, and improved their feeble corporate governance, a new problem threatens to earn them, especially in America, the sort of nasty headlines that inevitably lead to heads rolling in the executive suite: data insecurity. Left, until now, to odd, low-level IT staff to put right, and seen as a concern only of data-rich industries such as banking, telecoms and air travel, information protection is now high on the boss's agenda in businesses of every variety.

Several massive leakages of customer and employee data this year, from organizations as diverse as Time Warner, the American defense contractor Science Applications International Corp and even the University of California, Berkeley, have left managers hurriedly peering into their intricate IT systems and business processes in search of potential vulnerabilities.

"Data is becoming an asset which needs to be guarded as much as any other asset," says Haim Mendelson of Stanford University's business school. "The ability to guard customer data is the key to market value, which the board is responsible for on behalf of shareholders." Indeed, just as there is the concept of Generally Accepted Accounting Principles (GAAP), perhaps it is time for GASP, Generally Accepted Security Practices, suggested Eli Noam of New York's Columbia Business School. "Setting the proper investment level for security, redundancy, and recovery is a management issue, not a technical one," he says.

The mystery is that this should come as a surprise to any boss. Surely it should be obvious to the dimmest executive that trust, that most valuable of economic assets, is easily destroyed and hugely expensive to restore, and that few things are more likely to destroy trust than a company letting sensitive personal data get into the wrong hands.

The current state of affairs may have been encouraged, though not justified, by the lack of legal penalty (in America, but not Europe) for data leakage. Until California recently passed a law, American firms did not have to tell anyone, even the victim, when data went astray. That may change fast: lots of proposed data-security legislation is now doing the rounds in Washington, D.C. Meanwhile, the theft of information about some 40 million credit-card accounts in America, disclosed on June 17th, overshadowed a hugely important decision a day earlier by America's Federal Trade Commission (FTC) that puts corporate America on notice that regulators will act if firms fail to provide adequate data security.

36. The statement "It never rains but it pours" is used to introduce
[A] the fierce business competition.
[B] the feeble boss-board relations.
[C] the threat from news reports.
[D] the severity of data leakage.

37.
According to Paragraph 2, some organizations check their systems to find out
[A] whether there is any weak point.
[B] what sort of data has been stolen.
[C] who is responsible for the leakage.
[D] how the potential spies can be located.

38. In bringing up the concept of GASP the author is making the point that
[A] shareholders' interests should be properly attended to.
[B] information protection should be given due attention.
[C] businesses should enhance their level of accounting security.
[D] the market value of customer data should be emphasized.

39. According to Paragraph 4, what puzzles the author is that some bosses fail to
[A] see the link between trust and data protection.
[B] perceive the sensitivity of personal data.
[C] realize the high cost of data restoration.
[D] appreciate the economic value of trust.

40. It can be inferred from Paragraph 5 that
[A] data leakage is more severe in Europe.
[B] FTC's decision is essential to data security.
[C] California takes the lead in security legislation.
[D] legal penalty is a major solution to data leakage.

You are going to read a list of headings and a text about what parents are supposed to do to guide their children into adulthood. Choose a heading from the list A–G that best fits the meaning of each numbered part of the text (41–45). The first and last paragraphs of the text are not numbered. There are two extra headings that you do not need to use. Mark your answers on ANSWER SHEET 1. (10 points)

A. Set a Good Example for Your Kids
B. Build Your Kids' Work Skills
C. Place Time Limits on Leisure Activities
D. Talk about the Future on a Regular Basis
E. Help Kids Develop Coping Strategies
F. Help Your Kids Figure Out Who They Are
G. Build Your Kids' Sense of Responsibility

How Can a Parent Help?

Mothers and fathers can do a lot to ensure a safe landing in early adulthood for their kids. Even if a job's starting salary seems too small to satisfy an emerging adult's need for rapid content, the transition from school to work can be less of a setback if the start-up adult is ready for the move. Here are a few measures, drawn from my book Ready or Not, Here Life Comes, that parents can take to prevent what I call "work-life unreadiness."

41. You can start this process when they are 11 or 12. Periodically review their emerging strengths and weaknesses with them and work together on any shortcomings, like difficulty in communicating well or collaborating. Also, identify the kinds of interests they keep coming back to, as these offer clues to the careers that will fit them best.

42. Kids need a range of authentic role models, as opposed to members of their clique, pop stars and vaunted athletes. Have regular dinner-table discussions about people the family knows and how they got where they are. Discuss the joys and downsides of your own career and encourage your kids to form some ideas about their own future. When asked what they want to do, they should be discouraged from saying "I have no idea." They can change their minds 200 times, but having only a foggy view of the future is of little good.

43. Teachers are responsible for teaching kids how to learn; parents should be responsible for teaching them how to work. Assign responsibilities around the house and make sure homework deadlines are met. Encourage teenagers to take a part-time job. Kids need plenty of practice delaying gratification and deploying effective organizational skills, such as managing time and setting priorities.

44. Playing video games encourages immediate content.
And hours of watching TV shows with canned laughter only teaches kids to process information in a passive way. At the same time, listening through earphones to the same monotonous beats for long stretches encourages kids to stay inside their bubble instead of pursuing other endeavors. All these activities can prevent the growth of important communication and thinking skills and make it difficult for kids to develop the kind of sustained concentration they will need for most jobs.

45. They should know how to deal with setbacks, stresses and feelings of inadequacy. They should also learn how to solve problems and resolve conflicts, ways to brainstorm and think critically. Discussions at home can help kids practice doing these things and help them apply these skills to everyday life situations.

What about the son or daughter who is grown but seems to be struggling and wandering aimlessly through early adulthood? Parents still have a major role to play, but now it is more delicate. They have to be careful not to come across as disappointed in their child. They should exhibit strong interest and respect for whatever currently interests their fledgling adult (as naïve or ill-conceived as it may seem) while becoming a partner in exploring options for the future. Most of all, these new adults must feel that they are respected and supported by a family that appreciates them.

Read the following text carefully and then translate the underlined segments into Chinese. Your translation should be written clearly on ANSWER SHEET 2. (10 points)

The study of law has been recognized for centuries as a basic intellectual discipline in European universities. However, only in recent years has it become a feature of undergraduate programs in Canadian universities. (46) Traditionally, legal learning has been viewed in such institutions as the special preserve of lawyers rather than a necessary part of the intellectual equipment of an educated person. Happily, the older and more continental view of legal education is establishing itself in a number of Canadian universities, and some have even begun to offer undergraduate degrees in law.

If the study of law is beginning to establish itself as part and parcel of a general education, its aims and methods should appeal directly to journalism educators. Law is a discipline which encourages responsible judgment. On the one hand, it provides opportunities to analyze such ideas as justice, democracy and freedom. (47) On the other, it links these concepts to everyday realities in a manner which is parallel to the links journalists forge on a daily basis as they cover and comment on the news. For example, notions of evidence and fact, of basic rights and public interest are at work in the process of journalistic judgment and production just as in courts of law. Sharpening judgment by absorbing and reflecting on law is a desirable component of a journalist's intellectual preparation for his or her career.

(48) But the idea that the journalist must understand the law more profoundly than an ordinary citizen rests on an understanding of the established conventions and special responsibilities of the news media. Politics or, more broadly, the functioning of the state, is a major subject for journalists. The better informed they are about the way the state works, the better their reporting will be. (49) In fact, it is difficult to see how journalists who do not have a clear grasp of the basic features of the Canadian Constitution can do a competent job on political stories.
Furthermore, the legal system and the events which occur within it are primary subjects for journalists. While the quality of legal journalism varies greatly, there is an undue reliance amongst many journalists on interpretations supplied to them by lawyers. (50) While comment and reaction from lawyers may enhance stories, it is preferable for journalists to rely on their own notions of significance and make their own judgments. These can only come from a well-grounded understanding of the legal system.

Section III Writing

51. Directions: Write a letter to your university library, making suggestions for improving its service. You should write about 100 words on ANSWER SHEET 2. Do not sign your own name at the end of the letter. Use "Li Ming" instead. Do not write the address. (10 points)

52. Directions: Write an essay of 160-200 words based on the following drawing. In your essay, you should
1) describe the drawing briefly,
2) explain its intended meaning, and then
3) support your view with an example/examples.
You should write neatly on ANSWER SHEET 2. (20 points)
Atlantic slave trade

The Atlantic slave trade or transatlantic slave trade took place across the Atlantic Ocean from the 16th through to the 19th centuries. The vast majority of those enslaved who were transported to the New World, many on the triangular trade route and its Middle Passage, were West Africans from the central and western parts of the continent, sold by western Africans to western European slave traders or captured directly by Europeans, and carried to the Americas. The numbers were so great that Africans who came by way of the slave trade became the most numerous Old World immigrants in both North and South America before the late 18th century. Far more slaves were taken to South America than to the north.

The South Atlantic economic system centered on producing commodity crops and making goods and clothing to sell in Europe, and on increasing the numbers of African slaves brought to the New World. This was crucial to those western European countries which, in the late 17th and 18th centuries, were vying with each other to create overseas empires.

The Portuguese were the first to engage in the New World slave trade in the 16th century, and others soon followed. Ship owners considered the slaves as cargo to be transported to the Americas as quickly and cheaply as possible, there to be sold to labour in coffee, tobacco, cocoa, sugar and cotton plantations, gold and silver mines, rice fields, the construction industry, timber-cutting for ships, skilled trades, and domestic service. The first Africans imported to the English colonies were classified as "indentured servants", like workers coming from England, and also as "apprentices for life". By the middle of the 17th century, slavery had hardened into a racial caste: enslaved Africans and their offspring were legally the property of their owners, and children born to slave mothers were slaves. As property, the people were considered merchandise or units of labour, and were sold at markets with other goods and services.

The major Atlantic slave-trading nations, ordered by trade volume, were the Portuguese, the British, the French, the Spanish and the Dutch. Several had established outposts on the African coast where they purchased slaves from local African leaders. These slaves were managed by a factor, who was established on or near the coast to expedite the shipping of slaves to the New World. The slaves were kept in a factory while awaiting shipment. Current estimates are that about 12 million Africans were shipped across the Atlantic, although the number purchased by the traders is considerably higher.

The slave trade is sometimes called the Maafa by African and African-American scholars, meaning "great disaster" in Swahili. Some scholars, such as Marimba Ani and Maulana Karenga, use the terms "African Holocaust" or "Holocaust of Enslavement".

Background

The Atlantic slave trade arose after trade contacts were first made between the continents of the "Old World" (Europe, Africa, and Asia) and those of the "New World" (North America and South America). For centuries, tidal currents had made ocean travel particularly difficult and risky for the ships that were then available, and as such there had been very little, if any, naval contact between the peoples living in these continents.
In the 15th century, however, new European developments in seafaring technologies meant that ships were better equipped to deal with the problem of tidal currents, and could begin traversing the Atlantic Ocean. Between 1600 and 1800, approximately 300,000 sailors engaged in the slave trade visited West Africa. In doing so, they came into contact with societies living along the west African coast and in the Americas which they had never previously encountered. Historian Pierre Chaunu termed the consequences of European navigation "disenclavement", with it marking an end of isolation for some societies and an increase in inter-societal contact for most others. Historian John Thornton noted, "A number of technical and geographical factors combined to make Europeans the most likely people to explore the Atlantic and develop its commerce". He identified these as being the drive to find new and profitable commercial opportunities outside Europe as well as the desire to create an alternative trade network to that controlled by the Muslim Empire of the Middle East, which was viewed as a commercial, political and religious threat to European Christendom. In particular, European traders wanted to trade for gold, which could be found in western Africa, and also to find a naval route to "the Indies" (India), where they could trade for luxury goods such as spices without having to obtain these items from Middle Eastern Islamic traders. Although the initial Atlantic naval explorations were performed purely by Europeans, members of many European nationalities were involved, including sailors from Portugal, Spain, the Italian kingdoms, England, France and the Netherlands. This diversity led Thornton to describe the initial "exploration of the Atlantic" as "a truly international exercise, even if many of the dramatic discoveries [such as those by Christopher Columbus and Ferdinand Magellan] were made under the sponsorship of the Iberian monarchs." That leadership later gave rise to the myth that "the Iberians were the sole leaders of the exploration". Slavery was practiced in some parts of Africa, Europe, Asia and the Americas for many centuries before the beginning of the Atlantic slave trade. There is evidence that enslaved people from some African states were exported to other states in Africa, Europe and Asia prior to the European colonization of the Americas. The African slave trade provided a large number of slaves to Europeans and many more to people in Muslim countries. The Atlantic slave trade was not the only slave trade from Africa, although it was the largest in volume and intensity. As Elikia M’bokolo wrote in Le Monde diplomatique: The African continent was bled of its human resources via all possible routes. Across the Sahara, through the Red Sea, from the Indian Ocean ports and across the Atlantic. At least ten centuries of slavery for the benefit of the Muslim countries (from the ninth to the nineteenth).... Four million enslaved people exported via the Red Sea, another four million through the Swahili ports of the Indian Ocean, perhaps as many as nine million along the trans-Saharan caravan route, and eleven to twenty million (depending on the author) across the Atlantic Ocean. According to John K. Thornton, Europeans usually bought enslaved people who were captured in endemic warfare between African states. Some Africans had made a business out of capturing Africans from neighboring ethnic groups or war captives and selling them. 
A reminder of this practice is documented in the Slave Trade Debates of England in the early 19th century: "All the old writers... concur in stating not only that wars are entered into for the sole purpose of making slaves, but that they are fomented by Europeans, with a view to that object." People living around the Niger River were transported from these markets to the coast and sold at European trading ports in exchange for muskets and manufactured goods such as cloth or alcohol. However, the European demand for slaves provided a large new market for the already existing trade. While those held in slavery in their own region of Africa might hope to escape, those shipped away had little chance of returning to Africa.

European colonization and slavery in West Africa

Upon discovering new lands through their naval explorations, European colonisers soon began to migrate to and settle in lands outside their native continent. Off the coast of Africa, European migrants, under the direction of the Kingdom of Castile, invaded and colonised the Canary Islands during the 15th century, where they converted much of the land to the production of wine and sugar. Along with this, they also captured native Canary Islanders, the Guanches, to use as slaves both on the Islands and across the Christian Mediterranean. As historian John Thornton remarked, "the actual motivation for European expansion and for navigational breakthroughs was little more than to exploit the opportunity for immediate profits made by raiding and the seizure or purchase of trade commodities".

Using the Canary Islands as a naval base, European traders, at the time primarily Portuguese, began to move their activities down the western coast of Africa, performing raids in which slaves would be captured to be later sold in the Mediterranean. Although initially successful in this venture, "it was not long before African naval forces were alerted to the new dangers, and the Portuguese [raiding] ships began to meet strong and effective resistance", with the crews of several of them being killed by African sailors, whose boats were better suited to traversing the west African coasts and river systems.

By 1494, the Portuguese king had entered agreements with the rulers of several West African states that would allow trade between their respective peoples, enabling the Portuguese to "tap into" the "well-developed commercial economy in Africa... without engaging in hostilities". "Peaceful trade became the rule all along the African coast", although there were some rare exceptions when acts of aggression led to violence. For instance, Portuguese traders attempted to conquer the Bissagos Islands in 1535. In 1571 Portugal, supported by the Kingdom of Kongo, took control of the south-western region of Angola in order to secure its threatened economic interest in the area. Although Kongo later joined a coalition in 1591 to force the Portuguese out, Portugal had secured a foothold on the continent that it continued to occupy until the 20th century. Despite these occasional incidents of violence between African and European forces, many African states ensured that any trade went on on their own terms, for instance by imposing customs duties on foreign ships. In 1525, the Kongolese king, Afonso I, seized a French vessel and its crew for illegally trading on his coast.
Historians have widely debated the nature of the relationship between these African kingdoms and the European traders. The Guyanese historian Walter Rodney (1972) has argued that it was an unequal relationship, with Africans being forced into a "colonial" trade with the more economically developed Europeans, exchanging raw materials and human resources (i.e. slaves) for manufactured goods. He argued that it was this economic trade agreement dating back to the 16th century that led to Africa being underdeveloped in his own time. These ideas were supported by other historians, including Ralph Austen (1987). This idea of an unequal relationship was contested by John Thornton (1998), who argued that "the Atlantic slave trade was not nearly as critical to the African economy as these scholars believed" and that "African manufacturing [at this period] was more than capable of handling competition from preindustrial Europe". However, Anne Bailey, commenting on Thornton's suggestion that Africans and Europeans were equal partners in the Atlantic slave trade, wrote: To see Africans as partners implies equal terms and equal influence on the global and intercontinental processes of the trade. Africans had great influence on the continent itself, but they had no direct influence on the engines behind the trade in the capital firms, the shipping and insurance companies of Europe and America, or the plantation systems in Americas. They did not wield any influence on the building manufacturing centers of the West. 16th, 17th and 18th centuries The Atlantic slave trade is customarily divided into two eras, known as the First and Second Atlantic Systems. The First Atlantic system was the trade of enslaved Africans to, primarily, South American colonies of the Portuguese and Spanish empires; it accounted for slightly more than 3% of all Atlantic slave trade. It started (on a significant scale) in about 1502 and lasted until 1580 when Portugal was temporarily united with Spain. While the Portuguese were directly involved in trading enslaved peoples, the Spanish empire relied on the asiento system, awarding merchants (mostly from other countries) the license to trade enslaved people to their colonies. During the first Atlantic system most of these traders were Portuguese, giving them a near-monopoly during the era. Some Dutch, English, and French traders also participated in the slave trade. After the union, Portugal came under Spanish legislation that prohibited it from directly engaging in the slave trade as a carrier. It became a target for the traditional enemies of Spain, losing a large share of the trade to the Dutch, English and French. The Second Atlantic system was the trade of enslaved Africans by mostly English, Portuguese, French and Dutch traders. The main destinations of this phase were the Caribbean colonies and Brazil, as European nations built up economically slave-dependent colonies in the New World. Slightly more than 3% of the enslaved people exported from Africa were traded between 1450 and 1600, and 16% in the 17th century. It is estimated that more than half of the entire slave trade took place during the 18th century, with the British, Portuguese and French being the main carriers of nine out of ten slaves abducted from Africa. By the 1690s, the English were shipping the most slaves from West Africa. They maintained this position during the 18th century, becoming the biggest shippers of slaves across the Atlantic. 
Following the British and United States' bans on the African slave trade in 1808, the trade declined, but the period after still accounted for 28.5% of the total volume of the Atlantic slave trade. European colonists initially practiced systems of both bonded labour and "Indian" slavery, enslaving many of the natives of the New World. For a variety of reasons, Africans replaced Native Americans as the main population of enslaved people in the Americas. In some cases, such as on some of the Caribbean Islands, diseases such as smallpox, together with warfare, eliminated the natives completely. In other cases, such as in South Carolina, Virginia, and New England, colonists found they needed alliances with native tribes; together with the availability of enslaved Africans at affordable prices (beginning in the early 18th century for these colonies), they banned Native American slavery.

A burial ground in Campeche, Mexico, suggests slaves had been brought there not long after Hernán Cortés completed the subjugation of Aztec and Mayan Mexico in the 16th century. The graveyard had been in use from approximately 1550 to the late 17th century.

Triangular trade

The first side of the triangle was the export of goods from Europe to Africa. A number of African kings and merchants took part in the trading of enslaved people from 1440 to about 1833. For each captive, the African rulers would receive a variety of goods from Europe. These included guns, ammunition and other factory-made goods. The second leg of the triangle exported enslaved Africans across the Atlantic Ocean to the Americas and the Caribbean Islands. The third and final part of the triangle was the return of goods to Europe from the Americas. The goods were the products of slave-labour plantations and included cotton, sugar, tobacco, molasses and rum. Sir John Hawkins, considered the pioneer of the British slave trade, was the first to run the triangular trade, making a profit at every stop. Brazil (the main importer of slaves) manufactured these goods in South America and traded directly with African ports, thus not taking part in a triangular trade.

Labour and slavery

The Atlantic slave trade was the result of, among other things, a labour shortage, itself in turn created by the desire of European colonists to exploit New World land and resources for capital profits. Native peoples were at first utilized as slave labour by Europeans, until a large number died from overwork and Old World diseases. Alternative sources of labour, such as indentured servitude, failed to provide a sufficient workforce. Many crops could not be sold for profit, or even grown, in Europe. Exporting crops and goods from the New World to Europe often proved to be more profitable than producing them on the European mainland. A vast amount of labour was needed to create and sustain plantations that required intensive labour to grow, harvest, and process prized tropical crops. Western Africa (part of which became known as "the Slave Coast"), and later Central Africa, became the source for enslaved people to meet the demand for labour.

The basic reason for the constant shortage of labour was that, with large amounts of cheap land available and lots of landowners searching for workers, free European immigrants were able to become landowners themselves after a relatively short time, thus increasing the need for workers.
Thomas Jefferson attributed the use of slave labour in part to the climate, and the consequent idle leisure afforded by slave labour: "For in a warm climate, no man will labour for himself who can make another labour for him. This is so true, that of the proprietors of slaves a very small proportion indeed are ever seen to labour."

African participation in the slave trade

Africans played a direct role in the slave trade, selling their captives or prisoners of war to European buyers. The prisoners and captives who were sold were usually from neighbouring or enemy ethnic groups. These captive slaves were considered "other", not part of the people of the ethnic group or "tribe"; African kings held no particular loyalty to them. Sometimes criminals would be sold so that they could no longer commit crimes in that area. Most other slaves were obtained through kidnappings, or through raids conducted at gunpoint as joint ventures with the Europeans. But some African kings refused to sell any of their captives or criminals. King Jaja of Opobo, a former slave, refused outright to do business with the slavers. However, Owen Alik Shahadah notes that with the rise of a large commercial slave trade driven by European needs, enslaving enemies became less a consequence of war, and more and more a reason to go to war.

European participation in the slave trade

Although Europeans were the market for slaves, they rarely entered the interior of Africa, due to fear of disease and fierce African resistance. The enslaved people would be brought to coastal outposts, where they would be traded for goods. Enslavement became a major by-product of internal wars in Africa as nation states expanded through military conflicts, in many cases through deliberate sponsorship by benefiting Western European nations. During such periods of rapid state formation or expansion (Asante and Dahomey being good examples), slavery formed an important element of political life which the Europeans exploited: as Queen Sara's plea to the Portuguese courts revealed, the system became "sell to the Europeans or be sold to the Europeans". In Africa, convicted criminals could be punished by enslavement, a punishment which became more prevalent as slavery became more lucrative. Since most of these nations did not have a prison system, convicts were often sold or used in the scattered local domestic slave market.

In 1778, Thomas Kitchin estimated that Europeans were bringing some 52,000 slaves to the Caribbean yearly, with the French bringing the most Africans to the French West Indies (13,000 out of the yearly estimate). The Atlantic slave trade peaked in the last two decades of the 18th century, during and following the Kongo Civil War. Wars among tiny states along the Niger River's Igbo-inhabited region and the accompanying banditry also spiked in this period. Another reason for the surplus supply of enslaved people was major warfare conducted by expanding states, such as the kingdom of Dahomey, the Oyo Empire, and the Asante Empire.

Slavery in Africa and the New World contrasted

Forms of slavery varied both in Africa and in the New World. In general, slavery in Africa was not heritable – that is, the children of slaves were free – while in the Americas, children of slave mothers were considered born into slavery.
This was connected to another distinction: slavery in West Africa was not reserved for racial or religious minorities, as it was in the European colonies, although the case was otherwise in places such as Somalia, where Bantus were taken as slaves for the ethnic Somalis.

The treatment of slaves in Africa was more variable than in the Americas. At one extreme, the kings of Dahomey routinely slaughtered slaves by the hundreds or thousands in sacrificial rituals, and the use of slaves as human sacrifices was also known in Cameroon. On the other hand, slaves in other places were often treated as part of the family, "adopted children", with significant rights including the right to marry without their masters' permission. Scottish explorer Mungo Park wrote:

The slaves in Africa, I suppose, are nearly in the proportion of three to one to the freemen. They claim no reward for their services except food and clothing, and are treated with kindness or severity, according to the good or bad disposition of their masters.... The slaves which are thus brought from the interior may be divided into two distinct classes – first, such as were slaves from their birth, having been born of enslaved mothers; secondly, such as were born free, but who afterwards, by whatever means, became slaves. Those of the first description are by far the most numerous....

In the Americas, slaves were denied the right to marry freely and masters did not generally accept them as equal members of the family. While slaves convicted of revolt or murder were executed, New World colonists did not subject slaves to arbitrary ritual sacrifice. New World slaves were useful enough, and expensive enough, to be maintained and cared for, but they remained the property of their owners.

Slave market regions and participation

There were eight principal areas used by Europeans to buy and ship slaves to the Western Hemisphere. The number of enslaved people sold to the New World varied throughout the slave trade. As for the distribution of slaves from regions of activity, certain areas produced far more enslaved people than others. Between 1650 and 1900, 10.24 million enslaved Africans arrived in the Americas from the following regions in the following proportions:

- Senegambia (Senegal and the Gambia): 4.8%
- Upper Guinea (Guinea-Bissau, Guinea and Sierra Leone): 4.1%
- Windward Coast (Liberia and Côte d'Ivoire): 1.8%
- Gold Coast (Ghana and east of Côte d'Ivoire): 10.4%
- Bight of Benin (Togo, Benin and Nigeria west of the Niger Delta): 20.2%
- Bight of Biafra (Nigeria east of the Niger Delta, Cameroon, Equatorial Guinea and Gabon): 14.6%
- West Central Africa (Republic of Congo, Democratic Republic of Congo and Angola): 39.4%
- Southeastern Africa (Mozambique and Madagascar): 4.7%

African kingdoms of the era

There were over 173 city-states and kingdoms in the African regions affected by the slave trade between 1502 and 1853, when Brazil became the last Atlantic import nation to outlaw the slave trade. Of those 173, no fewer than 68 could be deemed nation states with political and military infrastructures that enabled them to dominate their neighbours. Nearly every present-day nation had a pre-colonial predecessor, sometimes an African empire with which European traders had to barter.

The different ethnic groups brought to the Americas closely correspond to the regions of heaviest activity in the slave trade. Over 45 distinct ethnic groups were taken to the Americas during the trade. Of the 45, the ten most prominent, according to slave documentation of the era, are listed below.
- The BaKongo of the Democratic Republic of Congo and Angola
- The Mandé of Upper Guinea
- The Gbe speakers of Togo, Ghana and Benin (Adja, Mina, Ewe, Fon)
- The Akan of Ghana and Côte d'Ivoire
- The Wolof of Senegal and the Gambia
- The Igbo of southeastern Nigeria
- The Mbundu of Angola (includes both Ambundu and Ovimbundu)
- The Yoruba of southwestern Nigeria
- The Chamba of Cameroon
- The Makua of Mozambique

Human toll

The transatlantic slave trade resulted in a vast and still unknown loss of life for African captives both in and outside the Americas. Approximately 1.2–2.4 million Africans died during their transport to the New World. More died soon after their arrival. The number of lives lost in the procurement of slaves remains a mystery but may equal or exceed the number who survived to be enslaved. The savage nature of the trade led to the destruction of individuals and cultures. The following figures do not include deaths of enslaved Africans as a result of their labour, slave revolts, or diseases suffered while living among New World populations.

Historian Ana Lucia Araujo has noted that the process of enslavement did not end with arrival on the American shores; the different paths taken by the individuals and groups who were victims of the Atlantic slave trade were influenced by different factors, including the disembarking region, the kind of work performed, gender, age, religion, and language.

A database compiled in the late 1990s put the figure for the transatlantic slave trade at more than 11 million people. For a long time an accepted figure was 15 million, although this has in recent years been revised down. Estimates by Patrick Manning are that about 12 million slaves entered the Atlantic trade between the 16th and 19th centuries, but about 1.5 million died on board ship. About 10.5 million slaves arrived in the Americas. Besides the slaves who died on the Middle Passage, more Africans likely died during the slave raids in Africa and forced marches to ports. Manning estimates that 4 million died inside Africa after capture, and many more died young. Manning's estimate covers the 12 million who were originally destined for the Atlantic, as well as the 6 million destined for Asian slave markets and the 8 million destined for African markets.

According to Dr. Kimani Nehusi, the presence of European slavers affected the way in which the legal code in African societies responded to offenders. Crimes traditionally punishable by some other form of punishment became punishable by enslavement and sale to slave traders.

According to David Stannard's American Holocaust, 50% of African deaths occurred in Africa as a result of wars between native kingdoms, which produced the majority of slaves. This includes not only those who died in battles, but also those who died as a result of forced marches from inland areas to slave ports on the various coasts. The practice of enslaving enemy combatants and their villages was widespread throughout Western and West Central Africa, although wars were rarely started to procure slaves. The slave trade was largely a by-product of tribal and state warfare, a way of removing potential dissidents after victory or of financing future wars. However, some African groups proved particularly adept and brutal at the practice of enslaving, such as Oyo, Benin, Igala, Kaabu, Asanteman, Dahomey, the Aro Confederacy and the Imbangala war bands.
In letters written to King João III of Portugal, the Manikongo Nzinga Mbemba Afonso wrote that it was Portuguese merchandise flowing into his kingdom that was fueling the trade in Africans. He requested that the King of Portugal stop sending merchandise and send only missionaries. In one of his letters he writes:

Each day the traders are kidnapping our people—children of this country, sons of our nobles and vassals, even people of our own family. This corruption and depravity are so widespread that our land is entirely depopulated. We need in this kingdom only priests and schoolteachers, and no merchandise, unless it is wine and flour for Mass. It is our wish that this Kingdom not be a place for the trade or transport of slaves… Many of our subjects eagerly lust after Portuguese merchandise that your subjects have brought into our domains. To satisfy this inordinate appetite, they seize many of our black free subjects.... They sell them. After having taken these prisoners [to the coast] secretly or at night.... As soon as the captives are in the hands of white men they are branded with a red-hot iron.

Before the arrival of the Portuguese, slavery had already existed in Kongo. Afonso believed that the slave trade should be subject to Kongo law. When he suspected the Portuguese of receiving illegally enslaved persons to sell, he wrote to King João III in 1526 imploring him to put a stop to the practice.

The kings of Dahomey sold war captives into transatlantic slavery; they would otherwise have been killed in a ceremony known as the Annual Customs. As one of West Africa's principal slave states, Dahomey became extremely unpopular with neighbouring peoples. Like the Bambara Empire to the east, the Khasso kingdoms depended heavily on the slave trade for their economy. A family's status was indicated by the number of slaves it owned, leading to wars for the sole purpose of taking more captives. This trade led the Khasso into increasing contact with the European settlements of Africa's west coast, particularly the French. Benin grew increasingly rich during the 16th and 17th centuries on the slave trade with Europe; slaves from enemy states of the interior were sold and carried to the Americas in Dutch and Portuguese ships. The Bight of Benin's shore soon came to be known as the "Slave Coast".

King Gezo of Dahomey said in the 1840s:

The slave trade is the ruling principle of my people. It is the source and the glory of their wealth...the mother lulls the child to sleep with notes of triumph over an enemy reduced to slavery...

In 1807, the UK Parliament passed a bill abolishing the trading of slaves. The King of Bonny (now in Nigeria) was horrified at the ending of the practice:

We think this trade must go on. That is the verdict of our oracle and the priests. They say that your country, however great, can never stop a trade ordained by God himself.

After being marched to the coast for sale, enslaved people waited in large forts called factories. The amount of time spent in factories varied, but Milton Meltzer's Slavery: A World History states that around 4.5% of deaths during the transatlantic slave trade occurred in this period. In other words, over 820,000 people would have died in African ports such as Benguela, Elmina and Bonny, reducing the number of those shipped to 17.5 million. After being captured and held in the factories, slaves entered the infamous Middle Passage. Meltzer's research puts this phase of the slave trade's overall mortality at 12.5%.
Around 2.2 million Africans died during these voyages, packed into tight, unsanitary spaces on ships for months at a time. Measures were taken to stem the onboard mortality rate, such as enforced "dancing" (as exercise) above deck and the practice of force-feeding enslaved persons who tried to starve themselves. The conditions on board also resulted in the spread of fatal diseases. Other fatalities were suicides: slaves who escaped their captivity by jumping overboard. The slave traders would try to fit anywhere from 350 to 600 slaves on one ship. Before the African slave trade was completely banned by participating nations in 1853, 15.3 million enslaved people had arrived in the Americas.

Raymond L. Cohn, an economics professor whose research has focused on economic history and international migration, has researched the mortality rates among Africans during the voyages of the Atlantic slave trade. He found that mortality rates decreased over the history of the slave trade, primarily because the length of time necessary for the voyage was declining. "In the eighteenth century many slave voyages took at least 2½ months. In the nineteenth century, 2 months appears to have been the maximum length of the voyage, and many voyages were far shorter. Fewer slaves died in the Middle Passage over time mainly because the passage was shorter."

Meltzer also states that 33% of Africans would have died in the first year at the seasoning camps found throughout the Caribbean. Many slaves shipped directly to North America bypassed this process; however, most slaves (destined for island or South American plantations) were likely to be put through this ordeal. The enslaved people were tortured for the purpose of "breaking" them and conditioning them to their new lot in life. Jamaica held one of the most notorious of these camps. Dysentery was the leading cause of death. All in all, 5 million Africans died in these camps, reducing the number of survivors to about 10 million.

The trade of enslaved Africans in the Atlantic has its origins in the explorations of Portuguese mariners down the coast of West Africa in the 15th century. Before that, contact with African slave markets was made to ransom Portuguese who had been captured in the intense North African Barbary pirate attacks on Portuguese ships and coastal villages, which frequently left those villages depopulated. The first Europeans to use enslaved Africans in the New World were the Spaniards, who sought auxiliaries for their conquest expeditions and labourers on islands such as Cuba and Hispaniola. The alarming decline in the native population had spurred the first royal laws protecting them (Laws of Burgos, 1512–13). The first enslaved Africans arrived in Hispaniola in 1501.

After Portugal had succeeded in establishing sugar plantations (engenhos) in northern Brazil ca. 1545, Portuguese merchants on the West African coast began to supply enslaved Africans to the sugar planters. While at first these planters had relied almost exclusively on the native Tupani for slave labour, after 1570 they began importing Africans, as a series of epidemics had decimated the already destabilized Tupani communities. By 1630, Africans had replaced the Tupani as the largest contingent of labour on Brazilian sugar plantations. This ended the European medieval household tradition of slavery, resulted in Brazil receiving the most enslaved Africans, and revealed sugar cultivation and processing as the reason that roughly 84% of these Africans were shipped to the New World.
As Britain rose in naval power and settled continental North America and some islands of the West Indies, it became the leading slave trader. At one stage the trade was the monopoly of the Royal African Company, operating out of London. But following the loss of the company's monopoly in 1689, Bristol and Liverpool merchants became increasingly involved in the trade. By the late 17th century, one out of every four ships that left Liverpool harbour was a slave trading ship. Much of the wealth on which the city of Manchester and the surrounding towns was built in the late 18th century, and for much of the 19th century, came from the processing of slave-picked cotton and the manufacture of cloth. Other British cities also profited from the slave trade. Birmingham, the largest gun-producing town in Britain at the time, supplied guns to be traded for slaves. 75% of all sugar produced in the plantations was sent to London, and much of it was consumed in the highly lucrative coffee houses there.

New World destinations

The first slaves to arrive as part of a labour force in the New World reached the island of Hispaniola (now Haiti and the Dominican Republic) in 1502. Cuba received its first four slaves in 1513. Jamaica received its first shipment of 4,000 slaves in 1518. Slave exports to Honduras and Guatemala started in 1526.

The first enslaved Africans to reach what would become the United States arrived in January 1526 as part of a Spanish attempt to colonize South Carolina. By November, the 300 Spanish colonists had been reduced to 100, and their slaves from 100 to 70. The enslaved people revolted and joined a nearby Native American tribe, while the Spanish abandoned the colony altogether. Colombia received its first enslaved people in 1533. El Salvador, Costa Rica and Florida entered the slave trade in 1541, 1563 and 1581, respectively.

The 17th century saw an increase in shipments, with Africans arriving in the English colony of Jamestown, Virginia, in 1619. These first kidnapped Africans were classed as indentured servants and freed after seven years. Chattel slavery was codified in Virginia law in 1656, and in 1662 the colony adopted the principle of partus sequitur ventrem, by which children of slave mothers were slaves, regardless of paternity. Irish immigrants took slaves to Montserrat in 1651, and in 1655 slaves were shipped to Belize.

The approximate share of enslaved Africans landed in selected destinations was as follows:

- British America (minus North America): 18.4%
- British North America: 6.45%
- Dutch West Indies: 2.0%
- Danish West Indies: 0.3%

The number of Africans who arrived in each area can be estimated from these shares, taking into consideration that the total number of slaves was close to 10,000,000.

Economics of slavery

The plantation economies of the New World were built on slave labour. Seventy percent of the enslaved people brought to the New World were forced to produce sugar, the most labour-intensive crop. The rest were employed harvesting coffee, cotton, and tobacco, and in some cases in mining. The West Indian colonies of the European powers were some of their most important possessions, so they went to extremes to protect and retain them. For example, at the end of the Seven Years' War in 1763, France agreed to cede the vast territory of New France (now Eastern Canada) to the victors in exchange for keeping the minute Antillean island of Guadeloupe. In France in the 18th century, returns for investors in plantations averaged around 6%; as compared to 5% for most domestic alternatives, this represented a 20% profit advantage.
The maritime and commercial risks of individual voyages were significant. Investors mitigated them by buying small shares in many ships at the same time; in that way, they were able to diversify away a large part of the risk. Between voyages, ship shares could be freely sold and bought.

By far the most financially profitable West Indian colonies in 1800 belonged to the United Kingdom. Although Britain had entered the sugar colony business late, its naval supremacy and control over key islands such as Jamaica, Trinidad, the Leeward Islands and Barbados, and over the territory of British Guiana, gave it an important edge over all competitors; while many British did not make gains, a handful of individuals made small fortunes. This advantage was reinforced when France lost its most important colony, St. Domingue (western Hispaniola, now Haiti), to a slave revolt in 1791, and, after the 1793 French revolution, supported revolts against its rival Britain in the name of liberty. Before 1791, British sugar had to be protected to compete against cheaper French sugar. After 1791, the British islands produced the most sugar, and the British people quickly became the largest consumers. West Indian sugar became ubiquitous as an additive to Indian tea. It has been estimated that the profits of the slave trade and of West Indian plantations amounted to as much as one in every twenty pounds circulating in the British economy at the time of the Industrial Revolution in the latter half of the 18th century.

Historian Walter Rodney has argued that at the start of the slave trade in the 16th century, although there was a technological gap between Europe and Africa, it was not very substantial. Both continents were using Iron Age technology. The major advantage that Europe had was in shipbuilding. During the period of slavery, the populations of Europe and the Americas grew exponentially, while the population of Africa remained stagnant. Rodney contended that the profits from slavery were used to fund economic growth and technological advancement in Europe and the Americas. Based on earlier theories by Eric Williams, he asserted that the Industrial Revolution was at least in part funded by agricultural profits from the Americas. He cited examples such as the invention of the steam engine by James Watt, which was funded by plantation owners from the Caribbean.

Other historians have attacked both Rodney's methodology and his accuracy. Joseph C. Miller has argued that the social change and demographic stagnation (which he researched using the example of West Central Africa) were caused primarily by domestic factors. Joseph Inikori provided a new line of argument, estimating counterfactual demographic developments had the Atlantic slave trade not existed. Patrick Manning has shown that the slave trade did have a profound impact on African demographics and social institutions, but criticized Inikori's approach for not taking other factors (such as famine and drought) into account, and thus being highly speculative.

Effect on the economy of West Africa

No scholars dispute the harm done to the enslaved people themselves, but the effect of the trade on African societies is much debated, due to the apparent influx of goods to Africans. Proponents of the slave trade, such as Archibald Dalzel, argued that African societies were robust and not much affected by the trade. In the 19th century, European abolitionists, most prominently Dr. David Livingstone, took the opposite view, arguing that the fragile local economy and societies were being severely harmed by the trade.

While the negative effects of slavery on the economies of Africa have been well documented, namely the significant decline in population, some African rulers likely saw an economic benefit from trading their subjects with European slave traders. With the exception of Portuguese-controlled Angola, coastal African leaders "generally controlled access to their coasts, and were able to prevent direct enslavement of their subjects and citizens." Thus, as the scholar of Africa John Thornton argues, African leaders who allowed the continuation of the slave trade likely derived an economic benefit from selling their subjects to Europeans. The Kingdom of Benin, for instance, participated in the African slave trade at will from 1715 to 1735, surprising Dutch traders, who had not expected to buy slaves in Benin. The benefit derived from trading slaves for European goods was enough to make the Kingdom of Benin rejoin the trans-Atlantic slave trade after centuries of non-participation. Such benefits included military technology (specifically guns and gunpowder), gold, or simply the maintenance of amicable trade relationships with European nations. The slave trade was therefore a means for some African elites to gain economic advantages. Historian Walter Rodney estimates that by c. 1770 the King of Dahomey was earning about £250,000 per year by selling captive African soldiers and enslaved people to the European slave-traders. Both Thornton and Fage contend that while the African political elite may have ultimately benefited from the slave trade, their decision to participate may have been influenced more by what they could lose by not participating. In Fage's article "Slavery and the Slave Trade in the Context of West African History", he notes that for West Africans "... there were really few effective means of mobilizing labour for the economic and political needs of the state" without the slave trade.

Effects on the British economy

Historian Eric Williams in 1944 argued that the profits that Britain received from its sugar colonies, and from the slave trade between Africa and the Caribbean, were a major factor in financing Britain's Industrial Revolution. However, he argued that by the time slavery was abolished in 1833 it had lost its profitability and it was in Britain's economic interest to ban it. Other researchers and historians have strongly contested what has come to be referred to as the "Williams thesis" in academia. David Richardson has concluded that the profits from the slave trade amounted to less than 1% of domestic investment in Britain. Economic historian Stanley Engerman finds that even without subtracting the associated costs of the slave trade (e.g., shipping costs, slave mortality, mortality of British people in Africa, defense costs) or the reinvestment of profits back into the slave trade, the total profits from the slave trade and from West Indian plantations amounted to less than 5% of the British economy during any year of the Industrial Revolution. Engerman's 5% figure gives the Williams argument as much benefit of the doubt as possible, not only because it does not take into account the associated costs of the slave trade to Britain, but also because it carries the full-employment assumption from economics and treats the gross value of slave trade profits as a direct contribution to Britain's national income.
Historian Richard Pares, in an article written before Williams' book, dismissed the influence of wealth generated from the West Indian plantations upon the financing of the Industrial Revolution, stating that whatever substantial flow of investment from West Indian profits into industry there was occurred after emancipation, not before. Seymour Drescher and Robert Anstey argue that the slave trade remained profitable until the end, and that moralistic reform, not economic incentive, was primarily responsible for abolition. They say slavery remained profitable in the 1830s because of innovations in agriculture.

Karl Marx, in his influential economic history of capitalism, Das Kapital, wrote that "...the turning of Africa into a warren for the commercial hunting of black-skins, signaled the rosy dawn of the era of capitalist production." He argued that the slave trade was part of what he termed the "primitive accumulation" of capital, the 'non-capitalist' accumulation of wealth that preceded and created the financial conditions for Britain's industrialisation.

Demographics

The demographic effects of the slave trade are a controversial and highly debated issue. Walter Rodney argued that the export of so many people had been a demographic disaster that left Africa permanently disadvantaged when compared to other parts of the world, and that it largely explains the continent's continued poverty. He presented numbers showing that Africa's population stagnated during this period, while the populations of Europe and Asia grew dramatically. According to Rodney, all other areas of the economy were disrupted by the slave trade, as the top merchants abandoned traditional industries to pursue slaving and the lower levels of the population were disrupted by the slaving itself.

Others have challenged this view. J. D. Fage compared the demographic effect on the continent as a whole. David Eltis has compared the numbers to the rate of emigration from Europe during this period. In the nineteenth century alone over 50 million people left Europe for the Americas, a far higher rate than were ever taken from Africa.

Other scholars accused Rodney of mischaracterizing the trade between Africans and Europeans. They argue that Africans, or more accurately African elites, deliberately let European traders join in an already large trade in enslaved people and were not patronized. As Joseph E. Inikori argues, however, the history of the region shows that the effects were still quite deleterious. He argues that the African economic model of the period was very different from the European, and could not sustain such population losses. Population reductions in certain areas also led to widespread problems. Inikori also notes that after the suppression of the slave trade Africa's population almost immediately began to increase rapidly, even prior to the introduction of modern medicines. Owen Alik Shahadah also states that the trade was of significance not only in aggregate population losses but also in the profound changes to settlement patterns, exposure to epidemics, and reproductive and social development potential.

Legacy of racism

Professor Maulana Karenga states that, among the effects of slavery, "the morally monstrous destruction of human possibility involved redefining African humanity to the world, poisoning past, present and future relations with others who only know us through this stereotyping and thus damaging the truly human relations among peoples."
He states that it constituted the destruction of culture, language, religion and human possibility.

Walter Rodney states: "Above all, it was the institution of slavery in the Americas which ultimately conditioned racial attitudes, even when their more immediate derivation was the literature on Africa or contacts within Europe itself. It has been well attested that New World slave-plantation society was the laboratory of modern racism. The owners' contempt for and fear of the black slaves was expressed in religious, scientific and philosophical terms, which became the stock attitudes of Europeans and even Africans in subsequent generations. Although there have been contributions to racist philosophy both before and after the slave trade epoch, the historical experience of whites enslaving blacks for four centuries forged the tie between racism and colour prejudice, and produced not merely individual racists but a society where racism was so all-pervasive that it was not even perceived as what it actually was. The very concept of human racial variants was never satisfactorily established in biological terms, and the assumptions of scientists and laymen alike were rooted in the perception of a reality in which Europeans had succeeded in reducing Africans to the level of chattel."

Rodney also states: "The role of slavery in promoting racist prejudice and ideology has been carefully studied in certain situations, especially in the U.S.A. The simple fact is that no people can enslave another for four centuries without coming out with a notion of superiority, and when the colour and other physical traits of those peoples were quite different it was inevitable that the prejudice should take a racist form."

End of the Atlantic slave trade

In Britain, America, Portugal and parts of Europe, opposition developed against the slave trade. Davis says that abolitionists assumed "that an end to slave imports would lead automatically to the amelioration and gradual abolition of slavery". Opposition to the trade was led by the Religious Society of Friends (Quakers) and establishment Evangelicals such as William Wilberforce. Many joined the movement and began to protest against the trade, but they were opposed by the owners of the colonial holdings. Following Lord Mansfield's decision in 1772, slaves became free upon entering the British Isles.

Under the leadership of Thomas Jefferson, the new state of Virginia in 1778 became the first state and one of the first jurisdictions anywhere to stop the importation of slaves for sale; it made it a crime for traders to bring in slaves from out of state or from overseas for sale; migrants from other states were allowed to bring their own slaves. The new law freed all slaves brought in illegally after its passage and imposed heavy fines on violators. Denmark, which had been active in the slave trade, was the first country to ban the trade through legislation, in 1792; the ban took effect in 1803. Britain banned the slave trade in 1807, imposing stiff fines for any slave found aboard a British ship (see Slave Trade Act 1807). The Royal Navy, which then controlled the world's seas, moved to stop other nations from continuing the slave trade and declared that slaving was equal to piracy and was punishable by death. The United States Congress passed the Slave Trade Act of 1794, which prohibited the building or outfitting of ships in the U.S. for use in the slave trade.
In 1807 Congress outlawed the importation of slaves beginning on 1 January 1808, the earliest date permitted by the United States Constitution for such a ban.

On Sunday, 28 October 1787, William Wilberforce wrote in his diary: "God Almighty has set before me two great objects, the suppression of the slave trade and the Reformation of society." For the rest of his life, Wilberforce dedicated himself, as a Member of the British Parliament, to opposing the slave trade and working for the abolition of slavery throughout the British Empire. On 22 February 1807, twenty years after he first began his crusade, and in the middle of Britain's war with France, Wilberforce and his team's labours were rewarded with victory. By an overwhelming 283 votes to 16, the motion to abolish the Atlantic slave trade was carried in the House of Commons. The United States acted to abolish the slave trade the same year, but not its internal slave trade, which became the dominant feature of American slavery until the 1860s.

In 1805 a British Order-in-Council had restricted the importation of slaves into colonies that had been captured from France and the Netherlands. Britain continued to press other nations to end their trade: in 1810 an Anglo-Portuguese treaty was signed whereby Portugal agreed to restrict its trade into its colonies; in 1813 an Anglo-Swedish treaty was concluded whereby Sweden outlawed its slave trade; in the Treaty of Paris of 1814 France agreed with Britain that the trade was "repugnant to the principles of natural justice" and agreed to abolish the slave trade within five years; and in the Anglo-Netherlands treaty of 1814 the Dutch outlawed their slave trade.

The Royal Navy had established the West Africa Squadron, known as the "preventative squadron", in 1808. With peace in Europe from 1815 and British supremacy at sea secured, the navy turned its attention back to the challenge, and for the next 50 years the squadron operated against the slavers. By the 1850s, around 25 vessels and 2,000 officers and men were on the station, supported by some ships from the small United States Navy and by nearly 1,000 "Kroomen", experienced fishermen recruited as sailors from the coast of what is now Liberia. Service on the West Africa Squadron was a thankless and overwhelming task, full of risk and posing a constant threat to the health of the crews involved. Contending with pestilential swamps and violent encounters, crews suffered a mortality rate of 55 per 1,000 men, compared with 10 per 1,000 for fleets in the Mediterranean or in home waters. Between 1807 and 1860, the Royal Navy's squadron seized approximately 1,600 ships involved in the slave trade and freed 150,000 Africans who were aboard these vessels. Several hundred slaves a year were transported by the navy to the British colony of Sierra Leone, where they were made to serve as "apprentices" in the colonial economy until the Slavery Abolition Act 1833. Action was also taken against African leaders who refused to agree to British treaties to outlaw the trade, for example against "the usurping King of Lagos", deposed in 1851. Anti-slavery treaties were signed with over 50 African rulers.

The last recorded slave ship to land on American soil was the Clotilde, which in 1859 illegally smuggled a number of Africans into the town of Mobile, Alabama. The Africans on board were sold as slaves; however, slavery in the U.S. was abolished in 1865, following the end of the American Civil War. The last survivor of the voyage was Cudjoe Lewis, who died in 1935.
The last country to ban the Atlantic slave trade was Brazil, in 1831. However, a vibrant illegal trade continued to ship large numbers of enslaved people to Brazil and also to Cuba until the 1860s, when British enforcement and further diplomacy finally ended the Atlantic trade. In 1870 Portugal ended the last trade route to the Americas, where Brazil was the last country to import slaves. Slavery within Brazil, however, did not end until 1888, making Brazil the last country in the Americas to end involuntary servitude.

The historian Walter Rodney contends that it was a decline in the profitability of the triangular trades that made it possible for certain basic human sentiments to be asserted at the decision-making level in a number of European countries, Britain being the most crucial because it was the greatest carrier of African captives across the Atlantic. Rodney states that changes in productivity, technology and patterns of exchange in Europe and the Americas informed the decision by the British to end their participation in the trade in 1807. In 1809 President James Madison outlawed the slave trade with the United States.

Nevertheless, Michael Hardt and Antonio Negri argue that abolition was neither strictly a matter of economics nor of morals. Firstly, slavery was (in practice) still beneficial to capitalism, providing not only an influx of capital but also the disciplining of hardship into workers (a form of 'apprenticeship' to the capitalist industrial plant). The more recent argument of a 'moral shift' (the basis of the previous lines of this article) is described by Hardt and Negri as an 'ideological' apparatus serving to eliminate the sentiment of guilt in Western society. Although moral arguments did play a secondary role, they usually had major resonance when used as a strategy to undercut competitors' profits. On this argument, Eurocentric history has been blind to the most important element in this fight for freedom: the constant revolt and antagonism of the enslaved themselves, the most important instance being the Haitian Revolution. The shock of that revolution in 1804 introduced an essential political argument into the ending of the slave trade, which happened only three years later. The spectre of black revolution haunted Europe's supremacy, and it still challenges the ethnocentrism of European 'universal history'.

The African diaspora which was created via slavery has been a complex, interwoven part of American history and culture. In the United States, the success of Alex Haley's book Roots: The Saga of an American Family, published in 1976, and the subsequent television miniseries based upon it, Roots, broadcast on the ABC network in January 1977, led to an increased interest in and appreciation of African heritage amongst the African-American community. Their influence led many African Americans to begin researching their family histories and making visits to West Africa. In turn, a tourist industry grew up to supply them. One notable example is the Roots Homecoming Festival held annually in the Gambia, in which rituals are held through which African Americans can symbolically "come home" to Africa. Disputes have developed, however, between African Americans and African authorities over how to display historic sites that were involved in the Atlantic slave trade, with prominent voices among the former criticising the latter for not displaying such sites sensitively, but instead treating them as a commercial enterprise.
"Back to Africa" In 1816, a group of wealthy European-Americans, some of whom were abolitionists and others who were racial segregationists, founded the American Colonization Society with the express desire of returning African Americans who were in the United States to West Africa. In 1820, they sent their first ship to Liberia, and within a decade around two thousand African Americans had been settled in the west African country. Such re-settlement continued throughout the 19th century, increasing following the deterioration of race relations in the southern states of the US following Reconstruction in 1877. The Rastafari movement, which originated in Jamaica, where 98% of the population are descended from victims of the Atlantic slave trade, has made great efforts to publicize the slavery, and to ensure it is not forgotten, especially through reggae music. In 1998, UNESCO designated 23 August as International Day for the Remembrance of the Slave Trade and its Abolition. Since then there have been a number of events recognizing the effects of slavery. On 9 December 1999 Liverpool City Council passed a formal motion apologizing for the City's part in the slave trade. It was unanimously agreed that Liverpool acknowledges its responsibility for its involvement in three centuries of the slave trade. The City Council has made an unreserved apology for Liverpool's involvement and the continual effect of slavery on Liverpool's Black communities. In 1999, President Mathieu Kerekou of Benin (formerly the Kingdom of Dahomey) issued a national apology for the role Africans played in the Atlantic slave trade. Luc Gnacadja, minister of environment and housing for Benin, later said: "The slave trade is a shame, and we do repent for it." Researchers estimate that 3 million slaves were exported out of the Slave Coast bordering the Bight of Benin. World conference against racism At the 2001 World Conference Against Racism in Durban, South Africa, African nations demanded a clear apology for slavery from the former slave-trading countries. Some nations were ready to express an apology, but the opposition, mainly from the United Kingdom, Portugal, Spain, the Netherlands, and the United States blocked attempts to do so. A fear of monetary compensation might have been one of the reasons for the opposition. As of 2009, efforts are underway to create a UN Slavery Memorial as a permanent remembrance of the victims of the Atlantic slave trade. On 30 January 2006, Jacques Chirac (the then French President) said that 10 May would henceforth be a national day of remembrance for the victims of slavery in France, marking the day in 2001 when France passed a law recognising slavery as a crime against humanity. On 27 November 2006, British Prime Minister Tony Blair made a partial apology for Britain's role in the African slavery trade. However African rights activists denounced it as "empty rhetoric" that failed to address the issue properly. They feel his apology stopped shy to prevent any legal retort. Mr Blair again apologized on March 14, 2007. On 24 August 2007, Ken Livingstone (Mayor of London) apologized publicly for London's role in the slave trade. "You can look across there to see the institutions that still have the benefit of the wealth they created from slavery", he said pointing towards the financial district, before breaking down in tears. He claimed that London was still tainted by the horrors of slavery. Jesse Jackson praised Mayor Livingstone, and added that reparations should be made. 
United States of America

On 24 February 2007 the Virginia General Assembly passed House Joint Resolution Number 728, acknowledging "with profound regret the involuntary servitude of Africans and the exploitation of Native Americans, and call for reconciliation among all Virginians". With the passing of that resolution, Virginia became the first of the 50 United States to acknowledge its involvement in slavery through the state's governing body. The passing of this resolution came on the heels of the 400th anniversary celebration of the city of Jamestown, Virginia, the first permanent English colony to survive in what would become the United States. Jamestown is also recognized as one of the first slave ports of the American colonies.

On 31 May 2007, the Governor of Alabama, Bob Riley, signed a resolution expressing "profound regret" for Alabama's role in slavery and apologizing for slavery's wrongs and lingering effects. Alabama was the fourth Southern state to pass a slavery apology, following votes by the legislatures in Maryland, Virginia, and North Carolina.

On 30 July 2008, the United States House of Representatives passed a resolution apologizing for American slavery and subsequent discriminatory laws. The language included a reference to the "fundamental injustice, cruelty, brutality and inhumanity of slavery and Jim Crow" segregation. On 18 June 2009, the United States Senate issued an apologetic statement decrying the "fundamental injustice, cruelty, brutality, and inhumanity of slavery". The news was welcomed by President Barack Obama.

Africa

In 1998, President Yoweri Museveni of Uganda called on tribal chieftains to apologize for their involvement in the slave trade: "African chiefs were the ones waging war on each other and capturing their own people and selling them. If anyone should apologise it should be the African chiefs. We still have those traitors here even today."

In 2009, the Civil Rights Congress of Nigeria wrote an open letter to all African chieftains who had participated in the trade, calling for an apology for their role in the Atlantic slave trade: "We cannot continue to blame the white men, as Africans, particularly the traditional rulers, are not blameless. In view of the fact that the Americans and Europe have accepted the cruelty of their roles and have forcefully apologized, it would be logical, reasonable and humbling if African traditional rulers ... [can] accept blame and formally apologize to the descendants of the victims of their collaborative and exploitative slave trade."

See also

- Triangular trade
- Arab slave trade
- History of slavery
- Slave ship
- Slave Trade Acts
- Slavery in Africa
- Slavery in Canada
- Slavery in the colonial United States
- Slavery in the United States

References

- Curtin, Philip (1969). The Atlantic Slave Trade. The University of Wisconsin Press. pp. 1–58.
- Mannix, Daniel (1962). Black Cargoes. The Viking Press. pp. Introduction, 1–5.
- Klein, Herbert S. and Jacob Klein. The Atlantic Slave Trade. Cambridge University Press, 1999, pp. 103–139.
- Ronald Segal, The Black Diaspora: Five Centuries of the Black Experience Outside Africa (New York: Farrar, Straus and Giroux, 1995), ISBN 0-374-11396-3, p. 4. "It is now estimated that 11,863,000 slaves were shipped across the Atlantic." (Note in original: Paul E. Lovejoy, "The Impact of the Atlantic Slave Trade on Africa: A Review of the Literature", in Journal of African History 30 (1989), p. 368.)
- Eltis, David and Richardson, David, "The Numbers Game". In: Northrup, David: The Atlantic Slave Trade, 2nd edn, Houghton Mifflin Co., 2002, p. 95.
- Basil Davidson. The African Slave Trade.
- "African Holocaust How Many". African Holocaust Society. Retrieved 2007-01-04. While traditional studies often focus on official French and British records of how many Africans arrived in the New World, these studies neglect to include the death from raids, the fatalities on board the ships, deaths caused by European diseases, the victims from the consequences of enslavement, and trauma of refugees displaced by slaving activities. The number of arrivals also neglects the volume of Africans who arrived via pirates, who for obvious reasons, wouldn't have kept records.
- "African Holocaust Special". African Holocaust Society. Retrieved 2007-01-04.
- Thornton 1998, pp. 15–17.
- Christopher 2006, p. 127.
- Thornton 1998, p. 13.
- Chaunu 1969, pp. 54–58.
- Thornton 1998, p. 24.
- Thornton 1998, pp. 24–26.
- Thornton 1998, p. 27.
- Historical survey > Slave societies, Britannica.
- Ferro, Marc (1997). Colonization: A Global History. Routledge, p. 221, ISBN 978-0-415-14007-2.
- Adu Boahen, Topics In West African History, p. 110.
- Kwaku Person-Lynn, African Involvement In Atlantic Slave Trade.
- "Slave trade: a root of contemporary African Crisis", Africa Economic Analysis 2000.
- Elikia M'bokolo, "The impact of the slave trade on Africa", Le Monde diplomatique, 2 April 1998.
- Thornton, p. 112.
- Thornton, p. 310.
- Slave Trade Debates 1806, Colonial History Series, Dawsons of Pall Mall, London 1968, pp. 203–204.
- Thornton, p. 45.
- Thornton, p. 94.
- Thornton 1998, pp. 28–29.
- Thornton 1998, p. 31.
- Thornton 1998, pp. 29–31.
- Thornton 1998, p. 37.
- Thornton 1998, p. 38.
- Thornton 1998, p. 39.
- Thornton 1998, p. 40.
- Rodney 1972, pp. 95–113.
- Austen 1987, pp. 81–108.
- Thornton 1998, p. 44.
- Anne C. Bailey, African Voices of the Atlantic Slave Trade: Beyond the Silence and the Shame.
- Anstey, Roger: The Atlantic Slave Trade and British Abolition, 1760–1810. London: Macmillan, 1975, p. 5.
- P. C. Emmer, The Dutch in the Atlantic Economy, 1580–1880. Trade, Slavery and Emancipation (1998), p. 17.
- Klein 2010.
- Keith Bradley, Paul Cartledge (2011). The Cambridge World History of Slavery. Cambridge University Press. p. 583. ISBN 0-521-84066-X.
- Hair & Law 1998, p. 257.
- Christopher 2006, p. 6.
- Lovejoy, Paul E., "The Volume of the Atlantic Slave Trade. A Synthesis". In: Northrup, David (ed.): The Atlantic Slave Trade. D.C. Heath and Company, 1994.
- "Skeletons Discovered: First African Slaves in New World", 31 January 2006, LiveScience.com. Accessed 27 September 2006.
- "Smallpox Through History". Archived from the original on 2009-10-31.
- Solow, Barbara (ed.). Slavery and the Rise of the Atlantic System, Cambridge: Cambridge University Press, 1991.
- Notes on the State of Virginia, Query 18.
- Historical survey > The international slave trade.
- "Transatlantic Slave Trade", Hakim Adi.
- Kitchin, Thomas (1778). The Present State of the West-Indies: Containing an Accurate Description of What Parts Are Possessed by the Several Powers in Europe. London: R. Baldwin. p. 21.
- Thornton, p. 304.
- Thornton, p. 305.
- Thornton, p. 311.
- Thornton, p. 122.
- Howard Winant (2001), The World is a Ghetto: Race and Democracy Since World War II, Basic Books, p. 58.
- Catherine Lowe Besteman, Unraveling Somalia: Race, Class, and the Legacy of Slavery (University of Pennsylvania Press: 1999), pp. 83–84.
- Kevin Shillington, ed. (2005), Encyclopedia of African History, CRC Press, vol. 1, pp. 333–34; Nicolas Argenti (2007), The Intestines of the State: Youth, Violence and Belated Histories in the Cameroon Grassfields, University of Chicago Press, p. 42.
- Rights & Treatment of Slaves. Gambia Information Site.
- Mungo Park, Travels in the Interior of Africa v. II, Chapter XXII – War and Slavery.
- The Negro Plot Trials: A Chronology.
- Lovejoy, Paul E. Transformations in Slavery. Cambridge University Press, 2000.
- Midlo Hall, Gwendolyn (2007). Slavery and African Ethnicities in the Americas. University of North Carolina Press. ISBN 978-0-8078-5862-2. Retrieved 2011-01-24.
- Quick guide: The slave trade; Who were the slaves? BBC News.
- Stannard, David. American Holocaust. Oxford University Press, 1993.
- Paths of the Atlantic Slave Trade: Interactions, Identities, and Images.
- Patrick Manning, "The Slave Trade: The Formal Demographics of a Global System" in Joseph E. Inikori and Stanley L. Engerman (eds), The Atlantic Slave Trade: Effects on Economies, Societies and Peoples in Africa, the Americas, and Europe (Duke University Press, 1992), pp. 117–44, online at pp. 119–20.
- "African Holocaust: Kimani Nehusi How Many". African Holocaust Society. Retrieved 2005-01-04.
- Gomez, Michael A. Exchanging Our Country Marks. Chapel Hill, 1998.
- Thornton, John. Africa and Africans in the Making of the Atlantic World, 1400–1800, Cambridge University Press, 1998.
- Stride, G. T., and C. Ifeka. Peoples and Empires of West Africa: West Africa in History 1000–1800. Nelson, 1986.
- Hochschild, Adam (1998). King Leopold's Ghost: A Story of Greed, Terror, and Heroism in Colonial Africa. Houghton Mifflin Books. ISBN 0-618-00190-5.
- Winthrop, reading by John Thornton, "African Political Ethics and the Slave Trade", Millersville College.
- Museum Theme: The Kingdom of Dahomey, Musee Ouidah.
- "Dahomey (historical kingdom, Africa)", Encyclopædia Britannica.
- "Benin seeks forgiveness for role in slave trade", Final Call, 8 October 2002.
- Le Mali précolonial.
- The Story of Africa, BBC.
- "The Anglo-American Magazine", Vol. V, July–December 1854. Retrieved 2 July 2014.
- African Slave Owners, BBC.
- Meltzer, Milton. Slavery: A World History. Da Capo Press, 1993.
- Raymond L. Cohn.
- Cohn, Raymond L. "Deaths of Slaves in the Middle Passage", Journal of Economic History, September 1985.
- Kiple, Kenneth F. (2002). The Caribbean Slave: A Biological History. Cambridge University Press. p. 65. ISBN 0-521-52470-9.
- BBC – History – "British Slaves on the Barbary Coast".
- Health in Slavery.
- "European traders". International Slavery Museum. Retrieved 7 July 2014.
- Elkins, Stanley: Slavery. New York: Universal Library, 1963, p. 48.
- Rawley, James: London, Metropolis of the Slave Trade, 2003.
- Anstey, Roger: The Atlantic Slave Trade and British Abolition, 1760–1810. London: Macmillan, 1975.
- "Slave-grown cotton in greater Manchester", Revealing Histories.
- Wynter, Sylvia (1984a). "New Seville and the Conversion Experience of Bartolomé de Las Casas: Part One". Jamaica Journal 17 (2): 25–32.
- Dauenhauer, Nora Marks; Richard Dauenhauer; Lydia T. Black (2008). Anóoshi Lingít Aaní Ká, Russians in Tlingit America: The Battles of Sitka, 1802 and 1804. Seattle: University of Washington Press. p. XXVI. ISBN 978-0-295-98601-2.
- Stephen D. Behrendt, David Richardson, and David Eltis, W. E. B. Du Bois Institute for African and African-American Research, Harvard University. Based on "records for 27,233 voyages that set out to obtain slaves for the Americas". Stephen Behrendt (1999). "Transatlantic Slave Trade". Africana: The Encyclopedia of the African and African American Experience. New York: Basic Civitas Books. ISBN 0-465-00071-1.
- Curtin, The Atlantic Slave Trade, 1972, p. 88.
- Daudin 2004.
- Slave Revolt in St. Domingue (Haiti).
- Digital History.
- UN report.
- Walter Rodney, How Europe Underdeveloped Africa. ISBN 0950154644.
- Manning, Patrick: "Contours of Slavery and Social Change in Africa". In: Northrup, David (ed.): The Atlantic Slave Trade. D.C. Heath & Company, 1994, pp. 148–160.
- Thornton, John. A Cultural History of the Atlantic World 1250–1820. 2012, p. 64.
- Fage, J. D. "Slavery and the Slave Trade in the Context of West African History", The Journal of African History, Vol. 10, No. 3, 1969, p. 400.
- Eric Williams, Capitalism & Slavery (University of North Carolina Press, 1944), pp. 98–107, 169–177.
- David Richardson, "The British Empire and the Atlantic Slave Trade, 1660–1807", in P. J. Marshall, ed. The Oxford History of the British Empire: Volume II: The Eighteenth Century (1998), pp. 440–64.
- Stanley L. Engerman. "The Slave Trade and British Capital Formation in the Eighteenth Century". JSTOR 3113341.
- Richard Pares. "The Economic Factors in the History of the Empire". JSTOR 2590147.
- J. R. Ward, "The British West Indies in the Age of Abolition", in P. J. Marshall, ed. The Oxford History of the British Empire: Volume II: The Eighteenth Century (1998), pp. 415–39.
- Marx, Karl. "Chapter Thirty-One: Genesis of the Industrial Capitalist". Karl Marx: Capital Volume One. Retrieved 21 February 2014. "the turning of Africa into a warren for the commercial hunting of black-skins, signalised the rosy dawn of the era of capitalist production. These idyllic proceedings are the chief momenta of primitive accumulation."
- Rodney, Walter. How Europe Underdeveloped Africa, London: Bogle-L'Ouverture Publications, 1972.
- David Eltis, Economic Growth and the Ending of the Transatlantic Slave Trade.
- Thornton, John. Africa and Africans in the Making of the Atlantic World, 1400–1800. Cambridge University Press, 1992.
- Joseph E. Inikori, "Ideology versus the Tyranny of Paradigm: Historians and the Impact of the Atlantic Slave Trade on African Societies", African Economic History, 1994.
- "African Holocaust: Dark Voyage audio CD". Owen 'Alik Shahadah.
- "Effects on Africa". Ron Karenga.
- Williams, Eric (1994). Capitalism and Slavery. p. 7.
- David Brion Davis, The Problem of Slavery in the Age of Revolution: 1770–1823 (1975), p. 129.
- Library of Society of Friends Subject Guide: Abolition of the Slave Trade.
- Paul E. Lovejoy (2000). Transformations in Slavery: A History of Slavery in Africa, Cambridge University Press, p. 290.
- John E. Selby and Don Higginbotham, The Revolution in Virginia, 1775–1783 (2007), p. 158.
- Erik S. Root, All Honor to Jefferson?: The Virginia Slavery Debates and the Positive Good Thesis (2008), p. 19.
- William Wilberforce (1759–1833).
- Marcyliena H. Morgan (2002). Language, Discourse and Power in African American Culture, Cambridge University Press, 2002, p. 20.
- Huw Lewis-Jones, "The Royal Navy and the Battle to End Slavery".
- Jo Loosemore, "Sailing against slavery", BBC.
- "Britain forces 'freed slaves' into colonial labour".
- The West African Squadron and slave trade.
- "Navy News, June 2007". Retrieved 2008-02-09.
- Question of the Month – Jim Crow Museum at Ferris State University.
- Diouf, Sylvianne (2007). Dreams of Africa in Alabama: The Slave Ship Clotilda and the Story of the Last Africans Brought to America. Oxford University Press. ISBN 0-19-531104-3.
- Hardt, M. and Negri, A. (2000). Empire. Cambridge, Mass.: Harvard University Press. pp. 114–128.
- Africans in America PBS Special.
- Handley 2006, pp. 21–23.
- Handley 2006, pp. 23–25.
- Osei-Tutu 2006.
- Handley 2006, p. 21.
- Reggae and slavery.
- National Museums Liverpool. Accessed 31 August 2010.
- "Ending the Slavery Blame-Game", The New York Times, 22 April 2010.
- "Benin Officials Apologize For Role In U.S. Slave Trade". Chicago Tribune, 1 May 2000.
- "Chirac names slavery memorial day". BBC News, 30 January 2006. Accessed 22 July 2009.
- "Blair 'sorrow' over slave trade". BBC News, 27 November 2006. Accessed 15 March 2007.
- "Blair 'sorry' for UK slavery role". BBC News, 14 March 2007. Accessed 15 March 2007.
- "Livingstone breaks down in tears at slave trade memorial". Daily Mail, 24 August 2007. Accessed 22 July 2009.
- Muir, Hugh (24 August 2007). "Livingstone weeps as he apologises for slavery". The Guardian. Retrieved 30 July 2014.
- House Joint Resolution Number 728. Commonwealth of Virginia. Accessed 22 July 2009.
- Associated Press. "Alabama Governor Joins Other States in Apologizing For Role in Slavery". Fox News, 31 May 2007. Accessed 22 July 2009.
- Fears, Darryl. "House Issues An Apology For Slavery". The Washington Post, 30 July 2008, p. A03. Accessed 22 July 2009.
- Agence France-Presse. "Obama praises 'historic' Senate slavery apology". Google News, 18 June 2009. Accessed 22 July 2009.
- Smith, David. "African chiefs urged to apologise for slave trade". BBC News. Retrieved 1 March 2014.
- "African chiefs urged to apologise for slave trade". The Guardian, 18 November 2009.
- Academic books
- Austen, Ralph (1987). African Economic History: Internal Development and External Dependency. London: James Currey. ISBN 978-0-85255-009-0.
- Christopher, Emma (2006). Slave Ship Sailors and Their Captive Cargoes, 1730–1807. Cambridge: Cambridge University Press. ISBN 0-521-67966-4.
- Rodney, Walter (1972). How Europe Underdeveloped Africa. London: Bogle L'Ouverture. ISBN 978-0-9501546-4-0.
- Thornton, John (1998). Africa and Africans in the Making of the Atlantic World, 1400–1800 (2nd ed.). New York: Cambridge University Press. ISBN 978-0-521-62217-2.
- Academic articles
- Handley, Fiona J. L. (2006). "Back to Africa: Issues of hosting 'Roots' tourism in West Africa". African Re-Genesis: Confronting Social Issues in the Diaspora (London: UCL Press): 20–31.
- Osei-Tutu, Brempong (2006). "Contested Monuments: African-Americans and the commoditization of Ghana's slave castles". African Re-Genesis: Confronting Social Issues in the Diaspora (London: UCL Press): 9–19.
- Non-academic sources
- Anstey, Roger: The Atlantic Slave Trade and British Abolition, 1760–1810. London: Macmillan, 1975. ISBN 0-333-14846-0.
- Blackburn, Robin (2011). The American Crucible: Slavery, Emancipation and Human Rights. London & New York: Verso. ISBN 978-1-84467-569-2.
- Clarke, Dr. John Henrik: Christopher Columbus and the Afrikan Holocaust: Slavery and the Rise of European Capitalism. Brooklyn, N.Y.: A & B Books, 1992. ISBN 1-881316-14-9.
- Curtin, Philip D.: The Atlantic Slave Trade. University of Wisconsin Press, 1969.
- Daudin, Guillaume: "Profitability of slave and long distance trading in context: the case of eighteenth century France", Journal of Economic History, 2004.
- Drescher, Seymour: From Slavery to Freedom: Comparative Studies in the Rise and Fall of Atlantic Slavery. London: Macmillan Press, 1999. ISBN 0-333-73748-2.
- Eltis, David. "The volume and structure of the transatlantic slave trade: a reassessment." William and Mary Quarterly (2001): 17–46. In JSTOR.
- Emmer, Pieter C.: The Dutch in the Atlantic Economy, 1580–1880. Trade, Slavery and Emancipation. Variorum Collected Studies Series CS614. Aldershot [u.a.]: Variorum, 1998. ISBN 0-86078-697-8.
- Faber, Eli (1998). Jews, Slaves, and the Slave Trade: Setting the Record Straight. NYU Press. Argues the role was minimal.
- Gleeson, David T. and Simon Lewis (eds). Ambiguous Anniversary: The Bicentennial of the International Slave Trade Bans (University of South Carolina Press, 2012), 207 pp.
- Gomez, Michael Angelo: Exchanging Our Country Marks (The Transformation of African Identities in the Colonial and Antebellum South). Chapel Hill, N.C.: The University of North Carolina Press, 1998. ISBN 0-8078-4694-5.
- Hall, Gwendolyn Midlo: Slavery and African Ethnicities in the Americas: Restoring the Links. Chapel Hill, N.C.: The University of North Carolina Press, 2006. ISBN 0-8078-2973-0.
- Horne, Gerald: The Deepest South: The United States, Brazil, and the African Slave Trade. New York, NY: New York University Press, 2007. ISBN 978-0-8147-3688-3, ISBN 978-0-8147-3689-0.
- Inikori, Joseph E., and Stanley L. Engerman (eds) (1992). The Atlantic Slave Trade: Effects on Economies, Societies and Peoples in Africa, the Americas, and Europe. Duke University Press.
- Klein, Herbert S.: The Atlantic Slave Trade (2nd edn, 2010).
- Lindsay, Lisa A. Captives as Commodities: The Transatlantic Slave Trade. Prentice Hall, 2008. ISBN 978-0-13-194215-8.
- McMillin, James A. The Final Victims: Foreign Slave Trade to North America, 1783–1810 (includes database on CD-ROM). ISBN 978-1-57003-546-3.
- Meltzer, Milton: Slavery: A World History. New York: Da Capo Press, 1993. ISBN 0-306-80536-7.
- Northrup, David: The Atlantic Slave Trade (3rd edn, 2010).
- Rawley, James A., and Stephen D. Behrendt. The Transatlantic Slave Trade: A History (U of Nebraska Press, 2005).
- Rediker, Marcus (2007). The Slave Ship: A Human History. New York, NY: Viking Press. ISBN 978-0-670-01823-9.
- Rodney, Walter: How Europe Underdeveloped Africa. Washington, D.C.: Howard University Press; revised edn, 1981. ISBN 0-88258-096-5.
- Rodriguez, Junius P. (ed.), Encyclopedia of Emancipation and Abolition in the Transatlantic World. Armonk, N.Y.: M.E. Sharpe, 2007. ISBN 978-0-7656-1257-1.
- Solow, Barbara (ed.), Slavery and the Rise of the Atlantic System. Cambridge: Cambridge University Press, 1991. ISBN 0-521-40090-2.
- Thomas, Hugh: The Slave Trade: The History of the Atlantic Slave Trade 1440–1870. London: Picador, 1997. ISBN 0-330-35437-X. A comprehensive history.
- Thornton, John: Africa and Africans in the Making of the Atlantic World, 1400–1800, 2nd edn. Cambridge University Press, 1998. ISBN 0-521-62217-4, ISBN 0-521-62724-9, ISBN 0-521-59370-0, ISBN 0-521-59649-1.
- Williams, Eric (1994). Capitalism & Slavery. Chapel Hill: University of North Carolina Press. ISBN 0-8078-2175-6.
- Araujo, Ana Lucia. Public Memory of Slavery: Victims and Perpetrators in the South Atlantic. Cambria Press, 2010.
ISBN 9781604977141 |40x40px||Wikimedia Commons has media related to Slavery.| - Voyages: The Trans-Atlantic Slave Trade Database - African Holocaust: The legacy of Slavery remembered - BBC | Africa|Quick guide: The slave trade - Teaching resources about Slavery and Abolition on blackhistory4schools.com - British documents on slave holding and the slave trade, 1788–1793 Lua error in Module:Navbar at line 23: Invalid title Template:If empty. Lua error in Module:Navbar at line 23: Invalid title Template:If empty.
All resources have worked examples, and there are exercises together with worked solutions for you to practise with.

Are you confused by symbols such as +, –, x and ÷? (Do brackets have you worried?) In this resource you are taken through how the conventions of mathematics organise the order in which operations are carried out.
Order of Operations in Mathematics [PDF 164kb]

If you are not sure how to calculate with both positive and negative numbers, have a look at this resource. Positive and negative numbers are first defined, and then calculating with them is demonstrated using money as an example.
Directed Numbers Positives and Negatives [PDF 106kb]

You might be worried about using a new calculator. You will find an introduction to using a scientific calculator in this resource. The information is based on the Casio fx82es or 100AU, and a range of operations is demonstrated, from clearing and correcting entry errors, through calculating with fractions, to using scientific notation.
Calculators [PDF 195kb]

In this, the first of two resources on decimals, the importance of understanding the decimal (Base 10) system of numbers is emphasised, as this system is the basis for addition and subtraction with decimals.
Decimals 1 Plus Minus [PDF 74kb]

This is the second of two resources on decimals. Decimals are described by their use in the Base 10 (decimal) system. The methods of multiplying and dividing with decimals are explained through this system, following an explanation of multiplying and dividing by 10, 100, 1 000 and so on.
Decimals 2 Times Divide [PDF 435kb]

This is the first in a series of four resources on fractions. Fractions are defined, along with the terminology associated with them. Equivalent fractions are explained, and the method of calculating fractions that are equivalent to each other is demonstrated using algorithms and models.
Fractions 1 Manipulating fractions [PDF 162kb]

Both algorithms and models of multiplication and division with fractions are explained in this resource.
Fractions 2 Times divide [PDF 191kb]

Both algorithms and models of addition and subtraction with fractions are explained in this resource.
Fractions 3 Plus minus [PDF 314kb]

A review of the decimal (Base 10) system is given in this resource because it is the basis of the relationship between fractions, decimals and percentages. This relationship is explained, and methods and examples of converting between each type of number are given.
Fractions 4 Fractions, decimals, percentages [PDF 238kb]

You will be pleasantly surprised to find out just how much you really know about percentages! In this resource you will also find demonstrations of how to find a percentage of a quantity, and how to calculate the percentage change resulting from an increase or decrease in a quantity.
Percentages 1 [PDF 190kb]

This resource is useful to science students in particular, and to anyone wondering how to write numbers in different ways, especially for scientific purposes. Our number system is the decimal (Base 10) system – there is a review of this, followed by definitions and examples of using, first, a specific number of decimal places, then significant figures. Scientific notation is also explained.

In this resource an explanation is given of how to calculate two statistical measurements, the mean and the standard deviation, by describing how each symbol in each formula is used and how to work out the formula as a whole.
Statistics calculations [PDF 159kb]

Here is a gentle introduction to using letters instead of numbers (algebra). There is also a section on how to use formulas, from straightforward to complicated.
Algebra 1 Calculating using Algebra [PDF 277kb]

In this second resource on algebra, the meaning of algebraic terms is explained, as well as how to distinguish between them. Calculating with algebra and simplifying algebraic expressions are also demonstrated.
Algebra 2 Algebraic Expressions [PDF 124kb]

In this resource (the third in the series on algebra), an explanation is first given of what an equation is. The technique of backtracking is demonstrated as a process used in solving an equation. The widely used technique of "balancing both sides", which builds on the backtracking process, is also demonstrated as a well-known method of solving equations.
Algebra 3 Solving equations [PDF 528kb]

The fourth resource on algebra proceeds directly from the third. "Balancing both sides" is extended to enable you to solve equations with other variables involved.
Algebra 4 Rearranging formulae [PDF 148kb]

In this resource, you are taken through some scenarios in which two variables are related to each other. You are shown how to calculate one variable when you know the other, and then how to plot a graph of the relationship.
Linear graphs [PDF 180kb]

In this handout, you are shown how to convert between currencies (for example from Australian dollars to New Zealand dollars, and vice versa), between different systems of measurement (such as Imperial to/from Metric), and between units of measurement within the Metric system itself.
Currency Conversions Rates [PDF 213kb]

First, an explanation is given of the meaning of powers and how to calculate with them. Operations with powers, the zero power and negative powers are covered. Logarithms are then explained, with examples of calculating with them. An explanation of powers expressed as fractions is given, along with their use with logarithms.
Powers and logs 1 [PDF 212kb]

In this resource, the ways in which logarithms may be manipulated are discussed.
Powers and logs 2 [PDF ??kb]

When rearranging formulae, the wanted variable is often a power. In this resource a demonstration is given of how to use logarithms to change the formula so that the power can be isolated.
Algebra 5 Rearranging formulae to isolate powers [PDF 137kb]

Plane shapes are defined. Tessellations and the units used to measure area are discussed. Methods of calculating the areas of rectangles, triangles and circles are demonstrated.
Areas [PDF 388kb]

The second in a series of two resources about areas and perimeters of plane shapes, this resource begins by defining "perimeter" and then demonstrates how to calculate the perimeters of circles, triangles, rectangles and shapes that are made up of combinations of other shapes.
Perimeters [PDF 179kb]

This resource describes right-angled triangles, defines the "hypotenuse" of a right-angled triangle, shows how to calculate the hypotenuse or another side of a triangle, and discusses Pythagorean triads.
Pythagoras [PDF 172kb]

In this resource, the definitions of a solid, of volume and of capacity are given. Prisms and pyramids are also defined, with demonstrations of methods of calculating their volumes and capacities. Converting between volumes and capacities is also discussed. The use of correct units is emphasised.
Volumes [PDF 267kb]

In this resource, the definition of surface area is given.
Methods of calculating the surface areas of various solids are given. The use of correct units is emphasised.
Surface Areas [PDF 191kb]

Have you ever had to divide a very large number by a smaller number without a calculator? It’s often much easier to work out whether the number can be divided by smaller numbers than to do the actual division. This resource will help you discover some methods for doing just that.
Divisibility [PDF 151kb]

- CONNECT: Calculations – ORDER OF OPERATIONS IN MATHEMATICS
- CONNECT: Directed Numbers – SORTING THE POSITIVES FROM THE NEGATIVES
- CONNECT: Calculators – GETTING TO KNOW YOUR SCIENTIFIC CALCULATOR
- CONNECT: Decimals – OPERATIONS: + and –
- CONNECT: Decimals – OPERATIONS: x and ÷
- CONNECT: Fractions – FRACTIONS 1 – MANIPULATING FRACTIONS
- CONNECT: Fractions – FRACTIONS 2 – OPERATIONS WITH FRACTIONS: x and ÷
- CONNECT: Fractions – FRACTIONS 3 – OPERATIONS WITH FRACTIONS: + and –
- CONNECT: Fractions – FRACTIONS 4 – FRACTIONS, DECIMALS, PERCENTAGES – how do they relate?
- CONNECT: Percentages
- CONNECT: Ways of writing numbers – SCIENTIFIC NOTATION; SIGNIFICANT FIGURES; DECIMAL PLACES
- CONNECT: Statistics – USING FORMULAS
- CONNECT: Algebra – FROM THE SPECIFIC TO THE GENERAL
- CONNECT: Algebra – ALGEBRAIC EXPRESSIONS – LIKE OR UNLIKE?
- CONNECT: Algebra – SOLVING EQUATIONS
- CONNECT: Algebra – REARRANGING FORMULAE
- CONNECT: Graphs – STRAIGHT LINES (LINEAR GRAPHS)
- CONNECT: Currency, Conversions, Rates – CHANGING FROM ONE TO THE OTHER
- CONNECT: Powers and logs – POWERS, INDICES, EXPONENTS, LOGARITHMS – THEY ARE ALL THE SAME!
- CONNECT: Powers and logs 2 – USES AND RULES OF LOGARITHMS
- CONNECT: Algebra – REARRANGING FORMULAE TO ISOLATE POWERS
- CONNECT: Areas, Perimeters – 1. AREAS OF PLANE SHAPES
- CONNECT: Areas, Perimeters – 2. PERIMETERS OF PLANE SHAPES
- CONNECT: Pythagoras' Theorem
- CONNECT: Volume, Surface Area – 1. VOLUMES OF SOLIDS
- CONNECT: Volume, Surface Area – 2. SURFACE AREAS OF SOLIDS
- CONNECT: DIVISIBILITY
A geometric sequence is a sequence in which the ratio of any term to its preceding term is a constant other than zero. This ratio is called the common ratio, r. Each term of a geometric sequence is multiplied by the common ratio to get the next term. Like any other sequence, the first term of a geometric sequence is denoted by a_1, the second by a_2, and so on. Therefore, geometric sequences have the following form:

a_1, a_1·r, a_1·r^2, a_1·r^3, …

Example: given the first terms of a geometric sequence, determine the common ratio and find the next three terms. In geometric sequences, the terms increase or decrease by a common ratio. Since we know that the sequence is geometric, it's enough to find the ratio between two consecutive terms; the ratio for every other consecutive pair must then be the same. Taking the first two terms and letting r be the common ratio, we get the equation r = a_2 / a_1. To find the next three terms, we multiply the last known term by r three times.

All geometric sequences have a common ratio, r. Using the common ratio together with the value of the first term of the sequence, an explicit rule describing the sequence can be found. By expressing the terms in a geometric sequence using a_1 and r, a pattern emerges. Note that a_2 is equal to a_1·r, and that a_3 can be written as a_1·r^2. When n increases by 1, the exponent on r increases by 1 as well. Because of this, and because the exponent is 0 when n is 1, the exponent is always 1 less than n. Expressing this in a general form gives the explicit rule:

a_n = a_1·r^(n-1)

Example: given the first four terms of a geometric sequence, find the explicit rule describing the sequence, then use the rule to find the eighth term. To write the explicit rule, we first have to find the common ratio, r. To do so, we can divide any term in the sequence by the term that precedes it; using the second and first terms, r = a_2 / a_1. Substituting a_1 and r into the general rule for geometric sequences gives the desired rule. We can then find the eighth term of the sequence by substituting n = 8 into that rule.

Example: for a geometric sequence, it is known that the common ratio is positive, and two of its terms are given. Find the explicit rule for the sequence and give its first six terms. When the given terms are not consecutive, we can't directly find r. However, if the two terms are k positions apart, the ratio between them must be r^k. This gives an equation that we can solve for r. Now that we know the common ratio, we also have to find a_1 to be able to write the explicit rule. Knowing one term, a subsequent one can be found by multiplying by r; a previous term is instead found by dividing by r. Dividing the earlier given term by r repeatedly, we can work back to a_1. With a_1 and r we have enough information to state the explicit rule, and the remaining terms among the first six can then be evaluated from it.

Pelle's good friend, Lisa, decides to play a trick on Pelle. While he is away, she rearranges his pellets so that they are grouped in a geometric sequence instead of an arithmetic one: the first group has a_1 pellets, the second has a_1·r, the third has a_1·r^2, and so on. To find a rule describing this sequence, note that the first term a_1 is given, and the common ratio can be found by dividing the second term by the first; substituting a_1 and r into the general rule for geometric sequences gives the rule. After finishing the seventh group, Lisa counted the remaining pellets. To figure out whether there are enough to make an eighth group, we must know the eighth term of the sequence, so we substitute n = 8 into the rule and compare the result with the number of pellets remaining.
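Since the explicit rule is just repeated multiplication, it is easy to sketch in code. Here is a minimal Python sketch; the first term and common ratio below are made-up placeholders, not the values from the worked examples above:

```python
def geometric_term(a1, r, n):
    """Return the n-th term of a geometric sequence: a_n = a1 * r**(n - 1)."""
    return a1 * r ** (n - 1)

# Hypothetical example: first term 2, common ratio 3.
a1, r = 2, 3

# First six terms: 2, 6, 18, 54, 162, 486
print([geometric_term(a1, r, n) for n in range(1, 7)])

# Eighth term: 2 * 3**7 = 4374
print(geometric_term(a1, r, 8))
```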
Presenter: Tawni Hunt-Ferrarini

Students learn about the purpose of the reserve requirement, how money is "created" in the economy through fractional reserves, and how the Federal Reserve uses the reserve requirement and loans to correct economic instability.

In Yugoslavia, "between October 1, 1993 and January 24, 1995 prices increased by 5 quadrillion percent. This number is a 5 with 15 zeroes after it." (Thayer Watkins, Professor of Economics at San Jose State University). The uncontrolled creation of money causes a quick decrease in the value of currency and very rapid hyperinflation (annual price increases of hundreds or even thousands of percent), which can destroy an economy. The United States central bank, the Federal Reserve, can protect against such a calamity by controlling the supply of money. One technique the Federal Reserve uses to control how fast (or how slowly) the money supply grows is a reserve requirement for bank deposits. By making changes in that reserve requirement, the Fed can "create" or "destroy" money in an attempt to prevent hyperinflation or correct serious instability in the economy. The Federal Reserve also has two other primary tools of monetary policy, the discount rate and open market operations, through which it can control the money supply.

Read the account of Yugoslavian hyperinflation. Although the Yugoslav economic crisis was largely caused by the physical printing of money, the uncontrolled creation of money by any means can have a devastating effect on an economy. The U.S. Federal Reserve System was created in 1913 with a primary purpose of controlling the US money supply and the value of money.

The reserve requirement is one of the most important tools the Fed uses to control the money supply. Under the reserve requirement, banks are required to hold a percentage of their deposits on account with the Fed or in their own vaults, and they are prohibited from lending this money out to customers. In this way, the Fed puts a limit on the growth of the money supply. The Monetary Control Act of 1980 allows the Fed to set the reserve requirement at 8–14% of deposits, based on economic conditions. The reserve requirement as of February 2002 was 10% of deposits.

"Magic money" is able to grow out of our fractional reserve system because money deposited at the bank is largely loaned back out to other customers; the reserve requirement places a limit on the bank's ability to do so. For example, if Tamika enters town with $1,000 to deposit into the local bank, the bank's actual reserves increase by $1,000. Because of the reserve requirement, those reserves are divided into two separate funds: required reserves, which the bank must hold, and excess reserves, which the bank can lend to other customers. If the Fed sets the reserve requirement at 10 percent, the bank is required to hold on to $100 of Tamika's deposit, and it can then lend the remaining $900 to another customer. If that customer uses the money to buy something from Mariluz, who then deposits that money back into the bank, the money supply grows to $1,900. This "magic money" is created because the same dollars are being counted twice: Tamika holds papers saying that she has $1,000 in her bank account, and Mariluz holds papers saying that she has $900 in her bank account. What will the bank do with Mariluz's $900? In accordance with the 10 percent reserve requirement, the bank must hold $90 and is free to lend the remaining $810 to another customer.
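To make the arithmetic of this relending cycle concrete, here is a minimal Python sketch; the 10 percent requirement and the $1,000 starting deposit mirror the example above, while the rest is illustrative:

```python
reserve_requirement = 0.10   # 10% of each deposit must be held in reserve
deposit = 1000.0             # Tamika's initial deposit
total_deposits = 0.0

# Each round, the bank keeps the required reserve and lends out the excess,
# which (in this simplified model) comes back as a new deposit.
while deposit > 0.01:        # stop once the amount being relent is negligible
    total_deposits += deposit
    deposit = deposit * (1 - reserve_requirement)

print(round(total_deposits, 2))   # approaches 1000 / 0.10 = 10000.0
```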
This process can continue until the last penny has been loaned.

In addition to placing a limit on money creation, the Federal Reserve can make changes in the reserve requirement to try to correct problems of inflation or recession in the economy. If the economy were starting to experience serious inflation, the Federal Reserve could increase the reserve requirement, limiting the banks' ability to lend funds and reducing the money supply. The reduced money supply would increase interest rates, making consumers and firms less likely to want to borrow funds, thereby reducing their demand for products and slowing down the economy. To see how this would work, remember that with a 10 percent reserve requirement, $10,000 could be created from an initial $1,000 deposit. If the reserve requirement were increased to 12 percent, only $8,333.33 could be created from that same deposit. The Fed can also reduce the reserve requirement to increase the money supply in the event of a recession, lowering interest rates and enticing consumers and firms to borrow more and increase their spending. But because the reserve requirement is so powerful, the Federal Reserve Board of Governors only makes changes to it in the case of serious economic problems.

As precise as use of the reserve requirement may seem, several factors limit its effectiveness in correcting economic problems. During a recession, reducing the reserve requirement only allows banks to make more loans available; the Fed cannot force banks to lend the money, nor can it force consumers to take out loans. In addition, those who receive the loaned funds may choose not to redeposit them, holding them in cash instead. Also, money may leave the country through the purchase of imports or foreign investments, and money may enter the country through foreign purchases of our exports or investment in American assets. Although its effectiveness may be limited by several factors, the reserve requirement remains the most powerful single tool in the Federal Reserve's arsenal to combat economic instability. More importantly, the reserve requirement stands as one important protection against the hyperinflation that has seriously crippled economies around the world.

Because of the potential for hyperinflation, the Federal Reserve uses reserve requirements to limit the growth of the money supply. If the Board of Governors sees inflation as a serious economic problem, the reserve requirement can be increased to further limit the ability of banks to make loans and create money. The Fed can also reduce the reserve requirement to make more money available to stimulate the economy during a recession. While some factors limit its effectiveness, the reserve requirement remains a very powerful tool of the Federal Reserve.

For more information, read https://www.federalreserve.gov/monetarypolicy/reservereq.htm.
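As a quick check of the deposit figures above, the simple deposit-multiplier formula (total deposits = initial deposit / reserve ratio, which assumes every excess dollar is relent and redeposited) reproduces both numbers:

```python
def max_money_created(initial_deposit, reserve_ratio):
    # Simple deposit multiplier: assumes all excess reserves are
    # relent and every loan is redeposited in full.
    return initial_deposit / reserve_ratio

print(round(max_money_created(1000, 0.10), 2))  # 10000.0
print(round(max_money_created(1000, 0.12), 2))  # 8333.33
```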
Otitis media refers to inflammation of the middle ear. When an abrupt infection occurs, the condition is called "acute otitis media." Acute otitis media occurs when a cold or allergy, together with the presence of bacteria or viruses, leads to the accumulation of pus and mucus behind the eardrum, blocking the Eustachian tube. This can cause earache and fever.

When fluid sits in the middle ear for weeks, the condition is known as "otitis media with effusion." This occurs in a recovering ear infection. Fluid can remain in the ear for weeks to many months. If not treated, chronic ear infections have potentially serious consequences, such as temporary hearing loss.

Why do children have more ear infections than adults?

To understand earaches and ear infections, you must first know about the Eustachian tube, a narrow channel connecting the inside of the ear to the back of the throat, just above the soft palate and uvula. The tube allows drainage of fluid from the middle ear, which prevents it from building up and bursting the thin eardrum. In a healthy ear, the fluid drains down the tube, assisted by tiny hair cells, and is swallowed. The tube maintains middle ear pressure equal to the air outside the ear, enabling free eardrum movement. Normally, the tube is collapsed most of the time in order to prevent the many germs residing in the nose and mouth from entering the middle ear. Infection occurs when the Eustachian tube fails to do its job. When the tube becomes partially blocked, fluid accumulates in the middle ear, trapping bacteria already present, which then multiply. Additionally, as the air in the middle ear space escapes into the bloodstream, a partial vacuum is formed that draws more bacteria from the nose and mouth into the ear.

Children have Eustachian tubes that are shorter, more horizontal, and straighter than those of adults. These factors make the journey for the bacteria quick and relatively easy. They also make it harder for the ears to clear the fluid, since it cannot drain with the help of gravity. A child's tube is also floppier, with a smaller opening that easily clogs.

Most people with a middle ear infection or fluid have some degree of hearing loss. The average hearing loss in ears with fluid is 24 decibels, equivalent to wearing earplugs. (Twenty-four decibels is about the level of the very softest of whispers.) Thicker fluid can cause much more loss, up to 45 decibels (the range of conversational speech). Suspect hearing loss if someone is unable to understand certain words and speaks louder than normal.

Conductive hearing loss is a form of hearing impairment in which the transmission of sound from the environment to the inner ear is impaired, usually from an abnormality of the external auditory canal or middle ear. This form of hearing loss can be temporary or permanent, and untreated chronic ear infections can lead to it. If fluid is filling the middle ear, hearing loss can be treated by draining the middle ear and inserting a tympanostomy tube. The other form of hearing loss is sensorineural hearing loss: hearing loss due to abnormalities of the inner ear or the auditory division of the 8th cranial nerve. This condition can occur at all ages and is usually permanent.

A hearing test should be performed for children who have frequent ear infections, hearing loss that lasts more than six weeks, or fluid in the middle ear for more than three months.
There is a wide range of medical devices now available to test a child's hearing, Eustachian tube function, and the flexibility of the eardrum. They include the otoscope, tympanometer, and audiometer.

Children and adults can incur temporary hearing loss for reasons other than chronic middle ear infection and Eustachian tube dysfunction. They include:
- Cerumen impaction (impacted earwax).
- Otitis externa: inflammation of the external auditory canal, also called swimmer's ear.
- Cholesteatoma: a mass of keratinized ("horny") squamous cell epithelium and cholesterol in the middle ear, usually resulting from chronic otitis media.
- Otosclerosis: a disease of the otic capsule (bony labyrinth) of the ear, more prevalent in adults, characterized by the formation of soft, vascular bone leading to progressive conductive hearing loss. It occurs due to fixation of the stapes (a small bone in the middle ear). Sensorineural hearing loss may result if the cochlear duct is involved.
- Trauma: trauma to the ear or head may cause temporary or permanent hearing loss.

Reprinted from www.entnet.org/content/patient-health with permission of the American Academy of Otolaryngology—Head and Neck Surgery, copyright © 2017. All rights reserved.
Hands-on Activity: Product Development and the Environment

Learning Objectives
After this activity, students should be able to:

Materials List
Each group needs:

Introduction/Motivation

Everywhere around us are products made from metals and plastics. Some of these products are as simple as a hairbrush or toothbrush, while others are as complex as a vehicle or a computer. Do you ever stop to think about how these products are made? Everything that involves metal and plastic uses natural resources, requires energy to manufacture, and produces waste in our environment. Some products have a large impact on the environment, and some have less of an impact. Products that can be recycled have less of an impact on the environment and are considered "environmentally friendly."

Engineers consider the environmental impacts to our air, water and other natural resources when creating new products. To do this, they consider the entire life cycle of products, including materials acquisition, materials processing, manufacturing, packaging, transportation, use and disposal. These represent all the life phases of products, similar to the life cycles of animals in nature. Looking at the life cycle of a product helps us understand how we use the Earth's natural resources and energy and, particularly, how we produce waste.

An engineer uses a life cycle assessment to measure how much energy is used to create a product and the impact a product has on the environment, from its creation to its final disposal. This involves several general steps to determine the overall environmental impact of a manufactured product. The first step is called an inventory analysis: the energy and materials used during a product's life cycle are calculated, and a number value is assigned for energy and physical materials for all the phases of the life cycle (materials acquisition, materials processing, manufacturing, packaging, transportation, use, and disposal). The next step is an impact analysis, in which the number values from step 1 are added together. This gives a final number that represents the total impact on the environment. Lastly, an improvement analysis is performed to determine any way to reduce the product's impact on the environment. For example, conserving energy or water during any of the phases of the life cycle, or exchanging materials for ones that produce less hazardous waste, would help reduce the impact. The changes are then inserted back into the inventory analysis to determine whether the total environmental impact can be reduced.

Today, we are going to think about the life cycle of some engineered products. Since we are not developing new products, we are going to re-engineer existing ones by breaking the products down into their individual parts and examining each part for our analysis. Using that information, we will assign representative numbers for the environmental impact of our products and compare those impact numbers with those of our classmates' products. Then we will think about ways to reduce our numbers, or in essence, the environmental impact of our products.

Vocabulary/Definitions

Procedure

This activity gives students an idea of how a life cycle assessment can be useful. The numbers on the worksheet are fictional and are only used to compare the environmental impacts of different objects to each other.
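To illustrate the bookkeeping behind the three analysis steps described above, here is a minimal Python sketch; the phase names follow the activity, while the product, its scores and the improvement tried are made-up placeholders:

```python
PHASES = ["materials acquisition", "materials processing", "manufacturing",
          "packaging", "transportation", "use", "disposal"]

# Step 1 (inventory analysis): assign a number value to each life-cycle phase.
# These scores are fictional, like the worksheet's.
hairbrush = {"materials acquisition": 3, "materials processing": 2,
             "manufacturing": 2, "packaging": 1, "transportation": 2,
             "use": 0, "disposal": 3}

# Step 2 (impact analysis): add the phase values to get a total impact number.
def total_impact(product):
    return sum(product[phase] for phase in PHASES)

print(total_impact(hairbrush))  # 13

# Step 3 (improvement analysis): try a change, e.g. recyclable packaging,
# then feed the new numbers back into the inventory.
improved = dict(hairbrush, packaging=0, disposal=1)
print(total_impact(improved))   # 10
```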
In a real engineering life cycle analysis, the numbers for each step are determined using actual measurable inputs and outputs of energy, electricity, raw materials, water, waste and emissions.

Before the Activity

With the Students

Attachments

Safety Issues

Troubleshooting Tips

More complex products, such as CD players, are often more fun for the students, but they take longer to analyze. Choose the products wisely; if one group has a hairbrush while another has a toaster, they may finish at different speeds.

Assessment

Class Discussion: Solicit, integrate and summarize student responses. Hold up a common item such as a stapler and ask students to think about the different parts and pieces that make up products. As a class, create a list of all the parts of the stapler on the classroom board.

Prediction: Have students predict the outcome of the activity before the activity is performed. Show students several example products that they will analyze during the activity. Ask them to predict which will prove to have the largest impact on the environment throughout their life cycles.

Activity Embedded Assessment

Worksheet: Have students follow along with the activity on the Product Life Cycle Assessment Worksheets. After students have finished their worksheets, have them compare answers with their peers.

Considering Design Trade-Offs: Have students think about their suggested product improvements from the worksheets. Tell them that engineers must sometimes consider trade-offs in their designs. For example, might reducing the impact on the environment by reducing the amount of materials in the product also reduce the durability and effectiveness of the product? Have students identify any similar trade-offs that should be considered in their suggested product improvements.

Diagramming: Have students draw graphical models of the life cycles of their products. On their drawings, have them detail the materials, processes and energy involved in each phase of the life cycles. Require that they include the following phases: materials acquisition, materials processing, manufacturing, packaging, transportation, use and disposal of the product.

Activity Extensions

Have students look up the life cycles of some common products. A cell phone is a good example of a product that has changed significantly over time, from the amount of materials to the packaging and accessories. Cell phone parts include the case, display, wiring, keypad, microphone, speaker, antenna and battery. Have students create life cycle assessments for the various parts of cell phones. Cell phone lives average about 18 months in the U.S. Have students compare the life cycle assessment of cell phones to conventional landline phones.

Have students research more about the development, use and disposal of plastic in products, from toy dolls to cars. In fact, plastics account for about 25% of all waste buried in landfills (and many plastics end up in our oceans). Several websites report the amount of plastics in different products and describe the options for recycling plastics. Have students create brochures for their school community about the use of plastics and where to dispose of them properly.

Activity Scaling
Contributors: Malinda Schaefer Zarske, Janet Yowell, Kaelin Cawley

Copyright © 2008 by Regents of the University of Colorado.

Supporting Program: Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder

Acknowledgements

The contents of this digital library curriculum were developed under a grant from the Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education, and National Science Foundation GK-12 grant no. 0338326. However, these contents do not necessarily represent the policies of the Department of Education or the National Science Foundation, and you should not assume endorsement by the federal government.
In computer architecture, a processor register is a quickly accessible location available to a digital processor's central processing unit (CPU). Registers usually consist of a small amount of fast storage, although some registers have specific hardware functions and may be read-only or write-only. Registers are typically addressed by mechanisms other than main memory, but may in some cases be memory-mapped.

Almost all computers, whether of load/store architecture or not, load data from a larger memory into registers, where it is used for arithmetic operations and is manipulated or tested by machine instructions. Manipulated data is then often stored back to main memory, either by the same instruction or by a subsequent one. Modern processors use either static or dynamic RAM as main memory, with the latter usually accessed via one or more cache levels. Processor registers are normally at the top of the memory hierarchy and provide the fastest way to access data. The term normally refers only to the group of registers that are directly encoded as part of an instruction, as defined by the instruction set. However, modern high-performance CPUs often have duplicates of these "architectural registers" in order to improve performance via register renaming, allowing parallel and speculative execution. Modern x86 design acquired these techniques around 1995 with the releases of the Pentium Pro, Cyrix 6x86, Nx586, and AMD K5.

A common property of computer programs is locality of reference: the same values are accessed repeatedly, so holding frequently used values in registers improves performance. This is what makes fast registers and caches meaningful. Allocating frequently used variables to registers can be critical to a program's performance; this register allocation is performed either by a compiler in the code generation phase, or manually by an assembly language programmer.

Categories of registers

Registers are normally measured by the number of bits they can hold, for example, an "8-bit register" or a "32-bit register". A processor often contains several kinds of registers, which can be classified according to their content or the instructions that operate on them:
- User-accessible registers can be read or written by machine instructions. The most common division of user-accessible registers is into data registers and address registers.
- Data registers can hold numeric values such as integers and, in some architectures, floating-point values, as well as characters, small bit arrays and other data. In some older and low-end CPUs, a special data register, known as the accumulator, is used implicitly for many operations.
- Address registers hold addresses and are used by instructions that indirectly access primary memory.
- Some processors contain registers that may only be used to hold an address or only to hold numeric values (in some cases used as an index register whose value is added as an offset from some address); others allow registers to hold either kind of quantity. A wide variety of possible addressing modes, used to specify the effective address of an operand, exist.
- The stack pointer is used to manage the run-time stack. Rarely, other data stacks are addressed by dedicated address registers; see stack machine.
- General-purpose registers (GPRs) can store both data and addresses, i.e., they are combined data/address registers; in rare cases the register file is unified to include floating-point values as well.
- Status registers hold truth values often used to determine whether some instruction should or should not be executed.
- Floating-point registers (FPRs) store floating-point numbers in many architectures.
- Constant registers hold read-only values such as zero, one, or pi.
- Vector registers hold data for vector processing done by SIMD instructions (Single Instruction, Multiple Data).
- Special-purpose registers (SPRs) hold program state; they usually include the program counter (also called the instruction pointer) and the status register; the program counter and status register might be combined in a program status word (PSW) register. The aforementioned stack pointer is sometimes also included in this group. Embedded microprocessors can also have registers corresponding to specialized hardware elements.
- In some architectures, model-specific registers (also called machine-specific registers) store data and settings related to the processor itself. Because their meanings are attached to the design of a specific processor, they cannot be expected to remain standard between processor generations. Memory Type Range Registers (MTRRs) are one example.
- Internal registers – registers not accessible by instructions, used internally for processor operations.

Hardware registers are similar, but occur outside CPUs.

In some architectures, such as SPARC and MIPS, the first or last register in the integer register file is a pseudo-register: it is hardwired to always return zero when read (mostly to simplify indexing modes), and it cannot be overwritten. In Alpha this is also done for the floating-point register file. As a result, register files are commonly quoted as having one register more than are actually usable; for example, 32 registers are quoted when only 31 of them fit within the above definition of a register.

The following table shows the number of registers in several mainstream architectures. Note that in x86-compatible processors the stack pointer (ESP) is counted as an integer register, even though there are a limited number of instructions that may be used to operate on its contents. Similar caveats apply to most architectures. Although all of the architectures listed below are different, almost all follow the basic arrangement known as the von Neumann architecture, first proposed by the mathematician John von Neumann. It is also noteworthy that the number of registers on GPUs is much higher than on CPUs.

| Architecture | Integer registers | FP registers | Notes |
|---|---|---|---|
| AT&T Hobbit | 0 | stack of 7 | The AT&T Hobbit ATT92010 (around 1992) was a commercial version of the 32-bit CRISP stack-machine processor with RISC-style instructions, inspired by the Bell Labs C Machine project and aimed at a design optimized for the C language. Hobbit has no global registers. Addresses can be memory-direct or indirect (for pointers) relative to the stack pointer, without extra instructions or operand bits. The cache is not optimized for multiprocessors, and the FPU is a custom ASIC design following the IEEE standard. |
| Cray-1 | 8 scalar data, 8 address | 8 scalar, 8 vector (64 elements) | Scalar data registers can be integer or floating-point; also 64 scalar scratch-pad T registers and 64 address scratch-pad B registers. |
| 4004 | 1 accumulator, 16 others | 0 | Register A is for general purposes, while registers r0–r15 are for addresses and segments. |
| 8008 | 1 accumulator, 6 others | 0 | The A register is the accumulator, to which all arithmetic is done; the H and L registers can be used in combination as an address register; all registers can be used as operands in load/store/move/increment/decrement instructions and as the other operand in arithmetic instructions. There is no FP unit. |
| 8080 | 1 accumulator, 6 others | 0 | Plus a stack pointer. The A register is the accumulator, to which all arithmetic is done; the register pairs B+C, D+E, and H+L can be used as address registers in some instructions; all registers can be used as operands in load/store/move/increment/decrement instructions and as the other operand in arithmetic instructions. Some instructions use only H+L; another instruction swaps H+L and D+E. There is no FP unit. |
| iAPX432 | 0 | stack of 6 | The iAPX 432 was referred to as a micromainframe, designed to be programmed entirely in high-level languages. Its instruction set architecture was entirely new and a significant departure from Intel's previous 8008 and 8080 processors, as the iAPX 432 programming model was a stack machine with no visible general-purpose registers. It supported object-oriented programming, garbage collection and multitasking, as well as more conventional memory management, directly in hardware and microcode. Direct support for various data structures was also intended to allow modern operating systems to be implemented using far less program code than ordinary processors require. |
| 16-bit x86 | 8 | stack of 8 (if FP present) | 8086/8088, 80186/80188, 80286, with an 8087, 80187 or 80287 for floating point, providing an 80-bit-wide, 8-deep register stack with some instructions able to use registers relative to the top of the stack as operands; without the 8087/80187/80287, there are no floating-point registers. |
| IA-32 | 8 | stack of 8 (if FP present), 8 (if SSE/MMX present) | The 80386 required an 80387 for floating point; later processors had floating point built in, in both cases with an 80-bit-wide, 8-deep register stack and some instructions able to use registers relative to the top of the stack as operands. The Pentium III and later had SSE with additional 128-bit XMM registers. |
| x86-64 | 16 | 16 | FP registers are 128-bit XMM registers, later extended to 256-bit YMM registers with AVX. |
| Xeon Phi | 16 | 32 | Including 32 256/512-bit ZMM registers with AVX-512. |
| Geode GX | 1 data, 1 address | 8 | Geode GX/Media GX/4x86/5x86 is a 486/Pentium-compatible processor emulation made by Cyrix/National Semiconductor. Like Transmeta, the processor had a translation layer that translated x86 code to native code and executed it. It does not support the 128-bit SSE registers, just the 80387 stack of eight 80-bit floating-point registers, and it partially supports 3DNow! from AMD. The native processor contains only 1 data and 1 address register for all purposes, translated into four 32-bit named registers, r1 (base), r2 (data), r3 (back pointer) and r4 (stack pointer), within scratchpad SRAM for integer operations, and it uses the L1 cache for x86 code emulation (note that it is not compatible with some 286/386/486 instructions in real mode). The design was later abandoned after AMD acquired the IP from National Semiconductor and branded it with the Athlon core for the embedded market. |
| SunPlus SPG | 0 | 6 | A 16-bit-wide, 32-bit-address-space stack-machine processor made by the Taiwanese semiconductor company Sunplus. It can be found in VTech's V.Smile line for educational purposes and in video game consoles such as the Mattel HyperScan and the XaviXPORT. It lacks any general-purpose or internal registers for naming/renaming, but its floating-point unit has an 80-bit, 6-stage stack. |
| VM Labs Nuon | 0 | 1 | A 32-bit stack-machine processor developed by VM Labs and specialized for multimedia. It can be found in the company's own Nuon DVD player consoles and in the Game Wave Family Entertainment System from ZaPit Games. The design was heavily influenced by Intel's MMX technology; it contains a 128-byte unified stack cache for both vector and scalar instructions. The unified cache can be divided into eight 128-bit vector registers or thirty-two 32-bit SIMD scalar registers through bank renaming; no integer registers are found in this architecture. |
| Nios II | 31 | 8 | Nios II is based on the MIPS IV instruction set and has 31 32-bit GPRs, with register 0 hardwired to zero, and 8 64-bit floating-point registers. |
| Motorola 6800 | 2 data, 1 index | 0 | Plus a stack pointer. |
| Motorola 68k | 8 data (d0–d7), 8 address (a0–a7) | 8 (if FP present) | Address register 8 (a7) is the stack pointer. The 68000, 68010, 68012, 68020 and 68030 require an FPU for floating point; the 68040 had an FPU built in. FP registers are 80-bit. |
| Emotion Engine | 4 | 32 SIMD + 32 vector | The Emotion Engine's main core contains four 32-bit general-purpose registers for integer computation and 32 128-bit SIMD registers for storing SIMD instructions, streaming data values and some integer calculation values, plus one accumulator register connecting general floating-point computation to the vector register file on the co-processor. The co-processor is built around a 32-entry 128-bit vector register file (which can only store vector values passed from the accumulator in the CPU); no integer registers are built in. Both the vector co-processors (VPU 0/1) and the Emotion Engine main processor are based on a modified MIPS instruction set. The accumulator in this case is not general-purpose but control/status. |
| CUDA | 1 | 8/16/32/64/128 | Each CUDA core contains a single 32/64-bit integer data register, while the floating-point unit contains a much larger number of registers. |
| IBM/360 | 16 | 4 (if FP present) | This applies to the S/360's successors, System/370 through System/390; FP was optional in System/360 and always present in S/370 and later. In processors with the Vector Facility, there are 16 vector registers containing a machine-dependent number of 32-bit elements. |
| z/Architecture | 16 | 16 | 64-bit version of S/360 and successors. |
| MMIX | 256 | 256 | An instruction set designed by Donald Knuth in the late 1990s for pedagogical purposes. |
| NS320xx | 8 | 8 (if FP present) | |
| Xelerated X10 | 1 | 32 | A 32/40-bit stack-machine-based network processor with a modified MIPS instruction set and a 128-bit floating-point unit. |
| Parallax Propeller | 0 | 2 | An eight-core 8/16-bit sliced stack-machine controller with simple logic circuits inside. It has eight cog counters (cores), each containing three 8/16-bit special control registers with a 32-bit × 512 stack RAM, but it does not carry any general registers for integer purposes. Unlike most shadow register files in modern multi-core systems, all of the stack RAM in the cogs can be accessed at the instruction level, so all the cogs can act as one big single general-purpose core if necessary. The floating-point unit is external and contains two 80-bit vector registers. |
| Itanium | 128 | 128 | Plus 64 1-bit predicate registers and 8 branch registers. The FP registers are 82-bit. |
| SPARC | 31 | 32 | Global register 0 is hardwired to 0. Uses register windows. |
| IBM POWER | 32 | 32 | Plus 1 link and 1 count register. |
| Power Architecture | 32 | 32 | Plus 1 link and 1 count register. Processors supporting the Vector facility also have 32 128-bit vector registers. |
| Blackfin | 8 | 16 | Contains two external uncore 40-bit accumulators, though neither is general-purpose. Supports a 64-bit RISC architecture ISA; vector registers are 256-bit. |
| IBM Cell SPE | 128 | | 128 GPRs, which can hold integer, address, or floating-point values. |
| Alpha | 31 | 31 | Registers R31 (integer) and F31 (floating-point) are hardwired to zero. |
| 6502 | 1 data, 2 index | 0 | The 6502's A (accumulator) register is the main register for data storage and memory addressing (8-bit data/16-bit address); X and Y are indirect and direct index registers (respectively), and the SP register is index-only. |
| W65C816S | 1 | 0 | The 65C816 is the 16-bit successor of the 6502. X, Y and D (the direct page register) are condition registers, and the SP register is index-only. The main accumulator is extended to 16 bits (B) while keeping 8 bits (A) for compatibility, and the main register can now address up to 24 bits (16-bit-wide data instructions/24-bit memory addresses). |
| 65k | 1 | 0 | A direct successor of the 6502, the 65002 contains only an A (accumulator) register for main-purpose data storage; it extends data width to 32 bits with a 64-bit instruction width and supports 48-bit virtual addresses in software mode. X and Y are still condition registers and remain 8-bit, and the SP register is index-only but increased to 16 bits wide. |
| MeP | 4 | 8 | The media-embedded processor was a 32-bit processor developed by Toshiba, with a modified 8080 instruction set and only the A, B, C and D registers available in all modes (8/16/32-bit). It is incompatible with x86, but contains an 80-bit floating-point unit that is x87-compatible. |
| ARM 32-bit | 14 | Varies (up to 32) | r15 is the program counter and is not usable as a GPR; r13 is the stack pointer; r8–r13 can be switched out for others (banked) on a processor mode switch. Older versions had 26-bit addressing and used the upper bits of the program counter (r15) for status flags, making that register 32-bit. |
| ARM 64/32-bit | 31 | 32 | Register r31 is the stack pointer or hardwired to 0, depending on the context. |
| MIPS | 31 | 32 | Register 0 is hardwired to 0. |
| Epiphany | 64 (per core) | | Each instruction controls whether registers are interpreted as integers or single-precision floating point. The architecture is scalable to 4096 cores, with 16- and 64-core implementations currently available. |

The number of registers available on a processor and the operations that can be performed using those registers have a significant impact on the efficiency of code generated by optimizing compilers. The Strahler number of an expression tree gives the minimum number of registers required to evaluate that expression tree.
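To make that last point concrete, here is a minimal Python sketch of the Strahler-number computation; the tuple-based tree representation is an assumption for illustration, not a standard API:

```python
def min_registers(node):
    """Strahler number of a binary expression tree.

    A tree is represented as a leaf (any non-tuple value) or a tuple
    (op, left, right); this representation is chosen just for illustration.
    """
    if not isinstance(node, tuple):
        return 1  # a leaf (constant or variable) needs one register
    _, left, right = node
    l, r = min_registers(left), min_registers(right)
    # If both subtrees need the same count, one extra register is needed
    # to hold the first result while the second is being evaluated.
    return l + 1 if l == r else max(l, r)

# (a + b) * (c - d): each subtree needs 2 registers, so the product needs 3.
tree = ('*', ('+', 'a', 'b'), ('-', 'c', 'd'))
print(min_registers(tree))  # 3
```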
In this lesson we will develop the Euclidean algorithm in Python. Euclid's algorithm is a method for finding the greatest common divisor (GCD) of two integers, that is, the largest integer that divides them both. The Euclidean algorithm consists of repeatedly dividing the two numbers and considering the remainder. The procedure ends when the remainder is found to be equal to zero. Let's take some examples.

Euclidean algorithm Python – First example
Let's take two numbers, for example 20 and 15, and proceed according to Euclid's algorithm.
First step: a / b, i.e. 20/15 = 1 remainder 5 – the remainder is non-zero, so I keep dividing.
The second step, exchanging a with b and b with r, is thus 15/5 = 3 remainder 0.
We found a remainder equal to zero, so the GCD is 5.

Euclidean algorithm Python – Second example
Let's work through a second example in order to understand how Euclid's algorithm works. This time we take the numbers 64 and 30.
64/30 = 2 remainder 4 – the remainder is non-zero, so we keep dividing.
30/4 = 7 remainder 2 – the remainder is non-zero, so we keep dividing.
4/2 = 2 remainder 0
The remainder is zero, so the GCD is 2.

Euclidean algorithm in Python
Let's now implement this algorithm in Python. First of all we take the two numbers a and b as input. Then, as long as b is greater than 0, we calculate the remainder of the division of a by b and exchange a with b and b with r. Finally we print a. So here is the complete code that represents the Euclidean algorithm in Python:

a = int(input('Insert the first number: '))
b = int(input('Insert the second number: '))
while b > 0:
    r = a % b    # remainder of the division of a by b
    a, b = b, r  # exchange a with b, and b with r
print(a)

This is a possible implementation of the Euclidean algorithm; in the next lesson we will do other examples on loops in Python.
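For comparison, Python's standard library already provides this calculation: math.gcd implements the same idea. A recursive version of Euclid's algorithm is also common, and either can be used to check the lesson's code:

# Recursive Euclidean algorithm: gcd(a, b) = gcd(b, a mod b).
from math import gcd

def euclid(a, b):
    return a if b == 0 else euclid(b, a % b)

print(euclid(20, 15), gcd(20, 15))  # 5 5
print(euclid(64, 30), gcd(64, 30))  # 2 2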
It’s perhaps the second week of your introductory physics course. Your instructor starts talking about friction and writes the following two formulas on the board: the static friction condition, F_s ≤ μ_s·N, and the kinetic friction formula, F_k = μ_k·N. Then there is probably some sort of lecture like this:

Friction is a contact force when two surfaces interact. The second equation is the kinetic frictional force that is used when two surfaces are sliding against each other. The frictional force in this case depends on the two types of materials interacting (described by the coefficient μk) and how hard these two surfaces are pushed together (the normal force). The static friction case is similar for when the two surfaces are stationary relative to each other. In static friction, the frictional force is whatever value it needs to be to prevent sliding, up to some maximum value.

Technically, this is called Amontons’ First and Second Law of Friction. See, it’s not just Newton that has laws. Notice that both of these friction formulas ONLY depend on the coefficient of friction and the normal force. The force does not depend on the area of contact, and it doesn’t depend on the sliding speed. Next, there will probably be some type of friction laboratory experiment. In this lab, students will measure coefficients of friction and show that the frictional force doesn’t depend on the surface area in contact. Also, the coefficient of friction doesn’t depend on the mass of the object. Pretty standard stuff here.

Friction Is Just a Model

How about another experiment? In this experiment, I am going to put an object on a movable plane. I can then increase the angle of inclination until the block just starts to slide. At the moment it starts to slide, I can calculate both the normal force (pushing the plane against the object) and the friction force (the maximum static friction force). Here is a force diagram at the instant the block starts to slide. Just at the instant this thing starts to slide, all of these forces still have to add up to the zero vector (the object is in equilibrium). That means that the component of the gravitational force perpendicular to the plane must be equal to the magnitude of the normal force, and the component parallel to the plane must be equal to the frictional force. With just the mass and the sliding angle, I can get both the frictional force and the normal force. How can I calculate the coefficient of friction? What if I made a plot of friction vs. normal force for the same surface but with different masses? If the normal force and the frictional force are really proportional (as in the model above), then this data should be linear, with the slope of the line being the coefficient of friction. It’s simple, right? Ok. Let’s do this. In order to keep everything the same except for the mass, I am going to put masses into one of these small boxes. This box has a teflon bottom with an open top so you can put masses inside (oh, it’s from PASCO). There is also a variable-angle inclined plane. This one in particular has a large angle measurement on the side, and here you can see the friction box with a large amount of mass both inside and on top of it. Actually, there is also a similar plane that is made of metal instead of wood. I tried this experiment both with a felt-bottomed box on wood and a teflon box on metal. For each mass, I slowly lifted the incline until the box slipped and then recorded the angle. I repeated the experiment for the same mass 5 or 6 times so that I could get an average angle and a standard deviation in the angle measurement. Here is a plot of friction force vs. 
normal force for both surfaces. The error bars are calculated (using the crank-three-times method) from the standard deviation in the angle measurements. What’s going on here? Let’s look at the data for the teflon (the blue data). I fit a linear function to the first 4 data points and you can see it is very linear. The slope of this line gives a coefficient of static friction with a value of 0.235. However, as I add more and more mass to the friction box, the normal force keeps increasing but the friction force doesn’t increase as much. The same thing happens for the friction box with felt on the bottom. This shows that the “standard” friction model is just that – a model. Models were meant to be broken.

A More Detailed Look at Friction

Really, what is friction? You could say that when two surfaces come near each other (call them surface A and surface B), the atoms in surface B get close enough to interact with the atoms in surface A. The more atoms that are interacting between the two surfaces, the greater the total frictional force. How do you get more atoms to interact from the two surfaces? Well, if you push the surfaces together you can get more atoms from A close enough to the atoms from B to interact. Yes, I am simplifying this a bit. However, the point is that contact area does indeed matter. I am talking about contact area, not surface area. Suppose you put a rubber ball on a glass plate. As you push down on the rubber ball, it will deform such that more of the ball will come in “contact” with the glass. Here is a diagram of this. Greater contact area means greater frictional force. If the contact area is proportional to the normal force, then this looks just like Amontons’ Law, with the frictional force proportional to the normal force. Of course this model “breaks” when the contact area can no longer increase. As I add more and more mass onto the friction box, there is less and less available contact area to expand into. In a sense, the contact area becomes saturated. I suppose that if I kept piling on the weight, the friction force would eventually level out and stop increasing.

It’s Just a Model

This really isn’t a big deal. Amontons’ Law isn’t a law at all (ok – that depends on your definition of Law). It’s just a model. A model is not THE TRUTH, it’s just something that works some of the time. Let me give an example. Gravitational Model. Near the surface of the Earth, we can calculate the gravitational force on an object using the following model: F = m·g. The g vector is the local gravitational field. On Earth, it points “down” and has a magnitude around 9.8 N/kg. We often call this gravitational force the weight, and it’s a very useful model. Even though this model is useful, we still know it’s wrong. The above gravitational model says that it doesn’t matter how high above the surface of the Earth you are, the weight is the same. Of course that’s not true, but it’s approximately true when close to the surface. Here is a better gravitational model: F = G·M·m/r², where r is the distance between the centers of the two interacting objects. This says that the gravitational force decreases as the two interacting objects get farther away from each other. If you put in the mass of the Earth and the radius of the Earth, you get a weight that looks just like the mg version. So, at some point the two versions of gravity agree. The same is true for friction. The introductory physics version of friction works for some stuff, and a more complicated version of friction works for other cases. 
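To see where those two gravity models agree and where they part ways, here is a quick Python sketch using standard values for G and the Earth's mass and radius (the altitudes are just illustrative choices):

# Compare the simple weight model (mg) with Newtonian gravitation (GMm/r^2).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
R_earth = 6.371e6    # radius of the Earth, m
m = 1.0              # a 1 kg test mass

for altitude_km in [0, 10, 400, 36000]:
    r = R_earth + altitude_km * 1000
    full = G * M_earth * m / r**2   # the "better" model
    print(f"{altitude_km:>6} km: GMm/r^2 = {full:5.2f} N  vs  mg = {9.8 * m:.1f} N")

At the surface the two agree (about 9.8 N for a 1 kg mass); at the altitude of the space station the full model already gives about 8.7 N, and at geostationary altitude it is down to roughly 0.2 N.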
Of course you could still use the complicated version of friction for simple cases – but why make your life difficult?
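If you want to redo the analysis from the incline experiment above, the per-trial arithmetic is simple: at the slipping angle θ the friction force is mg·sin θ and the normal force is mg·cos θ, so the slope of a friction-vs-normal-force fit is the coefficient of static friction. A sketch with made-up numbers (these are hypothetical masses and angles, not my actual data):

import numpy as np

g = 9.8  # N/kg

# Hypothetical measurements: masses (kg) and average slipping angles (degrees).
masses = np.array([0.10, 0.20, 0.30, 0.40])
angles = np.radians([13.2, 13.3, 13.1, 13.4])

friction = masses * g * np.sin(angles)  # component parallel to the incline
normal = masses * g * np.cos(angles)    # component perpendicular to the incline

# Slope of the linear fit is the coefficient of static friction.
mu, intercept = np.polyfit(normal, friction, 1)
print(f"mu_s = {mu:.3f}")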
This 4-page lesson includes a page that details the formulas for finding the area of a square, rectangle, triangle, parallelogram, and trapezoid. The second and third pages are practice problems. Answer keys included! The total space inside the boundary of a triangle is called the area of the triangle; area is measured in square units. I present you with an introductory lesson on formulas! Your students will enjoy manipulating the variables in our lesson from one side of the equal sign to the other, depending upon the problem at hand. Formulas presented in the lesson are for Celsius and Fahrenheit conversions, Kelvin and Fahrenheit conversions, area of triangles, area of rectangles, density and mass, the distance formula, and simple interest. This 40-page booklet lists all geometry terms relating to polygons introduced in elementary and middle school. There are: 28 terms relating to all polygons, 23 terms relating to triangles, 12 terms relating to quadrilaterals, and 9 terms relating to other polygons. Definitions, symbols, formulas for area and perimeter, and drawings are all included in this booklet. These bulletin board pages contain 21 facts about triangles, including the definition of triangles, classifying triangles, similar and congruent triangles (including facts about SSS, ASA, and SAS congruence), the Pythagorean Theorem and Pythagorean Triples, as well as area and perimeter formulas. Without any context, A = ½·b·h is a useless formula. However, if the question were "how do we compute the area of a triangle?" then the formula can make sense. Sometimes, what's overlooked is that getting the height is a bit of a pain. What are other ways of obtaining the area of a triangle?
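One standard answer to that closing question is Heron's formula, which computes the area from the three side lengths alone, so no height is needed. A quick Python sketch:

import math

def heron_area(a, b, c):
    # Heron's formula: area from the three side lengths alone.
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))  # 6.0 -- the 3-4-5 right triangle, so (3*4)/2 checks out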
The shape of the universe is a subject of investigation within physical cosmology. Cosmologists and astronomers describe the geometry of the universe, which includes both local geometry and global geometry. The shape of the universe is loosely termed its topology, even though strictly speaking the question goes beyond topology.

Introduction to the shape of the universe. The shape of the universe can be determined by measuring the average density of matter within it, assuming that all matter is evenly distributed, ignoring the distortions caused by 'dense' objects such as galaxies. This assumption is justified by the observations that, while the universe is "weakly" inhomogeneous and anisotropic (see the large-scale structure of the cosmos), it is on average homogeneous and isotropic. Considerations of the geometry of the universe can be split into two parts: the local geometry relates to the observable universe, while the global geometry relates to the universe as a whole - including that which we can't measure.

Local geometry and the shape of the universe. The local geometry is the geometry describing the observable universe. Many astronomical observations, such as those from supernovae and the cosmic microwave background radiation, show the observable universe to be homogeneous and isotropic, and indicate that its expansion is accelerating. In general relativity, this is modelled by the Friedmann-Lemaître-Robertson-Walker (FLRW) model. This model, which can be represented by the Friedmann equations, provides a local geometry of the universe based on the mathematics of fluid dynamics, i.e. it models the matter within the universe as a perfect fluid. Although stars and structures of mass can be introduced into an "almost FLRW" model, a strictly FLRW model is used to approximate the local geometry of the observable universe.

Spatial curvature and the shape of the universe. The homogeneous and isotropic universe allows for a spatial geometry with a constant curvature. One aspect of local geometry to emerge from general relativity and the FLRW model is that the density parameter, Omega (Ω), is related to the curvature of space. Omega is the average density of the universe divided by the critical energy density, i.e. that required for the universe to be flat (zero curvature). The curvature of space is a mathematical description of whether or not the Pythagorean theorem is valid for spatial coordinates; when it is not, an alternative formula expresses the local relationships between distances. If the curvature is zero, then Ω = 1, and the Pythagorean theorem is correct. If Ω > 1, there is positive curvature, and if Ω < 1 there is negative curvature; in either of these cases, the Pythagorean theorem is invalid (but the discrepancies are only detectable in triangles whose sides' lengths are of cosmological scale). If you measure the circumferences of circles of steadily larger diameters and divide the former by the latter, all three geometries give the value π for small enough diameters, but the ratio departs from π for larger diameters unless Ω = 1. For Ω > 1 (the sphere) the ratio falls below π: indeed, a great circle on a sphere has a circumference only twice its diameter. For Ω < 1 the ratio rises above π. Astronomical measurements of both the matter-energy density of the universe and spacetime intervals using supernova events constrain the spatial curvature to be very close to zero, although they do not constrain its sign. 
This means that although the local geometries are generated by the theory of relativity based on spacetime intervals, we can approximate them by the familiar geometries of three spatial dimensions.

Local geometries and the shape of the universe. There are three categories for the possible spatial geometries of constant curvature, depending on the sign of the curvature. If the curvature is exactly zero, then the local geometry is flat; if it is positive, then the local geometry is spherical; and if it is negative, then the local geometry is hyperbolic. If the observable universe is spatially "nearly flat", then a simplification can be made whereby the dynamic, accelerating dimension of the geometry can be separated and omitted by invoking comoving coordinates. Comoving coordinates, from a single frame of reference, leave a static geometry of three spatial dimensions. Under the assumption that the universe is homogeneous and isotropic, the curvature of the observable universe, or the local geometry, is described by one of the three "primitive" geometries: flat (Euclidean), spherical, or hyperbolic. Even if the universe is not exactly spatially flat, the spatial curvature is close enough to zero to place the radius of curvature at approximately the horizon of the observable universe or beyond.

Global geometry and the shape of the universe. Global geometry covers the geometry, in particular the topology, of the whole universe - both the observable universe and beyond. While the local geometry does not determine the global geometry completely, it does limit the possibilities, particularly for a geometry of constant curvature. For a flat spatial geometry, the scale of any properties of the topology is arbitrary and may or may not be directly detectable. For spherical and hyperbolic spatial geometries, the probability of detection of the topology by direct observation depends on the spatial curvature. Using the radius of curvature as a scale, a small curvature of the local geometry, with a corresponding scale greater than the observable horizon, makes the topology difficult to detect. A spherical geometry may well have a radius of curvature that can be detected. In a hyperbolic geometry the radius scale is unlikely to be within the observable horizon. Two strongly overlapping investigations within the study of global geometry are whether the universe is finite or infinite in extent (compactness), and whether its topology is simply or multiply connected.

Compactness of the global shape. A compact space is a general topological notion that encompasses the more applicable notion of a bounded metric space. In cosmological models, it requires one or both of the following: the space has positive curvature (like a sphere), and/or it is "multiply connected", or more strictly non-simply connected. If the 3-manifold of a spatial section of the universe is compact then, as on a sphere, straight lines pointing in certain directions, when extended far enough in the same direction, will reach the starting point, and the space will have a definable "volume" or "scale". If the geometry of the universe is not compact, then it is infinite in extent, with infinite paths of constant direction that generally do not return, and the space has no definable volume, as with the Euclidean plane. If the spatial geometry is spherical, the topology is compact. Otherwise, for a flat or a hyperbolic spatial geometry, the topology can be either compact or infinite. In a flat universe, all of the local curvature and local geometry is flat. In general it can be described by Euclidean space; however, there are some spatial geometries which are flat and bounded in one or more directions. 
These include, in two dimensions, the cylinder and the torus. Similar spaces in three dimensions also exist. A positively curved universe is described by spherical geometry, and can be thought of as a three-dimensional hypersphere. One of the endeavors in the analysis of data from the Wilkinson Microwave Anisotropy Probe (WMAP) is to detect multiple "back-to-back" images of the distant universe in the cosmic microwave background radiation. Assuming the light has enough time since its origin to travel around a bounded universe, multiple images may be observed. While current results and analysis do not rule out a bounded topology, if the universe is bounded then the spatial curvature is small, just as the spatial curvature of the surface of the Earth is small compared to a horizon of a thousand kilometers or so. Based on analyses of the WMAP data, cosmologists during 2004-2006 focused on the Poincaré dodecahedral space (PDS), but also considered horn topologies to be compatible with the data. A hyperbolic universe (frequently but confusingly called "open") is described by hyperbolic geometry, and can be thought of as something like a three-dimensional equivalent of an infinitely extended saddle shape. For hyperbolic local geometry, many of the possible three-dimensional spaces are informally called horn topologies. The ultimate fate of an open universe is that it will continue to expand forever, ending in a Heat Death, a Big Freeze or a Big Rip.
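The circle-ratio test described in this article is easy to reproduce numerically. On a unit sphere, a circle of geodesic radius r has circumference 2π·sin(r) while its on-surface diameter is 2r, so the circumference-to-diameter ratio is π·sin(r)/r; here is a small Python sketch (the sample radii are arbitrary choices):

import math

def sphere_ratio(r):
    # Circumference / on-surface diameter for a circle of geodesic radius r
    # on a unit sphere. Flat (Euclidean) space would always give pi.
    return 2 * math.pi * math.sin(r) / (2 * r)

for r in [0.01, 0.5, 1.0, math.pi / 2]:
    print(f"r = {r:5.3f}: ratio = {sphere_ratio(r):.4f}")

# Tiny circles give almost exactly pi; a great circle (r = pi/2) gives 2,
# matching the statement that its circumference is only twice its diameter.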
According to the consensus in modern genetics, anatomically modern humans first arrived on the Indian subcontinent from Africa between 73,000 and 55,000 years ago. However, the earliest known human remains in South Asia date to 30,000 years ago. Settled life, which involves the transition from foraging to farming and pastoralism, began in South Asia around 7,000 BCE. At the site of Mehrgarh, the domestication of wheat and barley can be documented, rapidly followed by that of goats, sheep, and cattle. By 4,500 BCE, settled life had spread more widely, and began to gradually evolve into the Indus Valley Civilisation, an early civilisation of the Old World, which was contemporaneous with Ancient Egypt and Mesopotamia. This civilisation flourished between 2,500 BCE and 1,900 BCE in what today is Pakistan and north-western India, and was noted for its urban planning, baked-brick houses, elaborate drainage, and water supply. In the early second millennium BCE, persistent drought caused the population of the Indus Valley to scatter from large urban centres to villages. Around the same time, Indo-Aryan tribes moved into the Punjab from Central Asia in several waves of migration. Their Vedic period (1500–500 BCE) was marked by the composition of the Vedas, large collections of hymns of these tribes. Their varna system, which evolved into the caste system, consisted of a hierarchy of priests, warriors, and free peasants; it excluded indigenous peoples by labeling their occupations impure. The pastoral and nomadic Indo-Aryans spread from the Punjab into the Gangetic plain, large swaths of which they deforested for agricultural use. The composition of Vedic texts ended around 600 BCE, when a new, interregional culture arose. Small chieftaincies, or janapadas, were consolidated into larger states, or mahajanapadas, and a second urbanisation took place. This urbanisation was accompanied by the rise of new ascetic movements in Greater Magadha, including Jainism and Buddhism, which opposed the growing influence of Brahmanism and the primacy of rituals, presided over by Brahmin priests, that had come to be associated with Vedic religion, and gave rise to new religious concepts. In response to the success of these movements, Vedic Brahmanism was synthesised with the preexisting religious cultures of the subcontinent, giving rise to Hinduism. Most of the Indian subcontinent was conquered by the Maurya Empire during the 4th and 3rd centuries BCE. From the 3rd century BCE onwards, Prakrit and Pali literature in the north and Tamil Sangam literature in southern India started to flourish. Wootz steel originated in south India in the 3rd century BCE and was exported to foreign countries. During the Classical period, various parts of India were ruled by numerous dynasties for the next 1,500 years, among which the Gupta Empire stands out. This period, witnessing a Hindu religious and intellectual resurgence, is known as the classical or "Golden Age of India". During this period, aspects of Indian civilisation, administration, culture, and religion (Hinduism and Buddhism) spread to much of Asia, while kingdoms in southern India had maritime business links with the Middle East and the Mediterranean. Indian cultural influence spread over many parts of Southeast Asia, which led to the establishment of Indianised kingdoms in Southeast Asia (Greater India). 
The most significant event between the 7th and 11th centuries was the Tripartite Struggle centred on Kannauj, which lasted for more than two centuries between the Pala Empire, Rashtrakuta Empire, and Gurjara-Pratihara Empire. Southern India saw the rise of multiple imperial powers from the middle of the fifth century, most notably the Chalukya, Chola, Pallava, Chera, Pandyan, and Western Chalukya Empires. The Chola dynasty conquered southern India and successfully invaded parts of Southeast Asia, Sri Lanka, the Maldives, and Bengal in the 11th century. In the early medieval period, Indian mathematics, including Hindu numerals, influenced the development of mathematics and astronomy in the Arab world. Islamic conquests made limited inroads into modern Afghanistan and Sindh as early as the 8th century, followed by the invasions of Mahmud of Ghazni. The Delhi Sultanate was founded in 1206 CE by Central Asian Turks who ruled a major part of the northern Indian subcontinent in the early 14th century, but declined in the late 14th century, which saw the advent of the Deccan Sultanates. The wealthy Bengal Sultanate also emerged as a major power, lasting over three centuries. This period also saw the emergence of several powerful Hindu states, notably Vijayanagara and Rajput states such as Mewar. The 15th century saw the advent of Sikhism. The early modern period began in the 16th century, when the Mughal Empire conquered most of the Indian subcontinent, ushering in proto-industrialisation and becoming the biggest global economy and manufacturing power, with a nominal GDP that amounted to a quarter of world GDP, greater than the GDP of all of Europe combined. The Mughals suffered a gradual decline in the early 18th century, which provided opportunities for the Marathas, Sikhs, Mysoreans, Nizams, and Nawabs of Bengal to exercise control over large regions of the Indian subcontinent. From the mid-18th century to the mid-19th century, large regions of India were gradually annexed by the East India Company, a chartered company acting as a sovereign power on behalf of the British government. Dissatisfaction with company rule in India led to the Indian Rebellion of 1857, which rocked parts of north and central India, and led to the dissolution of the company. India was afterwards ruled directly by the British Crown, in the British Raj. After World War I, a nationwide struggle for independence was launched by the Indian National Congress, led by Mahatma Gandhi and noted for nonviolence. Later, the All-India Muslim League would advocate for a separate Muslim-majority nation state. The British Indian Empire was partitioned in August 1947 into the Dominion of India and the Dominion of Pakistan, each gaining its independence.

Prehistoric era (until c. 3300 BCE)

Hominin expansion from Africa is estimated to have reached the Indian subcontinent approximately two million years ago, and possibly as early as 2.2 million years before the present. This dating is based on the known presence of Homo erectus in Indonesia by 1.8 million years before the present and in East Asia by 1.36 million years before the present, as well as the discovery of stone tools made by proto-humans in the Soan River valley, at Riwat, and in the Pabbi Hills, in present-day Pakistan. Although some older discoveries have been claimed, the suggested dates, based on the dating of fluvial sediments, have not been independently verified. 
The oldest hominin fossil remains in the Indian subcontinent are those of Homo erectus or Homo heidelbergensis, from the Narmada Valley in central India, and are dated to approximately half a million years ago. Older fossil finds have been claimed, but are considered unreliable. Reviews of archaeological evidence have suggested that occupation of the Indian subcontinent by hominins was sporadic until approximately 700,000 years ago, and was geographically widespread by approximately 250,000 years before the present, from which point onward archaeological evidence of proto-human presence is widely attested. According to a historical demographer of South Asia, Tim Dyson: "Modern human beings—Homo sapiens—originated in Africa. Then, intermittently, sometime between 60,000 and 80,000 years ago, tiny groups of them began to enter the north-west of the Indian subcontinent. It seems likely that initially, they came by way of the coast. ... it is virtually certain that there were Homo sapiens in the subcontinent 55,000 years ago, even though the earliest fossils that have been found of them date to only about 30,000 years before the present." "Y-Chromosome and Mt-DNA data support the colonization of South Asia by modern humans originating in Africa. ... Coalescence dates for most non-European populations average to between 73–55 ka." And according to an environmental historian of South Asia, Michael Fisher: "Scholars estimate that the first successful expansion of the Homo sapiens range beyond Africa and across the Arabian Peninsula occurred from as early as 80,000 years ago to as late as 40,000 years ago, although there may have been prior unsuccessful emigrations. Some of their descendants extended the human range ever further in each generation, spreading into each habitable land they encountered. One human channel was along the warm and productive coastal lands of the Persian Gulf and northern Indian Ocean. Eventually, various bands entered India between 75,000 years ago and 35,000 years ago." Archaeological evidence has been interpreted to suggest the presence of anatomically modern humans in the Indian subcontinent 78,000–74,000 years ago, although this interpretation is disputed. The occupation of South Asia by modern humans, over a long time and initially in varying forms of isolation as hunter-gatherers, has made its population highly diverse, second only to Africa in human genetic diversity. According to Tim Dyson: "Genetic research has contributed to knowledge of the prehistory of the subcontinent's people in other respects. In particular, the level of genetic diversity in the region is extremely high. Indeed, only Africa's population is genetically more diverse. Related to this, there is strong evidence of ‘founder’ events in the subcontinent. By this is meant circumstances where a subgroup—such as a tribe—derives from a tiny number of ‘original’ individuals. Further, compared to most world regions, the subcontinent's people are relatively distinct in having practised comparatively high levels of endogamy." Settled life emerged on the subcontinent in the western margins of the Indus River alluvium approximately 9,000 years ago, evolving gradually into the Indus valley civilisation of the third millennium BCE. According to Tim Dyson: "By 7,000 years ago agriculture was firmly established in Baluchistan. And, over the next 2,000 years, the practice of farming slowly spread eastwards into the Indus valley." And according to Michael Fisher: "The earliest discovered instance ... 
of well-established, settled agricultural society is at Mehrgarh in the hills between the Bolan Pass and the Indus plain (today in Pakistan) (see Map 3.1). From as early as 7000 BCE, communities there started investing increased labor in preparing the land and selecting, planting, tending, and harvesting particular grain-producing plants. They also domesticated animals, including sheep, goats, pigs, and oxen (both humped zebu [Bos indicus] and unhumped [Bos taurus]). Castrating oxen, for instance, turned them from mainly meat sources into domesticated draft-animals as well."

Bronze Age – first urbanisation (c. 3300 – c. 1800 BCE)

Indus Valley Civilisation

The Bronze Age in the Indian subcontinent began around 3300 BCE. Along with Ancient Egypt and Mesopotamia, the Indus valley region was one of three early cradles of civilisation of the Old World. Of the three, the Indus Valley Civilisation was the most expansive, and at its peak may have had a population of over five million. The civilisation was primarily centred in modern-day Pakistan, in the Indus river basin, and secondarily in the Ghaggar-Hakra river basin in eastern Pakistan and northwestern India. The Mature Indus civilisation flourished from about 2600 to 1900 BCE, marking the beginning of urban civilisation on the Indian subcontinent. The civilisation included cities such as Harappa, Ganeriwala, and Mohenjo-daro in modern-day Pakistan, and Dholavira, Kalibangan, Rakhigarhi, and Lothal in modern-day India. Inhabitants of the ancient Indus river valley, the Harappans, developed new techniques in metallurgy and handicraft (carnelian products, seal carving), and produced copper, bronze, lead, and tin. The civilisation is noted for its cities built of brick, its roadside drainage system, and its multi-storeyed houses, and is thought to have had some kind of municipal organisation. After the collapse of the Indus Valley Civilisation, its inhabitants migrated from the river valleys of the Indus and Ghaggar-Hakra towards the Himalayan foothills of the Ganga-Yamuna basin.

Ochre Coloured Pottery Culture

During the 2nd millennium BCE, the Ochre Coloured Pottery culture occupied the Ganga-Yamuna Doab region. These were rural settlements practising agriculture and hunting, using copper tools such as axes, spears, arrows, and antenna swords, and keeping domesticated cattle, goats, sheep, horses, pigs, and dogs. The Sinauli site gained attention for its Bronze Age solid-disk wheel carts, found in 2018, which were interpreted by some as horse-pulled "chariots".

Iron Age (1500 – 200 BCE)

Vedic period (c. 1500 – 600 BCE)

The Vedic period is the period when the Vedas were composed – the liturgical hymns of the Indo-Aryan people. The Vedic culture was located in part of north-west India, while other parts of India had a distinct cultural identity during this period. The Vedic culture is described in the texts of the Vedas, still sacred to Hindus, which were orally composed and transmitted in Vedic Sanskrit. The Vedas are some of the oldest extant texts in India. The Vedic period, lasting from about 1500 to 500 BCE, contributed the foundations of several cultural aspects of the Indian subcontinent. In terms of culture, many regions of the Indian subcontinent transitioned from the Chalcolithic to the Iron Age in this period. Historians have analysed the Vedas to posit a Vedic culture in the Punjab region and the upper Gangetic Plain. 
Most historians also consider this period to have encompassed several waves of Indo-Aryan migration into the Indian subcontinent from the north-west. The peepal tree and the cow were sanctified by the time of the Atharva Veda. Many of the concepts of Indian philosophy espoused later, like dharma, trace their roots to Vedic antecedents. Early Vedic society is described in the Rigveda, the oldest Vedic text, believed to have been compiled during the 2nd millennium BCE, in the northwestern region of the Indian subcontinent. At this time, Aryan society consisted largely of tribal and pastoral groups, distinct from the Harappan urbanisation, which had been abandoned. The early Indo-Aryan presence probably corresponds, in part, to the Ochre Coloured Pottery culture in archaeological contexts. At the end of the Rigvedic period, Aryan society began to expand from the northwestern region of the Indian subcontinent into the western Ganges plain. It became increasingly agricultural and was socially organised around the hierarchy of the four varnas, or social classes. This social structure was characterised both by syncretising with the native cultures of northern India, and eventually by the exclusion of some indigenous peoples through labeling their occupations impure. During this period, many of the previous small tribal units and chiefdoms began to coalesce into janapadas (monarchical, state-level polities). The Iron Age in the Indian subcontinent, from about 1200 BCE to the 6th century BCE, is defined by the rise of the janapadas – realms, republics, and kingdoms – notably the Iron Age kingdoms of Kuru, Panchala, Kosala, and Videha. The Kuru kingdom was the first state-level society of the Vedic period, corresponding to the beginning of the Iron Age in northwestern India, around 1200–800 BCE, as well as to the composition of the Atharvaveda (the first Indian text to mention iron, as śyāma ayas, literally "black metal"). The Kuru state organised the Vedic hymns into collections and developed the orthodox srauta ritual to uphold the social order. Two key figures of the Kuru state were king Parikshit and his successor Janamejaya, who transformed this realm into the dominant political, social, and cultural power of northern Iron Age India. When the Kuru kingdom declined, the centre of Vedic culture shifted to their eastern neighbours, the Panchala kingdom. The archaeological PGW (Painted Grey Ware) culture, which flourished in the Haryana and western Uttar Pradesh regions of northern India from about 1100 to 600 BCE, is believed to correspond to the Kuru and Panchala kingdoms. During the Late Vedic Period, the kingdom of Videha emerged as a new centre of Vedic culture, situated even farther to the east (in what is today Nepal and the Bihar state of India), reaching its prominence under king Janaka, whose court provided patronage for Brahmin sages and philosophers such as Yajnavalkya, Aruni, and Gargi Vachaknavi. The later part of this period corresponds with a consolidation of increasingly large states and kingdoms, called mahajanapadas, across northern India.

Second urbanisation (600–200 BCE)

During the time between 800 and 200 BCE the Śramaṇa movement formed, from which originated Jainism and Buddhism. In the same period, the first Upanishads were written. After 500 BCE, the so-called "second urbanisation" started, with new urban settlements arising on the Ganges plain, especially the Central Ganges plain. 
The foundations for the "second urbanisation" were laid prior to 600 BCE, in the Painted Grey Ware culture of the Ghaggar-Hakra and Upper Ganges Plain; although most PGW sites were small farming villages, "several dozen" PGW sites eventually emerged as relatively large settlements that can be characterised as towns, the largest of which were fortified by ditches or moats and embankments made of piled earth with wooden palisades, albeit smaller and simpler than the elaborately fortified large cities which grew after 600 BCE in the Northern Black Polished Ware culture. The Central Ganges Plain, where Magadha gained prominence, forming the base of the Maurya Empire, was a distinct cultural area, with new states arising after 500 BCE during the so-called "second urbanisation". It was influenced by the Vedic culture, but differed markedly from the Kuru-Panchala region. It "was the area of the earliest known cultivation of rice in South Asia and by 1800 BCE was the location of an advanced Neolithic population associated with the sites of Chirand and Chechar". In this region, the Śramaṇic movements flourished, and Jainism and Buddhism originated.

Buddhism and Jainism

The period from around 800 BCE to 400 BCE witnessed the composition of the earliest Upanishads. The Upanishads form the theoretical basis of classical Hinduism and are known as Vedanta (the conclusion of the Vedas). The increasing urbanisation of India in the 7th and 6th centuries BCE led to the rise of new ascetic or Śramaṇa movements which challenged the orthodoxy of rituals. Mahavira (c. 549–477 BCE), proponent of Jainism, and Gautama Buddha (c. 563–483 BCE), founder of Buddhism, were the most prominent icons of this movement. Śramaṇa gave rise to the concept of the cycle of birth and death, the concept of samsara, and the concept of liberation. The Buddha found a Middle Way that ameliorated the extreme asceticism found in the Śramaṇa religions. Around the same time, Mahavira (the 24th Tirthankara in Jainism) propagated a theology that was later to become Jainism. However, Jain orthodoxy believes that the teachings of the Tirthankaras predate all known time, and scholars believe Parshvanatha (c. 872 – c. 772 BCE), accorded status as the 23rd Tirthankara, was a historical figure. The Vedas are believed to have documented a few Tirthankaras and an ascetic order similar to the Śramaṇa movement. The Sanskrit epics Ramayana and Mahabharata were composed during this period. The Mahabharata remains, today, the longest single poem in the world. Historians formerly postulated an "epic age" as the milieu of these two epic poems, but now recognise that the texts (which are both familiar with each other) went through multiple stages of development over centuries. For instance, the Mahabharata may have been based on a small-scale conflict (possibly about 1000 BCE) which was eventually "transformed into a gigantic epic war by bards and poets". There is no conclusive proof from archaeology as to whether the specific events of the Mahabharata have any historical basis. The existing texts of these epics are believed to belong to the post-Vedic age, between c. 400 BCE and 400 CE. The period from c. 600 BCE to c. 300 BCE witnessed the rise of the mahajanapadas, sixteen powerful and vast kingdoms and oligarchic republics. These mahajanapadas evolved and flourished in a belt stretching from Gandhara in the northwest to Bengal in the eastern part of the Indian subcontinent, and included parts of the trans-Vindhyan region. 
Ancient Buddhist texts, like the Anguttara Nikaya, make frequent reference to these sixteen great kingdoms and republics – Anga, Assaka, Avanti, Chedi, Gandhara, Kashi, Kamboja, Kosala, Kuru, Magadha, Malla, Matsya (or Machcha), Panchala, Surasena, Vriji, and Vatsa. This period saw the second major rise of urbanism in India after the Indus Valley Civilisation. Early "republics", or gaṇa sanghas, such as the Shakyas, Koliyas, Mallas, and Licchavis, had republican governments. Gaṇa sanghas such as the Mallas, centred in the city of Kusinagara, and the Vajjian Confederacy (Vajji), centred in the city of Vaishali, existed as early as the 6th century BCE and persisted in some areas until the 4th century CE. The most famous clan amongst the ruling confederate clans of the Vajji mahajanapada were the Licchavis. This period corresponds in an archaeological context to the Northern Black Polished Ware culture. Especially focused on the Central Ganges plain but also spreading across vast areas of the northern and central Indian subcontinent, this culture is characterised by the emergence of large cities with massive fortifications, significant population growth, increased social stratification, wide-ranging trade networks, construction of public architecture and water channels, specialised craft industries (e.g., ivory and carnelian carving), a system of weights, punch-marked coins, and the introduction of writing in the form of the Brahmi and Kharosthi scripts. The language of the gentry at that time was Sanskrit, while the languages of the general population of northern India are referred to as Prakrits. Many of the sixteen kingdoms had coalesced into four major ones by 500/400 BCE, by the time of Gautama Buddha. These four were Vatsa, Avanti, Kosala, and Magadha. The life of Gautama Buddha was mainly associated with these four kingdoms.

Early Magadha dynasties

Magadha formed one of the sixteen mahā-janapadas (Sanskrit: "Great Realms") or kingdoms in ancient India. The core of the kingdom was the area of Bihar south of the Ganges; its first capital was Rajagriha (modern Rajgir), then Pataliputra (modern Patna). Magadha expanded to include most of Bihar and Bengal with the conquest of Licchavi and Anga respectively, followed by much of eastern Uttar Pradesh and Orissa. The ancient kingdom of Magadha is heavily mentioned in Jain and Buddhist texts. It is also mentioned in the Ramayana, the Mahabharata, and the Puranas. The earliest reference to the Magadha people occurs in the Atharva-Veda, where they are found listed along with the Angas, Gandharis, and Mujavats. Magadha played an important role in the development of Jainism and Buddhism. The Magadha kingdom included republican communities such as the community of Rajakumara. Villages had their own assemblies under their local chiefs, called Gramakas. Their administrations were divided into executive, judicial, and military functions. Early sources, from the Buddhist Pāli Canon, the Jain Agamas, and the Hindu Puranas, mention Magadha being ruled by the Haryanka dynasty for some 200 years, c. 600–413 BCE. King Bimbisara of the Haryanka dynasty led an active and expansive policy, conquering Anga in what is now eastern Bihar and West Bengal. King Bimbisara was overthrown and killed by his son, Prince Ajatashatru, who continued the expansionist policy of Magadha. During this period, Gautama Buddha, the founder of Buddhism, lived much of his life in the Magadha kingdom. 
He attained enlightenment in Bodh Gaya, gave his first sermon in Sarnath, and the first Buddhist council was held in Rajgriha. The Haryanka dynasty was overthrown by the Shishunaga dynasty. The last Shishunaga ruler, Kalasoka, was assassinated by Mahapadma Nanda in 345 BCE, the first of the so-called Nine Nandas – Mahapadma and his eight sons.

Nanda Empire and Alexander's campaign

The Nanda Empire, at its greatest extent, stretched from Bengal in the east to the Punjab region in the west, and as far south as the Vindhya Range. The Nanda dynasty was famed for its great wealth. The Nanda dynasty built on the foundations laid by their Haryanka and Shishunaga predecessors to create the first great empire of north India. To achieve this objective they built a vast army, consisting of 200,000 infantry, 20,000 cavalry, 2,000 war chariots, and 3,000 war elephants (at the lowest estimates). According to the Greek historian Plutarch, the size of the Nanda army was even larger, numbering 200,000 infantry, 80,000 cavalry, 8,000 war chariots, and 6,000 war elephants. However, the Nanda Empire never had the opportunity to see its army face Alexander the Great, who invaded north-western India at the time of Dhana Nanda: Alexander was forced to confine his campaign to the plains of Punjab and Sindh, for his forces mutinied at the river Beas and refused to go any further upon encountering the Nanda and Gangaridai forces.

The Maurya Empire (322–185 BCE) unified most of the Indian subcontinent into one state, and was the largest empire ever to exist on the Indian subcontinent. At its greatest extent, the Mauryan Empire stretched to the north up to the natural boundaries of the Himalayas, and to the east into what is now Assam. To the west, it reached beyond modern Pakistan, to the Hindu Kush mountains in what is now Afghanistan. The empire was established by Chandragupta Maurya, assisted by Chanakya (Kautilya), in Magadha (in modern Bihar) when he overthrew the Nanda dynasty. Chandragupta rapidly expanded his power westwards across central and western India, and by 317 BCE the empire had fully occupied northwestern India. The Mauryan Empire then defeated Seleucus I, a diadochus and founder of the Seleucid Empire, during the Seleucid–Mauryan war, thus gaining additional territory west of the Indus River. Chandragupta's son Bindusara succeeded to the throne around 297 BCE. By the time he died in c. 272 BCE, a large part of the Indian subcontinent was under Mauryan suzerainty. However, the region of Kalinga (around modern-day Odisha) remained outside Mauryan control, perhaps interfering with their trade with the south. Bindusara was succeeded by Ashoka, whose reign lasted for around 37 years until his death in about 232 BCE. His campaign against the Kalingans in about 260 BCE, though successful, led to immense loss of life and misery. This filled Ashoka with remorse and led him to shun violence, and subsequently to embrace Buddhism. The empire began to decline after his death, and the last Mauryan ruler, Brihadratha, was assassinated by Pushyamitra Shunga to establish the Shunga Empire. Under Chandragupta Maurya and his successors, internal and external trade, agriculture, and economic activities all thrived and expanded across India, thanks to the creation of a single efficient system of finance, administration, and security. The Mauryans built the Grand Trunk Road, one of Asia's oldest and longest major roads, connecting the Indian subcontinent with Central Asia. 
After the Kalinga War, the Empire experienced nearly half a century of peace and security under Ashoka. Mauryan India also enjoyed an era of social harmony, religious transformation, and expansion of the sciences and of knowledge. Chandragupta Maurya's embrace of Jainism increased social and religious renewal and reform across his society, while Ashoka's embrace of Buddhism has been said to have been the foundation of a reign of social and political peace and non-violence across all of India. Ashoka sponsored the spreading of Buddhist missionaries into Sri Lanka, Southeast Asia, West Asia, North Africa, and Mediterranean Europe. The Arthashastra and the Edicts of Ashoka are the primary written records of Mauryan times. Archaeologically, this period falls into the era of the Northern Black Polished Ware. The Mauryan Empire was based on a modern and efficient economy and society. However, the sale of merchandise was closely regulated by the government. Although there was no banking in Mauryan society, usury was customary. A significant number of written records on slavery are found, suggesting its prevalence. During this period, a high-quality steel called Wootz steel was developed in south India and was later exported to China and Arabia.

During the Sangam period, Tamil literature flourished from the 3rd century BCE to the 4th century CE. During this period, three Tamil dynasties, collectively known as the Three Crowned Kings of Tamilakam – the Chera dynasty, the Chola dynasty, and the Pandyan dynasty – ruled parts of southern India. The Sangam literature deals with the history, politics, wars, and culture of the Tamil people of this period. The scholars of the Sangam period rose from among the common people who sought the patronage of the Tamil kings, and they mainly wrote about the common people and their concerns. Unlike Sanskrit writers, who were mostly Brahmins, Sangam writers came from diverse classes and social backgrounds and were mostly non-Brahmins. They belonged to different faiths and professions – farmers, artisans, merchants, monks, and priests – and included royalty and women. Around c. 300 BCE – c. 200 CE, the Pathupattu, an anthology of ten mid-length books considered part of the Sangam literature, was composed, as were the eight anthologies of poetic works, the Ettuthogai, and the eighteen minor poetic works, the Patiṉeṇkīḻkaṇakku; meanwhile Tolkāppiyam, the earliest grammatical work in the Tamil language, was developed. Also during the Sangam period, two of the Five Great Epics of Tamil Literature were composed. Ilango Adigal composed the Silappatikaram, a non-religious work that revolves around Kannagi who, having lost her husband to a miscarriage of justice at the court of the Pandyan dynasty, wreaks her revenge on his kingdom; Manimekalai, composed by Sīthalai Sāttanār, is a sequel to the Silappatikaram and tells the story of the daughter of Kovalan and Madhavi, who became a Buddhist Bhikkuni.

Classical and early medieval periods (c. 200 BCE – c. 1200 CE)

The Great Chaitya in the Karla Caves; the shrines were developed over the period from the 2nd century BCE to the 5th century CE.

The time between the Maurya Empire in the 3rd century BCE and the end of the Gupta Empire in the 6th century CE is referred to as the "Classical" period of India. It can be divided into various sub-periods, depending on the chosen periodisation. 
The Classical period begins after the decline of the Maurya Empire and the corresponding rise of the Shunga dynasty and Satavahana dynasty. The Gupta Empire (4th–6th century) is regarded as the "Golden Age" of Hinduism, although a host of kingdoms ruled over India in these centuries. Also, the Sangam literature flourished from the 3rd century BCE to the 3rd century CE in southern India. During this period, India's economy is estimated to have been the largest in the world, holding between one-third and one-quarter of the world's wealth from 1 CE to 1000 CE.

Early classical period (c. 200 BCE – c. 320 CE)

The Shungas originated from Magadha, and controlled areas of the central and eastern Indian subcontinent from around 187 to 78 BCE. The dynasty was established by Pushyamitra Shunga, who overthrew the last Maurya emperor. Its capital was Pataliputra, but later emperors, such as Bhagabhadra, also held court at Vidisha, modern Besnagar, in eastern Malwa. Pushyamitra Shunga ruled for 36 years and was succeeded by his son Agnimitra. There were ten Shunga rulers. However, after the death of Agnimitra the empire rapidly disintegrated; inscriptions and coins indicate that much of northern and central India consisted of small kingdoms and city-states that were independent of any Shunga hegemony. The empire is noted for its numerous wars with both foreign and indigenous powers. The Shungas fought battles with the Mahameghavahana dynasty of Kalinga, the Satavahana dynasty of the Deccan, the Indo-Greeks, and possibly the Panchalas and Mitras of Mathura. Art, education, philosophy, and other forms of learning flowered during this period, including small terracotta images, larger stone sculptures, and architectural monuments such as the stupa at Bharhut and the renowned Great Stupa at Sanchi. The Shunga rulers helped to establish the tradition of royal sponsorship of learning and art. The script used by the empire was a variant of Brahmi and was used to write the Sanskrit language. The Shunga Empire played an important role in patronising Indian culture at a time when some of the most important developments in Hindu thought were taking place. This helped the empire flourish and gain power.

The Śātavāhanas were based in Amaravati in Andhra Pradesh, as well as Junnar (Pune) and Prathisthan (Paithan) in Maharashtra. The territory of the empire covered large parts of India from the 1st century BCE onward. The Sātavāhanas started out as feudatories of the Mauryan dynasty, but declared independence with its decline. The Sātavāhanas are known for their patronage of Hinduism and Buddhism, which resulted in Buddhist monuments from Ellora (a UNESCO World Heritage Site) to Amaravati. They were one of the first Indian states to issue coins struck with their rulers embossed. They formed a cultural bridge and played a vital role in trade, as well as in the transfer of ideas and culture to and from the Indo-Gangetic Plain to the southern tip of India. They had to compete with the Shunga Empire and then the Kanva dynasty of Magadha to establish their rule. Later, they played a crucial role in protecting a large part of India against foreign invaders like the Sakas, Yavanas, and Pahlavas. In particular, their struggles with the Western Kshatrapas went on for a long time. The notable Satavahana rulers Gautamiputra Satakarni and Sri Yajna Sātakarni were able to defeat foreign invaders like the Western Kshatrapas and to stop their expansion. In the 3rd century CE the empire was split into smaller states. 
Trade and travels to India
- The spice trade in Kerala attracted traders from all over the Old World to India. Early writings and Neolithic-era stone carvings indicate that India's southwest coastal port Muziris, in Kerala, had established itself as a major spice-trade centre from as early as 3,000 BCE, according to Sumerian records. Jewish traders from Judea arrived in Kochi, Kerala, as early as 562 BCE.
- Thomas the Apostle sailed to India around the 1st century CE. He landed in Muziris in Kerala and established the Yezh (seven) ara (half) palligal (churches), or Seven and a Half Churches.
- Buddhism entered China through the Silk Road transmission of Buddhism in the 1st or 2nd century CE. The interaction of cultures resulted in several Chinese travellers and monks entering India. Most notable were Faxian, Yijing, Song Yun, and Xuanzang. These travellers wrote detailed accounts of the Indian subcontinent, which include the political and social aspects of the region.
- Hindu and Buddhist religious establishments of Southeast Asia came to be associated with economic activity and commerce, as patrons entrusted large funds which would later be used to benefit the local economy through estate management, craftsmanship, and the promotion of trading activities. Buddhism in particular travelled alongside the maritime trade, promoting coinage, art, and literacy. Indian merchants involved in the spice trade took Indian cuisine to Southeast Asia, where spice mixtures and curries became popular with the native inhabitants.
- The Greco-Roman world followed, trading along the incense route and the Roman–India routes. During the 2nd century BCE, Greek and Indian ships met to trade at Arabian ports such as Aden. During the first millennium, the sea routes to India were controlled by the Indians and the Ethiopians, who became the maritime trading powers of the Red Sea.

The Kushan Empire expanded out of what is now Afghanistan into the northwest of the Indian subcontinent under the leadership of its first emperor, Kujula Kadphises, about the middle of the 1st century CE. The Kushans were possibly a Tocharian-speaking tribe, one of five branches of the Yuezhi confederation. By the time of his grandson, Kanishka the Great, the empire had spread to encompass much of Afghanistan and then the northern parts of the Indian subcontinent, at least as far as Saketa and Sarnath near Varanasi (Banaras). Emperor Kanishka was a great patron of Buddhism; however, as the Kushans expanded southward, the deities of their later coinage came to reflect its new Hindu majority. They played an important role in the establishment of Buddhism in India and its spread to Central Asia and China. The historian Vincent Smith said of Kanishka: "He played the part of a second Ashoka in the history of Buddhism." The empire linked the Indian Ocean maritime trade with the commerce of the Silk Road through the Indus valley, encouraging long-distance trade, particularly between China and Rome. The Kushans brought new trends to the budding and blossoming Gandhara art and Mathura art, which reached its peak during Kushan rule. H.G. Rowlinson commented: "The Kushan period is a fitting prelude to the Age of the Guptas."

Classical period: Gupta Empire (c. 320 – 650 CE)

The Gupta period was noted for cultural creativity, especially in literature, architecture, sculpture, and painting. The Gupta period produced scholars such as Kalidasa, Aryabhata, Varahamihira, Vishnu Sharma, and Vatsyayana, who made great advancements in many academic fields. 
The Gupta period marked a watershed of Indian culture: the Guptas performed Vedic sacrifices to legitimise their rule, but they also patronised Buddhism, which continued to provide an alternative to Brahmanical orthodoxy. The military exploits of the first three rulers – Chandragupta I, Samudragupta, and Chandragupta II – brought much of India under their leadership. Science and political administration reached new heights during the Gupta era. Strong trade ties also made the region an important cultural centre and established it as a base that would influence nearby kingdoms and regions in Burma, Sri Lanka, Maritime Southeast Asia, and Indochina. The later Guptas successfully resisted the northwestern kingdoms until the arrival of the Alchon Huns, who established themselves in Afghanistan by the first half of the 5th century CE, with their capital at Bamiyan. However, much of the Deccan and southern India were largely unaffected by these events in the north.

The Vākāṭaka Empire originated from the Deccan in the mid-third century CE. Its territory is believed to have extended from the southern edges of Malwa and Gujarat in the north to the Tungabhadra River in the south, and from the Arabian Sea in the west to the edges of Chhattisgarh in the east. The Vakatakas were the most important successors of the Satavahanas in the Deccan, contemporaneous with the Guptas in northern India, and were succeeded by the Vishnukundina dynasty. They are noted for having been patrons of the arts, architecture and literature. They led public works, and their monuments are a visible legacy. The rock-cut Buddhist viharas and chaityas of the Ajanta Caves (a UNESCO World Heritage Site) were built under the patronage of the Vakataka emperor Harishena.

Samudragupta's 4th-century Allahabad pillar inscription mentions Kamarupa (Western Assam) and Davaka (Central Assam) as frontier kingdoms of the Gupta Empire. Davaka was later absorbed by Kamarupa, which grew into a large kingdom that spanned from the Karatoya river to near present-day Sadiya and covered the entire Brahmaputra valley, North Bengal, parts of Bangladesh and, at times, Purnea and parts of West Bengal. The kingdom was ruled by three dynasties – the Varmans (c. 350–650 CE), the Mlechchha dynasty (c. 655–900 CE) and the Kamarupa-Palas (c. 900–1100 CE) – from their capitals in present-day Guwahati (Pragjyotishpura), Tezpur (Haruppeswara) and North Gauhati (Durjaya) respectively. All three dynasties claimed descent from Narakasura, an immigrant from Aryavarta. In the reign of the Varman king Bhaskar Varman (c. 600–650 CE), the Chinese traveller Xuanzang visited the region and recorded his travels. Later, after weakening and disintegration (after the Kamarupa-Palas), the Kamarupa tradition was extended somewhat, until c. 1255 CE, by the Lunar I (c. 1120–1185 CE) and Lunar II (c. 1155–1255 CE) dynasties. The Kamarupa kingdom came to an end in the middle of the 13th century when the Khen dynasty under Sandhya of Kamarupanagara (North Guwahati) moved the capital to Kamatapur (North Bengal) after the invasion of Muslim Turks, and established the Kamata kingdom.

The Pallavas, during the 4th to 9th centuries, were, alongside the Guptas of the north, great patrons of Sanskrit in the south of the Indian subcontinent. The Pallava reign saw the first Sanskrit inscriptions in a script called Grantha. The early Pallavas had various connections with Southeast Asian countries.
The Pallavas used Dravidian architecture to build some very important Hindu temples and academies in Mamallapuram, Kanchipuram and other places; their rule saw the rise of great poets. The practice of dedicating temples to different deities came into vogue, followed by fine artistic temple architecture and a sculptural style informed by the Vastu Shastra. The Pallavas reached the height of their power during the reigns of Mahendravarman I (571–630 CE) and Narasimhavarman I (630–668 CE) and dominated the Telugu region and the northern parts of the Tamil region for about six hundred years, until the end of the 9th century.

The Kadamba dynasty, founded by Mayurasharma in 345 CE, originated from Karnataka and at times showed the potential of developing into imperial proportions, an indication of which is provided by the titles and epithets assumed by its rulers. King Mayurasharma defeated the armies of the Pallavas of Kanchi, possibly with the help of some native tribes. Kadamba fame reached its peak during the rule of Kakusthavarma, a notable ruler with whom even the kings of the Gupta dynasty of northern India cultivated marital alliances. The Kadambas were contemporaries of the Western Ganga dynasty, and together they formed the earliest native kingdoms to rule the land with absolute autonomy. The dynasty later continued to rule as a feudatory of larger Kannada empires, the Chalukya and Rashtrakuta empires, for over five hundred years, during which time they branched into minor dynasties known as the Kadambas of Goa, the Kadambas of Halasi and the Kadambas of Hangal.

Empire of Harsha

Harsha ruled northern India from 606 to 647 CE. He was the son of Prabhakarvardhana and the younger brother of Rajyavardhana, who were members of the Vardhana dynasty and ruled Thanesar, in present-day Haryana. After the downfall of the prior Gupta Empire in the middle of the 6th century, North India reverted to smaller republics and monarchical states. The power vacuum resulted in the rise of the Vardhanas of Thanesar, who began uniting the republics and monarchies from the Punjab to central India. After the death of Harsha's father and brother, representatives of the empire crowned Harsha emperor at an assembly in April 606 CE, giving him the title of Maharaja when he was merely 16 years old. At the height of his power, his empire covered much of North and Northwestern India and extended east to Kamarupa and south to the Narmada River. He eventually made Kannauj (in the present-day state of Uttar Pradesh) his capital, and ruled until 647 CE.

The peace and prosperity that prevailed made his court a centre of cosmopolitanism, attracting scholars, artists and religious visitors from far and wide. During this time, Harsha converted to Buddhism from Surya worship. The Chinese traveller Xuanzang visited the court of Harsha and wrote a very favourable account of him, praising his justice and generosity. His biography, the Harshacharita ("Deeds of Harsha"), written by the Sanskrit poet Banabhatta, describes his association with Thanesar, besides mentioning the defence wall, a moat and the palace with a two-storied Dhavalagriha (White Mansion).

Early medieval period (mid 6th c.–1200 CE)

Early medieval India began after the end of the Gupta Empire in the 6th century CE.
This period also covers the "Late Classical Age" of Hinduism, which began after the end of the Gupta Empire and the collapse of the Empire of Harsha in the 7th century CE; the beginning of Imperial Kannauj, leading to the Tripartite struggle; and ended in the 13th century with the rise of the Delhi Sultanate in Northern India and the end of the Later Cholas with the death of Rajendra Chola III in 1279 in Southern India. However, some aspects of the Classical period continued until the fall of the Vijayanagara Empire in the south around the 17th century.

From the fifth century to the thirteenth, Śrauta sacrifices declined, and initiatory traditions of Buddhism, Jainism or, more commonly, Shaivism, Vaishnavism and Shaktism expanded in royal courts. This period produced some of India's finest art, considered the epitome of classical development, and saw the development of the main spiritual and philosophical systems that continued in Hinduism, Buddhism and Jainism.

In the 7th century CE, Kumārila Bhaṭṭa formulated his school of Mimamsa philosophy and defended the position on Vedic rituals against Buddhist attacks. Scholars note Bhaṭṭa's contribution to the decline of Buddhism in India. In the 8th century, Adi Shankara travelled across the Indian subcontinent to propagate and spread the doctrine of Advaita Vedanta, which he consolidated; he is credited with unifying the main characteristics of the current thoughts in Hinduism. He was a critic of both Buddhism and the Mimamsa school of Hinduism, and founded mathas (monasteries) in the four corners of the Indian subcontinent for the spread and development of Advaita Vedanta. Meanwhile, Muhammad bin Qasim's invasion of Sindh (in modern Pakistan) in 711 CE witnessed a further decline of Buddhism. The Chach Nama records many instances of the conversion of stupas to mosques, such as at Nerun.

From the 8th to the 10th century, three dynasties contested for control of northern India: the Gurjara-Pratiharas of Malwa, the Palas of Bengal, and the Rashtrakutas of the Deccan. The Sena dynasty would later assume control of the Pala Empire, and the Gurjara-Pratiharas fragmented into various states, notably the Paramaras of Malwa, the Chandelas of Bundelkhand, the Kalachuris of Mahakoshal, the Tomaras of Haryana, and the Chauhans of Rajputana; these states were some of the earliest Rajput kingdoms. The Rashtrakutas, meanwhile, were annexed by the Western Chalukyas. During this period, the Chaulukya dynasty emerged; the Chaulukyas constructed the Dilwara Temples, the Modhera Sun Temple and the Rani ki Vav in the Māru-Gurjara style of architecture, and their capital Anhilwara (modern Patan, Gujarat) was one of the largest cities in the Indian subcontinent, with a population estimated at 100,000 in 1000 CE.

The Chola Empire emerged as a major power during the reigns of Raja Raja Chola I and Rajendra Chola I, who successfully invaded parts of Southeast Asia and Sri Lanka in the 11th century. Lalitaditya Muktapida (r. 724–760 CE) was an emperor of the Kashmiri Karkoṭa dynasty, which exercised influence in northwestern India from 625 CE until 1003 and was followed by the Lohara dynasty. Kalhana in his Rajatarangini credits king Lalitaditya with leading an aggressive military campaign in Northern India and Central Asia. The Hindu Shahi dynasty ruled portions of eastern Afghanistan, northern Pakistan, and Kashmir from the mid-7th century to the early 11th century.
In Odisha, the Eastern Ganga Empire rose to power; it is noted for the advancement of Hindu architecture, most notably the Jagannath Temple and the Konark Sun Temple, and for its patronage of art and literature.

The Chalukya Empire ruled large parts of southern and central India between the 6th and the 12th centuries. During this period, the Chalukyas ruled as three related yet individual dynasties. The earliest dynasty, known as the "Badami Chalukyas", ruled from Vatapi (modern Badami) from the middle of the 6th century. The Badami Chalukyas began to assert their independence at the decline of the Kadamba kingdom of Banavasi and rapidly rose to prominence during the reign of Pulakeshin II. The rule of the Chalukyas marks an important milestone in the history of South India and a golden age in the history of Karnataka. The political atmosphere in South India shifted from smaller kingdoms to large empires with the ascendancy of the Badami Chalukyas. A kingdom based in southern India took control of and consolidated the entire region between the Kaveri and the Narmada rivers. The rise of this empire saw the birth of efficient administration, overseas trade and commerce, and the development of a new style of architecture called "Chalukyan architecture". The Chalukya dynasty ruled parts of southern and central India from Badami in Karnataka between 550 and 750, and then again from Kalyani between 970 and 1190.

Exterior view of the 8th-century Durga temple at the Aihole complex, which includes Hindu, Buddhist and Jain temples and monuments.

Founded by Dantidurga around 753, the Rashtrakuta Empire ruled from its capital at Manyakheta for almost two centuries. At its peak, the Rashtrakutas ruled from the Ganges–Yamuna doab in the north to Cape Comorin in the south, a fruitful time of political expansion, architectural achievements and famous literary contributions. The early rulers of this dynasty were Hindu, but the later rulers were strongly influenced by Jainism. Govinda III and Amoghavarsha were the most famous of the long line of able administrators produced by the dynasty. Amoghavarsha, who ruled for 64 years, was also an author and wrote the Kavirajamarga, the earliest known Kannada work on poetics. Architecture reached a milestone in the Dravidian style, the finest example of which is seen in the Kailasanath Temple at Ellora. Other important contributions are the Kashivishvanatha temple and the Jain Narayana temple at Pattadakal in Karnataka. The Arab traveller Suleiman described the Rashtrakuta Empire as one of the four great empires of the world. The Rashtrakuta period marked the beginning of the golden age of southern Indian mathematics. The great south Indian mathematician Mahāvīra lived in the Rashtrakuta Empire, and his text had a huge impact on the medieval south Indian mathematicians who lived after him. The Rashtrakuta rulers also patronised men of letters who wrote in a variety of languages, from Sanskrit to the Apabhraṃśas.

The Gurjara-Pratiharas were instrumental in containing Arab armies moving east of the Indus River. Nagabhata I defeated the Arab army under Junaid and Tamin during the Caliphate campaigns in India. Under Nagabhata II, the Gurjara-Pratiharas became the most powerful dynasty in northern India. He was succeeded by his son Ramabhadra, who ruled briefly before being succeeded by his son, Mihira Bhoja. Under Bhoja and his successor Mahendrapala I, the Pratihara Empire reached its peak of prosperity and power.
By the time of Mahendrapala, the extent of its territory rivalled that of the Gupta Empire, stretching from the border of Sindh in the west to Bengal in the east, and from the Himalayas in the north to areas past the Narmada in the south. The expansion triggered a tripartite power struggle with the Rashtrakuta and Pala empires for control of the Indian subcontinent. During this period, the imperial Pratiharas took the title of Maharajadhiraja of Āryāvarta (Great King of Kings of India). By the 10th century, several feudatories of the empire took advantage of the temporary weakness of the Gurjara-Pratiharas to declare their independence, notably the Paramaras of Malwa, the Chandelas of Bundelkhand, the Kalachuris of Mahakoshal, the Tomaras of Haryana, and the Chauhans of Rajputana.

Sculptures near Teli ka Mandir, Gwalior Fort. Jainism-related cave monuments and statues carved into the rock face inside the Siddhachal Caves, Gwalior Fort. The Ghateshwara Mahadeva temple at the Baroli temple complex; the complex of eight temples, built by the Gurjara-Pratiharas, is situated within a walled enclosure.

The Khayaravala dynasty ruled parts of the present-day Indian states of Bihar and Jharkhand during the 11th and 12th centuries. Their capital was located at Khayaragarh in Shahabad district. Pratapdhavala and Shri Pratapa were kings of the dynasty, according to an inscription at Rohtas.

The Pala Empire was founded by Gopala I. It was ruled by a Buddhist dynasty from Bengal, in the eastern region of the Indian subcontinent. The Palas reunified Bengal after the fall of Shashanka's Gauda Kingdom. They were followers of the Mahayana and Tantric schools of Buddhism, and they also patronised Shaivism and Vaishnavism. The morpheme pala, meaning "protector", was used as an ending for the names of all the Pala monarchs. The empire reached its peak under Dharmapala and Devapala. Dharmapala is believed to have conquered Kanauj and extended his sway up to the farthest limits of India in the northwest. The Pala Empire can be considered the golden era of Bengal in many ways. Dharmapala founded Vikramashila and revived Nalanda, considered one of the first great universities in recorded history. Nalanda reached its height under the patronage of the Pala Empire. The Palas also built many viharas. They maintained close cultural and commercial ties with the countries of Southeast Asia and with Tibet. Sea trade added greatly to the prosperity of the Pala Empire. The Arab merchant Suleiman noted the enormity of the Pala army in his memoirs.

The medieval Cholas rose to prominence during the middle of the 9th century CE and established the greatest empire South India had seen. They successfully united South India under their rule and, through their naval strength, extended their influence into Southeast Asian countries such as Srivijaya. Under Rajaraja Chola I and his successors Rajendra Chola I, Rajadhiraja Chola, Virarajendra Chola and Kulothunga Chola I, the dynasty became a military, economic and cultural power in South Asia and South-East Asia. Rajendra Chola I's navies went even further, occupying the sea coasts from Burma to Vietnam, the Andaman and Nicobar Islands, the Lakshadweep (Laccadive) islands, Sumatra, and the Malay Peninsula in Southeast Asia, and the Pegu islands. The power of the new empire was proclaimed to the eastern world by the expedition to the Ganges which Rajendra Chola I undertook, by the occupation of cities of the maritime empire of Srivijaya in Southeast Asia, and by repeated embassies to China.
The Cholas dominated the political affairs of Sri Lanka for over two centuries through repeated invasions and occupation. They also had continuing trade contacts with the Arabs in the west and with the Chinese empire in the east. Rajaraja Chola I and his equally distinguished son Rajendra Chola I gave political unity to the whole of Southern India and established the Chola Empire as a respected sea power. Under the Cholas, South India reached new heights of excellence in art, religion and literature. In all of these spheres, the Chola period marked the culmination of movements that had begun in an earlier age under the Pallavas. Monumental architecture in the form of majestic temples, and sculpture in stone and bronze, reached a finesse never before achieved in India.

Western Chalukya Empire

The Western Chalukya Empire ruled most of the western Deccan, in South India, between the 10th and 12th centuries. Vast areas between the Narmada River in the north and the Kaveri River in the south came under Chalukya control. During this period the other major ruling families of the Deccan – the Hoysalas, the Seuna Yadavas of Devagiri, the Kakatiya dynasty and the Southern Kalachuris – were subordinates of the Western Chalukyas and gained their independence only when the power of the Chalukyas waned during the latter half of the 12th century. The Western Chalukyas developed an architectural style known today as a transitional style, an architectural link between the style of the early Chalukya dynasty and that of the later Hoysala Empire. Most of its monuments are in the districts bordering the Tungabhadra River in central Karnataka. Well-known examples are the Kasivisvesvara Temple at Lakkundi, the Mallikarjuna Temple at Kuruvatti, the Kallesvara Temple at Bagali, the Siddhesvara Temple at Haveri, and the Mahadeva Temple at Itagi. This was an important period in the development of fine arts in Southern India, especially in literature, as the Western Chalukya kings encouraged writers in the native language of Kannada as well as in Sanskrit, such as the philosopher and statesman Basava and the great mathematician Bhāskara II.

Late medieval period (c. 1200–1526 CE)

The late medieval period is marked by repeated invasions of the Muslim Central Asian nomadic clans, the rule of the Delhi Sultanate, and the growth of other dynasties and empires built upon the military technology of the Sultanate.

The Delhi Sultanate was a Muslim sultanate based in Delhi, ruled by several dynasties of Turkic, Turko-Indian and Pathan origins. It ruled large parts of the Indian subcontinent from the 13th century to the early 16th century. In the 12th and 13th centuries, Central Asian Turks invaded parts of northern India and established the Delhi Sultanate in the former Hindu holdings. The subsequent Mamluk dynasty of Delhi managed to conquer large areas of northern India, while the Khalji dynasty conquered most of central India and forced the principal Hindu kingdoms of South India to become vassal states. The Sultanate ushered in a period of Indian cultural renaissance. The resulting "Indo-Muslim" fusion of cultures left lasting syncretic monuments in architecture, music, literature, religion, and clothing. It is surmised that the Urdu language was born during the Delhi Sultanate period as a result of the intermingling of local speakers of Sanskritic Prakrits with immigrants speaking Persian, Turkic, and Arabic under the Muslim rulers.
The Delhi Sultanate is the only Indo-Islamic empire to have enthroned one of the few female rulers in India, Razia Sultana (1236–1240). During the Delhi Sultanate, there was a synthesis between Indian civilisation and Islamic civilisation. The latter was a cosmopolitan civilisation, with a multicultural and pluralistic society and wide-ranging international networks, including social and economic networks spanning large parts of Afro-Eurasia, leading to an escalating circulation of goods, peoples, technologies and ideas. While initially disruptive, owing to the passing of power from native Indian elites to Turkic Muslim elites, the Delhi Sultanate was responsible for integrating the Indian subcontinent into a growing world system, drawing India into a wider international network, which had a significant impact on Indian culture and society. However, the Delhi Sultanate also caused large-scale destruction and desecration of temples in the Indian subcontinent.

The Mongol invasions of India were successfully repelled by the Delhi Sultanate during the rule of Alauddin Khalji. A major factor in this success was the Sultanate's Turkic Mamluk slave army, which was highly skilled in the same style of nomadic cavalry warfare as the Mongols, as a result of similar nomadic Central Asian roots. It is possible that the Mongol Empire would have expanded into India were it not for the Delhi Sultanate's role in repelling them. By repeatedly repulsing the Mongol raiders, the Sultanate saved India from the devastation visited on West and Central Asia, setting the scene for centuries of migration of fleeing soldiers, learned men, mystics, traders, artists, and artisans from that region into the subcontinent, thereby creating a syncretic Indo-Islamic culture in the north.

A Turco-Mongol conqueror in Central Asia, Timur (Tamerlane), attacked the reigning Sultan Nasir-ud-Din Mahmud of the Tughlaq dynasty in the north Indian city of Delhi. The Sultan's army was defeated on 17 December 1398. Timur entered Delhi, and the city was sacked, destroyed, and left in ruins after Timur's army had killed and plundered for three days and nights. He ordered the whole city to be sacked, except for the sayyids, scholars, and the "other Muslims" (artists); 100,000 war prisoners were put to death in one day. The Sultanate suffered significantly from the sacking of Delhi. Though revived briefly under the Lodi dynasty, it was but a shadow of its former self.

The Vijayanagara Empire was established in 1336 by Harihara I and his brother Bukka Raya I of the Sangama dynasty, which originated as a political heir of the Hoysala, Kakatiya, and Pandyan empires. The empire rose to prominence as a culmination of attempts by the south Indian powers to ward off Islamic invasions by the end of the 13th century. It lasted until 1646, although its power declined after a major military defeat in 1565 by the combined armies of the Deccan sultanates. The empire is named after its capital city of Vijayanagara, whose ruins surround present-day Hampi, now a World Heritage Site in Karnataka, India. In the first two decades after the founding of the empire, Harihara I gained control over most of the area south of the Tungabhadra river and earned the title of Purvapaschima Samudradhishavara ("master of the eastern and western seas"). By 1374 Bukka Raya I, successor to Harihara I, had defeated the chiefdom of Arcot, the Reddys of Kondavidu, and the Sultan of Madurai, and had gained control over Goa in the west and the Tungabhadra–Krishna River doab in the north.
With the Vijayanagara Kingdom now imperial in stature, Harihara II, the second son of Bukka Raya I, further consolidated the kingdom beyond the Krishna River and brought the whole of South India under the Vijayanagara umbrella. The next ruler, Deva Raya I, emerged successful against the Gajapatis of Odisha and undertook important works of fortification and irrigation. The Italian traveller Niccolo de Conti wrote of him as the most powerful ruler of India. Deva Raya II (called Gajabetekara) succeeded to the throne in 1424 and was possibly the most capable of the Sangama dynasty rulers. He quelled rebelling feudal lords, as well as the Zamorin of Calicut and Quilon in the south. He invaded the island of Sri Lanka and became overlord of the kings of Burma at Pegu and Tenasserim.

The Vijayanagara emperors were tolerant of all religions and sects, as writings by foreign visitors show. The kings used titles such as Gobrahamana Pratipalanacharya (literally, "protector of cows and Brahmins") and Hindurayasuratrana (lit. "upholder of the Hindu faith") that testified to their intention of protecting Hinduism, and yet were at the same time staunchly Islamicate in their court ceremonials and dress. The empire's founders, Harihara I and Bukka Raya I, were devout Shaivas (worshippers of Shiva), but made grants to the Vaishnava order of Sringeri with Vidyaranya as their patron saint, and designated Varaha (the boar, an avatar of Vishnu) as their emblem. Over one-fourth of the area uncovered by archaeological digs comprised an "Islamic Quarter" not far from the "Royal Quarter". Nobles from Central Asia's Timurid kingdoms also came to Vijayanagara. The later Saluva and Tuluva kings were Vaishnava by faith, but worshipped at the feet of Lord Virupaksha (Shiva) at Hampi as well as Lord Venkateshwara (Vishnu) at Tirupati. A Sanskrit work, the Jambavati Kalyanam by King Krishnadevaraya, called Lord Virupaksha Karnata Rajya Raksha Mani ("protective jewel of the Karnata Empire"). The kings patronised the saints of the dvaita order (philosophy of dualism) of Madhvacharya at Udupi.

The empire's legacy includes many monuments spread over South India, the best known of which is the group at Hampi. The previous temple-building traditions in South India came together in the Vijayanagara architectural style. The mingling of all faiths and vernaculars inspired architectural innovation in Hindu temple construction, first in the Deccan and later in the Dravidian idioms, using the local granite. South Indian mathematics flourished under the protection of the Vijayanagara Empire in Kerala. The south Indian mathematician Madhava of Sangamagrama founded the famous Kerala School of Astronomy and Mathematics in the 14th century, which produced many great south Indian mathematicians, such as Parameshvara, Nilakantha Somayaji and Jyeṣṭhadeva, in medieval south India. Efficient administration and vigorous overseas trade brought new technologies such as water management systems for irrigation. The empire's patronage enabled fine arts and literature to reach new heights in Kannada, Telugu, Tamil, and Sanskrit, while Carnatic music evolved into its current form.

Vijayanagara went into decline after the defeat in the Battle of Talikota (1565). After the death of Aliya Rama Raya in the battle, Tirumala Deva Raya started the Aravidu dynasty, moved to and founded a new capital at Penukonda to replace the destroyed Hampi, and attempted to reconstitute the remains of the Vijayanagara Empire.
Tirumala abdicated in 1572, dividing the remains of his kingdom among his three sons, and pursued a religious life until his death in 1578. The Aravidu dynasty successors ruled the region, but the empire collapsed in 1614, and its final remnants ended in 1646, after continued wars with the Bijapur Sultanate and others. During this period, more kingdoms in South India became independent and separate from Vijayanagara. These include the Mysore Kingdom, Keladi Nayaka, the Nayaks of Madurai, the Nayaks of Tanjore, the Nayakas of Chitradurga and the Nayak Kingdom of Gingee – all of which declared independence and went on to have a significant impact on the history of South India in the coming centuries.

Mewar Dynasty (728–1947)

For two and a half centuries from the mid 13th century, politics in Northern India was dominated by the Delhi Sultanate, and in Southern India by the Vijayanagara Empire. However, there were other regional powers present as well. After the fall of the Pala Empire, the Chero dynasty ruled much of Eastern Uttar Pradesh, Bihar and Jharkhand from the 12th to the 18th century CE. The Reddy dynasty successfully defeated the Delhi Sultanate and extended its rule from Cuttack in the north to Kanchi in the south, eventually being absorbed into the expanding Vijayanagara Empire.

In the north, the Rajput kingdoms remained the dominant force in Western and Central India. The Mewar dynasty under Maharana Hammir, with the Bargujars as their main allies, defeated and captured Muhammad Tughlaq. Tughlaq had to pay a huge ransom and relinquish all of Mewar's lands. After this event, the Delhi Sultanate did not attack Chittor for a few hundred years. The Rajputs re-established their independence, and Rajput states were established as far east as Bengal and north into the Punjab. The Tomaras established themselves at Gwalior, and Man Singh Tomar reconstructed the Gwalior Fort, which still stands there. During this period, Mewar emerged as the leading Rajput state, and Rana Kumbha expanded his kingdom at the expense of the sultanates of Malwa and Gujarat. The next great Rajput ruler, Rana Sanga of Mewar, became the principal player in Northern India. His objectives grew in scope: he planned to conquer Delhi, the much sought-after prize of the Muslim rulers of the time. However, his defeat in the Battle of Khanwa consolidated the new Mughal dynasty in India. The Mewar dynasty under Maharana Udai Singh II faced further defeat by the Mughal emperor Akbar, with their capital Chittor being captured. Because of this, Udai Singh II founded Udaipur, which became the new capital of the Mewar kingdom. His son, Maharana Pratap of Mewar, firmly resisted the Mughals. Akbar sent many missions against him. Pratap survived to ultimately gain control of all of Mewar, excluding the Chittor Fort.

In the south, the Bahmani Sultanate, which was established either by a Brahman convert or with the patronage of a Brahman, and from that source took the name Bahmani, was the chief rival of Vijayanagara and frequently created difficulties for it. In the early 16th century Krishnadevaraya of the Vijayanagara Empire defeated the last remnant of Bahmani Sultanate power, after which the Bahmani Sultanate collapsed, splitting into five small Deccan sultanates. In 1490, Ahmadnagar declared independence, followed by Bijapur and Berar in the same year; Golkonda became independent in 1518 and Bidar in 1528. Although generally rivals, the sultanates did ally against the Vijayanagara Empire in 1565, permanently weakening Vijayanagara in the Battle of Talikota.
In the east, the Gajapati Kingdom remained a strong regional power, associated with a high point in the growth of regional culture and architecture. Under Kapilendradeva, the Gajapatis became an empire stretching from the lower Ganga in the north to the Kaveri in the south. In Northeast India, the Ahom Kingdom was a major power for six centuries; led by Lachit Borphukan, the Ahoms decisively defeated the Mughal army at the Battle of Saraighat during the Ahom–Mughal conflicts. Further east in Northeastern India was the Kingdom of Manipur, which ruled from its seat of power at Kangla Fort and developed a sophisticated Hindu Gaudiya Vaishnavite culture.

The Sultanate of Bengal was the dominant power of the Ganges–Brahmaputra Delta, with a network of mint towns spread across the region. It was a Sunni Muslim monarchy with Indo-Turkic, Arab, Abyssinian and Bengali Muslim elites. The sultanate was known for its religious pluralism, with non-Muslim communities co-existing peacefully. It had a circle of vassal states, including Odisha in the southwest, Arakan in the southeast, and Tripura in the east. In the early 16th century, the Bengal Sultanate reached the peak of its territorial growth, with control over Kamrup and Kamata in the northeast and Jaunpur and Bihar in the west. It was reputed as a thriving trading nation and one of Asia's strongest states. The Bengal Sultanate was described by contemporary European and Chinese visitors as a relatively prosperous kingdom. Owing to the abundance of goods in Bengal, the region was described as the "richest country to trade with". The Bengal Sultanate left a strong architectural legacy; buildings from the period show foreign influences merged into a distinct Bengali style. It was also the largest and most prestigious authority among the independent medieval Muslim-ruled states in the history of Bengal. Its decline began with an interregnum by the Suri Empire, followed by Mughal conquest and disintegration into petty kingdoms.

Bhakti movement and Sikhism

The Bhakti movement refers to the theistic devotional trend that emerged in medieval Hinduism and later found new expression in Sikhism. It originated in seventh-century south India (now parts of Tamil Nadu and Kerala) and spread northwards. It swept over east and north India from the 15th century onwards, reaching its zenith between the 15th and 17th centuries CE.

- The Bhakti movement developed regionally around different gods and goddesses, such as Vaishnavism (Vishnu), Shaivism (Shiva), Shaktism (Shakti goddesses), and Smartism. The movement was inspired by many poet-saints, who championed a wide range of philosophical positions, ranging from the theistic dualism of Dvaita to the absolute monism of Advaita Vedanta.
- Sikhism is based on the spiritual teachings of Guru Nanak, the first Guru, and the ten successive Sikh gurus. After the death of the tenth Guru, Guru Gobind Singh, the Sikh scripture, the Guru Granth Sahib, became the literal embodiment of the eternal, impersonal Guru, with the scripture's word serving as the spiritual guide for Sikhs.
- Buddhism in India flourished during the late medieval period in the Himalayan kingdoms: the Namgyal Kingdom in Ladakh, the Sikkim Kingdom in Sikkim, and the Chutiya Kingdom in Arunachal Pradesh.

Early modern period (c. 1526–1858 CE)

The early modern period of Indian history is dated from 1526 CE to 1858 CE, corresponding to the rise and fall of the Mughal Empire, which was heir to the Timurid Renaissance.
During this age India's economy expanded, relative peace was maintained, and the arts were patronised. This period witnessed the further development of Indo-Islamic architecture, and the Marathas and Sikhs grew able to rule significant regions of India in the waning days of the Mughal Empire, which formally came to an end when the British Raj was founded.

In 1526, Babur, a Timurid descendant of Timur and Genghis Khan from the Fergana Valley (in modern-day Uzbekistan), swept across the Khyber Pass and established the Mughal Empire, which at its zenith covered much of South Asia. However, his son Humayun was defeated by the Afghan warrior Sher Shah Suri in 1540, and Humayun was forced to retreat to Kabul. After Sher Shah's death, his son Islam Shah Suri and his Hindu general Hemu Vikramaditya established secular rule in North India from Delhi until 1556, when Akbar the Great defeated Hemu in the Second Battle of Panipat on 6 November 1556, after winning the Battle of Delhi.

The famous emperor Akbar the Great, who was the grandson of Babur, tried to establish a good relationship with the Hindus. Akbar declared "Amari", or non-killing of animals, on the holy days of Jainism. He rolled back the jizya tax on non-Muslims. The Mughal emperors married local royalty and allied themselves with local maharajas, and attempted to fuse their Turko-Persian culture with ancient Indian styles, creating a unique Indo-Persian culture and Indo-Saracenic architecture. Akbar married a Rajput princess, Mariam-uz-Zamani, and they had a son, Jahangir, who was part-Mughal and part-Rajput, as were future Mughal emperors. Jahangir more or less followed his father's policy. The Mughal dynasty ruled most of the Indian subcontinent by 1600. The reign of Shah Jahan was the golden age of Mughal architecture. He erected several large monuments, the most famous of which is the Taj Mahal at Agra, as well as the Moti Masjid in Agra, the Red Fort, the Jama Masjid in Delhi, and the Lahore Fort.

The Mughal Empire was the second-largest empire to have existed in the Indian subcontinent, and it surpassed China to become the world's largest economic power, controlling 24.4% of the world economy, and the world leader in manufacturing, producing 25% of global industrial output. The economic and demographic upsurge was stimulated by Mughal agrarian reforms that intensified agricultural production, a proto-industrialising economy that began moving towards industrial manufacturing, and a relatively high degree of urbanisation for its time.

The Mughal Empire reached the zenith of its territorial expanse during the reign of Aurangzeb, during whose reign proto-industrialisation advanced and India surpassed Qing China to become the world's largest economy. Aurangzeb was less tolerant than his predecessors, reintroducing the jizya tax and destroying several historical temples, while at the same time building more Hindu temples than he destroyed, employing significantly more Hindus in his imperial bureaucracy than his predecessors, and advancing administrators based on their ability rather than their religion. However, he is often blamed for the erosion of the tolerant syncretic tradition of his predecessors, as well as for increasing religious controversy and centralisation. The English East India Company suffered a defeat in the Anglo-Mughal War.

The empire went into decline thereafter. The Mughals suffered several blows from invasions by the Marathas, Jats and Afghans. In 1737, the Maratha general Bajirao invaded and plundered Delhi.
Under the general Amir Khan Umrao Al Udat, the Mughal emperor sent 8,000 troops to drive away the 5,000 Maratha cavalry soldiers. Baji Rao, however, easily routed the novice Mughal general, and the rest of the imperial Mughal army fled. In 1737, in the final defeat of the Mughal Empire, the commander-in-chief of the Mughal army, Nizam-ul-mulk, was routed at Bhopal by the Maratha army. This essentially brought an end to the Mughal Empire. Meanwhile, Bharatpur State under the Jat ruler Suraj Mal overran the Mughal garrison at Agra and plundered the city, taking the two great silver doors of the entrance of the famous Taj Mahal, which Suraj Mal then melted down in 1763.

In 1739, Nader Shah, emperor of Iran, defeated the Mughal army at the Battle of Karnal. After this victory, Nader captured and sacked Delhi, carrying away many treasures, including the Peacock Throne. Mughal rule was further weakened by constant native Indian resistance: Banda Singh Bahadur led the Sikh Khalsa against Mughal religious oppression; the Hindu rajas of Bengal, Pratapaditya and Raja Sitaram Ray, revolted; and Maharaja Chhatrasal of the Bundela Rajputs fought the Mughals and established the Panna State. The Mughal dynasty was reduced to puppet rulers by 1757. The Vadda Ghallughara, in which 30,000 Sikhs were killed, took place under the Muslim provincial government based at Lahore as part of a campaign to wipe out the Sikhs; the offensive had begun under the Mughals with the Chhota Ghallughara and lasted several decades under their Muslim successor states.

Marathas and Sikhs

In the early 18th century the Maratha Empire extended its suzerainty over the Indian subcontinent. Under the Peshwas, the Marathas consolidated and ruled over much of South Asia. The Marathas are credited to a large extent with ending Mughal rule in India. The Maratha kingdom was founded and consolidated by Chhatrapati Shivaji, a Maratha aristocrat of the Bhonsle clan. However, the credit for making the Marathas a formidable national power goes to Peshwa Bajirao I. The historian K.K. Datta wrote that Bajirao I "may very well be regarded as the second founder of the Maratha Empire".

By the early 18th century, the Maratha kingdom had transformed itself into the Maratha Empire under the rule of the Peshwas (prime ministers). In 1737, the Marathas defeated a Mughal army in their capital, in the Battle of Delhi. The Marathas continued their military campaigns against the Mughals, the Nizam, the Nawab of Bengal and the Durrani Empire to further extend their boundaries. By 1760, the domain of the Marathas stretched across most of the Indian subcontinent. The Marathas even discussed abolishing the Mughal throne and placing Vishwasrao Peshwa on the Mughal imperial throne in Delhi. The empire at its peak stretched from Tamil Nadu in the south to Peshawar (in modern-day Khyber Pakhtunkhwa, Pakistan) in the north, and Bengal in the east. The northwestern expansion of the Marathas was stopped after the Third Battle of Panipat (1761). However, Maratha authority in the north was re-established within a decade under Peshwa Madhavrao I. Under Madhavrao I, the strongest of the Maratha chiefs were granted semi-autonomy, creating a confederacy of Maratha states under the Gaekwads of Baroda, the Holkars of Indore and Malwa, the Scindias of Gwalior and Ujjain, the Bhonsales of Nagpur and the Puars of Dhar and Dewas. In 1775, the East India Company intervened in a Peshwa family succession struggle in Pune, which led to the First Anglo-Maratha War, resulting in a Maratha victory.
The Marathas remained a major power in India until their defeat in the Second and Third Anglo-Maratha Wars (1803–1818), which left the East India Company in control of most of India.

The Sikh Empire, ruled by members of the Sikh religion, was a political entity that governed the northwestern regions of the Indian subcontinent. The empire, based around the Punjab region, existed from 1799 to 1849. It was forged, on the foundations of the Khalsa, under the leadership of Maharaja Ranjit Singh (1780–1839) from an array of autonomous Punjabi misls of the Sikh Confederacy. Maharaja Ranjit Singh consolidated many parts of northern India into an empire. He primarily used his Sikh Khalsa Army, which he trained in European military techniques and equipped with modern military technologies. Ranjit Singh proved himself a master strategist and selected well-qualified generals for his army. He continuously defeated the Afghan armies and successfully ended the Afghan–Sikh Wars. In stages, he added central Punjab, the provinces of Multan and Kashmir, and the Peshawar Valley to his empire. At its peak, in the 19th century, the empire extended from the Khyber Pass in the west to Kashmir in the north, to Sindh in the south, and along the Sutlej River to Himachal in the east. After the death of Ranjit Singh, the empire weakened, leading to conflict with the British East India Company. The hard-fought First Anglo-Sikh War and Second Anglo-Sikh War marked the downfall of the Sikh Empire, making it among the last areas of the Indian subcontinent to be conquered by the British.

The Kingdom of Mysore in southern India expanded to its greatest extent under Hyder Ali and his son Tipu Sultan in the latter half of the 18th century. Under their rule, Mysore fought a series of wars against the Marathas and the British, or their combined forces. The Maratha–Mysore War ended in April 1787, following the finalising of the Treaty of Gajendragad, under which Tipu Sultan was obligated to pay tribute to the Marathas. Concurrently, the Anglo-Mysore Wars took place, in which the Mysoreans used the Mysorean rockets. The Fourth Anglo-Mysore War (1798–1799) saw the death of Tipu. Mysore's alliance with the French was seen as a threat to the British East India Company, and Mysore was attacked from all four sides. The Nizam of Hyderabad and the Marathas launched an invasion from the north. The British won a decisive victory at the Siege of Seringapatam (1799).

Hyderabad was founded by the Qutb Shahi dynasty of Golconda in 1591. Following a brief Mughal rule, Asif Jah, a Mughal official, seized control of Hyderabad and declared himself Nizam-al-Mulk of Hyderabad in 1724. The Nizams lost considerable territory and paid tribute to the Maratha Empire after being routed in multiple battles, such as the Battle of Palkhed. However, the Nizams maintained their sovereignty from 1724 until 1948 by paying tribute to the Marathas and, later, by being vassals of the British. Hyderabad State became a princely state in British India in 1798.

The Nawabs of Bengal had become the de facto rulers of Bengal following the decline of the Mughal Empire. However, their rule was interrupted by the Marathas, who carried out six expeditions in Bengal from 1741 to 1748, as a result of which Bengal became a tributary state of the Marathas. On 23 June 1757, Siraj ud-Daulah, the last independent Nawab of Bengal, was betrayed in the Battle of Plassey by Mir Jafar.
He lost to the British, who took over charge of Bengal in 1757, installed Mir Jafar on the masnad (throne) and established themselves as a political power in Bengal. In 1765 the system of dual government was established, in which the Nawabs ruled on behalf of the British and were mere puppets. In 1772 the system was abolished and Bengal was brought under the direct control of the British. In 1793, when the nizamat (governorship) of the Nawab was also taken away, they remained as mere pensioners of the British East India Company.

In the 18th century, the whole of Rajputana was virtually subdued by the Marathas. The Second Anglo-Maratha War distracted the Marathas from 1807 to 1809, but afterward Maratha domination of Rajputana resumed. In 1817, the British went to war with the Pindaris, raiders based in Maratha territory, in what quickly became the Third Anglo-Maratha War, and the British government offered its protection to the Rajput rulers from the Pindaris and the Marathas. By the end of 1818 similar treaties had been executed between the other Rajput states and Britain. The Maratha Sindhia ruler of Gwalior gave up the district of Ajmer-Merwara to the British, and Maratha influence in Rajasthan came to an end. Most of the Rajput princes remained loyal to Britain in the Revolt of 1857, and few political changes were made in Rajputana until Indian independence in 1947. The Rajputana Agency contained more than 20 princely states, the most notable being Udaipur State, Jaipur State, Bikaner State and Jodhpur State.

After the fall of the Maratha Empire, many Maratha dynasties and states became vassals in a subsidiary alliance with the British, forming the largest bloc of princely states in the British Raj in terms of territory and population. With the decline of the Sikh Empire after the First Anglo-Sikh War in 1846, under the terms of the Treaty of Amritsar, the British government sold Kashmir to Maharaja Gulab Singh, and the princely state of Jammu and Kashmir, the second-largest princely state in British India, was created by the Dogra dynasty. In Eastern and Northeastern India, the Hindu and Buddhist states of the Cooch Behar Kingdom, the Twipra Kingdom and the Kingdom of Sikkim were annexed by the British and made vassal princely states.

After the fall of the Vijayanagara Empire, Polygar states emerged in Southern India; they managed to weather invasions and flourished until the Polygar Wars, in which they were defeated by British East India Company forces. Around the 18th century, the Kingdom of Nepal was formed by Rajput rulers.

In 1498, a Portuguese fleet under Vasco da Gama successfully discovered a new sea route from Europe to India, which paved the way for direct Indo-European commerce. The Portuguese soon set up trading posts in Goa, Daman, Diu and Bombay. After their conquest of Goa, the Portuguese instituted the Goa Inquisition, in which new Indian converts and non-Christians were punished for suspected heresy against Christianity and were condemned to be burnt. Goa became the main Portuguese base until it was annexed by India in 1961.

The next to arrive were the Dutch, with their main base in Ceylon. They established ports in Malabar. However, their expansion into India was halted after their defeat in the Battle of Colachel by the Kingdom of Travancore during the Travancore–Dutch War. The Dutch never recovered from the defeat and no longer posed a large colonial threat to India.
The internal conflicts among Indian kingdoms gave European traders opportunities to gradually establish political influence and appropriate lands. Following the Dutch, the British, who set up their first outpost in the west-coast port of Surat in 1619, and the French both established trading outposts in India. Although these continental European powers controlled various coastal regions of southern and eastern India during the ensuing century, they eventually lost all their territories in India to the British, with the exception of the French outposts of Pondichéry and Chandernagore, and the Portuguese colonies of Goa, Daman and Diu.

East India Company rule in India

The English East India Company was founded in 1600 as The Company of Merchants of London Trading into the East Indies. It gained a foothold in India with the establishment of a factory in Masulipatnam on the eastern coast of India in 1611 and the grant of rights by the Mughal emperor Jahangir to establish a factory in Surat in 1612. In 1640, after receiving similar permission from the Vijayanagara ruler farther south, a second factory was established in Madras on the southeastern coast. Bombay island, not far from Surat and a former Portuguese outpost, was gifted to England as dowry in the marriage of Catherine of Braganza to Charles II and leased by the company in 1668. Two decades later, the company established a presence in the Ganges River delta when a factory was set up in Calcutta. During this time other companies established by the Portuguese, Dutch, French, and Danes were similarly expanding in the region.

The company's victory under Robert Clive in the 1757 Battle of Plassey, and another victory in the 1764 Battle of Buxar (in Bihar), consolidated the company's power and forced Emperor Shah Alam II to appoint it the diwan, or revenue collector, of Bengal, Bihar, and Orissa. The company thus became the de facto ruler of large areas of the lower Gangetic plain by 1773. It also proceeded by degrees to expand its dominions around Bombay and Madras. The Anglo-Mysore Wars (1766–99) and the Anglo-Maratha Wars (1772–1818) left it in control of large areas of India south of the Sutlej River. With the defeat of the Marathas, no native power represented a threat to the company any longer.

The expansion of the company's power chiefly took two forms. The first was the outright annexation of Indian states and subsequent direct governance of the underlying regions that collectively came to comprise British India. The annexed regions included the North-Western Provinces (comprising Rohilkhand, Gorakhpur, and the Doab) (1801), Delhi (1803), Assam (Ahom Kingdom, 1828) and Sindh (1843). Punjab, the North-West Frontier Province, and Kashmir were annexed after the Anglo-Sikh Wars in 1849–56, during the tenure of the Marquess of Dalhousie as Governor General. However, Kashmir was immediately sold under the Treaty of Amritsar (1846) to the Dogra dynasty of Jammu and thereby became a princely state. Berar was annexed in 1854, and the state of Oudh two years later.

The second form of asserting power involved treaties in which Indian rulers acknowledged the company's hegemony in return for limited internal autonomy. Since the company operated under financial constraints, it had to set up political underpinnings for its rule. The most important such support came from the subsidiary alliances with Indian princes during the first 75 years of Company rule. In the early 19th century, the territories of these princes accounted for two-thirds of India.
When an Indian ruler who was able to secure his territory wanted to enter such an alliance, the company welcomed it as an economical method of indirect rule that did not involve the economic costs of direct administration or the political costs of gaining the support of alien subjects. In return, the company undertook the "defense of these subordinate allies and treated them with traditional respect and marks of honor." Subsidiary alliances created the princely states of the Hindu maharajas and the Muslim nawabs. Prominent among the princely states were Cochin (1791), Jaipur (1794), Travancore (1795), Hyderabad (1798), Mysore (1799), the Cis-Sutlej Hill States (1815), the Central India Agency (1819), the Cutch and Gujarat Gaikwad territories (1819), Rajputana (1818) and Bahawalpur (1833).

Indian indenture system

The Indian indenture system was a system of indenture, a form of debt bondage, by which 3.5 million Indians were transported to various colonies of the European powers to provide labour for the (mainly sugar) plantations. It began with the end of slavery in 1833 and continued until 1920. This resulted in the development of a large Indian diaspora, spreading from the Caribbean (e.g. Trinidad and Tobago) to the Pacific Ocean (e.g. Fiji), and the growth of large Indo-Caribbean and Indo-African populations.

Modern period and independence (after c. 1850 CE)

Rebellion of 1857 and its consequences

Bahadur Shah Zafar, the last Mughal emperor, was crowned Emperor of India by the rebels; he was deposed by the British and died in exile in Burma.

The Indian Rebellion of 1857 was a large-scale rebellion by soldiers employed by the British East India Company in northern and central India against the company's rule. The spark that led to the mutiny was the issue of new gunpowder cartridges for the Enfield rifle, which were insensitive to local religious prohibitions; the key mutineer was Mangal Pandey. In addition, underlying grievances over British taxation, the ethnic gulf between the British officers and their Indian troops, and land annexations played a significant role in the rebellion. Within weeks of Pandey's mutiny, dozens of units of the Indian army joined peasant armies in widespread rebellion. The rebel soldiers were later joined by the Indian nobility, many of whom had lost titles and domains under the Doctrine of Lapse and felt that the company had interfered with a traditional system of inheritance. Rebel leaders such as Nana Sahib and the Rani of Jhansi belonged to this group.

After the outbreak of the mutiny in Meerut, the rebels very quickly reached Delhi. The rebels also captured large tracts of the North-Western Provinces and Awadh (Oudh). Most notably, in Awadh the rebellion took on the attributes of a patriotic revolt against the British presence. The British East India Company mobilised rapidly with the assistance of friendly princely states, but it took the British the remainder of 1857 and the better part of 1858 to suppress the rebellion. Poorly equipped and with no outside support or funding, the rebels were brutally subdued by the British. In the aftermath, all power was transferred from the British East India Company to the British Crown, which began to administer most of India as a number of provinces. The Crown controlled the company's lands directly and had considerable indirect influence over the rest of India, which consisted of the princely states ruled by local royal families.
There were officially 565 princely states in 1947, but only 21 had actual state governments, and only three were large (Mysore, Hyderabad, and Kashmir). They were absorbed into the independent nation in 1947–48.

British Raj (1858–1947)

After 1857, the colonial government strengthened and expanded its infrastructure via the court system, legal procedures, and statutes. The Indian Penal Code came into being. In education, Thomas Babington Macaulay had made schooling a priority for the Raj in his famous minute of February 1835 and succeeded in implementing the use of English as the medium of instruction. By 1890 some 60,000 Indians had matriculated. The Indian economy grew at about 1% per year from 1880 to 1920, and the population also grew at 1%. However, from the 1910s Indian private industry began to grow significantly. India built a modern railway system in the late 19th century, which was the fourth-largest in the world. The British Raj invested heavily in infrastructure, including canals and irrigation systems, in addition to railways, telegraphy, roads and ports. However, historians have been bitterly divided on issues of economic history, with the Nationalist school arguing that India was poorer at the end of British rule than at the beginning and that impoverishment occurred because of the British.

In 1905, Lord Curzon split the large province of Bengal into a largely Hindu western half and "Eastern Bengal and Assam", a largely Muslim eastern half. The British goal was said to be efficient administration, but the people of Bengal were outraged at the apparent "divide and rule" strategy. It also marked the beginning of the organised anti-colonial movement. When the Liberal party in Britain came to power in 1906, Curzon was removed. Bengal was reunified in 1911. The new viceroy, Gilbert Minto, and the new Secretary of State for India, John Morley, consulted with Congress leaders on political reforms. The Morley–Minto reforms of 1909 provided for Indian membership of the provincial executive councils as well as the Viceroy's executive council. The Imperial Legislative Council was enlarged from 25 to 60 members, and separate communal representation for Muslims was established in a dramatic step towards representative and responsible government.

Several socio-religious organisations came into being at that time. Muslims set up the All India Muslim League in 1906. It was not a mass party but was designed to protect the interests of aristocratic Muslims. It was internally divided by conflicting loyalties to Islam, the British, and India, and by distrust of Hindus. The Akhil Bharatiya Hindu Mahasabha and the Rashtriya Swayamsevak Sangh (RSS) sought to represent Hindu interests, though the latter always claimed to be a "cultural" organisation. Sikhs founded the Shiromani Akali Dal in 1920. However, the largest and oldest political party, the Indian National Congress, founded in 1885, attempted to keep a distance from the socio-religious movements and identity politics.

The Bengali Renaissance refers to a social reform movement, dominated by Bengali Hindus, in the Bengal region of the Indian subcontinent during the nineteenth and early twentieth centuries, under British rule. Historian Nitish Sengupta describes the renaissance as having started with the reformer and humanitarian Raja Ram Mohan Roy (1775–1833) and ended with Asia's first Nobel laureate, Rabindranath Tagore (1861–1941).
This flowering of religious and social reformers, scholars, and writers is described by historian David Kopf as "one of the most creative periods in Indian history." During this period, Bengal witnessed an intellectual awakening in some ways similar to the European Renaissance. This movement questioned existing orthodoxies, particularly with respect to women, marriage, the dowry system, the caste system, and religion. One of the earliest social movements to emerge during this time was the Young Bengal movement, which espoused rationalism and atheism as the common denominators of civil conduct among upper-caste educated Hindus. It played an important role in reawakening Indian minds and intellect across the Indian subcontinent.

During Company rule in India and the British Raj, famines in India were some of the worst ever recorded. These famines, often resulting from crop failures due to El Niño that were exacerbated by the destructive policies of the colonial government, included the Great Bengal famine of 1770, in which up to 10 million people died, the Great Famine of 1876–78, in which 6.1 to 10.3 million people died, the Indian famine of 1899–1900, in which 1.25 to 10 million people died, and the Bengal famine of 1943, in which up to 3.8 million people died. In all, between 15 and 29 million Indians died in famines under British rule. The Third Plague Pandemic, beginning in the mid-19th century, killed 10 million people in India. Despite persistent disease and famine, the population of the Indian subcontinent, which stood at up to 200 million in 1750, had reached 389 million by 1941.

World War I

Indian Army gunners (probably 39th Battery) with 3.7-inch mountain howitzers, Jerusalem 1917.

During World War I, over 800,000 Indians volunteered for the army, and more than 400,000 volunteered for non-combat roles, compared with pre-war annual recruitment of about 15,000 men. The Army saw action on the Western Front within a month of the start of the war, at the First Battle of Ypres. After a year of front-line duty, sickness and casualties had reduced the Indian Corps to the point where it had to be withdrawn. Nearly 700,000 Indians fought the Turks in the Mesopotamian campaign. Indian formations were also sent to East Africa, Egypt, and Gallipoli. Indian Army and Imperial Service Troops fought during the Sinai and Palestine Campaign's defence of the Suez Canal in 1915, at Romani in 1916, and in the advance to Jerusalem in 1917. Indian units occupied the Jordan Valley, and after the German spring offensive they became the major force in the Egyptian Expeditionary Force during the Battle of Megiddo and in the Desert Mounted Corps' advance to Damascus and on to Aleppo. Other divisions remained in India, guarding the North-West Frontier and fulfilling internal security obligations. One million Indian troops served abroad during the war. In total, 74,187 died and another 67,000 were wounded. The roughly 90,000 soldiers who lost their lives fighting in World War I and the Afghan Wars are commemorated by the India Gate.

World War II

Sikh soldiers of the British Indian Army being executed by the Japanese. (Imperial War Museum, London)

British India officially declared war on Nazi Germany in September 1939. The British Raj, as part of the Allied Nations, sent over two and a half million volunteer soldiers to fight under British command against the Axis powers. Additionally, several Indian princely states provided large donations to support the Allied campaign during the war.
India also provided the base for American operations in support of China in the China Burma India Theatre. Indians fought with distinction throughout the world, including in the European theatre against Germany, in North Africa against Germany and Italy, in East Africa against the Italians, in the Middle East against the Vichy French, and in the South Asian region, defending India against the Japanese and fighting the Japanese in Burma. Indians also aided in liberating British colonies such as Singapore and Hong Kong after the Japanese surrender in August 1945. Over 87,000 soldiers from the subcontinent died in World War II.

The Indian National Congress denounced Nazi Germany but would not fight it or anyone else until India was independent. Congress launched the Quit India Movement in August 1942, refusing to co-operate in any way with the government until independence was granted. The government was ready for this move: it immediately arrested over 60,000 national and local Congress leaders. The Muslim League rejected the Quit India movement and worked closely with the Raj authorities.

Subhas Chandra Bose (also called Netaji) broke with Congress and tried to form a military alliance with Germany or Japan to gain independence. The Germans assisted Bose in the formation of the Indian Legion; however, it was Japan that helped him revamp the Indian National Army (INA) after the First Indian National Army under Mohan Singh was dissolved. The INA fought under Japanese direction, mostly in Burma. Bose also headed the Provisional Government of Free India (Azad Hind), a government-in-exile based in Singapore. The government of Azad Hind had its own currency, court, and civil code, and in the eyes of some Indians its existence gave greater legitimacy to the independence struggle against the British. In 1942, Japan invaded neighbouring Burma, having already captured the Indian territory of the Andaman and Nicobar Islands. Japan gave nominal control of the islands to the Provisional Government of Free India on 21 October 1943, and in the following March the Indian National Army, with Japanese help, crossed into India and advanced as far as Kohima in Nagaland. This was the farthest point the advance reached on Indian territory; the INA retreated from the Battle of Kohima in June and from Imphal on 3 July 1944.

The region of Bengal in British India suffered the devastating Bengal famine of 1943. An estimated 2.1–3 million died, and the famine is frequently characterised as "man-made", with most sources asserting that wartime colonial policies exacerbated the crisis.

Indian independence movement (1885–1947)

The numbers of British in India were small, yet they were able to rule 52% of the Indian subcontinent directly and exercise considerable leverage over the princely states that accounted for 48% of the area. One of the most important events of the 19th century was the rise of Indian nationalism, leading Indians to seek first "self-rule" and later "complete independence". However, historians are divided over the causes of its rise. Probable reasons include a "clash of interests of the Indian people with British interests", "racial discrimination", and "the revelation of India's past". The first step toward Indian self-rule was the appointment of councillors to advise the British viceroy in 1861; the first Indian was appointed to such a council in 1909. Provincial councils with Indian members were also set up.
The councillors' participation was subsequently widened into legislative councils. The British built a large British Indian Army, with the senior officers all British and many of the troops drawn from small minority groups such as Gurkhas from Nepal and Sikhs. The civil service was increasingly filled with natives at the lower levels, with the British holding the more senior positions.

Bal Gangadhar Tilak, an Indian nationalist leader, declared Swaraj (self-rule) to be the destiny of the nation. His popular slogan "Swaraj is my birthright, and I shall have it" became a source of inspiration for Indians. Tilak was backed by rising public leaders like Bipin Chandra Pal and Lala Lajpat Rai, who held the same view; notably, they advocated the Swadeshi movement, involving the boycott of all imported items and the use of Indian-made goods. The triumvirate were popularly known as Lal Bal Pal. Under them, India's three big provinces – Maharashtra, Bengal and Punjab – shaped the demands of the people and India's nationalism. In 1907, the Congress split into two factions: the radicals, led by Tilak, advocated civil agitation and direct revolution to overthrow the British Empire and the abandonment of all things British; the moderates, led by leaders like Dadabhai Naoroji and Gopal Krishna Gokhale, wanted reform within the framework of British rule.

The British themselves adopted a "carrot and stick" approach in recognition of India's support during the First World War and in response to renewed nationalist demands. These reform measures were later enshrined in the Government of India Act 1919, which introduced the principle of a dual mode of administration, or diarchy, in which elected Indian legislators and appointed British officials shared power. In 1919, Colonel Reginald Dyer ordered his troops to fire on peaceful protestors, including unarmed women and children, in what became known as the Jallianwala Bagh massacre, which in turn led to the Non-cooperation Movement of 1920–22. The massacre was a decisive episode towards the end of British rule in India.

From 1920, leaders such as Mahatma Gandhi began highly popular mass movements to campaign against the British Raj using largely peaceful methods. The Gandhi-led independence movement opposed British rule with non-violent methods such as non-co-operation, civil disobedience and economic resistance. However, revolutionary activity against British rule also took place throughout the Indian subcontinent, and some groups adopted a militant approach, like the Hindustan Republican Association, founded by Chandrasekhar Azad, Bhagat Singh, Sukhdev Thapar and others, which sought to overthrow British rule by armed struggle. The Government of India Act 1935, which expanded Indian participation in provincial government, was a major constitutional milestone of this period.

The All India Azad Muslim Conference gathered in Delhi in April 1940 to voice its support for an independent and united India. Its members included several Islamic organisations in India, as well as 1400 nationalist Muslim delegates. The pro-separatist All-India Muslim League worked to silence those nationalist Muslims who stood against the partition of India, often using "intimidation and coercion". The murder of the All India Azad Muslim Conference leader Allah Bakhsh Soomro also made it easier for the All-India Muslim League to demand the creation of Pakistan.

After World War II (c. 1946–1947)
In January 1946, several mutinies broke out in the armed services, starting with that of RAF servicemen frustrated with their slow repatriation to Britain. The mutinies came to a head with the mutiny of the Royal Indian Navy in Bombay in February 1946, followed by others in Calcutta, Madras, and Karachi. The mutinies were rapidly suppressed. Also in early 1946, new elections were called, and Congress candidates won in eight of the eleven provinces. Late in 1946, the Labour government decided to end British rule of India, and in early 1947 it announced its intention of transferring power no later than June 1948 and of participating in the formation of an interim government.

Along with the desire for independence, tensions between Hindus and Muslims had been developing over the years. Muslims had always been a minority within the Indian subcontinent, and the prospect of an exclusively Hindu government made them wary of independence; they were as inclined to mistrust Hindu rule as they were to resist the foreign Raj, although Gandhi called for unity between the two groups in an astonishing display of leadership. Muslim League leader Muhammad Ali Jinnah proclaimed 16 August 1946 as Direct Action Day, with the stated goal of highlighting, peacefully, the demand for a Muslim homeland in British India; it resulted in the outbreak of the cycle of violence that would later be called the "Great Calcutta Killing of August 1946". The communal violence spread to Bihar (where Muslims were attacked by Hindus), to Noakhali in Bengal (where Hindus were targeted by Muslims), to Garhmukteshwar in the United Provinces (where Muslims were attacked by Hindus), and on to Rawalpindi in March 1947 (where Hindus were attacked or driven out by Muslims).

Independence and partition (c. 1947–present)

In August 1947, the British Indian Empire was partitioned into the Union of India and the Dominion of Pakistan. In particular, the partition of Punjab and Bengal led to rioting between Hindus, Muslims, and Sikhs in these provinces that spread to other nearby regions, leaving some 500,000 dead. Police and army units were largely ineffective: the British officers were gone, and the units were beginning to tolerate, if not actually indulge in, violence against their religious enemies. This period also saw one of the largest mass migrations in modern history, with a total of 12 million Hindus, Sikhs and Muslims moving between the newly created nations of India and Pakistan (which gained independence on 15 and 14 August 1947 respectively). In 1971, Bangladesh, formerly East Pakistan and East Bengal, seceded from Pakistan.

In recent decades, four main schools of historiography have shaped how historians study India: Cambridge, Nationalist, Marxist, and subaltern. The once common "Orientalist" approach, with its image of a sensuous, inscrutable, and wholly spiritual India, has died out in serious scholarship. The "Cambridge School", led by Anil Seal, Gordon Johnson, Richard Gordon, and David A. Washbrook, downplays ideology; however, it has been criticised for Western bias, or Eurocentrism. The Nationalist school has focused on Congress, Gandhi, Nehru and high-level politics. It highlighted the Mutiny of 1857 as a war of liberation, and Gandhi's 'Quit India' movement, begun in 1942, as defining historical events. This school of historiography has been criticised for elitism.
The Marxists have focused on studies of economic development, landownership, and class conflict in precolonial India and of deindustrialisation during the colonial period. The Marxists portrayed Gandhi's movement as a device of the bourgeois elite to harness popular, potentially revolutionary forces for its own ends. The Marxists, in turn, are accused of being too ideologically influenced. The "subaltern school" was begun in the 1980s by Ranajit Guha and Gyan Prakash. It shifts attention away from elites and politicians to "history from below", looking at the peasants through folklore, poetry, riddles, proverbs, songs, oral history and methods inspired by anthropology. It focuses on the colonial era before 1947 and typically emphasises caste and downplays class, to the annoyance of the Marxist school. More recently, Hindu nationalists have created a version of history to support their demands for "Hindutva" ("Hinduness") in Indian society. This school of thought is still in the process of development. In March 2012, Diana L. Eck, professor of Comparative Religion and Indian Studies at Harvard University, argued in her book India: A Sacred Geography that the idea of India dates to a much earlier time than the British or the Mughals, and that it was neither just a cluster of regional identities nor ethnic or racial.
The famous scholar of Indian Islam, Wilfred Cantwell Smith, feels that the delegates represented a ‘majority of India's Muslims’. Among those who attended the conference were representatives of many Islamic theologians and women also took part in the deliberations ... Shamsul Islam argues that the All-India Muslim League at times used intimidation and coercion to silence any opposition among Muslims to its demand for Partition. He calls such tactics of the Muslim League as a ‘Reign of Terror’. He gives examples from all over India including the NWFP where the Khudai Khidmatgars remain opposed to the Partition of India. - Ali, Afsar (17 July 2017). "Partition of India and Patriotism of Indian Muslims". The Milli Gazette. - "Great speeches of the 20th century". The Guardian. 8 February 2008. - Philip Ziegler, Mountbatten(1985) p. 401. - Symonds, Richard (1950). The Making of Pakistan. London: Faber and Faber. p. 74. OCLC 1462689. At the lowest estimate, half a million people perished and twelve millions became homeless. - Abid, Abdul Majeed (29 December 2014). "The forgotten massacre". The Nation. On the same dates [4 and 5 March 1947], Muslim League-led mobs fell with determination and full preparations on the helpless Hindus and Sikhs scattered in the villages of Multan, Rawalpindi, Campbellpur, Jhelum and Sargodha. The murderous mobs were well supplied with arms, such as daggers, swords, spears and fire-arms. (A former civil servant mentioned in his autobiography that weapon supplies had been sent from NWFP and money was supplied by Delhi-based politicians.) - Srinath Raghavan (12 November 2013). 1971. Harvard University Press. ISBN 978-0-674-73129-5. - Prakash, Gyan (April 1990). "Writing Post-Orientalist Histories of the Third World: Perspectives from Indian Historiography". Comparative Studies in Society and History. 32 (2): 383–408. doi:10.1017/s0010417500016534. JSTOR 178920. - Anil Seal, The Emergence of Indian Nationalism: Competition and Collaboration in the Later Nineteenth Century (1971) - Gordon Johnson, Provincial Politics and Indian Nationalism: Bombay and the Indian National Congress 1880–1915 (2005) - Rosalind O'Hanlon and David Washbrook, eds. Religious Cultures in Early Modern India: New Perspectives (2011) - Aravind Ganachari, "Studies in Indian Historiography: 'The Cambridge School'", Indica, March 2010, 47#1, pp. 70–93 - Hostettler, N. (2013). Eurocentrism: a marxian critical realist critique. Taylor & Francis. p. 33. ISBN 978-1-135-18131-4. Retrieved 6 January 2017. - "Ranjit Guha, "On Some Aspects of Historiography of Colonial India"" (PDF). - Bagchi, Amiya Kumar (January 1993). "Writing Indian History in the Marxist Mode in a Post-Soviet World". Indian Historical Review. 20 (1/2): 229–244. - Prakash, Gyan (December 1994). "Subaltern studies as postcolonial criticism". American Historical Review. 99 (5): 1475–1500. doi:10.2307/2168385. JSTOR 2168385. - Roosa, John (2006). "When the Subaltern Took the Postcolonial Turn". Journal of the Canadian Historical Association. 17 (2): 130–147. doi:10.7202/016593ar. - Menon, Latha (August 2004). "Coming to Terms with the Past: India". History Today. Vol. 54 no. 8. pp. 28–30. - "Harvard scholar says the idea of India dates to a much earlier time than the British or the Mughals". - "In The Footsteps of Pilgrims". - "India's spiritual landscape: The heavens and the earth". The Economist. 24 March 2012. - Dalrymple, William (27 July 2012). "India: A Sacred Geography by Diana L Eck – review". The Guardian. 
- Antonova, K.A.; Bongard-Levin, G.; Kotovsky, G. (1979). История Индии [History of India] (in Russian). Moscow: Progress. - Arnold, David (1991), Famine: Social Crisis and Historical Change, Wiley-Blackwell, ISBN 978-0-631-15119-7 - Asher, C.B.; Talbot, C (1 January 2008), India Before Europe (1st ed.), Cambridge University Press, ISBN 978-0-521-51750-8 - Bandyopadhyay, Sekhar (2004), From Plassey to Partition: A History of Modern India, Orient Longman, ISBN 978-81-250-2596-2 - Bayly, Christopher Alan (2000) , Empire and Information: Intelligence Gathering and Social Communication in India, 1780–1870, Cambridge University Press, ISBN 978-0-521-57085-5 - Bose, Sugata; Jalal, Ayesha (2003), Modern South Asia: History, Culture, Political Economy (2nd ed.), Routledge, ISBN 0-415-30787-2 - Brown, Judith M. (1994), Modern India: The Origins of an Asian Democracy (2nd ed.), ISBN 978-0-19-873113-9 - Bentley, Jerry H. (June 1996), "Cross-Cultural Interaction and Periodization in World History", The American Historical Review, 101 (3): 749–770, doi:10.2307/2169422, JSTOR 2169422 - Chauhan, Partha R. (2010). "The Indian Subcontinent and 'Out of Africa 1'". In Fleagle, John G.; Shea, John J.; Grine, Frederick E.; Baden, Andrea L.; Leakey, Richard E. (eds.). Out of Africa I: The First Hominin Colonization of Eurasia. Springer Science & Business Media. pp. 145–164. ISBN 978-90-481-9036-2. - Collingham, Lizzie (2006), Curry: A Tale of Cooks and Conquerors, Oxford University Press, ISBN 978-0-19-532001-5 - Daniélou, Alain (2003), A Brief History of India, Rochester, VT: Inner Traditions, ISBN 978-0-89281-923-2 - Datt, Ruddar; Sundharam, K.P.M. (2009), Indian Economy, New Delhi: S. Chand Group, ISBN 978-81-219-0298-4 - Devereux, Stephen (2000). Famine in the twentieth century (PDF) (Technical report). IDS Working Paper. 105. Brighton: Institute of Development Studies. Archived from the original (PDF) on 16 May 2017. - Devi, Ragini (1990). Dance Dialects of India. Motilal Banarsidass. ISBN 978-81-208-0674-0. - Doniger, Wendy, ed. (1999). Merriam-Webster's Encyclopedia of World Religions. Merriam-Webster. ISBN 978-0-87779-044-0. - Donkin, Robin A. (2003), Between East and West: The Moluccas and the Traffic in Spices Up to the Arrival of Europeans, Diane Publishing Company, ISBN 978-0-87169-248-1 - Eaton, Richard M. (2005), A Social History of the Deccan: 1300–1761: Eight Indian Lives, The new Cambridge history of India, I.8, Cambridge University Press, ISBN 978-0-521-25484-7 - Fay, Peter Ward (1993), The forgotten army : India's armed struggle for independence, 1942–1945, University of Michigan Press, ISBN 978-0-472-10126-9 - Fritz, John M.; Michell, George, eds. (2001). New Light on Hampi: Recent Research at Vijayanagara. Marg. ISBN 978-81-85026-53-4. - Fritz, John M.; Michell, George (2016). Hampi Vijayanagara. Jaico. ISBN 978-81-8495-602-3. - Guha, Arun Chandra (1971), First Spark of Revolution, Orient Longman, OCLC 254043308 - Gupta, S.P.; Ramachandran, K.S., eds. (1976), Mahabharata, Myth and Reality – Differing Views, Delhi: Agam prakashan - Gupta, S.P.; Ramachandra, K.S. (2007). "Mahabharata, Myth and Reality". In Singh, Upinder (ed.). Delhi – Ancient History. Social Science Press. pp. 77–116. ISBN 978-81-87358-29-9. - Kamath, Suryanath U. (2001) , A concise history of Karnataka: From pre-historic times to the present, Bangalore: Jupiter Books - Keay, John (2000), India: A History, Atlantic Monthly Press, ISBN 978-0-87113-800-2 - Kenoyer, J. Mark (1998). 
The Ancient Cities of the Indus Valley Civilisation. Oxford University Press. ISBN 978-0-19-577940-0. - Kulke, Hermann; Rothermund, Dietmar (2004) [First published 1986], A History of India (4th ed.), Routledge, ISBN 978-0-415-15481-9 - Law, R. C. C. (1978), "North Africa in the Hellenistic and Roman periods, 323 BC to AD 305", in Fage, J.D.; Oliver, Roland (eds.), The Cambridge History of Africa, 2, Cambridge University Press, ISBN 978-0-521-20413-2 - Ludden, D. (2002), India and South Asia: A Short History, One World, ISBN 978-1-85168-237-9 - Massey, Reginald (2004). India's Dances: Their History, Technique, and Repertoire. Abhinav Publications. ISBN 978-81-7017-434-9. - Metcalf, B.; Metcalf, T.R. (9 October 2006), A Concise History of Modern India (2nd ed.), Cambridge University Press, ISBN 978-0-521-68225-1 - Meri, Josef W. (2005), Medieval Islamic Civilization: An Encyclopedia, Routledge, ISBN 978-1-135-45596-5 - Michaels, Axel (2004), Hinduism. Past and present, Princeton, New Jersey: Princeton University Press - Mookerji, Radha Kumud (1988) [First published 1966], Chandragupta Maurya and his times (4th ed.), Motilal Banarsidass, ISBN 81-208-0433-3 - Mukerjee, Madhusree (2010). Churchill's Secret War: The British Empire and the Ravaging of India During World War II. Basic Books. ISBN 978-0-465-00201-6. - Müller, Rolf-Dieter (2009). "Afghanistan als militärisches Ziel deutscher Außenpolitik im Zeitalter der Weltkriege". In Chiari, Bernhard (ed.). Wegweiser zur Geschichte Afghanistans. Paderborn: Auftrag des MGFA. ISBN 978-3-506-76761-5. - Niyogi, Roma (1959). The History of the Gāhaḍavāla Dynasty. Oriental. OCLC 5386449. - Petraglia, Michael D.; Allchin, Bridget (2007). The Evolution and History of Human Populations in South Asia: Inter-disciplinary Studies in Archaeology, Biological Anthropology, Linguistics and Genetics. Springer Science & Business Media. ISBN 978-1-4020-5562-1. - Petraglia, Michael D. (2010). "The Early Paleolithic of the Indian Subcontinent: Hominin Colonization, Dispersals and Occupation History". In Fleagle, John G.; Shea, John J.; Grine, Frederick E.; Baden, Andrea L.; Leakey, Richard E. (eds.). Out of Africa I: The First Hominin Colonization of Eurasia. Springer Science & Business Media. pp. 165–179. ISBN 978-90-481-9036-2. - Pochhammer, Wilhelm von (1981), India's road to nationhood: a political history of the subcontinent, Allied Publishers, ISBN 978-81-7764-715-0 - Raychaudhuri, Tapan; Habib, Irfan, eds. (1982), The Cambridge Economic History of India, Volume 1: c. 1200 – c. 1750, Cambridge University Press, ISBN 978-0-521-22692-9 - Reddy, Krishna (2003). Indian History. New Delhi: Tata McGraw Hill. ISBN 978-0-07-048369-9. - Robb, P (2001). A History of India. London: Palgrave. - Samuel, Geoffrey (2010), The Origins of Yoga and Tantra, Cambridge University Press - Sarkar, Sumit (1989) [First published 1983]. Modern India, 1885–1947. MacMillan Press. ISBN 0-333-43805-1. - Sastri, K. A. Nilakanta (1955). A history of South India from prehistoric times to the fall of Vijayanagar. New Delhi: Oxford University Press. ISBN 978-0-19-560686-7. - Sastri, K. A. Nilakanta (2002) . A history of South India from prehistoric times to the fall of Vijayanagar. New Delhi: Oxford University Press. ISBN 978-0-19-560686-7. - Schomer, Karine; McLeod, W.H., eds. (1987). The Sants: Studies in a Devotional Tradition of India. Motilal Banarsidass. ISBN 978-81-208-0277-3. - Sen, Sailendra Nath (1 January 1999). Ancient Indian History and Civilization. New Age International. 
ISBN 978-81-224-1198-0. - Singh, Upinder (2008), A History of Ancient and Early Medieval India: From the Stone Age to the 12th Century, Pearson, ISBN 978-81-317-1120-0 - Sircar, D C (1990), "Pragjyotisha-Kamarupa", in Barpujari, H K (ed.), The Comprehensive History of Assam, I, Guwahati: Publication Board, Assam, pp. 59–78 - Sumner, Ian (2001), The Indian Army, 1914–1947, Osprey Publishing, ISBN 1-84176-196-6 - Thapar, Romila (1977), A History of India. Volume One, Penguin Books - Thapar, Romila (1978), Ancient Indian Social History: Some Interpretations (PDF), Orient Blackswan, archived from the original (PDF) on 14 February 2015 - Thapar, Romila (2003). The Penguin History of Early India (First ed.). Penguin Books India. ISBN 978-0-14-302989-2. - Williams, Drid (2004). "In the Shadow of Hollywood Orientalism: Authentic East Indian Dancing" (PDF). Visual Anthropology. Routledge. 17 (1): 69–98. doi:10.1080/08949460490274013. S2CID 29065670. - Basham, A.L., ed. The Illustrated Cultural History of India (Oxford University Press, 2007) - Buckland, C.E. Dictionary of Indian Biography (1906) 495pp full text - Chakrabarti D.K. 2009. India, an archaeological history : palaeolithic beginnings to early historic foundations. - Chattopadhyaya, D. P. (ed.). History of Science, Philosophy and Culture in Indian Civilization. 15-volum + parts Set. Delhi: Centre for Studies in Civilizations. - Dharma Kumar and Meghnad Desai, eds. The Cambridge Economic History of India: Volume 2, c. 1751 – c. 1970 (2nd ed. 2010), 1114pp of scholarly articles - Fisher, Michael. An Environmental History of India: From Earliest Times to the Twenty-First Century (Cambridge UP, 2018) - Guha, Ramachandra. India After Gandhi: The History of the World's Largest Democracy (2007), 890pp; since 1947 - James, Lawrence. Raj: The Making and Unmaking of British India (2000) online - Khan, Yasmin. The Raj At War: A People's History Of India's Second World War (2015); also published as India At War: The Subcontinent and the Second World War . - Khan, Yasmin. The Great Partition: The Making of India and Pakistan (2n d ed. Yale UP 2017) excerpt - Mcleod, John. The History of India (2002) excerpt and text search - Majumdar, R.C. : An Advanced History of India. London, 1960. ISBN 0-333-90298-X - Majumdar, R.C. (ed.) : The History and Culture of the Indian People, Bombay, 1977 (in eleven volumes). - Mansingh, Surjit The A to Z of India (2010), a concise historical encyclopedia - Markovits, Claude, ed. A History of Modern India, 1480–1950 (2002) by a team of French scholars - Metcalf, Barbara D. and Thomas R. Metcalf. A Concise History of Modern India (2006) - Peers, Douglas M. India under Colonial Rule: 1700–1885 (2006), 192pp - Richards, John F. The Mughal Empire (The New Cambridge History of India) (1996) - Riddick, John F. The History of British India: A Chronology (2006) excerpt - Riddick, John F. Who Was Who in British India (1998); 5000 entries excerpt - Rothermund, Dietmar. An Economic History of India: From Pre-Colonial Times to 1991 (1993) - Sharma, R.S., India's Ancient Past, (Oxford University Press, 2005) - Sarkar, Sumit. Modern India, 1885–1947 (2002) - Senior, R.C. (2006). Indo-Scythian coins and history. Volume IV. Classical Numismatic Group, Inc. ISBN 978-0-9709268-6-9. - Singhal, D.P. A History of the Indian People (1983) - Smith, Vincent. The Oxford History of India (3rd ed. 1958), old-fashioned - Spear, Percival. A History of India. Volume 2. Penguin Books. (1990) [First published 1965] - Stein, Burton. 
A History of India (1998) - Thapar, Romila. Early India: From the Origins to AD 1300 (2004) excerpt and text search - Thompson, Edward, and G.T. Garratt. Rise and Fulfilment of British Rule in India (1934) 690 pages; scholarly survey, 1599–1933 excerpt and text search - Tomlinson, B.R. The Economy of Modern India, 1860–1970 (The New Cambridge History of India) (1996) - Tomlinson, B.R. The political economy of the Raj, 1914-1947 (1979) online - Wolpert, Stanley. A New History of India (8th ed. 2008) online 7th edition - Bannerjee, Gauranganath (1921). India as known to the ancient world. London: Humphrey Milford, Oxford University Press. - Bayly, C.A. (November 1985). "State and Economy in India over Seven Hundred Years". The Economic History Review. 38 (4): 583–596. doi:10.1111/j.1468-0289.1985.tb00391.x. JSTOR 2597191. - Bose, Mihir. "India's Missing Historians: Mihir Bose Discusses the Paradox That India, a Land of History, Has a Surprisingly Weak Tradition of Historiography", History Today 57#9 (2007) pp. 34–. online - Elliot, Henry Miers; Dowson, John (1867). The History of India, as told by its own historians. The Muhammadan Period. London: Trübner and Co. - Kahn, Yasmin (2011). "Remembering and Forgetting: South Asia and the Second World War". In Martin Gegner; Bart Ziino (eds.). The Heritage of War. Routledge. pp. 177–193. - Jain, M. (2011). "4". The India They Saw: Foreign Accounts. Delhi: Ocean Books. - Lal, Vinay (2003). The History of History: Politics and Scholarship in Modern India. - Palit, Chittabrata (2008). Indian Historiography. - Sharma, Arvind (2003). Hinduism and Its Sense of History. Oxford University Press. ISBN 978-0-19-566531-4. - Sreedharan, E. (2004). A Textbook of Historiography, 500 B.C. to A.D. 2000. - Warder, A.K. (1972). An introduction to Indian historiography. - The Imperial Gazetteer of India. 1908–31. Highly detailed description of all of India in 1901. - "History of India Podcast" (Podcast).
Enclosure or Inclosure[a] is a term used in English landownership that refers to the appropriation of "waste"[b] or "common land"[c]: enclosing it and, by doing so, depriving commoners of their rights of access and privilege. Land could be enclosed through either a formal or an informal process. The process could normally be accomplished in three ways. First, there was the creation of "closes",[d] taken out of larger common fields by their owners.[e] Second, there was enclosure by proprietors: owners, usually small farmers or squires, who acted together, leading to the enclosure of whole parishes. Finally, there were enclosures by Acts of Parliament.

The primary reason for enclosure was to improve the efficiency of agriculture. However, there were other motives too: for example, the value of the land enclosed would be substantially increased. There were social consequences to the policy, with many protests at the removal of rights from the common people. Enclosure riots are seen by historians as 'the pre-eminent form' of social protest from the 1530s to the 1640s.

After William I invaded and conquered England in 1066, he distributed the land amongst 180 barons, who held the land as tenants. However, he promised the English people that he would keep the laws of Edward the Confessor, so commoners were still able to exercise their ancient customary rights. Land ownership in the UK is still based on the feudal system introduced by the Normans, under which all land was owned by the Crown. The original contract bound the people who occupied the land to provide some form of service; this evolved into a financial agreement that avoided or replaced the service.

Following the introduction of the feudal system there was an increase in the economic growth and urban expansion of the country. In the 13th century successful lords did very well financially, but the peasants, faced with ever-increasing costs, did not, and their landholdings dwindled. After outbreaks of the Black Death in the middle of the 14th century, however, there was a major decline in population and crop yields. The decline in population left surviving farm workers in great demand, and landowners faced the choice of raising wages to compete for workers or letting their lands go unused. Wages for labourers rose, and this translated into inflation across the economy. The ensuing difficulty in hiring labour has been seen as causing the abandonment of land and the demise of the feudal system, although some historians have suggested that the Black Death may only have sped up a process already under way.

From as early as the 12th century agricultural land had been enclosed, but the history of enclosure in England differs from region to region. Parts of south-east England (notably sections of Essex and Kent) retained the pre-Roman Celtic field system of farming in small enclosed fields. Similarly, in much of west and north-west England fields were either never open or were enclosed early. The primary area of field management, known as the "open field system", was in the lowland areas of England, in a broad band running diagonally from Yorkshire and Lincolnshire across to the south, taking in parts of Norfolk and Suffolk, Cambridgeshire, large areas of the Midlands, and most of south central England.

In essence, enclosure:

- was the removal of the common rights that people held over farm lands and parish commons;
- was the reallocation of scattered strips of land into large new fields that were enclosed either by hedges, walls or fences;
- reserved the newly created enclosed fields for the sole use of individual owners or their tenants.

Those with an interest in the land of a manor ranged across the social scale:

- Lord of the Manor
- Freeholders or yeomanry: proprietors of large and small properties
- Copyholders[f]
- Tenant farmers
- Cottagers/cottars
- Farm servants living in their employers' houses

Methods of enclosure

There were essentially two broad categories of enclosure: 'formal' and 'informal' agreement. Formal enclosure was achieved either through an Act of Parliament, from 1836 onwards, or by a written agreement signed by all parties involved; the written record would probably also include a map. With informal agreements there was minimal or no written record, other than occasionally a map of the agreement. The most straightforward informal enclosure was through 'unity of possession': if an individual managed to acquire all the disparate strips of land in an area and consolidate them into one whole piece, for example a manor, then any communal rights ceased to exist, as there was no one left to exercise them.

Open field system

Before the enclosures in England, "common"[c] land was under the control of the manorial lord. The usual manor consisted of two elements: the peasant tenantry and the lord's holding, known as the demesne farm. The land the lord held was for his benefit and was farmed by his own direct employees or by hired labour. The tenant farmers had to pay rent, which could be in cash, labour or produce. Tenants had certain rights, such as pasture, pannage or estovers, that could be held by neighbouring properties or (occasionally) in gross[g] by all manorial tenants. "Waste"[b] land often consisted of very narrow areas, typically less than 1 yard (0.91 m) wide, in awkward locations (such as cliff edges, or inconveniently shaped manorial borders), but could also be bare rock. It was not officially used by anyone, and so was often "farmed" by landless peasants. The remaining land was organised into a large number of narrow strips, each tenant possessing several disparate strips[h] throughout the manor, as would the manorial lord. The open-field system was administered by manorial courts, which exercised some collective control. The land in a manor under this system would consist of:

- Two or three very large common fields[i]
- Several very large common hay meadows[j]
- Closes[d]
- In some cases, a park
- Common waste[b]

What might now be termed a single field would have been divided under this system among the lord and his tenants; poorer peasants (serfs or copyholders, depending on the era) were allowed to live on the strips owned by the lord in return for cultivating his land. The open-field system was probably a development of the earlier Celtic field system, which it replaced.

The open field system used a three-field crop rotation. Barley, oats or legumes would be planted in one field in spring, and wheat or rye in a second field in the autumn. There was no such thing as artificial fertilizer in mediaeval England, so the continual use of arable land for crops would exhaust the fertility of the soil. The open-field system solved that problem: it left the third field of arable land uncultivated each year, using that "fallow" field for grazing animals on the stubble of the old crop. The manure the animals produced in the fallow field helped restore its fertility. The following year, the fields for planting and fallow would be rotated.
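The mechanics of the rotation are simple enough to model in a few lines. The sketch below, in Python, is purely illustrative and not drawn from any source cited here; the field names and the grouping of crops into "roles" are assumptions made for the example.

```python
# Illustrative model of the medieval three-field rotation described above.
# Field names and crop groupings are assumptions for the example only.

FIELD_ROLES = [
    "spring crop (barley, oats or legumes)",
    "autumn crop (wheat or rye)",
    "fallow (grazed; manure restores fertility)",
]

def roles_for_year(fields, year):
    """Map each field to its role in a given year.

    Each field advances one role per year, so every field lies
    fallow exactly one year in three.
    """
    return {field: FIELD_ROLES[(i + year) % len(FIELD_ROLES)]
            for i, field in enumerate(fields)}

fields = ["North Field", "Middle Field", "South Field"]
for year in range(3):
    print(f"Year {year + 1}:")
    for field, role in roles_for_year(fields, year).items():
        print(f"  {field}: {role}")
```

Stepping the sketch through three years shows each field passing exactly once through spring crop, autumn crop and fallow, which is the collective discipline discussed next.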
The very nature of the three-field rotation system imposed a discipline on lord and tenants in their management of the arable land. Everyone had the freedom to do what they liked with their own land, but all had to follow the rhythms of the rotation system. The land-holding tenants had livestock, including sheep, pigs, cattle, horses, oxen and poultry, and after harvest the fields became 'common', so that they could graze their animals on that land. There are still examples of villages that use the open field system, one being Laxton, Nottinghamshire.

The end of the Open Field system

Seeking better financial returns, landowners looked for more efficient farming techniques. They saw enclosure as a way to improve efficiency;[l] however, it was not simply the fencing of existing holdings: there was also a fundamental change in agricultural practice. One of the most important innovations was the development of the Norfolk four-course system, which greatly increased crop and livestock yields by improving soil fertility and reducing fallow periods. Wheat was grown in the first year and turnips in the second, followed by barley, with clover and ryegrass, in the third; the clover and ryegrass were grazed or cut for feed in the fourth year. The turnips were used for feeding cattle and sheep in the winter. The practice of growing a series of dissimilar crops in the same area in sequential seasons helped to restore plant nutrients and reduce the build-up of pathogens and pests. The system also improved soil structure and fertility by alternating deep-rooted and shallow-rooted plants; turnips, for example, can recover nutrients from deep under the soil. Planting crops such as turnips and clover was not realistic under the open field system,[m] because the unrestricted access to the field meant that other villagers' livestock would graze on the turnips. Another important feature of the Norfolk system was that it used labour at times when demand was not at peak levels.

From as early as the 12th century, some open fields in Britain were being enclosed into individually owned fields. After the Statute of Merton in 1235, manorial lords were able to reorganize strips of land so that they were brought together in one contiguous block.[n] Copyholders[f] had a legally enforceable "customary tenancy"[o] on their piece of land, but a "copyhold tenancy"[o] was only valid for the holder's life. The heir had no right of inheritance, although usually, by custom and in exchange for a fee (known as a fine), the copyhold could be transferred to the heir. To remove these customary rights, landlords converted copyholds into leasehold tenancies; leasehold removed the customary rights, but the advantage to the tenant was that the land could be inherited.

There was a significant rise in enclosure during the Tudor period. It was quite often undertaken unilaterally by the landowner, sometimes illegally. The widespread eviction of people from their lands resulted in the collapse of the open field system in those areas, and the deprivations of the displaced workers have been seen by historians as a cause of subsequent social unrest. In Tudor England the ever-increasing demand for wool had a dramatic effect on the landscape: the attraction of the large profits to be made from wool encouraged manorial lords to enclose common land and convert it from arable to (mainly) sheep pasture.
The consequent eviction of commoners or villagers from their homes and the loss of their livelihoods became an important political issue for the Tudors. The resulting depopulation was financially disadvantageous to the Crown, and the authorities were concerned that many of the people dispossessed would become vagabonds and thieves. The depopulation of villages would also produce a weakened workforce and enfeeble the military strength of the state. From the time of Henry VII, Parliament began passing Acts either to stop enclosure, to limit its effects, or at least to fine those responsible. The so-called 'tillage acts' were passed between 1489 and 1597.[p] The people responsible for enforcing the Acts were, however, the same people who were actually opposed to them, and consequently the Acts were not strictly enforced.[q] Ultimately, with rising popular opposition to sheep farming, a statute of 1533 restricted the size of flocks to no more than 2,400 sheep, and in 1549 an Act imposed a poll tax on sheep coupled with a levy on home-produced cloth. The result made sheep farming less profitable. In the end, however, it was market forces that stopped the conversion of arable into pasture: an increase in corn prices during the second half of the 16th century made arable farming more attractive, so although enclosures continued, the emphasis shifted to more efficient use of the arable land.

Parliamentary Inclosure Acts

Historically, the initiative to enclose land came either from a landowner hoping to maximise the rental from their estate, or from a tenant farmer wanting to improve their farm. Before the 17th century enclosures were generally by informal agreement, and when enclosure by Act of Parliament was first introduced the informal method continued in use alongside it. The first enclosure by Act of Parliament was in 1604, for Radipole, Dorset. Many more Parliamentary Acts followed, and by the 1750s the parliamentary system had become the more usual method. The Inclosure Act 1773 created a law that enabled the "enclosure" of land, at the same time removing the commoners' right of access. Although there was usually compensation, it was often in the form of a smaller and poorer-quality plot of land. Between 1604 and 1914 there were more than 5,200 enclosure bills, covering 6,800,000 acres (2,800,000 ha) of land, approximately one fifth of the total area of England. Parliamentary enclosure was also used for the division and privatisation of common "wastes" such as fens, marshes, heathland, downland and moors.

Commissioners of Enclosure

The statutory process included the appointment of commissioners. The Commissioners of Enclosure had absolute authority to enclose and redistribute common and open fields from around 1745 until the General Enclosure Act of 1845; after the 1845 Act, permanent commissioners were appointed who could approve enclosures without having to submit them to Parliament. The Rev. William Homer, himself a Commissioner, provided a job description in 1766:

A Commissioner is appointed by Act of Parliament for dividing and allotting common fields and is directed to do it according to the respective interests of proprietors ... without undue preference to any, but paying regard to situation, quality and convenience. The method of ascertainment is left to the major part of the Commission ...
and this without any fetter or check upon them beside their own honour confidence (and late indeed) awed by the solemnity of an oath. This is perhaps one of the greatest trusts ever reposed in one set of men; and merits all the return of caution attention and integrity which can result from an honest impartial and ingenuous mind. (From William Homer, An Essay on the Nature and Method [of] the Inclosure of Common Fields, 1766)— Beresford 1946, pp. 130–140

After 1899 the Board of Agriculture, which later became the Ministry of Agriculture and Fisheries, inherited the powers of the Enclosure Commissioners.

One of the objectives of enclosure was to improve local roads. Commissioners were given authorisation to replace old roads and country lanes with new roads that were wider and straighter than those they replaced. The road system of England had been problematic for some time: an 1852 government report described the condition of a road between Surrey and Sussex as "very ruinous and almost impassable"; in 1749 Horace Walpole wrote to a friend complaining that anyone who desired good roads should "never go into Sussex"; and another writer said that the "Sussex road is an almost insuperable evil". The problem was that country lanes were worn out, and this had been compounded by the movement of cattle. The commissioners were therefore given powers to build wide, straight roads that would allow for the passage of cattle. The completed new roads were subject to inspection by the local Justices, to make sure they were of a suitable standard. In the late eighteenth century the width of the enclosure roads was at least 60 feet (18 m), but from the 1790s this was decreased to 40 feet (12 m), and later 30 feet became the normal maximum width. Straight roads of early origin, if not Roman, were probably enclosure roads; they were established in the period between 1750 and 1850. The building of the new roads, especially when linked up with new roads in neighbouring parishes and ultimately the turnpikes, was a permanent improvement to the road system of the country.

Social and economic factors

The social and economic consequences of enclosure have been much discussed by historians. In the Tudor period Sir Thomas More wrote in his Utopia:

The increase of pasture,' said I, 'by which your sheep, which are naturally mild, and easily kept in order, may be said now to devour men and unpeople, not only villages, but towns; for wherever it is found that the sheep of any soil yield a softer and richer wool than ordinary, there the nobility and gentry, and even those holy men, the abbots! not contented with the old rents which their farms yielded, nor thinking it enough that they, living at their ease, do no good to the public, resolve to do it hurt instead of good. They stop the course of agriculture, destroying houses and towns, reserving only the churches, and enclose grounds that they may lodge their sheep in them. As if forests and parks had swallowed up too little of the land, those worthy countrymen turn the best inhabited places into solitudes; for when an insatiable wretch, who is a plague to his country, resolves to enclose many thousand acres of ground, the owners, as well as tenants, are turned out of their possessions by trick or by main force, or, being wearied out by ill usage, they are forced to sell them; by which means those miserable people, both men and women, married and unmarried, old and young, with their poor but numerous families... (From Thomas More's Utopia,
1518)

An anonymous poem, known as "Stealing the Common from the Goose", has come to represent the opposition to the enclosure movement in the 18th century:

"The law locks up the man or woman
Who steals the goose from off the common,
But lets the greater felon loose
Who steals the common from the goose."
(Part of an 18th-century poem by Anon.)— Boyle 2003, pp. 33–74

According to one academic: "This poem is one of the pithiest condemnations of the English enclosure movement—the process of fencing off common land and turning it into private property. In a few lines, the poem manages to criticize double standards, expose the artificial and controversial nature of property rights, and take a slap at the legitimacy of state power. And it does it all with humor, without jargon, and in rhyming couplets."— Boyle 2003, pp. 33–74

In 1770 Oliver Goldsmith wrote The Deserted Village, a poem condemning rural depopulation, the enclosure of common land, the creation of landscape gardens and the pursuit of excessive wealth.

During the 19th and early 20th century historians generally had sympathy for the cottagers who rented their dwellings from the manorial lord, and for the landless labourers. John and Barbara Hammond said that "enclosure was fatal to three classes: the small farmer, the cottager and the squatter", and that "before enclosure the cottager[r] was a labourer with land; after enclosure [he] was a labourer without land".

Marxist historians, such as Barrington Moore Jr., focused on enclosure as a part of the class conflict that eventually eliminated the English peasantry and saw the emergence of the bourgeoisie. From this viewpoint, the English Civil War provided the basis for a major acceleration of enclosures. The parliamentary leaders supported the rights of landlords vis-à-vis the King, whose Star Chamber court, abolished in 1641, had provided the primary legal brake on the enclosure process. By dealing an ultimately crippling blow to the monarchy (which, even after the Restoration, no longer posed a significant challenge to enclosures), the Civil War paved the way for the eventual rise to power in the 18th century of what has been called a "committee of Landlords", a prelude to the UK's parliamentary system.

After 1650, with the increase in corn prices and the drop in wool prices, the focus shifted to the implementation of new agricultural techniques, including fertilizer, new crops and crop rotation, all of which greatly increased the profitability of large-scale farms. The enclosure movement probably peaked from 1760 to 1832; by the latter date it had essentially completed the destruction of the medieval peasant community. Surplus peasant labour moved into the towns to become industrial workers. The enclosure movement is considered by some scholars to be the beginning of the emergence of capitalism.

In contrast to the Hammonds' 1911 analysis of these events, J. D. Chambers and G. E. Mingay suggested that the Hammonds exaggerated the costs of change, when in reality enclosure meant more food for the growing population, more land under cultivation and, on balance, more employment in the countryside. The ability to enclose land and raise rents certainly made the enterprise more profitable:

| Rise in rent | | | |
| --- | --- | --- | --- |
| 23 villages | Lincolnshire | before 1799 | 92% |

Source: D. McCloskey, "The openfields of England: rent, risk and the rate of interest, 1300–1815"

Arnold Toynbee considered that the main feature distinguishing English agriculture was the massive reduction in common land between the middle of the 18th and the middle of the 19th century. The major advantages of the enclosures were:

- Effective crop rotation;
- Saving of time in travelling between dispersed fields; and
- The ending of constant quarrels over boundaries and rights of pasture in the meadows and stubbles.

He writes: "The result was a great increase in agricultural produce. The landowners having separated their plots from those of their neighbours and having consolidated them could pursue any method of tillage they preferred. Alternate and convertible husbandry … came in. The manure of the cattle enriched the arable land and grass crops on the ploughed-up and manured land were much better than were those on the constant pasture."

Since the late 20th century those contentions have been challenged by a new generation of historians. The enclosure movement has been seen by some as causing the destruction of the traditional peasant way of life, however miserable: landless peasants could no longer maintain economic independence, and so had to become labourers. Historians and economists such as M. E. Turner and D. McCloskey have examined the available contemporary data and concluded that the difference in efficiency between the open field system and enclosure is not so plain and obvious:

| | Wheat (open) | Barley (open) | Oats (open) | Wheat (enclosed) | Barley (enclosed) | Oats (enclosed) |
| --- | --- | --- | --- | --- | --- | --- |
| Mean acres per parish | 309.4 | 216.0 | 181.3 | 218.9 | 158.2 | 137.3 |
| Mean produce per parish (bushels) | 5,711.5 | 5,587.0 | 6,033.1 | 4,987.1 | 5,032.2 | 5,058.2 |
| Mean yield per acre (bushels) | 18.5 | 25.9 | 33.3 | 22.8 | 31.8 | 36.8 |

Source: M. E. Turner, "English Open Fields and Enclosures: Retardation or Productivity Improvements"

After the Black Death, during the 14th to 17th centuries, landowners started to convert arable land over to sheep, with legal support from the Statute of Merton of 1235. Villages were depopulated, and the peasantry responded with a series of revolts. In the 1381 Peasants' Revolt enclosure was one of the side issues; in Jack Cade's rebellion of 1450 land rights were a prominent demand; and by the time of Kett's Rebellion of 1549 enclosure was a main issue, as it was in the Captain Pouch revolts of 1604–1607, when the terms "leveller" and "digger" appeared, referring to those who levelled the ditches and fences erected by enclosers. D. C. Coleman writes that "many troubles arose over the loss of common rights", with resentment and hardship coming through various channels, including the "loss of ancient rights in the woodlands to cut underwood, to run pigs".

The protests against enclosure were not confined to the countryside. Enclosure riots also occurred in towns and cities across England in the late 15th and early 16th century. The urban unrest was distributed across the whole of the country, from York in the north to Southampton in the south, and from Gloucester in the west to Colchester in the east. The urban rioters were not necessarily agricultural workers, but included artisanal workers such as butchers, shoemakers, plumbers, clothmakers, millers, weavers, glovers, shearmen, barbers, cappers, tanners and glaziers.

In May and June 1607 the villages of Cotesbach (Leicestershire); Ladbroke, Hillmorton and Chilvers Coton (Warwickshire); and Haselbech, Rushton and Pytchley (Northamptonshire) saw protests against enclosures and depopulation.
The rioting that took place became known as the Midland Revolt, and it drew considerable popular support from the local people.[s] It was led by John Reynolds, otherwise known as 'Captain Pouch', thought to have been an itinerant pedlar or tinker by trade and said to have originated from Desborough, Northamptonshire. He told the protesters he had authority from the King and the Lord of Heaven to destroy enclosures, and promised to protect them by the contents of the pouch he carried at his side, which he said would keep them from all harm (after he was captured, his pouch was opened; all that was in it was a piece of mouldy cheese). A curfew was imposed in the city of Leicester, as it was feared citizens would stream out of the city to join the riots.[s] A gibbet was erected in Leicester as a warning, and was pulled down by the citizens.

Newton Rebellion: 8 June 1607

The Newton Rebellion was one of the last times that the non-mining commoners of England and the gentry were in open, armed conflict. Things had come to a head in early June, when James I issued a Proclamation and ordered his Deputy Lieutenants in Northamptonshire to put down the riots. It is recorded that women and children were part of the protest. Over a thousand people had gathered at Newton, near Kettering, pulling down hedges and filling ditches, to protest against the enclosures of Thomas Tresham. The Treshams were unpopular for their voracious enclosing of land – both the family at Newton and their better-known Roman Catholic cousins at nearby Rushton, the family of Francis Tresham, who had been involved two years earlier in the Gunpowder Plot and had, it was announced, died in the Tower of London. Sir Thomas Tresham of Rushton was vilified as 'the most odious man' in Northamptonshire. The old Roman Catholic gentry family of the Treshams had long argued about territory with the emerging Puritan gentry family, the Montagus of Boughton. Now Tresham of Newton was enclosing common land – The Brand Common – that had been part of Rockingham Forest.

Edward Montagu, one of the Deputy Lieutenants, had stood up against enclosure in Parliament some years earlier, but was now placed by the King in the position, effectively, of defending the Treshams. The local armed bands and militia refused the call-up, so the landowners were forced to use their own servants to suppress the rioters on 8 June 1607. The Royal Proclamation of King James was read twice; the rioters continued in their actions, although at the second reading some ran away. The gentry and their forces charged, and a pitched battle ensued in which 40–50 people were killed; the ringleaders were hanged and quartered. A much later memorial stone to those killed stands at the former church of St Faith, Newton, Northamptonshire:

8th June 1607
This stone commemorates the Newton Rebellion of 8th June 1607
During this uprising over 40 Northamptonshire villagers are recorded to have been slain whilst protesting against the enclosure of common land by local landowners
May their souls rest in peace.
(Inscription on memorial stone at St Faith's.)

The Tresham family declined soon after 1607. The Montagu family went on, through marriage, to become the Dukes of Buccleuch, enlarging the wealth of the senior branch substantially.

Western Rising 1630–32 and forest enclosure

Although Royal forests were not technically commons, they were used as such from at least the 1500s onwards.
By the 1600s, when the Stuart kings examined their estates to find new revenues, it had become necessary to offer compensation to at least some of those using the lands as commons when the forests were divided and enclosed. The majority of the disafforestation took place between 1629 and 1640, during Charles I of England's Personal Rule. Most of the beneficiaries were Royal courtiers, who paid large sums to enclose and sublet the forests. Those dispossessed of the commons, especially recent cottagers and those outside of tenanted lands belonging to manors, were granted little or no compensation, and rioted in response.

See also

- British Agricultural Revolution – Mid-17th to 19th century revolution centred around agriculture
- The Diggers – Group of Protestant agrarian socialists in 17th-century England
- Gerrard Winstanley – English religious reformer, philosopher and activist (1609–1676)
- Levellers – 1640s English political movement
- Highland Clearances – Eviction of tenants from the Scottish Highlands in the 18th and 19th centuries
- Lowland Clearances
- Primitive accumulation of capital – Appropriation as the origin of capital
- Swing Riots – 1830 uprisings by English agricultural workers
- Abandoned village – Village that has been deserted
- Accumulation by dispossession – Policies to centralize wealth and power
- Tragedy of the commons – Self-interests causing depletion of a shared resource
- Digital enclosure – Model between surveillance and interactive economy

Notes

- ^ Inclosure is an archaic spelling. Enclosure is the more usual spelling, but both forms are used in this article.
- ^ a b c Land of a poor quality that was only useful for grazing animals or collecting fuel. Holdings described as "not in use" or "waste" paid no tax.
- ^ a b Although 'owned' by the manorial lord, commoners had legal rights over the land and the manorial lord could not enclose it.
- ^ a b Small fields or paddocks usually created by the partitioning of a larger ancient open field.
- ^ By 1750 this had led to the loss of up to half the common fields of many English villages.
- ^ a b Copyholders held their land according to the custom of the manor. The mode of landholding took its name from the fact that the title deed received by the tenant was a copy of the relevant entry in the manorial court roll.
- ^ Common in gross refers to a legal right granted to a person for access to another's land, for example to graze their animals.
- ^ There was no standard size for a strip of land, and most holdings comprised between forty and eighty strips.
- ^ Large area of arable land divided into strips.
- ^ Known as dole or dale meadow.
- ^ Although one virgate is shown to be 30 acres, as it was not standardised one virgate could range from 15 to 40 acres.
- ^ Efficiency meant improvements in per unit acre yields and in total parish output.
- ^ M. E. Turner disagreed with this point of view. He posited that, with a certain amount of organisation, turnips were grown in the open field system and were only grown marginally more under enclosure.
- ^ Land owned by an individual, rather than in common, was known as Severals.
- ^ a b The Lord of the Manor has the freehold to all the land of the estate. A "customary tenancy" is a parcel of land, from the estate, held at the will of the lord according to the custom of the manor. A "copyhold tenancy" was a "customary tenancy" held by the copyholder. The manorial court was responsible for dealing with these tenancies.
- ^ The first being in 1489; this was followed by four acts under Henry VIII, one under Edward VI, one under Mary and three under Elizabeth I.
- ^ The government also appointed eight Royal Commissions between 1517 and 1636.
- ^ The legal definition of a cottage, in England, is a small house for habitation without land. During the reign of Elizabeth I a statute mandated that a cottage had to be built with at least 4 acres (16,000 m2) of land; thus the cottager was someone who lived in a cottage with a smallholding of land. The statute was later repealed.
- ^ a b The people involved in the protest were not just the dispossessed tenants of depopulated Midland villages but also included urban-dwellers struggling to make ends meet in the towns, especially Leicester and Kettering.

Citations

- ^ a b Friar 2004, pp. 144–145.
- ^ Amt 1991, pp. 240–248.
- ^ a b c Kain, Chapman & Oliver 2004, pp. 9–10.
- ^ Friar 2004, p. 90.
- ^ a b Cahill 2002, p. 37.
- ^ a b McCloskey 1972, pp. 15–35.
- ^ Mingay 2014, p. 33.
- ^ a b c Liddy 2015, pp. 41–77.
- ^ a b c d e f Monbiot 1995.
- ^ Mulholland 2015.
- ^ Cahill 2002, p. 397.
- ^ a b Bauer et al. 1996, pp. 106–107.
- ^ Prestwich 2007, pp. 454–457.
- ^ Cartwright 1994, pp. 32–46.
- ^ Hatcher 1994, pp. 3–35.
- ^ a b c UK Parliament 2021.
- ^ Thirsk 1958, p. 4.
- ^ Hooke 1988, pp. 121–131.
- ^ a b c Mingay 2014, p. 7.
- ^ a b c d e f g Hammond & Hammond 1912, p. 28.
- ^ a b c Hoyle 1990, pp. 1–20.
- ^ Bartlett 2000, pp. 312–313.
- ^ British Government 2019.
- ^ a b Clark & Clark 2001, pp. 1009–1036.
- ^ Friar 2004, p. 430.
- ^ a b c Friar 2004, p. 300.
- ^ Friar 2004, pp. 120 and 272.
- ^ Friar 2004, p. 145.
- ^ Friar 2004, pp. 299–300.
- ^ Hopcroft 1999, pp. 17–20.
- ^ Bartlett 2000, pp. 308–309.
- ^ Kanzaka 2002, pp. 593–618.
- ^ Bartlett 2000, p. 310.
- ^ a b Thompson 2008, pp. 621–642.
- ^ Grant 1992, Chapter 8.
- ^ Motamed, Florax & Masters 2014, pp. 339–368.
- ^ a b c d Turner 1986, pp. 669–692.
- ^ a b c d e f g h Friar 2004, pp. 144–146.
- ^ Overton 1996, p. 1.
- ^ Overton 1996, pp. 117 and 167.
- ^ Friar 2004, p. 390.
- ^ Chisolm 1911.
- ^ Beresford 1998, p. 28.
- ^ a b c d e Bowden 2015, pp. 110–111.
- ^ National Archive 2021.
- ^ McCloskey 1975, p. 146.
- ^ a b The National Archives 2021.
- ^ Mingay 2014, p. 48.
- ^ Secretary of State 1852, p. 4.
- ^ Jackman 1916, p. 295.
- ^ a b c Mingay 2014, pp. 48–49.
- ^ Friar 2004, p. 146.
- ^ Whyte 2003, p. 63.
- ^ a b Blum 1981, pp. 477–504.
- ^ Bell 1944, pp. 747–772.
- ^ a b Hammond & Hammond 1912, p. 100.
- ^ Elmes 1827, pp. 178–179.
- ^ Moore 1966, pp. 17, 19–29.
- ^ Moore 1966, p. 23.
- ^ Moore 1966, pp. 25–29.
- ^ Moore 1966, pp. 29–30.
- ^ Brantlinger 2018, pp. ix–xi.
- ^ Hickel 2018, pp. 76–82.
- ^ Chambers & Mingay 1982, p. 104.
- ^ Mingay 2014, p. 87.
- ^ McCloskey 1989, p. 17.
- ^ Toynbee 2020, pp. 13–15.
- ^ Neeson 1993, p. 223.
- ^ Humphries 1990.
- ^ Hobsbawm & Rudé 1973, p. 16.
- ^ Fairlie 2009, pp. 16–31.
- ^ Coleman 1977, p. 40.
- ^ a b c d Hindle 2008, pp. 21–61.
- ^ Wood 2001, pp. 118–119.
- ^ Martin 1986, pp. 166–167.
- ^ Hickel 2018, pp. 78–79.
- ^ Sharp 1980, p. 57.

References

- Amt, Emilie M. (1991). "The Meaning of Waste in the Early Pipe Rolls of Henry II". The Economic History Review. 44 (2): 240–248. doi:10.2307/2598295. JSTOR 2598295.
- Bartlett, Robert (2000). J.M. Roberts (ed.). England Under the Norman and Angevin Kings 1075–1225. London: OUP. ISBN 978-0-19-925101-8.
- Bauer, Alexander A.; Holtorf, Cornelius; Waterton, Emma; Garcia, Margarita Diaz-Andreu; Silberman, Neil Asher, eds. (1996). The Oxford Companion to Archaeology. Oxford University Press. ISBN 978-0-19-507618-9.
The Oxford Companion to Archaeology. Oxford University Press. ISBN 978-0-19-507618-9.
- Bell, Howard J. (1944). "The Deserted Village and Goldsmith's Social Doctrines". PMLA. 59 (3): 747–772. doi:10.2307/459383. ISSN 0030-8129. JSTOR 459383.
- Beresford, M. (1946). "Commissioners of Enclosure". The Economic History Review. 16 (2): 130–140. doi:10.2307/2590476. JSTOR 2590476.
- Beresford, Maurice (1998). The Lost Villages of England (Revised ed.). Sutton. ISBN 978-07509-1848-0.
- Blum, Jerome (1981). "English Parliamentary Enclosure". The Journal of Modern History. 5 (3): 477–504. doi:10.1086/242327. JSTOR 1880278. S2CID 144167728.
- Bowden, Peter J. (2015). Wool Trade in Tudor and Stuart England. Routledge. ISBN 978-0415-75927-4.
- Boyle, James (2003). "The Second Enclosure Movement and the Construction of the Public Domain". Law and Contemporary Problems. 66 (1/2): 33–74. JSTOR 20059171.
- Brantlinger, Patrick (2018). Barbed Wire: Capitalism and the Enclosure of the Commons. Routledge. ISBN 9781138564398.
- British Government (2019). "Practice guide 16: profits a prendre". HM Land Registry. Retrieved 13 May 2021.
- Cahill, Kevin (2002). Who Owns Britain. London: Canongate Books. ISBN 1-84195-310-5.
- Calder, Jonathan (2009). "J. L. Carr and St Faith's, Newton in the Willows". Liberal England. Retrieved 1 September 2021.
- Cartwright, Frederick F. (1994). Disease and History. Dorset Press. pp. 32–46. ISBN 978-0880-29690-8.
- Chambers, J. D.; Mingay, G. E. (1982). The Agricultural Revolution 1750–1850 (Reprinted ed.). Batsford. ISBN 978-07134-1358-8.
- Chisholm, Hugh, ed. (1911). Encyclopædia Britannica (11th ed.) – via Wikisource.
- Clark, Gregory; Clark, Anthony (2001). "Common Rights to Land in England, 1475–1839". The Journal of Economic History. 61 (4): 1009–1036. doi:10.1017/S0022050701042061. JSTOR 2697915. S2CID 154462400.
- Coleman, D. C. (1977). The Economy of England, 1450–1750. Oxford University Press. p. 40. ISBN 0-19-215355-2.
- Elmes, James (1827). On Architectural Jurisprudence; in which the Constitutions, Canons, Laws and Customs etc. London: W. Benning. Retrieved 19 February 2022.
- Fairlie, Simon (2009). "A Short History of Enclosure in Britain". The Land Magazine (7): 16–31.
- Friar, Stephen (2004). The Sutton Companion to Local History. Sutton Publishing. ISBN 0-7509-2723-2.
- Grant, Annie (1992). "Animal Resources". In Astill, Grenville; Grant, Annie (eds.). The Countryside of Medieval England. Wiley-Blackwell. ISBN 978-06311-8442-3.
- Hammond, J. L.; Hammond, Barbara (1912). The Village Labourer 1760–1832. London: Longman.
- Hatcher, John (1994). "England in the Aftermath of the Black Death". Past & Present. 144 (144): 3–35. doi:10.1093/past/144.1.3. JSTOR 651142.
- Hickel, Jason (2018). The Divide: Global Inequality from Conquest to Free Markets. W. W. Norton & Company. ISBN 978-0393651362.
- Hindle, Steve (2008). "Imagining Insurrection in Seventeenth-Century England: Representations of the Midland Rising of 1607". History Workshop Journal. 66 (66): 21–61. doi:10.1093/hwj/dbn029. JSTOR 25473007. Retrieved 27 April 2021.
- Hobsbawm, Eric; Rudé, George (1973). Captain Swing. Harmondsworth: Penguin. ISBN 978-0140-60013-1.
- Hooke, Della (1988). "Early Forms of Open-Field Agriculture in England". Geografiska Annaler. Series B, Human Geography. 70 (1): 121–131. JSTOR 490748.
- Hopcroft, Rosemary L. (1999). Regions, Institutions, and Agrarian Change in European History. Ann Arbor: University of Michigan Press. ISBN 978-0-472-11023-0.
- Hoyle, R. W. (1990).
"Tenure and the Land Market in Early Modern England: Or a Late Contribution to the Brenner Debate". The Economic History Review. 43 (1): 1–20. doi:10.2307/2596510. JSTOR 2596510. - Humphries, Jane (1990). "Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries". The Journal of Economic History. 50 (1): 17–42. doi:10.1017/S0022050700035701. S2CID 155042395. - Jackman, William T. (1916). The development of transportation in modern England (Volume 1). Cambridge: Cambridge University Press. OCLC 1110784622. - Kain, J.P.; Chapman, John; Oliver, R. (2004). The Enclosure Maps of England and Wales 1595–1918 A Cartographic Analysis and Electronic Catalogue. Cambridge: Cambridge University Press. ISBN 0-521-82771-X. - Kanzaka, Junichi (2002). "Villein Rents in Thirteenth-Century England: An Analysis of the Hundred Rolls of 1279-1280". The Economic History Review. 55 (4): 593–618. doi:10.1111/1468-0289.00233. JSTOR 3091958. - Liddy, Christian D. (2015). "Urban Enclosure Riots: Risings of the Commons in English Towns, 1480–1525". Past & Present. 226 (226): 41–77. doi:10.1093/pastj/gtu038. JSTOR 24545185. - McCloskey, D. (1975). "Economics of enclosure: a market analysis" (PDF). In Parker, W. N.; Jones, E.L. (eds.). European Peasants and their Markets: essays in Agrarian Economic History. pp. 123–160. ISBN 978-06916-1746-6. - McCloskey, D. (1972). "The Enclosure of Open Fields: Preface to a Study of Its Impact on the Efficiency of English Agriculture in the Eighteenth Century". The Journal of Economic History. 32 (1): 15–35. doi:10.1017/S0022050700075379. JSTOR 2117175. S2CID 155003917. - McCloskey, D (1989). David W Galenson (ed.). Markets in History: Economic studies of the past. The openfields of England: rent, risk and the rate of interest, 1300-1815. Cambridge University Press. ISBN 0-521-35200-2. - Martin, John E (1986). Feudalism to Capitalism: Peasant and Landlord in English Agrarian Development (Studies in Historical Sociology). Basingstoke, Hampshire: Palgrave. ISBN 978-033-340476-8. - Mingay, G.E. (2014). Parliamentary Enclosure in England. London: Routledge. ISBN 978-0-582-25725-2. - Monbiot, George (22 February 1995). "A Land Reform Manifesto". The Guardian. Retrieved 4 March 2012. - Moore, Barrington (1966). Social Origins of Dictatorship and Democracy: Lord and Peasant in the Making of the Modern World. Boston, Massachusetts: Beacon Press. ISBN 9780807050750. - More, Thomas (1901). Morley, Henry (ed.). Wikisource. . Translated by Gilbert Burnet. Cassell and Company – via - Motamed, Mesbah J.; Florax, Raymond J.G.M.; Masters, William A. (2014). "Agriculture, Transportation and the Timing of Urbanization: Global Analysis at the Grid Cell Level". Journal of Economic Growth. 19 (3): 339–368. doi:10.1007/s10887-014-9104-x. hdl:1871.1/e88da506-8a2e-4d79-94a3-8436e35a3783. JSTOR 44113430. S2CID 1143513. - Mulholland, Maureen (2015). Crowcroft, Robert; Cannon, John (eds.). law, development of. The Oxford Companion to British History (2 ed.). ISBN 978-01917-5715-0. - National Archive (2021). "Inclosure Act 1773". London: legislation.gov.uk. Retrieved 14 May 2021. - Neeson, J. M. (1993). Commoners: Common Right, Enclosure and Social Change in England, 1700–1820. Cambridge University Press. ISBN 0-521-56774-2. - Overton, Mark (1996). Agricultural Revolution in England: The transformation of the agrarian economy 1500–1850. Cambridge University Press. ISBN 978-0-521-56859-3. - Prestwich, Michael (2007). 
Plantagenet England 1225–1360. Oxford University Press. ISBN 978-0-19-922687-0.
- Secretary of State (1852). "Surrey". Turnpike Trusts: County Reports of the Secretary of State. Accounts and Papers: Turnpike Roads. Vol. XLIV. London: HM Stationery Office.
- Sharp, Buchanan (1980). In Contempt of All Authority. Berkeley: University of California Press. ISBN 0-520-03681-6. OL 4742314M.
- The National Archives (2021). "Enclosure Awards and Maps". Retrieved 14 May 2021.
- Thirsk, Joan (1958). Tudor Enclosures. London: The Historical Association. ISBN 9-780-852-78154-8.
- Thompson, S. J. (2008). "Parliamentary Enclosure, Property, Population, and the Decline of Classical Republicanism in Eighteenth-Century Britain". The Historical Journal. 51 (3): 621–642. doi:10.1017/S0018246X08006948. JSTOR 20175187. S2CID 159999424.
- Toynbee, Arnold (2020). The Industrial Revolution: A Translation into Modern English. Kindle edition.
- Turner, Michael (1986). "English Open Fields and Enclosures: Retardation or Productivity Improvements". The Journal of Economic History. 46 (3): 669–692. doi:10.1017/S0022050700046829. JSTOR 2121479. S2CID 153930819.
- UK Parliament (2021). "Enclosing the Land". London: UK Parliament. Retrieved 14 May 2021.
- Whyte, Ian (2003). Transforming Fell and Valley: Landscape and Parliamentary Enclosure in North West England. University of Lancaster. ISBN 978-18622-0132-3.
- Wood, Andy (2001). Riot, Rebellion and Popular Politics in Early Modern England (Social History in Perspective). Basingstoke, Hampshire: Palgrave. ISBN 978-033-363762-3.
Researchers at The Ohio State University, in collaboration with scientists at the University of Texas at Dallas and the National Institute for Materials Science in Japan, have found that graphene is more likely to become a superconductor than originally thought possible.
"Graphene by itself can conduct energy, as a normal metal is conductive, but it is only recently that we learned it can also be a superconductor, by making a so-called 'magic angle': twisting a second layer of graphene on top of the first," said Jeanie Lau, a professor of physics at Ohio State and co-author of the paper. "And that opens possibilities for additional research to see if we can make this material work in the real world."
Unlike most conventional conductors, superconductors are metals that can conduct electricity without resistance, thus suffering no loss of energy. Graphene, as a single layer, is not a superconductor. However, scientists at the Massachusetts Institute of Technology have published research showing that graphene could become a superconductor if one piece of graphene were laid on top of another piece and the layers twisted to a specific angle, what they termed the "magic angle." That magic angle, scientists thought, was between 1 degree and 1.2 degrees, a very precise angle.
"The question is, the magic angle: how magic does it have to be?" said Emilio Codecido, a graduate student in Lau's lab and a co-author on the paper.
The Ohio State team found that the magic angle appears to be less magical than originally thought. Their work found that graphene layers still superconducted at a smaller angle, around 0.9 degrees. It is a small distinction, but it could open the possibility of new experiments to investigate graphene as a potential superconductor in the real world. So far, the use of superconductors outside scientific laboratories is limited because, in order to superconduct electricity, the lines must be kept at extremely low temperatures.
"This research pushed our understanding of superconductors and the magic angle a little further than the theory and prior experiments might have expected," said Marc Bockrath, a co-author of the paper and physics professor at Ohio State.
"Superconductivity could revolutionize many industries: electric transmission lines, communication lines, transportation, trains," Codecido said. "Superconductivity in twisted bilayer graphene will teach us about superconductivity at much higher temperatures, temperatures that will be useful for real-world applications. That's where future work will be focused."
Table of Contents
- 1 What is the amplitude of a wave number?
- 2 How do I find the amplitude of a wave?
- 3 What is the symbol of amplitude?
- 4 What is amplitude of a sine wave?
- 5 What are examples of amplitude?
- 6 How do you find amplitude of a graph?
- 7 What is a wave amplitude?
- 8 Does higher amplitude mean louder sound?
- 9 How is the amplitude of a wave measured?
- 10 How is the frequency of a sound wave calculated?
What is the amplitude of a wave number?
Amplitude: distance between the resting position and the maximum displacement of the wave. Frequency: number of waves passing by a specific point per second. Period: time it takes for one wave cycle to complete.
How do I find the amplitude of a wave?
Amplitude is generally calculated by looking at a graph of a wave and measuring the height of the wave from the resting position. The amplitude is a measure of the strength or intensity of the wave. For example, when looking at a sound wave, the amplitude will measure the loudness of the sound.
What is the symbol of amplitude?
The symbol for amplitude is A (italic capital A). The SI unit of amplitude is the meter [m], but other length units may be used.
How can you tell if a wave has a high or low amplitude?
The amount of energy carried by a wave is related to the amplitude of the wave. A high-energy wave is characterized by a high amplitude; a low-energy wave is characterized by a low amplitude. Putting a lot of energy into a transverse pulse will not affect the wavelength, the frequency or the speed of the pulse.
What is amplitude and frequency?
The amplitude of a wave is half the distance from the highest point on the wave (peak or crest) to the lowest point on the wave (trough). Frequency refers to the number of waves that pass a given point in a given time period and is often expressed in terms of hertz (Hz), or cycles per second.
What is amplitude of a sine wave?
The amplitude of the sine function is the distance from the middle value, or the line running through the graph, up to the highest point. In other words, the amplitude is half the distance from the lowest value to the highest value.
What are examples of amplitude?
The definition of amplitude refers to the length and width of waves, such as sound waves, as they move or vibrate. How much a radio wave moves back and forth is an example of its amplitude.
How do you find amplitude of a graph?
Amplitude is the distance between the center line of the function and the top or bottom of the function, and the period is the distance between two peaks of the graph, or the distance it takes for the entire graph to repeat. For a function written as y = A sin(B(x + C)) + D: Amplitude = A, Period = 2π/B, Horizontal shift to the left = C, Vertical shift = D.
What is the measure of amplitude?
Amplitude, in physics, is the maximum displacement or distance moved by a point on a vibrating body or wave measured from its equilibrium position. For a longitudinal wave, such as a sound wave, amplitude is measured by the maximum displacement of a particle from its position of equilibrium.
What is a wave amplitude?
Amplitude, in physics, is the maximum displacement or distance moved by a point on a vibrating body or wave measured from its equilibrium position. It is equal to one-half the length of the vibration path. Waves are generated by vibrating sources, their amplitude being proportional to the amplitude of the source.
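To see these formulas in action, here is a minimal Python sketch; the coefficient values are invented for illustration. It builds y = A sin(B(x + C)) + D numerically and recovers the amplitude as half the distance from the lowest value to the highest value, exactly as described above.

```python
import numpy as np

# Invented example coefficients: y = A*sin(B*(x + C)) + D
A, B, C, D = 3.0, 2.0, 0.5, 1.0

x = np.linspace(0, 4 * np.pi, 100_000)
y = A * np.sin(B * (x + C)) + D

# Amplitude: half the distance from the lowest to the highest value.
amplitude = (y.max() - y.min()) / 2
# Vertical shift (the "middle line" of the graph).
midline = (y.max() + y.min()) / 2
# Period from the coefficient B.
period = 2 * np.pi / B

print(f"amplitude ~ {amplitude:.3f} (expected {A})")
print(f"midline   ~ {midline:.3f} (expected {D})")
print(f"period    = {period:.3f}")
```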
Does higher amplitude mean louder sound?
The sound is perceived as louder if the amplitude increases, and softer if the amplitude decreases. As the amplitude of the sound wave increases, the intensity of the sound increases. Sounds with higher intensities are perceived to be louder. Relative sound intensities are often given in units named decibels (dB).
How is the amplitude of a wave measured?
Amplitude measures how much energy is being transported by the wave. The larger the amplitude, the more energy a wave has. The symbol for amplitude is a capital letter A. Be careful not to make the mistake of thinking amplitude is the distance from crest to trough; it is measured from the rest position.
How are amplitude and frequency related to energy?
One is amplitude, which is the distance from the rest position of a wave to the top or bottom. Large-amplitude waves contain more energy. The other is frequency, which is the number of waves that pass by each second. If more waves pass by, more energy is transferred each second.
How is high amplitude equivalent to loud sound?
High amplitude is equivalent to loud sounds. The waveform representation converts the pressure variations of sound waves into a pictorial graph which is easier to understand. A sound wave is made of areas of high pressure alternating with areas of low pressure. The high-pressure areas are represented as the peaks of the graph.
How is the frequency of a sound wave calculated?
In sound, the frequency is also known as pitch. The frequency of the vibrating source of sound is calculated in cycles per second. The SI unit for frequency is the hertz (Hz), defined as f = 1/T, where T refers to the time period of the wave.
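Putting the two ideas together (amplitude as maximum displacement from rest, frequency as f = 1/T), here is a hedged Python sketch; the 440 Hz tone and the sample rate are invented for illustration:

```python
import numpy as np

# Invented example: a 440 Hz tone sampled at 44.1 kHz for one second.
sample_rate = 44_100
t = np.arange(sample_rate) / sample_rate
signal = 0.8 * np.sin(2 * np.pi * 440 * t)

# Amplitude: maximum displacement from the rest (zero) position.
amplitude = np.max(np.abs(signal))

# Frequency via f = 1/T: estimate the period T from upward zero crossings.
crossings = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
periods = np.diff(crossings) / sample_rate  # seconds per cycle
frequency = 1 / periods.mean()

print(f"amplitude ~ {amplitude:.2f}")      # ~0.8
print(f"frequency ~ {frequency:.1f} Hz")   # ~440.0
```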
Critical Thinking Skills Guide
by Becton Loveless
Critical thinking is important. Generally speaking, critical thinking refers to the ability to understand the logical connections between ideas. When a person understands these connections, it becomes easier to construct logical arguments based on those ideas. It also becomes easier to evaluate the arguments that other people make, to see if those arguments are based on sound reasoning. Since critical thinking involves connecting important concepts and ideas, critical thinkers often find it easier to solve problems in a systematic fashion. Critical thinkers can also prioritize which ideas are most relevant to their own arguments.
From this general idea of what critical thinking involves, it should be easy to see why critical thinking would be important to students. Students who become critical thinkers are better equipped to deal with a wide range of problems that they encounter in school. These students are better able to build new concepts upon previous ideas that they've learned. This is a useful skill throughout school. Advanced mathematics is built upon simpler math ideas. Science experiments require a basic understanding of the various substances used in the lab. Advanced argumentation is rooted in the simple ability to identify information that supports the argument's basic premise.
Despite all the potential advantages that may come with possessing critical thinking skills, these skills are not themselves taught directly in school. Such skills may be inadvertently taught during the course of various lessons and school work but, for the most part, critical thinking skills aren't typically directly addressed. There are no classes committed to teaching critical thinking skills alone, leading teachers across multiple subjects to have to find ways of integrating critical thinking into their lessons independently.
Critical Thinking by the End of High School
Entering college, students have hopefully learned several advanced critical thinking skills that will support them through their college work. Specifically, there are six critical thinking skills that will support upper high school students and college students. These skills can help students to perform better in a range of subjects.
Identification is important to critical thinking because it refers to the ability of a student to identify the existing problem and what factors impact that problem. This first critical thinking skill is what gives students the ability to see the scope of the problem and start thinking about how to solve the issue. In a new situation, learners ask what the problem is, why it might be happening, and what the outcome is. From this initial set of questions, they come to an understanding of the problem's scope and potential solutions.
Research of a problem cannot begin until identification has taken place. Once identification occurs, a learner can start researching that problem. How much research is necessary will depend on the scope of the problem. Mathematical problems, for instance, may rely on researching examples of the problem and reviewing more fundamental formulas. More complex problems, such as addressing large social issues, still rely on the same process of understanding the scope of the issue and identifying what materials need to be referenced to address the problem. Research is also important when it comes to understanding claims.
Students should be able to hear a statement, question it, and verify that statement using objective evidence discovered through research. This is in contrast to the uncritical response of simply accepting the statement.
Identifying bias is one of the more difficult skills for students to grasp. Everyone has bias, including students themselves. A learner needs to be able to identify bias in the materials they read and recognize how it might shape what's being written. Authors may write things that favor a certain point of view, which would affect how much a reader could trust the material. On the other hand, students should also be able to examine their own biases. It's important not to write only in favor of one's own view, something that becomes increasingly important as a person progresses through higher education. It's important for students to challenge their own perspectives as well as the evidence that they read.
The ability to make inferences is a critical skill for students to learn as they analyze data and piece together information. During the course of putting together information, it's important to learn how to draw conclusions based on that information. Students need to be able to look at a body of evidence and make a determination of what that data might mean. Not all inferences will be correct, so students also need to be able to revisit their inferences as new data comes up or as existing evidence is reinterpreted.
To make correct inferences and formulate arguments, students need to be able to determine the relevance of the information that they receive. This is not an issue of examining bias so much as being able to identify the information that's appropriate to solving a problem or making an argument. This is particularly important as students get into more advanced areas of research. For instance, as students start getting asked to write papers, they need to be able to search through primary and secondary documents that can support their argument. The more skilled a student becomes at determining the relevance of these documents, the less time they will have to spend sorting through irrelevant documents that don't support their research.
Perhaps counterintuitively, it's also important for people to learn how to curb their curiosity. Curiosity is important in that it drives research and exploration of a topic. However, consistent with the need to determine relevance is the need to identify where to end a line of inquiry. Curiosity can send people exploring any number of topics during research that only burn time instead of informing a student's work. The more skilled a student becomes at learning how to end certain paths of research, the more they can focus on supporting their studies and finding evidence that will work in their research.
Teaching Critical Thinking Skills
Actually teaching critical thinking skills is something that teachers have instincts about and often do inadvertently, without fully understanding how their lessons impact those skills. In truth, teachers should try to make critical thinking integral to their instructional design. Almost any instructor can begin teaching critical thinking by simply modeling the behavior for their students. They can assess information, its sources, and its biases. But to build in-depth critical thinking skills, teachers also need to present broad problems and scenarios that students need to explore for themselves.
By presenting a problem or scenario that needs to be addressed and allowing students time to debate the issue, they can be guided to see the value of other arguments while learning how to construct their own. This is also a process through which students can learn how to identify information that will help them present those arguments. Teachers can also provide feedback on these arguments to help students improve their research and argumentation process in the future.
Another important part of teaching critical thinking skills is asking questions. The questioning approach helps students to reassess their own perspectives and the evidence of others. When bringing up a topic or problem, instructors should ask some of the following:
- What do you think about this issue and why do you think that?
- Where did you get your information on this issue and why do you believe it?
- What is the implication of what you've learned and what conclusions can be reached?
- How do you view the problem and your information, and what other view could you take on it?
The point of these lines of questioning is to make students consider their own perspectives as well as contrary evidence. By asking these questions, students get to reevaluate what they believe and question whether they actually should believe it. Sometimes people hold certain beliefs without truly understanding why they believe them. By asking questions about one's own knowledge, it becomes possible to understand one's own knowledge base more deeply and discard information that may be inaccurate or too heavily biased.
There are also writing activities that teachers can use. During writing, students can be asked to write freely about any number of topics. The point of this free-writing session is to let students arrive at a conclusion about what they believe about a topic. This isn't a critical thinking phase of writing but is instead simply meant to allow students the freedom to reach a conclusion about what they believe.
After the student has freely explored the topic, they move on to the critical thinking phase of their writing project. At this stage, the student begins to examine what sort of biases influenced the position they took on the topic and reviews their conclusions. The student determines whether their inferences were accurate. This is essentially a reflective period in which students need to refine their writing and attack their own work to make it better, while continually asking themselves whether their evidence is sound and whether their biases affected the final work.
Critical Thinking Barriers
There are several barriers that keep students from fully developing critical thinking skills. Ironically, one of the biggest obstacles to critical thinking is the existing curriculum a school is using. Particularly when curriculum is heavily standardized, it is difficult for teachers to find opportunities to teach critical thinking. Too heavy a focus on standardized tests, including curriculum oriented toward making sure that students hit certain test scores, often means heavily fact-based teaching that expects rote memorization. This leaves few chances to ask the open questions through which students can question their knowledge base and critically assess a given situation.
There are, of course, other barriers to critical thinking. Sometimes, the problem lies with the fact that teachers are simply unused to teaching these skills.
Partly as a result of feeling pressured to achieve high standardized test scores, teachers often focus too much on teaching facts and rarely get to ask the sort of open-ended questions that can help cultivate critical thinking. However, even when they have the opportunity to do so, teachers sometimes lack the training necessary to encourage critical thinking among students. Teachers may know many activities to teach students with, without a concrete idea of how each contributes to the development of such skills. Teachers tend to be trained in how to pass along content rather than in how to encourage critical thinking.
One of the major problems that teachers face is an issue of time. Teaching content knowledge, or teaching to the test, involves passing along the information that will help students pass their exams. Passing along vast quantities of information for rote memorization can be done efficiently by simply giving students lots of information to learn. A significant amount of information can be covered within a class when teaching to an exam, but it's much harder to teach critical thinking skills. Teaching critical thinking, on the other hand, requires instructors to set aside extensive periods of time for questioning and debate. Considering that teachers already struggle to fit in all of their activities, it's difficult to ask them to accommodate large periods of time for passing along critical thinking skills.
Creatively solving this problem requires teachers to find small windows in which to fit critical thinking discussions, perhaps through the use of short question-and-answer activities during lectures. Or teachers can try to change the format of their classes completely to make them more hands-on, engaging environments in which critical thinking is ongoing.
Particle accelerators are a fundamental tool of modern science for advancing high-energy and nuclear physics, understanding the workings of stars, and creating new elements. The machines produce high electric fields that accelerate particles for use in applications such as cancer radiotherapy, nondestructive evaluation, industrial processing, and biomedical research. The steeper the change in voltage (that is, the more the voltage varies from one location to another), the more an accelerator can "push" particles to ever-higher energies in a short distance. With current accelerator technologies, electric-field gradients for ion accelerators are limited to approximately 30 megavolts per meter and low peak currents.
However, all that may change, thanks to research conducted by Livermore's Vincent Tang, Andréa Schmidt, Jennifer Ellsworth, Steve Falabella, and Brian Rusnak to better understand the acceleration mechanisms in Z-pinch machines. Scientists may eventually be able to use Z-pinches created from dense plasma foci for compact, scalable particle accelerators and radiation-source applications. With this simple technology, electric-field gradients greater than 100 megavolts per meter with kiloampere-class peak currents may be possible.
In its simplest form, a Z-pinch device uses the electric current in a plasma to generate a magnetic field that compresses the plasma, or "pinches it down." The "Z" designation refers to the direction of the current in the device: the z axis in an x, y, z (three-dimensional) coordinate space. "This simple plasma configuration was the first one to be identified," says Tang, an engineer who led the Z-pinch research. He explains that the static spark one gets between a doorknob and a finger is a type of Z-pinch plasma in nature. "In a basic laboratory setup," says Tang, "one runs a current through two plates, the current ionizes a gas and forms a plasma, and the plasma then self-pinches." In a Z-pinch machine, a cylinder of plasma (ionized gas) collapses on itself, momentarily producing extremely high temperatures and pressures at the center of the cylinder as well as very high electric fields.
Z-pinches have been a subject of interest since the 1950s, when they were explored as a possible avenue for creating fusion power. At that time, research with pinch devices in the United Kingdom and U.S. proliferated. However, instabilities in the plasma led to this effort being abandoned. "Still," says Tang, "the experiments created neutrons—a classic signal of fusion. It just wasn't thermonuclear fusion, which is what scientists thought was needed to achieve energy gain."
Nuclear fusion was one of Tang's interests in graduate school, so he had Z-pinch devices in mind when, in 2007, he was working on research involving compact directional neutron sources at Livermore. "I had just read a few papers on the high electric-field gradients produced in Z-pinch devices. In the past, most people weren't interested in specifically using the electric fields produced in a Z-pinch. The fields were considered a by-product and a nuisance, because most researchers were focused on using the devices for thermonuclear fusion. I brought up the possibility of using these fields for some of our accelerator applications to my colleague Brian Rusnak." However, such machines were not well enough understood to harness the electric-field gradients they produced. "It was essentially a wide-open field of inquiry, with many unknowns," says Tang.
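To see why gradient matters so much for accelerator size, here is a rough back-of-the-envelope sketch in Python. The 10-MeV target is an invented illustrative number, and the uniform-gradient assumption is ours, not the article's; for a singly charged ion, each megavolt of potential traversed adds one megaelectronvolt of energy.

```python
# Rough length needed for a singly charged ion to reach a target energy,
# assuming (as a simplification) a uniform accelerating gradient.
def length_needed_m(target_mev: float, gradient_mv_per_m: float) -> float:
    # For charge +1e, energy gain in MeV equals gradient (MV/m) times meters.
    return target_mev / gradient_mv_per_m

target = 10.0  # MeV, an arbitrary illustrative goal
print(length_needed_m(target, 30.0))   # conventional ~30 MV/m  -> ~0.33 m
print(length_needed_m(target, 100.0))  # Z-pinch-class 100 MV/m -> 0.10 m
```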
In the fall of 2010, Tang obtained Laboratory Directed Research and Development (LDRD) funding to better understand fast Z-pinches and demonstrate that they could accelerate particles such as protons and deuterons. Tang and his team combined new simulation and experimental approaches in their research. They concentrated their efforts on Z-pinches generated by dense plasma focus (DPF) devices. These devices have high electric-field gradients, are technologically simple, and have open geometries that allow for beam injection and extraction.
A DPF Z-pinch consists of two coaxially located electrodes with a high-voltage source connected between them, typically a capacitor bank. When the high-voltage source is energized with a low-pressure gas in the chamber, a plasma sheath forms at one end of the device. In the "run down" phase, the plasma sheath is pushed down the outside length of the inner electrode, ionizing and sweeping up neutral gas as it accelerates. "One can think of it as essentially a plasma rail gun," Tang explains. When the plasma sheath reaches the end of the electrode, it begins to collapse radially inward during the "run in" phase. In the final pinch phase, the plasma implodes, creating a high-density region that typically emits high-energy electron and ion beams, x rays, and neutrons.
For the simulation side of the research, the team turned to a fully kinetic, particle-scale simulation, using the commercially available LSP code for modeling a DPF device. LSP is a three-dimensional, electromagnetic particle-in-cell code designed specifically for large-scale plasma simulations. The code calculates the interaction between charged particles and external and self-generated electric and magnetic fields. Schmidt, who leads the simulation effort, explains, "We had to use the more computationally intensive particle approach instead of a fluid approach because the plasma distribution functions are not Maxwellian. The ions form beams when they accelerate, and high electric fields are created partially through kinetic electron instabilities that are not modeled in a fluid code. Until recently, though, we just didn't have the computing power or the tools to model a plasma particle by particle."
With LSP running on 256 processors for a full week on Livermore's Hera machine, the team became the first to model what happens in the pinch process at the particle scale. The results of the simulations reproduced experimental neutron yields on the order of 10⁷ and megaelectronvolt-scale high-energy ion beams. "No previous, self-consistent simulations of DPF pinch have predicted megaelectronvolt ions, even though ion energies up to 8 megaelectronvolts have been measured on kilojoule-class DPF devices," notes Schmidt.
The team also compared its results with those from simulations performed with fluid codes and with hybrid codes that combine aspects of kinetic and fluid codes. "The fluid simulations predicted zero neutrons and were not capable of predicting ion beams," says Schmidt. "The hybrid simulations underpredicted the experimental neutron yield by a factor of 100 and did not predict ions with energies above 200 kiloelectronvolts. The more complex, fully kinetic simulation was necessary to get the physics right."
The team also designed, fabricated, and assembled a tabletop DPF experiment to directly measure the acceleration gradients inside the Z-pinch. The first gradient recorded was a time-of-flight measurement of the DPF's self-generated ion beam using a Faraday cup.
"These measurements, made during subkilojoule DPF operation, now hold the record for the highest measured DPF gradient in that energy class," says Ellsworth.
A second and more sophisticated measurement of the gradient is now under way. The team has refurbished a radio-frequency-quadrupole accelerator to make an ion probe beam for the pinch plasma. The accelerator produces a 200-picosecond, 4-megaelectronvolt ion probe beam, which is injected into the hollow center of the DPF gun just as the pinch occurs. The researchers will use this tool to measure the acceleration of the probe beam through the Z-pinch. From that, they will deduce the acceleration gradient of the plasma and demonstrate the possibility of using the Z-pinch as an acceleration stage. "The probe-beam experiments will directly measure for the first time the particle acceleration gradients in the pinch," says Tang.
The initial measurements of beam energies, accelerating fields, and neutron yields are promising, matching well with simulation results. "The device is producing high gradients," says Tang, "and we're proving that one can use Z-pinches to accelerate injected beams. This research opens a world of possibilities for Z-pinch systems."
The most advanced application would be the use of Z-pinch devices as acceleration stages for particle accelerators. In addition to being compact, the devices are technologically simple, which means less cost and potentially less to go wrong. They also produce gradients much higher than those obtained with today's standard radio-frequency stages. Some near-term applications might include using Z-pinch devices to produce well-defined particle beams for nuclear forensics, radiography, oil exploration, and detection of special nuclear materials. With the LDRD phase of DPF Z-pinch research coming to a close, Tang and his team are exploring various uses of a Z-pinch device with agencies in the U.S. departments of Energy, Defense, and Homeland Security.
"It's a very exciting time for us," Tang says. "We now have a predictive capability for this phenomenon. As a result, we have a better idea of what happens in a Z-pinch plasma configuration. We hope to apply this discovery to future generations of accelerators and other areas of research. It's a fundamental discovery and a contribution to basic science understanding. With this new simulation capability and the ramping up of the probe-beam experiments, we see an exciting future ahead."
Key Words: accelerator, dense plasma focus (DPF) device, LSP code, particle beam, Z-pinch.
For further information contact Vincent Tang (925) 422-0126 (email@example.com).
Inflation means that the general level of prices is going up, the opposite of deflation. More money will need to be paid for goods (like a loaf of bread) and services (like getting a haircut at the hairdresser's). Economists measure inflation regularly to know an economy's state. Inflation changes the ratio of money to goods or services; more money is needed to get the same amount of a good or service, or the same amount of money will get a smaller amount of a good or service. To measure inflation, economists define fixed baskets of consumer goods and track their prices over time (a worked example of this price-index arithmetic appears after this article). There can be positive and negative effects of inflation.
Causes of inflation
When the total money in an economy (the money supply) increases too rapidly, the value of the money (the currency value) often decreases. Economists generally think that this increased money supply (monetary inflation) causes the prices of goods and services to increase (price inflation) over a longer period. They disagree on causes over a shorter period.
Demand-pull inflation
The demand-pull inflation theory can be summed up as "too much money chasing too few goods." In other words, if demand for goods grows faster than the amount of goods being made, then prices will go up. This most often happens in economies that are growing fast. If a company, for example, makes only a small quantity of a good that many buyers want, it can raise its prices, because the limited supply will sell anyway.
Cost-push inflation
The cost-push inflation theory says that when the costs of making goods (which are paid by the company) go up, companies have to raise prices to make a profit from selling the product. The higher costs of making goods can include things like workers' wages, taxes paid to the government, or rising costs of raw materials from other countries. However, Austrian School economists think this is wrong, because if people have to pay higher prices for one thing, they simply have less to spend on other things.
Costs of inflation
An extreme example: prices in Germany during the Papiermark hyperinflation of 1923 rose like this:

| Date | Price |
| --- | --- |
| 6 June 1912 | 7 Pfennig |
| 6 August 1923 | 923 Papiermark |
| 27 August 1923 | 177,500 Papiermark |
| 17 September 1923 | 2.1 million Papiermark |
| 15 October 1923 | 227 million Papiermark |
| 5 November 1923 | 22.7 billion Papiermark |
| 15 November 1923 | 320 billion Papiermark |

Almost everyone thinks excessive inflation is bad. Inflation affects different people in different ways. It also depends on whether inflation is expected or not. If the inflation rate is equal to what most people are expecting (anticipated inflation), then we can adjust and the cost is not as high. For example, banks can change their interest rates and workers can negotiate contracts that include automatic wage hikes as the price level goes up. Problems arise when there is unanticipated inflation:
- Creditors lose and debtors gain if the lender does not guess inflation correctly. For those who borrow, this is similar to getting an interest-free loan.
- Uncertainty about what will happen next makes corporations and consumers less likely to spend. This hurts economic output in the long run.
- People with a fixed income, such as retirees, see a decline in their purchasing power and, consequently, their standard of living.
- The entire economy must absorb repricing costs ("menu costs") as price lists, labels, menus and so forth have to be updated.
- If the inflation rate is greater than in other countries, domestic products become less competitive.
- Nominal interest rates rise because inflation is anticipated.
Other websites
- Inflation – Citizendium
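As promised above, here is a minimal sketch of the basket-based measurement; the basket contents, quantities, and prices are all invented for illustration:

```python
# A toy consumer basket: item -> (quantity, price last year, price this year).
# All numbers are invented for illustration.
basket = {
    "loaf of bread": (50, 2.00, 2.20),
    "haircut":       (6, 15.00, 16.50),
    "fuel (litre)":  (400, 1.50, 1.56),
}

cost_then = sum(qty * old for qty, old, new in basket.values())
cost_now = sum(qty * new for qty, old, new in basket.values())

# Inflation rate: the percentage change in the cost of the same basket.
inflation_pct = (cost_now / cost_then - 1) * 100

print(f"basket cost: {cost_then:.2f} -> {cost_now:.2f}")
print(f"inflation: {inflation_pct:.1f}%")  # about 5.4% with these numbers
```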
Primary Mathematics Common Core Edition Textbook 1A
Dimensions: 7.5 × 10.5 × 0.25 in
Textbooks present new concepts and learning tasks for students to complete with educator supervision. They include practice and review problems, and are designed to be used alongside workbooks.
Features & Components:
- Mathematical concepts are introduced in the opening pages and taught to mastery through specific learning tasks that allow for immediate assessment and consolidation.
- The Concrete–Pictorial–Abstract approach enables students to encounter math in a meaningful way.
- Direct correlation of the textbook to the workbook facilitates focused review and evaluation.
- The modeling method enables students to visualize and solve problems quickly and efficiently.
- New mathematical concepts are introduced through a spiral progression that builds on concepts already taught and mastered.
- Metacognition is employed as a strategy for learners to monitor their thinking processes in problem solving. Speech and thought bubbles provide guidance through the thought processes, making even the most challenging problems accessible to students.
- Color patches invite active student participation and facilitate lively discussion about concepts.
- Regular reviews in the textbook provide consolidation opportunities.
- The glossary effectively combines pictorial representation with simple mathematical definitions to provide a comprehensive reference guide for students.
- A curriculum map details where to find textbook content that covers each of the Common Core State Standards.
Note: Two textbooks (A and B) for each grade correspond to the two halves of the school year. Answer key not included. Soft cover.
Table of Contents
- Numbers 0 to 10
- Number Bonds
  - Making Number Stories
- Addition Within 10
  - Making Addition Stories
  - Methods of Addition
- Subtraction Within 10
  - Making Subtraction Stories
  - Methods of Subtraction
- Ordinal Numbers
- Numbers to 20
  - Counting and Comparing
  - Addition and Subtraction
- Comparing Numbers
  - Comparison by Subtraction
All copyrights reserved by Marshall Cavendish Education Pte. Ltd.
Published on Monday, November 4th, 2019, by Karoly Albert in Number.
There are two fundamental problems with worksheets. First, young children do not learn from them what teachers and parents believe they do (Kostelnik, Soderman, & Whiren, 1993). Second, children's time should be spent in more beneficial endeavors (Willis, 1995). The use of abstract numerals and letters, rather than concrete materials, puts too many young children at risk of school failure, and this has implications for years to come. Worksheets and workbooks should be used in schools only when children are older and developmentally ready to profit from them (Bredekamp, S. & Rosegrant, T., 1992). Our challenge is to convince parents and others that in a play-based, developmentally appropriate curriculum children are learning important knowledge, skills, and attitudes that will help them be successful in school and later life.
These Venn Diagram Worksheets are great for practicing solving set notation problems involving different sets, unions, intersections, and complements with three sets. These Venn Diagram Worksheets use advanced combinations of unions, intersections, relative complements and complements of sets. You may select to use standard sets, complements of sets or both. These Venn Diagram Worksheets will produce 10 questions on a single Venn Diagram for the students to answer.
This section contains all of the graphic previews for the Inequalities Worksheets. We currently have topics covering Graphing Single-Variable Inequalities, One-Step Inequalities by Adding and Subtracting, One-Step Inequalities by Multiplying or Dividing, Two-Step Inequalities, and Multi-Step Inequalities. These Inequalities Worksheets are a good resource for students in the 5th Grade through the 8th Grade.
These measurement worksheets are great for practicing converting different liquid measure units. These measurement worksheets will produce twenty questions on different liquid measuring units per worksheet.
A timed drill is a multiplication worksheet with all of the single-digit multiplication problems on one page. A student should be able to work all of the problems on the multiplication worksheets correctly in the allowed time. These multiplication worksheets are appropriate for Kindergarten, 1st Grade, 2nd Grade, 3rd Grade, 4th Grade, and 5th Grade.
This Addition Worksheet is great for adding doubles and near-doubles number sets. Dots will be drawn to the side of the addition problem to aid in solving the problem. The student will need to use the dots next to the addition problem to learn how to properly add double number sets. You may select the range for the doubles and the number of problems.
These fractions worksheets are great for practicing subtracting mixed number fraction problems. You may select whether or not the fractions worksheets require regrouping. The fractions worksheets may be selected for five different degrees of difficulty. The answer worksheets will show the progression of how to solve the problems. These worksheets will generate 10 or 15 mixed number subtraction problems per worksheet.
Fraction to decimal: Divide the numerator by the denominator to get your decimal.
ex: 2 divided by 4 = 0.5
Decimal to percent: Take your decimal and multiply it by 100. To do it without a calculator, move the decimal point to the right one place for each zero in 100 (so, two places).
ex: 0.5 x 100 = 50%
Percent to decimal: Just like converting decimal to percent, but divide by 100 instead. Without a calculator, put a decimal point after the last digit (on the right) and move it to the left one place for each zero in the number you are dividing by.
ex: 40 divided by 100 = 0.40
Decimal to fraction: Say it. If your decimal number ends in the tenths place, your denominator is 10; if it ends in the hundredths place, your denominator is 100; if it ends in the thousandths place, your denominator is 1000.
ex: 0.5 = 5/10, 0.55 = 55/100, 0.557 = 557/1000
Percent to fraction: Take your percent and make it your numerator; your denominator is always 100.
ex: 55% = 55/100
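The same conversions can be checked with a few lines of Python; note that Python's built-in fractions module reduces fractions automatically, so 55/100 prints as 11/20:

```python
from fractions import Fraction

# Fraction to decimal: divide the numerator by the denominator.
print(2 / 4)              # 0.5

# Decimal to percent: multiply by 100.
print(0.5 * 100)          # 50.0, i.e. 50%

# Percent to decimal: divide by 100.
print(40 / 100)           # 0.4

# Decimal to fraction: "say it" -- 0.55 is 55 hundredths.
print(Fraction("0.55"))   # 11/20 (55/100 in lowest terms)
print(Fraction("0.557"))  # 557/1000

# Percent to fraction: the percent over 100.
print(Fraction(55, 100))  # 55% = 55/100 = 11/20
```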
Introduction: Circumference of a Circle
This instructable will teach and demonstrate how to find the circumference of a circle.
Step 1: Pi?
If you measure the distance around a circle and divide it by the distance across the circle through the center, you will always come close to a particular value, depending upon the accuracy of your measurement. This value is approximately 3.14159265358979323846... We use the Greek letter π (pronounced Pi) to represent this value. The number goes on forever. However, using computers, mathematicians have been able to calculate the value of π to thousands of places. The point in the middle of the circle is the center; I called it Point A.
Step 2: Diameter
The distance around a circle is called the circumference. The distance across a circle through the center is called the diameter. π is the ratio of the circumference of a circle to the diameter. Thus, for any circle, if you divide the circumference by the diameter, you get a value close to π. The picture shows this relationship as a formula: π = C / D, where C is circumference and D is diameter. You can test this formula at home with a round dinner plate. If you measure the circumference and the diameter of the plate and then divide C by D, your quotient should come close to π. Another way to write this formula is: C = π * D, where * means multiply. This second formula is commonly used in problems where the diameter is given and the circumference is not known (see the examples below).
Step 3: Radius
The radius of a circle is the distance from the center of a circle to any point on the circle. If you place two radii end-to-end in a circle, you would have the same length as one diameter. Thus, the diameter of a circle is twice as long as the radius. This relationship is expressed in the following formula: D = 2 * R (* = multiply), where D is the diameter and R is the radius.
Step 4: Ready?
Circumference, diameter and radii are measured in linear units, such as inches and centimeters. A circle has many different radii and many different diameters, each passing through the center. A real-life example of a radius is the spoke of a bicycle wheel. A 9-inch pizza is an example of a diameter: when one makes the first cut to slice a round pizza pie in half, this cut is the diameter of the pizza. So a 9-inch pizza has a 9-inch diameter. Let's look at some examples of finding the circumference of a circle. In these examples, we will use Pi = 3.14 to simplify our calculations.
Step 5: Simplify It for Me!
OK, here it is simplified.
R (radius) is 1/2 the D (diameter), so D = R * 2.
C (circumference) is the D (diameter) * Pi (3.14), so C = D * Pi.
Step 6: Practice Problem
R = 2 in, so D = 2 * 2 = 4 in. Then C = 4 * 3.14 = 12.56 in.
Step 7: More Practice
Now we'll do the opposite: go from the circumference to the diameter and then the radius.
C = 43.96
D = C divided by 3.14: 43.96 divided by 3.14 = 14, the diameter.
14 divided by 2 = 7, the radius.
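Here is a short Python sketch of Steps 5 through 7; the function names are ours, and math.pi stands in for the rounded 3.14 except where we match the worked examples:

```python
import math

def circumference(diameter: float) -> float:
    return math.pi * diameter          # C = pi * D

def diameter_from_radius(radius: float) -> float:
    return 2 * radius                  # D = 2 * R

# Step 6: R = 2 in -> D = 4 in -> C ~ 12.57 in (12.56 with pi rounded to 3.14)
print(circumference(diameter_from_radius(2)))

# Step 7: work backwards from C = 43.96 using the rounded 3.14
d = 43.96 / 3.14
print(d, d / 2)                        # 14.0 (diameter) and 7.0 (radius)
```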
A picture's worth a thousand words. If you've been honing your Pictionary skills ever since you could pick up a pencil, then geometry has your name written all over it.
Geometry isn't your typical math course. Sure, there's math involved, but it's more about drawings and doodles than numbers and calculations. (Score!) Then again, it's also about using logic and reasoning to back up your arguments. Unfortunately, "Because I said so!" only goes so far. (Boo!)
With the help of hundreds of readings, activities, examples, and problems, our Common Core-aligned Semester A includes
- learning how to work with geometric tools from the undefined notions (how mysterious!) to a compass and straightedge.
- figuring out formulas and terms so you can get the answers you want/need.
- mastering logic and proofs to give us the tools we need to work with other figures (like…more points, lines, and planes).
- discussing triangular congruence and similarity, and even helping them find their centers. (By the time we're through with them, they'll be all SAS and no sass.)
P.S. Geometry is a two-semester course. You're looking at Semester A, but you can check out Semester B here.
Course Breakdown
Unit 1. Intro to Geometry
We'll start off by introducing you to geometry and giving you a brief rundown of the recurring concepts you'll see throughout the course. We'll learn about some of its fundamental building blocks, like points, lines, and angles, and even venture into the third dimension and the Cartesian plane. Complimentary peanuts will be provided.
Unit 2. Reasoning and Proof
Ever wanted to be like the detectives or lawyers on Law & Order? If so, then this unit is for you. We'll learn everything about logic and reasoning, from conditionals and truth tables to syllogism and detachment. And of course, we can't forget the meat and potatoes of geometry: proofs. Pass the gravy, please.
Unit 3. Parallel and Perpendicular Lines
Soon after learning about the properties of parallel lines, we'll shake things up by throwing transversals into the mix. Make sure you hold onto your congruent angles, because these angles and their theorems are the main course of this unit. We'll perform constructions, prove theorems, and even talk about the slopes of these lines on the coordinate plane. And naturally, leaving out perpendicular lines just wouldn't be right.
Unit 4. Congruent Triangles
It's the same ol' same ol'. Well, we think that sameness—or congruence—can be good. (Oreos haven't changed in over 100 years and they're still delicious!) In this unit, we will learn how to prove congruence in triangles, and how to use that congruence to solve problems. We won't help you find a better place to stash your Oreos, though. You'll have to do that on your own.
Unit 5. Relationships Within Triangles
Up to now, most of our lessons have been focused on the tasty outer shell of triangles—properties that deal with their sides and angles, and proving congruencies. That's all good and fine, but the time has come for us to dive into the creamy center—special line segments and centers within the triangle, just dripping with delicious new postulates, corollaries, and theorems for us to enjoy.
Unit 6. Similarity
You probably thought you had escaped triangles, but we aren't finished with them yet. We're going to cover some new triangle theorems. At this point, these will practically seem like second nature. We'll also learn about the scale factor, which is the ratio by which the polygon is reduced or expanded.
Who knew so much information was bundled up in just three sides?
- Course Length: 18 weeks
- Course Number: 310
- Grade Levels: 9, 10, 11
- Course Type: Basic
Astronomers have used the Hubble Space Telescope to observe a monster galaxy that was hidden behind walls of dust. The discovery offers important clues about an early phase of galaxy development, from a time just 3 billion years after the Big Bang. The research appears in the journal Nature.
Galaxy formation theories have suggested that the universe's heaviest galaxies develop from the inside out, forming their star-studded central cores during early cosmic epochs. But scientists had never been able to observe this core construction until now.
"It's a formation process that can't happen anymore," says Erica Nelson, a Yale University graduate student who was lead author of the paper. "The early universe could make these galaxies, but the modern universe can't. It was this hotter, more turbulent place—these were boiling cauldrons, forging stars."
What's so special about 'Sparky'
Researchers saw this formation process under way in a massive galaxy in the early universe. They found a candidate galaxy with an infrared camera on the Hubble Space Telescope that was installed in 2009.
After finding this candidate, team members flew to Hawaii and observed it with the Near Infrared Spectrograph (NIRSPEC) on the world's largest telescope at the W. M. Keck Observatory. The galaxy boasted the most rapidly orbiting gas clouds ever measured, definitive evidence of a massive galaxy in the midst of core formation. Informally, the scientists began calling the galaxy "Sparky."
The T-Rex of galaxies
Using archival far-infrared images from NASA's Spitzer Space Telescope and the ESA/NASA Herschel Space Observatory, the team found that Sparky is producing 300 stars every year. By comparison, the Milky Way produces about 10 stars a year.
"Just like the hot oxygen-rich early Earth could produce dinosaurs, the hot, dense early universe could produce these galaxies," says Pieter van Dokkum, chair of Yale's astronomy department. "As T-Rex was an extreme animal, these are extreme galaxies. They are tightly packed with stars and erupting with star formation."
Hiding behind dust
Sparky's rapid gas movement was the big tip-off to core formation. Yet the same gas required to fuel star formation, along with swirling traces of metals, comes with thick dust. This dust enshrouds the galaxy, hiding it in visible light much the way the Sun can appear red and faint behind the smoke of a forest fire. Astronomers think this barely visible galaxy may be representative of a much larger population of similar objects that are even more obscured by dust. It was only by using infrared analysis and the most powerful telescopes in existence that the Yale team could confirm Sparky's exact nature. The galaxy formed 11 billion years ago.
"I think our discovery settles the question of whether this mode of building galaxies actually happened or not," van Dokkum says. "The question now is, 'How often did this occur?' We suspect there are other galaxies like this that are even fainter in near-infrared wavelengths."
In fact, Sparky may have a lot of company. "We suspect there are 100 times more and we're just missing them," Nelson adds.
Source: Yale University

More than 2 million children live in orphanages and group homes around the world, and a new study offers some encouraging data on how those children fare over time. Children in institutions are as healthy as, and in some ways healthier than, those in family-based care, according to the largest and most geographically and culturally diverse study of its kind. The Positive Outcomes for Orphans (POFO) study, based at the Global Health Institute at Duke University, is following more than 1,300 orphaned and separated children living in institutions and another 1,400 children in family-based care. The study includes three years of data from caregivers and children between the ages of 6 and 15 across study sites in Cambodia, Ethiopia, India, Kenya, and Tanzania. Researchers examined many aspects of child well-being, including physical health, emotional difficulties, growth, learning ability, and memory.

The new research, published in PLOS ONE, challenges views held by children’s rights organizations that institution-based caregiving results in universal negative effects on the development and well-being of children. “Our findings put less significance on the residential setting as a means to account for either positive or negative child well-being over time,” says Kathryn Whetten, professor of public policy and director of the Center for Health Policy and Inequalities Research. “This underscores the need to continue working to tease out the blend of caregiving characteristics that lead to the best child outcomes within each setting. We believe returning children from institutions to biological families may not result in the best outcomes, at least without significant intervention for the biological family, including supervision and follow-up.” Ultimately, the evidence suggests a need to “focus on improving the quality of caregiving in family settings and group homes, the well-being of caregivers, and improving communities.” The National Institute of Child Health and Human Development supported the study.

Source: Duke University

The current Ebola outbreak sweeping through West Africa likely began at the funeral of a healer in Sierra Leone. “The funeral was for an herbalist or traditional medicine practitioner in Koindu, a town in Sierra Leone,” says Robert Garry, professor of microbiology and immunology at Tulane University. “The herbalist had treated several patients from neighboring Guinea, one or more of whom were apparently infected with Ebola virus.” Scientists were able to sequence 99 Ebola virus genomes using blood samples from 78 patients, creating a “real-time” record of how the virus rapidly mutated as the outbreak spread. The analysis, published in the journal Science, shows that the West African Ebola strain is distantly related to a strain that has been circulating in central Africa for decades and likely migrated to the region in 2004. Scientists found 300 mutations that differentiate the viral genomes involved in this outbreak from those of previous outbreaks.
“This is the first study to document deep viral genomics during a human outbreak of a hemorrhagic fever like Ebola,” Garry says. “We get a close look at not only how the virus is evolving as it passes from one person to the next, but also how the virus changes as it replicates within a person.” The results can help researchers as they work to develop antibody-based treatments using the genetic profile of the virus. They also help improve the accuracy of diagnostic tests. “The diagnostics used in the field are polymerase chain reaction (PCR) based,” Garry says. “PCR depends on finding precise matches between a synthetic primer and the viral genome. If the virus genome mutates, the PCR assay may not work, or not work as well.”
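Garry’s point about primers is easy to see in miniature. The toy sketch below counts mismatches between a primer and its binding site before and after mutation; the sequences and names are invented for illustration and are not real Ebola sequences or an actual assay design.

```python
# Toy illustration of why genome mutations can weaken a PCR assay.
# The primer and genome fragments are invented examples, not real
# Ebola sequences; real primer design is far more involved.

def primer_mismatches(primer: str, target: str) -> int:
    """Count positions where the primer fails to pair with the target site."""
    return sum(1 for p, t in zip(primer, target) if p != t)

primer   = "ATGGCGTTCAGACT"   # primer designed against the original site
original = "ATGGCGTTCAGACT"   # hypothetical binding site, earlier lineage
mutated  = "ATGGCATTCAGGCT"   # the same site after two point mutations

for name, site in [("original", original), ("mutated", mutated)]:
    n = primer_mismatches(primer, site)
    # Rule of thumb: even a few mismatches, especially near the primer's
    # 3' end, can sharply reduce amplification efficiency.
    print(f"{name} genome: {n} mismatch(es)")
```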
Coauthors of the study are from Harvard University and the Broad Institute of MIT, along with researchers in Sierra Leone.

Source: Tulane University

Women who are victims of sexual assault while in college are three times more likely than their peers to be assaulted again within a year, a new study reports. Researchers followed nearly 1,000 college women, most age 18 to 21, over a five-year period, studying their drinking habits and experiences of severe physical and sexual assault. Severe physical victimization includes assaults with or without a weapon. Severe sexual victimization includes rape and attempted rape, including incapacitated rape, where a victim is too intoxicated from drugs or alcohol to provide consent. “Initially, we were attempting to see if victimization increased drinking, and if drinking then increased future risk,” says Kathleen Parks, senior research scientist at the University at Buffalo. “Instead, we found that the biggest predictor of future victimization is not drinking, but past victimization.” The study provided some good news, however, Parks says. “We found that severe sexual victimization decreased across the years in college.”

Drinking to cope

In light of the recent report from the White House Task Force to Protect Students from Sexual Assault, the study suggests that campuses need to be aware of the increased risk of future victimization for women who have experienced sexual assault. Women who were victims showed an increase in drinking in the year following their assaults, perhaps as a coping mechanism. “Our findings show that women who have been victims may need to be followed for many months to a year to see if their drinking increases,” Parks says. Parks’ previous research has shown that freshman college women have a much higher likelihood of victimization if they engage in binge drinking. The study was published online in the journal Psychology of Addictive Behaviors. The National Institute on Alcohol Abuse and Alcoholism provided funding.

Source: University at Buffalo

Seven out of ten Americans say the recent recession’s impact will be permanent—that’s up from five out of ten in 2009, when the slump officially ended. Among the other key findings of the John J. Heldrich Center for Workforce Development’s latest Work Trends report:

- Despite sustained job growth and lower levels of unemployment, most Americans do not think the economy has improved in the last year or that it will in the next.
- Just one in six Americans believes that job opportunities for the next generation will be better than for theirs; five years ago, four in ten held that view.
- Roughly four in five Americans have little or no confidence that the federal government will make progress on the nation’s most important problems over the next year.

Much of the pessimism is rooted in direct experience, says Carl Van Horn, coauthor of the report and Heldrich Center professor and director. “Fully one-quarter of the public says there has been a major decline in their quality of life owing to the recession, and 42 percent say they have less in salary and savings than when the recession began,” Van Horn says. “Despite five years of recovery, sustained job growth and reductions in the number of unemployed workers, Americans are not convinced the economy is improving.” He adds that only one in three thinks the US economy has gotten better in the last year, and one quarter thinks it will improve next year. The survey took place between July 24 and August 3 with a nationally representative sample of 1,153 Americans.

The analysis summarizes the effects of the Great Recession by classifying Americans into one of five categories based on how much impact the recession had on their quality of life and whether the change was temporary or permanent. It reveals that:

- 16 percent of the public, or 38 million people, were “devastated” because they experienced a “major, permanent” change in the quality of their life.
- 19 percent, or 46 million, were “downsized” due to “permanent but minor” changes in standards of living.
- 10 percent, or 24 million, were “set back,” experiencing “major, but temporary” changes in their quality of life.
- 22 percent, or 53 million, were “troubled” by the recession and endured only a “minor and temporary” change.
- Only one in three of the nation’s 240 million adults reported that they were completely “unscathed” by the recession.
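The five groups amount to a simple two-question decision rule: how big was the change in quality of life, and was it lasting? The sketch below restates that taxonomy in code; the function and value names are ours for illustration, not the Heldrich Center’s actual survey coding.

```python
# The Work Trends report's five categories, restated as a decision
# rule over two survey answers. Names are illustrative; the survey's
# actual coding scheme is not published in this article.

def recession_category(impact: str, duration: str) -> str:
    """impact: 'none', 'minor', or 'major'; duration: 'temporary' or 'permanent'."""
    if impact == "none":
        return "unscathed"
    return {
        ("major", "permanent"): "devastated",
        ("minor", "permanent"): "downsized",
        ("major", "temporary"): "set back",
        ("minor", "temporary"): "troubled",
    }[(impact, duration)]

print(recession_category("major", "permanent"))  # devastated
print(recession_category("minor", "temporary"))  # troubled
```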
“Looking at the aftermath of the recession, it is clear that the American landscape has been significantly rearranged,” says Professor Cliff Zukin, co-director of the Work Trends surveys with Van Horn. “With the passage of time, the public has become convinced that they are at a new normal of a lower, poorer quality of life. The human cost is truly staggering.”

Describing the American worker

The public paints an extremely negative picture of the American worker as unhappy, underpaid, highly stressed, and insecure about his or her job. Asked to describe the typical American worker, using a list of a dozen words or phrases, just 14 percent checked off happy at work and only 18 percent believe they are well paid. Two-thirds say that American workers are “not secure in their jobs” and “highly stressed.” Just one in five says the average American worker is well educated or innovative; just one in three checked off ambitious or highly skilled. Perhaps most surprising, just one in three checked off that the average American worker is “better than workers in other countries.”

Financial stress

One of the reasons the public does not see the economy as having gotten better is that many remain under tremendous financial stress. Six in ten Americans describe their financial condition negatively, as only fair (40 percent) or poor (19 percent). One-third report being in good shape; just seven percent describe themselves as being in excellent financial health. Many report significant losses in the Great Recession. Just 30 percent say they have more in salary and savings than they did before the recession started, and less than a third have the same, leaving 42 percent who report having less today than five years ago.

Americans view the recession as causing fundamental and lasting changes in a number of areas of economic and social life. Three in five believe the ability of young people to afford college will not return to prerecession levels, which is significant given the role that education has historically played as a key to upward mobility. Other fundamental areas where a large segment of the public sees permanent changes are: job security (53 percent), the elderly having to find part-time work after retiring (51 percent), and workers having to take jobs below their skill level (44 percent).

Lots of pessimism

Americans are also pessimistic about the future. Only a quarter think economic conditions in the US will get better in the next year, and just 40 percent believe their family’s finances will get better over the next year. Consequently, most do not see themselves getting back to where they were any time soon. “Despite nearly five years of job growth and declining unemployment levels, Americans remain skeptical that the economy has improved and doubt that it will improve any time soon,” says Van Horn. “The slow, uneven, and painful recovery left Americans deeply pessimistic about the economy, their personal finances, and prospects for the next generation.” The report found the public sharply critical of Washington policymakers. More disapprove than approve of the job President Obama is doing, by a margin of 54 percent to 46 percent. Even fewer approve of the job Congress is doing—14 percent. A plurality of 43 percent says they trust neither the president nor Congress to handle the economy. Finally, should Republicans win control of Congress in November, only 26 percent say this will help lower the unemployment rate. Thirty percent say this would make unemployment worse, while 44 percent say it would make no difference.

Researchers are using a living fish, called Polypterus, to help show what might have happened when fish first tried to walk out of the water. Polypterus is an African fish that can breathe air, “walk” on land, and looks much like those ancient fishes that evolved into tetrapods. About 400 million years ago, a group of fish began exploring land and evolved into tetrapods—today’s amphibians, reptiles, birds, and mammals. But just how these ancient fish used their fishy bodies and fins in a terrestrial environment, and what evolutionary processes were at play, remain scientific mysteries. The team of researchers raised juvenile Polypterus on land for nearly a year, with the aim of revealing how these “terrestrialized” fish looked and moved differently. “Stressful environmental conditions can often reveal otherwise cryptic anatomical and behavioral variation, a form of developmental plasticity,” says Emily Standen, a former McGill University postdoctoral student who led the project, now at the University of Ottawa. “We wanted to use this mechanism to see what new anatomies and behaviors we could trigger in these fish and see if they match what we know of the fossil record.”

On their fins

As reported in Nature, the fish showed significant anatomical and behavioral changes.
The terrestrialized fish walked more effectively by placing their fins closer to their bodies, lifted their heads higher, and kept their fins from slipping as much as fish that were raised in water. “Anatomically, their pectoral skeleton changed to become more elongate with stronger attachments across their chest, possibly to increase support during walking, and a reduced contact with the skull to potentially allow greater head/neck motion,” says Trina Du, a McGill PhD student and study collaborator. “Because many of the anatomical changes mirror the fossil record, we can hypothesize that the behavioral changes we see also reflect what may have occurred when fossil fish first walked with their fins on land,” says Hans Larsson, Canada Research Chair in Macroevolution at McGill. The terrestrialized Polypterus experiment is unique and provides new ideas for how fossil fishes may have used their fins in a terrestrial environment and what evolutionary processes were at play. Larsson adds, “This is the first example we know of that demonstrates developmental plasticity may have facilitated a large-scale evolutionary transition, by first accessing new anatomies and behaviors that could later be genetically fixed by natural selection.” The Canada Research Chairs Program, the Natural Sciences and Engineering Research Council of Canada (NSERC), and a Tomlinson Postdoctoral Fellowship supported the work.

Source: McGill University

While smoke from electronic cigarettes may not contain cancer-causing agents, it does have higher levels of some toxic metals than traditional cigarette smoke. Electronic cigarette smoke contains the toxic element chromium, which is absent from traditional cigarettes, as well as nickel at levels four times higher than in normal cigarettes. Several other toxic metals, such as lead and zinc, were also found in second-hand e-cigarette smoke—though in concentrations lower than in normal cigarette smoke. “Our results demonstrate that overall electronic cigarettes seem to be less harmful than regular cigarettes, but their elevated content of toxic metals such as nickel and chromium do raise concerns,” says Constantinos Sioutas, professor of engineering at the University of Southern California. Researchers began the study to quantify the level of exposure to harmful organics and metals in second-hand e-cigarette smoke, in hopes of providing insight for the regulatory authorities.

Smoke-filled rooms

“The metal particles likely come from the cartridge of the e-cigarette devices themselves—which opens up the possibility that better manufacturing standards for the devices could reduce the quantity of metals in the smoke,” says Arian Saffari, a PhD student and lead author of the paper, which is published in the journal Environmental Science: Processes & Impacts. “Studies of this kind are necessary for implementing effective regulatory measures. E-cigarettes are so new, there just isn’t much research available on them yet.” Researchers conducted all of the experiments in offices and rooms.
While volunteer subjects were smoking regular cigarettes and e-cigarettes, the researchers collected particles in the indoor air and studied the chemical content and sources of the samples. “Offices and rooms—not laboratories—are the environments where you’re likely to be exposed to second-hand e-cigarette smoke, so we did our testing there to better simulate real-life exposure conditions,” Saffari says. The researchers compared the smoke from a common traditional cigarette brand with smoke from one brand of e-cigarette, an Elips Serie model. The results could vary based on which types of cigarettes and e-cigarettes are tested, the researchers note. Researchers from Cornell University, the University of Wisconsin-Madison, and LARS Laboratorio and the Fondazione IRCCS Instituto Nazionale dei Tumori in Milan, Italy, collaborated on the study. The Fondazione IRCCS Instituto Nazionale dei Tumori provided funding.

Source: University of Southern California

When wine fermentation gets “stuck,” the yeast turning grape sugar into alcohol and carbon dioxide shut down too soon—and bacteria that eat the leftover sugar spoil the wine. Researchers have discovered a biochemical communication system behind this chronic problem. Working through a prion—an abnormally shaped protein that can reproduce itself—the system enables bacteria in fermenting wine to switch yeast from sugar to other food sources without altering the yeast’s DNA. “The discovery of this process really gives us a clue to how stuck fermentations can be avoided,” says yeast geneticist Linda Bisson, a professor in the department of viticulture and enology at the University of California, Davis. “Our goal now is to find yeast strains that essentially ignore the signal initiated by the bacteria and do not form the prion, but instead power on through the fermentation.” She suggests that the discovery of this biochemical mechanism, reported in the journal Cell, may also have implications for better understanding metabolic diseases, such as type 2 diabetes, in humans.

Bacteria ‘jump-start’ yeast

Biologists have known for years that an ancient biological circuit, based in the membranes of yeast cells, blocks yeast from using other carbon sources when the sugar glucose is present. This circuit, known as “glucose repression,” is especially strong in the yeast species Saccharomyces cerevisiae, enabling people to use that yeast for practical fermentation processes in winemaking, brewing, and bread-making, because it causes such efficient processing of sugar. In this study, the researchers found that the glucose repression circuit sometimes gets interrupted when bacteria jump-start the replication of the prions in membranes of yeast cells. The interference of the prions causes the yeast to process carbon sources other than glucose and become less effective in metabolizing sugar, dramatically slowing down the fermentation until it, in effect, becomes “stuck.” “This type of prion-based inheritance is useful to organisms when they need to adapt to environmental conditions but not necessarily permanently,” Bisson says.
“In this case, the heritable changes triggered by the prions enable the yeast to also change back to their initial mode of operation if environmental conditions should change again.” In this study, the researchers demonstrate that the process leading to a stuck fermentation benefits both the bacteria and the yeast. As sugar metabolism slows down, conditions in the fermenting wine become more conducive to bacterial growth, and the yeast benefit by gaining the ability to metabolize not only glucose but other carbon sources as well—maintaining and extending their lifespan.

Winemaker options

Now that this communication mechanism between the bacteria and yeast is more clearly understood, winemakers should be better able to avoid stuck fermentations. “Winemakers may want to alter the levels of sulfur dioxide used when pressing or crushing the grapes, in order to knock out bacteria that can trigger the processes that we now know can lead to a stuck fermentation,” Bisson says. “They also can be careful about blending grapes from vineyards known to have certain bacterial strains, or they could add yeast strains that have the ability to overpower these vineyard bacteria.” Additional researchers contributed to the study from the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts; UC Davis; Massachusetts Institute of Technology; and Harvard University. The G. Harold and Leila Y. Mathers Foundation, the Howard Hughes Medical Institute, and the National Institutes of Health provided funding for the work.

Source: UC Davis

Tiny diamonds invisible to the human eye—but confirmed by microscope—add weight to a theory first proposed in 2007 that a comet that exploded over North America sparked catastrophic climate change 12,800 years ago. A new paper published in the Journal of Geology reports the definitive presence of nanodiamonds at some 32 sites in 11 countries on three continents, in layers of darkened soil at the Earth’s Younger Dryas boundary. The boundary layer is widespread. The nanodiamonds, which often form during large impact events, are abundant along with cosmic impact spherules, high-temperature melt-glass, fullerenes, grape-like clusters of soot, charcoal, carbon spherules, glasslike carbon, helium-3, iridium, osmium, platinum, nickel, and cobalt. The combination of components is similar to that found in soils connected with the 1908 in-air explosion of a comet over Siberia and in the Cretaceous-Tertiary Boundary (KTB) layer that formed 65 million years ago, when a comet or asteroid struck off the coast of Mexico and wiped out the dinosaurs worldwide.

Comet airbursts

In the Oct. 9, 2007, issue of the Proceedings of the National Academy of Sciences, researchers proposed that a cosmic impact event, possibly multiple airbursts of comets, set off a 1,300-year-long cold spell known as the Younger Dryas, fragmented the prehistoric Clovis culture, and led to widespread extinctions across North America. Douglas Kennett, a coauthor of the new paper and now at Penn State, was a member of the original study team. In that paper and in a series of subsequent studies, reports of nanodiamond-rich soils were documented at numerous sites. However, numerous critics disputed the findings, holding to a long-running theory that over-hunting sparked the extinctions and that the suspected nanodiamonds had been formed by wildfires, volcanism, or occasional meteoritic debris, rather than by a cosmic event.
The glassy and metallic materials in the YDB layers would have formed at temperatures in excess of 2,200 degrees Celsius and could not have resulted from the alternative scenarios, says coauthor James Kennett, professor emeritus at the University of California, Santa Barbara, in a news release. He also was on the team that originally proposed a comet-based event.

Nanodiamond creation

In the new paper, researchers slightly revised the date of the theorized cosmic event and cited six examples of independent research that have found consistent peaks in the creation of the nanodiamonds that match their hypothesis. “The evidence presented in this paper rejects the alternate hypotheses and settles the debate about the existence of the nanodiamonds,” says the paper’s corresponding author, Allen West of GeoScience Consulting of Dewey, Arizona. “We provide the first comprehensive review of the state of the debate and of YDB nanodiamonds deposited across three continents.” West worked in close consultation with researchers at the various labs that conducted the independent testing, including coauthor Joshua J. Razink, operator and instrument manager since 2011 of the University of Oregon’s state-of-the-art high-resolution transmission electron microscope (HR-TEM) in the Center for Advanced Materials Characterization in Oregon (CAMCOR).

Microscope ‘sees’ nanodiamonds

Razink was provided with samples previously cited in many of the earlier studies, as well as untested soil samples delivered from multiple new sites. The samples were placed onto grids and analyzed thoroughly. “These diamonds are incredibly small, on the order of a few nanometers, and are invisible to the human eye and even to an optical microscope,” Razink says. “For reference, if you took a meter stick and cut it into one billion pieces, each of those pieces would be one nanometer. The only way to really get definitive characterization that these are diamonds is to use tools like the transmission electron microscope. It helps us rule out the possibility that the samples are graphene or copper. Our findings say these samples are nanodiamonds.” In addition to the HR-TEM work done at the UO, researchers also used standard TEM, electron energy loss spectroscopy (EELS), energy-dispersive X-ray spectroscopy (EDS), selected area diffraction (SAD), fast Fourier transform (FFT) algorithms, and energy-filtered transmission electron microscopy (EFTEM). “The chemical processing methods described in the paper,” Razink says, “lay out in great detail the methodology that one needs to go through in order to prepare their samples and identify these diamonds.” Jon M. Erlandson, executive director of the Museum of Natural and Cultural History, is a coauthor of both the new paper and the one that first proposed the cosmic event. Other coauthors of the new paper are from DePaul University; the University of California, Los Angeles; the National Institute for Materials Science in Tsukuba, Japan; SRI International in Menlo Park, California; Northern Arizona University; Universidad Michoacana de San Nicolás de Hidalgo in Mexico; the US Geological Survey; the University of South Carolina; the University of Cincinnati; Kimstar Research; Pennsylvania State University; the University of Aarhus in Denmark; an exploration geologist in the Netherlands; Rochester Institute of Technology; Berkeley National Laboratory; the Universitat de Valencia in Spain; and J. F. Jorda Pardo of the Universidad Nacional de Educacion a Distancia in Spain.
The National Institute of Environmental Health Sciences, the US Department of Energy, and the National Science Foundation provided funding.

Source: University of Oregon

To listen to someone carefully, we first stop talking and then stop moving entirely. This strategy helps us hear better because it cuts unwanted sounds generated by our own movements. This interplay between movement and hearing also has a counterpart deep in the brain. Indirect evidence has long suggested that the brain’s motor cortex, which controls movement, somehow influences the auditory cortex, which gives rise to our conscious perception of sound. A new study, appearing online in Nature, reveals exactly how the motor cortex, seemingly in anticipation of movement, can tweak the volume control in the auditory cortex. The new lab methods allowed the group to “get beyond a century’s worth of very powerful but largely correlative observations, and develop a new, and really a harder, causality-driven view of how the brain works,” says the study’s senior author Richard Mooney, a professor of neurobiology at Duke University School of Medicine and a member of the Duke Institute for Brain Sciences.

The findings contribute to the basic knowledge of how communication between the brain’s motor and auditory cortexes might affect hearing during speech or musical performance. Disruptions to the same circuitry may give rise to auditory hallucinations in people with schizophrenia. In 2013, researchers led by Mooney first characterized the connections between motor and auditory areas in mouse brain slices, as well as in anesthetized mice. The new study answers the critical question of how those connections operate in an awake, moving mouse. “This is a major step forward in that we’ve now interrogated the system in an animal that’s freely behaving,” says David Schneider, a postdoctoral associate in Mooney’s lab. Mooney suspects that the motor cortex learns how to mute responses in the auditory cortex to sounds that are expected to arise from one’s own movements, while heightening sensitivity to other, unexpected sounds. The group is testing this idea. “Our first step will be to start making more realistic situations where the animal needs to ignore the sounds that its movements are making in order to detect things that are happening in the world,” Schneider says.

The brain’s ‘game of telephone’

In the latest study, the team recorded electrical activity of individual neurons in the brain’s auditory cortex. Whenever the mice moved—walking, grooming, or making high-pitched squeaks—neurons in their auditory cortex were dampened in response to tones played to the animals, compared to when they were at rest. To find out whether movement was directly influencing the auditory cortex, researchers conducted a series of experiments in awake animals using optogenetics, a powerful method that uses light to control the activity of select populations of neurons that have been genetically sensitized to light. Like the game of telephone, sounds that enter the ear pass through six or more relays in the brain before reaching the auditory cortex.
“Optogenetics can be used to activate a specific relay in the network, in this case the penultimate node that relays signals to the auditory cortex,” Mooney says. About half of the suppression during movement was found to originate within the auditory cortex itself. “That says a lot of modulation is going on in the auditory cortex, and not just at earlier relays in the auditory system,” Mooney says. More specifically, the team found that movement stimulates inhibitory neurons that in turn suppress the response of the auditory cortex to tones. The researchers then wondered what turns on the inhibitory neurons. The suspects were many. “The auditory cortex is like this giant switching station where all these different inputs come through and say, ‘Okay, I want to have access to these interneurons,’” Mooney says. “The question we wanted to answer is who gets access to them during movement?”

Exciting results

The team knew from previous experiments that neuronal projections from the secondary motor cortex (M2) modulate the auditory cortex. But to isolate M2’s relative contribution—something not possible with traditional electrophysiology—the researchers again used optogenetics, this time to switch on and off M2’s inputs to the inhibitory neurons. Turning on M2 inputs reproduced a sense of movement in the auditory cortex, even in mice that were resting, the group found. “We were sending a ‘Hey, I’m moving’ signal to the auditory cortex,” Schneider says. Then the effect of playing a tone on the auditory cortex was much the same as if the animal had actually been moving—a result that confirmed the importance of M2 in modulating the auditory cortex. On the other hand, turning off M2 simulated rest in the auditory cortex, even when the animals were still moving. “I couldn’t contain my excitement when we first saw that result,” says Anders Nelson, a neurobiology graduate student in Mooney’s group. The Helen Hay Whitney Foundation, the Holland-Trice Graduate Fellowship in Brain Sciences, and the National Institutes of Health supported the work.

Source: Duke University

Researchers have developed a vaginal suppository that, loaded with the antiviral drug Tenofovir, could help prevent the transmission of HIV and AIDS. The semi-soft suppository is made from the seaweed-derived food ingredient carrageenan. Women could use this method to protect against the spread of sexually transmitted infections during unprotected heterosexual intercourse, the researchers say. With more than 34 million people worldwide living with HIV, microbicides—compounds that can be applied vaginally or rectally—offer a way to slow the spread of the virus, notes lead researcher Toral Zaveri, a postdoctoral scholar in the food science department at Penn State. Containing agents known to prevent transmission of HIV and other viruses, microbicides can be inserted into the vagina prior to intercourse as a gel, cream, foam, sponge, suppository, or film. Zaveri points out that carrageenan was selected over gelatin (the traditional choice for semi-soft suppositories) because it offers a number of important advantages. Because carrageenan is plant-based, it is acceptable to vegetarians, there is no risk of animal-acquired infections, and it avoids religious objections.
Also, it is more stable than gelatin at the higher ambient temperatures common in tropical regions of the world. The suppositories hold particular promise for places such as regions of Africa where HIV is widespread and women often are not in control of sexual situations, according to Zaveri. “Condoms have been successful in preventing transmission of HIV and other sexually transmitted infections. However, effectiveness depends on correct and consistent use by the male partner,” she says. “Due to socioeconomic and gender inequities, women in some countries and cultures are not always in a position to negotiate regular condom use, so a drug-dispersing suppository can protect against transmission of HIV and other sexually transmitted infections during heterosexual intercourse with a partner whose infection status may or may not be known to the woman.”

Testing the options

As part of the research, Zaveri, who earned her doctorate in biomedical engineering at the University of Florida, conducted extensive sensory-perception testing to assess the acceptability of the suppositories among women. Women participating in the study at the Sensory Evaluation Center in Penn State’s Department of Food Science were presented with suppositories—without the drug—in a variety of sizes, shapes, and textures. They indicated their preferences and rated the suppositories for willingness to try and imagined ease of insertion. All of the initial evaluations were done in the hand as part of this preclinical development effort. Many factors go into making choices, Zaveri explains, such as vaginal products women may have used previously, as well as their sexual and cultural practices. Understanding women’s perception of the suppository and the reasons behind their choices is a critical step in the development of the suppository as a vaginal drug-delivery system. Zaveri also studied the release of Tenofovir from the suppositories in a simulated vaginal environment to ensure that the drug will be released once inserted in the body, even in the presence of semen. “Many people work on drug delivery and use different methods to create drug-delivery products, but not many focus on the end-user aspect of this,” she says. “Obviously, the product can be effective only if it is acceptable to women and they use it. We have gone a step farther with this study to validate the acceptability of our suppositories among women—and that’s critical.”

Why a food additive?

Zaveri notes that some may be surprised that biomedical research is done in the food science department. But she says it seemed natural given her collaboration on the study with Gregory Ziegler, who has expertise in biopolymers such as carrageenan, and John Hayes, who is known for his proficiency in sensory-perception research. “The biomedical use of a food additive—a material widely used in the food industry for its gelling, thickening, and stabilizing properties—as a medium for a drug-delivery system is a novel idea, but we were playing to all of our strengths on the team,” she says. Previous microbicides were generally solids or liquids. “We exploited the intermediate design space of viscoelastic materials known as gels,” says Ziegler, “thus avoiding some of the drawbacks of these other dosage forms.” The real beauty of the concept, Zaveri suggests, is its potential for relatively quick commercialization, because the material used to formulate the suppositories, carrageenan, is already approved, and safety studies have been done in previous microbicide clinical trials.
“Currently the suppositories are prepared in the lab by simple molding,” she says. “However, the research team is investigating methods for large-scale production and packaging—key factors to be considered for product commercialization. Considering the safety, efficacy, and user-acceptability tests that we are doing, it easily is possible for a company to take this product and run with it.” A National Institutes of Health grant to Hayes and Ziegler through the National Institute of Allergy and Infectious Diseases supported this work, which appears in PLOS ONE, Antiviral Research, and, most recently, the July and September issues of Pharmaceutics.

Source: Penn State

Scientists have discovered a fundamental constraint in the brain that may explain why it’s easier to learn a skill that’s related to an ability you already have. For example, a trained pianist can learn a new melody more easily than he or she can learn how to hit a tennis serve. As reported in Nature, the researchers found for the first time that there are limitations on how adaptable the brain is during learning, and that these restrictions are a key determinant of whether a new skill will be easy or difficult to learn. Understanding how the brain’s activity can be “flexed” during learning could eventually be used to develop better treatments for stroke and other brain injuries. Lead author Patrick T. Sadtler, a PhD candidate in the University of Pittsburgh department of bioengineering, compared the study’s findings to cooking. “Suppose you have flour, sugar, baking soda, eggs, salt, and milk. You can combine them to make different items—bread, pancakes, and cookies—but it would be difficult to make hamburger patties with the existing ingredients,” Sadtler says. “We found that the brain works in a similar way during learning. We found that subjects were able to more readily recombine familiar activity patterns in new ways relative to creating entirely novel patterns.”

Moving the cursor

For the study, the research team trained animals (rhesus macaques) to use a brain-computer interface (BCI), similar to ones that have shown recent promise in clinical trials for assisting quadriplegics and amputees. “This evolving technology is a powerful tool for brain research,” says Daofen Chen, program director at the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health. “It helps scientists study the dynamics of brain circuits that may explain the neural basis of learning.” The researchers recorded neural activity in the subjects’ motor cortex and directed the recordings into a computer, which translated the activity into movement of a cursor on the computer screen. This technique allowed the team to specify the activity patterns that would move the cursor. The test subjects’ goal was to move the cursor to targets on the screen, which required them to generate the patterns of neural activity that the experimenters had requested. If the subjects could move the cursor well, that meant that they had learned to generate the neural activity pattern that the researchers had specified.
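A brain-computer interface of this kind maps a vector of neural firing rates to cursor motion through a readout chosen by the experimenters. The sketch below shows the simplest possible version, a fixed linear readout; the decoder matrix and firing rates are placeholders, since the study’s actual decoding method is not described in this article.

```python
import numpy as np

# Minimal sketch of a BCI readout: firing rates from recorded neurons
# are mapped to 2D cursor velocity through a decoder matrix that the
# experimenters choose. The matrix and rates below are placeholders,
# not the study's actual decoder.

rng = np.random.default_rng(0)
n_neurons = 8

# Decoder: each neuron "votes" for a direction in the x-y plane.
decoder = rng.normal(size=(2, n_neurons))

def cursor_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Translate one time bin of firing rates into cursor velocity."""
    return decoder @ firing_rates

# To hit a target, the animal must produce activity whose decoded
# velocity points the right way, i.e., a specific activity pattern.
rates = rng.poisson(lam=5.0, size=n_neurons).astype(float)
print("decoded velocity:", cursor_velocity(rates))
```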
The results showed that the subjects learned to generate some neural activity patterns more easily than others, since they only sometimes achieved accurate cursor movements. The harder-to-learn patterns were different from any of the pre-existing patterns, whereas the easier-to-learn patterns were combinations of pre-existing brain patterns. Because the existing brain patterns likely reflect how the neurons are interconnected, the results suggest that the connectivity among neurons shapes learning.

Only so flexible

“We wanted to study how the brain changes its activity when you learn, and also how its activity cannot change. Cognitive flexibility has a limit—and we wanted to find out what that limit looks like in terms of neurons,” says Aaron P. Batista, assistant professor of bioengineering at the University of Pittsburgh. Byron M. Yu, assistant professor of electrical and computer engineering and biomedical engineering at Carnegie Mellon, believes this work demonstrates the utility of BCIs for basic scientific studies that will eventually impact people’s lives. “These findings could be the basis for novel rehabilitation procedures for the many neural disorders that are characterized by improper neural activity,” Yu says. “Restoring function might require a person to generate a new pattern of neural activity. We could use techniques similar to what were used in this study to coach patients to generate proper neural activity.” The researchers are part of the Center for the Neural Basis of Cognition (CNBC), a joint program between Carnegie Mellon University and the University of Pittsburgh. Additional researchers from the University of Pittsburgh, Carnegie Mellon, Stanford University, and the Palo Alto Medical Foundation contributed to the work. The NIH, the National Science Foundation, and the Burroughs Wellcome Fund funded the research.

Source: Carnegie Mellon

Lowering a patient’s internal eye pressure is currently the only way to treat glaucoma. A tiny eye implant paired with a smartphone could help doctors measure and lower that pressure. For the 2.2 million Americans battling glaucoma, the main course of action for staving off blindness involves weekly visits to eye specialists who monitor—and control—increasing pressure within the eye. Now, a tiny eye implant could enable patients to take more frequent readings from the comfort of home. Daily or hourly measurements of eye pressure could help doctors tailor more effective treatment plans. Intraocular pressure (IOP) is the main risk factor associated with glaucoma, which is characterized by a continuous loss of specific retina cells and degradation of the optic nerve fiber. The mechanism linking IOP and the damage is not clear, but in most patients IOP levels correlate with the rate of damage.

A steady read

Reducing IOP to normal or below-normal levels is currently the only treatment available for glaucoma. This requires repeated measurements of the patient’s IOP until the levels stabilize. The trouble is that individual readings do not always tell the whole story. Like blood pressure, IOP can vary day to day and hour to hour; other medications, body posture, or even a necktie that is knotted too tightly can affect it. If patients are tested on a low-IOP day, the test can give a false impression of the severity of the disease and affect their treatment in a way that can ultimately lead to worse vision.
The new implant was developed as a collaboration between Stephen Quake, a professor of bioengineering and of applied physics at Stanford University, and ophthalmologist Yossi Mandel of Bar-Ilan University in Israel. It consists of a small tube—one end is open to the fluids that fill the eye; the other end is capped with a small bulb filled with gas. As the IOP increases, intraocular fluid is pushed into the tube, and the gas pushes back against this flow.

Phone app

As IOP fluctuates, the meniscus—the barrier between the fluid and the gas—moves back and forth in the tube. Patients could use a custom smartphone app or a wearable technology, such as Google Glass, to snap a photo of the instrument at any time, providing a wealth of data that could steer treatment. For instance, in one previous study, researchers found that 24-hour IOP monitoring resulted in a change in treatment in up to 80 percent of patients. The implant is currently designed to fit inside a standard intraocular lens prosthetic, which many glaucoma patients receive when they have cataract surgery, but the scientists are investigating ways to implant it on its own. The study appears in the current issue of Nature Medicine.

Preventing blindness

“For me, the charm of this is the simplicity of the device,” Quake says. “Glaucoma is a substantial issue in human health. It’s critical to catch things before they go off the rails, because once you go off, you can go blind. If patients could monitor themselves frequently, you might see an improvement in treatments.” Remarkably, the implant won’t distort vision. When subjected to the vision test used by the US Air Force, the device caused nearly no optical distortion, the researchers say. Before they can test the device in humans, however, the scientists say they need to re-engineer it with materials that will increase its life inside the human eye. Because of the implant’s simple design, they expect this will be relatively straightforward. “I believe that only a few years are needed before clinical trials can be conducted,” says Mandel, head of the Ophthalmic Science and Engineering Laboratory at Bar-Ilan University, who collaborated on developing the implant.

Source: Stanford University
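Reading pressure from a gas-liquid meniscus follows from little more than Boyle’s law: if eye fluid compresses a trapped gas pocket, the gas volume, and therefore the meniscus position, encodes the pressure. The sketch below illustrates that physics under an ideal-gas assumption; the tube geometry and pressures are invented values, not the actual device’s calibration.

```python
# Sketch of the physics behind the implant's readout, assuming the
# trapped gas behaves ideally (Boyle's law: P * V is constant at a
# fixed temperature). Geometry and pressures are illustrative values,
# not those of the actual device.

GAS_LENGTH_AT_REF_MM = 5.0   # hypothetical gas column length at reference
P_REF_MMHG = 760.0           # reference (atmospheric) pressure

def meniscus_shift_mm(iop_mmhg: float) -> float:
    """Distance the meniscus advances into the tube at a given IOP."""
    p_total = P_REF_MMHG + iop_mmhg
    # Boyle's law: gas column length scales inversely with pressure.
    gas_length = GAS_LENGTH_AT_REF_MM * P_REF_MMHG / p_total
    return GAS_LENGTH_AT_REF_MM - gas_length

for iop in (10, 20, 30):  # normal to elevated IOP, in mmHg
    print(f"IOP {iop:2d} mmHg -> meniscus shift {meniscus_shift_mm(iop):.3f} mm")
```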
A rock slab that contains the fossils of 24 very young dinosaurs and one older one suggests a caretaker was watching the group of hatchlings, scientists say. Amateur paleontologists discovered the fossils, which are about 120 million years old, in the Lujiatun beds of the Yixian Formation in northeastern China’s Liaoning Province. Though the entire specimen is only about two feet across, it contains fossils from 25 creatures, all of the species Psittacosaurus lujiatunensis. Psittacosaurs were plant eaters and are among the most abundant dinosaurs yet discovered. The specimen had previously been described only briefly, in a one-page paper in 2004. The people who found and extracted the fossils did not record their exact original location, which hampers the investigation to some degree. But Peter Dodson, professor of paleontology at the University of Pennsylvania, and Brandon Hedrick, a doctoral student in the department of earth and environmental science, felt there was much more to say about the specimen. “I saw a photo of it and instantly knew I wanted to explore it in more depth,” Hedrick says.

Caught in a slurry of debris

To analyze the material in which the animals were preserved, the researchers examined thin slivers of rock under the microscope and samples of ground-up rock using a technique called X-ray diffraction, which relies on the fact that different minerals diffract X-rays in characteristic ways. Both analyses suggested the rock was composed of volcanic material, an indication that the animals were caught in flowing material from an eruption. The fossils’ orientation supported this idea. The findings are reported in the journal Cretaceous Research. “If they were captured in a flow, the long axis—their spines—would be oriented in the same direction,” Hedrick says. “That was what we found. They were likely trapped by a flow, though we can’t say exactly what kind of flow.” Because there was no evidence of heat damage to the bones, the researchers believe the flow was likely a lahar—a slurry of water, mud, rock, and other debris associated with volcanic eruptions.

Was it a nest?

The 24 younger animals appeared to be quite similar in size. Though the team considered whether they might have been embryos, still in their eggs, various observations suggest they had already hatched. First, there was no evidence of eggshell material. Also, other paleontologists have identified even smaller individual psittacosaurs. And “the ends of their bones were well developed, which indicates they were capable of moving around,” Hedrick says. The larger skull was firmly embedded in the same layer of rock as the 24 smaller animals. Two of the younger animals were in fact intertwined with the skull, signs that the animals were closely associated at the time of their death. The skull’s size, about 4.5 inches long, indicated that the animal was between 4 and 5 years old. Earlier findings suggested that P. lujiatunensis did not reproduce until age 8 or 9, so this creature was probably not the parent of the younger dinosaurs. Given the close association of the young P. lujiatunensis with the older individual, however, the researchers believe this specimen may offer evidence of post-hatchling cooperation, a behavior exhibited by some species of modern-day birds. The older juvenile may well have been a big brother or sister helping care for its younger siblings. The researchers emphasize that they can’t definitively call this assemblage of fossils a nest, as some earlier analyses have. “It certainly seems like it might be a nest, but we weren’t able to satisfy the intense criteria to say definitively that it is,” Hedrick says. “It’s just as important to point out what we don’t know for sure as it is to say what we’re certain of.” As a next step, Hedrick and Dodson are examining the microstructure of the bones of the smaller animals to establish whether they were all at the same stage of development, which would lend support to the idea of this being one clutch of animals. Other researchers from Penn and from the Dalian Museum in China are coauthors of the paper. The National Science Foundation and the University of Pennsylvania’s Paleobiology Stipend funded the study.

Source: University of Pennsylvania
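Hedrick’s alignment argument can be sketched with a little circular statistics. The example below computes the resultant length R of doubled axis angles, a standard measure of how tightly undirected axes (like spines, which have no head or tail) cluster around one direction; the orientation values are invented for illustration and are not the study’s data.

```python
import math

# Sketch of the alignment test: if carcasses were carried by a flow,
# their long axes should cluster around one direction. Doubling the
# angles treats each axis as undirected; R near 1 means strong
# alignment, R near 0 means random orientations. Values are invented.

orientations_deg = [12, 8, 15, 10, 7, 14, 9, 11]  # hypothetical spine axes

doubled = [math.radians(2 * a) for a in orientations_deg]
C = sum(math.cos(t) for t in doubled) / len(doubled)
S = sum(math.sin(t) for t in doubled) / len(doubled)
R = math.hypot(C, S)

mean_axis = math.degrees(math.atan2(S, C)) / 2
print(f"alignment strength R = {R:.2f}, mean axis = {mean_axis:.1f} degrees")
```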
A new smartphone app can help parents and pediatricians recognize jaundice in newborn babies. Skin that turns yellow can be a sure sign that a newborn isn’t adequately eliminating the chemical bilirubin. But that discoloration is sometimes hard to see, and severe jaundice, left untreated, can harm the baby. Engineers and physicians have developed a smartphone application that checks for jaundice in newborns and can deliver results to parents and pediatricians within minutes. It could serve as a screening tool to determine whether a baby needs a blood test—the gold standard for detecting high levels of bilirubin. “Virtually every baby gets jaundiced, and we’re sending them home from the hospital even before bilirubin levels reach their peak,” says James Taylor, professor of pediatrics and medical director of the newborn nursery at the University of Washington Medical Center. “This smartphone test is really for babies in the first few days after they go home. A parent or health care provider can get an accurate picture of bilirubin to bridge the gap after leaving the hospital.” The research team will present its results at the Association for Computing Machinery’s International Joint Conference on Pervasive and Ubiquitous Computing in September in Seattle.

Peace of mind

The app, called BiliCam, uses a smartphone’s camera and flash and a color calibration card the size of a business card. A parent or health care professional would download the app, place the card on the baby’s belly, then take a picture with the card in view. The card calibrates the image, accounting for different lighting conditions and skin tones. Data from the photo are sent to the cloud, where they are analyzed by machine-learning algorithms, and a report on the newborn’s bilirubin levels is sent almost instantly to the parent’s phone. “This is a way to provide peace of mind for the parents of newborns,” says Shwetak Patel, associate professor of computer science and engineering and of electrical engineering. “The advantage of doing the analysis in the cloud is that our algorithms can be improved over time.”
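The two steps the article describes, normalizing the photo’s colors against a known card and then mapping skin color to a bilirubin estimate, can be sketched as below. The card values, skin patch, and regression weights are all invented for illustration; BiliCam’s real algorithms are not published in this article.

```python
import numpy as np

# Sketch of a calibration-then-regression pipeline like the one the
# article describes. All numbers here are invented placeholders;
# BiliCam's actual models are not public in this article.

card_true     = np.array([[200, 200, 200], [150, 100, 50]], dtype=float)
card_observed = np.array([[180, 190, 170], [135, 95, 42]], dtype=float)

# Per-channel correction factors derived from the calibration card.
gain = card_true.mean(axis=0) / card_observed.mean(axis=0)

def corrected_skin_color(observed_rgb: np.ndarray) -> np.ndarray:
    """Undo the photo's lighting conditions using the card-derived gains."""
    return observed_rgb * gain

# Toy linear model: more yellow (high R and G relative to B) raises
# the estimate. A real system would learn this from clinical data.
weights, bias = np.array([0.02, 0.02, -0.03]), 1.0

skin = corrected_skin_color(np.array([190.0, 180.0, 120.0]))
print(f"estimated bilirubin: {weights @ skin + bias:.1f} mg/dL (toy value)")
```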
A noninvasive jaundice screening tool is available in some hospitals and clinics, but the instrument costs several thousand dollars and isn’t feasible for home use. Currently, both doctors and parents assess jaundice by looking for the yellow color in a newborn’s skin, but this visual assessment is only moderately accurate. The UW team developed BiliCam to be easy to use and affordable for both clinicians and parents, especially during the first several days after birth, when it’s crucial to check for jaundice.

Cheap screening

Jaundice, or the yellowing of the skin, can happen when an excess amount of bilirubin collects in the blood. Bilirubin is a natural byproduct of the breakdown of red blood cells, which the liver usually metabolizes. But newborns often metabolize bilirubin slowly because their livers aren’t yet fully functioning. If left untreated, severe jaundice can cause brain damage and a potentially fatal condition called kernicterus. The team ran a clinical study with 100 newborns and their families at the University of Washington Medical Center. They used a blood test, the current screening tool used in hospitals, and BiliCam to test the babies when they were between two and five days old. They found that BiliCam performed as well as or better than the current screening tool. Though it wouldn’t replace a blood test, BiliCam could let parents know if they should take that next step. “BiliCam would be a significantly cheaper and more accessible option than the existing reliable screening methods,” says Lilian de Greef, lead author and a doctoral student in computer science and engineering. “Lowering the access barrier to medical applications can have profound effects on patients, their caregivers, and their doctors, especially for something as prevalent as newborn jaundice.”

Potential for developing countries

The researchers plan to test BiliCam on up to 1,000 additional newborns, especially those with darker skin pigments. The algorithms will then be robust enough to account for all ethnicities and skin colors. This could make BiliCam a useful tool for parents and health care workers in developing countries, where jaundice accounts for many newborn deaths. “We’re really excited about the potential of this in resource-poor areas, something that can make a difference in places where there aren’t tools to measure bilirubin but there’s good infrastructure for mobile phones,” Taylor says. Within a year, the researchers say, BiliCam could be used by doctors as an alternative to the current screening procedures for bilirubin. They have filed patents on the technology, and within a couple of years they hope to have Food and Drug Administration approval for the BiliCam app that parents can use at home on their smartphones. Additional researchers contributed from the University of Washington and Southern Methodist University. The Coulter Foundation and a National Science Foundation Graduate Research Fellowship funded the research.

Source: University of Washington

Doctors have stumbled onto a potential new use for two approved medications: when used in combination, they heal wounds more quickly with less scar tissue. In mice and rats, injecting the two drugs in combination speeds the healing of surgical wounds by about one-quarter and significantly decreases scar tissue. If the findings, published in the Journal of Investigative Dermatology, hold up in future human studies, the treatment might also speed skin healing in people with skin ulcers, extensive burns, and battlefield injuries. “The findings mean that wound healing is not only accelerated, but also that real skin regeneration is occurring,” says Zhaoli Sun, director of transplant biology research at Johns Hopkins School of Medicine. “These animals had more perfect skin repair in the wound area.” The wound-healing potential of the two drugs was discovered incidentally while the researchers were working to prevent rejection of liver transplants. One of the drugs, AMD3100, is generally used to move stem cells from bone marrow to the bloodstream to be harvested and stored for patients recovering from cancer chemotherapy. The other, tacrolimus, tamps down immune response.
Researchers noticed that in addition to successfully preventing liver graft rejection in their study, the drugs, when used together, seemed to improve wound healing in animals.

Faster healing

Focusing on just the wound healing "side effect," the scientists launched the rodent study to determine what the mechanism behind its therapeutic effects might be. The researchers divided mice into four groups, each of which received four 5-millimeter circular cuts to remove skin and tissue from their backs. Some of the mice received injections of just AMD3100. Others received injections of tacrolimus in doses just one-tenth of what is usually given to prevent organ and tissue rejection. Another group received injections of both AMD3100 and low-dose tacrolimus. A group of control animals received saline injections. Animals that received only saline healed completely in 12 days, while those that received both drugs healed in nine days, a reduction of 25 percent. Those that received only one drug or the other recorded just a modest one-day improvement in healing time.

Less scar tissue

The researchers had similar findings in rats, though the drug combination worked slightly better, reducing healing time by 28 percent compared to saline. Additionally, they found that the wounds in animals that received the drug combination healed with less scar tissue and regrew the skin's hair follicles. Further tests showed that the drugs work synergistically, with AMD3100 pushing stem cells from bone marrow into the bloodstream and tacrolimus stimulating cells in wound areas to give off molecules that attract the stem cells. Though the study tested the drug combination only on surgical excisions, the researchers say the beneficial effects also apply to burn injuries and excisions in diabetic rats in studies that are now under way. Researchers from Johns Hopkins and the National Institute on Alcohol Abuse and Alcoholism participated in the study. Johns Hopkins University School of Medicine's Transplant Biology Research Center and a gift from the family of Francesc Gines supported the research. Source: Johns Hopkins University

The tectonic plate that dominates the Pacific "Ring of Fire" may not be as rigid as many scientists have assumed. New research suggests that cooling of the lithosphere—the outermost layer of Earth—makes some sections of the Pacific plate contract horizontally at faster rates than others, which causes the plate to deform. The effect is most pronounced in the youngest parts of the lithosphere—about 2 million years old or less—that make up some of the Pacific Ocean's floor. Scientists predict the rate of contraction here to be 10 times faster than in older parts of the plate that were created about 20 million years ago, and 80 times faster than in very old parts of the plate that were created about 160 million years ago. The tectonic plates that cover Earth's surface, including both land and seafloor, are in constant motion; they imperceptibly surf the viscous mantle below. Over time, the plates scrape against and collide into each other, forming mountains, trenches, and other geological features. On the local scale, the movements cover only inches per year and are hard to see.
The same goes for deformations of the type described in the new paper, which is published in Geology, but when summed over an area the size of the Pacific plate, they become statistically significant.

Plates aren't rigid

The new calculations show the Pacific plate is pulling away from the North American plate a little more—approximately 2 millimeters a year—than the rigid-plate theory would account for. Overall, the plate is moving northwest about 50 millimeters a year. "The central assumption in plate tectonics is that the plates are rigid, but the studies that my colleagues and I have been doing for the past few decades show that this central assumption is merely an approximation, that is, the plates are not rigid," says Richard Gordon, professor of geophysics at Rice University. "Our latest contribution is to specify or predict the nature and rate of deformation over the entire Pacific plate." The researchers already suspected cooling had a role from their observation that the 25 large and small plates that make up Earth's shell do not fit together as well as the "rigid model" assumption would have it. They also knew that lithosphere as young as 2 million years was more malleable than hardened lithosphere as old as 170 million years. "We first showed five years ago that the rate of horizontal contraction is inversely proportional to the age of the seafloor," he says. "So it's in the youngest lithosphere (toward the east side of the Pacific plate) where you get the biggest effects."

Still a 2D problem

The researchers saw hints of deformation in a metric called plate circuit closure, which describes the relative motions where at least three plates meet. If the plates were rigid, their angular velocities at the triple junction would have a sum of zero. But where the Pacific, Nazca, and Cocos plates meet west of the Galápagos Islands, the nonclosure velocity is 14 millimeters a year, enough to suggest that all three plates are deforming. "When we did our first global model in 1990, we said to ourselves that maybe when we get new data, this issue will go away," Gordon says. "But when we updated our model a few years ago, all the places that didn't have plate circuit closure 20 years ago still didn't have it." There had to be a reason, and it began to become clear when the researchers looked beneath the seafloor. "It's long been understood that the ocean floor increases in depth with age due to cooling and thermal contraction. But if something cools, it doesn't just cool in one direction. It's going to be at least approximately isotropic. It should shrink the same in all directions, not just vertically," he says. A previous study by Gordon and former Rice graduate student Ravi Kumar calculated the effect of thermal contraction on vertical columns of oceanic lithosphere and determined its impact on the horizontal plane, but viewing the plate as a whole demanded a different approach. "We thought about the vertically integrated properties of the lithosphere, but once we did that, we realized Earth's surface is still a two-dimensional problem," Gordon says.

The big picture

For the new study, Gordon and coauthor Corné Kreemer, associate professor at University of Nevada, Reno, started by determining how much the contractions would, on average, strain the horizontal surface; a toy version of that age-dependent estimate is sketched below.
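The quoted age dependence is easy to sketch. The following toy Python calculation is not the study's method: it simply assumes contraction rate is inversely proportional to seafloor age with an arbitrary constant and a made-up grid of ages, and it reproduces the 10x and 80x ratios mentioned above.

```python
import numpy as np

# Toy version of the age-dependent contraction estimate described in
# the article: horizontal contraction rate taken as inversely
# proportional to seafloor age. The constant k and the grid of ages
# are invented illustrative values, not the study's data.

k = 1.0  # arbitrary rate constant

# A small imaginary grid of seafloor ages, in millions of years.
ages_my = np.array([
    [  2.0,   5.0,  20.0],
    [ 10.0,  40.0,  80.0],
    [ 60.0, 120.0, 160.0],
])

strain_rate = k / ages_my  # rate inversely proportional to age

print("contraction rate relative to the oldest cell:")
print((strain_rate / strain_rate.min()).round(1))

# The ratios quoted in the article fall out directly:
print("2 My vs 20 My :", (k / 2.0) / (k / 20.0))   # 10x faster
print("2 My vs 160 My:", (k / 2.0) / (k / 160.0))  # 80x faster
```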
They divided the Pacific plate into a grid and calculated the strain on each of the nearly 198,000 squares based on their age, as determined by the seafloor age model published by the National Geophysical Data Center. "That we could calculate on a laptop," Gordon says. "If we tried to do it in three dimensions, it would take a high-powered computer cluster." The surface calculations were enough to show likely strain fields across the Pacific plate that, when summed, accounted for the deformation. As further proof, the distribution of recent earthquakes in the Pacific plate, which also relieve the strain, showed a greater number occurring in the plate's younger lithosphere. "In the Earth, those strains are either accommodated by elastic deformation or by little earthquakes that adjust it," he says. "The central assumption of plate tectonics assumes the plates are rigid, and this is what we make predictions from," Gordon says. "Up until now, it's worked really well. The big picture is that we now have, subject to experimental and observational tests, the first realistic, quantitative estimate of how the biggest oceanic plate departs from that rigid-plate assumption." The National Science Foundation supported the research. Source: Rice University

In a new experiment called the "Holometer," scientists are trying to answer some seemingly odd questions, including whether or not we live in a hologram. Much like characters on a television show would not know that their seemingly 3D world exists only on a 2D screen, we could be clueless that our 3D space is just an illusion. The information about everything in our universe could actually be encoded in tiny packets in two dimensions. Get close enough to your TV screen and you'll see pixels, small points of data that make a seamless image if you stand back. Scientists think that the universe's information may be contained in the same way, and that the natural "pixel size" of space is roughly 10 trillion trillion times smaller than an atom, a distance that physicists refer to as the Planck scale. "We want to find out whether spacetime is a quantum system just like matter is," says Craig Hogan, director of the Center for Particle Astrophysics at the Fermi National Accelerator Laboratory and the developer of the holographic noise theory. "If we see something, it will completely change ideas about space we've used for thousands of years." Quantum theory suggests that it is impossible to know both the exact location and the exact speed of subatomic particles. If space comes in 2D bits with limited information about the precise location of objects, then space itself would fall under the same theory of uncertainty. The same way that matter continues to jiggle, as quantum waves, even when cooled to absolute zero, this digitized space should have built-in vibrations even in its lowest energy state. Essentially, the experiment probes the limits of the universe's ability to store information. If there are a set number of bits that tell you where something is, it eventually becomes impossible to find more specific information about the location—even in principle.

Measuring the 'jitter'

The instrument testing these limits is Fermilab's Holometer, or holographic interferometer, the most sensitive device ever created to measure the quantum jitter of space itself.
Now operating at full power, the Holometer uses a pair of interferometers placed close to one another. Each one sends a one-kilowatt laser beam, the equivalent of 200,000 laser pointers, at a beam splitter and down two perpendicular 40-meter arms. The light is then reflected back to the beam splitter, where the two beams recombine, creating fluctuations in brightness if there is motion. Researchers analyze these fluctuations in the returning light to see if the beam splitter is moving in a certain way—being carried along on a jitter of space itself. "Holographic noise" is expected to be present at all frequencies, but the scientists' challenge is not to be fooled by other sources of vibrations. The Holometer is testing a frequency so high—millions of cycles per second—that motions of normal matter are not likely to cause problems. Rather, the dominant background noise is more often due to radio waves emitted by nearby electronics. The Holometer experiment is designed to identify and eliminate noise from such conventional sources. "If we find a noise we can't get rid of, we might be detecting something fundamental about nature—a noise that is intrinsic to spacetime," says Fermilab physicist Aaron Chou, lead scientist and project manager for the Holometer. "It's an exciting moment for physics. A positive result will open a whole new avenue of questioning about how space works." The Holometer team comprises 21 scientists and students from Fermilab, Massachusetts Institute of Technology, University of Chicago, and University of Michigan. The Holometer experiment, funded by the US Department of Energy and other sources, is expected to gather data over the coming year. Source: University of Chicago

The chance that the southwestern United States will experience a decade-long drought sometime in the next century is at least 50 percent, researchers say. Further, there is a 20 to 50 percent chance of a "megadrought"—one that could last up to 35 years. "For the southwestern US, I'm not optimistic about avoiding real megadroughts," says Toby Ault, assistant professor of earth and atmospheric sciences at Cornell University. "As we add greenhouse gases into the atmosphere—and we haven't put the brakes on stopping this—we are weighting the dice for megadrought." As of August 12, 2014, most of California sits in a D4 "exceptional drought," which is the most severe category. Oregon, Arizona, New Mexico, Oklahoma, and Texas loiter in a substantially less severe D1 moderate drought. Climatologists don't know whether the severe western and southwestern drought will continue, but "with ongoing climate change, this is a glimpse of things to come. It's a preview of our future," Ault says.

Mass population migration

While the 1930s Dust Bowl in the Midwest lasted four to eight years, depending on location, a megadrought can last more than three decades, which could lead to mass population migration on a scale never before seen in this country. The West and Southwest must look for mitigation strategies to cope with looming long-drought scenarios, Ault says. "This will be worse than anything seen during the last 2,000 years and would pose unprecedented challenges to water resources in the region."
Computer models show that while the southern portions of the western United States (California, Arizona, New Mexico) will likely face drought, the chances for drought in northwestern states such as Washington, Montana, and Idaho may decrease.

Conservative estimates?

Prolonged droughts around the world have occurred throughout history, including the recent "Big Dry" in Australia and the modern-era drought in sub-Saharan Africa. Tree-ring studies suggest a megadrought occurred during the 1150s along the Colorado River. In natural history, they occur every 400 to 600 years. But by adding the influence of growing greenhouse gas concentrations in the atmosphere, the drought models—and their underlying statistics—are now in a state of flux. Beyond the United States, southern Africa, Australia, and the Amazon basin are also vulnerable to the possibility of a megadrought. With increases in temperatures, drought severity likely will worsen, "implying that our results should be viewed as conservative," reports the study, which is published in the Journal of Climate. Scientists at University of Arizona and the US Geological Survey contributed to the study. The National Science Foundation, the National Center for Atmospheric Research, the US Geological Survey, and the National Oceanic and Atmospheric Administration provided funding. Source: Cornell University

Not enough parenting interventions target men or make a dedicated effort to include them, despite fathers' substantial impact on child development, well-being, and family functioning, researchers report. The team's review of global publications found only 199 that offered evidence on father participation or impact. "Despite robust evidence of fathers' impact on children and mothers, engaging with fathers is one of the least well-explored and articulated aspects of parenting interventions," says lead author Catherine Panter-Brick, professor of anthropology, health, and global affairs at Yale University. "It is therefore critical to evaluate implicit and explicit biases against men in their role as fathers manifested in current approaches to research, intervention, and policy." The researchers' results show that an overhaul of program design and delivery is necessary to get good-quality data on father and couple participation and impact. The researchers suggest that in both research and community-based practice, a "game change" in this field would consist of unequivocal engagement with co-parents. This would strategically improve upon the exclusive mother focus that marginalizes fathers and other co-parents in the bulk of parenting interventions implemented to date. The team recommends a guide to develop best practices for building the evidence base of co-parenting interventions. Additional researchers contributed from Yale and the Fatherhood Institute in London.
LINEAR MOMENTUM

Momentum = mass x velocity: p = mv. The SI unit for momentum is kg·m/s. Momentum is a vector, and it points in the same direction as the velocity. Using the equation p = mv: at the same velocity, as mass increases, momentum increases; at the same mass, as velocity increases, momentum increases.

Example: You are driving north, and a deer with a mass of 146 kg is running head-on toward you with a speed of 17 m/s. Find the momentum of the deer.
p = mv = (146 kg)(17 m/s) = 2500 kg·m/s to the south

CHANGING MOMENTUM

A change in momentum takes time and force. For example, in soccer, when receiving a pass it takes force to stop the ball, and it takes more force to stop a fast-moving ball than a slow-moving ball in the same amount of time. Likewise, for a toy truck and a real truck moving at the same velocity, it takes more force to stop the real truck than the toy truck.

IMPULSE

Impulse is the applied force times the time interval: impulse = FΔt. For a given change in momentum, the required force is reduced when the time interval increases. Examples: giving a little when you catch a ball, or landing on a trampoline.

IMPULSE-MOMENTUM THEOREM

From Newton's second law (it will never go away) and the equation for acceleration, a = Δv/Δt, we can find the equation for force in terms of momentum: F = Δp/Δt (force = change in momentum / time interval). We can also rearrange this equation to find the change in momentum in terms of the net external force and time: Δp = FΔt = mv_f − mv_i.

Example: A 0.50 kg football is thrown with a velocity of 15 m/s to the right. A stationary receiver catches the ball and brings it to rest in 0.020 s. What is the force exerted on the ball by the receiver?
v_i = 15 m/s, v_f = 0.0 m/s, Δt = 0.020 s
FΔt = mv_f − mv_i
F(0.020 s) = 0 − (0.50 kg)(15 m/s)
F = 380 N (to two significant figures), directed opposite the ball's motion

IMPULSE-MOMENTUM THEOREM AND STOPPING DISTANCE

The impulse-momentum theorem can be used to determine the stopping distance of a car or any moving object. From Δp = FΔt, solve for the time interval: Δt = Δp/F = (mv_f − mv_i)/F. Once you have the time, use distance = average velocity x time: Δx = ½(v_i + v_f)Δt.

Example: A 2240 kg car traveling to the west slows down uniformly to rest from 20.0 m/s. The decelerating force on the car is 8410 N to the east. How far does the car move before stopping?
Δt = |Δp|/F = (2240 kg)(20.0 m/s) / (8410 N) = 5.33 s
Δx = ½(v_i + v_f)Δt = ½(20.0 m/s + 0)(5.33 s) = 53.3 m to the west

What happens to momentum when two or more objects interact? First you have to consider the total momentum of all objects involved, which is the sum of all the individual momenta. Like energy, momentum is conserved. Conservation of momentum: total initial momentum = total final momentum, or
m_1v_1i + m_2v_2i = m_1v_1f + m_2v_2f

Example: A boy on a 2.0 kg skateboard, initially at rest, tosses an 8.0 kg jug of water in the forward direction. If the jug has a speed of 3.0 m/s relative to the ground and the boy and the skateboard move in the opposite direction at 0.60 m/s, find the mass of the boy.
m_1v_1i + m_2v_2i = m_1v_1f + m_2v_2f
m_1 = mass of jug = 8.0 kg; m_2 = mass of boy and skateboard = 2.0 kg + x
v_1i = 0, v_2i = 0; v_1f = 3.0 m/s forward, v_2f = 0.60 m/s backward
0 = (8.0 kg)(3.0 m/s) − (2.0 kg + x)(0.60 m/s)
24 kg·m/s = (2.0 kg + x)(0.60 m/s)
40 kg = 2.0 kg + x
x = 38 kg

Newton's Third Law and Collisions

The forces exerted in a collision are equal and opposite, and the time each force is exerted is the same. Then the impulse on each object in the collision is equal and opposite, and since impulse equals the change in momentum, the changes in momentum are also equal and opposite. So if one object gains momentum in a collision, the other object must lose the same amount of momentum.

Collisions

When two objects collide and then move together as one mass, the collision is called a perfectly inelastic collision. These are easier situations to analyze because the two objects effectively become one object after the collision, so the conservation of momentum equation becomes
m_1v_1i + m_2v_2i = (m_1 + m_2)v_f
The two objects have the same final velocity.

Example: A grocery shopper tosses a 9.0 kg bag of rice into a stationary 18.0 kg grocery cart. The bag hits the cart with a horizontal speed of 5.5 m/s toward the front of the cart. What is the final speed of the cart and the bag?
m_1v_1i + m_2v_2i = (m_1 + m_2)v_f
(9.0 kg)(5.5 m/s) = (18.0 kg + 9.0 kg)v_f
50. kg·m/s = (27.0 kg)v_f
v_f = 1.9 m/s

Kinetic Energy and Inelastic Collisions

Total kinetic energy is not conserved in inelastic collisions; it does not remain constant. Some of the energy is converted into sound energy and internal energy as the objects deform during the collision. This is why it is called inelastic: "elastic" usually means something that can keep its shape or return to its original shape. In physics, elastic means that the work done to deform an object is equal to the work done as it returns to its original shape. In inelastic collisions, some of the work done on the inelastic material is converted to other forms of energy, such as heat or sound.

Example: A 0.25 kg arrow with a velocity of 12 m/s to the west strikes and pierces the center of a 6.8 kg target. What is the final velocity of the combined mass? What is the decrease in kinetic energy during the collision?
m_1v_1i + m_2v_2i = (m_1 + m_2)v_f
(0.25 kg)(12 m/s) = (0.25 kg + 6.8 kg)v_f
v_f = 0.43 m/s to the west
KE_i = 0 + ½(0.25 kg)(12 m/s)² = 18 J
KE_f = ½(7.1 kg)(0.43 m/s)² = 0.66 J
Decrease in kinetic energy ≈ 17 J

Elastic Collisions

In an elastic collision, two objects collide and return to their original shapes with no loss of total kinetic energy, and the two objects move separately after the collision. Both total momentum and total kinetic energy are conserved, so both of these equations apply:
m_1v_1i + m_2v_2i = m_1v_1f + m_2v_2f
½m_1v_1i² + ½m_2v_2i² = ½m_1v_1f² + ½m_2v_2f²

Example: A 16.0 kg canoe moving to the left at 12.5 m/s makes an elastic head-on collision with a 14.0 kg raft moving to the right at 16.0 m/s. After the collision, the raft moves to the left at 14.4 m/s. Disregard any effects of the water. Find the velocity of the canoe after the collision, and verify your answer by calculating the total kinetic energy before and after the collision.
Taking right as positive, the total momentum before is (16.0 kg)(−12.5 m/s) + (14.0 kg)(+16.0 m/s) = +24.0 kg·m/s. After the collision, 24.0 kg·m/s = (16.0 kg)v_f + (14.0 kg)(−14.4 m/s), so v_f = (24.0 + 201.6)/16.0 = 14.1 m/s to the right. Checking kinetic energy: before, ½(16.0)(12.5)² + ½(14.0)(16.0)² = 1250 J + 1792 J = 3042 J; after, ½(16.0)(14.1)² + ½(14.0)(14.4)² = 1590 J + 1452 J = 3042 J. Kinetic energy is conserved, as required.

Collisions in Two or More Dimensions

Conservation of momentum still applies: use your vectors.
Momentum is conserved in all directions:
p_xi = p_xf and p_yi = p_yf
m_Av_Axi + m_Bv_Bxi = m_Av_Axf + m_Bv_Bxf
m_Av_Ayi + m_Bv_Byi = m_Av_Ayf + m_Bv_Byf

Center of Mass (CM)

The center of mass is the point at which all the mass of an object can be considered to be located. An object can be treated as a point or small particle no matter what its size or shape, and the center of mass moves just as a particle moves, no matter what the rest of the object does. To find the center of mass of an object: define a coordinate system (preferably one that makes the math easy); then the center of mass is the sum of all the masses times their respective distances from the defined origin, divided by the sum of all the masses.

Center of Gravity (CG)

The center of gravity is the point at which the force of gravity can be considered to act. Gravity acts on all parts of the object, but for determining translational motion we can assume that gravity acts at one particular spot: the same spot as the center of mass.
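The center-of-mass recipe above translates directly into code. Here is a minimal Python sketch for a handful of point masses; the masses and positions are made-up examples.

```python
import numpy as np

# Center of mass as a weighted average of positions:
# CM = sum(m_i * r_i) / sum(m_i)

masses = np.array([2.0, 1.0, 3.0])              # kg
positions = np.array([[0.0, 0.0],               # (x, y) in meters
                      [4.0, 0.0],
                      [0.0, 2.0]])

cm = (masses[:, None] * positions).sum(axis=0) / masses.sum()
print("center of mass:", cm)  # about (0.67, 1.0)
```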
Science is still uncertain as to how exactly life first arose. While experiments with electricity and simple ingredients can make amino acids, the building blocks of proteins and the framework for all living things as we know them, how to make the jump from lifeless chains of molecules to biological life is still unknown. Scientists want to understand how nature forms the more complex chains of molecules that are closest to what makes up living creatures. The closer these molecules are to those found in living creatures, the smaller the leap it takes to make life, and the closer scientists are to understanding how life came about. One of the places scientists look for clues is early Earth and the materials that would have been available then. Asteroids offer a time capsule of sorts, preserving the kinds of materials that were around in the early solar system when Earth was young. By studying meteorites on Earth, scientists can gain an understanding of how our planet evolved: what materials were delivered ready-made to Earth, and which ones had to develop later, through chemistry or biology. One odd chemical in some of these meteorites is cyanide. While most people are familiar with cyanide as a deadly poison, its components are simply carbon and nitrogen, both elements crucial to life. Recently, researchers at NASA and Boise State University studied a slew of meteorites, looking for traces of cyanide, and found some in a surprisingly useful form. They published their results June 25 in Nature Communications.

Poison or Life Giver?

Most surprising to the researchers was seeing how the cyanide was bonded to other materials in the meteorites. The result is very similar to hydrogenase, an enzyme crucial to life. The idea is that if nature can, without the presence of life, produce something very similar to hydrogenase, then that leaves a much smaller gap to cross to create living things. Karen Smith, lead author of the study, posits that molecules like those they found in the meteorites might have been later incorporated into proteins in living creatures. She goes on to muse that the similarity of the compounds in the meteorites and in living creatures "makes you wonder if there was a link between the two." The particular meteorites researchers found bearing cyanide are a type known as CM chondrites. NASA's OSIRIS-REx spacecraft is currently orbiting asteroid Bennu, likely also a CM chondrite, preparing to take samples that it will return to Earth in 2023. Its samples may teach astronomers more about where and how cyanide and other life-adjacent chemicals are distributed throughout the solar system, and perhaps one day show us how life came to be.
The cameras on NASA's Cassini spacecraft captured this rare look at Earth and its moon from Saturn orbit on July 19, 2013. The image has been magnified five times. Taken while performing a large wide-angle mosaic of the entire Saturn ring system, narrow-angle camera images were deliberately inserted into the sequence in order to image Earth and its moon. This is the second time that Cassini has imaged Earth from within Saturn's shadow, and only the third time ever that our planet has been imaged from the outer solar system. Another version of this image is available at PIA14949. Earth is the blue point of light on the left; the moon is fainter, white, and on the right. Both are seen here through the faint, diffuse E ring of Saturn. Earth was brighter than the estimated brightness used to calculate the narrow-angle camera exposure times. Hence, information derived from the wide-angle camera images was used to process this color composite. Both Earth and the moon have been increased in brightness for easy visibility; in addition, the brightness of the moon has been increased relative to the Earth, and the brightness of the E ring has been increased as well. Images taken using red, green and blue spectral filters were combined to create this natural color view. (The accompanying wide-angle frame can be found here: PIA17171.) The images were obtained by the Cassini spacecraft cameras on July 19, 2013 at a distance of approximately 898.414 million miles (1.445858 billion kilometers) from Earth. Image scale on Earth is 5,382 miles (8,662 kilometers) per pixel. The illuminated areas of neither Earth nor the moon are resolved here. Consequently, the size of each "dot" is the same size that a point of light of comparable brightness would have in the narrow-angle camera. The first image of Earth captured from the outer solar system was taken by NASA's Voyager 1 in 1990 and famously titled "Pale Blue Dot" (PIA00452). Sixteen years later, in 2006, Cassini imaged the Earth in the stunning and unique mosaic of Saturn called "In Saturn's Shadow-The Pale Blue Dot" (PIA08329). And, seven years further along, Cassini did it again in a coordinated event that became the first time that Earth's inhabitants knew in advance that they were being imaged from nearly a billion miles (nearly 1.5 billion kilometers) away. It was also the first time that Cassini's highest-resolution camera was employed so that Earth and its moon could be captured as two distinct targets. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo. For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov and http://www.nasa.gov/cassini. The Cassini imaging team homepage is at http://ciclops.org.
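As a rough consistency check, the quoted image scale and distance together imply the camera's angular pixel size, since scale equals range times the angle subtended by one pixel. The short Python sketch below simply divides the two numbers from the caption; the interpretation as the narrow-angle camera's per-pixel angle is our inference, not a statement from the caption.

```python
# Back-of-the-envelope check relating the quoted image scale to the
# quoted distance: scale per pixel divided by range gives the angular
# size of one pixel.

distance_km = 1.445858e9   # Cassini-to-Earth range quoted above
scale_km_per_px = 8662.0   # image scale on Earth quoted above

angular_px_rad = scale_km_per_px / distance_km
print(f"angular pixel size: {angular_px_rad * 1e6:.2f} microradians")
# Roughly 6 microradians per pixel, consistent with a narrow-angle
# planetary camera.
```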
A triangle is a three-sided polygon. Knowing the rules and relationships between the various triangles helps you understand geometry. More importantly, for the high school student and the college-bound senior, this knowledge will help you save time on the all-important SAT tests. Measure the three sides of the triangle with a ruler. If all three sides are the same length, then it is an equilateral triangle, and the three angles contained by those sides are the same. So an equilateral triangle is also an equiangular triangle. An important point to remember is that, in this case, all three angles measure 60 degrees. Regardless of the length of the sides, each angle of an equiangular triangle will be 60 degrees. Cross-check by measuring the angles with the protractor. If each angle measures 60 degrees, then the triangle is equiangular and, by definition, equilateral. Label the triangle "isosceles" if only two sides are equal. Remember that the angles opposite the two equal sides (the base angles) will be equal to each other. So, if you know one base angle in an isosceles triangle, you can find the other two angles. For example, if one base angle is 55 degrees, then the other base angle will be 55 degrees. The third angle will be 70 degrees, derived from 180 - (55+55). Conversely, if two angles are equal, then the two sides opposite them will also be equal. Know that the equilateral triangle is a special case of the isosceles triangle, since it has not two but all three sides and all three angles equal. The right isosceles triangle is another special case: its angles measure 90 degrees, 45 degrees and 45 degrees, so if you know that a right triangle is isosceles, you can determine all three of its angles. Learn that a right triangle has one 90-degree angle. The side opposite the 90-degree angle is the hypotenuse, and the other two sides are the legs of the triangle. The Pythagorean theorem relates to the right triangle and states that the square on the hypotenuse is equal to the sum of the squares on the other two sides. A special case of the right triangle is the 30-60-90 triangle. Look at the three angles of the triangle. If each angle is less than 90 degrees, then label the triangle an "acute" triangle. If even one angle measures more than 90 degrees, then the triangle is an obtuse triangle. The other two angles of the obtuse triangle will be less than 90 degrees. Learn these basic properties of triangles. They will help you save time when working on geometry problems. The sum of the angles of a triangle equals 180 degrees. So, if you know two angles, you can deduce the third. In special cases, knowing just one angle will give you the other two. If you know one interior angle, then you can find the corresponding exterior angle of the triangle by subtracting the interior angle from 180 degrees. For example, if the interior angle measures 80 degrees, the corresponding exterior angle will be 180 - 80 = 100 degrees. The largest side has the largest angle opposite it. It follows that the shortest side has the smallest angle opposite it.
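The classification rules above can be collected into a small function. The sketch below, written in Python, works from side lengths: equal sides give equilateral or isosceles, and the law of cosines finds the largest angle to decide acute, right, or obtuse.

```python
import math

def classify(a: float, b: float, c: float) -> str:
    """Classify a triangle by its side lengths a, b, c."""
    sides = sorted([a, b, c])
    if sides[0] + sides[1] <= sides[2]:
        return "not a valid triangle"

    # Classification by sides.
    if a == b == c:
        by_sides = "equilateral"
    elif a == b or b == c or a == c:
        by_sides = "isosceles"
    else:
        by_sides = "scalene"

    # The largest angle is opposite the largest side (law of cosines).
    s1, s2, s3 = sides
    largest = math.degrees(math.acos((s1**2 + s2**2 - s3**2) / (2 * s1 * s2)))
    if abs(largest - 90.0) < 1e-9:
        by_angles = "right"
    elif largest > 90.0:
        by_angles = "obtuse"
    else:
        by_angles = "acute"

    return f"{by_sides} and {by_angles}"

print(classify(3, 4, 5))   # scalene and right
print(classify(5, 5, 5))   # equilateral and acute
print(classify(2, 2, 3))   # isosceles and obtuse
```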
The subject of economics has several concepts that need our attention, and each of them explains a different phenomenon. Elasticity is one such concept: it describes the sensitivity of one variable to a change in another. In business and economics, elasticity refers to the degree to which individuals, customers, producers, and suppliers alter demand and supply when variables like price or income change. There are different types of elasticity, and each explains the effect of changes on a specific variable. When we discuss economics, two of the most talked-about terms are demand and supply, so elasticity covers these two terms as well: elasticity of demand and elasticity of supply are the two main types, and each is further classified into sub-categories. In this blog, we will mainly discuss elasticity and its different types. We will also look at the way elasticity works. Later in the blog, we will discuss the factors affecting the elasticity of demand. As Investopedia explains, "Elasticity is a measure of a variable's sensitivity to a change in another variable, most commonly this sensitivity is the change in price relative to changes in other factors. In business and economics, elasticity refers to the degree to which individuals, consumers, or producers change their demand or the amount supplied in response to price or income changes. It is predominantly used to assess the change in consumer demand as a result of a change in a good or service's price." Elasticity is also defined in economics as the measurement of the percentage change in one economic variable in response to a change in another. Elasticity is a central concept in economics and has many applications. Basic demand and supply models show that variables like price, demand, and income are generally related, and elasticity provides crucial information about the strength or weakness of those relationships. Based on the value of elasticity, variables are categorized as elastic or inelastic. An elastic variable (with an absolute elasticity value greater than 1) is one that responds more than proportionally to changes in other variables. In contrast, an inelastic variable (with an absolute elasticity value less than 1) is one which changes less than proportionally in response to changes in other variables. To better understand how this works, consider what the value tells us. When the value of elasticity is greater than 1.0, the demand for that good or service changes more than proportionally with its price. On the other hand, when the value of elasticity is less than 1.0, the demand for the good or service is comparatively unaffected by changes in price; we call such demand inelastic, meaning that the buying habits of consumers remain more or less the same irrespective of the change in prices. There is one more situation, which is purely theoretical: "perfectly inelastic" demand, which occurs when the value of elasticity is zero. The demand for a perfectly inelastic good would remain the same even if prices changed drastically. As the definition suggests, there are no real-world examples of perfectly inelastic goods; if one existed, its producers and suppliers could charge practically any price, knowing that demand would not fall.
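A quick sketch shows how such a value is computed in practice. The Python snippet below uses the midpoint (arc) formula, one common convention, with invented demand numbers; values above 1 in absolute terms are classified as elastic, below 1 as inelastic.

```python
def arc_elasticity(q1: float, q2: float, p1: float, p2: float) -> float:
    """Price elasticity of demand via the midpoint (arc) formula."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)   # percent change in quantity
    pct_p = (p2 - p1) / ((p1 + p2) / 2)   # percent change in price
    return pct_q / pct_p

# Invented example: price rises from $4 to $5; quantity demanded
# falls from 120 units to 90 units.
e = arc_elasticity(q1=120, q2=90, p1=4, p2=5)
print(f"elasticity = {e:.2f}")                    # about -1.29
print("elastic" if abs(e) > 1 else "inelastic")   # elastic
```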
Elasticity is an economic concept used to gauge the change in the quantity demanded of a good or service in response to movements in its price. An item is considered elastic if the quantity demanded changes dramatically when its price increases or decreases. Conversely, an item is considered inelastic if the quantity demanded changes very little when its price fluctuates. Consider examples of both kinds of goods. An example of a highly inelastic good is insulin: its consumers are diabetic patients, who will not stop buying it if prices increase. At the other extreme are highly elastic products. For example, the demand for refrigerators rises during festive seasons, when prices are slashed and people have been waiting for the discounts. As mentioned above in the blog, there are two main types of elasticity: elasticity of demand and elasticity of supply. Elasticity of demand is an economic measure of the sensitivity of demand relative to a change in another variable. The demand for a good or service depends on multiple factors, such as price, income, and preference, and a change in any of these variables causes a change in the quantity demanded of the good or service.

Four types of elasticity

Price Elasticity of Demand (PED) measures the responsiveness of quantity demanded to a change in price. There are two ways to measure PED: arc elasticity, which measures it over a price range, and point elasticity, which measures it at one point. Cross Elasticity of Demand (XED) is an economic concept that measures the responsiveness in the quantity demanded of one good when the price of another good changes. Also called cross-price elasticity of demand, this measurement is calculated by taking the percentage change in the quantity demanded of one good and dividing it by the percentage change in the price of the other good. Income Elasticity of Demand measures the responsiveness in the quantity demanded of a good or service when the real income of consumers changes, keeping all other variables constant. The formula for calculating income elasticity of demand is the percent change in quantity demanded divided by the percent change in income. This concept helps us determine whether a good is a necessity or a luxury. Price Elasticity of Supply (PES) measures the responsiveness of the supply of a good or service to a change in its market price. Basic economic theory holds that the supply of a good decreases when its price falls and increases when its price rises. So, these are the four types of elasticity that measure the responsiveness of the two main economic variables, demand and supply, when other market variables change. Three main factors affect a good's price elasticity of demand. In general, we can say that the more good substitutes there are, the more elastic demand will be. This can be understood through an example. Suppose a coffee company increases the price of its cup of coffee by $1. Consumers are likely to switch to another company, or they may even replace their cup of coffee with a cup of strong tea. This means that the cup of coffee is an elastic good, as a small increase in price results in a large decrease in demand. Another example is caffeine itself. Let us say that the price of caffeine goes up.
But this time consumers will not switch to another beverage, as there are very few good substitutes for caffeine, so most people will not willingly give up their dose of caffeine. This means that caffeine is an inelastic product. These two examples also tell us that a particular product can be elastic even while its broader industry is inelastic. The second factor is necessity, and this is not a mystery at all. We all need a few things for survival and cannot give up on them; the products we require for survival are termed necessity products. For example, consider rice grains. A large part of the Indian population consumes rice daily, so even if prices go higher, consumption will not decrease drastically and demand will remain almost the same. This makes the good inelastic. The third influential factor is time. We consume some goods because we are addicted to them; two of the most popular examples are alcohol and tobacco. We can understand the role of time with an example. Suppose the government increases the taxes on tobacco, which leads to an increase in prices. A person addicted to smoking will not immediately stop buying cigarettes, which makes the product inelastic in the short run. However, if prices keep increasing and the person can no longer afford to spend extra on those cigarettes, he or she may get rid of the habit. This makes the price elasticity of cigarettes for that consumer elastic in the long run. Elasticity is a concept of economics that affects businesses, so they need to understand whether their goods or services are elastic or inelastic. This helps them form business strategies and also market those goods or services. Companies selling highly elastic goods compete with other businesses on price, and they are required to have a high volume of sales transactions to remain solvent. On the other hand, firms that sell inelastic, must-have goods enjoy the luxury of setting higher prices without worrying about a decrease in demand and sales. Besides affecting prices, the elasticity of goods also affects a company's customer retention rate. Every business strives to sell goods or services with inelastic demand; doing so will ultimately increase the customer retention rate, as customers will remain loyal and continue to buy the goods or services even in the case of a price surge. In this blog, we tried to explain another concept of economics, elasticity. By now it should be clear what it means: in simpler terms, elasticity is a measurement of the change in one market variable in response to a change in another. We also explained how elasticity works. Based on the values of elasticity, we categorize goods or services as elastic or inelastic. Elastic goods are those that are highly affected by changes in the variables, while inelastic goods are those on which changes in market variables have negligible effects. The four different types of elasticity explain the effect of variables on demand and supply. Elasticity is a great concept for understanding the dynamics of the market, and it plays a significant role in the success of businesses.
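For completeness, the cross- and income-elasticity formulas described earlier reduce to the same percent-change arithmetic. The sketch below uses invented numbers for a tea-versus-coffee substitute pair and for an income change; the thresholds in the comments follow the usual textbook reading.

```python
def pct_change(old: float, new: float) -> float:
    return (new - old) / old

# XED: demand for tea when coffee's price rises. A positive value is
# the textbook signature of substitute goods. Numbers are invented.
xed = pct_change(100, 115) / pct_change(4.0, 5.0)
print(f"cross elasticity (tea vs. coffee price): {xed:.2f}")   # 0.60

# YED: demand for a good as income rises. Greater than 1 suggests a
# luxury, between 0 and 1 a necessity, negative an inferior good.
yed = pct_change(100, 130) / pct_change(50_000, 55_000)
print(f"income elasticity: {yed:.2f}")                          # 3.00
```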
When comparing theoretical models of how things work to real-world applications, physicists often approximate the geometry of objects using simpler objects. This could mean using thin cylinders to approximate the shape of an airplane, or a thin, massless line to approximate the string of a pendulum. Sphericity gives you one way of quantifying how close an object is to a sphere. You can, for example, calculate the sphericity as an approximation of the Earth's shape, which is, in fact, not a perfect sphere. For a single particle or object, you can define sphericity as the ratio of the surface area of a sphere that has the same volume as the particle or object to the surface area of the particle itself. This is not to be confused with Mauchly's Test of Sphericity, a statistical technique for testing assumptions within data. Put into mathematical terms, the sphericity Ψ ("psi") is

Ψ = π^(1/3)(6V_p)^(2/3) / A_p

for the volume of the particle or object V_p and the surface area of the particle or object A_p. You can see why this is the case through a few mathematical steps to derive this formula.

Deriving the Sphericity Formula

First, find another way of expressing the surface area of a sphere.
- A_s = 4πr²: Start with the formula for the surface area of a sphere in terms of its radius r.
- (4πr²)³: Cube it by taking it to the power of 3.
- 4³π³r⁶: Distribute the exponent 3 throughout the formula.
- 4π(4²π²r⁶): Factor out one 4π, placing the rest inside parentheses.
- 4π × 3²(4²π²r⁶/3²): Multiply and divide by 3².
- 36π(4πr³/3)²: Pull the exponent of 2 out of the parentheses, so the parenthesized factor is the volume of a sphere.
- 36πV_p²: Replace the content in the parentheses with the volume of a sphere, giving A_s³ = 36πV_p².
- A_s = (36πV_p²)^(1/3): Take the cube root of this result so that you are back to the surface area.
- 36^(1/3)π^(1/3)V_p^(2/3): Distribute the exponent of 1/3 throughout the parentheses.
- π^(1/3)(6V_p)^(2/3): Combine 36^(1/3) = 6^(2/3) with V_p^(2/3).

This gives you a way of expressing the surface area of a sphere in terms of its volume. Dividing this expression by the particle's actual surface area A_p gives the ratio defined as Ψ. Because it is defined as a ratio, the maximum sphericity an object can have is one, which corresponds to a perfect sphere. You can use different values for changing the volume of different objects to observe how sphericity depends more on certain dimensions or measurements than on others. For example, when measuring the sphericity of particles, elongating a particle in one direction is much more likely to change its sphericity than altering the roundness of certain parts of it.

Volume of Cylinder Sphericity

Using the equation for sphericity, you can determine the sphericity of a cylinder. First figure out the volume of the cylinder. Then calculate the radius of a sphere that would have this volume. Find the surface area of the sphere with this radius, and divide it by the surface area of the cylinder. If you have a cylinder with a diameter of 1 m and a height of 3 m, you can calculate its volume as the product of the area of the base and the height: V = π(0.5 m)²(3 m) = 2.36 m³. Because the volume of a sphere is V = 4πr³/3, you can calculate the radius of a sphere with this volume as r = (2.36 m³ × 3/(4π))^(1/3) = 0.83 m. The surface area of a sphere with this radius would be A = 4πr² = 8.56 m².
The cylinder has a surface area of 11.00 m², given by A = 2πr² + 2πrh, which is the sum of the areas of the two circular bases and the area of the curved surface of the cylinder. Dividing the sphere's surface area by the cylinder's surface area gives a sphericity Ψ of 0.78. You can expedite this step-by-step process involving the volume and surface area of a cylinder alongside the volume and surface area of a sphere using computational methods that can calculate these variables much more quickly than a human can; a short sketch of such a calculation appears at the end of this section. Performing computer-based simulations using these calculations is just one application of sphericity.

Geological Applications of Sphericity

Sphericity originated in geology. Because particles tend to take irregular shapes whose volumes are difficult to determine, the geologist Hakon Wadell created a more applicable definition that uses the ratio of the nominal diameter of the particle (the diameter of a sphere with the same volume as the grain) to the diameter of the sphere that would encompass it. Through this, he created a concept of sphericity that could be used alongside other measurements, like roundness, in evaluating the properties of physical particles. Aside from determining how close theoretical calculations are to real-world examples, sphericity has a variety of other uses. Geologists determine the sphericity of sedimentary particles to figure out how close they are to spheres. From there, they can calculate other quantities, such as the forces between particles, or perform simulations of particles in different environments. These computer-based simulations let geologists design experiments and study features of the earth such as the movement and arrangement of fluids between sedimentary rocks. Geologists can also use sphericity to study the aerodynamics of volcanic particles. Three-dimensional laser scanning and scanning electron microscope technologies have directly measured the sphericity of volcanic particles. Researchers can compare these results to other methods of measuring sphericity, such as the working sphericity: the sphericity of a tetradecahedron (a polyhedron with 14 faces) computed from the flatness and elongation ratios of the volcanic particles. Other methods of measuring sphericity include approximating the circularity of a particle's projection onto a two-dimensional surface. These different measurements give researchers more accurate ways of studying the physical properties of these particles when they are released from volcanoes.

Sphericity in Other Fields

The applications of sphericity to other fields are worth noting as well. Computer-based methods, in particular, can examine other features of sedimentary material, such as porosity, connectivity, and roundness, alongside sphericity to evaluate physical properties such as the degree of osteoporosis of human bones. Sphericity also lets scientists and engineers determine how useful biomaterials may be for implants. Scientists studying nanoparticles can measure the size and sphericity of silicon nanocrystals to find out how they can be used in optoelectronic materials and silicon-based light emitters. These can later be put to use in various technologies like bioimaging and drug delivery.
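Here is the promised sketch: a small Python function that automates the cylinder walk-through above and checks it against the closed form Ψ = π^(1/3)(6V)^(2/3)/A.

```python
import math

def cylinder_sphericity(radius: float, height: float) -> float:
    """Sphericity of a cylinder: surface area of the equal-volume
    sphere divided by the cylinder's surface area."""
    v_cyl = math.pi * radius**2 * height
    a_cyl = 2 * math.pi * radius**2 + 2 * math.pi * radius * height
    r_sph = (3 * v_cyl / (4 * math.pi)) ** (1 / 3)
    a_sph = 4 * math.pi * r_sph**2
    return a_sph / a_cyl

# The example above: diameter 1 m (radius 0.5 m), height 3 m.
print(round(cylinder_sphericity(0.5, 3.0), 2))  # 0.78

# Cross-check with the closed form psi = pi^(1/3) * (6V)^(2/3) / A.
v = math.pi * 0.5**2 * 3.0
a = 2 * math.pi * 0.5**2 + 2 * math.pi * 0.5 * 3.0
print(round(math.pi ** (1 / 3) * (6 * v) ** (2 / 3) / a, 2))    # 0.78
```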
History of Canada
W. L. Grant, Professor of Colonial History in Queen's University

The Rebellion: Lord Durham

An Intercolonial Railway.—This wider union was at the time impossible; communication between even Upper and Lower Canada was so slow that John Beverley Robinson urged this as sufficient reason against their union. Nova Scotia and New Brunswick were altogether too far away. Durham therefore advised improvement of the canals, and the building of the Intercolonial Railway. "The formation of a railroad from Halifax to Quebec would entirely alter some of the distinguishing characteristics of the Canadas."

Canals.—With an organized Cabinet at its head, parliament showed an energy unknown in former days. Between 1840 and 1850 our canal system was developed with great energy. The Lachine Canal was enlarged; the Cornwall Canal around the Long Sault Rapids was opened; the Beauharnois Canal enabled boats to pass the Coteau, Cedar, and Cascade Rapids; others were completed around the smaller rapids higher up. The Welland Canal was enlarged, new canals were dug on the Ottawa, and the St. Lawrence was bound to Lake Champlain by the Richelieu system. But just when these were finished and when we hoped by them to control the growing grain trade of the American West, we found that our water-ways and canals were being side-tracked by the building of railways all over the United States, and that we must imitate our neighbours, or fall hopelessly behind.

Railways.—The first Canadian railway had been opened by the Governor-general in 1836. It extended from La Prairie on the St. Lawrence opposite Montreal to St. Johns on the Richelieu. It was sixteen miles long, and the cars were drawn by horses; in 1837 the first locomotive was used on it; during the winter it ceased operations. In 1851 there were only sixty-six miles of railway in the whole of what is now the Dominion. Then an improvement set in under the guidance of Mr. Hincks. In that year a railway from Toronto to Montreal was incorporated, and a great plan formed for a line from the American border near Sarnia to Halifax. An agreement was made with the Maritime Provinces to share in its building, and Imperial aid was sought. But the mother country quarrelled with the delegates about the route through New Brunswick, and Hincks, impatient at the delay, arranged with some British capitalists to build a Grand Trunk Railway from Quebec to the American frontier. In 1853, this line was opened from Portland to Montreal; in 1856, from Montreal to Toronto; in 1858, from Toronto to Sarnia. In December, 1859, the great Victoria Bridge, crossing the St. Lawrence just below Montreal, was opened for traffic, though it was not formally declared complete until the next year, when Albert Edward, Prince of Wales, afterwards King Edward VII, came out specially from England for the purpose. By this time we had a line complete from Rivière du Loup, 100 miles below Quebec, to Sarnia at the foot of Lake Huron. Meanwhile in the western part of the province, the Great Western Railway had joined Toronto, Hamilton, and London. In 1867 there were 2,087 miles of railway in the Dominion, of which 1,275 were in Ontario, 523 in Quebec, 196 in New Brunswick, and 93 in Nova Scotia. Partly owing to the extravagance and mismanagement shown in its construction, the Grand Trunk was not at first a success, and the province had more than once to come to its help; in all it obtained provincial aid to the extent of about $16,000,000.
Some of the other lines, in their desire for aid, used jobbery and corruption in parliament. But the good done outweighed tenfold the harm. The railways changed the whole face of the country; they brought comfort and prosperity to thousands of homes; travel and the intelligence which travel brings became the possession of all, not the perquisite of the few. Above all, they bound our country together. But for the railways the great union which solved so many difficulties would have been utterly impossible.

Atlantic Navigation.—During these same years great advances were also made in steam navigation. Canada was thus bound closer to the Maritime Provinces, and the whole continent closer to Great Britain and to Europe. In 1831 the Royal William, a paddle-wheel steamer of 1300 tons, was built in Quebec, and plied between that port and Halifax; in 1833 she essayed a bolder feat, and in spite of stormy weather crossed the Atlantic from Quebec to London. Though one or two other vessels had previously used steam to assist their sails, she was the first ship to cross from the new world to the old with steam as the main motive power. But for some years longer the mails were carried in sailing ships; the average time taken by a letter from Liverpool to Halifax was thirty-five days, and to Quebec fifty days. In 1838 the sailing ship which was carrying Joseph Howe of Nova Scotia to England was overtaken and passed by a steamer, and on his arrival he brought strongly before the Colonial Office the advantages of this method of navigation. A contract was entered into with the Cunards, prominent merchants of Halifax, and in 1840 the steamship Britannia entered Halifax harbour with the mails. This cut down the time from England to Nova Scotia to twelve and a half days, and five days later a fast steamer from Halifax entered Quebec. Canada was thus brought almost three times as close to Great Britain as she had been. In 1856 the Allan Line began to run regularly from Montreal to Liverpool and in 1859 introduced a weekly service. For some years its steamers were the fastest in the world, but later on a series of terrible disasters due to careless pilotage and to inadequate buoys and light-houses made the United States lines the favourites. In spite of such accidents, these great improvements in navigation did much to keep Canadians in touch with the old world, and to give them a broader point of view.

The Maritime Provinces, 1763–1864

Railways.—During the next few years a vigorous policy of railway development was carried on. The Intercolonial Railway was projected, and a line built from Halifax to Truro; so that, when in 1864 federation was proposed, improved communications had bound the province into a whole.

Railways. Liquor Traffic.—From 1848 till 1864 the chief matters of interest were the struggle to make King's College undenominational, the building of railways, and the fight over prohibition. In 1855 Mr. (afterwards Sir) Leonard Tilley brought in a bill to prohibit the liquor traffic, which was the curse of the province. The bill was passed, but was openly disregarded; just as much drinking went on as before, and the ministry which had passed it grew so unpopular that the Lieutenant-governor dismissed it, much against its will, and in the parliament which followed, the bill was repealed.
In railway building the government endeavoured to co-operate with Canada and Nova Scotia, but this proved impossible, and the province went ahead on its own account, till by 1864 it had 196 miles in operation, chiefly between St. John and Shediac on the Gulf of St. Lawrence.

The First Years of Confederation

Joseph Howe.—In Howe were combined the oratory of Papineau and the wisdom of Baldwin. His power of persuading men was enormous. In 1850 the Colonial Secretary refused to guarantee the bonds of the proposed provincial railway for £800,000. Howe went over to England, and came back with the promise of a guarantee of £7,000,000 for a British North American system. In 1865, when the United States was on the point of denouncing the Reciprocity Treaty, a great convention of all the Boards of Trade of the United States and Canada was held at Detroit. Howe's speech in favour of the treaty was so eloquent that though the Americans at first were hostile, before he sat down they sprang to their feet, and passed a unanimous standing vote in its favour. His opposition to federation is a blot on his memory, but at least he died in the noble effort to erase it.

The Intercolonial Railway.—In 1864 the delegates to Quebec from the Maritime Provinces had had their choice of taking the steamer from Pictou which called at Shediac, or of going by sea to Portland, Maine, and there meeting the Grand Trunk Railway. They had therefore demanded as one of the terms of Confederation the building of an intercolonial railway, and in 1867 this was begun with Mr. (afterwards Sir) Sandford Fleming as chief engineer. The Imperial Government offered aid, but insisted that as the line would be essential in time of war, it should not run too near the boundary. This added to the length and to the expense, but after long discussions the present northern route was adopted, the lines already built from Halifax to Truro and from St. John to Moncton (near Shediac) were made use of, and in 1876 the Intercolonial Railway, owned and operated by the Dominion, was opened from Halifax and St. John to Rivière du Loup, the terminus of the Grand Trunk Railway. Later on the Government bought from the Grand Trunk Railway its line from Rivière du Loup to Quebec, and still later, partly by building, partly by buying up other railways, extended it into Montreal. The line from Truro to the Strait of Canso was also taken over, and extended to Sydney. The Intercolonial has not been a commercial success, but if Canada was to become a nation, the various parts of the Dominion had to be united in bands of steel, no matter what the cost.

Downfall of Macdonald.—In 1873 Prince Edward Island, which had refused to join in 1867, entered the Dominion. Of all British North America, only Newfoundland now remained outside. Never did the reputation of Sir John Macdonald stand so high as at this time. He had widened the bounds of the Dominion till they extended from sea to sea; he had steered her safely through risings in East and West; he had thrilled her with the sense of her loyalty to the Empire and had induced parliament to make a great sacrifice in that Empire's behalf. Yet before the end of the year, he was driven from power and plunged in deep disgrace.

The Pacific Railway.—The Pacific Railway, promised to British Columbia, had long been the desire of those who with the eye of faith could see the future.
In 1851 Joseph Howe told a great meeting in Halifax: "I believe that many in this room will live to hear the whistle of the steam-engine in the passes of the Rocky Mountains." In 1857 Chief-justice Draper of Upper Canada made the same prophecy in Great Britain. At the time it seemed a dream, but like so many of the dreams of great men it was to be realized, though not till it had overthrown a Canadian Government and stained the glory of our greatest statesman.

"Ocean to Ocean."—As soon as the agreement with British Columbia was signed, the government sent out surveying parties, and in 1872 an expedition under their chief engineer, Sandford Fleming, crossed the Rockies by the Yellowhead Pass. It was Fleming's enthusiastic report, and still more Ocean to Ocean, a book describing the journey, written by the secretary, the Rev. G. M. Grant, of Halifax, which first inspired eastern Canada with a belief in the West, and showed us something of the great future of the vast domain which we had purchased so cheaply.

The Pacific Scandal.—The Government had at first intended to build the line itself, but afterwards decided to employ a private company, known as The Canadian Pacific Railway Company, which had been formed with Sir Hugh Allan at its head—a prominent Montreal merchant, president of the Allan Line of ocean steamships. Hardly had Parliament met in 1873 when Mr. L. S. Huntington, a Liberal member, rose in his place and accused the Government of having sold the charter to Sir Hugh Allan and his friends in return for large contributions to help in the recent general election. What made it worse was that this money was said to have been obtained from American capitalists. For a time these charges were not believed, and though a committee was appointed nothing much was done; but the secret correspondence between Sir Hugh Allan and the American contractors was stolen and published, and a few days later copies were made public of letters and telegrams from Sir John Macdonald and Sir Georges Cartier, the genuineness of which could not be doubted, and which went far to arouse in the public mind suspicions of wide-spread corruption. As the proceedings of the committee went on, Macdonald's own evidence showed that he had received money from Sir Hugh Allan. Most Canadians knew that elections were not won without spending money, but it was too much to have the Prime Minister of Canada telegraphing "I must have another ten thousand; will be the last time of calling. Do not fail me"; or his chief subordinate Sir Georges Cartier, sending to Sir Hugh Allan "a memorandum of immediate requirements," which amounted to $200,000. Even had the demand been made of a relative or a party friend, the amount would have appeared excessive; made of a man who had no strong party ties, and who was seeking to obtain large favours from the Government, it was unpardonable. The crisis came when Donald Smith declared against the Government, and it resigned rather than face inevitable defeat (November, 1873). The Governor-general then called upon the Liberal leader, Mr. Alexander Mackenzie, to form a ministry. Mr. Mackenzie did so, then almost immediately dissolved Parliament and held a general election, in which the conscience of the country returned the Liberals to power by a large majority.

Alexander Mackenzie.—Mackenzie (1822–1892) was a Scotchman, who by integrity and force of character had risen from being a stone-mason.
Canada has never had a more honourable and faithful Minister of Public Works, one who more steadfastly refused to use government contracts to reward party favourites or to buy constituencies. His Government founded at Kingston the Royal Military College (1875), many of whose graduates have since taken an honoured place in the ranks of the British army; it established the Supreme Court of Canada (1875), though still allowing an appeal from it to the British Privy Council; it passed the "Canada Temperance Act," better known as the "Scott Act," which did not a little to check drunkenness; it greatly purified our elections by introducing vote by ballot (1874) and enacting that the whole general election must take place upon a single day. But Canada needed more than good administration; she needed a man with imagination, and this Mackenzie lacked. Sir Hugh Allan's Company had been dissolved, but a Canadian Pacific Railway was a necessity, and this the Prime Minister could not see. He endeavoured to connect the great lake and river stretches by short lines of rail, offered British Columbia a post-road and a telegraph line, but went ahead with the railway so slowly that the Pacific Province went to the verge of secession, and was held within the Dominion largely by the wisdom and skill of the Governor-general Lord Dufferin.

The "National Policy."—Just at this time there swept over the whole world a wave of trade depression. American manufacturers, unable to sell their goods at home, dumped them in Canada; many of our business men went bankrupt; the cry for protection grew louder and louder. A little before this time a group of young men in Ontario, proud of their country and resolved to raise her to a place among the nations, had founded an association known as "Canada First"; its members did much to fire Canadians with a desire for self-sufficiency and for independence of American merchants. So far as Macdonald had studied the question he believed in free trade; and, just as at Confederation, he cautiously waited for a time. At last in the spring of 1876 he saw that protection would be popular, moved a motion in the House of Commons advocating it, and in the summer of the same year went through the country making speeches in its favour at great political picnics. Sir Charles Tupper ably seconded him. Dazzling pictures were drawn of how the tall chimneys of factories would rise throughout the country and depression pass away as if at an enchanter's wand. For a time the Liberals were so struck with his success that they thought of taking up protection themselves, but the members from Nova Scotia refused to adopt such a policy when their province imported so many of its manufactured articles from the United States. After much hesitation, the Government determined to stick to the existing low tariff and "kicked complaining industry into the camp of its opponents." In the election of 1878 the Liberals were at a great disadvantage; the country was unhappy and unprosperous, and all they could say was that this was due to causes beyond their control, that they were, as one of themselves unfortunately put it, only "flies on the wheel." The Conservatives, on the other hand, advocated a definite policy from which they promised the grandest results. The country chose the men who promised to do something, rejecting those who said that all must be left to nature, and in 1878 returned Macdonald to power by a large majority.
His finance minister, Sir Leonard Tilley, promptly fulfilled his promise, and early in 1879 a protective tariff was introduced known as "The National Policy."

Eighteen Years of Conservatism

Canadian Pacific Railway.—The new Government set itself to carry out the bargain with British Columbia. In 1880 the Canadian Pacific Railway Company was incorporated, with Sir Donald Smith and his cousin, Sir George Stephen (afterwards Lord Mountstephen), as its chief members, and set to work in 1881, splendidly backed up by Sir John Macdonald and Sir Charles Tupper. Never did financiers more boldly stake their all upon the hazard of success; never did politicians, dependent upon votes for everything save life itself, plan a bolder enterprise in bolder confidence in the people of Canada. "They'll never stand it," said more than one old friend to Sir John Macdonald; but the Prime Minister knew the people of Canada better than that. By the contract the Government gave to the Company $25,000,000 in cash, 25,000,000 acres of land, and about 670 miles of railway already built or to be built through some of the most difficult parts. Smith, Stephen, and their fellow directors of the Bank of Montreal, embarked their last dollar in the enterprise. Even so, it seemed for a time as though it would fail. A prominent Canadian newspaper said that it would never pay for its axle grease; a prominent Canadian statesman laughed at the idea of building a railway through a "sea of mountains." But the courage of the directors, and the skill of their chief engineer, Mr. (afterwards Sir William) Van Horne, triumphed over every obstacle. The line was pushed rapidly around the rugged north shore of Lake Superior, over the tangled mass of rock and lake and wilderness between Lake Superior and Winnipeg, across a thousand miles of prairie where there was not an inhabitant save the buffalo and the Indian and a few hundreds of almost equally savage hunters, through the terrible Kicking Horse Pass, through Roger's Pass in the Selkirks which was discovered only in 1883 when the railway was already at the base of the mountains, then down the valley of the Fraser, and so at last out to Burrard's Inlet, an arm of the Pacific, where now stands the stately city of Vancouver.

The line was built solidly but at headlong speed. On the prairie a record was established by the laying of six miles of rail in a day. A great army of men had to be fed a thousand miles from the base of supplies, but every difficulty yielded to the organizing skill of Van Horne. By the contract the Company had been given ten years to complete the line, but so swiftly did the work proceed that on November 7th, 1885, at the lonely little hamlet of Craigellachie in the Rockies, Sir Donald Smith drove home the last spike of the first Canadian transcontinental railway. The expense was enormous; the Government had again and again to come to the relief of the Company, and did so in splendid confidence in the future of Canada. Once after the departure of Sir Charles Tupper to become Canadian High Commissioner in England (1883), Macdonald's resolution faltered. It is said that Sir George Stephen had packed his bag and was about to leave Canada a ruined man, when a friend persuaded Macdonald to call another Cabinet meeting and to agree to give the last millions that were needed. On so narrow a chance hung the future of Canada.
Discontent in the North-West.—The West seems fated to show at once the heights to which Canadian statesmen can rise, and the depths to which they can fall. The Canadian Pacific Railway was not yet finished when it was used to take out troops to quell a rebellion which wisdom could have prevented. In 1870 the half-breeds on the Red River had been granted 240 acres of land apiece, in settlement of their claim through their Indian mothers to be owners of the soil. Most of them soon sold out, and went west to join their friends on the banks of the Saskatchewan, where they took up land after the fashion of their ancestors in long strips fronting on the river. The Canadian Government was spending large sums of money in England to attract settlers, yet it would do nothing for these settlers who were already on the spot. Surveyors were sent out, who repeated the mistake made on the Red River in 1869. Each square mile surveyed was divided into four quarter-sections of 160 acres each. This to the half-breed simply meant the loss of his farm. It may be said that most of these men had already been granted land in Manitoba; that if they had been granted a new title to new land on the Saskatchewan, they would again have sold it to hungry land-sharks, and been no better off than before. The answer to this is that, as the Canadian agent on the spot suggested, they could have been granted the land on terms forbidding them to sell it, and that in any case it would have been better to give them what they wanted than to drive them into rebellion. Others of their requests, such as those for schools and hospitals, were still more reasonable.

The Successors of Mowat.—Mr. Hardy remained Premier till 1899, when ill-health forced him to resign in favour of Sir George Ross. These successors of Sir Oliver Mowat did much good work.
- They improved the municipal system.
- They voted large sums of money for the improvement of the roads of the province, which had long been torn by winter frosts, washed away by spring floods, and very imperfectly repaired.
- They opened up Northern Ontario. It was long supposed that in the country won for us by Sir Oliver Mowat, north of the Height of Land which separates the rivers flowing into the St. Lawrence from those flowing into Hudson or James Bay, there was nothing but rock and lake, fit only for the hunter or fisher; but in this despised region a splendid belt of clay soil was found and in 1901 the Government, in order to open it up, began to build into it, from North Bay on the Canadian Pacific Railway, a provincial railway known as the Timiskaming and Northern Ontario.

Victory of the Conservatives.—But in spite of this work, the Government of Sir George Ross became unpopular. The Liberals had been in power for thirty-three years, and no party can hold power so long without attracting to itself the majority of those who are in politics for selfish and corrupt motives. Though the administration was not inefficient, the province felt that a change would be for the better. In January, 1905, at a general election, the Conservatives under Mr. (afterwards Sir) James P. Whitney, won by a large majority, and soon gave to the administration a new energy which showed them worthy of the choice of the province.

The Railway and Municipal Board.—In 1906 a Railway and Municipal Board was appointed, to decide questions at issue between railways, especially electric railways, and the municipalities.
We thus see that in Ontario as in the Dominion we are finding the value of government by Commission.

Development of New Ontario.—In 1903, as a construction gang was working on the Timiskaming and Northern Ontario Railway, a navvy stubbed his toe upon what proved to be a lump of almost pure silver; his discovery was followed up, and the province found that it possessed one of the greatest silver fields of the world. The centre of this industry is at Cobalt, 338 miles from Toronto, and 103 from North Bay, and discoveries of gold since made further north at Porcupine and other points show that Ontario has yet to realize the fullness of her riches. The provincial government collects a large income from the taxes paid on the silver taken out and, in 1911, to aid the development of New Ontario as this new north land is now called, it voted $5,000,000.

Increase of Territory.—The provincial railway, the Timiskaming and Northern Ontario, by 1910 had reached Cochrane, where it connects with the National Transcontinental Railway. In 1912, after much negotiation with the Dominion and with Manitoba, Ontario obtained possession of the territory now called the District of Patricia, with an area of 146,010 square miles, making the total area of the province 407,262 square miles. She was also granted a strip of territory five miles in width, lying between the District of Patricia and the Nelson River, to be located within fifty miles of the Hudson Bay coast, and a strip one half mile in width and five miles in length, to be located along the south shore of the Nelson River. These give access to Nelson on Hudson Bay, and afford ample harbour facilities and railway terminals. The railway will now be pushed ahead to this point.

The Opposition.—Meanwhile, there is an active Liberal Opposition, under Mr. N. W. Rowell, an able and high-minded lawyer.

The Other Provinces, 1867–1913

Manitoba was long the stormy petrel of Dominion politics. First came the Rebellion; then the various questions connected with the building of the Canadian Pacific Railway; then a long but successful fight with the railway (which by its contract had been given a monopoly), for the right to allow American lines to enter the province; then the question of separate schools (1890–96). Since then the province has steadily gone ahead. Whereas in 1885 it had but one line of rails, it has now a network of railways equalled only by Ontario. In 1912 its boundaries were extended to the north to give it access to James Bay and Hudson Bay, and there is no cloud upon the sun of its prosperity.

British Columbia has as its chief industries lumbering, fruit farming, mining, and the canning of salmon. No part of Canada is more interesting than this Pacific Province with its varied resources, its delightful climate, its wild mountains and fertile valleys, its long indented sea-coast, which recalls the celebrated fiords of Norway and the tales of the old sea-rovers. Its chief problem has been that of the supply of labour. For a time it was hoped to solve this by allowing Oriental immigration under restrictions, but the desire to keep the province the home of a white race has been too strong to allow of this solution. Many of the present labour organizations are affiliated with those of the United States, and the province has more than once been hampered by labour quarrels which were really produced by quarrels beyond her borders.
The provincial history was long a story of squabbles; quarrels between rival firms of canners, and between Canadian masters and Indian or Japanese workmen; quarrels between owners and men in the mines and the smelters; quarrels between the fruit farmers and the railways, which would not build the desired branches. But these quarrels are now at an end, and the Pacific Province is advancing as fast as any province in the Dominion. An energetic policy of road-making and of railway construction is being pursued, and the central and northern parts of the province are being rapidly opened up.

The Dominion, 1896–1913 (continued)

The Grand Trunk Pacific Railway.—From 1867 to 1897 Canada grew very slowly, and many not only of the immigrants but of our native born were lured away by the greater opportunities in the United States. At the end of the nineteenth century things began to improve. The Government, and more especially the Honourable Clifford Sifton, Minister of the Interior, had faith in Canada, spent large sums in advertising, and a stream of immigration began to flow in from England, Scotland, Ireland, the United States, and every country in Europe. Most of those who came did well, and sent back for their relatives and neighbours. Into Ontario, northern Quebec, and the western provinces they poured; Canada began to get breadth as well as length. Our population and our prosperity went up by leaps and bounds; most of the new-comers went West, but the farmers of the West bought the manufactures of the East, and the whole country profited. In the three western provinces there are at least 250,000,000 acres of cultivable land, and these increased in value between 1900 and 1912 by at least $10 an acre. The population of Winnipeg rose from 30,000 to 150,000, of Calgary from 5,000 to 50,000, and of other towns in proportion. The opening up of vast new districts meant the building of railways, and the coming of thousands of navvies. Men who had been laughed at as dreamers for saying that they would live to see the West export 20,000,000 bushels of wheat, lived to see it export fifty, eighty, one hundred millions. To carry out such a crop, and to carry in these thousands of settlers and their effects, meant such a railway problem as no country with so small a population had ever faced. The Canadian Pacific Railway showed great energy, and increased its mileage from 3,000 in 1885 to over 10,000 in 1911, but in spite of this it proved unable to carry the grain of the West, and in 1903 a second transcontinental railway, the Grand Trunk Pacific, was given a charter. By its contract with this Company, the Government abandoned the earlier method of giving land grants, but agreed to construct a National Transcontinental line from Moncton to Winnipeg and to lease it to the Grand Trunk Pacific on moderate terms. From Winnipeg west it guaranteed to a large extent the bonds of the Company, in return for control of its freight and passenger rates. Large portions of this railway are now in operation, and by 1915 it will be in running order from Moncton in New Brunswick to Prince Rupert on the Pacific. From Moncton to Winnipeg, and from Winnipeg to Prince Rupert, it runs far north of the Canadian Pacific Railway, and there will certainly be need of both lines.
At first it was intended that it should run through either the Peace River or the Pine River Pass, but later on this was changed to the Yellowhead Pass, further south, the old route chosen by Sir Sandford Fleming for the Canadian Pacific Railway, but afterwards changed.

The Canadian Northern Railway.—Meanwhile two great contractors, Sir William Mackenzie and Sir Donald Mann, had been building and buying railways all over the country and gradually knitting them up into a great system, called the Canadian Northern Railway, which will in a few years give us a third transcontinental system from Quebec to Vancouver, through the Yellowhead Pass. To this the Dominion and the Provinces have given aid on a large scale, especially by guarantee of its bonds.

The Hudson Bay Railway.—So far all traffic must pass through Winnipeg and out by one of three or four St. Lawrence or Atlantic ports. To improve this the Government is building, at a cost of about $30,000,000, a railway north from the Canadian Northern Railway to Hudson Bay. As it is no farther from Winnipeg to the Bay than to Fort William, and only as far from the Bay to England as from Montreal, this railway will save the whole cost of carrying grain from Fort William to Montreal. The difficulty will be that Hudson Strait, through which all steamers must go, is passable only from about July 15th till October 15th, or at most from July 1st to November 1st. Will not steamers charge very high rates to make up for the danger from the ice, and will not the railway be idle for eight months of the year? But so far in Canada the bold policy has always been the right policy, and we must hope that with ice-breaking steamers and other resources of science, the Strait will be kept open long enough to make the line a success.

Government by Commission.—All this shows that Canada has entered upon an era of tremendous expansion, and the question of the best way to control these great companies takes up more and more of the time of Parliament. The result has been the creation of a number of Commissions, whose members can be dismissed by Parliament if they go wrong, but otherwise have power to act as they wish. Thus we have a Railway Commission, which has done a great deal to control the rates of railway, telephone, and express companies in the interests of the country, while so far it has been in no way unjust to the companies themselves, which have worked in hearty co-operation with it. In 1908 a Civil Service Commission was appointed. To this has been transferred the right of appointment of a large number of government officials. Previously, such appointments had often been made by the Ministry, not because of the merits of the candidates, but under pressure of their supporters, to advance the interests of the party. The Commission is less subject to such pressure, and is more free to make appointments on grounds of merit alone. There is also a Conservation Commission, with the Honourable Clifford Sifton at its head, on which the Dominion, the Provinces, and the Universities are represented. This body is doing good work at making known our great natural resources, and in suggesting the best methods of preserving them.

Trade, Commerce, Transportation.—The Government of Canada controls all trade and commerce, and all means of transportation which are of importance to more than one province.
Railways are considered of such importance to the community that each new railway is given a gift, or subsidy, of several thousand dollars a mile, and the great transcontinental lines have been given special gifts of money and land worth many millions. The country owns and operates the Intercolonial Railway (I.C.R.), running between Montreal, Halifax, and Sydney, and the Prince Edward Island Railway; it owns, but has leased to the Grand Trunk Pacific (G.T.P.), the National Transcontinental Railway, between Moncton and Winnipeg, and by its Railway Commission it controls all the rates of railways and express companies. We subsidize lines of steamships on the Atlantic and the Pacific Oceans. We have built a splendid canal system at the cost of many millions, and are constantly adding to it. To the Dominion are intrusted the care of harbours, lighthouses, quarantine, and all the other necessities of a country with a great and growing trade.

Canadian Debt.—Yet, large as is our revenue, at times there are expenses so great that we are forced to borrow money. We had to borrow many millions to aid in building the Canadian Pacific Railway, and shall have to borrow largely to complete the National Transcontinental Railway. The National Debt of Canada is at present about $310,000,000, a much smaller amount in proportion to our total wealth than we owed twenty years ago. In 1912 our prosperity was so great that we were able to pay off several millions.
How A Computer Works Part 4: The CPU and BIOS.

Last time we looked at how chips or integrated circuits are created and how they evolved from the transistor technologies that replaced vacuum tubes in electronic devices. This time we look at two specific chips: the Central Processing Unit (CPU) and the Basic Input/Output System (BIOS), which work together to boot the computer up so you can use it. The primary difference between PC compatibles, the Apple Macintosh, and older computers like the Commodore C-64 and Kaypro II is their internal chip designs and specific start-up instructions. On a general level, every computer from your laptop to the most powerful supercomputer used by government essentially starts up using the same principles.

The CPU is the heart of the computer. It pumps instructions through special flip-flop devices called registers. Registers, if you'll remember from last time, are like the fingers on your hands. A finger down is 0 and a finger up is 1. The number of fingers on a hand determines how high the register can count. An 8 bit (or finger) register can count to 255. A 16 bit register can count to 65,535. A 20 bit register can count to 1,048,575. A 24 bit register can count to almost 17 million. A 32 bit register can count to just over 4 billion. (The short code sketch at the end of this section shows the arithmetic.) The first computers like the Altair, Commodore C-64, TRS-80 and even the IBM were based on 8 and 16 bit technologies. IBM quickly switched to 16 and 20 bit technologies, while the Apple Macintosh, Commodore Amiga and Atari ST all embraced 16 and 24 bit technology. In the 1990s everyone switched to 32 bit technology, with some 64 or even 128 bit functionality. But all computers basically start off in a similar way.

If the CPU is the heart, then the BIOS is like the central nervous system of the body. It regulates and runs everything else using pre-programmed instructions. These instructions are written in the CPU's native code. Each CPU is made up of three sections: registers (see above), simple math processing, and what is called an instruction set. The instruction set makes the various registers and math processors do their flip-flops, directing information from register to register and out to various devices. The BIOS determines the flow path in and out of the CPU (data interchange) as well as within the CPU (data intrachange). The BIOS also provides a blueprint for putting raw data onto the monitor screen, getting it from the keyboard and sending it to the printer. The BIOS also controls the time-share slices each input or output device gets on the computer. In the PC compatible computer this is called the interrupt system; there were originally 7 physical interrupt paths and there are now 15. But with all the new devices that were not around when the first PC came on the scene around 1980, a scheme was created to allow several devices to share these physical interrupts, and this is one of the reasons a PC can crash or boot improperly. Some devices must get priority; for these, interrupts are permanently allocated and no other device can use them. These would be the monitor, the drives and dedicated ports for communication. Some devices like hard drives were not planned for and got added several years after the first computers were marketed. Some manufacturers made add-in gizmos you could plug in to accommodate a hard drive, but quickly the makers of the motherboard added a separate controller for these drives and assigned an interrupt in the BIOS for their operation.
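The register limits quoted above are just powers of two: an n-bit register counts from 0 up to 2^n - 1. A quick check (Python is used here purely as a calculator; these machines were obviously not programmed this way):

```python
# Maximum value an n-bit register can hold is 2**n - 1.
for bits in (8, 16, 20, 24, 32):
    max_count = (1 << bits) - 1   # shift 1 left by n places, then subtract 1
    print(f"{bits}-bit register counts to {max_count:,}")

# Prints:
# 8-bit register counts to 255
# 16-bit register counts to 65,535
# 20-bit register counts to 1,048,575
# 24-bit register counts to 16,777,215
# 32-bit register counts to 4,294,967,295
```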
When you turn on the computer by pressing the power switch (see our first installment), low voltage power is sent to the motherboard (see our second installment) and this provides power to the chips (see our third installment). The BIOS chip immediately sends instructions to the CPU in its native language (so you can't use a PC BIOS on a Macintosh, as they don't speak the same language) and a set of diagnostics is run. The CPU processes instructions from the BIOS and sends them down a pathway selected by the BIOS to see if a monitor and keyboard are present. On the PC the keyboard then generates an A20 line code which starts up high memory. If the keyboard is not connected or is defective, your boot-up will stop here. The BIOS will look for devices based on its instructions. If you have a very old BIOS from 1993 and you add a card for a USB port, it will see the card but it has no idea what a USB port is, because that wasn't around in 1993. An interrupt, however, will be assigned to this card, but the BIOS remains ignorant of the device and how it works.

In our article last September on ATA devices we talked about protocols. If you have a computer with a 1993 BIOS it is behind the times and may only support up to the ATA-4 protocol, so it will treat modern ATA100 hard drives as if they were old fashioned ATA33 drives. The makers of some hard drives often include software BIOS to help compensate for this old-age problem. The practical consequence of a BIOS this old is that your computer will not automatically detect the drives or their configuration using the new cable protocols. Even the software upgrade may not compensate for this. You would need to upgrade your motherboard for the newest connector, which has more grounding and can sense the new color-coded cable, plus a new BIOS that is designed to look for and configure your primary and secondary hard drives automatically.
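The boot order described above can be summarized as a toy sketch. This is purely illustrative pseudo-logic, not how real firmware is written; the device names and ordering simply mirror the prose:

```python
# Toy model of the power-on flow described above (illustrative only).
def post(devices):
    print("BIOS: running diagnostics...")
    # Required devices are checked first; a missing keyboard halts the boot.
    for required in ("monitor", "keyboard"):
        if required not in devices:
            print(f"BIOS: {required} missing -> boot stops here")
            return False
    print("BIOS: keyboard present -> A20 line code sent, high memory started")
    # Remaining devices each get an interrupt, known to the BIOS or not.
    for dev in sorted(devices - {"monitor", "keyboard"}):
        print(f"BIOS: interrupt assigned to {dev}")
    return True

post({"monitor", "keyboard", "hard drive", "usb card"})
```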
Basic Aerodynamic Theory
A (brief) introduction to basic aerodynamic principles

This page is meant to be an extremely brief introduction to the complex world of aerodynamics for underclassmen who have not had much exposure to fluid mechanics. It is by no means a comprehensive guide but rather a starting point for those who want to delve deeper into flight sciences.

Table of Contents
- Airfoil Nomenclature
- (Geometric) Angle of Attack
- Lift and Drag
- Normal and Axial Forces
- Lift and Drag Coefficients
- Thin Airfoil Theory
- Kutta Condition
- Angle of Zero Lift
- Finite Wing Theory
- Aspect Ratio
- Trailing Vortices
- Downwash Velocity
- Induced Angle of Attack
- How do planes fly anyway?
- Additional Materials

An airfoil is a 2D cross section of an infinitely long wing. You may be wondering exactly why it matters that it's an infinite wing, but we'll get to that later. First, let's look at a labeled diagram of an airfoil, shown below in Figure 1.

Figure 1: A labeled diagram of an airfoil

The first important characteristic that we will discuss is the chord of the airfoil, shown in Figure 1 as the dotted red line. The chord is the line from the leading edge to the trailing edge of the airfoil. The chord is used to define the geometric angle of attack (often denoted with α), the angle between the freestream direction and the chord. The dotted blue line in Figure 1 shows the camber line, the line of points midway between the upper and lower surfaces.

As I'm sure you know, when air moves around a wing or an airfoil a force known as lift is generated perpendicular to the freestream direction; otherwise there would be no such thing as planes! The lift is caused by a pressure difference between the upper and lower surfaces of the wing/airfoil due to the way air moves around the body. An additional force, known as drag, also develops parallel to and in the same direction as the freestream. Lift is conventionally drawn pointing upwards, but the shape of a body can sometimes cause this perpendicular force to point downwards; an example of this is in cars, where upward lift is undesirable and the downward force is known as downforce. A freebody diagram of the forces on an airfoil due to the freestream is shown in Figure 2.

Figure 2: A freebody diagram of the forces due to the freestream on an airfoil

Notice in Figure 2 how there are two additional forces acting on the airfoil. These forces are known as the normal and axial forces, measured relative to the chord. They are not separate forces, but rather a way of representing the lift and drag in a different coordinate system. The relationship between these forces is shown in equations (1) & (2):

L = N cos(α) − A sin(α)    (1)
D = N sin(α) + A cos(α)    (2)

Often in aerodynamics, we want to nondimensionalize the lift and drag forces in order to get an idea of the properties of the airfoil or wing independent of the freestream velocity or the ambient pressure. We now introduce the 2D lift and drag coefficients, shown below in equations (3) and (4), respectively:

c_l = L' / (½ ρ∞ V∞² c)    (3)
c_d = D' / (½ ρ∞ V∞² c)    (4)

where L' and D' are the lift and drag per unit span and c is the chord length. It is often convenient to express part of the denominator as a single quantity, known as the dynamic pressure, q∞ = ½ ρ∞ V∞², so that c_l = L'/(q∞ c).

It is possible to calculate the 2D lift and drag using integral equations. However, this is beyond the scope of this tutorial and honestly not really applicable to our situation, since we will mainly be using computational fluid dynamics (CFD) to calculate lift and drag values. The results of the CFD analysis will then be used to calculate lift and drag coefficients and used to select airfoils/wings for our designs.
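As a concrete illustration of equations (1) through (4), here is a small sketch; the force values, flow conditions, and chord length below are invented purely for illustration:

```python
import math

# Decompose normal (N) and axial (A) forces into lift and drag, eqs. (1) & (2).
def lift_drag(N, A, alpha_rad):
    L = N * math.cos(alpha_rad) - A * math.sin(alpha_rad)  # eq. (1)
    D = N * math.sin(alpha_rad) + A * math.cos(alpha_rad)  # eq. (2)
    return L, D

# 2D coefficients, eqs. (3) & (4): divide by dynamic pressure q and chord c.
def coefficients(L, D, rho, V, c):
    q = 0.5 * rho * V**2           # dynamic pressure, q = 1/2 * rho * V^2
    return L / (q * c), D / (q * c)

# Illustrative numbers: sea-level air, 50 m/s, 1 m chord, forces per unit span.
N, A, alpha = 1200.0, 60.0, math.radians(5.0)
L, D = lift_drag(N, A, alpha)
cl, cd = coefficients(L, D, rho=1.225, V=50.0, c=1.0)
print(f"L'={L:.1f} N/m, D'={D:.1f} N/m, cl={cl:.3f}, cd={cd:.3f}")
```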
Thin Airfoil Theory

Thin airfoil theory uses the following assumptions for its analysis:
- "Zero" thickness: t << c
- "Small" camber
- "Small" angles of attack: the small angle approximation holds (α < 10–15 degrees)
- Inviscid (no viscosity) and incompressible flow
- The airfoil does not disturb the flow

Although some of these assumptions seem a little ridiculous, thin airfoil theory gives us a good starting point for analyzing flows. If the airfoil is symmetric, the lift coefficient can be described by equation (5) for small angles of attack:

c_l = 2πα    (5)

If the airfoil is not symmetric, we must define the quantity known as the angle of zero lift, α_L=0. This quantity physically represents the angle of attack where the airfoil has a lift force equal to zero. The angle is usually less than zero for a cambered airfoil and equal to zero for a symmetric airfoil. The angle of zero lift can be calculated from a complex integral equation, but that is again not important to our applications, and the value is often given. If the airfoil is asymmetric, the lift coefficient is now described by equation (6):

c_l = 2π(α − α_L=0)    (6)

The important thing to take away from this theory is that when the airfoil is very thin, the lift coefficient increases linearly as a function of the geometric angle of attack, offset by the angle of zero lift if the airfoil is cambered. Another thing worth mentioning is the Kutta Condition. This principle is often best expressed through equation (7), the Kutta–Joukowski theorem:

L' = ρ∞ V∞ Γ    (7)

This basically states that in order for there to be lift, there must be some circulation (denoted by Γ) around the body. This is not super important to our application but worth mentioning.

Finite Wing Theory

Great! We've made it to finite wings. The first thing that is important to mention is the aspect ratio (AR) of a wing. It is a dimensionless quantity that measures the ratio of the square of the wingspan (b) to the surface area (S) of the wing. This is explicitly outlined in equation (8):

AR = b² / S    (8)

The greater the aspect ratio, the higher the coefficient of lift. This is why unmanned vehicles (our focus!) usually have really long, thin wings. However, loading on longer wings causes more of a moment on the fuselage, so they're often avoided in many situations, like commercial aircraft.

Have you ever looked out the window of an aircraft and seen how the wings flare up on the end? These are known as winglets, and they're not just there for looks. They actually help reduce what are known as trailing vortices, a phenomenon that appears with finite wings. The vortices are caused by high pressure underneath the wing "leaking out" and moving upwards over the wing. As a result, these vortices actually induce a downwards velocity, known as the downwash velocity, which combines with the freestream to push downwards on the wing. The angle between the resulting velocity vector and the original freestream vector is known as the induced angle of attack, and it causes the geometric angle of attack to appear smaller than it actually is. This is shown in Figure 3.

Figure 3: The downwash velocity and induced angle of attack from the trailing vortices

Using what we know from thin airfoil theory, a smaller effective angle of attack leads to a smaller lift coefficient. Therefore, a finite wing actually has less lift than an airfoil with the same cross-section at the same conditions. There is also now an induced drag force that forms on the wing due to this downwash velocity, also shown in Figure 3. The lift and drag coefficients can now be redefined for the 3D finite wing case, shown in equations (9) & (10), where the chord is replaced by the wing planform area:

C_L = L / (q∞ S)    (9)
C_D = D / (q∞ S)    (10)
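Here is a short sketch tying equations (5), (6), and (8) together. The finite-wing corrections below use the standard lifting-line results and assume an elliptical lift distribution (span efficiency e = 1), which goes slightly beyond what this page derives, and all numeric inputs are invented:

```python
import math

# Thin airfoil theory, eq. (6): cl = 2*pi*(alpha - alpha_L0), angles in radians.
# With alpha_L0 = 0 this reduces to eq. (5) for a symmetric airfoil.
def cl_thin_airfoil(alpha_deg, alpha_L0_deg=0.0):
    return 2.0 * math.pi * math.radians(alpha_deg - alpha_L0_deg)

# Finite wing corrections from lifting-line theory with e = 1 (assumption).
def finite_wing(cl_2d, b, S):
    AR = b**2 / S                       # aspect ratio, eq. (8)
    alpha_i = cl_2d / (math.pi * AR)    # induced angle of attack, radians
    cdi = cl_2d**2 / (math.pi * AR)     # induced drag coefficient
    return AR, alpha_i, cdi

cl = cl_thin_airfoil(alpha_deg=5.0, alpha_L0_deg=-2.0)  # cambered airfoil
AR, alpha_i, cdi = finite_wing(cl, b=3.0, S=0.9)        # 3 m span, 0.9 m^2 area
print(f"cl={cl:.3f}, AR={AR:.1f}, "
      f"induced AoA={math.degrees(alpha_i):.2f} deg, CDi={cdi:.4f}")
```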
How do planes fly anyway??

Now the question that every aerospace engineer wants to know: how the hell do planes fly? You may have heard of a theory known as the Equal Transit Theory. This theory states that the air flowing around a wing separates at the leading edge and meets back up at the trailing edge. However, since the path on top of the wing is longer than the path underneath the wing, the velocity of air on top is higher and thus causes a pressure drop due to Bernoulli's principle. The pressure difference then causes an upwards force to act on the wing, pushing it upwards.

However, the equal transit theory is false! Although Bernoulli's principle does state that an increase in velocity causes a decrease in pressure (granted other conditions stay the same), it is simply not true that the air meets back up at the trailing edge. So what is the real reason? The answer has to do with the shape of the airfoil and its ability to push into the flow. Let's look at a pressure distribution of an airfoil at several angles of attack, shown in Figure 4.

Figure 4: The pressure distribution on an airfoil subjected to different angles of attack

If you look at Figure 4, you see that the pressure distribution changes with the angle of attack. At a negative angle of attack less than the angle of zero lift, the lift on the airfoil is actually pointing downwards (downforce), but it increases as the angle of attack increases (up until stalling occurs). This is because of how the airfoil "pushes" into the airflow. At a positive angle of attack, the flow pushes into the bottom of the wing but flows away from most of the top of the wing. The "pushing" of the air on the bottom of the wing causes high pressure on the bottom of the wing, and the lack of a body to push on the top causes low pressure. This then produces an upwards net force. Think of sticking your hand out the window in a car when you're going very fast. When you tilt your fingers upwards, the air pushes you up, but it pushes you down if you angle your fingers downwards. It's pretty much the same principle! This also explains why a symmetric airfoil will have zero lift at 0 angle of attack; the pressure contributions to lift cancel each other out.

The important thing to take away from all this is that we can (and will) use CFD for complex airfoil and wing shapes to generate pretty accurate lift and drag values. We can then nondimensionalize these values in order to compare and contrast wing/airfoil shapes and select the appropriate one for our design. It also is important to recognize what happens in the finite wing cases and how trailing vortices decrease the lift and increase the drag when compared to the theoretical 2D case. And that the equal transit theory is BS.

There are tons of resources out there if you're interested in learning more. Here are a few that I recommend:
- Fundamentals of Aerodynamics, 6th Edition By John Anderson (MAE 150B Textbook)
- Aerodynamics, Aeronautics, and Flight Mechanics, 2nd Edition By B.W. McCormick (MAE 154S Textbook)
Statistics is the science of acquiring, organizing, classifying, analyzing, interpreting, and presenting numerical data in order to make predictions about the population from which a sample is drawn. Scientists have developed many statistical methods and techniques to understand the data they collect. These methods can be either descriptive or inferential. With the help of inferential statistics, scientists can generalize or make predictions about a population from a specific sample chosen from it.

In statistics, population means the entire set of observations that you can make. Studying an entire population is not practically possible for scientists. Therefore, they draw samples or subsamples from the population. In this blog, we find out what inferential statistics is.

What is inferential statistics?

Inferential statistics takes the help of various models that let you compare your sample data with other sample data or with other research. Inferential statistics can therefore be defined as making inferences about a population based on samples drawn from it. Some examples of inferential statistics are as follows:
- You can randomly select a sample of marks received by students in the 12th-grade board exam.
- You can stand in a mall and ask a sample of 100 people whether they like shopping at a particular store.
- You can survey whether a certain medicine is effective on a sample of people.

Importance of random sampling in inferential statistics – While collecting a sample from a population, there must be a systematic way of selecting it. With the random sampling method, every individual item in the population has an equal chance of being selected in the sample, so the samples selected are free from unwanted biases. This method requires careful planning from the beginning. (A minimal sampling sketch appears after the comparison below.)

Inferential statistics and Descriptive statistics

A descriptive statistic is like a summary statistic that describes or summarizes the characteristics of the data. In other words, descriptive statistics describe the data, whereas inferential statistics allow you to make predictions beyond the data. The different tools used in descriptive statistics include the sample mean, sample standard deviation, bar chart, boxplot, the shape of the sample probability distribution, etc. The distribution concerns the frequency of each value, the central tendency concerns the averages of the values, and the variability concerns how spread out the values are. Some of the differences between descriptive and inferential statistics are given below:
- Descriptive statistics is concerned with describing the sample at hand. Inferential statistics is used to draw conclusions about the population after thorough observation and analysis.
- Descriptive statistics collects, analyzes, organizes, and presents the data in a meaningful way. Inferential statistics estimates the parameters of the data, tests hypotheses, and predicts future outcomes.
- Descriptive statistics can be used when the dataset is small, whereas inferential statistics requires larger datasets.
- In descriptive statistics, the final result is displayed in diagrammatic or tabular form. In inferential statistics, the final result is displayed in the form of a probability.
- The tools used in descriptive statistics are measures of dispersion and measures of central tendency. The tools used in inferential statistics are analysis of variance and hypothesis tests.
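A minimal sketch of the simple random sampling idea referenced above; the population here is invented marks data, purely for illustration:

```python
import random

population = list(range(1, 1001))        # e.g. marks of 1,000 students (invented)
random.seed(42)                          # fixed seed so the draw is reproducible
sample = random.sample(population, 100)  # every member equally likely to be chosen

sample_mean = sum(sample) / len(sample)
pop_mean = sum(population) / len(population)
print(f"sample mean = {sample_mean:.1f}, population mean = {pop_mean:.1f}")
```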
Importance of Inferential statistics

The importance of inferential statistics is as follows:
- To draw conclusions about a population from a sample.
- To determine whether the selected sample is statistically representative of the whole population.
- To compare two models and find out which one is more statistically significant than the other.
- In the case of feature selection, to judge whether the addition or removal of a variable improves the model.

Inferential statistics and its types

There are mainly two types of inferential statistics, also known as inferential statistical methods, described below:
- Estimation of parameters – Here a statistic computed from your sample data is used to say something about a population parameter. Such estimates are often reported as confidence intervals.
- Testing hypotheses – Here you use the sample data to answer research questions such as "Are the means of two or more populations different from each other?". These hypothesis tests allow you to draw conclusions about the entire population. However, hypothesis testing can produce errors, namely Type 1 and Type 2 errors. The steps of hypothesis testing are described below; a worked sketch following these steps appears at the end of this post.
- Step 1 – State the null and alternative hypotheses.
- Step 2 – Select the appropriate inferential statistical test.
- Step 3 – Select the level of significance.
- Step 4 – Perform the test.
- Step 5 – Make a conclusive statement based on the result of the test.

Inferential statistics is a powerful tool that must be used properly to draw conclusions about a population. It helps scientists analyze and interpret data. Any wrong application or interpretation of inferential statistics may distort the final results. Hence, you must carefully apply the tools and methods of inferential statistics to achieve the most accurate results.
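Here is the promised worked sketch of the five steps, using a two-sample t-test on simulated data. The group sizes and distribution parameters are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Step 1: H0: the two population means are equal; H1: they differ.
group_a = rng.normal(loc=100.0, scale=15.0, size=50)  # simulated sample data
group_b = rng.normal(loc=108.0, scale=15.0, size=50)

# Step 2: an independent two-sample t-test suits two unrelated groups.
# Step 3: choose the level of significance.
alpha = 0.05

# Step 4: perform the test.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Step 5: state the conclusion.
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0; the means differ.")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0.")
```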
A gravitational lens is a distribution of matter (such as a cluster of galaxies) between a distant light source and an observer that is capable of bending the light from the source as the light travels towards the observer. This effect is known as gravitational lensing, and the amount of bending is one of the predictions of Albert Einstein's general theory of relativity. (Classical physics also predicts the bending of light, but only half that predicted by general relativity.)

Although either Orest Khvolson (1924) or Frantisek Link (1936) is sometimes credited as being the first to discuss the effect in print, the effect is more commonly associated with Einstein, who published a more famous article on the subject in 1936. Fritz Zwicky posited in 1937 that the effect could allow galaxy clusters to act as gravitational lenses. It was not until 1979 that this effect was confirmed by observation of the so-called "Twin QSO" SBS 0957+561.

Unlike an optical lens, a gravitational lens deflects light passing closest to its center the most, and light traveling furthest from its center the least. Consequently, a gravitational lens has no single focal point, but a focal line. The term "lens" in the context of gravitational light deflection was first used by O.J. Lodge, who remarked that it is "not permissible to say that the solar gravitational field acts like a lens, for it has no focal length".

If the (light) source, the massive lensing object, and the observer lie in a straight line, the original light source will appear as a ring around the massive lensing object. If there is any misalignment, the observer will see an arc segment instead. This phenomenon was first mentioned in 1924 by the St. Petersburg physicist Orest Chwolson, and quantified by Albert Einstein in 1936. It is usually referred to in the literature as an Einstein ring, since Chwolson did not concern himself with the flux or radius of the ring image.

More commonly, where the lensing mass is complex (such as a galaxy group or cluster) and does not cause a spherical distortion of space–time, the source will resemble partial arcs scattered around the lens. The observer may then see multiple distorted images of the same source; the number and shape of these depend upon the relative positions of the source, lens, and observer, and the shape of the gravitational well of the lensing object.

There are three classes of gravitational lensing:

1. Strong lensing: where there are easily visible distortions such as the formation of Einstein rings, arcs, and multiple images.

2. Weak lensing: where the distortions of background sources are much smaller and can only be detected by analyzing large numbers of sources in a statistical way to find coherent distortions of only a few percent. The lensing shows up statistically as a preferred stretching of the background objects perpendicular to the direction to the center of the lens. By measuring the shapes and orientations of large numbers of distant galaxies, their orientations can be averaged to measure the shear of the lensing field in any region. This, in turn, can be used to reconstruct the mass distribution in the area: in particular, the background distribution of dark matter can be reconstructed. Since galaxies are intrinsically elliptical and the weak gravitational lensing signal is small, a very large number of galaxies must be used in these surveys.
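To see why so many galaxies are needed, here is a toy numerical illustration of the averaging idea. The shear value and ellipticity scatter below are invented for illustration, and real pipelines include calibration factors this sketch ignores:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000        # many galaxies are needed: the signal is only a few percent
g_true = 0.02      # toy lensing shear (one component)

# Intrinsic ellipticities are random with large scatter (mean zero);
# lensing adds a small coherent component to every galaxy.
e_intrinsic = rng.normal(0.0, 0.3, size=n)
e_observed = e_intrinsic + g_true

# Averaging washes out the random intrinsic part, leaving the shear.
g_est = e_observed.mean()
err = e_observed.std(ddof=1) / np.sqrt(n)
print(f"estimated shear = {g_est:.4f} +/- {err:.4f} (true value {g_true})")
```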
These weak lensing surveys must carefully avoid a number of important sources of systematic error: the intrinsic shape of galaxies, the tendency of a camera's point spread function to distort the shape of a galaxy, and the tendency of atmospheric seeing to distort images must be understood and carefully accounted for. The results of these surveys are important for cosmological parameter estimation, to better understand and improve upon the Lambda-CDM model, and to provide a consistency check on other cosmological observations. They may also provide an important future constraint on dark energy.

3. Microlensing: where no distortion in shape can be seen but the amount of light received from a background object changes in time. The lensing object may be stars in the Milky Way in one typical case, with the background source being stars in a remote galaxy, or, in another case, an even more distant quasar.

The effect is small, such that (in the case of strong lensing) even a galaxy with a mass more than 100 billion times that of the Sun will produce multiple images separated by only a few arcseconds. Galaxy clusters can produce separations of several arcminutes. In both cases the galaxies and sources are quite distant, many hundreds of megaparsecs away from our Galaxy.

Gravitational lenses act equally on all kinds of electromagnetic radiation, not just visible light. Weak lensing effects are being studied for the cosmic microwave background as well as galaxy surveys. Strong lenses have been observed in radio and x-ray regimes as well. If a strong lens produces multiple images, there will be a relative time delay between two paths: that is, in one image the lensed object will be observed before the other image.

Henry Cavendish in 1784 (in an unpublished manuscript) and Johann Georg von Soldner in 1801 (published in 1804) had pointed out that Newtonian gravity predicts that starlight will bend around a massive object, as had already been supposed by Isaac Newton in 1704 in his famous Queries No. 1 in his book Opticks. The same value as Soldner's was calculated by Einstein in 1911 based on the equivalence principle alone. However, Einstein noted in 1915, in the process of completing general relativity, that his (and thus Soldner's) 1911 result is only half of the correct value. Einstein became the first to calculate the correct value for light bending.

The first observation of light deflection was performed by noting the change in position of stars as they passed near the Sun on the celestial sphere. The observations were performed in May 1919 by Arthur Eddington, Frank Watson Dyson, and their collaborators during a total solar eclipse. The solar eclipse allowed the stars near the Sun to be observed. Observations were made simultaneously in the cities of Sobral, Ceará, Brazil and in São Tomé and Príncipe on the west coast of Africa. The observations demonstrated that the light from stars passing close to the Sun was slightly bent, so that stars appeared slightly out of position.

The result was considered spectacular news and made the front page of most major newspapers. It made Einstein and his theory of general relativity world-famous. When asked by his assistant what his reaction would have been if general relativity had not been confirmed by Eddington and Dyson in 1919, Einstein famously made the quip: "Then I would feel sorry for the dear Lord. The theory is correct anyway."
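As a numeric check of the 1919 result: general relativity's deflection formula, θ = 4GM/(c²r) (restated in the space–time curvature section below), gives about 1.75 arcseconds for light grazing the solar limb. A quick computation with rounded constants:

```python
import math

# Deflection of light grazing the Sun: theta = 4*G*M / (c^2 * r).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m (ray grazing the limb)

theta = 4 * G * M_sun / (c**2 * R_sun)                 # radians
print(f"{math.degrees(theta) * 3600:.2f} arcseconds")  # ~1.75, the 1919 value
```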
Even before his breakthrough in the formulation of general relativity, Einstein realized that due to light deflection it was also possible that a mass could deflect light along two different paths, causing the observer to see multiple images of a single source; this effect would make the mass act as a kind of gravitational lens. However, as he only considered the effect in relation to single stars, he seemed to conclude that the phenomenon was unlikely to be observed for the foreseeable future, since the necessary alignments between stars and observer would be highly improbable. Several other physicists speculated about gravitational lensing as well, but all reached the same conclusion that it would be nearly impossible to observe.

In 1936, after some urging by Rudi W. Mandl, Einstein reluctantly published the short article "Lens-Like Action of a Star By the Deviation of Light In the Gravitational Field" in the journal Science. In 1937, Fritz Zwicky first considered the case where the newly discovered galaxies (which were called 'nebulae' at the time) could act as both source and lens, and that, because of the mass and sizes involved, the effect was much more likely to be observed.

It was not until 1979 that the first gravitational lens would be discovered. It became known as the "Twin QSO" since it initially looked like two identical quasistellar objects. (It is officially named SBS 0957+561.) This gravitational lens was discovered by Dennis Walsh, Bob Carswell, and Ray Weymann using the Kitt Peak National Observatory 2.1 meter telescope.

In the 1980s, astronomers realized that the combination of CCD imagers and computers would allow the brightness of millions of stars to be measured each night. In a dense field, such as the galactic center or the Magellanic clouds, many microlensing events per year could potentially be found. This led to efforts such as the Optical Gravitational Lensing Experiment, or OGLE, that have characterized hundreds of such events.

Explanation in terms of space–time curvature

In general relativity, light follows the curvature of spacetime; hence when light passes around a massive object, it is bent. This means that the light from an object on the other side will be bent towards an observer's eye, just like an ordinary lens. Since light always moves at a constant speed, lensing changes the direction of the velocity of the light, but not the magnitude. Light rays are the boundary between the future, the spacelike, and the past regions. The gravitational attraction can be viewed as the motion of undisturbed objects in a background curved geometry or alternatively as the response of objects to a force in a flat geometry.

The angle of deflection is

θ = 4GM / (c²r)

toward the mass M at a distance r from the affected radiation, where G is the universal constant of gravitation and c is the speed of light in a vacuum. Since the Schwarzschild radius r_s is defined as r_s = 2GM/c², this can also be expressed in the simple form

θ = 2r_s / r

Search for gravitational lenses

Most of the gravitational lenses in the past have been discovered accidentally. A search for gravitational lenses in the northern hemisphere (Cosmic Lens All Sky Survey, CLASS), done in radio frequencies using the Very Large Array (VLA) in New Mexico, led to the discovery of 22 new lensing systems, a major milestone. This has opened a whole new avenue for research ranging from finding very distant objects to finding values for cosmological parameters so we can understand the universe better.
A similar search in the southern hemisphere would be a very good step towards complementing the northern hemisphere search, as well as serving other objectives for study. If such a search is done using well-calibrated and well-parameterized instruments and data, results similar to the northern survey can be expected. The Australia Telescope 20 GHz (AT20G) Survey, with data collected using the Australia Telescope Compact Array (ATCA), stands to be such a collection of data. As the data were collected using the same instrument, maintaining a very stringent quality of data, good results can be expected from the search. The AT20G survey is a blind survey at 20 GHz frequency in the radio domain of the electromagnetic spectrum. Due to the high frequency used, the chances of finding gravitational lenses increase, as the relative number of compact core objects (e.g. quasars) is higher (Sadler et al. 2006). This is important, as lensing is easier to detect and identify in simple objects than in complex ones. This search involves the use of interferometric methods to identify candidates, which are then followed up at higher resolution for confirmation. Full details of the project are currently being prepared for publication. In a 2009 article on Science Daily, a team of scientists led by a cosmologist from the U.S. Department of Energy's Lawrence Berkeley National Laboratory reported major progress in extending the use of gravitational lensing to the study of much older and smaller structures than was previously possible, showing that weak gravitational lensing improves measurements of distant galaxies. Astronomers from the Max Planck Institute for Astronomy in Heidelberg, Germany, using NASA's Hubble Space Telescope, discovered what was at the time the most distant gravitational lens galaxy, termed J1000+0221; the results were accepted for publication in the Astrophysical Journal Letters on October 21, 2013. While it remains the most distant quad-image lensing galaxy known, an even more distant two-image lensing galaxy was subsequently discovered by an international team of astronomers using a combination of Hubble Space Telescope and Keck telescope imaging and spectroscopy. The discovery and analysis of the IRC 0218 lens was published in the Astrophysical Journal Letters on June 23, 2014. Research published on September 30, 2013 in the online edition of Physical Review Letters, led by McGill University in Montreal, Québec, Canada, reported the detection of B-modes formed by the gravitational lensing effect, using the National Science Foundation's South Pole Telescope with help from the Herschel space observatory. This discovery opens up possibilities for testing theories of how our universe originated. Solar gravitational lens Albert Einstein predicted in 1936 that rays of light from the same direction that skirt the edges of the Sun would converge to a focal point approximately 542 AU from the Sun. Thus, a probe positioned at this distance (or greater) from the Sun could use the Sun as a gravitational lens to magnify distant objects on the opposite side of the Sun. A probe's location could shift as needed to select different targets relative to the Sun. This distance is far beyond the progress and equipment capabilities of space probes such as Voyager 1, and beyond the known planets and dwarf planets, though over thousands of years 90377 Sedna will move further away on its highly elliptical orbit.
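The 542 AU figure follows from the deflection formula given earlier: a ray grazing the Sun at impact parameter b is bent by θ = 4GM/(bc²), so grazing rays cross the axis at a distance of roughly d = b/θ = b²c²/(4GM). The following Python sketch (the rounded constants and function names are my own, not taken from the cited sources) reproduces both the classic 1.75 arcsecond limb deflection and the focal distance:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.963e8    # solar radius, m
AU = 1.496e11      # astronomical unit, m

def deflection(b):
    """Deflection angle theta = 4GM/(b c^2) for impact parameter b, in radians."""
    return 4.0 * G * M_SUN / (b * C**2)

def focal_distance(b):
    """Distance at which rays bent at impact parameter b cross the optical axis."""
    return b / deflection(b)

print(f"limb deflection: {math.degrees(deflection(R_SUN)) * 3600:.2f} arcsec")  # ~1.75
print(f"minimum focal distance: {focal_distance(R_SUN) / AU:.0f} AU")           # ~550
# ~550 AU is consistent with the ~542 AU figure quoted above, given the
# rounding of the constants and the exact solar radius adopted.
```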
The high gain for potentially detecting signals through this lens, such as microwaves at the 21-cm hydrogen line, led to the suggestion by Frank Drake in the early days of SETI that a probe could be sent to this distance. A multipurpose probe, SETISAIL and later FOCAL, was proposed to the ESA in 1993, but the undertaking is expected to be a difficult task. If a probe does pass 542 AU, the magnification capabilities of the lens will continue to act at greater distances, as the rays that come to a focus at larger distances pass further away from the distortions of the Sun's corona. A critique of the concept was given by Landis, who discussed issues including interference from the solar corona, the high magnification of the target (which will make the design of the mission focal plane difficult), and the inherent spherical aberration of the lens. Measuring weak lensing Kaiser et al. (1995), Luppino & Kaiser (1997) and Hoekstra et al. (1998) prescribed a method to invert the effects of Point Spread Function (PSF) smearing and shearing, recovering a shear estimator uncontaminated by the systematic distortion of the PSF. This method (KSB+) is the most widely used method in current weak lensing shear measurements. Galaxies have random rotations and inclinations. As a result, the shear effects in weak lensing need to be determined from statistically preferred orientations. The primary source of error in lensing measurement is the convolution of the PSF with the lensed image. The KSB method measures the ellipticity of a galaxy image; the shear is proportional to the ellipticity. The objects in lensed images are parameterized according to their weighted quadrupole moments. For a perfect ellipse, the weighted quadrupole moments are related to the weighted ellipticity. KSB calculates how a weighted ellipticity measure is related to the shear and uses the same formalism to remove the effects of the PSF. (A brief numerical sketch of this moment-based ellipticity measurement is given after the reference lists below.) KSB's primary advantages are its mathematical ease and relatively simple implementation. However, KSB is based on the key assumption that the PSF is circular with an anisotropic distortion. This assumption is adequate for current cosmic shear surveys, but the next generation of surveys (e.g. LSST) may need much better accuracy than KSB can provide, because for those surveys the statistical errors in the data will be negligible and systematic errors will dominate. [Image: Gravitational lens with the Einstein equations, Museum Boerhaave, Leiden] Historical papers and references - Chwolson, O. (1924). "Über eine mögliche Form fiktiver Doppelsterne". Astronomische Nachrichten. 221 (20): 329–330. Bibcode:1924AN....221..329C. doi:10.1002/asna.19242212003. - Einstein, Albert (1936). "Lens-like Action of a Star by the Deviation of Light in the Gravitational Field". Science. 84 (2188): 506–7. Bibcode:1936Sci....84..506E. doi:10.1126/science.84.2188.506. JSTOR 1663250. PMID 17769014. - Renn, Jürgen; Sauer, Tilman; Stachel, John (1997). "The Origin of Gravitational Lensing: A Postscript to Einstein's 1936 Science paper". Science. 275 (5297): 184–6. Bibcode:1997Sci...275..184R. doi:10.1126/science.275.5297.184. PMID 8985006. - Drakeford, Jason; Corum, Jonathan; Overbye, Dennis (March 5, 2015). "Einstein's Telescope - video (02:32)". New York Times. Retrieved December 27, 2015. - Overbye, Dennis (March 5, 2015). "Astronomers Observe Supernova and Find They're Watching Reruns". New York Times. Retrieved March 5, 2015. - Cf.
Kennefick 2005 for the classic early measurements by the Eddington expeditions; for an overview of more recent measurements, see Ohanian & Ruffini 1994, ch. 4.3. For the most precise direct modern observations using quasars, cf. Shapiro et al. 2004. - Schneider, Peter; Ehlers, Jürgen; Falco, Emilio E. (1992). Gravitational Lenses. Springer-Verlag Berlin Heidelberg New York Press. ISBN 3-540-97070-3. - Gravity Lens – Part 2 (Great Moments in Science, ABC Science). - Brill, Dieter (2012). "Black Hole Horizons and How They Begin". Astronomical Review; online article, cited September 2012. - Melia, Fulvio (2007). The Galactic Supermassive Black Hole. Princeton University Press. pp. 255–256. ISBN 0-691-13129-5. - Soldner, J. G. V. (1804). "On the deflection of a light ray from its rectilinear motion, by the attraction of a celestial body at which it nearly passes by". Berliner Astronomisches Jahrbuch: 161–172. - Newton, Isaac (1998). Opticks: or, a treatise of the reflexions, refractions, inflexions and colours of light. Also two treatises of the species and magnitude of curvilinear figures. Commentary by Nicholas Humez (Octavo ed.). Palo Alto, Calif.: Octavo. ISBN 1-891788-04-3. (Opticks was originally published in 1704.) - Will, C. M. (2006). "The Confrontation between General Relativity and Experiment". Living Reviews in Relativity. 9: 39. Bibcode:2006LRR.....9....3W. doi:10.12942/lrr-2006-3. - Dyson, F. W.; Eddington, A. S.; Davidson, C. (1 January 1920). "A Determination of the Deflection of Light by the Sun's Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919". Philosophical Transactions of the Royal Society A. 220 (571–581): 291–333. Bibcode:1920RSPTA.220..291D. doi:10.1098/rsta.1920.0009. - Stanley, Matthew (2003). "'An Expedition to Heal the Wounds of War': The 1919 Eclipse and Eddington as Quaker Adventurer". Isis. 94 (1): 57–89. doi:10.1086/376099. PMID 12725104. - Rosenthal-Schneider, Ilse: Reality and Scientific Truth. Detroit: Wayne State University Press, 1980. p. 74. (See also Calaprice, Alice: The New Quotable Einstein. Princeton: Princeton University Press, 2005. p. 227.) - "A brief history of gravitational lensing — Einstein Online". www.einstein-online.info. Retrieved 2016-06-29. - Zwicky, F. (1937). "Nebulae as Gravitational lenses" (PDF). Physical Review. 51 (4): 290. Bibcode:1937PhRv...51..290Z. doi:10.1103/PhysRev.51.290. - Walsh, D.; Carswell, R. F.; Weymann, R. J. (31 May 1979). "0957 + 561 A, B: twin quasistellar objects or gravitational lens?". Nature. 279 (5712): 381–384. Bibcode:1979Natur.279..381W. doi:10.1038/279381a0. PMID 16068158. - Cosmology: Weak gravitational lensing improves measurements of distant galaxies. - Sci-News.com (21 Oct 2013). "Most Distant Gravitational Lens Discovered". Sci-News.com. Retrieved 22 October 2013. - van der Wel, A.; et al. (2013). "Discovery of a Quadruple Lens in CANDELS with a Record Lens Redshift". Astrophysical Journal Letters. 777: L17. Bibcode:2013ApJ...777L..17V.
doi:10.1088/2041-8205/777/1/L17. - Wong, K.; et al. (2014). "Discovery of a Strong Lensing Galaxy Embedded in a Cluster at z = 1.62". Astrophysical Journal Letters. 789: L31. Bibcode:2014ApJ...789L..31W. doi:10.1088/2041-8205/789/2/L31. - NASA/Jet Propulsion Laboratory (October 22, 2013). "Long-sought pattern of ancient light detected". ScienceDaily. Retrieved October 23, 2013. - Hanson, D.; et al. (September 30, 2013). "Detection of B-Mode Polarization in the Cosmic Microwave Background with Data from the South Pole Telescope". Physical Review Letters. 111 (14): 141301. Bibcode:2013PhRvL.111n1301H. doi:10.1103/PhysRevLett.111.141301. - Clavin, Whitney; Jenkins, Ann; Villard, Ray (7 January 2014). "NASA's Hubble and Spitzer Team up to Probe Faraway Galaxies". NASA. Retrieved 8 January 2014. - Chou, Felecia; Weaver, Donna (16 October 2014). "RELEASE 14-283 - NASA's Hubble Finds Extremely Distant Galaxy through Cosmic Magnifying Glass". NASA. Retrieved 17 October 2014. - Einstein, Albert (1936). "Lens-Like Action of a Star by the Deviation of Light in the Gravitational Field". Science. 84 (2188): 506–507. Bibcode:1936Sci....84..506E. doi:10.1126/science.84.2188.506. PMID 17769014. - Eshleman, Von R. (1979). "Gravitational lens of the sun: its potential for observations and communications over interstellar distances". Science. 205 (4411): 1133–1135. - Landis, Geoffrey A. (2016). "Mission to the Gravitational Focus of the Sun: A Critical Analysis". arXiv paper 1604.06351, Cornell University, 21 April 2016 (downloaded 30 April 2016). - Maccone, Claudio (2009). Deep Space Flight and Communications: Exploiting the Sun as a Gravitational Lens. Springer. - Landis, Geoffrey A. (2017). "Mission to the Gravitational Focus of the Sun: A Critical Analysis". Paper AIAA-2017-1679, AIAA Science and Technology Forum and Exposition 2017, Grapevine, TX, January 9–13, 2017. Preprint at arXiv.org (accessed 24 December 2016). - Kaiser, Nick; Squires, Gordon; Broadhurst, Tom (August 1995). "A Method for Weak Lensing Observations". The Astrophysical Journal. 449: 460. Bibcode:1995ApJ...449..460K. doi:10.1086/176071. - Luppino, G. A.; Kaiser, Nick (20 January 1997). "Detection of Weak Lensing by a Cluster of Galaxies at z = 0.83". The Astrophysical Journal. 475 (1): 20–28. Bibcode:1997ApJ...475...20L. doi:10.1086/303508. - Loff, Sarah; Dunbar, Brian (February 10, 2015). "Hubble Sees A Smiling Lens". NASA. Retrieved February 10, 2015. - "Most distant gravitational lens helps weigh galaxies". ESA/Hubble Press Release. Retrieved 18 October 2013. - "ALMA Rewrites History of Universe's Stellar Baby Boom". ESO. Retrieved 2 April 2013. - "Accidental Astrophysicists". Science News, June 13, 2008. - "XFGLenses". A computer program to visualize gravitational lenses, Francisco Frutos-Alfaro. - "G-LenS". A point mass gravitational lens simulation, Mark Boughen. - Newbury, Pete, "Gravitational Lensing". Institute of Applied Mathematics, The University of British Columbia. - Cohen, N., "Gravity's Lens: Views of the New Cosmology", Wiley and Sons, 1988. - "Q0957+561 Gravitational Lens". Harvard.edu. - Bridges, Andrew, "Most distant known object in universe discovered". Associated Press. February 15, 2004. (Farthest galaxy found by gravitational lensing, using Abell 2218 and the Hubble Space Telescope.) - "Analyzing Corporations ... and the Cosmos". An unusual career path in gravitational lensing. - "HST images of strong gravitational lenses". Harvard-Smithsonian Center for Astrophysics.
- "A planetary microlensing event" and "A Jovian-mass Planet in Microlensing Event OGLE-2005-BLG-071", the first extra-solar planet detections using microlensing. - Gravitational lensing on arxiv.org - NRAO CLASS home page - AT20G survey - A diffraction limit on the gravitational lens effect (Bontz, R. J. and Haugan, M. P. "Astrophysics and Space Science" vol. 78, no. 1, p. 199-210. August 1981) - Further reading - Blandford & Narayan; Narayan, R (1992). "Cosmological applications of gravitational lensing". Annual Review of Astronomy and Astrophysics. 30 (1): 311–358. Bibcode:1992ARA&A..30..311B. doi:10.1146/annurev.aa.30.090192.001523. - Matthias Bartelmann; Peter Schneider (2000-08-17). "Weak Gravitational Lensing" (PDF). - Khavinson, Dmitry; Neumann, Genevra (June–July 2008). "From Fundamental Theorem of Algebra to Astrophysics: A "Harmonious" Path" (PDF). Notices of the AMS. 55 (6): 666–675.. - Petters, Arlie O.; Levine, Harold; Wambsganss, Joachim (2001). Singularity Theory and Gravitational Lensing. Progress in Mathematical Physics. 21. Birkhäuser. - Tools for the evaluation of the possibilities of using parallax measurements of gravitationally lensed sources (Stein Vidar Hagfors Haugan. June 2008) |Wikimedia Commons has media related to Gravitational lensing.| - Video: Evalyn Gates – Einstein's Telescope: The Search for Dark Matter and Dark Energy in the Universe, presentation in Portland, Oregon, on April 19, 2009, from the author's recent book tour. - Audio: Fraser Cain and Dr. Pamela Gay – Astronomy Cast: Gravitational Lensing, May 2007
College- and Career-Readiness Standards
- NQ.1: Express sequences and series using recursive and explicit formulas.
- NQ.2: Evaluate and apply formulas for arithmetic and geometric sequences and series.
- A.8: Determine characteristics of graphs of parent functions (domain/range, increasing/decreasing intervals, intercepts, symmetry, end behavior, and asymptotic behavior).
- A.10: Prove polynomial identities and use them to describe numerical relationships.
- A.12: Know and apply the Binomial Theorem for the expansion of (x + y)^n in powers of x and y for a positive integer n, where x and y are any numbers, with coefficients determined for example by Pascal's Triangle (see the short sketch following this list).
- A.15: Determine asymptotes and holes of rational functions, explain how each was found, and relate these behaviors to continuity.
- A.18: Find the composite of two given functions and find the inverse of a given function. Extend this concept to discuss the identity function f(x) = x.
- A.21: Find the zeros of polynomial functions by synthetic division and the Factor Theorem.
- A.22: Graph and solve quadratic inequalities.
- F.24: Graph rational functions, identifying zeros and asymptotes when suitable factorizations are available, and showing end behavior.
- F.25: Compose functions.
- F.26: Verify by composition that one function is the inverse of another.
- F.27: Read values of an inverse function from a graph or a table, given that the function has an inverse.
- F.28: Produce an invertible function from a non-invertible function by restricting the domain.
- F.29: Understand the inverse relationship between exponents and logarithms and use this relationship to solve problems involving logarithms and exponents.
- F.30: Use special triangles to determine geometrically the values of sine, cosine, tangent for pi/3, pi/4 and pi/6, and use the unit circle to express the values of sine, cosine, and tangent for pi – x, pi + x, and 2pi – x in terms of their values for x, where x is any real number.
- F.31: Use the unit circle to explain symmetry (odd and even) and periodicity of trigonometric functions.
- F.32: Choose trigonometric functions to model periodic phenomena with specified amplitude, frequency, and midline.
- F.35: Prove the addition and subtraction formulas for sine, cosine, and tangent and use them to solve problems.
- F.36: Prove the Pythagorean identity sin²(theta) + cos²(theta) = 1 and use it to find sin(theta), cos(theta), or tan(theta) given sin(theta), cos(theta), or tan(theta) and the quadrant of the angle.
- G.37: Graph piecewise defined functions and determine continuity or discontinuities.
- G.38: Describe the attributes of graphs and the general equations of parent functions (linear, quadratic, cubic, absolute value, rational, exponential, logarithmic, square root, cube root, and greatest integer).
- G.39: Explain the effects of changing the parameters in transformations of functions.
- G.40: Predict the shapes of graphs of exponential, logarithmic, rational, and piecewise functions, and verify the prediction with and without technology.
- SP.45: Analyze expressions in summation and factorial notation to solve problems.
Correlation last revised: 9/15/2020
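As a quick illustration of the Pascal's Triangle coefficients mentioned in A.12, here is a minimal Python sketch (the function name is my own); each row gives the coefficients of (x + y)^n:

```python
def pascal_row(n):
    """Return row n of Pascal's Triangle: the coefficients of (x + y)^n."""
    row = [1]
    for k in range(n):
        # Each entry is the previous one scaled by (n - k) / (k + 1).
        row.append(row[-1] * (n - k) // (k + 1))
    return row

print(pascal_row(4))  # [1, 4, 6, 4, 1], so (x + y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4
```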
In vascular plants, the root is the organ of a plant that typically lies below the surface of the soil. Roots can also be aerial or aerating, that is, growing up above the ground or especially above water. Furthermore, a stem normally occurring below ground is not exceptional either (see rhizome). The root is therefore best defined as the non-leaf, non-node-bearing part of the plant's body. However, important internal structural differences between stems and roots exist. The fossil record of roots – or rather, infilled voids where roots rotted after death – spans back to the late Silurian. Their identification is difficult, because casts and molds of roots are so similar in appearance to animal burrows; they can, however, be discriminated using a range of features. The first root that comes from a plant is called the radicle. A root's four major functions are 1) absorption of water and inorganic nutrients, 2) anchoring of the plant body to the ground and supporting it, 3) storage of food and nutrients, and 4) vegetative reproduction and competition with other plants. In response to the concentration of nutrients, roots also synthesise cytokinin, which acts as a signal as to how fast the shoots can grow. Roots often function in storage of food and nutrients. The roots of most vascular plant species enter into symbiosis with certain fungi to form mycorrhizae, and a large range of other organisms, including bacteria, also closely associate with roots. When dissected, the arrangement of the cells in a root, from the outside inward, is root hair, epidermis, epiblem, cortex, endodermis, pericycle and, lastly, the vascular tissue in the centre of the root, which transports the water absorbed by the root to other parts of the plant. In its simplest form, the term root architecture refers to the spatial configuration of a plant's root system. This system can be extremely complex and is dependent upon multiple factors such as the species of the plant itself, the composition of the soil and the availability of nutrients. The configuration of root systems serves to structurally support the plant, to compete with other plants and to take up nutrients from the soil. Roots grow to specific conditions, which, if changed, can impede a plant's growth. For example, a root system that has developed in dry soil may not be as efficient in flooded soil, yet plants are able to adapt to other changes in the environment, such as seasonal changes. Root architecture plays the important role of providing a secure supply of nutrients and water as well as anchorage and support. The main terms used to classify the architecture of a root system are: - Branch magnitude: the number of links (exterior or interior). - Topology: the pattern of branching, including: - Herringbone: alternate lateral branching off a parent root - Dichotomous: opposite, forked branches - Radial: whorl(s) of branches around a root - Link length: the distance between branches. - Root angle: the radial angle of a lateral root's base around the parent root's circumference, the angle of a lateral root from its parent root, and the angle an entire system spreads. - Link radius: the diameter of a root. All components of the root architecture are regulated through a complex interaction between genetic responses and responses due to environmental stimuli.
These developmental stimuli are categorised as intrinsic (the genetic and nutritional influences) or extrinsic (the environmental influences), and are interpreted by signal transduction pathways. The extrinsic factors that affect root architecture include gravity, light exposure, water and oxygen, as well as the availability or lack of nitrogen, phosphorus, sulphur, aluminium and sodium chloride. The main hormones (intrinsic stimuli) and respective pathways responsible for root architecture development include: - Auxin – promotes root initiation, root emergence and primary root elongation. - Cytokinins – regulate root apical meristem size and promote lateral root elongation. - Gibberellins – together with ethylene, they promote crown primordia growth and elongation; together with auxin, they promote root elongation. Gibberellins also inhibit lateral root primordia initiation. - Ethylene – promotes crown root formation. Early root growth is one of the functions of the apical meristem located near the tip of the root. The meristem cells more or less continuously divide, producing more meristem, root cap cells (these are sacrificed to protect the meristem), and undifferentiated root cells. The latter become the primary tissues of the root, first undergoing elongation, a process that pushes the root tip forward in the growing medium. Gradually these cells differentiate and mature into specialized cells of the root tissues. Growth from apical meristems is known as primary growth, which encompasses all elongation. Secondary growth encompasses all growth in diameter, a major component of woody plant tissues and many nonwoody plants. For example, storage roots of sweet potato have secondary growth but are not woody. Secondary growth occurs at the lateral meristems, namely the vascular cambium and cork cambium. The former forms secondary xylem and secondary phloem, while the latter forms the periderm. In plants with secondary growth, the vascular cambium, originating between the xylem and the phloem, forms a cylinder of tissue along the stem and root. The vascular cambium forms new cells on both the inside and outside of the cambium cylinder, with those on the inside forming secondary xylem cells and those on the outside forming secondary phloem cells. As secondary xylem accumulates, the "girth" (lateral dimensions) of the stem and root increases. As a result, tissues beyond the secondary phloem, including the epidermis and cortex, in many cases tend to be pushed outward and are eventually "sloughed off" (shed). At this point, the cork cambium begins to form the periderm, consisting of protective cork cells containing suberin. In roots, the cork cambium originates in the pericycle, a component of the vascular cylinder. The vascular cambium produces new layers of secondary xylem annually. The xylem vessels are dead at maturity but are responsible for most water transport through the vascular tissue in stems and roots. Tree roots usually grow to three times the diameter of the branch spread, only half of which lie underneath the trunk and canopy. The roots from one side of a tree usually supply nutrients to the foliage on the same side. Some families, however, such as Sapindaceae (the maple family), show no correlation between root location and where the root supplies nutrients on the plant. Roots use the process of plant perception to sense their physical environment as they grow, including the sensing of light and of physical barriers.
Over time, roots can crack foundations, snap water lines, and lift sidewalks. The correct environment of air, mineral nutrients and water directs plant roots to grow in any direction to meet the plant's needs. Roots will shy or shrink away from dry or other poor soil conditions. A true root system consists of a primary root and secondary roots (or lateral roots). - the taproot system: the primary root is dominant, typically growing downward, with smaller lateral roots branching from it. Most common in dicots. - the diffuse root system: the primary root is not dominant; the whole root system is fibrous and branches in all directions. Most common in monocots. The main function of the fibrous root is to anchor the plant. The roots, or parts of roots, of many plant species have become specialized to serve adaptive purposes besides the primary functions described in the introduction. - Adventitious roots arise out-of-sequence from the more usual root formation of branches of a primary root, and instead originate from the stem, branches, leaves, or old woody roots. They commonly occur in monocots and pteridophytes, but also in many dicots, such as clover (Trifolium), ivy (Hedera), strawberry (Fragaria) and willow (Salix). Most aerial roots and stilt roots are adventitious. In some conifers adventitious roots can form the largest part of the root system. - Aerating roots (or knee roots or knees or pneumatophores or cypress knees): roots rising above the ground, especially above water, such as in some mangrove genera (Avicennia, Sonneratia). In some plants like Avicennia the erect roots have a large number of breathing pores for the exchange of gases. - Aerial roots: roots entirely above the ground, such as in ivy (Hedera) or in epiphytic orchids. Many aerial roots are used to take in water and nutrients directly from the air – from fog, dew or humidity. Some rely on leaf systems to gather rain or humidity and even store it in scales or pockets. Other aerial roots, such as mangrove aerial roots, are used for aeration and not for water absorption. Others are used mainly for structure, functioning as prop roots, as in maize, or as anchor roots, or as the trunk in strangler fig. In some epiphytes (plants living above the surface on other plants), aerial roots serve for reaching water sources or the ground, after which they function as regular surface roots. - Contractile roots: these pull bulbs or corms of monocots, such as hyacinth and lily, and some taproots, such as dandelion, deeper into the soil by expanding radially and contracting longitudinally. They have a wrinkled surface. - Coarse roots: roots that have undergone secondary thickening and have a woody structure. These roots have some ability to absorb water and nutrients, but their main function is transport and to provide a structure to connect the smaller-diameter fine roots to the rest of the plant. - Fine roots: primary roots usually <2 mm in diameter that have the function of water and nutrient uptake. They are often heavily branched and support mycorrhizas. These roots may be short-lived, but are replaced by the plant in an ongoing process of root 'turnover'. - Haustorial roots: roots of parasitic plants that can absorb water and nutrients from another plant, such as in mistletoe (Viscum album) and dodder. - Propagative roots: roots that form adventitious buds that develop into aboveground shoots, termed suckers, which form new plants, as in Canada thistle, cherry and many others.
- Proteoid roots or cluster roots: dense clusters of rootlets of limited growth that develop under low-phosphate or low-iron conditions in Proteaceae and some plants from the following families: Betulaceae, Casuarinaceae, Elaeagnaceae, Moraceae, Fabaceae and Myricaceae. - Stilt roots: adventitious support roots, common among mangroves. They grow down from lateral branches, branching in the soil. - Storage roots: roots modified for storage of food or water, such as carrots and beets. They include some taproots and tuberous roots. - Structural roots: large roots that have undergone considerable secondary thickening and provide mechanical support to woody plants and trees. - Surface roots: these proliferate close below the soil surface, exploiting water and easily available nutrients. Where conditions are close to optimum in the surface layers of soil, the growth of surface roots is encouraged and they commonly become the dominant roots. - Tuberous roots: a portion of a root swells for food or water storage, e.g. sweet potato. A type of storage root distinct from taproot. The distribution of vascular plant roots within soil depends on plant form, the spatial and temporal availability of water and nutrients, and the physical properties of the soil. The deepest roots are generally found in deserts and temperate coniferous forests; the shallowest in tundra, boreal forest and temperate grasslands. The deepest observed living root, at least 60 metres below the ground surface, was found during the excavation of an open-pit mine in Arizona, USA. Some roots can grow as deep as the tree is high. The majority of roots on most plants are, however, found relatively close to the surface, where nutrient availability and aeration are more favourable for growth. Rooting depth may be physically restricted by rock or compacted soil close below the surface, or by anaerobic soil conditions.

| Species | Location | Maximum rooting depth (m) | Reference |
| Boscia albitrunca | Kalahari desert | 68 | Jennings (1974) |
| Juniperus monosperma | Colorado Plateau | 61 | Cannon (1960) |
| Eucalyptus sp. | Australian forest | 61 | Jennings (1971) |
| Acacia erioloba | Kalahari desert | 60 | Jennings (1974) |
| Prosopis juliflora | Arizona desert | 53.3 | Phillips (1963) |

Certain plants, namely Fabaceae, form root nodules in order to associate and form a symbiotic relationship with nitrogen-fixing bacteria called rhizobia. Due to the high energy required to fix nitrogen from the atmosphere, the bacteria take carbon compounds from the plant to fuel the process. In return, the plant takes nitrogen compounds produced from ammonia by the bacteria. The term root crops refers to any edible underground plant structure, but many root crops are actually stems, such as potato tubers. Edible roots include cassava, sweet potato, beet, carrot, rutabaga, turnip, parsnip, radish, yam and horseradish. Spices obtained from roots include sassafras, angelica, sarsaparilla and licorice. Sugar beet is an important source of sugar. Yam roots are a source of estrogen compounds used in birth control pills. The fish poison and insecticide rotenone is obtained from roots of Lonchocarpus spp. Important medicines from roots are ginseng, aconite, ipecac, gentian and reserpine. Several legumes that have nitrogen-fixing root nodules are used as green manure crops, which provide nitrogen fertilizer for other crops when plowed under. Specialized bald cypress roots, termed knees, are sold as souvenirs, lamp bases and carved into folk art.
Native Americans used the flexible roots of white spruce for basketry. Tree roots can heave and destroy concrete sidewalks and crush or clog buried pipes. The aerial roots of strangler fig have damaged ancient Mayan temples in Central America and the temple of Angkor Wat in Cambodia. Vegetative propagation of plants via cuttings depends on adventitious root formation. Hundreds of millions of plants are propagated via cuttings annually, including chrysanthemum, poinsettia, carnation, ornamental shrubs and many houseplants. Roots can also protect the environment by holding the soil to reduce soil erosion. This is especially important in areas such as sand dunes. - Absorption of water - Cypress knee - Drought rhizogenesis - Fibrous root system - Mycorrhiza – root symbiosis in which individual hyphae extending from the mycelium of a fungus colonize the roots of a host plant. - Plant physiology - Rhizosphere – region of soil around the root influenced by root secretions and microorganisms present - Root cutting - Rooting powder - Tanada effect - Retallack, G. J. (1986). "The fossil record of soils". In Wright, V. P. (ed.), Paleosols: their Recognition and Interpretation (PDF). Oxford: Blackwell. pp. 1–57. - Hillier, R.; Edwards, D.; Morrissey, L. B. (2008). "Sedimentological evidence for rooting structures in the Early Devonian Anglo–Welsh Basin (UK), with speculation on their producers". Palaeogeography, Palaeoclimatology, Palaeoecology. 270 (3–4): 366–380. doi:10.1016/j.palaeo.2008.01.038. - Malamy, J. E. (2005). "Intrinsic and environmental response pathways that regulate root system architecture". Plant, Cell & Environment. 28: 67–77. doi:10.1111/j.1365-3040.2005.01306.x. - Caldwell, M. M.; Dawson, T. E.; Richards, J. H. (1998). "Hydraulic lift: consequences of water efflux from the roots of plants". Oecologia. 113 (2): 151–161. - Fitter, A. H. (1991). "The ecological significance of root system architecture: an economic approach". In Atkinson, D. (ed.), Plant Root Growth: An Ecological Perspective. Blackwell. pp. 229–243. - Malamy, J. E.; Ryan, K. S. (2001). "Environmental regulation of lateral root initiation in Arabidopsis". Plant Physiology. 127: 899–909. - Russell, P. J.; Hertz, P. E.; McMillan, B. (2013). Biology: The Dynamic Science. Cengage Learning. p. 750. ISBN 978-1-285-41534-5. Retrieved 2017-04-24. - Nakagawa, Y.; Katagiri, T.; Shinozaki, K.; Qi, Z.; Tatsumi, H.; Furuichi, T.; Kishigami, A.; Sokabe, M.; Kojima, I.; Sato, S.; Kato, T.; Tabata, S.; Iida, K.; Terashima, A.; Nakano, M.; Ikeda, M.; Yamanaka, T.; Iida, H. (2007). "Arabidopsis plasma membrane protein crucial for Ca2+ influx and touch sensing in roots". Proceedings of the National Academy of Sciences. 104 (9): 3639–3644. doi:10.1073/pnas.0607703104. - UV-B light sensing mechanism discovered in plant roots, San Francisco State University, December 8, 2008. - Hodge, A. (2012). "Plant Root Interactions". In Witzany, G.; Baluska, F. (eds.), Biocommunication of Plants. Springer. pp. 157–169. ISBN 978-3-642-23523-8. - Carminati, Andrea; Vetterlein, Doris; Weller, Ulrich; Vogel, Hans-Jörg; Oswald, Sascha E. (2009). "When roots lose contact". Vadose Zone Journal. 8 (3): 805–809. doi:10.2136/vzj2008.0147. - Chen, Rosen & Masson, 1999, p. 343. - Nowak, Edward J.; Martin, Craig E. (1997). "Physiological and anatomical responses to water deficits in the CAM epiphyte Tillandsia ionantha (Bromeliaceae)". International Journal of Plant Sciences. 158 (6): 818–826. JSTOR 2475361. - Pütz, Norbert (2002). "Contractile roots". In Waisel, Y.; Eshel, A.; Kafkafi, U. (eds.),
Plant roots: The hidden half (3rd ed.). New York: Marcel Dekker. pp. 975–987. - Canadell, J.; Jackson, R. B.; Ehleringer, J. B.; Mooney, H. A.; Sala, O. E.; Schulze, E.-D. (December 3, 2004). "Maximum rooting depth of vegetation types at the global scale". Oecologia. 108 (4): 583–595. doi:10.1007/BF00329030. - Stone, E. L.; Kalisz, P. J. (1 December 1991). "On the maximum extent of tree roots". Forest Ecology and Management. 46 (1–2): 59–102. doi:10.1016/0378-1127(91)90245-Q. - Postgate, J. (1998). Nitrogen Fixation (3rd ed.). Cambridge, UK: Cambridge University Press. - Zahniser, David (February 21, 2008). "City to pass the bucks on sidewalks?" Los Angeles Times. - Baldocchi, Dennis D.; Xu, Liukang (2007). What limits evaporation from Mediterranean oak woodlands – the supply of moisture in the soil, physiological control by plants or the demand by the atmosphere? Vol. 30, issue 10. Elsevier. - Brundrett, M. C. (2002). "Coevolution of roots and mycorrhizas of land plants". New Phytologist. 154 (2): 275–304. - Chen, R.; Rosen, E.; Masson, P. H. (1999). "Gravitropism in Higher Plants". Plant Physiology. 120 (2): 343–350. (An article about how roots sense gravity.) - Clark, Lynn (2004). Primary Root Structure and Development – lecture notes. - Coutts, M. P. (1987). "Developmental processes in tree root systems". Canadian Journal of Forest Research. 17: 761–767. - Raven, J. A.; Edwards, D. (2001). "Roots: evolutionary origins and biogeochemical significance". Journal of Experimental Botany. 52 (Suppl 1): 381–401. - Schenk, H. J.; Jackson, R. B. (2002). "The global biogeography of roots". Ecological Monographs. 72 (3): 311–328. - Sutton, R. F.; Tinus, R. W. (1983). "Root and root system terminology". Forest Science Monograph. 24: 137 pp. - Phillips, W. S. (1963). "Depth of roots in soil". Ecology. 44 (2): 424.
What Is a Two-Way Table of Categorical Variables? By Courtney Taylor. Courtney K. Taylor, Ph.D., is a professor of mathematics at Anderson University and the author of "An Introduction to Abstract Algebra." Updated January 30, 2019. One of the goals of statistics is to arrange data in a meaningful way. Two-way tables are an important way to organize a particular type of paired data. As with the construction of any graph or table in statistics, it is very important to know the types of variables that we are working with. If we have quantitative data, then a graph such as a histogram or stem and leaf plot should be used. If we have categorical data, then a bar graph or pie chart is appropriate. When working with paired data we must be careful. A scatterplot exists for paired quantitative data, but what kind of graph is there for paired categorical data? Whenever we have two categorical variables, we should use a two-way table. Description of a Two-Way Table First, we recall that categorical data relates to traits or to categories. It is not quantitative and does not have numerical values. A two-way table involves listing all of the values or levels for two categorical variables. All of the values for one of the variables are listed in a vertical column. The values for the other variable are listed along a horizontal row. If the first variable has m values and the second variable has n values, then there will be a total of mn entries in the table. Each of these entries corresponds to a particular value for each of the two variables. Along each row and along each column, the entries are totaled. These totals are important when determining marginal and conditional distributions. These totals are also important when we conduct a chi-square test for independence. Example of a Two-Way Table For example, we will consider a situation in which we look at several sections of a statistics course at a university. We want to construct a two-way table to determine what differences, if any, there are between the males and females in the course. To achieve this, we count the number of each letter grade that was earned by members of each gender. We note that the first categorical variable is that of gender, and there are two possible values in the study: male and female. The second categorical variable is that of letter grade, and there are five values given by A, B, C, D and F. This means that we will have a two-way table with 2 x 5 = 10 entries, plus an additional row and an additional column that will be needed to tabulate the row and column totals. Our investigation shows that:
- 50 males earned an A, while 60 females earned an A.
- 60 males earned a B, and 80 females earned a B.
- 100 males earned a C, and 50 females earned a C.
- 40 males earned a D, and 50 females earned a D.
- 30 males earned an F, and 20 females earned an F.
This information is entered into the two-way table below.

Two-Way Table for Grades and Genders
| Grade | Male | Female | Total |
| A | 50 | 60 | 110 |
| B | 60 | 80 | 140 |
| C | 100 | 50 | 150 |
| D | 40 | 50 | 90 |
| F | 30 | 20 | 50 |
| Total | 280 | 260 | 540 |

The total of each row tells us how many of each kind of grade was earned. The column totals tell us the number of males and the number of females. Importance of Two-Way Tables Two-way tables help to organize our data when we have two categorical variables.
This table can be used to help us compare two different groups in our data. For example, we could consider the relative performance of males in the statistics course against the performance of females in the course. Next Steps After forming a two-way table, the next step may be to analyze the data statistically. We may ask if the variables in the study are independent of one another or not. To answer this question we can use a chi-square test on the two-way table, as sketched below.
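For readers who want to carry out that next step computationally, here is a minimal sketch in Python using scipy (assuming scipy is installed; the variable names are my own). It runs the chi-square test for independence directly on the counts from the table above; note that only the interior counts are supplied, since the test computes the marginal totals itself:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Counts from the two-way table above: rows are grades A-F, columns are male/female.
table = np.array([
    [50, 60],   # A
    [60, 80],   # B
    [100, 50],  # C
    [40, 50],   # D
    [30, 20],   # F
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, degrees of freedom = {dof}, p-value = {p_value:.4f}")
# dof = (5 rows - 1) * (2 columns - 1) = 4; a small p-value suggests
# grade and gender are not independent in these data.
```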
The most abundant structures that we have surviving from the Neolithic period (as upstanding remains and cropmarks) are monuments. These are the most visible and tangible statements of Neolithic belief, treatment of the dead, and identity. In this context monuments are structures with no clear functional or domestic role, contingent on the problems with defining such concepts in a Neolithic context (see Section 4). Monuments were usually associated with ceremony, ritual, mortuary rites and/or burial. In this section, a brief overview of the range and chronology of Neolithic monuments found in Scotland will be presented; more detailed case-studies and regional variations have already been discussed in Section 3. At a general level, Neolithic monuments in Scotland could be viewed as falling into two 'phases'. The first are largely rectangular or linear in form, and mostly restricted to the 4th millennium BC. The other group are circular, or sub-circular, in form, and largely date to the later Neolithic (3000-2500 cal BC). The movement from rectangular to round (to simplify) can be recognised across the British Neolithic (Bradley 2007), and indeed is also reflected in house forms (Section 4.3). This is not a hard and fast rule, however: for instance chambered tombs were built in a wide range of cairn shapes from round to long (although in the Neolithic all had linear, rectangular or square chambers). And it should also be recognised that in some cases a variety of monument forms (rectangular and round) occurred in the same location as part of monument complexes or multi-phase sites. This suggests that even if monument types were not enduring, some places were. The brief characterisations of monument types below are based on typological labels that mask a good deal of variation. However these are commonly accepted terms, and used throughout this document. No causewayed enclosures of Neolithic date have been confirmed in Scotland, although these monuments are commonly found in southern Britain. A number of potential examples have been identified in the cropmark record: Leadketty, Perth and Kinross; Sprouston, Scottish Borders (Figure 104); and West Lindsaylands, South Lanarkshire being the most likely (RCAHMS 1978; Smith 1991; Barclay 1996; Oswald et al. 2001). However, these enclosures could as easily be later prehistoric or medieval. It is also possible such enclosures could be found in an upland context, with many hilltop enclosures as yet undated. The most common (and amongst the earliest) Neolithic monuments found in Scotland are the chambered tombs, of which over 600 have been recognised. These are largely monuments of the west, southwest and north of Scotland, although there are examples in the east (Figure 93). They were extensively catalogued by Audrey Henshall (Henshall 1972; Davidson & Henshall 1991; Henshall & Ritchie 1995, 2001). Generally these megaliths consist of some kind of chamber set within a stone cairn, some with passages. The cairn and chamber forms vary considerably, leading to a series of different regional 'types' being identified (see Section 3). A review of dates by Noble (2006, 106-8) shows a wide date range for chambered tombs across Scotland, from c3700 cal BC to the early centuries of the 3rd millennium BC. Some Orkney cairns (Maes Howe-type) are very late in the sequence. For instance, Quanterness was in use over the period 3510-3220 cal BC to 2850-2790 cal BC (95.4% probability) (Schulting et al. 2010).
Dating is further complicated by the multi-phase nature of these monuments, with, for instance, long cairns in the north being constructed in three or more phases of activity. The 'tail' of some long cairns may date to the final centuries of the Neolithic (e.g. Vestra Fiold, Orkney (C Richards pers. comm.); Tulach an t-Sionnaich, Caithness (Corcoran 1967)). There have been few modern excavations of chambered tombs, and the results of investigations have varied widely. Some (notably Orcadian monuments) have revealed huge assemblages of human and animal bone, and material culture. Others were largely empty. Earlier excavations have in some cases left large assemblages of material and human remains for analysis. Recent analysis of large bone assemblages from Orcadian tombs (Quanterness, Isbister, Holm of Papa Westray North) has revealed the potential of such monuments to provide information about diet, health and lifestyle. Chambered tombs seem to have been repositories of bones, with disarticulated skeletons the norm, often with a preference shown for long bones and skulls. In part this might be because corpses were probably excarnated before being put within tombs. The communal mass of bones may have been viewed as an ancestral resource, with the open entrances allowing bones to be taken in, and out, of the tombs, and with forecourt areas at some monuments suggesting ceremonies took place. By the end of the 4th millennium BC many chambered tomb entrances were formally 'blocked' (Monamore, Arran; Mid Gleniron, Dumfries and Galloway). Long barrows / mortuary structures and enclosures The eastern half of Scotland has few megaliths, but does have a preponderance of timber and earthwork structures that in some cases had a mortuary role. The early Neolithic of the south and east in particular is characterised by a series of rectangular structures, ranging from small settlement 'huts' to massive cursus monuments. Within this continuum could be placed timber halls, mortuary and long enclosures, long mounds (long barrows, bank barrows and perhaps long cairns) and timber and earthwork cursus monuments (Loveday 2006; Brophy forthcoming). Settlement and timber hall sites were discussed in Section 4.3. There are at least 20 long barrows known in Scotland, some of which have only been recorded as cropmarks (including a fine example near the base of Dunadd, Argyll, a rare western long barrow). Few examples have been excavated, with Dalladies, Aberdeenshire, being the best-known example (Piggott 1971-2). This long barrow began life as a few pits; then timber and stone mortuary structures were built, before being sealed by a long earth and turf mound. Noble (2006) has compared this sequence with evidence for activities found beneath Pitnacree round barrow, Perth & Kinross, and Slewcairn and Lochhill long cairns, both Dumfries and Galloway. Unlike chambered tombs, long barrow burial areas were inaccessible once the mound was constructed. It seems likely that ceremonial activity was occurring in these locations pre-mound, with some so-called mortuary structures having no direct connection with human remains (Noble 2006). Other monuments may also have served a mortuary role, perhaps for instance the exposure and excarnation of the dead. A range of rectangular timber settings and enclosures may have served such roles.
Inchtuthil, Perth and Kinross (c50m by 10m), was defined by a wooden fence set within a palisade slot (Barclay & Maxwell 1991), while the Balfarg Riding School, Fife, structures appear to have existed as free-standing timbers (Barclay & Russell-White 1993). Some 'mortuary' structures were trapezoidal in plan, such as Eweford, East Lothian (Lelong & MacGregor 2008). Other rectangular structures, such as Carsie Mains and Littleour, both Perth and Kinross, may have served a ceremonial role (Brophy 2007). Indeed, these monuments probably served a range of purposes, with little explicit evidence for mortuary activity at even so-called mortuary enclosures. Such rectangular structures had a relatively long currency within the early Neolithic, dating from the middle of the 4th millennium through to around 3000 BC. Cursus monuments / bank barrows There are some 40 possible cursus monuments known in Scotland (see Brophy 1999; forthcoming). Cursus monuments are long, and often wide, rectangular enclosures with rounded or squared ends ('terminals'), defined either by an internal bank and external ditch arrangement, or by free-standing timber posts (apparently unique to Scotland). Over half of these sites are the timber variant, measuring between 60m and 570m in length, and usually 20m to 30m in width; most such sites have one or more internal divisions or partitions. The earthwork cursus monuments show more variation in size, between 190m and 2.5km in length, with width varying from 20m up to 160m. All but one of these is a cropmark site, and 14 cursus monuments have been excavated since the 1970s. Timber cursus monuments appear to be the earlier of the two cursus forms, dating either to 3900-3600 cal BC (Thomas 2006) or perhaps slightly later (Whittle et al. 2011). The earthwork cursus monuments in England tend to date to the second half of the 4th millennium BC (Barclay & Bayliss 1999), and the one ditched cursus in Scotland to have been successfully dated so far, Broich, Perth, at 3640-3370 cal BC, accords with this (Tamlin Barton pers. comm.). The timber-then-earthwork sequence of cursus building traditions was played out at Holywood North, Dumfries and Galloway, where a timber cursus was replaced by an earthwork variant sharing the same footprint (Thomas 2007). An apparent timber cursus at Eweford, East Lothian, was shown to consist of two parallel lines of postholes that were intermittently added to towards the middle of the 3rd millennium BC, rather than being a large cohesive monument (Lelong & MacGregor 2008); the cropmark record may include more examples of this type of structure masquerading as 'cursus monuments'. Cursus monuments are traditionally regarded as having a processional role, although more recently they have been connected to both the timber hall and mortuary enclosure traditions (Loveday 2006; Thomas 2006; Bradley 2007; Brophy forthcoming). Little evidence has been recovered for activities within cursus monuments, and material culture associations are rare. The only non-cropmark cursus is the Cleaven Dyke, Perth and Kinross, an unusual cursus-type earthwork that is still visible as an upstanding single bank 1.8km in length with two parallel, flanking ditches. Although an early Neolithic date could only be speculated upon during excavations, the monument was shown to be built in segments over an unknown period of time (Barclay & Maxwell 1998). This monument shares certain characteristics of another early Neolithic linear form, the bank barrow.
Although bank barrows are relatively common in southern England, few examples have been identified in Scotland. Bank barrows are extremely lengthy long barrows (usually several hundred metres in length), with a single long mound and, in some cases, closely flanking ditches. Characteristically bank barrows have enlarged, or rounded, terminals, which may once have been free-standing barrows or mounds (Loveday 2006). The only non-cropmark example of this type of monument so far identified in Scotland is at Eskdalemuir, Dumfries and Galloway. Here, two lengthy earthworks run uphill on either side of the valley of the White Esk. This could be two separate monuments, or more likely one extremely long bank barrow (some 2km in length) which was at one time bisected by the river (RCAHMS 1993). A number of possible bank barrows have been identified as cropmarks, mostly in the eastern lowlands, although none have been confirmed by excavation (Brophy 1999; forthcoming). Despite the name, no evidence for burial activity has been found associated with a bank barrow in Scotland, and none have been dated; English examples tend to belong to the early Neolithic, and appear to be related to the cursus tradition. Timber circles / henge monuments / stone circles and settings These variations on circular enclosure forms all suffer from problems with chronology and classification. Each monument form has its origins around or just before 3000 BC, but variations on each were built well into the Bronze Age, and circles of earth, timber and stone seem to have been part of related traditions, often occurring in the same location. At least 80 timber circles have been recorded in Scotland, almost all as cropmarks, with some found during excavations. These are circular or elliptical settings of standing timbers, mostly with diameters in the range of 5m to 40m, with a few slightly larger (Millican 2007). Aside from one problematic early date from Temple Wood, Argyll & Bute (Scott 1991), the remainder of timber circles, where dated, seem to have been built from 3100 BC onwards, with examples continuing to be built throughout the 3rd millennium BC. Excavations have shed little light on the function of these monuments, although they are commonly found in association with ceremonial monuments such as cursus monuments and henges. Over 80 possible henge monuments have been found in Scotland, although recent excavations and radiocarbon dates suggest that many of these monuments were constructed in the Bronze Age (Bradley 2011; Brophy & Noble forthcoming). These monuments were earthwork enclosures with an internal ditch, external bank and one or two entrances. Again, the majority of henges in Scotland are known only as cropmarks, and they display a remarkable variation in terms of size, ranging from mini-henges (formerly known as hengiforms) less than 10m across, to the Ring of Brodgar, over 100m in diameter (albeit with no bank) (Barclay 2005). The earliest henge in Scotland is the Stones of Stenness, Orkney, dated by animal bone from the ditch base to 3100-2650 cal BC (Ritchie 1976). The Ring of Brodgar has recently been dated to the late Neolithic through OSL dating of the ditch base, while Balfarg Riding School seems to have a Grooved Ware association. The henges at Forteviot 1 and 2, North Mains, and Pict's Knowe all appear to be early Bronze Age, while Pullyhour, Caithness, is a monument of the 2nd millennium BC.
Our understanding of the role of henges remains vague, with little direct evidence for activities within the enclosures, although acts of deposition have been recorded in henge ditches. A ceremonial role seems most likely, perhaps offering a more solidly bounded arena than timber circles. Recently it has been suggested that the internal ditch indicates henges were built to control or seal something in (Barclay 2005; Bradley 2011; Brophy & Noble forthcoming). Some stone circles have their origin in the late Neolithic, although given the difficulty in dating standing stones, the chronology of stone circles is far from obvious. (The smallest stone circles may have been built as late as 1000 BC.) The evidence from Calanais is not fully published. It seems possible that there were stone settings by around 3000 BC at the Ring of Brodgar and Cairnpapple (if the setting here was not timber), among other sites. The presence of standing stones within other henges, such as Balfarg and the Stones of Stenness, suggests a close relationship, although again the relative chronology here is unclear. Many stone circles, including the recumbent stone circles of NE Scotland, were built in the Bronze Age. How did timber circles, henges and stone circles relate to one another? Gibson (2004) has noted that wherever timber circles are found within henges, the former is always earlier (where dating evidence is available). Where the two occur together, timber circles were situated within the henge (with a notable exception at Forteviot henge 1, Perth and Kinross (Noble & Brophy 2011a)). Yet some timber circles stood alone and were never ‘replaced’ by a henge, while many henges have nothing to do with timber circles. More stone circles sit on their own than are found within henges, while evidence for stone replacing timber (as at Machrie Moor and Temple Wood) is to date limited. Many of these circular monument forms were subject to reuse and alteration in later prehistory, utilised as pyres, cremation cemeteries, or for metalworking, or transformed into cairns or barrows. Thus henges and stone circles must be investigated by Bronze Age specialists as well as those who study the Neolithic period.

Round barrows / Round mounds

Although rare in a Neolithic context, there are a number of possible late Neolithic round (non-megalithic) barrows known in Scotland, largely found in the northeast and east (Kinnes 1992; Sheridan 2010). The best-known example is Pitnacree, a large round mound in Strathtay that capped a complex sequence of timber and stone structures, perhaps in the late Neolithic. Sheridan (2010) has recently listed eight possible non-megalithic round barrows in Scotland (all but one in the NE), with some possible unexcavated examples identified in Strathtay (Brophy 2010). The chronology for these monuments is poor, with dates for the pre-mound activity at Pitnacree, for instance, unreliable (Ashmore et al. 2000). The recognition of Neolithic round barrows as opposed to Bronze Age examples (which are more common) is difficult without excavation, although Barclay (1999) suggests a height-diameter ratio could be used to make this distinction. It may well be that activity in these locations (not all of which was directly associated with burial) was brought to a halt by mound construction. Only one artificial Neolithic mound has been recognised in Scotland to date, Droughduil, Dumfries and Galloway (Figure 106).
Thomas (2002, 2004) demonstrated through excavation that this huge mound, with a diameter of some 50m and a height of 10m, was a natural sandy mound that was augmented in the Neolithic. The avenue of the Dunragit palisaded enclosure aligns on this mound. Although not on the same scale as Silbury Hill, Wiltshire, Thomas’s work has demonstrated the potential for such huge artificial mounds to be identified in Scotland.

Palisaded enclosures

These huge enclosures are perhaps the largest expressions of Neolithic monumentality found in Scotland. These monuments consist of a large enclosed space defined by a boundary of timber posts (erroneously known as a palisade in most cases), often with a narrow entrance avenue. Three of the monuments – Forteviot and Leadketty, Perth and Kinross, and Meldon Bridge, Scottish Borders – have one boundary defined by a natural feature. One further site has been confirmed – Dunragit, Dumfries & Galloway – and excavations have been carried out at all but Leadketty, with radiocarbon dates suggesting these monuments were constructed c2800-2600 calBC (Noble & Brophy 2011b, 74). These were huge enclosed spaces – Leadketty is some 400m across, while Forteviot has a circumference of c750m. In each case the monument was shown to be defined by huge (oak) posts, with some kind of fence line connecting these at Meldon Bridge. These monuments in some cases enclosed earlier structures, and we have evidence for later monuments and activities within the boundaries. For instance at Forteviot (illus e) a middle Neolithic cremation cemetery preceded the palisaded enclosure, while two timber circles and two henges were later constructed within it (Noble & Brophy 2011b). At Meldon Bridge, pits with a fine assemblage of Impressed Ware pottery were found (Speak & Burgess 1999), while Dunragit has multiple phases of palisade construction (and was built in the location of a timber cursus (Thomas 2004)). It is likely these extravagant monuments were a last flourishing of mega-monumentality in Scotland’s Neolithic, and although evidence for function is limited, they would have been places where large numbers of people could have gathered for a range of activities. Other monuments may belong to related traditions, such as Blackshouse Burn, South Lanarkshire, an embanked enclosure in an upland location. This monument was originally defined by a double boundary of oak posts with a stone rubble bank between, surrounding an area some 300m in diameter (Lelong & Pollard 1998). Further mega-enclosures like this may remain unidentified, either in the cropmark record or in the uplands.

Monument complexes / special places

A key characteristic of Neolithic landscapes in northwest Europe is the creation of complexes of monuments in certain places, for instance at Stonehenge, Cranborne Chase, the Bend of the Boyne and Carnac. In Scotland, there are some exceptional examples, where Neolithic (and often Bronze Age) ceremonial and burial monuments cluster together, places that were in use and reworked for many centuries. Important examples in Scotland include the Heart of Neolithic Orkney central mainland area; Balfarg, Fife; Forteviot-Leadketty, Strathearn, Perth and Kinross; Kilmartin Glen, Argyll & Bute; and Machrie Moor, Arran. Such landscapes appear to have had sacred importance in the Neolithic, perhaps established in the Mesolithic, or from pit-digging and deposition early in the Neolithic.
These complexes offer excellent opportunities to follow social change through time, and suggest traditions that endured for huge periods of time and many generations. Before completing this section, it is worth looking at one other expression of belief, or ideology, that seems to have originated in the Neolithic (but overlaps considerably with the Chalcolithic and Bronze Age; see the panel document for these periods as well). The meaning of rock-art seems beyond our grasp, with cup-and-ring marks, and other abstract and geometrical symbols, defying attempts to read them as texts (cf. Morris 1977, 1981). Recently, excavations at rock-art sites have started to shed light on the context of their production, and also some of the activities that went on in the vicinity of rock-art panels. Excavations at the Torbhlaren rock-art outcrops, Argyll and Bute, produced radiocarbon dates for material recovered from a fissure in the rock, and a stake-circle beside one panel. This allowed the excavators to argue that the rock-art dated to between 2900 and 2300 cal BC (Jones et al. 2011, 261). That the rock-art here was associated with structures and ‘deposits’ jammed into cracks in the rock adds much depth to our understanding of activities associated with rock-art. Test-pitting in the vicinity of rock-art panels at Ben Lawers, Perth and Kinross, was equally illuminating. Outcrops with rock-art were found to be associated with quartz working and deposition, some flint had been jammed into cracks in the rocks, and a cobbled surface was uncovered. Such investigations might not help us ‘translate’ motifs, but they offer a context and chronology for the creation of rock-art. This section of the document has offered an overview of the main types of Neolithic monuments found in Scotland, with a brief description of the main characteristics and chronology of each. Inevitably these are broad-brush labels, each of which hides considerable variability, although much of this level of detail can be explored in Section 3.
How do you solve radical algebraic equations? In this lesson you will learn to solve radical equations by using rules of exponents. Let’s Review: the goal is to isolate x. A Common Mistake: forgetting that squaring is the inverse of taking a square root. Core Lesson: given √(2 − x) = 9, square both sides, so that (√(2 − x))² = 9², giving 2 − x = 81 and x = −79. In this lesson you have learned to solve radical equations by using rules of exponents. Guided Practice: solve an equation of the form √(…) = x + 2. Extension Activities: What are the steps that you need to solve a radical equation? What would x need to be for there to be no solution, and how do you know?
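To spell out the squaring step, here is the general pattern as a short worked derivation (a sketch added for clarity; the final check for extraneous solutions is a standard caveat that the lesson above does not state explicitly):

\[ \sqrt{u} = k,\ k \ge 0 \quad\Longrightarrow\quad u = k^2 \]
\[ \sqrt{2 - x} = 9 \;\Longrightarrow\; (\sqrt{2 - x})^2 = 9^2 \;\Longrightarrow\; 2 - x = 81 \;\Longrightarrow\; x = -79 \]
\[ \text{Check: } \sqrt{2 - (-79)} = \sqrt{81} = 9 \]

Squaring both sides can introduce extraneous solutions, so substituting the result back into the original equation, as in the check above, is always worthwhile.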
Algorithms and pseudocodes

Choose 2 sorting algorithms and 2 searching algorithms, and describe them in detail, including the type of data structures they work well with. Complete the following:
- For 1 of the selected search algorithms, write pseudocode, and create a flowchart to show how the algorithm could be implemented to search data in the data structure.
- For 1 of the selected sort algorithms, write pseudocode, and create a flowchart to show how the algorithm could be implemented to sort data in the data structure.
- Give the pseudocode and flowchart that would show how one of the additional data structures could be implemented to search data.
- In addition, create a flowchart to show how to sort using one of the additional algorithms. Give the pseudocode for the flowchart as well.

Please submit the following for your assignment in a single MS Word document:
- 2 flowcharts (1 for a searching algorithm and 1 for a sorting algorithm)
- 2 pseudocode examples (1 for a searching algorithm and 1 for a sorting algorithm)
Note: Diagrams created in separate programs should be copied and pasted into your document for submission.

Complete the following:
- Describe how arrays are implemented in Java.
- Provide Java code to illustrate how to create an array, reference an array, and address an element of an array.
- Create a flowchart and provide the corresponding pseudocode to show how to sort an array using the bubble sort.
- Create a flowchart and provide the corresponding pseudocode to show how to search an array for a specified value using the sequential search algorithm.

Complete the following:
- Create a flowchart to represent the Push and Pop operations for a Stack based on a linked list data structure.
- Create a flowchart to represent the Enqueue and Dequeue operations for a Queue based on a linked list data structure.
- Write the required Java code to implement either a Stack or a Queue data structure based on a linked list. The code should include the class constructors, the necessary properties, and methods to add and remove elements from the data structure. Do not use the built-in Java Stack class or the built-in Java Queue interface or the built-in Java linked list (you should create your own code for these classes).
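For the bubble sort and sequential search items above, a minimal Java sketch along the following lines could sit alongside the required pseudocode and flowcharts (class and method names are illustrative choices, not prescribed by the assignment):

public class ArrayAlgorithms {
    // Bubble sort: repeatedly swap adjacent out-of-order elements;
    // after each pass the largest remaining value settles at the end.
    public static void bubbleSort(int[] a) {
        for (int pass = 0; pass < a.length - 1; pass++) {
            for (int i = 0; i < a.length - 1 - pass; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i];
                    a[i] = a[i + 1];
                    a[i + 1] = tmp;
                }
            }
        }
    }

    // Sequential search: scan left to right and return the index of the
    // first match, or -1 if the target is not present.
    public static int sequentialSearch(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        bubbleSort(data);                              // data is now {1, 2, 4, 5, 8}
        System.out.println(sequentialSearch(data, 4)); // prints 2
    }
}

Bubble sort runs in O(n^2) time and sequential search in O(n), which is part of what the write-up on suitable data structures can discuss.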
Chapter 12: Gross Domestic Product and Growth, Section 1

Key Terms
national income accounting: a system economists use to collect and organize macroeconomic statistics on production, income, investment, and savings
gross domestic product: the dollar value of all final goods and services produced within a country's borders in a given year
intermediate goods: products used in the production of final goods
durable goods: those goods that last for a relatively long time, such as refrigerators, cars, and DVD players
nondurable goods: those goods that last a short period of time, such as food, light bulbs, and sneakers
nominal GDP: GDP measured in current prices
real GDP: GDP expressed in constant, or unchanging, prices
gross national product: the annual income earned by U.S.-owned firms and people
depreciation: the loss of the value of capital equipment that results from normal wear and tear
price level: the average of all prices in the economy
aggregate supply: the total amount of goods and services in the economy available at all possible price levels
aggregate demand: the amount of goods and services in the economy that will be purchased at all possible price levels

Introduction
What does the Gross Domestic Product (GDP) show about the nation's economy? GDP measures the amount of money brought into a nation in a single year through the selling of that nation's goods and services. GDP is a measurement of how well a nation's economy is doing for a particular year. A high GDP means the nation is doing well; a low GDP means the nation is doing poorly.

National Income Accounting
Economists use a system called national income accounting to monitor the U.S. economy. They collect macroeconomic statistics, which the government uses to determine economic policies. The most important data economists analyze is gross domestic product (GDP), which is the dollar value of all final goods and services produced within a country's borders in a given year.

What is GDP?
Basically, gross domestic product tracks exchanges of money. To understand GDP, you need to understand which exchanges are included in the final calculations and which ones are not.

Expenditure Approach
One method used to calculate GDP is to estimate the annual expenditures on four categories of final goods and services: consumer goods, business goods and services, government goods and services, and net exports.

Income Approach
Another method calculates GDP by adding up all the incomes in the economy. The rationale for this approach is that when a firm sells a product or service, the selling price minus the dollar value of goods and services purchased from other firms represents income for the firm's owners and employees.

Nominal versus Real GDP
Nominal GDP is measured in current prices. To calculate nominal GDP, we use the current year's prices to calculate the value of the current year's output. The problem with nominal GDP is that it does not account for the rise in prices: even though your output might be the same from year to year, the prices won't be, and nominal GDP would differ. To solve this problem, economists determine real GDP, which is GDP expressed in constant, or unchanging, prices.
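The expenditure approach sketched above is commonly summarized by the identity GDP = C + I + G + NX (consumption, investment, government purchases, net exports), and the nominal/real distinction can be made concrete with a small worked example; the numbers below are invented for illustration:

\[ \text{Nominal GDP}_{\text{Year 1}} = 100 \times \$10 = \$1000, \qquad \text{Nominal GDP}_{\text{Year 2}} = 100 \times \$11 = \$1100 \]
\[ \text{Real GDP (Year 1 prices)} = 100 \times \$10 = \$1000 \ \text{in both years} \]

Nominal GDP rose 10 percent purely because prices rose; real GDP, valued at constant Year 1 prices, is unchanged, showing that output did not actually grow.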
Limitations of GDP
Checkpoint: What are two economic activities that GDP does not include?
Nonmarket Activities: GDP does not measure goods and services that people make or do themselves.
The Underground Economy: GDP does not account for black market activities or people paid under the table without being taxed.
Negative Externalities: unintended economic side effects, like pollution, are not subtracted from GDP.
Quality of Life: a high GDP does not necessarily mean people are happier.

Other Output and Income Measures
In addition to GDP, economists use other ways to measure the economy. The equations below summarize the formulas for calculating these other economic measurements.

Influences on GDP: Aggregate Supply
Aggregate supply is the total amount of goods and services in the economy available at all possible price levels. In a nation's economy, as the prices of most goods and services change, the price level changes and firms respond by changing their output. As the price level rises, real GDP, or aggregate supply, rises. As the price level falls, real GDP falls.

Influences on GDP: Aggregate Demand
Aggregate demand is the amount of goods and services that will be purchased at all possible price levels. As price levels in the economy move up and down, individuals and firms change how much they buy, in the opposite direction to the change in aggregate supply. Any shift in aggregate supply or aggregate demand will have an impact on real GDP and the price level.

Aggregate Supply and Demand
Aggregate supply and demand represent supply and demand on a nationwide level. The far right-hand chart shows what happens to GDP and price levels when aggregate demand shifts. What do the positive and negative slopes of these curves mean?

Chapter 12: Gross Domestic Product and Growth, Section 2

Key Terms
business cycle: a period of macroeconomic expansion followed by one of macroeconomic contraction
expansion: a period of growth as measured by a rise in real GDP
economic growth: a steady, long-term increase in real GDP
peak: the height of an economic expansion, when real GDP stops rising
contraction: a period of economic decline marked by falling real GDP
trough: the lowest point of an economic contraction, when real GDP stops falling
recession: a prolonged economic contraction
depression: a recession that is especially long and severe
stagflation: a decline in real GDP combined with a rise in the price level
leading indicators: a set of key economic variables that economists use to predict future trends in a business cycle

Phases of a Business Cycle
Checkpoint: What are the four phases of a business cycle? Business cycles are made up of major changes in real GDP above or below normal levels. The business cycle consists of four phases: expansion, peak, contraction, and trough.

Contractions
There are three types of contractions, each with different characteristics. A recession is a prolonged economic contraction that generally lasts from 6 to 18 months and is marked by a high unemployment rate. A depression is a recession that is especially long and severe, characterized by high unemployment and low economic output. Stagflation is a decline in real GDP combined with a rise in the price level, or inflation.

Business Investment
Business cycles are affected by four main economic variables: business investment, interest rates and credit, consumer expectations, and external shocks. When the economy is expanding, business investment increases, which in turn increases GDP and helps maintain the expansion. When firms decide to decrease spending, the result is a decrease in GDP and the price level.

Interest Rates and Credit
Consumers often use credit to buy new cars, homes, electronics, and vacations.
If the interest rates on these goods rise, consumers are less likely to buy them. The same principle holds true for businesses that are deciding whether or not to buy new equipment or make large investments.

Consumer Expectations
If people expect that the economy is going to start to contract, they may reduce spending. High consumer confidence, though, will lead to people buying more goods, pushing up GDP.

External Shocks
Negative external shocks, like war breaking out in a country where U.S. banks and businesses have invested heavily, can have a great effect on business, causing GDP to decline. Positive external shocks, like the discovery of large oil deposits, can lead to an increase in a nation's wealth.

Business Cycle Forecasting
Checkpoint: Why is it difficult to predict business cycles? To predict the next phase of a business cycle, forecasters must anticipate movements in real GDP before they occur. Economists use leading indicators to help them make these predictions. The stock market is a leading indicator: it typically turns sharply downward before a recession.

The Great Depression
Before the 1930s, many economists believed that when an economy declined, it would recover quickly on its own. The Great Depression changed this belief and led economists to consider the idea that modern market economies could fall into long-lasting contractions. Not until World War II, more than a decade later, did the economy achieve full recovery. Declining GDP and high unemployment were two major signs of the Great Depression, the longest recession in U.S. history. In what year did the Great Depression hit its trough? How long did it take GDP to return to its pre-Depression peak?

Later Recessions
OPEC Embargo: In the 1970s, the United States experienced an external shock when the price of gasoline and heating fuels skyrocketed as a result of the OPEC embargo on oil shipped to the United States. The U.S. economy also experienced a recession in the early 1980s and another brief one in 1991, followed by a period of steady economic growth. The attacks of 9/11 led to another sharp drop in consumer spending in many service industries.

The Business Cycle Today
The economy began to grow slowly in 2001 and was surging by late 2003, with GDP growing at a rate of 7.5 percent over three months. However, growth slowed again as a result of high gas prices. The sub-prime mortgage crisis caused further decline, and 2009 marked a recession in the economy, but by the end of 2009, a rebound occurred.

Chapter 12: Gross Domestic Product and Growth, Section 3

Key Terms
real GDP per capita: real GDP divided by the total population of a country
capital deepening: the process of increasing the amount of capital per worker
saving: income not used for consumption
savings rate: the proportion of disposable income that is saved
technological progress: an increase in efficiency gained by producing more output without using more inputs

Measuring Economic Growth
The basic measure of a nation's economic growth rate is the percentage of change in real GDP over a period of time. Economists prefer a measuring system that takes population growth into account. For this, they rely on real GDP per capita.

GDP and Quality of Life
GDP measures the standard of living but it cannot be used to measure people's quality of life. In addition, GDP tells us nothing about how output is distributed across the population.
While real GDP per capita tells us little about individuals, it does give us a starting point for measuring a nation's quality of life. In general, though, nations with a high GDP per capita experience a greater quality of life.

Capital Deepening
A nation with a large amount of physical capital will experience economic growth. The process of increasing the amount of capital per worker, known as capital deepening, is one of the most important sources of growth in modern economies.

Saving and Investment
Checkpoint: How is saving linked to capital deepening? If the amount of money people save increases, then more investment funds are available to businesses. These funds can then be used for capital investment and expand the stock of capital in the business sector.

Population Growth
If the population grows while the supply of capital remains constant, the amount of capital per worker will shrink, which is the opposite of capital deepening. This process leads to lower standards of living. On the other hand, a nation with low population growth and an expanding capital stock will experience significant capital deepening.

Government
Checkpoint: Do higher tax rates increase or reduce investment? If government raises taxes, households will have less money. People will reduce saving, thus reducing the money available to businesses for investment. However, if government invests the extra tax revenues in public goods, like infrastructure, this will increase investment, resulting in capital deepening.

Foreign Trade
Foreign trade can result in a trade deficit, a situation in which the value of goods a country imports is higher than the value of goods it exports. If these imports consist of investment goods, running a trade deficit can foster capital deepening. When the funds are used for long-term investment, capital deepening can offset the negatives of a trade deficit by helping generate economic growth, helping a country pay back the money it borrowed in the first place.

Technological Progress
Technological progress is a key source of economic growth. It can result from new scientific knowledge, new inventions, and new production methods. Measuring technological progress can be done by determining how much growth in output comes from increases in capital and how much comes from increases in labor. Any remaining growth in output must come from technological progress. Causes of technological progress include:
- Scientific research
- Innovation (new products increase output and boost GDP and profits)
- Scale of the market (larger markets provide more incentives for innovation)
- Education and experience (increases human capital)
- Natural resources (increased natural resource use can create a need for new technology)
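The "remaining growth" idea on the technological-progress slide is usually formalized as growth accounting. A standard textbook form, not given in the slides themselves, is:

\[ \frac{\Delta Y}{Y} \;\approx\; \frac{\Delta A}{A} \;+\; \alpha\,\frac{\Delta K}{K} \;+\; (1 - \alpha)\,\frac{\Delta L}{L} \]

where Y is output, K is capital, L is labor, and α is capital's share of income. The term ΔA/A, the growth left over once capital and labor are accounted for, is the measured contribution of technological progress.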
The perimeter of a plane shape is the length of its outside boundary or the distance around its edges. An irregular shape does not have a definite shape. To determine the perimeter of such a shape, string or thread can be used to measure it: place the string around the edge, then straighten it out and measure it with a ruler from the marked part. A regular shape has a well-defined edge which may be straight lines or smooth curves. Examples are regular polygons and circles.

The Unit of Measurement
Perimeter is measured in length units. These are kilometres (km), metres (m), centimetres (cm) and millimetres (mm).

Example: Use a ruler to measure the perimeter of triangle ABC.
By measurement: AB = 21mm, BC = 30mm, AC = 14mm
Perimeter = total length of sides = AB + BC + AC = 21mm + 30mm + 14mm = 65mm

Using formulae to calculate perimeter
The longer side of a rectangle is called the length and is usually represented by the letter l. The shorter side is called the width or breadth and may be represented by w (or b). AB = DC = l cm and AD = BC = b cm.
Perimeter (P) = AB + BC + CD + DA = l + b + l + b = 2l + 2b = 2(l + b)
P = 2(l + b)
Note: this formula is also used to determine the perimeter of a parallelogram.

Example: The length of a rectangular room is 10m and the width is 6m. Find the perimeter of the room.
Length of the room, l = 10m; width/breadth of the room, w (or b) = 6m
Perimeter = 2(l + b) = 2(10m + 6m) = 2(16m) = 32m

Example: Calculate the perimeter of a square whose length is 8cm.
A square has all its four sides equal, so each side is l cm. The perimeter = l + l + l + l = 4l = 4 × 8 = 32cm
In general, perimeter of a square, P = 4l. This is also used to determine the perimeter of a rhombus.

Example: A rectangle has a perimeter of 74m. Find: (a) the length of the rectangle if its breadth is 17m, (b) the breadth of the rectangle if its length is 25m.
Note: since perimeter of a rectangle P = 2(l + b),
Length = P/2 − b; Breadth = P/2 − l
(a) Length = 74/2 − 17 = 37m − 17m = 20m
(b) Breadth = 74/2 − 25 = 37m − 25m = 12m

Exercises
- The perimeter of a square is 840cm. Find the length of the square in metres.
- A rectangle has sides of 9cm by 7.5cm. Find its perimeter.
- Esther fences a 3m by 4m rectangular plot to keep her chickens in. The fencing costs N 200 per metre. How much does it cost to fence the plot?

Perimeter of triangles
For an isosceles triangle with equal sides a and base b: perimeter = a + a + b = 2a + b
For an equilateral triangle with side a: perimeter = a + a + a = 3a

Example: An isosceles triangle has a perimeter of 250mm. If the length of one of the equal sides is 8cm, calculate the length of the unequal side.
First convert to the same unit of measurement: 250mm = 25cm
Sum of equal sides = 8cm + 8cm = 16cm
The length of the unequal side = 25cm − 16cm = 9cm

For a quadrilateral with sides p, q, r and s: perimeter = p + q + r + s
For an isosceles trapezium with equal sides a and parallel sides b and c: perimeter = a + b + a + c = 2a + b + c

Example: An isosceles trapezium has a perimeter of 50cm and the sizes of the unequal parallel sides are 12cm and 8cm. Calculate the size of one of the equal sides.
Perimeter = 50cm
Perimeter of an isosceles trapezium = 2 × (equal side) + b + c
50 = 2x + 8 + 12
50 = 2x + 20
50 − 20 = 2x + 20 − 20
2x = 30; x = 15cm
Therefore, one of the equal sides = 15cm

Perimeter of Circles
The circumference (C) of a circle is the distance around the circle. This means that the circumference of a circle is the same as its perimeter. AB = diameter, OA = OB = radii. But AB = OA + OB, i.e. d = r + r, so diameter d = 2 × radius (r), or radius r = d/2. The circumference, C, of a circle is given by C = πD, where D is the diameter of the circle. If R is the radius of the circle, then C = 2πR.
Therefore, C = πD or C = 2πR.

Example: Calculate the perimeter of a circle if its (a) diameter is 14cm, (b) radius is 4.9cm. (Take π = 22/7.)
(a) Diameter = 14cm. Perimeter, C = πD = 22/7 × 14 = 44cm
(b) Radius = 4.9cm. Perimeter = 2πR = 2 × 22/7 × 4.9 = 30.8cm

Example: Calculate the perimeter of these figures. (Take π = 22/7.)
(a) A semicircle is half of a circle. The diameter = 3.15cm.
The perimeter of the full circle = πD = 22/7 × 3.15 = 9.9cm
The length of the curved edge = 9.9/2 = 4.95cm
The perimeter of the shape = 4.95cm + 3.15cm = 8.1cm
(b) A quadrant is a quarter of a circle. The radius = 0.63m.
The perimeter of the full circle = 2πR = 2 × 22/7 × 0.63 = 3.96m
The length of the curved edge = 3.96/4 = 0.99m
Perimeter of the shape = 0.99m + 0.63m + 0.63m = 2.25m

Exercises
- Calculate the perimeter of a circle with radius 42cm. If a square has the same perimeter as the circle, calculate the length of one side of the square. (Take π = 22/7.)
- The three sides of a triangle are (x + 5)cm, (2x + 4)cm and (2x − 3)cm. (a) Find the perimeter of the triangle in terms of x. (b) If x = 10, find the perimeter of the triangle.

AREA OF PLANE SHAPES
The area of a plane shape is a measure of the amount of surface it covers or occupies. Area is measured in square units, e.g. square metres (m2), square millimetres (mm2).

Finding the areas of regular shapes

Area of Rectangles and Squares
A rectangle 5cm long by 3cm wide can be divided into squares of side 1cm as shown below. By counting, the area of the rectangle is 15cm2. If we multiply the length of the rectangle by its width the answer is also 15cm2, i.e. length × width = 5cm × 3cm = 15cm2.
In general, if A = area, l = length and w = width:
Area of a rectangle = length × width, i.e. A = l × w

Example: Calculate the area of a rectangle of length 6cm and width 3.5cm.
Area = length × width = 6cm × 3.5cm = 21cm2

Example: The area of a rectangular carpet is 30m2. Find the length of the shorter side in metres if the length of the longer side is 6000mm.
First convert the length, i.e. 6000mm, to metres: 6000mm = 6000/1000 m = 6m
If A = area, l = length and b = breadth, then breadth = A/l = 30/6 = 5m
The length of the shorter side is 5m.

A square has all its sides equal.
Area = (length of one side)2, i.e. A = l2
If the area A is given, then the length l can be found by taking the square root of both sides, i.e. l = √A.

Example: Calculate the area of a square advertising board of length 5m.
Area of square board = l × l = 5m × 5m = 25m2

Area of shapes made from rectangles and squares
Example: Calculate the area of the shape below. All measurements are in metres and all angles are right angles.
The shape can be divided into a 3×3 square, a 6×10 rectangle and a 2×4 rectangle.
Area of shape = area of square + area of 2 rectangles = ((3×3) + (6×10) + (2×4))m2 = 9 + 60 + 8 = 77m2

Area of parallelograms
Area of a parallelogram = base × height

Example: Calculate the area of a parallelogram if its base is 9.2cm and its height is 6cm.
Area of parallelogram = base × height = 9.2cm × 6cm = 55.2cm2

Area of Triangles
In general: Area of any triangle = 1/2 × base × height, i.e. half the area of the parallelogram (or rectangle) that encloses it.

Example: Calculate the area of the triangle with base 6cm and height 4cm.
Base (b) = 6cm, Height (h) = 4cm
Area = 1/2 × base × height = 1/2 × 6 × 4 = 12cm2

Example: Given that the area of triangle XYZ is 120cm2 and its height YD is 12cm, find the length XZ.
Let the base XZ be b cm; height YD (i.e. h) = 12cm
Area of triangle XYZ = 1/2 × base × height
120 = 1/2 × b × 12
b = 20cm
The length XZ is 20cm.

Area of trapezium
Area of trapezium = 1/2 (a + b) h
where (a + b) is the sum of the parallel sides and h is the height of the trapezium.

Example: Calculate the area of the trapezium with the dimensions shown in the figure below.
Area of trapezium = 1/2 (a + b) h; substituting the dimensions from the figure gives 168cm2.

Area of Circles
Area, A = πr2 or A = πd2/4

Example: Find the area of a circle with radius 4.9cm. (Take π = 22/7.)
Area of a circle = πr2 = 22/7 × 4.9 × 4.9 = 75.46cm2
The area of the circle is 75.46cm2.

Example: Find the area of a semicircle with diameter 20mm. (Take π = 3.14.)
Diameter, d = 20mm; radius, r = 20/2 = 10mm
Area of a semicircle = 1/2 × πr2 = 1/2 × 3.14 × 10 × 10 = 157mm2
Area of the semicircle = 157mm2

Exercises
- A string is wound 30 times around a cylindrical object of diameter 7m. Calculate the length of the string. (Take π = 22/7.)
- A rectangular garden is 20m by 18m. Calculate the area of a path 1m wide going round the outer edge of the garden.

General Evaluation/Revision Questions
- A regular polygon has all its sides …………… and all its angles …………..
- The distance around the circle is ………………………..
- What is the perimeter of a rhombus if the length of one side is 8cm?
- A circle of diameter 21cm has a perimeter of 66cm. If the circle is halved, determine the perimeter of the half.
- What is the perimeter of a rectangle that measures 11cm by 3cm? (a) 39cm (b) 28cm (c) 36cm (d) 26cm
- The diameter of a circle is 13.8cm long. Find the length of its radius. (a) 27.6cm (b) 7.6cm (c) 6.9cm (d) 6.4cm
- Two sides of an isosceles triangle are 3cm and 10cm. What must be the length of the third side? (a) 10cm (b) 6cm (c) 4cm (d) 8cm
- If the width of a rectangle is equal to the length of a square and the rectangle measures 6cm by 4cm, what is the perimeter of the square? (a) 26cm (b) 16cm (c) 24cm (d) 36cm
- What is the difference in the perimeter of the rectangle and the square in the question above? (a) 4cm (b) 6cm (c) 8cm (d) 2cm
- The diameter of a car wheel is 28cm; find its circumference. How far does the car move in metres when the wheel makes 150 turns? (Take π = 22/7.)
- (a) The longer side of a rectangle is 25cm and its perimeter is 80cm. Find the length of the shorter side. Determine its area. (b) The area of a parallelogram is 8.5m2 and its base is 500cm. Find its height.
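To connect these formulas with the Java programming material earlier in the document, here is a minimal sketch of the perimeter and area formulas above (illustrative code; it follows the lesson's convention of taking π = 22/7):

public class PlaneShapes {
    static final double PI = 22.0 / 7.0; // the lesson's approximation of pi

    static double rectanglePerimeter(double l, double b) { return 2 * (l + b); }
    static double rectangleArea(double l, double b) { return l * b; }
    static double circleCircumference(double r) { return 2 * PI * r; }
    static double circleArea(double r) { return PI * r * r; }

    public static void main(String[] args) {
        System.out.println(rectanglePerimeter(10, 6)); // 32.0, as in the room example
        System.out.println(circleCircumference(4.9));  // ≈ 30.8, as in the perimeter example
        System.out.println(circleArea(4.9));           // ≈ 75.46, as in the area example
    }
}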
In statistics, the p-value is a function of the observed sample results (a statistic) that is used for testing a statistical hypothesis. Before performing the test a threshold value is chosen, called the significance level of the test, traditionally 5% or 1% and denoted as α. If the p-value is equal to or smaller than the significance level (α), it suggests that the observed data are inconsistent with the assumption that the null hypothesis is true, and thus that hypothesis must be rejected and the alternative hypothesis is accepted as true. When the p-value is calculated correctly, such a test is guaranteed to control the Type I error rate to be no greater than α. The p-value is calculated as the lowest α for which we can still reject the null hypothesis for a given set of observations. An equivalent interpretation is that the p-value is the probability of finding the observed sample results, or "more extreme" results, when the null hypothesis is actually true (where "more extreme" is dependent on the way the hypothesis is tested). Since the p-value is used in frequentist inference (and not Bayesian inference), it does not in itself support reasoning about the probabilities of hypotheses, but serves only as a tool for deciding whether to reject the null hypothesis in favour of the alternative hypothesis. Statistical hypothesis tests making use of p-values are commonly used in many fields of science and social sciences, such as economics, psychology, biology, criminal justice and criminology, and sociology.

Basic concepts
The p-value is used in the context of null hypothesis testing in order to quantify the idea of statistical significance of evidence.[a] Null hypothesis testing is a reductio ad absurdum argument adapted to statistics. In essence, a claim is shown to be valid by demonstrating the improbability of the counter-claim that follows from its denial. As such, the only hypothesis which needs to be specified in this test, and which embodies the counter-claim, is referred to as the null hypothesis. A result is said to be statistically significant if it can enable the rejection of the null hypothesis. The rejection of the null hypothesis implies that the correct hypothesis lies in the logical complement of the null hypothesis. For instance, if the null hypothesis is assumed to be a standard normal distribution N(0,1), then the rejection of this null hypothesis can mean either (i) the mean is not zero, or (ii) the variance is not unity, or (iii) the distribution is not normal. In statistics, a statistical hypothesis refers to a probability distribution that is assumed to govern the observed data.[b] If X is a random variable representing the observed data and H is the statistical hypothesis under consideration, then the notion of statistical significance can be naively quantified by the conditional probability Pr(X | H), which gives the likelihood of the observation if the hypothesis is assumed to be correct. However, if X is a continuous random variable and we observed an instance x, then Pr(X = x | H) = 0. Thus this naive definition is inadequate and needs to be changed so as to accommodate the continuous random variables.
Nonetheless, it does help to clarify that p-values should not be confused with the probability of the hypothesis given the data, the probability of the hypothesis being true, or the probability of observing the given data.

Definition and interpretation

The p-value is defined as the probability, under the assumption of hypothesis H, of obtaining a result equal to or more extreme than what was actually observed. Depending on how we look at it, "more extreme than what was actually observed" can mean {X ≥ x} (a right-tail event), {X ≤ x} (a left-tail event), or the "smaller" of {X ≤ x} and {X ≥ x} (a double-tailed event). Thus the p-value is given by

- Pr(X ≥ x | H) for a right-tail event,
- Pr(X ≤ x | H) for a left-tail event,
- 2 min{Pr(X ≥ x | H), Pr(X ≤ x | H)} for a double-tail event.

The smaller the p-value, the larger the significance, because it tells the investigator that the hypothesis under consideration may not adequately explain the observation. The hypothesis H is rejected if any of these probabilities is less than or equal to a small, fixed, but arbitrarily pre-defined threshold value α, which is referred to as the level of significance. Unlike the p-value, the level α is not derived from any observational data, nor does it depend on the underlying hypothesis; the value of α is instead determined by the consensus of the research community that the investigator works in. Since the value x that defines the tail event is the realization of a random variable, the p-value is a function of X and a random variable in itself, distributed uniformly over the [0, 1] interval when H is true, assuming X is continuous. Thus, the p-value is not fixed. This implies that the p-value cannot be given a frequency-counting interpretation, since the probability has to be fixed for the frequency-counting interpretation to hold. In other words, if the same test is repeated independently, bearing upon the same overall null hypothesis, it will yield different p-values at every repetition. Nevertheless, these different p-values can be combined using Fisher's combined probability test. An instantiation of this random p-value can still be given a frequency-counting interpretation with respect to the number of observations taken during a given test, as per the definition, as the percentage of observations more extreme than the one observed, under the assumption that the null hypothesis is true. Lastly, the fixed pre-defined level α can be interpreted as the rate of falsely rejecting the null hypothesis (or Type I error), since Pr(p ≤ α | H) = α.

Styles for writing p-value

Depending on which style guide is applied, the "p" is styled either italic or not, capitalized or not, and hyphenated or not (p-value, p value, P-value, P value, with the "p" either italicized or not).

Calculation

Usually, X is a test statistic rather than the actual observations. A test statistic is a scalar function of all the observations, which summarizes the data by a single number. As such, the test statistic follows a distribution determined by the function used to define that test statistic and the distribution of the observational data. For the important case where the data are hypothesized to follow the normal distribution, depending on the nature of the test statistic, and thus the underlying hypothesis about it, different null hypothesis tests have been developed: the z-test for the normal distribution, the t-test for Student's t-distribution, and the F-test for the F-distribution.
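As a concrete illustration of the three tail definitions above, here is a minimal Python sketch (an illustrative example, not part of the original text) that computes right-, left-, and double-tailed p-values for a z statistic under a standard normal null, using the standard-library identity Φ(z) = erfc(−z/√2)/2.

```python
import math

def normal_cdf(z):
    # Standard normal CDF expressed via the complementary error function.
    return 0.5 * math.erfc(-z / math.sqrt(2))

def p_value(z, tail="double"):
    right = 1.0 - normal_cdf(z)    # Pr(X >= z | H)
    left = normal_cdf(z)           # Pr(X <= z | H)
    if tail == "right":
        return right
    if tail == "left":
        return left
    return 2.0 * min(left, right)  # double-tailed definition from the text

print(round(p_value(1.96), 3))  # ~0.05, the conventional two-tailed threshold
```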
When the data do not follow a normal distribution, it may still be possible to approximate the distribution of the test statistic by a normal distribution by invoking the central limit theorem for large samples, as in the case of Pearson's chi-squared test. Thus computing a p-value requires a null hypothesis, a test statistic (together with a decision about whether the researcher is performing a one-tailed test or a two-tailed test), and data. Even though computing the test statistic on given data may be easy, computing the sampling distribution under the null hypothesis, and then computing its CDF, is often a difficult computation. Today this computation is done using statistical software, often via numeric methods (rather than exact formulas), while in the early and mid 20th century it was done via tables of values, from which one interpolated or extrapolated p-values. Rather than using a table of p-values, Fisher instead inverted the CDF, publishing a list of values of the test statistic for given fixed p-values; this corresponds to computing the quantile function (inverse CDF).

Examples

Here a few simple examples follow, each illustrating a potential pitfall.

One roll of a pair of dice

Suppose a researcher rolls a pair of dice once and assumes a null hypothesis that the dice are fair. The test statistic is "the sum of the rolled numbers" and the test is one-tailed. The researcher rolls the dice and observes that both dice show 6, yielding a test statistic of 12. The p-value of this outcome is 1/36, or about 0.028, because under the null hypothesis the 36 possible outcomes of the two dice are equally likely, and only one of them yields the highest possible test statistic of 12. If the researcher assumed a significance level of 0.05, he or she would deem this result significant and would reject the hypothesis that the dice are fair. In this case, a single roll provides a very weak basis (that is, insufficient data) from which to draw a meaningful conclusion about the dice. This illustrates the danger of blindly applying p-values without considering the experimental design.

Five heads in a row

Suppose a researcher flips a coin five times in a row and assumes a null hypothesis that the coin is fair. The test statistic of "total number of heads" can be one-tailed or two-tailed: a one-tailed test corresponds to seeing if the coin is biased towards heads, while a two-tailed test corresponds to seeing if the coin is biased either way. The researcher flips the coin five times and observes heads each time (HHHHH), yielding a test statistic of 5. In a one-tailed test, this is the most extreme value out of all possible outcomes, and yields a p-value of (1/2)^5 = 1/32 ≈ 0.03. If the researcher assumed a significance level of 0.05, he or she would deem this result significant and would reject the hypothesis that the coin is fair. In a two-tailed test, a test statistic of zero heads (TTTTT) is just as extreme, and thus the data of HHHHH would yield a p-value of 2 × (1/2)^5 = 1/16 ≈ 0.06, which is not significant at the 0.05 level. This demonstrates that specifying a direction (on a symmetric test statistic) halves the p-value (increases the significance) and can mean the difference between data being considered significant or not.

Sample size dependence

Suppose a researcher flips a coin some arbitrary number of times (n) and assumes a null hypothesis that the coin is fair. The test statistic is the total number of heads, and the test is two-tailed.
Suppose the researcher observes heads for each flip, yielding a test statistic of n and a p-value of 2/2^n. If the coin were flipped only 5 times, the p-value would be 2/32 = 0.0625, which is not significant at the 0.05 level. But if the coin were flipped 10 times, the p-value would be 2/1024 ≈ 0.002, which is significant at the 0.05 level. In both cases the data suggest that the null hypothesis is false (that is, the coin is not fair somehow), but changing the sample size changes the p-value and significance level. In the first case the sample size is not large enough to allow the null hypothesis to be rejected at the 0.05 level (in fact, the p-value can never be below 0.05). This demonstrates that in interpreting p-values, one must also know the sample size, which complicates the analysis.

Alternating coin flips

Suppose a researcher flips a coin ten times and assumes a null hypothesis that the coin is fair. The test statistic is the total number of heads, and the test is two-tailed. Suppose the researcher observes alternating heads and tails with every flip (HTHTHTHTHT). This yields a test statistic of 5 and a p-value of 1 (completely unexceptional), as this is the expected number of heads. Suppose instead that the test statistic for this experiment was the "number of alternations" (that is, the number of times when H followed T or T followed H), which is again two-tailed. This would yield a test statistic of 9, which is extreme, and has a p-value of 4/1024 = 1/256 ≈ 0.004 (the two maximally alternating sequences, together with the two equally extreme constant sequences, out of the 2^10 equally likely sequences). This would be considered extremely significant, well beyond the 0.05 level. These data indicate that, in terms of one test statistic, the data set is extremely unlikely to have occurred by chance, though it does not suggest that the coin is biased towards heads or tails. By the first test statistic, the data yield a high p-value, suggesting that the number of heads observed is not unlikely. By the second test statistic, the data yield a low p-value, suggesting that the pattern of flips observed is very, very unlikely. There is no "alternative hypothesis" (so only rejection of the null hypothesis is possible), and such data could have many causes: the data may instead be forged, or the coin may have been flipped by a magician who intentionally alternated outcomes. This example demonstrates that the p-value depends completely on the test statistic used, and illustrates that p-values can only help researchers to reject a null hypothesis, not consider other hypotheses.

Impossible outcome and very unlikely outcome

Suppose a researcher flips a coin two times and assumes a null hypothesis that the coin is unfair: both sides are heads. The test statistic is the total number of heads (one-tailed). The researcher observes one head and one tail (HT), yielding a test statistic of 1 and a p-value of 0. In this case the data are inconsistent with the hypothesis: for a two-headed coin, a tail can never come up. In this case the outcome is not simply unlikely under the null hypothesis but in fact impossible, and the null hypothesis can be definitively rejected as false. In practice such experiments almost never occur, as all data that could be observed would be possible under the null hypothesis (albeit unlikely).
If the null hypothesis were instead that the coin came up heads 99% of the time (otherwise the same setup), the p-value would instead be Pr(TT) + Pr(HT) + Pr(TH) = 0.01² + 2 × 0.99 × 0.01 = 0.0199.[c] In this case the null hypothesis could not definitely be ruled out, as the outcome is unlikely under the null hypothesis but not impossible; but the null hypothesis would be rejected at the 0.05 level, and in fact at the 0.02 level, since the outcome is less than 2% likely under the null hypothesis.

As an example of a statistical test, an experiment is performed to determine whether a coin flip is fair (equal chance of landing heads or tails) or unfairly biased (one outcome being more likely than the other). Suppose that the experimental results show the coin turning up heads 14 times out of 20 total flips. The null hypothesis is that the coin is fair, and the test statistic is the number of heads. If we consider a right-tailed test, the p-value of this result is the chance of a fair coin landing on heads at least 14 times out of 20 flips. This probability can be computed from binomial coefficients as

Pr(at least 14 heads) = (1/2^20) × [C(20,14) + C(20,15) + … + C(20,20)] = 60,460/1,048,576 ≈ 0.058.

This probability is the p-value, considering only extreme results which favor heads. This is called a one-tailed test. However, the deviation can be in either direction, favoring either heads or tails. We may instead calculate the two-tailed p-value, which considers deviations favoring either heads or tails. As the binomial distribution is symmetrical for a fair coin, the two-sided p-value is simply twice the above calculated single-sided p-value; i.e., the two-sided p-value is 0.115. In the above example, we thus have:

- Null hypothesis (H0): The coin is fair, i.e. Prob(heads) = 0.5
- Test statistic: Number of heads
- Level of significance: 0.05
- Observation O: 14 heads out of 20 flips; and
- Two-tailed p-value of observation O given H0 = 2 × min(Prob(no. of heads ≥ 14), Prob(no. of heads ≤ 14)) = 2 × min(0.058, 0.978) = 2 × 0.058 = 0.115.

Note that Prob(no. of heads ≤ 14) = 1 − Prob(no. of heads ≥ 14) + Prob(no. of heads = 14) = 1 − 0.058 + 0.036 = 0.978; however, the symmetry of the binomial distribution makes this an unnecessary computation for finding the smaller of the two probabilities. Here the calculated p-value exceeds 0.05, so the observation is consistent with the null hypothesis, as it falls within the range of what would happen 95% of the time were the coin in fact fair. Hence, we fail to reject the null hypothesis at the 5% level. Although the coin did not fall evenly, the deviation from the expected outcome is small enough to be consistent with chance. However, had one more head been obtained, the resulting p-value (two-tailed) would have been 0.0414 (4.14%). This time the null hypothesis – that the observed result of 15 heads out of 20 flips can be ascribed to chance alone – is rejected when using a 5% cut-off.
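The coin-flip arithmetic in the example above is simple to verify with the standard library; this is a sketch of the calculation, not a prescribed method.

```python
from math import comb

n, k = 20, 14  # 14 heads out of 20 flips under a fair-coin null

# Exact binomial tail probabilities with Prob(heads) = 0.5.
p_at_least = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n  # Pr(heads >= 14)
p_at_most = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n   # Pr(heads <= 14)

one_tailed = p_at_least
two_tailed = 2 * min(p_at_least, p_at_most)

print(round(one_tailed, 3))  # 0.058
print(round(two_tailed, 3))  # 0.115
```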
History

In the 1770s Laplace considered the statistics of almost half a million births. The statistics showed an excess of boys compared to girls. He concluded by calculation of a p-value that the excess was a real, but unexplained, effect. The p-value was first formally introduced by Karl Pearson in his Pearson's chi-squared test, using the chi-squared distribution and notated as capital P. The p-values for the chi-squared distribution (for various values of χ2 and degrees of freedom), now notated as P, were calculated in (Elderton 1902), collected in (Pearson 1914, pp. xxxi–xxxiii, 26–28, Table XII). The use of the p-value in statistics was popularized by Ronald Fisher, and it plays a central role in his approach to statistics.

In the influential book Statistical Methods for Research Workers (1925), Fisher proposes the level p = 0.05, or a 1 in 20 chance of being exceeded by chance, as a limit for statistical significance, and applies this to a normal distribution (as a two-tailed test), thus yielding the rule of two standard deviations (on a normal distribution) for statistical significance – see the 68–95–99.7 rule.[d] He then computes a table of values, similar to Elderton's, but, importantly, reverses the roles of χ2 and p. That is, rather than computing p for different values of χ2 (and degrees of freedom n), he computes values of χ2 that yield specified p-values, specifically 0.99, 0.98, 0.95, 0.90, 0.80, 0.70, 0.50, 0.30, 0.20, 0.10, 0.05, 0.02, and 0.01. This allowed computed values of χ2 to be compared against cutoffs, and encouraged the use of p-values (especially 0.05, 0.02, and 0.01) as cutoffs, instead of computing and reporting p-values themselves. The same types of tables were then compiled in (Fisher & Yates 1938), which cemented the approach. As an illustration of the application of p-values to the design and interpretation of experiments, in his following book The Design of Experiments (1935), Fisher presented the lady tasting tea experiment, which is the archetypal example of the p-value. To evaluate a lady's claim that she (Muriel Bristol) could distinguish by taste how tea is prepared (first adding the milk to the cup, then the tea, or first tea, then milk), she was sequentially presented with 8 cups: 4 prepared one way, 4 prepared the other, and asked to determine the preparation of each cup (knowing that there were 4 of each). In this case the null hypothesis was that she had no special ability, the test was Fisher's exact test, and the p-value was 1/C(8,4) = 1/70 ≈ 0.014, so Fisher was willing to reject the null hypothesis (consider the outcome highly unlikely to be due to chance) if all were classified correctly. (In the actual experiment, Bristol correctly classified all 8 cups.) Fisher reiterated the p = 0.05 threshold and explained its rationale, stating:

It is usual and convenient for experimenters to take 5 per cent as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results.

He also applies this threshold to the design of experiments, noting that had only 6 cups been presented (3 of each), a perfect classification would have yielded a p-value of only 1/C(6,3) = 1/20 = 0.05, which would not have met this level of significance. Fisher also underlined the frequentist interpretation of p, as the long-run proportion of values at least as extreme as the data, assuming the null hypothesis is true. In later editions, Fisher explicitly contrasted the use of the p-value for statistical inference in science with the Neyman–Pearson method, which he terms "Acceptance Procedures". Fisher emphasizes that while fixed levels such as 5%, 2%, and 1% are convenient, the exact p-value can be used, and the strength of evidence can and will be revised with further experimentation. In contrast, decision procedures require a clear-cut decision, yielding an irreversible action, and the procedure is based on costs of error, which he argues are inapplicable to scientific research.
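The combinatorics of the lady tasting tea experiment described above can likewise be checked in a couple of lines; the sketch below simply counts the equally likely labellings under the null hypothesis.

```python
from math import comb

# Under the null (no discrimination ability), every way of picking which
# 4 of the 8 cups had milk first is equally likely, so a perfect
# classification has probability 1 / C(8, 4).
p_eight_cups = 1 / comb(8, 4)  # 1/70 ~ 0.014, below Fisher's 0.05 standard
p_six_cups = 1 / comb(6, 3)    # 1/20 = 0.05, which would not meet the standard

print(round(p_eight_cups, 3), p_six_cups)  # 0.014 0.05
```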
Misunderstandings

Despite the ubiquity of p-value tests, this particular test for statistical significance has been criticized for its inherent shortcomings and the potential for misinterpretation. Comparing the p-value to a significance level yields one of two results: either the null hypothesis is rejected, or the null hypothesis cannot be rejected at that significance level (which does not imply that the null hypothesis is true). In Fisher's formulation, there is a disjunction: a low p-value means either that the null hypothesis is true and a highly improbable event has occurred, or that the null hypothesis is false. However, people interpret the p-value in many incorrect ways and try to draw other conclusions from p-values, which do not follow. The p-value does not in itself allow reasoning about the probabilities of hypotheses; this requires multiple hypotheses or a range of hypotheses, with a prior distribution of likelihoods between them, as in Bayesian statistics, in which case one uses a likelihood function for all possible values of the prior, instead of the p-value for a single null hypothesis. The p-value refers only to a single hypothesis, called the null hypothesis, and does not make reference to or allow conclusions about any other hypotheses, such as the alternative hypothesis in Neyman–Pearson statistical hypothesis testing. In that approach, one instead has a decision function between two alternatives, often based on a test statistic, and one computes the rates of Type I and Type II errors as α and β. However, the p-value of a test statistic cannot be directly compared to these error rates α and β – instead it is fed into a decision function.

- The p-value is not the probability that the null hypothesis is true, nor is it the probability that the alternative hypothesis is false; it is not connected to either of these. In fact, frequentist statistics does not, and cannot, attach probabilities to hypotheses. A comparison of Bayesian and classical approaches shows that a p-value can be very close to zero while the posterior probability of the null is very close to unity (if there is no alternative hypothesis with a large enough a priori probability that would explain the results more easily). This is Lindley's paradox. But there are also a priori probability distributions in which the posterior probability and the p-value have similar or equal values.
- The p-value is not the probability that a finding is "merely a fluke." Calculating the p-value is based on the assumption that every finding is a fluke, that is, the product of chance alone. Thus, the probability that the result is due to chance is in fact unity. The phrase "the results are due to chance" is used to mean that the null hypothesis is probably correct. However, that is merely a restatement of the inverse probability fallacy, since the p-value cannot be used to figure out the probability of a hypothesis being true.
- The p-value is not the probability of falsely rejecting the null hypothesis. This error is a version of the so-called prosecutor's fallacy.
- The p-value is not the probability that replicating the experiment would yield the same conclusion. Quantifying the replicability of an experiment was attempted through the concept of p-rep.
- The significance level, such as 0.05, is not determined by the p-value.
Rather, the significance level is decided by the person conducting the experiment (with the value 0.05 widely used by the scientific community) before the data are viewed, and is compared against the calculated p-value after the test has been performed. (However, reporting a p-value is more useful than simply saying that the results were or were not significant at a given level, as it allows readers to decide for themselves whether to consider the results significant.)

- The p-value does not indicate the size or importance of the observed effect. The two do vary together, however: the larger the effect, the smaller the sample size that will be required to get a significant p-value (see effect size).

Criticisms

Critics of p-values point out that the criterion used to decide "statistical significance" is based on an arbitrary choice of level (often set at 0.05). If significance testing is applied to hypotheses that are known to be false in advance, a non-significant result will simply reflect an insufficient sample size; a p-value depends only on the information obtained from a given experiment. The p-value is incompatible with the likelihood principle, and the p-value depends on the experimental design, or equivalently on the test statistic in question. That is, the definition of "more extreme" data depends on the sampling methodology adopted by the investigator; for example, the situation in which the investigator flips the coin 100 times, yielding 50 heads, has a set of extreme data that is different from the situation in which the investigator continues to flip the coin until 50 heads are achieved, yielding 100 flips. This is to be expected, as the experiments are different experiments, and the sample spaces and the probability distributions for the outcomes are different, even though the observed data (50 heads out of 100 flips) are the same for the two experiments.

Fisher proposed p as an informal measure of evidence against the null hypothesis. He called on researchers to combine p in the mind with other types of evidence for and against that hypothesis, such as the a priori plausibility of the hypothesis and the relative strengths of results from previous studies. Many misunderstandings concerning p arise because statistics classes and instructional materials ignore or at least do not emphasize the role of prior evidence in interpreting p; thus, the p-value is sometimes portrayed as the main result of statistical significance testing, rather than the acceptance or rejection of the null hypothesis at a pre-specified significance level. A renewed emphasis on prior evidence could encourage researchers to place p in the proper context, evaluating a hypothesis by weighing p together with all the other evidence about the hypothesis.

Related quantities

A closely related concept is the E-value, which is the average number of times in multiple testing that one expects to obtain a test statistic at least as extreme as the one actually observed, assuming that the null hypothesis is true. The E-value is the product of the number of tests and the p-value. An "adjusted" p-value results when a group of p-values is changed according to some multiple comparisons procedure so that each adjusted p-value can be compared to the same threshold level of significance (α) while keeping the Type I error controlled. The control is in the sense that the specific procedure controls it: it might control the familywise error rate, the false discovery rate, or some other error rate. A minimal sketch of one such procedure follows below.
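As promised, here is a sketch of the Bonferroni procedure, one of the simplest multiple-comparisons corrections (it controls the familywise error rate); the raw p-values below are hypothetical and purely illustrative.

```python
# Bonferroni correction: multiply each raw p-value by the number of tests,
# capping at 1, so each adjusted p-value can be compared to the same alpha
# while the familywise Type I error rate stays controlled.

def bonferroni(p_values):
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

raw = [0.01, 0.04, 0.03, 0.20]  # hypothetical raw p-values from four tests
print(bonferroni(raw))          # ~[0.04, 0.16, 0.12, 0.8]
```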
See also

- Confidence interval
- False discovery rate
- Fisher's method of combining p-values
- Generalized p-value
- Multiple comparisons
- Null hypothesis
- Statistical hypothesis testing

Notes

- [a] Note that the statistical significance of a result does not imply that the result is scientifically significant as well.
- [b] A statistical hypothesis is conceptually different from a scientific hypothesis.
- [c] The probability of TT is 0.01² = 0.0001; the probabilities of HT and TH are 0.99 × 0.01 and 0.01 × 0.99, which are equal, and adding all three yields 0.0199.
- [d] To be precise, p = 0.05 corresponds to about 1.96 standard deviations for a normal distribution (two-tailed test), and 2 standard deviations corresponds to about a 1 in 22 chance of being exceeded by chance, or p ≈ 0.045; Fisher notes these approximations.

References

- Nuzzo, R. (2014). "Scientific method: Statistical errors". Nature 506 (7487): 150. doi:10.1038/506150a.
- Hubbard, R. (2004). "Blurring the Distinctions Between p's and α's in Psychological Research". Theory & Psychology 14 (3): 295–327.
- Wetzels, R.; Matzke, D.; Lee, M. D.; Rouder, J. N.; Iverson, G. J.; Wagenmakers, E.-J. (2011). "Statistical Evidence in Experimental Psychology: An Empirical Comparison Using 855 t Tests". Perspectives on Psychological Science 6 (3): 291. doi:10.1177/1745691611406923.
- Babbie, E. (2007). The Practice of Social Research (11th ed.). Belmont, CA: Thomson Wadsworth.
- Stigler 1986, p. 134.
- Pearson 1900.
- Inman 2004.
- Hubbard & Bayarri 2003, p. 1.
- Fisher 1925, p. 47, Chapter III. Distributions.
- Dallal 2012, Note 31: Why P=0.05?.
- Fisher 1925, pp. 78–79, 98, Chapter IV. Tests of Goodness of Fit, Independence and Homogeneity; with Table of χ2, Table III. Table of χ2.
- Fisher 1971, II. The Principles of Experimentation, Illustrated by a Psycho-physical Experiment.
- Fisher 1971, Section 7. The Test of Significance.
- Fisher 1971, Section 12.1 Scientific Inference and Acceptance Procedures.
- Sterne, J. A. C.; Smith, G. Davey (2001). "Sifting the evidence – what's wrong with significance tests?". BMJ 322 (7280): 226–231. doi:10.1136/bmj.322.7280.226. PMC 1119478. PMID 11159626.
- Schervish, M. J. (1996). "P Values: What They Are and What They Are Not". The American Statistician 50 (3). doi:10.2307/2684655. JSTOR 2684655.
- Casella, George; Berger, Roger L. (1987). "Reconciling Bayesian and Frequentist Evidence in the One-Sided Testing Problem". Journal of the American Statistical Association 82 (397): 106–111. doi:10.1080/01621459.1987.10478396.
- Sellke, Thomas; Bayarri, M. J.; Berger, James O. (2001). "Calibration of p Values for Testing Precise Null Hypotheses". The American Statistician 55 (1): 62–71. doi:10.1198/000313001300339950. JSTOR 2685531.
- Casson, R. J. (2011). "The pesty P value". Clinical & Experimental Ophthalmology 39 (9): 849–850. doi:10.1111/j.1442-9071.2011.02707.x.
- Johnson, D. H. (1999). "The Insignificance of Statistical Significance Testing". Journal of Wildlife Management 63 (3): 763–772. doi:10.2307/3802789.
- Hubbard & Lindsay 2008.
- Goodman, S. N. (1999). "Toward Evidence-Based Medical Statistics. 1: The P Value Fallacy". Annals of Internal Medicine 130: 995–1004. doi:10.7326/0003-4819-130-12-199906150-00008. PMID 10383371.
- Goodman, S. N. (1999). "Toward Evidence-Based Medical Statistics. 2: The Bayes Factor". Annals of Internal Medicine 130: 1005–1013. doi:10.7326/0003-4819-130-12-199906150-00019. PMID 10383350.
- National Institutes of Health definition of E-value.
- Hochberg, Y.; Benjamini, Y. (1990).
"More powerful procedures for multiple significance testing". Statistics in Medicine 9 (7): 811–818. doi:10.1002/sim.4780090710. PMID 2218183. (page 815, second paragraph) ||This article includes a list of references, but its sources remain unclear because it has insufficient inline citations. (July 2014)| - Pearson, Karl (1900). "On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling". Philosophical Magazine Series 5 50 (302): 157–175. doi:10.1080/14786440009463897. - Elderton, William Palin (1902). "Tables for Testing the Goodness of Fit of Theory to Observation". Biometrika 1 (2): 155–163. doi:10.1093/biomet/1.2.155. - Fisher, Ronald (1925). Statistical Methods for Research Workers. Edinburgh: Oliver & Boyd. ISBN 0-05-002170-2. - Fisher, Ronald A. (1971) . The Design of Experiments (9th ed.). Macmillan. ISBN 0-02-844690-9. - Fisher, R. A.; Yates, F. (1938). Statistical tables for biological, agricultural and medical research. London. - Stigler, Stephen M. (1986). The history of statistics : the measurement of uncertainty before 1900. Cambridge, Mass: Belknap Press of Harvard University Press. ISBN 0-674-40340-1. - Hubbard, Raymond; Bayarri, M. J. (November 2003), P Values are not Error Probabilities, a working paper that explains the difference between Fisher's evidential p-value and the Neyman–Pearson Type I error rate α. - Hubbard, Raymond; Armstrong, J. Scott (2006). "Why We Don't Really Know What Statistical Significance Means: Implications for Educators". Journal of Marketing Education 28 (2): 114. doi:10.1177/0273475306288399. - Hubbard, Raymond; Lindsay, R. Murray (2008). "Why P Values Are Not a Useful Measure of Evidence in Statistical Significance Testing". Theory & Psychology 18 (1): 69–88. doi:10.1177/0959354307086923. - Stigler, S. (December 2008). "Fisher and the 5% level". Chance 21 (4): 12. doi:10.1007/s00144-008-0033-3. - Dallal, Gerard E. (2012). The Little Handbook of Statistical Practice. - Free online p-values calculators for various specific tests (chi-square, Fisher's F-test, etc.). - Understanding p-values, including a Java applet that illustrates how the numerical values of p-values can give quite misleading impressions about the truth or falsity of the hypothesis under test.
Population density

Population density (in agriculture: standing stock and standing crop) is a measurement of population per unit area or unit volume; it is a quantity of type number density. It is frequently applied to living organisms, and particularly to humans. It is a key geographic term. Lists of population densities for different regions are given below; lists of other population densities are in the See also section.

Biological population densities

Population density is population divided by total land area or water volume, as appropriate. Low densities may cause an extinction vortex and lead to further reduced fertility. This is called the Allee effect, after the scientist who identified it. Causes of reduced fertility at low population densities include:

- Increased problems with locating sexual mates
- Increased inbreeding

Human population density

For humans, population density is the number of people per unit of area, usually quoted per square kilometre or square mile (which may include or exclude, for example, areas of water or glaciers). Commonly this may be calculated for a county, city, country, another territory, or the entire world. The world's population is around 7 billion, and Earth's total area (including land and water) is 510 million square kilometres (197 million square miles). Therefore the worldwide human population density is around 7 billion ÷ 510 million = 13.7 per km² (35 per square mile). If only the Earth's land area of 150 million km² (58 million square miles) is taken into account, then human population density increases to 47 per km² (120 per square mile). This includes all continental and island land area, including Antarctica. If Antarctica is excluded, then population density rises to over 50 people per km² (over 130 per square mile). However, over half of the Earth's land mass consists of areas inhospitable to human habitation, such as deserts and high mountains, and population tends to cluster around seaports and fresh water sources. Thus this number by itself does not give a helpful measure of human population density.

Several of the most densely populated territories in the world are city-states, microstates, and dependencies. These territories have a relatively small area and a high urbanization level, with an economically specialized city population drawing also on rural resources outside the area, illustrating the difference between high population density and overpopulation. Cities with high population densities are considered by some to be overpopulated, though this will depend on factors like quality of housing and infrastructure and access to resources. Most of the most densely populated cities are in Southeast Asia, though Cairo and Lagos in Africa also fall into this category. City population and especially city area are, however, heavily dependent on the definition of "urban area" used: densities are almost invariably higher for the central city area than when suburban settlements and the intervening rural areas are included, as in agglomerations or metropolitan areas, the latter sometimes including neighboring cities. For instance, Milwaukee has a greater population density when just the inner city is measured and the surrounding suburbs are excluded. In comparison, based on a world population of seven billion, the world's inhabitants, as a loose crowd taking up ten square feet (about one square metre) per person (the Jacobs Method), would occupy a space a little larger than Delaware's land area.
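The worldwide density arithmetic above is, in code, just a pair of divisions; the figures below are the rounded ones quoted in the text.

```python
# World population density from the figures quoted above.
population = 7_000_000_000      # ~7 billion people
total_area_km2 = 510_000_000    # land and water
land_area_km2 = 150_000_000     # land only

print(round(population / total_area_km2, 1))  # 13.7 people per km^2
print(round(population / land_area_km2))      # 47 people per km^2
```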
Most densely populated countries/regions

By inhabited region (population per km²):

| Region | Population | Area (km²) | Density (per km²) |
| --- | --- | --- | --- |
| Indo-Gangetic Plain (Pakistani Punjab to Bangladesh and Assam) | 1 billion | 1,000,000 | 1,000 |
| Greater North China Plain | 600 million | 700,000 | 857 |
| Sichuan Basin | 110 million | 250,000 | 440 |
| Java Island | 145 million | 130,000 | 1,115 |
| Taiheiyo Belt (Japan) | 85 million | 60,000 | 1,417 |
| SE China coast (Guangdong, Hong Kong, Fujian) | 140 million | 100,000 | 1,400 |
| Nile Delta | 50 million | 50,000 | 1,000 |
| Southern India (Tamil Nadu, Pondicherry, Bengaluru, and Kerala) | 120 million | 170,000 | 706 |
| West Indian Coast (Maharashtra and Gujarat Coast) | 70 million | 100,000 | 700 |
| Colombian Andes (Colombia) | 40 million | 170,000 | 235 |
| Northern Europe (Benelux, North Rhine-Westphalia) | 44 million | 110,000 | 400 |
| NE US Coast | 45 million | 100,000 | 450 |
| S Central England | 40 million | 60,000 | 667 |
| Central Mexico | 40 million | 100,000 | 400 |
| Luzon Island | 50 million | 105,000 | 476 |
| South Korea | 50 million | 100,000 | 500 |
| Southeastern Brazil Coast | 50 million | 100,000 | 500 |

By political boundaries (population per km²): see the country and state lists in the See also section.

Other methods of measurement

Although arithmetic density is the most common way of measuring population density, several other methods have been developed to provide a more accurate measure of population density over a specific area (a small worked sketch of these measures follows the reference list below):

- Arithmetic density: the total number of people / area of land (measured in square miles or square kilometres).
- Physiological density: the total population / area of arable land.
- Agricultural density: the total rural population / area of arable land.
- Residential density: the number of people living in an urban area / area of residential land.
- Urban density: the number of people inhabiting an urban area / total area of urban land.
- Ecological optimum: the density of population that can be supported by the natural resources.

See also

- Human geography
- Idealized population
- Optimum population
- Population bottleneck
- Population genetics
- Population health
- Population momentum
- Population pyramid
- Rural transport problem
- Small population size
- Distance sampling
- List of population concern organizations

Lists of entities by population density

- List of cities by population density
- List of city districts by population density
- List of European cities proper by population density
- List of islands by population density
- List of countries by population density
- List of U.S. states by population density

References

- Matt Rosenberg, Population Density. Geography.about.com. March 2, 2011. Retrieved December 10, 2011.
- Minimum viable population size. Eoearth.org (March 6, 2010). Retrieved December 10, 2011.
- U.S. & World Population Clocks. Census.gov. Retrieved December 10, 2011.
- World. CIA World Handbook.
- Department of Economic and Social Affairs, Population Division (2009). "World Population Prospects, Table A.1" (PDF). 2008 revision. United Nations. Retrieved March 12, 2009.
- The Monaco government uses a smaller surface-area figure, resulting in a population density of 18,078 per km².
- Human Population. Global Issues. Retrieved December 10, 2011.
- The largest cities in the world by land area, population and density. Citymayors.com. Retrieved December 10, 2011.
- The Population of Milwaukee County. Wisconline.com. Retrieved December 10, 2011.
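Returning to the measurement definitions listed above, here is the promised sketch: a few of those density measures computed for an imaginary region. All input figures are hypothetical and purely illustrative.

```python
# Hypothetical figures for an imaginary region, used only to illustrate
# the density definitions from the text.
total_population = 5_000_000
rural_population = 1_500_000
land_area_km2 = 50_000
arable_area_km2 = 12_000

arithmetic_density = total_population / land_area_km2       # people per km^2 of land
physiological_density = total_population / arable_area_km2  # people per km^2 of arable land
agricultural_density = rural_population / arable_area_km2   # rural people per km^2 of arable land

print(arithmetic_density, physiological_density, agricultural_density)
# 100.0, ~416.7, 125.0
```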
GCSE Ratios and Proportion

GCSE Ratios and Proportion may be a bit of a mouthful, but it's the last main topic you need to cover for your maths exam, so you can breathe a sigh of relief. Fortunately, if you have revised the other areas of GCSE Maths, especially Number, you will already have built a foundation of knowledge to help you get started. This doesn't mean you can take it easy: this section is more than basic maths, so don't get lazy. This is where you can grab your A* with both hands, so keep up the good work!

There are three parts to this section, but this doesn't mean you should divide your time equally between them. Rates of change can be a difficult topic to grasp, so keep this in mind when you allocate your study time, which you can do easily with GoConqr's free study planner. We're here to help you get to grips with the questions you can be asked on this topic in the exam. Check out our GCSE Maths page for information on the other sections and some revision tips and exam hints.

Ratio

A ratio compares amounts of one thing to another. For example, if the ratio of one length to another is 1:2, the second length is twice as long as the first. Ratios are usually written in the form a:b, and questions on your exam paper may ask you to find an amount when you are given a ratio. Decimals and fractions can also be included in a ratio. There are many uses of ratios in everyday life, such as in recipes or map scales. Have you ever seen a ratio in the corner of a map? This helps you calculate the real distance from one place to another: in real life, things that are 5cm apart on a map with a scale of 1:5000 will be 25,000cm apart, i.e. 250 metres. (A quick sketch of this calculation in code appears at the end of this section.)

Proportion

Related to ratios is the study of proportion. You will need to have an understanding of how proportions and ratios relate to each other. This is important when you are revising both of these topics, so if you are a fan of fractions, you should have no problems. If two amounts are in direct proportion to each other, then they increase or decrease in proportion with each other. This involves very simple mathematics such as doubling or halving. For higher grades, students will also need to understand how inverse proportion works. Mainly, you will be asked to find the constant of variation.

Measures

Measures also fits into a few other GCSE Maths topics, including Number and Geometry. At this point you should already be aware of a vast number of measurements, so you can work on comparing and converting different units. You will also need to be familiar with compound units or compound measures, which are derived from two other measurements. You can use the triangle method to memorise how to calculate mass, density and volume, as well as distance, speed and time.

Rates of Change

As this is the difficult part of the topic, you only need to cover this if you are studying maths at the higher level. An example of a simple rate of change is the acceleration of a car: this is the rate of change of velocity. Graphs are an important part of determining a rate of change. At this level, you should be able to interpret the gradient of a straight-line graph as the rate of change. You also need to be able to solve growth and decay problems, including compound interest.
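As promised above, here is a quick sketch of the map-scale calculation in Python; the helper names are made up for illustration.

```python
# A 1:5000 scale means 1 cm on the map represents 5000 cm on the ground.

def real_distance_m(map_cm, scale=5000):
    ground_cm = map_cm * scale
    return ground_cm / 100  # convert centimetres to metres

print(real_distance_m(5))  # 250.0 metres, matching the example in the text

# Direct proportion: y = kx, so the constant of variation is k = y / x.
def constant_of_variation(x, y):
    return y / x

print(constant_of_variation(4, 10))  # k = 2.5
```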
Geography of Earth

Soon after the formation of the earth, it was differentiated into a metallic core, a silicate mantle and a crust, which, along with surface water, made it different from the other planets in our Solar System. The formation of the early mantle was important, as it consisted primarily of ferromagnesian silicate minerals, some of which contained water as an essential component. The Earth's atmosphere and hydrosphere developed from the degassing (loss of gaseous elements such as carbon, hydrogen and oxygen) of the early-formed core and mantle during volcanic activity.

Geographic Structure of Earth

Geography of Earth's Core

- It has been found that at a depth of about 2,900 km (1,800 miles), the S earthquake waves, which can pass through only solid material, suddenly disappear. Further, at this depth the velocity of the P earthquake waves, which can travel through solids, liquids and gases, abruptly decreases from 13.7 km per second to 8 km per second. This has been identified as the outer limit of the core. It is possible to determine this limit very clearly because at this depth the P waves suddenly decrease in velocity and the S waves are unable to penetrate.
- On the basis of evidence from meteorites and our knowledge about density, it has been concluded that the core is primarily made of iron in a liquid condition. At a depth of 5,080 km (3,160 miles) there is again some increase in the velocity of P waves. From this it is concluded that the interior of the core is probably solid, the high pressure having solidified the iron.
- Thus it is possible to subdivide the core into two parts: the inner core and the outer core. The radius of the inner core is approximately 1,250 km (about 780 miles). The combined radius of the inner and outer core taken together is 3,500 km, or about 2,200 miles, which is more than half of the radius of the earth. It is estimated that the maximum temperature of the core is 5,500°C (10,000°F) or slightly less, and the specific gravity is 13. The outer core has sometimes also been referred to as the Shell.
- The inner and the outer cores taken together are equivalent to the "nife" of Suess.

Geography of Mantle

- Above the core and below the crust there is a thick intermediate layer called the mantle. Its thickness is about 2,900 km (1,800 miles). This layer is solid, as both P and S waves pass through it. The specific gravity varies from 3 to 3.5 in the upper part of the mantle to about 4.5 or more in the lower part.
- The mantle is composed of dense and rigid rocks dominated by minerals rich in magnesium and iron. These rocks are probably very similar to dunites and peridotites.
- It is possible that the mantle is subdivided into thinner layers, each containing a distinctive group of minerals. But broadly speaking, we may subdivide it into two parts: the lower mantle, or mesosphere, and the upper mantle, or asthenosphere.
- The lower part of the asthenosphere, like the mesosphere, is solid, but the upper part of the asthenosphere is plastic and in a partially molten condition. The velocity of earthquake waves decreases in the asthenosphere, which is therefore referred to as the Low Velocity Zone.

Geography of Earth's Crust

- This is the uppermost and the thinnest layer of the earth. Its thickness varies from 16 to 40 km (10 to 25 miles).
In the continental areas its thickness is about 40 km (25 miles) and has been found to vary from 30 to 65 km (18 to 40 miles).

- The crust is composed of various kinds of rocks. In its uppermost part we find sedimentary rocks. This sedimentary layer is not continuous over the entire surface of the earth and is generally thin. Its thickness is usually less than 2 miles (3.2 km), but in areas of folded mountains this may increase to 20 miles (32 km) or more.
- Below the sedimentary cover is a layer of crystalline rocks, consisting of granites and gneisses in its upper section and basaltic rocks in its lower section. Sometimes these crystalline rocks cover wide areas on the surface of the earth, such as in Western Australia, Peninsular India, Middle Africa, the Brazilian plateau, Eastern Canada, Scandinavia and North-east Asia. Underneath the ocean basins, the thickness of the crust is less, varying from 5 to 10 km (3 to 6 miles). Here the sedimentary layer is either thin or absent. Even the granitic layer is absent, and we come across mostly basaltic rocks.
- Thus we may subdivide the earth's crust into two parts: (i) an upper, discontinuous layer, composed mostly of granitic rocks and confined to the continents, and (ii) a lower, continuous layer, which is exposed on the floor of the ocean basins but is found below the surface on the continents. The average specific gravity of this lower layer is 3.0. These two sub-divisions are roughly equivalent to the sial and sima, though sima was considered to be equivalent to the intermediate layer (mantle) by Suess.
- The contact zone of the crust and the mantle is called the Mohorovicic or Moho discontinuity. Here the rocks differ in chemical composition from those below and above, and the earthquake P waves suddenly increase in velocity, from 6 to 7 km per second in the crust to 8.2 km per second below.
- The Moho is found at a depth of 45 to 60 km, and sometimes 70 km, from the surface in the principal fold mountain areas, as the crust is thicker there. On the other hand, the Moho is found at a depth of only 5 to 7 km below the floor of the oceanic plains, as the crust is thinner here.
- Outside the mountainous zones, the Moho is usually met at a depth of 30 to 35 km below the continental surfaces.
- The lithosphere actually includes the crust and the uppermost part of the plastic asthenosphere. The land masses are composed mainly of brittle rocks and are 70 to 100 km (about 45 to 60 miles) thick.
- As the thin layer of the lithosphere rests on the partially melted layer of the asthenosphere, the lithosphere is unstable and liable to be easily deformed. This has important implications for the recent theories of plate tectonics and sea-floor spreading, as well as for mountain building and volcanic action.
- The forces or movements responsible for the formation of relief features and the changes occurring in them are known as Earth Movements. These forces are divided into two broad categories:
- Endogenetic forces, which cause land upliftment, subsidence, folding, faulting, earthquakes, volcanism, etc.
- Exogenetic forces, which cause destruction of relief features through weathering, erosional and depositional activities.

Geography of Endogenetic Forces

- The forces coming from within the earth and causing horizontal and vertical movements are known as endogenetic forces. It is these movements which lead to land upliftment and subsidence, folding and faulting, earthquakes and volcanism, etc.
Endogenetic movements are responsible for giving birth to major relief features such as mountains, plateaus, plains, valleys, etc. These endogenetic movements fall into two major categories on the basis of intensity:

- Sudden Movements: Sudden movements result in sudden and rapid events such as earthquakes and volcanic eruptions. It must be noted that these events are the result of a long period of preparation deep within the earth; only their effects, in the form of earthquakes and volcanic eruptions, are experienced as sudden events. These are also termed constructive movements, as they produce certain relief features such as volcanic mountains and lava plateaus (e.g., the Deccan).
- Diastrophic Movements: These movements, comprising both vertical and horizontal movements, operate very slowly, and their effects become perceptible only after thousands or millions of years. These movements are further sub-divided into:
- Epeirogenetic Movements: These movements affect the continental masses, causing their uplift and subsidence, or emergence and submergence, through upward and downward movements respectively. Epeirogenetic movements are, in fact, vertical movements.
- Orogenetic Movements: These movements are caused by the endogenetic forces working in a horizontal manner, and they involve folding, bending, faulting and thrusting. These endogenetic forces, also known as tangential forces, are of two types:
- Compressional movement (convergent movement): When orogenetic (horizontal) forces operate face to face, they cause folding of the rock strata of the earth.
- Tensional movement (divergent movement): When horizontal forces operate in opposite directions, they produce cracks, fractures and faults in the crustal parts of the earth.

Geography of Folds

- Wave-like bends are formed in the crustal rocks due to the tangential compressive force resulting from horizontal movement caused by the endogenetic forces originating deep within the earth. Such bends are called 'folds', wherein some parts are bent up and some parts are bent down. The upfolded rock strata in arch-like form are called 'anticlines', while the downfolded, trough-like structure is called a 'syncline'. The two sides of a fold are called the limbs of the fold.

Geography of Fold Types

- Symmetrical Folds are simple folds, both limbs of which incline uniformly. These folds are an example of an open fold.
- Asymmetrical Folds are characterized by unequal and irregular limbs. The two limbs incline at different angles.
- Monoclinal Folds are those in which one limb inclines moderately with a regular slope while the other limb inclines steeply, at nearly a right angle, so that the slope is almost vertical. It may be pointed out that vertical forces and movements are held responsible for the formation of monoclinal folds. It is also opined that monoclinal folds are formed due to unequal horizontal compressive forces coming from both sides.
- Isoclinal Folds are formed when the compressive forces are so strong that both limbs of the fold become parallel but not horizontal.
- Recumbent Folds are formed when the compressive forces are so strong that both limbs of the fold become parallel as well as horizontal.
- Overturned Folds are those folds in which one limb of the fold is thrust upon another limb due to intense compressive force.
- Plunge Folds are formed when the axis of the fold, instead of being parallel to the horizontal plane, becomes tilted and forms a plunge angle, which is the angle between the axis and the horizontal plane.
- Fan Folds represent an extensive and broad fold consisting of several minor anticlines and synclines. Such a fold resembles a fan. Such a feature is also called an anticlinorium or synclinorium.
- Open Folds are those in which the angle between the two limbs of the fold is more than 90° but less than 180° (i.e., an obtuse angle). Such open folds are formed by a compressive force of moderate intensity.
- Closed Folds are those folds in which the angle between the two limbs of a fold is an acute angle. Such folds are formed by intense compressive force.

Geography of Faults

- When the tensional force is moderate, the crustal rocks develop only cracks (fractures), but when intense tensional forces work, the rock beds are dislocated and displaced as well, resulting in the formation of faults. Thus, faults are those fractures in a rock body along which there has been an observable amount of displacement.
- Normal Faults: Faults having displacement of the two rock blocks in opposite directions are called normal faults. Movement of the rocks takes place vertically, so that one side is raised or upthrown. In the case of normal faults, there occurs extension of the faulted area.
- Reverse Faults (Thrust Faults): On account of extreme compression, along with the tensional force, rocks snap and one block of fractured rock is pushed over the underlying block. The fractured rock blocks move towards each other in reverse faults. There is thus a shortening of the crust in these faults.
- Lateral or Strike-Slip Faults: This fault is formed when the rock blocks are displaced horizontally along the fault plane due to horizontal movement. They are commonly produced where one tectonic plate slides past another at a transform fault boundary.

Landforms related to faulting

- Rift Valley: A linear depression or trough created by the sinking of the intermediate crustal rocks between two or more parallel faults is known as a rift valley. The East African Rift Valley System and the Rhine rift valley are famous examples of these morphological features.
- The Dead Sea, the second most saline lake in the world after Lake Van, is situated in a rift valley. The Narmada and Tapti rivers are believed to be flowing in rift valleys.
- Ramp Valley: When the blocks of rock on both sides are raised and the middle portion remains standing still, the resultant trough is known as a ramp valley. The Brahmaputra Valley is regarded as a ramp valley.
- Block Mountain: Also known as fault block mountains, these mountains are the result of faulting caused by tensile and compressive forces. They represent the upstanding parts of the ground between two faults or on either side of a rift valley. Noted examples are the Vosges and Black Forest mountains bordering the faulted Rhine rift valley, the Wasatch Range in the USA, and the Sierra Nevada mountains of California (considered to be the most extensive block mountain in the world).

Geography of Rocks

- Rocks are the solid materials making up the Earth's crust. They include hard and resistant materials like granite and marble, and loose materials like silt and sand. The minerals commonly found in rocks are feldspar and quartz. Metal-bearing compounds in rocks are known as 'ores'.
- Igneous rock is formed by (i) the solidification of magma and (ii) granitisation.
It is the ancestor of all other rocks and makes up 85 per cent or more of the earth's crust. It is also called primary rock, from which all other rocks are made.

- The term 'magma' refers to molten underground material.
- When the molten material reaches the surface, it is known as 'lava'.
- On the basis of origin, igneous rocks can be classified into intrusive and extrusive varieties.
- 'Batholiths' are intrusive rock bodies below the surface of the Earth.
- Igneous rocks can be classified broadly on four bases: (i) process of origin, (ii) place of origin, (iii) mineral composition, (iv) texture.
- The underground igneous rocks may be classified into two categories: (i) Hypabyssal Rocks, formed just below the surface of the Earth, usually in dykes and sills; (ii) Plutonic Rocks, formed deep beneath the ground in plutons and batholiths.
- Igneous rocks are made mainly of silica (SiO2), often combined with other oxides of aluminium, potassium, sodium, calcium, iron, magnesium, etc. Some of the important igneous rocks are granite, rhyolite, pegmatite, syenite, diorite, andesite, gabbro, basalt, dolerite and peridotite.
- Igneous rocks are formed by the cooling, solidification and crystallization of molten earth materials, known as magma and lava.
- Igneous rocks are also called primary rocks or parent rocks because they originated first, during the formation of the crust through the cooling of the earth's surface.
- They do not have distinct beds or strata like the sedimentary rocks.
- These are granular and crystalline rocks. The size of the crystals varies from one rock to another.
- Igneous rocks are generally hard, and water percolates through them with great difficulty, along the joints.
- Since water does not percolate easily, these rocks are less affected by chemical weathering.
- These rocks are more prone to mechanical weathering due to their granular structure.
- These rocks are non-fossiliferous.
- Most of the igneous rocks consist of silicate minerals.
- On the basis of chemical composition, igneous rocks are divided into: (i) Acidic Igneous Rocks, having more silica; they are relatively light rocks, e.g., granite. (ii) Basic Igneous Rocks, having a lower amount of silica; they are dark-coloured due to the predominance of ferro-magnesian minerals, e.g., gabbro, basalt, etc.
- On the basis of mode of occurrence, igneous rocks are classified into two major groups. Intrusive Igneous Rocks: when the rising magma is cooled and solidified below the surface of the earth, the resulting rocks are known as intrusive igneous rocks. These are further sub-divided into: (a) Plutonic Igneous Rocks: they result from the cooling of magma very deep inside the earth. Due to very slow cooling at that great depth, large grains develop, e.g., granite. (b) Hypabyssal Igneous Rocks: they are formed when magma cools and solidifies just beneath the earth's surface. They take different shapes and forms depending upon the hollow places in which they solidify.
- Batholith: These are large intrusive masses of igneous rock, usually granite, formed by the deep-seated intrusion of magma on a large scale. They are the largest kind of intrusive bodies and are usually dome-shaped with very steep walls. They are found in the core of most mountains.
- Laccolith: These are mushroom-shaped bodies having a convex upper surface and a relatively flat lower one.
The ascending magma forces the upper layers of the sedimentary rocks to take the form of a convex arch or a dome.
- Lopoliths : They represent inter-stratal, bowl-like bodies formed by the solidification of magma in a concave shallow basin and the sagging of rocks under the weight of the intruded magma.
- Phacoliths : These lens-shaped bodies are formed due to the injection of magma along the anticlines and synclines of folded strata.
- Sills : They are bed-like intrusive bodies formed by the solidification of magma parallel to the bedding planes of the sedimentary rocks.
- Dykes : These wall-like formations of solidified magma are found mostly perpendicular to the beds of sedimentary rocks.
- Sills, Laccoliths, Lopoliths and Phacoliths are concordant intrusive bodies, while Batholiths and Dykes are discordant intrusive bodies.
- Extrusive Igneous Rocks : These igneous rocks are formed by the cooling and solidification of molten lava on the earth's surface. Basalt is the most important example of extrusive igneous rocks, another being obsidian. These are generally fine-grained or glassy because of the quick rate of cooling of the lava. The extrusive igneous rocks are divided into two sub-groups : ( i ) Explosive Type : volcanic materials of violent volcanic eruptions include 'bombs' ( big fragments of rock ), lapilli ( pea-sized fragments ) and volcanic dusts and ashes. ( ii ) Quiet Type : in this, lava appears on the surface through cracks and fissures, and its continuous flow forms extensive lava plateaus, e.g. the Deccan Plateau and the Columbia Plateau ( USA ).
- Sedimentary rocks are constituted of sediments – material carried by air and water that settles down. About 70 per cent of the rock exposed at the surface of the earth is sedimentary rock. It is also called stratified rock because it is found in layers.
- Fossils are found in the layers of sedimentary rocks. A fossil is any part of a once-living thing preserved in the rock. It may be an entire body, a single bone or a set of footprints. Fossils tell us about life in the past, help us to date the rocks and past environments, and show what kinds of animals lived in the past.
- The layers of sedimentary rocks hold all the reserves of coal, oil and natural gas.
- 'Lithification' is the process that turns loose sediments into rock. It takes millions of years.
- 'Compaction' refers to the squeezing of sediments to form hard rock.
- 'Cementation' refers to the binding together of the compacted sediments.
- 'Limestone' is a chemically precipitated sedimentary rock which is formed by the compaction and lithification of the shells of marine organisms. Sandstone is formed by the compaction of quartz grains.
- The five factors which control the properties of sedimentary rocks are – ( i ) the kind of rock in the source area, ( ii ) the environment of the source area, ( iii ) earth movement ( tectonism ), ( iv ) the environment of the depositional area and ( v ) post-depositional changes of the sediments.
- Mechanically formed sedimentary rocks contain pieces of other rocks. Agents like running water, wind and moving ice break them into smaller pieces and deposit them at new sites, where they form new sedimentary rocks.
- Organically formed sedimentary rocks consist of the remains of animals and plants. Limestone, chalk and corals are the most common of this type of sedimentary rock.
- Chemically formed rocks are formed by the direct precipitation of mineral matter from solution. Rock salt is an example of such rocks.
Gypsum is also formed in a similar manner.
- 'Sandstone', a common sedimentary rock, is formed mainly of quartz particles cemented together by silica, lime or iron oxide.
- 'Shale' is the most abundant of all sedimentary rocks. It is compacted silt and clay. Kaolin and clay minerals are abundant in it.
- Rock gypsum is white to reddish in colour. Gypsum and rock salt are formed by the evaporation of sea water and salt lakes.
- 'Chalk' is a calcareous rock made up of microscopic skeletal elements from a variety of lime-secreting organisms. It is composed of almost pure calcium carbonate.
- Rocks formed from material derived from pre-existing rocks and from organic sources by the process of denudation are known as sedimentary rocks. In other words, rocks formed due to the aggregation and compaction of sediments are called sedimentary rocks.
- Sedimentary rocks contain different layers of sediments.
- About 75% of the surface area of the globe is covered by sedimentary rocks, while the remaining 25% is occupied by igneous and metamorphic rocks.
- Though sedimentary rocks cover the largest area of the earth's surface, they constitute only 5% of the composition of the crust, while 95% of the crust is composed of igneous and metamorphic rocks.
- Layers of sedimentary rocks are seldom found in their original horizontal manner. They are prone to folding and faulting due to compressional and tensional forces.
- Most of the sedimentary rocks are permeable and porous, but a few of them, such as clay, are non-porous.
- Shale is the most abundant sedimentary rock.
Geography of Metamorphic Rocks
- Metamorphic rocks are formed when igneous or sedimentary rocks are transformed underground. The alteration is generally caused by heat, pressure, chemical action, volcanic activity or movement of the earth's crust.
- 'Metamorphism' is the process by which an already consolidated rock undergoes changes in, or modification of, its texture, composition or structure, either physical or chemical.
- The metamorphic rocks may be classified into two categories : ( i ) foliated and ( ii ) non-foliated. A foliated rock is characterised by the parallel arrangement of platy minerals such as mica. In non-foliated metamorphic rocks, the mineral grains are equidimensional, e.g. quartzite and marble.
- The formation of metamorphic rock means that, in the course of time, shale may get changed to slate and schist, limestone to marble, sandstone to quartzite and granite to gneiss.
- Shale, after being squeezed and sheared under mountain-building forces, is altered into slate. This grey or brick-red rock splits neatly into thin plates and is used for roofing shingles and flagstones.
- Slate may change into schist with the continued application of pressure and internal shearing. Schist is the most advanced grade of metamorphic rock.
- Limestone, after undergoing metamorphism, becomes marble. Calcite and dolomite are its main rock-forming minerals.
- Quartzite is a metamorphosed form of sandstone. The slow movement of underground water carries silica into the sandstone and completely fills the spaces between the grains. Pressure and kneading of the rock are not essential in producing a quartzite.
- Gneiss is formed either from intrusive igneous rocks or from clastic sedimentary rocks that have been in close contact with intrusive magmas. It is coarser than schist.
- Metamorphic rocks are generally hard, and gems are found in metamorphic rocks.
- The rock cycle is a general model that describes how various geological processes create, modify and influence rocks. It expresses the relationship between the three types of rocks. The first part of the rock cycle takes place on the earth's surface. It is a continuous process through which old rocks are transformed into new ones.
- All of the rock types can be returned to the earth's interior by tectonic forces at areas known as 'subduction zones'.
Geography of Earthquakes
- When there is a sudden disturbance of rocks in the earth's interior, vibrations spread out in all directions from the source of the disturbance. An earthquake is the passage of these vibrations through the earth's crust.
- Earthquakes are caused either by volcanic explosions or by sudden movements of rocks, generally along fault planes. Accordingly, earthquakes are distinguished as volcanic and tectonic earthquakes.
- Tectonic earthquakes are felt over a much wider area and are more common. Volcanic earthquakes are generally of shallow origin, and their area of disturbance is relatively small.
- Tectonic earthquakes may originate at depths which vary from only a few kilometres to over 700 km. They are classified as :
- Shallow or Normal ( depth of origin less than 60 km )
- Intermediate ( depth of origin between 60 and 300 km )
- Deep ( depth of origin between 300 and 700 km )
Intensity & Magnitude of Earthquakes
- The place of origin of the earthquake below the ground is called the focus. The point or line on the surface vertically above the focus is called the epicentre.
- The intensity of the earthquake is also at its maximum at the epicentre.
- In 1902, a scale of intensity based on the amount of damage to various types of structures was developed by the Italian seismologist Mercalli.
- The magnitude of an earthquake measures the total amount of energy released during the earthquake. The Richter scale, devised by C.F. Richter, is used to describe the magnitude of an earthquake. ( A rough energy calculation is sketched after these notes on earthquake effects. )
Effects of Earthquakes
- The geomorphological effects are not spectacular, but they create sudden topographic changes.
- Vertical displacements along faults are common. On one side of the fault the surface rocks are raised, while on the other side they may be depressed.
- Due to the passage of land waves, fissures gape open at the crests.
- Glaciers are broken, and icebergs suddenly become abundant.
- In alluvial plains, the sandy deposits get compacted with the passage of earthquake vibrations, and water-filled sand escapes with great force.
- Groundwater may be disturbed by earthquakes in other ways. Lakes may be drained off by the opening of cracks, and new lakes may be formed in depressions.
- The appalling loss of human life is mainly related to the secondary events which are triggered by the earthquake, such as the collapse of buildings, fires, landslides, floods and seismic sea waves.
- The giant sea waves associated with earthquakes of high magnitude are called tsunami in Japan.
- In deep water, tsunami waves have wavelengths of hundreds of kilometres and travel at speeds of 700 to 1000 km an hour. The energy transmitted is immense because the whole depth of the water is involved. Therefore, when these waves reach shallow coastal waters and narrow bays and inlets, they have been known to grow into a wall of water 30 m or greater.
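Both quantitative claims above can be checked with two standard formulas that the notes themselves do not state: the Gutenberg-Richter energy relation for magnitude, and the shallow-water approximation for tsunami speed. A minimal sketch, assuming those two textbook relations:

```python
import math

# Gutenberg-Richter energy relation (an assumed textbook formula, not from
# these notes): log10(E) ~= 1.5 * M + 4.8, with E in joules.
def quake_energy_joules(magnitude):
    return 10 ** (1.5 * magnitude + 4.8)

# Each whole step of magnitude multiplies the released energy by ~31.6.
ratio = quake_energy_joules(7.0) / quake_energy_joules(6.0)
print(f"M7 releases {ratio:.1f}x the energy of M6")  # ~31.6x

# Shallow-water wave speed c = sqrt(g * d): applicable to tsunamis because
# their wavelengths (hundreds of km) far exceed even the deepest ocean.
def tsunami_speed_kmh(depth_m, g=9.81):
    return math.sqrt(g * depth_m) * 3.6  # convert m/s to km/h

# Over a 4000 m deep ocean this gives roughly 713 km/h, consistent with the
# notes' figure of 700 to 1000 km an hour.
print(f"{tsunami_speed_kmh(4000):.0f} km/h")
```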
Distribution of Earthquakes
- Earthquakes are not randomly distributed over the globe; they tend to occur in narrow, continuous belts.
- These earthquake belts encircle large, seismically quiet regions, which constitute the plates of the lithosphere. The plates are in continuous motion with respect to one another, and this relative movement of the plates, or plate motion, is the fundamental cause of earthquakes.
- Most earthquakes occur on the boundaries between lithospheric plates and arise directly from the motion between the plates, though there are some that cannot be so simply related to the movements of the plates.
- We may identify three well-defined belts or zones of seismic activity in the world where most earthquakes originate. These earthquake zones are – ( i ) a Circum-Pacific zone, ( ii ) a Mediterranean and Trans-Asiatic zone, and ( iii ) a zone following the mid-oceanic ridges, with an extension along the East African rift valley system.
- The Circum-Pacific Zone : The Circum-Pacific zone follows the oceanic trenches and the associated island arcs where plates converge and the oceanic lithosphere is thrust down into the asthenosphere and re-melted; this melting supplies the magma for the volcanic arcs which occur behind the trenches.
- On the western side of the Pacific, this zone, starting from Alaska, runs towards the south parallel to the Kurile, Japan, Marianas and Philippine trenches, beyond which it divides into two branches, one going west parallel to the Indonesian trench and the other running towards the Kermadec-Tonga trenches and on to New Zealand.
- On the eastern side of the Pacific, the earthquake zone follows the west coast of North America, being particularly important in California, although there is no ocean trench associated with it there. It continues southwards parallel to the Middle American trench and further south along the Peru and Chile trenches on the west coast of South America.
- Shallow, intermediate as well as deep earthquakes are found to occur along the Circum-Pacific belt, but deep earthquakes are practically restricted to this zone.
- We have already referred to the existence of the inclined earthquake zone known as the Benioff zone along the Pacific coasts, where the foci of earthquakes deepen from shallow through intermediate to deep in a landward direction from the trenches.
- This inclined earthquake or Benioff zone depicts the route along which the oceanic lithosphere descends into the mantle along the trenches, and its downward progress is recorded by a series of earthquakes which have their foci at various depths, many as deep as 300 to 700 km below sea level.
- Further, the earthquakes are not restricted to the plate boundary itself but occur over a broad zone, several hundred kilometres wide, adjacent to the plate boundary. Such earthquakes may be called plate-boundary-related earthquakes.
- They do not reflect the plate motions directly but are secondarily caused by the stresses set up at the plate boundary. The best examples of such earthquakes are to be found in Japan, where the plate boundaries lie in the deep ocean trenches off the Japanese islands, and that is where the great plate-boundary earthquakes occur.
- But many smaller earthquakes occur scattered throughout the Japanese islands, caused by the overall compression of the whole region.
- The Mediterranean and Trans-Asiatic zone : This earthquake belt extends along the Alpine mountain system of Europe and North Africa, through Asia Minor and the Caucasus, Iran, Pakistan and China. This zone is characterised mostly by larger earthquakes of shallow origin and some of intermediate origin. Deep-focus earthquakes are almost absent.
This belt is not associated with oceanic trenches but with the Tertiary and Recent orogenic belts, where continental plates collide and the lithosphere buckles under the force of the collision, forming the great mountain ranges.
- The mid-oceanic ridges and the African rift system zone : This zone lies mostly along the mid-oceanic ridges and the transform faults and contains mostly earthquakes of the shallow variety. These constitute major fracture zones where the plates diverge and new oceanic crust is being formed by the upwelling of magma on the mid-ocean ridges. An extension of this belt is to be found along the Red Sea and the rift valleys of East Africa.
Geography of Volcanoes
- A volcano is a vent or opening, usually circular in form, through which heated materials consisting of gases, water, liquid lava and fragments of rock are ejected from the highly heated interior to the surface of the earth.
- Volcanic eruptions are closely associated with several interconnected processes, such as ( i ) the gradual increase in temperature with increasing depth, at a rate of 1° C per 32 m, due to heat generated by the disintegration of radioactive elements inside the earth ( at this rate, rocks about 3.2 km down are already some 100° C hotter than the surface ), ( ii ) the origin of magma because of the lowering of the melting point caused by reduction in the pressure of overlying rocks due to fractures caused by the splitting of plates, ( iii ) the origin of gases and water vapour due to the heating of water, ( iv ) the ascent of magma due to pressure from gases and vapour, and ( v ) the occurrence of the volcanic eruption. These eruptions are closely associated with plate boundaries.
- Volcanoes are classified under different schemes.
- Classification on the basis of periodicity of eruptions : a ) active volcanoes, e.g. Etna, Stromboli, Pinatubo, etc.; b ) dormant volcanoes, e.g. Vesuvius, Barren Island volcano ( Andamans ); c ) extinct volcanoes, i.e. those for which no future eruption is expected.
- Classification on the basis of the mode of eruption : ( i ) central eruption or explosive type, e.g. Hawaiian type, Strombolian type, Vulcanian type, Pelean type, Vesuvian type; ( ii ) fissure eruption or quiet eruption type, e.g. lava flow or flood, mud flow and fumaroles.
- Large quantities of lava quietly well up from fissures and spread out over the surrounding countryside. Successive lava flows result in the growth of a lava platform, which may be extensive enough to be called a plateau, like the "Deccan", the "Columbia-Snake Plateau", the "Drakensberg Mountains", the "Victoria and Kimberley" districts of Australia and "Java Island".
Topography Produced by Volcanoes
- Cinder or Ash Cones : They are of low height and are formed of volcanic dust, ashes and pyroclastic matter. Their formation takes place due to the accumulation of finer particles around a volcano's vent.
- Composite Cones : They are formed due to the accumulation of different layers of various volcanic materials.
- Parasite Cones : When lava comes out of minor pipes branching off the main central pipe, parasite cones are formed.
- Basic Lava Cone : It has a smaller quantity of silica in its lava.
- Acidic Lava Cone : It has more silica in its lava.
- Lava Domes : These are formed due to the accumulation of solidified lava around volcanic vents.
- Lava Plugs : They are formed due to the plugging of volcanic pipes and vents when a volcano becomes extinct.
- Craters : The depression formed at the mouth of a volcanic vent is called a crater. When it is filled with water it becomes a 'crater lake', e.g. Lonar lake in Maharashtra.
- Calderas : Generally, an enlarged form of a crater is called a caldera.
It is formed due to the subsidence of a crater.
- Geysers : They are intermittent hot springs that from time to time spout steam and hot water from their craters.
- Fumaroles : A fumarole is a vent through which gases and water vapour are emitted.
- About 15% of the world's active volcanoes are found along the constructive or divergent plate margins, whereas 80% of volcanoes are associated with the destructive or convergent plate boundaries.
- The 'Circum-Pacific belt', or Pacific 'ring of fire', includes the volcanoes of the eastern and western coastal areas of the Pacific Ocean, island arcs and festoons off the east coast of Asia, etc.
- The Mid-Continental Belt includes the volcanoes of the Alpine mountain chain, the Mediterranean Sea and the fault zone of eastern Africa, e.g. Stromboli, Vesuvius, Etna, Kilimanjaro, Meru, Elgon, Virunga, etc.
- The Mid-Atlantic Belt : The volcanoes of these areas are mainly of the fissure eruption type. The most active volcanic area is Iceland.
- Cerro Aconcagua ( 6960 metres ), the highest peak in the Andes, South America, is an extinct volcano, while Kilimanjaro ( 5895 metres ) in Tanzania, Africa, and the volcano Llullaillaco ( 6723 metres ) in Chile, South America, are classified as dormant.
- Hawaiian-type volcanoes are characterised by the eruption of lava of basic ( basaltic ) composition with a temperature of about 1200° C which, overflowing from the crater, runs down the slopes at a speed of about 8 – 10 km an hour, forming lava streams whose length may be as great as 40 – 50 km and even 80 km. The lava is relatively poor in gases, and explosions are hardly ever noticed.
- 'Volcanic Pipes' refer to a special type of the so-called monogenic volcanoes, since their origin finds its expression in a single explosion without any emergence of lava. The diameter of a pipe is usually 80 to 100 metres, and the pipes are found filled with volcanic material.
- The gases accompanying the eruption of volcanoes of all types continue to emanate from the main vent, and from fissures on the slopes and at the foot of the volcano, long after the eruption proper. These are the so-called fumarole and solfatara gases.
- The Pacific zone is the largest volcanic zone. It includes 60 per cent of all the volcanoes and a large number of those that have recently become extinct. It extends across the Kamchatka Peninsula, the Kuril Islands, the islands of Japan, the Philippines, New Guinea, New Zealand and the Solomon Islands. It also passes through Antarctica and along the western coast of the Americas.
- The Mediterranean and Indonesian zone extends from the Alps across the Apennines, the Caucasus, the mountains of Asia Minor and the islands of the Malay Archipelago. This is the zone where the largest volcanoes of Europe are situated.
- The 'Atlantic Zone' is associated with the central and most elevated part of the submarine mid-Atlantic ridge and the faults accompanying it. It includes the Canary Islands, Cape Verde, the Azores, etc.
- The 'Indo-African Zone' is mainly represented by those volcanoes that are found on the islands of the Indian Ocean, for example the Comoro Islands, the island of Mauritius, Saint Paul Island, etc.
- Active Volcanoes : The volcanoes which continue to erupt periodically are called active volcanoes. Mauna Loa on the island of Hawaii, Etna in Sicily and Vesuvius in Italy are examples of active volcanoes. There are about 850 active volcanoes, of which nearly 80 are in the oceans. Volcanoes are said to be active when eruptions occur frequently.
- Dormant Volcanoes : The volcanoes which have been quiescent for a long time, but in which there is a possibility of eruption, are called dormant volcanoes. Fujiyama of Japan and Krakatoa of Indonesia are examples of dormant volcanoes. A volcano is termed dormant when no eruption has occurred during historic time.
- Extinct Volcanoes : The volcanoes in which eruption has completely stopped and is not likely to occur again are called extinct volcanoes, for example Popa Mountain in Burma ( Myanmar ), Mt. Kilimanjaro in Africa, and mountains in Mauritius, Madagascar and several other islands in the Indian Ocean.
Geographic Products of Volcanoes
- Solid Products : The solid products of volcanic activity are called pyroclastic materials, since they consist of fragmental material that emerged during volcanic explosions as a result of the ejection into the atmosphere and dispersion of huge masses of lava, as well as fragments of rock which are exploded parts of the crater.
- Liquid Products : The liquid products of a volcanic eruption are represented by lava. According to its chemical composition, lava can be acidic, intermediate, basic or ultrabasic. The chemical composition of the lava determines its most important physical properties, viscosity and mobility, which characterise the volcanic eruption.
- Gaseous Products : These are the volcanic products released from the main vent, funnel, subordinate vents and numerous fissures. 60 – 90% of them consist of water vapour, on whose condensation the eruption of a volcano is very often accompanied by heavy rains. In addition to water vapour, their composition includes H2S, SO2, CO, CO2, HF, NH4Cl, NH3, H2, H3BO3 and other gases.
Exogenetic Forces Geography
- The exogenetic or geomorphic processes or forces originate outside the earth's crust, mainly from the atmosphere, and have therefore been termed exogenetic processes. These processes are continuously engaged in the destruction of the relief features created by the endogenetic forces. The major geomorphic processes include weathering, mass wasting and erosion.
- Weathering is the disintegration and decomposition of rocks, while erosion is the process of removal, transportation and deposition of the weathered particles. These two processes together are known as "denudation". Weathering brings about the mechanical disintegration and chemical decay of rocks. Weather conditions are the most decisive influence, so the process is commonly known as weathering. However, the type and rate of weathering are also influenced by rock structure, topography and vegetation. Weathering is a static process. It is also the process of soil genesis. It is of three types :
- Mechanical Weathering : When a region undergoes mechanical weathering, rocks are broken into small pieces. This mechanical disintegration takes place in different ways.
- Frost Action : In cold climatic regions, when water fills the pores, cracks and crevices in rocks and freezes, it expands and exerts a bursting pressure. The rocks are ruptured and fragmented.
- Thermal Expansion and Contraction : In hot desert areas, the large diurnal range of temperature brings about the expansion and contraction of surface rocks, leading to their disintegration into smaller pieces.
- Exfoliation : This is expansion by the unloading process. Unloading occurs when large igneous bodies are exposed through the erosional removal of overlying rock and the consequent reduction in pressure. On being exposed at the surface, they expand slightly in volume.
This leads to the breaking away of thick shells, like the layers of an onion, from the parent mass lying just below.
- Chemical Weathering : It changes the basic properties of the rock. The principal processes of chemical weathering are :
- Solution : Here the rocks are completely dissolved. It leads to the evolution of karst topography, where the water dissolves rocks such as limestone, salt, gypsum, chalk, etc.
- Oxidation : When the dissolved oxygen in water comes in contact with a mineral surface, it leads to oxidation. Though it is a universal phenomenon, it is more apparent in rocks containing iron.
- Hydration : Most of the rock-forming minerals absorb water. This not only increases their volume but also produces chemical changes, resulting in the formation of new minerals which are softer and more voluminous. For example, this process converts hematite into limonite.
- Carbonation : Water combining with carbon dioxide produces carbonic acid, which dissolves several elements of minerals, and the rock is weakened and broken into pieces. ( The reactions are sketched after these notes on weathering. )
- Biological Weathering : This type of weathering is performed by tree roots, animals and human beings. As plant roots grow, they wedge the rocks apart and cause the widening of joints and other fractures. Micro-animals like earthworms, ants, termites and other burrowing animals move materials to or near the surface, where they are more closely subjected to chemical weathering.
- Physical weathering is more important in hot and dry climatic regions because of the high diurnal range of temperature found there.
- Intense chemical weathering occurs in hot and humid regions.
- Chemical weathering is minimal in deserts and polar regions.
- The rocks in dry temperate climates are more susceptible to mechanical weathering due to frost action.
- Weathering is at its minimum in the polar regions due to the permanent ice cover.
- Carbonate rocks, having more soluble minerals, are easily affected by chemical weathering.
- Climate is thus the single most important factor influencing weathering.
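The carbonation and solution processes above can be written out as the standard pair of reactions (implied, though not spelled out, in these notes): rainwater absorbs carbon dioxide to form weak carbonic acid, which converts insoluble calcium carbonate (limestone) into soluble calcium bicarbonate that running water then carries away.

```latex
% Carbonation: rainwater takes up CO2 to form weak carbonic acid
CO_2 + H_2O \longrightarrow H_2CO_3
% Solution: carbonic acid turns insoluble calcite into soluble bicarbonate,
% the reaction behind karst topography in limestone regions
CaCO_3 + H_2CO_3 \longrightarrow Ca(HCO_3)_2
```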
A river erodes its bed and banks in the following ways :
- Attrition : Rock fragments carried by the river strike and roll against each other.
- Corrasion / Abrasion : The river, with the load it carries, wears away its bed and banks.
- Corrosion : The river water dissolves the minerals in soluble rocks.
- Hydraulic Action : The sheer force of the water itself wears away the bed and banks.
- Deep, Narrow V-Shaped Valley : It is formed as the swift-flowing river erodes its bed faster than its sides.
- Potholes : The grinding action of pebbles, caused by the swirling action of the water, deepens the circular depressions in the river bed, forming potholes.
- Interlocking Spurs : These are caused by vertical river erosion, where spurs alternate on each side of the river as if they are interweaving.
- Waterfalls & Rapids : They are formed when the erosion caused by the river steepens its valley suddenly, forcing the water to jump or fall over the steep slope, or when the river water plunges down the edge of a plateau, e.g. Angel Falls in the Caroní river basin, Venezuela ( the highest in the world ), Niagara Falls ( USA ), etc.
- Gorges & Canyons : These are deep, narrow, I-shaped valleys having very steep sides, formed due to vertical corrasion in the upper course of the river. Canyons are usually found in arid areas and are narrower and deeper than gorges, e.g. the Grand Canyon of the USA, cut by the river Colorado.
- River Capture or River Piracy : A more powerful river captures the headwaters of a weaker river by headward erosion, i.e. erosion towards its source. If, for example, a stream C1 captures the waters of a stream C2, the part of C2's valley that has become dry is called the wind gap, lying below the elbow of capture. C2 then becomes too small for its valley and is hence called a misfit river.
- V-Shaped Valley : An open V-shaped valley forms due to valley widening caused by reduced river gradient and velocity.
- Alluvial Fans : When a river debouches from the mountains onto the plains, the steep fall in the river gradient forces the river to deposit its sediment in a fan shape, called an alluvial fan.
- Meanders : In the middle course, due to the reduced slope and increased volume of water, the river resorts to pronounced meanders.
- Ox-Bow Lakes : An ox-bow lake is a crescent-shaped lake that was once part of a river meander, cut through by lateral erosion of the banks at the meander neck.
- Floodplain : A flat tract of land, mainly in the middle and lower courses, consisting of alluvium deposited by the river.
- Natural Levee : In times of flood, sediment is deposited along the banks and in the channels, elevating the channel and the banks. These raised banks are known as natural levees.
Features in the Lower Courses
- Braided Rivers : Due to reduced gradient and sediment-carrying capacity, large amounts of material deposited on the river bed cause the river to divide and move around these barriers, resulting in braiding.
Waterfalls are formed :
- When a bar of resistant rock lies transversely across the river valley, e.g. the "Niagara Fall" ( USA ) and the "Kaieteur Fall" ( Guyana ).
- When a fault-line scarp caused by faulting lies across the river, e.g. the "Victoria Fall" ( on the Zambezi river ).
- When water plunges down the edge of a plateau, e.g. the "Livingstone Fall" ( on the river Zaire ).
- Glaciation produces hanging valleys, where tributary streams reach the main U-shaped valley below as waterfalls, e.g. the "Yosemite Fall" ( California ).
- The "Gersoppa Fall" in the Western Ghats of India is the greatest fall in the world in the wet season.
- The Indus, Brahmaputra, Ganga and Columbia rivers have cut gorges across mountain chains.
- The Colorado river has cut a gorge 1.6 km deep and 480 km long into the Colorado plateau, and because of its size this gorge is called a canyon.
- Canyons are usually formed in dry regions where large rivers are actively eroding vertically and where weathering of the valley sides is at a minimum, e.g. "Bryce Canyon" ( Utah, USA ).
Conditions necessary for delta formation are :
- The river must have a large load, which happens when there is active erosion in the upper course of the valley.
- The coast should be sheltered, preferably tideless.
- The sea adjoining the delta should be shallow, or else the load will disappear into the sea.
- There should be no large lake in the river's course to filter off the sediments.
- There should be no strong current running at right angles to the river mouth.
Types of Delta
- Arcuate : Composed of coarse sediments such as gravel and sand, and triangular in shape. It always has a number of distributaries. Rivers having this type of delta are the "Nile", "Ganga", "Indus", "Irrawady", "Mekong", "Hwang Ho" and "Niger".
- Bird's Foot / Digitate : It is composed of very fine sediment called silt. The river channel divides into a few distributaries only, and these maintain clearly defined channels across the delta. The "Mississippi Delta" is one of the best examples.
- Estuarine : Develops in the mouth of a submerged river. Rivers like the "Amazon", "Elbe", "Ob" and "Vistula" form this type of delta.
- Cuspate : Only a few rivers, like the Ebro of Spain, form this type of delta. These have a tooth-like projection.
- Deltas can and do form on the shores of high-tidal seas, e.g. the river Colorado ( Gulf of California ) and the river Fraser ( British Columbia ).
- Any river, irrespective of its stage of development, can build a delta. The "Kander", whose valley is in the stage of youth, has built a delta in Lake Thun ( Switzerland ).
- Snowline : The level above which there is perpetual snow cover is called the snowline. Its height ranges from sea level around the poles to 4800 metres in the mountains of E. Africa on the equator; it is about 9000 feet in the Alps and about 17,000 feet at the Equator in general.
- Ice-Sheets : Masses of ice which cover large areas of a continent are called ice-sheets, as in Antarctica and Greenland.
- Valley Glaciers : Those masses of ice which occupy mountain valleys, as in the Himalayas, Andes, Alps and Rockies.
- Ice Shelves and Icebergs : Where the ice-sheets reach right down to the sea, they often extend outwards into the polar waters and float as ice shelves. When they break up, the individual blocks are called 'icebergs'.
- Piedmont Glacier : At the foot of a mountain range, several glaciers may converge to form an extensive ice-mass called a piedmont glacier, like the "Malaspina Glacier" of Alaska.
- Plucking : The tearing away of blocks of rock which have become frozen into the base and sides of a glacier.
- Abrasion : By this process the glacier scratches, scrapes, polishes and scours the valley floor with the debris frozen into it.
- Landforms of Highland Glaciation : Corrie, Cirque or Cwm; Aretes and Pyramidal Peaks; Bergschrund; U-shaped Glacial Valley; Hanging Valleys; Rock Basins, Rock Steps and Moraines.
- The downslope movement of a glacier from its snow-covered valley-head tends to produce a depression where the 'firn' or 'névé' accumulates; this horseshoe-shaped basin is called a 'cirque' in French, a "corrie" in Scotland or a "cwm" in Wales. When the ice melts, the collected water forms a corrie lake or tarn.
- When two corries cut back on opposite sides of a mountain, knife-edged ridges called aretes are formed. Where three or more cirques cut back together, their ultimate recession will form an angular horn or 'pyramidal peak'.
- At the head of a glacier, where it begins to leave the snowfield of a corrie, a deep vertical crack opens up, called a "bergschrund" or "rimaya". Further down, where the glacier negotiates a bend, more "crevasses" or cracks are formed.
- The glacier, on its downward journey, begins to wear away the sides and floor of the valley; the interlocking spurs are thus blunted to form "truncated spurs", and the floor of the valley is deepened into a U-shape. After the disappearance of the ice, the deep, narrow glacial troughs may be filled with water, forming 'ribbon lakes', 'trough lakes' or 'finger lakes'.
- As the main valley contains a much larger glacier than the tributary valley, it is eroded much faster. After the melting of the ice, the tributary valley appears to hang above the main valley, so that its stream plunges down as a waterfall. Such tributary valleys are termed "hanging valleys".
- A glacier erodes and excavates the bedrock in an irregular manner, and this unequal excavation gives rise to many rock basins, later filled by lakes, in the valley trough. Where a tributary valley joins a main valley, the additional weight of ice in the main valley cuts into the valley floor at the point of convergence, forming a rock step.
- 'Moraines' are made up of pieces of rock that are shattered by frost action and brought down the valley.
On the basis of the place where they are deposited, moraines are given different names, such as medial moraine, ground moraine, terminal moraine, etc.
- When the lower end of a glacial trough is drowned by the sea, it forms a deep, steep-sided inlet called a "fiord", as on the Norwegian and south Chilean coasts.
Arid or Desert Landforms
- Most of the world's deserts are located in the latitudinal belt of 15° to 30° north and south of the Equator. These mainly occur in the trade-wind belt on the western sides of continents. Onshore local winds do blow across these coasts, but they rarely bring rain because they have to cross the cool currents which parallel the coasts in these latitudes.
- 'Continental Deserts' occur in the interiors of continents, where the travelling winds have lost much of their moisture, as happens when winds blow over a dry land or over high mountains.
- Tropical Hot Deserts or Trade-Wind Deserts : Sahara, Arabian, Iranian, Thar, Kalahari, Namib, Atacama, Australian Desert, etc.
- Continental Deserts : Gobi, Turkistan, Arizona and Nevada Deserts.
- The work of wind and water in eroding elevated uplands, transporting the worn-off material and depositing it elsewhere has given rise to different types of desert landscape.
- Sandy Desert : Called 'erg' in the Sahara and 'koum' in Turkistan; these are seas of sand with undulating sand dunes in the heart of the deserts.
- Stony Desert : Called 'reg' in Algeria and 'serir' in Libya and Egypt; the surface is covered with boulders and angular pebbles and gravels.
- Rocky Desert : Called "hamada" in the Sahara; a bare rock surface.
- Badlands : These develop in semi-desert regions, mainly as a result of water erosion produced by violent rainstorms. The land is broken by extensive gullies and ravines. The Painted Desert of Arizona is the best example.
- Mountain Desert : Found on highlands such as mountains and plateaus. In the Sahara desert, the Ahaggar and Tibesti mountains are examples.
Wind erosion is carried out in the following ways :
( a ) Deflation : The lifting and blowing away of loose material from the ground. Deflation results in the lowering of the land surface to form large depressions called "deflation hollows", like the Qattara Depression of the Sahara desert.
( b ) Abrasion : The sand-blasting of rock surfaces by winds when they hurl sand particles against them. Abrasion is most effective at or near the base of rocks.
( c ) Attrition : When wind-borne particles roll against one another in collision, they wear each other away, so that their sizes are greatly reduced.
- 'Rock pedestals' are the result of abrasion acting on a projecting rock mass formed of alternate layers of hard and soft rock. The soft rock layers are eroded much more than the hard rock. Such a rock pillar is further eroded near its base, which gives the rock a mushroom shape.
- 'Zeugens' are tabular masses in which layers of soft rock lie beneath a surface of more resistant rock. As abrasion wears them, a "ridge and furrow" landscape develops. The hard rock then stands above the furrows as a ridge or zeugen.
- 'Yardangs' are another type of ridge-and-furrow landscape; they develop when bands of hard and soft rock lie parallel to the prevailing winds.
- A 'mesa' is a flat, table-like landmass with a very resistant horizontal top layer and very steep sides. Continuous denudation through the ages may reduce mesas in area, so that they become isolated flat-topped hills called "buttes".
- 'Inselbergs' are isolated residual hills rising abruptly from ground level, with very steep slopes and rounded tops. They are often composed of granite and gneiss.
- 'Depressions' are produced by wind deflation and may reach down to water-bearing rocks, developing a swamp or an oasis. The Qattara Depression is the best example.
- 'Dunes' are gentle ripples or sandy ridges produced by the deposition of sand grains brought by wind eddies from the neighbouring desert region.
- 'Barchans' or 'barkhans' are crescent-shaped dunes lying at right angles to the prevailing winds. The crest of a barchan moves forward as more sand is accumulated by the wind.
- A 'seif' is a narrow ridge of sand lying parallel to the direction of the prevailing winds.
- The wind blows fine particles out of the deserts each year. Some of them are blown into the sea; the rest are deposited on the land, where they accumulate to form loess. There are extensive loess deposits in northern China, blown out of the Gobi Desert to the west.
Water Action in Desert
- The rare but heavy rainstorms give birth to rushing torrents on steep slopes and to sheets of flood water on gentle slopes. The run-off on steep slopes is usually via 'rills' ( shallow grooves ), which lead into "gullies", which, in turn, connect with steep-sided, deep and often flat-floored valleys called "wadis" or "chebka".
- In intermontane desert basins, intermittent rivers drain into the centre of the basin, and the alluvial fans built up around the edge of such a basin may eventually join together to form a continuous depositional feature sloping gently to the centre of the basin. This feature is called a "bahada" or "bajada".
- As the edges of desert and semi-desert highlands get pushed back by erosion and weathering, gently sloping platforms develop, called "pediments". The slope of the land changes abruptly where a pediment joins the highland mass.
- Sometimes the water collected in a depression or a desert basin does not completely disappear by evaporation or seepage, and a temporary lake is formed, called a "playa", "salina" or "salar".
- The coastline is the margin of the land. This is also the "cliff line" on rocky coasts.
- The water thrown up the beach by breaking waves is called "swash", and when the swash drains back down the beach it is called "backwash".
- When waves break at the rate of ten or fewer a minute, each breaking wave is able to run its course without interfering with the wave behind it. These waves are called "constructive waves". When waves break more frequently, the backwash of one wave runs into the swash of the wave behind it. Such waves remove pebbles and sand from a coast. They are "destructive waves".
- Corrasive Action : Boulders, pebbles and sand are hurled against the base of the cliff by breaking waves.
- Hydraulic Action : Breaking waves cause the air in cracks and crevices to become suddenly compressed. After the retreat of the wave the air expands, often explosively, and causes the rocks to shatter as the cracks become enlarged.
- Attrition : Boulders and pebbles dashed against the shore are themselves broken into finer and finer particles.
Coastal Features of Deposition
- Constructive waves deposit pebbles, sand and mud along a coast and form a gently sloping platform called a "beach". Beaches usually lie between the high and low water levels, but storm waves along some coasts throw pebbles and stones well beyond the normal level reached by waves at high tide, and the material deposited in this way is called a "storm beach".
- Material which is eroded from a coast may be carried along the coast by longshore drift and deposited further along the coast as a spit. This is likely to happen along indented coasts and coasts broken by river mouths.
- Bay-bar formation starts as a spit growing out from a headland, but ultimately it stretches across the bay to the next headland. Along the coast of Poland the bay bars are called "nehrungs".
- When a bar links an island to the mainland it is called a "tombolo".
- Tides tend to deposit fine silt along gently shelving coasts, especially in bays and estuaries. The deposition of this silt, together with river alluvium, results in the building up of a platform of mud called a "mud-flat".
- In tropical regions, mud-flats often become "mangrove swamps".
Types of Coast
Coastal regions may be either submerged or uplifted by changes in land or sea levels, and may further be of highland or lowland type. The types of coast can be divided into –
- Ria Coast : When a highland coast is submerged, the lower parts of its river valleys become flooded. These submerged parts of the valleys are called "rias", e.g. in S.W. Ireland, S.W. England, N.W. Spain, Brittany, etc.
- Longitudinal Coast : When a highland coast whose valleys are parallel to the coast is submerged, some of the valleys are flooded and the separating mountain ranges become chains of islands. These types of coast occur in Yugoslavia and along parts of the Pacific coast of North and South America.
- Fiord Coast : When a glaciated highland coast becomes submerged, the flooded lower parts of its valleys are called "fiords". Fiord coasts are common in the South Island of New Zealand, Greenland, Norway and British Columbia.
- Submerged Lowland Coast : A rise in sea level along a lowland coast causes the sea to penetrate inland along the river valleys. The flooded parts of the valleys are called "estuaries". The Baltic coasts of Poland and Germany and the Dutch coast are good examples of estuarine coasts.
- Emerged Highland Coast : This type of coast is often characterised by an old sea beach backed by a sea cliff, lying from 7.5 metres to 30 metres above sea level. The main cause of its formation is a change in either the sea level or the level of the land. Raised beaches are common in western Scotland.
- Emerged Lowland Coast : This forms when a part of the continental shelf emerges from the sea and forms a coastal plain. This coast has no bays or headlands, and deposition takes place in the shallow water offshore, producing offshore bars, lagoons, spits and beaches. The S.E. coast of the USA and the north coast of the Gulf of Mexico are such examples.
- Knick Point : The point where the old and the rejuvenated profiles of a river meet is called the knick point or "rejuvenation head". It is the major break of slope at which the course of the river changes. Knick points may emerge under a variety of circumstances.
- Valley-in-Valley : When the old valley course is uplifted and a greater slope is attained, the old open V-shaped valley begins downward cutting at a greater speed than sideward cutting. This leads to the development of a deep-cut narrow valley within the old valley, known as a 'valley-in-valley'.
- Erosional Surface : An erosional surface emerges when a peneplain is uplifted. The uplifted peneplain makes the flattish summit of a plateau, and that summit becomes the source region of several rivers. Such a plateau is basically an erosional surface or eroded plain, for example the Chotanagpur plateau and the Appalachian plateau.
- Incised Meanders : Incised meanders emerge from the meanders of an old course. When the meandering course of a river is uplifted and deepening begins, a typical deep-cut narrow valley develops within the old meander. In this case, ox-bow lakes are not formed. Wherever the valley becomes straight due to erosion at the pressing point, the detached land looks like a small hill or hillock. The Colorado river presents a good site for incised meanders.
The normal distribution, also called the Gaussian distribution, is the most common distribution function for independent, randomly generated variables. Its familiar bell-shaped curve is ubiquitous in statistical reports, from survey analysis and quality control to resource allocation.

The graph of the normal distribution is characterized by two parameters: the mean, or average, which is the maximum of the graph and about which the graph is always symmetric; and the standard deviation, which determines the amount of dispersion away from the mean. A small standard deviation (compared with the mean) produces a steep graph, whereas a large standard deviation (again compared with the mean) produces a flat graph.

The normal distribution is produced by the normal density function, $p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/2\sigma^2}$. In this exponential function, e is the constant 2.71828…, μ is the mean, and σ is the standard deviation. The probability of a random variable falling within any given range of values is equal to the proportion of the area enclosed under the function's graph between the given values and above the x-axis. Because the factor σ√(2π) in the denominator, known as the normalizing coefficient, causes the total area enclosed by the graph to be exactly equal to unity, probabilities can be obtained directly from the corresponding area; for instance, an area of 0.5 corresponds to a probability of 0.5. Although these areas can be determined with calculus, tables were generated in the 19th century for the special case of μ = 0 and σ = 1, known as the standard normal distribution, and these tables can be used for any normal distribution after the variables are suitably rescaled by subtracting their mean and dividing by their standard deviation, (x − μ)/σ. Calculators have now all but eliminated the use of such tables. For further details see probability theory.

The term "Gaussian distribution" refers to the German mathematician Carl Friedrich Gauss, who first developed a two-parameter exponential function in 1809 in connection with studies of astronomical observation errors. This study led Gauss to formulate his law of observational error and to advance the theory of the method of least squares approximation. Another famous early application of the normal distribution was by the British physicist James Clerk Maxwell, who in 1859 formulated his law of distribution of molecular velocities, later generalized as the Maxwell-Boltzmann distribution law.

The French mathematician Abraham de Moivre, in his Doctrine of Chances (1718), first noted that probabilities associated with discretely generated random variables (such as are obtained by flipping a coin or rolling a die) can be approximated by the area under the graph of an exponential function. This result was extended and generalized by the French scientist Pierre-Simon Laplace, in his Théorie analytique des probabilités (1812; "Analytic Theory of Probability"), into the first central limit theorem, which proved that probabilities for almost all independent and identically distributed random variables converge rapidly (with sample size) to the area under an exponential function, that is, to a normal distribution. The central limit theorem permitted hitherto intractable problems, particularly those involving discrete variables, to be handled with calculus.
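To make the density function and the standardization step concrete, here is a minimal sketch in Python (the function names and the numerical-integration step are illustrative choices, not part of the article):

```python
import math

# Normal density: p(x) = exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * sqrt(2*pi))
def normal_pdf(x, mu=0.0, sigma=1.0):
    coeff = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Rescaling to the standard normal, z = (x - mu) / sigma, lets one table
# (or one function) serve every normal distribution.
def z_score(x, mu, sigma):
    return (x - mu) / sigma

# The probability of falling in [a, b] is the area under the curve, here
# approximated with a simple midpoint rule instead of the printed tables.
def normal_prob(a, b, mu=0.0, sigma=1.0, steps=10_000):
    width = (b - a) / steps
    return sum(normal_pdf(a + (i + 0.5) * width, mu, sigma) for i in range(steps)) * width

print(normal_prob(-1, 1))     # ~0.6827: the familiar "68% within one sigma"
print(z_score(130, 100, 15))  # 2.0 standard deviations above the mean
```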
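De Moivre's coin-flip observation, generalized by Laplace's central limit theorem as described above, is also easy to reproduce numerically. A small simulation sketch (the seed, sample size, and trial count are arbitrary choices):

```python
import random
import statistics

random.seed(1)  # arbitrary seed, for reproducibility only

# Sum of n fair coin flips: a discretely generated (binomial) variable.
# By the central limit theorem, its distribution approaches a normal
# curve with mean n/2 and standard deviation sqrt(n)/2 as n grows.
def coin_flip_sum(n):
    return sum(random.randint(0, 1) for _ in range(n))

n, trials = 100, 20_000
samples = [coin_flip_sum(n) for _ in range(trials)]

print(statistics.mean(samples))   # close to n/2 = 50.0
print(statistics.stdev(samples))  # close to sqrt(n)/2 = 5.0
```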
El Salvador -- Geography --
Official Name: El Salvador
Capital City: San Salvador
Official Currency: USD
Religions: Roman Catholic, Protestant, others
Land Area: 21,040 sq km
Landforms: Mostly mountains, with a narrow coastal belt and a central plateau.
Land Divisions: 14 departments

El Salvador -- History --
In the early sixteenth century, the Spanish conquistadors ventured into the area that would become known as El Salvador to extend their dominion. They were firmly resisted by the Pipil and their remaining Mayan-speaking neighbors. Pedro de Alvarado, a lieutenant of Hernan Cortes, led the first effort by Spanish forces in June 1524. The Pipil defeated the Spaniards and forced them to withdraw to Guatemala. Two subsequent expeditions took place, the first in 1525 and a smaller one in 1528, to bring the Pipil under Spanish rule.

Towards the end of 1810, a combination of internal and external factors allowed Central American elites to attempt to gain independence from the Spanish crown. The internal factors were mainly the interest the elites had in controlling the territories they owned without involvement from the Spanish authorities. The external factors were the success of the French and American revolutions in the eighteenth century and the weakening of the Spanish crown's military power as a result of its wars against Napoleonic France. The independence movement was consolidated on November 5, 1811, when the Salvadoran priest Jose Matias Delgado sounded the bells of the Iglesia La Merced in San Salvador, calling for insurrection. After many years of internal fighting, the Acta de Independencia (Act of Independence) of Central America was signed in Guatemala on September 15, 1821.

When these provinces were joined with Mexico in early 1822, El Salvador resisted, insisting on autonomy for the Central American countries. In 1823, the United Provinces of Central America was formed by the five Central American states under General Manuel Jose Arce. When this federation was dissolved in 1838, El Salvador became an independent republic. El Salvador's early history as an independent state was marked by frequent revolutions.

From 1872 to 1898, El Salvador was a prime mover in attempts to reestablish an isthmian federation. The governments of El Salvador, Honduras, and Nicaragua formed the Greater Republic of Central America via the Pact of Amapala in 1895. Although Guatemala and Costa Rica considered joining the Greater Republic (which was rechristened the United States of Central America when its constitution went into effect in 1898), neither country joined. This union, which had planned to establish its capital city at Amapala on the Golfo de Fonseca, did not survive a seizure of power in El Salvador in 1898.

The enormous profits that coffee yielded as a monoculture export served as an impetus for the process whereby land became concentrated in the hands of an oligarchy of a few families.
A succession of presidents from the ranks of the Salvadoran oligarchy, nominally both conservative and liberal, throughout the last half of the nineteenth century generally agreed on: the promotion of coffee as the predominant cash crop; the development of infrastructure (railroads and port facilities) primarily in support of the coffee trade; the elimination of communal landholdings to facilitate further coffee production; the passage of anti-vagrancy laws to ensure that displaced campesinos and other rural residents provided sufficient labor for the coffee fincas (plantations); and the suppression of rural discontent. In 1912, the national guard was created as a rural police force.

The coffee industry grew inexorably in El Salvador, and the elite provided the bulk of the government's financial support through import duties on goods imported with the foreign currencies that coffee sales earned. This support, coupled with the humbler and more mundane mechanisms of corruption, ensured the coffee growers of overwhelming influence within the government. The economy, based on coffee-growing after the mid-19th century as the world market for indigo withered away, prospered or suffered as the world coffee price fluctuated.

From 1931, the year of the coup in which Gen. Maximiliano Hernandez Martinez came to power, until he was deposed in 1944, there was brutal suppression of rural resistance. The most notable event was the 1932 Salvadoran peasant uprising, commonly referred to as La Matanza (the massacre), headed by Farabundo Marti, and the retaliation led by Martinez's government, in which approximately 30,000 indigenous people and political opponents were murdered, imprisoned or exiled. Until 1980, all but one Salvadoran temporary president was an army officer. Periodic presidential elections were seldom free or fair, and an oligarchy in alliance with the military ruled the nation. A second peasant uprising against the oligarchy led to the Salvadoran Civil War (1980-1992), notable for atrocities on the part of the National Guard and government-linked death squads.

In 1972, Jose Napoleon Duarte (PDC) was elected President, but he was betrayed by the military party, the Party of National Conciliation, tortured and forced to flee. After the coup d'etat of October 1979, the Revolutionary Government Junta that followed it, and elections in 1984, he became president. The El Mozote massacre, and the murder of Catholic missionaries and other religious aid workers such as Jean Donovan, were some notorious consequences of the war, which lasted until the Chapultepec Peace Accords were signed in January 1992.

Five different factions of the guerrillas formed the Frente Farabundo Marti para la Liberacion Nacional (FMLN) party in order to seek office through democratic elections. Since then, the FMLN has gradually gained representation, particularly in the Legislative Assembly and local governments. Since 1989, the Nationalist Republican Alliance (ARENA) party, founded by Roberto D'Aubuisson, has won every presidential election. In 1998, El Salvador became one of three Latin-American countries where abortion is illegal with no exceptions, along with Chile and Nicaragua.

El Salvador -- Economy --
According to the IMF and the CIA World Factbook, El Salvador has the third largest economy in the region (behind Costa Rica and Panama) when comparing nominal Gross Domestic Product and purchasing power GDP.
El Salvador's GDP per capita stands at US$5,800; however, this developing country still faces many social issues and is among the 10 poorest countries in Latin America. Approximately 2.4 million people (30.7%) live below the poverty line, the real GDP growth rate is low compared to that of its neighbors, and 6% of the population is unemployed, with much underemployment. El Salvador's economy has long been hampered by natural disasters such as earthquakes and hurricanes, but it is currently growing steadily. GDP in purchasing power parity (PPP) in 2007 was estimated at US$41.65 billion. The service sector is the largest component of GDP at 60.7%, followed by the industrial sector at 29.6% (2006 est.). Agriculture represents only 7.6% of GDP (2006 est.). The Salvadoran economy has experienced mixed results from the recent governments' commitment to free market initiatives and conservative fiscal management, which include the privatization of the banking system, telecommunications, public pensions, electrical distribution, and some electrical generation; the reduction of import duties; the elimination of price controls; and improved enforcement of intellectual property rights. GDP has been growing since 1996 at an average annual rate of 2.8% in real terms. In 2006 the real GDP growth rate was 4.2%. A problem the Salvadoran economy faces is inequality in the distribution of income. In 1999, the richest fifth of the population received 45% of the country's income, while the poorest fifth received only 5.6%. In December 1999, net international reserves equaled US$1.8 billion, or roughly five months of imports. With this hard-currency buffer to work with, the Salvadoran government undertook a monetary integration plan beginning January 1, 2001, by which the U.S. dollar became legal tender alongside the Salvadoran colon and all formal accounting was done in U.S. dollars. In doing so, the government formally limited its ability to implement open-market monetary policies to influence short-term variables in the economy. As of September 2007, net international reserves stood at $2.42 billion. The colon stopped circulating in 2004 and is now essentially never used in the country for any type of transaction. In general, there was discontent with the shift to the U.S. dollar, primarily because of wage stagnation vis-a-vis basic commodity prices in the marketplace. Additionally, there are contentions that, according to Gresham's law, a reversion to the colon would be disastrous to the economy. The change to the dollar also precipitated a trend toward lower interest rates in El Salvador, helping many to secure much-needed credit for house or car purchases. A challenge for El Salvador has been developing new growth sectors for a more diversified economy. Like many other former colonies, El Salvador was for many years considered a mono-export economy (an economy that depended heavily on one type of export). During colonial times, the Spanish decided that El Salvador would produce and export indigo, but after the invention of synthetic dyes in the nineteenth century, Salvadoran authorities and the newly created modern state turned to coffee as the main export. Since the cultivation of coffee required the highest lands in the country, many of these lands were expropriated from indigenous reserves and given or sold cheaply to those who could cultivate coffee. The government provided little or no compensation to the indigenous peoples.
On occasion, this compensation implied merely the right to work for seasons in the newly created coffee farms and to be allowed to grow their own food. Such actions provided the basis of conflicts that would shape the political landscape of El Salvador for years to come. For many decades, coffee was one of the only sources of foreign currency in the Salvadoran economy. The Salvadoran Civil War in the 1980s and the fall of international coffee prices in the 1990s pressured the Salvadoran government to diversify the economy. The government has followed policies intended to develop other export industries, such as textiles and sea products. Tourism is another industry Salvadoran authorities see as a possibility, but rampant crime, lack of infrastructure, and inadequate social capital have so far kept it underdeveloped. There are 15 free trade zones in El Salvador. The largest beneficiary has been the maquila industry, which provides 88,700 jobs directly and consists primarily of supplying labor for the cutting and assembling of clothes for export to the United States. El Salvador signed the Central American Free Trade Agreement (CAFTA) — negotiated by the five countries of Central America and the Dominican Republic — with the United States in 2004. CAFTA requires that the Salvadoran government adopt policies that foster free trade. El Salvador has signed free trade agreements with Mexico, Chile, the Dominican Republic, and Panama and has increased its trade with those countries. El Salvador, Guatemala, Honduras, and Nicaragua also are negotiating a free trade agreement with Canada. In October 2007, these four countries and Costa Rica began free trade agreement negotiations with the European Union. Negotiations started in 2006 for a free trade agreement with Colombia. Fiscal policy has been the biggest challenge for the Salvadoran government. The 1992 peace accords committed the government to heavy expenditures for transition programs and social services. The Stability Adjustment Programs (PAE, for the initials in Spanish) initiated by President Cristiani's administration committed the government to the privatization of banks, the pension system, and the electric and telephone companies. The total privatization of the pension system has imposed a serious burden on the public finance system, because the newly created private pension association funds did not absorb coverage of retired pensioners covered under the old system. The government lost the revenues from contributors and completely absorbed the costs of coverage of retired pensioners. This has been the main source of fiscal imbalance. ARENA governments have financed this deficit with the emission of bonds, something the leftist FMLN has opposed. Debates surrounding the emission of bonds have stalled the approval of the national budget for many months on several occasions. The emission of bonds and the approval of government loans need a qualified majority (3/4 of the votes) in the National Legislature; if the deficit is not financed through a loan, a simple majority (50% of the votes plus one) is enough to approve the budget. Despite such challenges to keeping public finances in balance, El Salvador still has one of the lowest tax burdens in the Americas (around 11% of GDP). Many specialists claim that it is impossible to advance significant development programs with so little public-sector revenue.
(The tax burden in the United States is around 25% of GDP, and in developed countries of the EU it can reach around 50%.) The government has focused on improving the collection of its current revenues, with a focus on indirect taxes. Leftist politicians criticize such a structure, since indirect taxes (like the value-added tax) affect everyone alike, whereas direct taxes can be weighted according to levels of income. A 10% value-added tax (IVA, for its initials in Spanish), implemented in September 1992, was raised to 13% in July 1995. The VAT is the biggest source of revenue, accounting for about 52.3% of total tax revenues in 2004. Inflation has been steady and among the lowest in the region. Since 1997 inflation has averaged 3%, with recent years increasing to nearly 5%. From 2000 to 2006 total exports grew 19%, from $2.94 billion to $3.51 billion. During the same period total imports rose 54%, from $4.95 billion to $7.63 billion. This resulted in a 102% increase in the trade deficit, from $2.01 billion to $4.12 billion. Remittances from Salvadorans living and working in the United States, sent to family in El Salvador, are a major source of foreign income and offset the substantial trade deficit of $4.12 billion. Remittances have increased steadily in the last decade and reached an all-time high of $3.32 billion in 2006 (an increase of 17% over the previous year), or approximately 16.2% of gross domestic product (GDP). Remittances have had positive and negative effects on El Salvador. According to a United Nations Development Program report, 16% of Salvadorans lived in extreme poverty in 2005; without remittances, the share of Salvadorans living in extreme poverty would rise to 37%. While Salvadoran education levels have gone up, wage expectations have risen faster than either skills or productivity. For example, some Salvadorans are no longer willing to take jobs that pay them less than what they receive monthly from family members abroad. This has led to an influx of Hondurans and Nicaraguans who are willing to work for the prevailing wage. Also, the local propensity for consumption over investment has increased. Money from remittances has also increased prices for certain commodities such as real estate. Many Salvadorans abroad earning much higher wages can afford higher prices for houses in El Salvador than local Salvadorans and thus push up the prices that all Salvadorans must pay.
El Salvador -- Culture -- The Roman Catholic Church plays an important role in Salvadoran culture. Archbishop Oscar Romero is a national hero for his role in speaking out against the human rights violations that were occurring in the lead-up to the Salvadoran Civil War. Significant foreign personalities in El Salvador were the Jesuit priests and professors Ignacio Ellacuria, Ignacio Martin-Baro, and Segundo Montes, who were murdered in 1989 by the Salvadoran Army during the heat of the civil war. Painting, ceramics, and textile goods are the main manual artistic expressions. Writers Francisco Gavidia (1863-1955), Salarrue (Salvador Salazar Arrue) (1899-1975), Claudia Lars, Alfredo Espino, Pedro Geoffroy Rivas, Manlio Argueta, Jose Roberto Cea, and poet Roque Dalton are among the most important writers to stem from El Salvador. Notable twentieth-century personages include the late filmmaker Baltasar Polio, artist Fernando Llort, and caricaturist Tono Salazar.
Amongst the more renowned representatives of the graphic arts are the painters Noe Canjura, Carlos Canas, Julia Diaz, Camilo Minero, Ricardo Carbonell, Roberto Huezo, Miguel Angel Cerna (the painter and writer better known as MACLo), Esael Araujo, and many others. Spanish is the main and official language of El Salvador. The local Spanish vernacular is called Caliche. Nahuat is the indigenous language that has survived, though it is only used by small communities of elderly Salvadorans in western El Salvador.
El Salvador -- Political system, law and government -- Politics in El Salvador takes place within the framework of a presidential representative democratic republic with a multi-party system, whereby the President of El Salvador is both head of state and head of government. Executive power is exercised by the government. Legislative power is vested in both the government and the Legislative Assembly. The judiciary is independent of the executive and the legislature. Executive branch: El Salvador elects its head of state – the President of El Salvador – directly through a fixed-date general election whose winner is decided by absolute majority. If an absolute majority (50% + 1) is not achieved by any candidate in the first round of a presidential election, a run-off election is conducted 30 days later between the two candidates who obtained the most votes in the first round (a small sketch of this rule follows at the end of this section). The presidential term is five years, and re-election is not permitted. The most recent presidential election, held on 21 March 2004, resulted in the election of Tony Saca of the ARENA party with almost 58 percent of the vote, the highest share in Salvadoran history. The turnout of 70 percent was also a record. The youthful Saca, who embraced pro-business and pro-U.S. policies, recovered ground lost in the 1999 presidential election, which ARENA had barely survived, and in the March 2000 legislative races, in which ARENA had been eclipsed as the largest single party by the Farabundo Marti National Liberation Front and had retained overall control of the Assembly only by forging a coalition with a smaller party. Legislative branch: Salvadorans also elect a unicameral national legislature – the Legislative Assembly of El Salvador – of 84 members (deputies) elected by closed-list proportional representation for three-year terms, with the possibility of immediate re-election. Twenty of the 84 seats in the Legislative Assembly are elected on the basis of a single national constituency. The remaining 64 are elected in 14 multi-member constituencies (corresponding to the country's 14 departments) that range from 3 to 16 seats each according to department population size.
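To illustrate the absolute-majority rule described above, here is a small code sketch in Python (the function and the vote totals are hypothetical, written only to demonstrate the rule):

def first_round_outcome(votes):
    """Decide a first-round result under the absolute-majority (50% + 1) rule:
    either an outright winner, or a run-off between the top two candidates."""
    total = sum(votes.values())
    ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
    leader, leader_votes = ranked[0]
    if leader_votes * 2 > total:  # strictly more than half of the valid votes
        return leader + " wins in the first round"
    return "Run-off between " + leader + " and " + ranked[1][0]

# Hypothetical first-round totals:
print(first_round_outcome({"A": 580_000, "B": 360_000, "C": 60_000}))  # A wins outright
print(first_round_outcome({"A": 480_000, "B": 460_000, "C": 60_000}))  # run-off between A and B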
Short Answer Questions
1. What is meant by periodic and non-periodic motion? Give two examples of each.
2. What is meant by the force constant of a spring?
3. Define the time period of simple harmonic motion.
4. Define the frequency of simple harmonic motion.
5. What is an epoch?
6. Write short notes on two springs connected in series.
7. Write short notes on two springs connected in parallel.
8. Write down the time period of a simple pendulum.
9. State the laws of the simple pendulum.
10. Write down the equation of the time period for a linear harmonic oscillator.
11. What is meant by free oscillation?
12. Explain damped oscillation. Give an example.
13. Define forced oscillation. Give an example.
14. What is meant by maintained oscillation? Give an example.
15. Explain resonance. Give an example.
Long Answer Questions
1. What is meant by simple harmonic oscillation? Give examples and explain why every simple harmonic motion is a periodic motion whereas the converse need not be true.
2. Describe simple harmonic motion as a projection of uniform circular motion.
3. What is meant by angular harmonic oscillation? Compute the time period of angular harmonic oscillation.
4. Write down the differences between simple harmonic motion and angular simple harmonic motion.
5. Discuss the simple pendulum in detail.
6. Explain the horizontal oscillations of a spring.
7. Describe the vertical oscillations of a spring.
8. Write short notes on the oscillations of a liquid column in a U-tube.
9. Discuss in detail the energy in simple harmonic motion.
10. Explain in detail the four different types of oscillations.
Numerical Problems
1. Consider the Earth as a homogeneous sphere of radius R, and suppose a straight hole is bored through its centre. Show that a particle dropped into the hole executes simple harmonic motion with time period T = 2π√(R/g). (A numerical check follows after these problems.)
Solution: The Earth is assumed to be a homogeneous sphere with centre O and radius R, and the hole is bored straight through the centre along a diameter. Let g be the acceleration due to gravity at the surface and m the mass of the body dropped into the hole. After time t, let the body be at depth d inside the Earth. The value of g decreases with depth, so the acceleration due to gravity at depth d is
g' = g(1 - d/R) = g((R - d)/R) ...(1)
Let y be the distance from the centre of the Earth; then y = R - d. Substituting y into (1) gives g' = gy/R. The force on the body of mass m due to this acceleration is F = mg' = mgy/R, and this force is directed towards the mean position O, so the body dropped in the hole executes S.H.M. with spring factor k = mg/R and inertia factor m. Hence T = 2π√(inertia factor/spring factor) = 2π√(m/(mg/R)) = 2π√(R/g).
2. Calculate the time period of the oscillation of a particle of mass m moving in the potential defined as …, where E is the total energy of the particle.
3. Consider a simple pendulum of length l = 0.9 m placed on a trolley rolling down a frictionless inclined plane that makes an angle α = 45° with the horizontal. Calculate the time period of oscillation of the simple pendulum.
Given: length of the simple pendulum l = 0.9 m; angle of the inclined plane with the horizontal α = 45°; time period T = ?
Answer: 0.86 s
4. A piece of wood of mass m and cross-sectional area A is floating erect in a liquid whose density is ρ. If it is slightly pressed down and released, it executes simple harmonic motion. Show that its time period of oscillation is T = 2π√(m/(Aρg)).
Hint: spring factor of the liquid = Aρg; inertia factor of the wood piece = m.
5. Consider two simple harmonic motions along the x- and y-axes having the same frequencies but different amplitudes: x = A sin(ωt + φ) (along the x-axis) and y = B sin ωt (along the y-axis). Show that
x²/A² + y²/B² - (2xy/(AB)) cos φ = sin²φ
and also discuss the special cases of φ. Note: when a particle is subjected to two simple harmonic motions at right angles to each other, the particle may move along different paths. Such paths are called Lissajous figures.
a. φ = 0: y = (B/A)x, a straight line passing through the origin with positive slope.
b. φ = π: y = -(B/A)x, a straight line passing through the origin with negative slope.
c. φ = π/2: x²/A² + y²/B² = 1, an ellipse whose centre is the origin.
d. φ = π/2 and A = B: x² + y² = A², a circle whose centre is the origin.
e. other values of φ (for example φ = π/4): an oblique ellipse (a tilted ellipse) whose centre is the origin.
6. Show that for a particle executing simple harmonic motion
a. the average value of kinetic energy is equal to the average value of potential energy;
b. average potential energy = average kinetic energy = ½ (total energy).
Hint: average kinetic energy = <kinetic energy> = (1/T) ∫0^T (kinetic energy) dt, and average potential energy = <potential energy> = (1/T) ∫0^T (potential energy) dt.
7. Compute the time period for the following system if the block of mass m is slightly displaced vertically down from its equilibrium position and then released. Assume that the pulley is light and smooth, and that the strings and springs are light.
Hint and answer: If the pulley is fixed rigidly, then when the mass is displaced by y the spring also stretches by y, so the restoring force is F = T = ky and the time period is T = 2π√(m/k). If instead the pulley is movable, then when the mass is displaced by y the pulley also displaces by y, the tension works out to T = 4ky, so the effective spring factor is 4k and the time period is T = 2π√(m/(4k)).
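As a quick numerical check on problem 1 above, here is a short Python sketch (the radius and surface-gravity values are assumed, not given in the problem) that evaluates T = 2π√(R/g):

import math

R = 6.37e6  # assumed mean radius of the Earth, in metres
g = 9.8     # assumed acceleration due to gravity at the surface, in m/s^2

# Time period of SHM for a particle dropped through a hole bored
# through the Earth's centre: T = 2*pi*sqrt(R/g).
T = 2 * math.pi * math.sqrt(R / g)
print(f"T = {T:.0f} s, about {T / 60:.1f} minutes")  # roughly 5066 s, about 84.4 minutes

The same expression also shows why the period is independent of the particle's mass: m cancels between the inertia factor m and the spring factor mg/R.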
Overview of techniques involved (e.g. 'omics')
2.1. Introduction to DNA and genetics
- DNA is transcribed into 'messenger RNA' (mRNA). Transcription means that the information from a strand of DNA is essentially copied into a new molecule of mRNA.
- mRNA is then translated into a protein. The information in mRNA is used as a template for building the protein.
This is known as the 'central dogma' of molecular biology. It describes a process called gene expression.
Figure 1: DNA makes RNA makes protein. Taken from http://www.atdbio.com/content/14/Transcription-Translation-and-Replication
DNA is made up of a double strand of molecules called 'bases':
- Each base is attached to another base on the other strand, making a 'base pair'.
- The bases come in four types. These are given the letters A (adenine), T (thymine), G (guanine) and C (cytosine).
- It is the order of these bases that makes up the DNA sequence.
The pairing of the bases links the two strands of DNA together into a double helix shape. A can only link with T, and G can only link with C (a short code sketch at the end of this section illustrates this pairing).
Figure 2: Model of the DNA double helix and its bases, featuring the four chemical 'letters': adenine, thymine, guanine and cytosine, or A, T, G and C. Taken from the US National Library of Medicine.
The sequence of the bases is important because this is the set of instructions the body uses to build proteins. Proteins are used in every cell. They ensure that each part of the body does what it needs to do to allow us to grow and function. A gene is a length of DNA that contains the code for a particular protein. The structure of any given protein depends largely on the sequence of bases in a gene or genes. Any change to the DNA sequence, such as a genetic variant or mutation, may affect the production of a normal protein. This may in turn affect the normal function of a cell or tissue. Mike Gilchrist from the UK's Medical Research Council (MRC) writes: "Can we get a feel for how much 'information' the genome (the entire set of DNA in an individual) contains? In simple terms a good thick airport novel contains about a million letters, and so a library of 3,000 such books would hold the same amount of 'text' as the human genome. This would fit easily into bookcases lining one wall of a generously sized living room. If we were to devote our leisure time to reading these 'books', and could get through one a week, it would take sixty years to plough through our entire genome." "…our genome sequence is largely made up of repetitive, or uninformative, sequence [that is, not gene sequences], which we think of as having little impact on the day-to-day running of our bodies. This material is sometimes called 'junk' DNA… Our genes, which do most of the useful work, occupy the remaining 'interesting' 2% of the genome, and are the key to our biology. We have about 20,000 to 25,000 genes, and each is responsible for making one of the many smaller molecules (mostly proteins) that we need to grow and to function… In addition, the sequence around a gene contains signals which, in concert with other genes, tell the body where and when to produce that gene's protein. This is the origin of our need as biologists to sequence the genome, as only this way can we begin to understand the functioning of all these genes."
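To make the base-pairing rule concrete, here is a minimal Python sketch (an illustration added for this overview, not part of the MRC text) that transcribes a DNA template strand into mRNA, using the pairing described above with uracil (U) taking the place of thymine in RNA:

# Pairing rules for transcription: each base of the DNA template strand
# is replaced by its RNA complement (A->U, T->A, G->C, C->G).
DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand):
    """Return the mRNA sequence transcribed from a DNA template strand."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand)

print(transcribe("TACGGT"))  # prints AUGCCA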
The definition of functional programming is quite easy. Functional programming is programming with mathematical functions. Is that all? Of course not! Functional programming is programming with mathematical functions. I think you can already guess it: the key to this definition is the expression mathematical function. Mathematical functions are functions that return the same result every time they are given the same arguments. They behave like an infinitely large lookup table. The property that a function (expression) always returns the same result when given the same arguments is called referential transparency. Referential transparency has far-reaching consequences:
- Mathematical functions cannot have side effects and therefore cannot change the state outside the function body.
- A function call can be replaced with its result, but it can also be reordered or put on a different thread.
- The program flow is defined by the data dependencies and not by the sequence of instructions.
- Mathematical functions are a lot easier to refactor and to test, because you can reason about the function in isolation.
That sounds very promising. But with so many advantages comes a massive restriction: mathematical functions cannot talk to the outside world. Examples? Mathematical functions can't
- get user input or read from files.
- write to the console or into a file.
- return random numbers or the time, because the return values would differ.
- build up state.
Thanks to mathematical functions, the definition of functional programming is very concise, but it does not help so much. The key question still remains: how can you program anything useful with functional programming? Mathematical functions are like islands that have no communication with the outside world. Or, to say it in the words of Simon Peyton Jones, one of the fathers of Haskell: the only effect that mathematical functions can have is to warm up your room. Now I will be a little more detailed. What are the characteristics of functional programming languages?
Characteristics of functional programming languages
Haskell will help me a lot on my tour through the characteristics of functional programming. There are two reasons for using Haskell.
- Haskell is a pure functional programming language, and therefore you can study the characteristics of functional programming very well by using Haskell.
- Haskell may be the most influential programming language of the last 10 to 15 years.
My second statement needs proof. I will provide it in the next posts for Python and in particular C++. Therefore, a few words about Java, Scala, and C#.
- Philip Wadler, another father of Haskell, was one of the implementors of generics in Java.
- Martin Odersky, the father of Scala, which adapted a lot from Haskell, was also involved in the implementation of generics in Java.
- Erik Meijer is a passionate admirer of and researcher on Haskell. He used the Haskell concept of monads and created the well-known C# library LINQ.
I will even go one step further. Whoever knows functional programming, and in particular Haskell, knows how the mainstream programming languages will develop in the next few years. Even a pure object-oriented language like Java could not withstand the pressure of functional ideas: Java now has generics and lambda expressions. But now back to my subject. What are the characteristics of functional programming languages? In my search for functional characteristics, I identified seven typical properties.
These need not be all the characteristics, and not every functional programming language has to support all of them. But the characteristics help a lot to put meat on the abstract definition of functional programming. The graphic shows, on the one hand, the characteristics of functional programming and, on the other hand, the outline of my next posts. I will provide a lot of examples in Haskell, C++, and Python. But what do the seven characteristics mean?
First-class functions are typical of functional programming languages. These functions can accept functions as arguments or return functions. For that to work, the functions have to be higher-order functions; that means they behave like data.
Pure functions always return the same result when given the same arguments and cannot have side effects. They are the reason that Haskell is called a pure functional language.
A pure functional language has only immutable data. That means it cannot have a while or for loop that is based on a counter. Instead of loops, it uses recursion.
A key characteristic of functional programming is that you can easily compose functions. This is thanks to their bread-and-butter data structure, the list.
If an expression evaluates its arguments immediately, the evaluation is called greedy or eager. If the expression evaluates its arguments only when needed, the evaluation is called lazy. Lazy evaluation saves time and memory if the evaluated expression is not needed. I think you can already guess it: the classical programming languages are greedy; they evaluate their expressions immediately. A short sketch of several of these characteristics follows below; I will start in my next post with first-class functions. We have had them since the beginning of C++.
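Here is a small Python sketch of several of these characteristics (my illustration; the post's own Haskell, C++, and Python examples follow in later articles): a pure function, a higher-order function built by composition, and lazy evaluation using a generator.

from itertools import islice

def square(x):
    # Pure: the same argument always yields the same result, no side effects.
    return x * x

def compose(f, g):
    # Higher-order: accepts functions as arguments and returns a new function.
    return lambda x: f(g(x))

def naturals():
    # Lazy: the numbers 0, 1, 2, ... are produced only on demand.
    n = 0
    while True:
        yield n
        n += 1

square_then_double = compose(lambda x: 2 * x, square)
print(square_then_double(3))                     # 18
print(list(islice(map(square, naturals()), 5)))  # [0, 1, 4, 9, 16]

Note that map in Python is itself lazy: no square is computed until islice actually asks for the values.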
The pancreas is an organ of the digestive system located deep in the upper part of the abdomen, behind the stomach and in front of the spine. The pancreas is only about 2 inches wide and 6 to 8 inches long, and sits horizontally across the abdomen. It is composed of 3 contiguous parts:
- The large, rounded portion of the gland is called the head. It is located on the right side of the abdomen, abutting the beginning of the small intestine, which is called the duodenum.
- The middle section, called the body, is tucked behind the stomach.
- The thin end of the pancreas, called the tail, is located on the left side of the abdomen, next to the spleen.
The pancreas is a glandular organ comprising two tissue types: 1. exocrine tissue (which produces and secretes substances into a duct that drains into the duodenum) and 2. endocrine tissue (which produces and secretes substances into the blood). The exocrine tissue comprises 95 percent of the pancreas, and the endocrine tissue makes up the remaining 5 percent. Exocrine glandular tissue produces pancreatic enzymes. These enzymes travel down the pancreatic duct and into the duodenum, where they aid in the digestion of food. The endocrine glandular tissue of the pancreas produces hormones and releases them into the bloodstream. Two of these hormones, insulin and glucagon, help control blood sugar levels.
Cancer of the Pancreas
The word cancer is used to describe any one of a group of diseases in which abnormal cells grow out of control and can spread. These abnormal cells are different from normal cells in both appearance and function. Pancreatic cancer occurs when abnormal cells grow out of control in the tissue of the pancreas and form a tumor. Because the pancreas lies deep in the abdomen, a doctor performing an examination on a patient would not be able to feel a pancreatic tumor. Pancreatic cancer has no early warning signs, and there are currently no effective screening tests. As a result, pancreatic cancer is usually discovered late. Often, the diagnosis is not made until the cancer has spread to other areas of the body (stage IV). However, research focused on better diagnostic tests and newer treatments provides a more optimistic future for patients diagnosed with pancreatic cancer. In fact, a blood test and better scans are in development. By 2030, pancreatic cancer is expected to be the second-leading cause of cancer-related deaths in the United States, second only to lung cancer. For 2019, the American Cancer Society estimated there would be 56,770 new pancreatic cancer cases and that 45,750 people would die from the disease.
Types of Pancreatic Cancer
The most common type of pancreatic cancer arises from the exocrine cells and is called pancreatic ductal adenocarcinoma (PDAC). These tumors are designated "ductal" because they microscopically form structures that resemble the pancreatic ducts. About two-thirds of all pancreatic cancers arise in the head of the pancreas; the remainder arise in the body and tail. These tumors are malignant, meaning they can invade nearby tissues and organs. Cancerous cells can also spread through the blood and lymphatic systems to other parts of the body. When this occurs, the cancer is called metastatic. Tumors can also resemble the endocrine cells of the pancreas. These rare tumors, called islet cell tumors, pancreatic endocrine neoplasms, or pancreatic neuroendocrine tumors, are generally less aggressive and may be curable if detected early.
It is important to distinguish between exocrine and endocrine tumors because each has different signs and symptoms, is diagnosed using different tests, has different treatments, and has different prognoses (likely course of the disease). Our research efforts are focused on PDAC.
Precursors to Pancreatic Cancer
An understanding of the lesions that give rise to pancreatic cancer is important because many of these precursor lesions can be identified and removed before they cause pancreatic cancer. Some of these precursors form cysts, which are collections of fluid within the substance of the pancreas. Almost 3 percent of American adults have a pancreatic cyst. Improvements in imaging tests over the past decade have led to a significant increase in the number of patients found to have a cyst in their pancreas. Most of these cysts are harmless and can be safely watched and followed. Intraductal papillary mucinous neoplasms (IPMNs) and mucinous cystic neoplasms (MCNs) have been recognized as special types of cysts in the pancreas because they are precursor lesions that can later progress to invasive cancers if left untreated. Both IPMNs and MCNs are called "mucinous" because they produce large amounts of mucus, which, in the case of IPMNs, can clog and enlarge the pancreatic duct. IPMNs and MCNs are very different from most pancreatic tumors because they may be present for a long time without spreading. Surgical removal is the treatment of choice for IPMN cysts that are at high risk for progressing to invasive pancreatic cancer. However, doctors have to balance the risk of over-treating patients with harmless cysts with the benefit of removing a precancerous lesion. Many small IPMN and MCN cysts can safely be followed with annual surveillance imaging, most commonly using magnetic resonance imaging (MRI) scans. Because it can be hard to tell which IPMNs and MCNs are precancerous and which are harmless, researchers have been studying them and their genetic makeup for new ways to determine which are more likely to progress to pancreatic cancer. Our researchers are actively developing new molecular tests to better classify pancreatic cysts. Recently, researchers at Johns Hopkins identified a panel of molecular markers and clinical features that show promise for classifying pancreatic cysts and determining which cysts require surgery. This panel has the potential to lower the number of unnecessary surgeries by an overwhelming 91 percent. This more specific panel of markers is likely to provide physicians with additional information to help them determine whether surgery or surveillance of the cyst(s) is the most appropriate course of action for their patients, based on the type of mutation they see in a particular cyst. Avoiding unnecessary pancreatic surgery is important, and this research on cysts is one step forward.
Pancreatic Cancer Causes
Genes and Pancreatic Cancer
All the cells in the body contain DNA. DNA is the molecule in the cell nucleus that carries the instructions (genes) for making living organisms. When cells grow and divide, they also copy their DNA. Research conducted by Dr. Vogelstein at Johns Hopkins found that random, unpredictable 'mistakes' that occur when DNA is copied account for nearly two-thirds of cancer mutations, and that environmental factors account for another 29 percent. Mutations in DNA occur frequently, especially when cells divide. Cells have an exceptional ability to repair these changes in DNA. However, the DNA repair mechanisms can also fail.
When they do, these mistakes in DNA can be passed along to future copies of the altered cell. More abnormal cells can then be produced, and when these abnormal cells continue to grow unchecked, cancer may develop. The DNA mutations that cause pancreatic cancer may be either inherited from a parent or acquired as we age. Inherited mutations are carried in the DNA of a person's reproductive cells and can be passed on to that person's children. Not everyone who has an inherited mutation will develop pancreatic cancer. Acquired mutations are ones that develop during a person's lifetime, either as random mutations in DNA or in response to injuries from harmful environmental factors such as exposure to the carcinogens in tobacco smoke or cosmic rays. Scientists believe that most cancers result from complex DNA changes that involve many different genes. Some of these outside factors are called risk factors. Certain risk factors increase the chances of a person developing cancer. Not everyone who has an acquired mutation will develop pancreatic cancer. It is important to note that pancreatic cancer is relatively rare, striking only 12 to 13 people per 100,000 each year, so even doubling a rare risk still means that the risk is very low.
Family History and Pancreatic Cancer
Pancreatic cancer can run in families, which suggests that it may be inherited. This means that blood relatives of patients with pancreatic cancer may have an increased risk of developing the disease. The risk depends on the gene inherited. If the gene inherited isn't known, inherited risk can still be estimated based on the number of first-degree relatives (i.e., a sibling, parent, or child) an individual has who have been diagnosed with pancreatic cancer. One first-degree relative with pancreatic cancer means a two- to four-fold risk, two relatives increase the risk six- to seven-fold, and three first-degree relatives, which is highly unusual, results in a 32-fold risk. Having a family member who developed pancreatic cancer before 50 years of age adds further risk. However, not everyone with a family history of pancreatic cancer will develop the disease.
(Figure: sources of pancreatic cancer mutations: random genetic mistakes, smoking or environmental causes, and hereditary causes. Data for hereditary and smoking causes are from the American Cancer Society.)
Inherited mutations in known cancer-causing genes such as BRCA2, BRCA1, PALB2, p16/CDKN2A, ATM, STK11, PRSS1, SPINK1, and in one of the DNA mismatch repair (DNA is not properly repaired) genes have been shown to increase the risk of developing pancreatic cancer. These genes are therefore called familial pancreatic cancer genes. However, not everyone who has one of these mutations will develop pancreatic cancer. It is estimated that 10 percent of pancreatic cancer is familial. Researchers around the world have set up pancreatic cancer registries to study the hereditary factors that influence pancreatic cancer. The qualifications for joining a registry may vary from one registry to another and may include providing answers to a questionnaire and a blood or saliva sample. Some registries enroll patients and family members who have at least one relative who has pancreatic cancer. Other registries require that enrollees have at least two relatives who have pancreatic cancer. Registry participants must be 18 years of age or older. Screening programs are currently being explored for patients with a known genetic abnormality that predisposes them to pancreatic cancer or who have a strong family history of pancreatic cancer.
These screening programs often include screening with endoscopic ultrasound (EUS) and MRI, because these imaging technologies may be useful for detecting small lesions and may identify early pancreatic tumors. Hereditary syndromes are inherited genetic mutations in one or more genes that may predispose the affected individuals to the development of certain cancers and may also cause the early onset of these cancers. The hereditary syndromes listed below have been associated with the development of pancreatic cancer.
Familial Breast Cancer Syndrome. People who have a mutation in the breast cancer 2 gene (BRCA2) have an increased risk of several cancers, among them pancreatic cancer. Inherited mutations in the BRCA2 gene are particularly common in the Ashkenazi Jewish population. It has recently been suggested that cancers that arise in patients with a BRCA2 mutation may be particularly sensitive to treatment with drugs called PARP inhibitors. Although the association is not as strong as it is with BRCA2, inherited mutations in the first breast cancer gene, BRCA1, may also increase the risk of pancreatic cancer.
Familial Atypical Multiple Mole Melanoma (FAMMM) Syndrome. People with FAMMM syndrome, also called p16-Leiden, have many different-sized skin moles that are asymmetrical and raised. Most cases of FAMMM syndrome are caused by inherited mutations in the p16/CDKN2A gene.
Peutz-Jeghers Syndrome (PJS). People with this rare syndrome have mutations in the STK11/LKB1 gene. Polyps in the small intestine and dark spots on the mouth and fingers characterize the syndrome. In people with PJS, the risks of gastrointestinal tumors such as esophageal, small bowel, colorectal, and pancreatic cancer are increased.
Hereditary Pancreatitis. Hereditary pancreatitis is a rare disease in which patients develop recurrent episodes of severe pancreatitis at an early age. The main genes related to this disorder are PRSS1, SPINK1, and the cystic fibrosis gene, CFTR. About 30 to 40 percent of people with hereditary pancreatitis will develop pancreatic cancer by age 70, and the risk is especially high among patients with hereditary pancreatitis who also smoke cigarettes.
Hereditary Nonpolyposis Colon Cancer (HNPCC; Lynch Syndrome). People with HNPCC have a higher-than-normal chance of developing colon, pancreatic, uterine, stomach, or ovarian cancer. People with this disorder have inherited mutations in DNA mismatch repair genes. Recently it has been shown that the drug Keytruda (pembrolizumab) may be very effective in treating the cancers, including pancreatic cancer, that arise in patients with HNPCC.
Partner and Localizer of BRCA2 (PALB2). Mutations in this gene, which is related to BRCA2, also increase the risk of breast and pancreatic cancer.
ATM. Mutations in this gene may increase the risk of pancreatic cancer.
Risk factors are characteristics, habits, or environmental exposures that have been shown to increase the odds of developing a disease. Some can be controlled, while others cannot.
Risk Factors You Can Influence
Smoking. Smoking or being exposed to secondhand smoke is the leading preventable cause of pancreatic cancer. People who smoke have twice the chance of getting pancreatic cancer compared with people who do not smoke. Importantly, the risk of cancer falls after smoking cessation. Over time, smokers who quit will decrease their risk of developing pancreatic cancer, and after 10 years the risk in ex-smokers is the same as that of nonsmokers.
Obesity.
People who are significantly overweight are more likely to develop pancreatic cancer compared with those who are not overweight, with those who are obese during their teens and twenties having the highest risk.
Other Risk Factors
Age. As people get older, their risk of pancreatic cancer increases. Pancreatic cancer mostly affects people 55 years of age or older.
Race. In the United States, pancreatic cancer is more common in African Americans than in Caucasians, although the reasons are not clear. Differences in dietary habits, the rates of obesity and diabetes, and the frequency of cigarette smoking exist between these groups. Genetic or other unknown factors may also explain the higher incidence in African Americans.
Medical Factors. The incidence of pancreatic cancer is higher in people who have any of the following medical conditions.
• Chronic pancreatitis (inflammation that causes irreversible damage to the pancreas)
• Long-term diabetes mellitus (high blood sugar)
• Helicobacter pylori infection or ulcers
Adult-Onset Diabetes. Long-term diabetes is a risk factor for pancreatic cancer. New-onset diabetes in an older person can be the first sign of pancreatic cancer. In fact, up to 80 percent of patients with pancreatic cancer are either prediabetic or in a presymptomatic phase of diabetes.
Presence of Risk Factors
When a person has one, or even more than one, of these risk factors, it does not mean that he or she will develop pancreatic cancer. Conversely, some people who do not have any risk factors will still get pancreatic cancer. Researchers are working to understand how lifestyle and environmental risk factors interact with an individual's genetic makeup to influence pancreatic cancer development. Most importantly, the best way to reduce your risk of developing pancreatic cancer is to not smoke and to maintain a healthy body weight.
Pancreatic Cancer Diagnosis
Several steps are involved in making a diagnosis of pancreatic cancer. The first thing your doctor will do is ask questions about your medical history, family history, possible risk factors, and symptoms.
MEDICAL HISTORY QUESTIONS
- Do you have pain?
- Where is the pain located?
- How long have you had the pain?
- How intense is the pain (i.e., on a scale from 0 to 10)?
- Is there something you can do that causes the pain to come back?
- Is there something you can do that makes the pain go away?
- Have you lost weight without trying?
- What other symptoms do you have?
- If you have jaundice: When did you notice the jaundice?
- If you have dark urine or light stools: How long have you had this?
- Has anyone in your family ever had cancer?
- Has anyone in your family ever had pancreatic cancer?
Answering these questions honestly and completely will help both you and your doctor during the diagnostic process. A doctor will perform a physical examination and check your abdomen for tenderness, fluid buildup, enlargement of your gallbladder or liver (which may result from blockage of the bile duct), and masses. Your lymph nodes will be checked for tenderness and swelling, and any sign of jaundice will be noted. Your doctor also may order blood or urine tests, testing of stool samples, or imaging tests. Blood tests are frequently performed for diagnostic purposes. No single blood test can be used to make a diagnosis of pancreatic cancer.
When a person has pancreatic cancer, however, elevated levels of bilirubin or liver enzymes may be present. Different tumor markers in the blood are used to detect and monitor many types of cancer. Tumor markers are substances, usually complex proteins, produced by tumor cells. Proteins form the basis of body structures such as cells, tissues, and organs. Enzymes and some hormones are composed of protein. Some tumor markers can indicate specific types of cancer; others are found in several types of cancer. Two commercially available tumor marker tests are of use in patients with pancreatic cancer: cancer antigen 19-9 (CA 19-9) and carcinoembryonic antigen (CEA). These markers are not accurate enough to be used to screen healthy people or to make a diagnosis of pancreatic cancer. However, CA 19-9 and CEA are frequently used to track the progress of treatment in patients with pancreatic cancer. CA 19-9 is a substance found on the surface of certain types of cells and is shed by tumor cells, making it useful in following the course of cancer. The presence of the protein CEA may indicate cancer because elevations in CEA levels are not usually found in people who are healthy. CEA is not as useful as CA 19-9 in pancreatic cancer testing. We have funded researchers at Johns Hopkins who have designed a blood test called CancerSEEK that can detect the presence of pancreatic cancer as part of a panel of eight common cancers: pancreas, ovary, liver, stomach, esophagus, colorectum, lung, and breast. It can identify the presence of relatively early cancer and can detect the organ of origin of the cancers. This test is an important breakthrough because these eight cancers account for more than 60 percent of cancer deaths. While further testing is needed, the goal is for CancerSEEK to be offered as part of routine medical checks. If you have blood and urine testing, your doctor will receive written reports from the laboratory. If the results show high levels of bilirubin, it may be an indication of pancreatic cancer. However, many other medical conditions can cause an elevation in bilirubin. Additional testing will almost always be needed to confirm a diagnosis of pancreatic cancer. Liver function tests will also be performed on blood samples to determine if a tumor is affecting the liver. Imaging tests are important tests used to detect pancreatic cancer. These tests use a variety of methods to see inside the body. CT scans – or some variation of a CT scan – of the chest, abdomen, and pelvis are most commonly used in the diagnosis of pancreatic cancer. A CT scan, formerly called a computed axial tomography (CAT) scan, uses a large machine shaped like a donut to take detailed, cross-sectional X-ray images from many different angles while you lie on a table that moves into the machine. The computer combines these images into a series of views of the area in question for diagnostic purposes. A CT scan may be done at a special center or in a hospital, but it does not require an overnight stay. This test is not painful, and no sedation is needed. A dye, called a contrast agent, can be injected into a vein to produce better CT images of body structures. Typically, a contrast agent is also given by mouth to provide better images of the stomach and small intestines. In many centers, modifications of basic CT scanners are used to image the pancreas more accurately. A multiphase CT scan is a sensitive imaging test used to evaluate patients suspected of having pancreatic cancer.
Multiphase CT scanning may produce detailed, 3-dimensional images of the pancreas. A helical CT scanner with multiple detector rows, called a multidetector row helical CT (MDCT) scanner, is one of the latest technological advances in CT scanners. MDCT has advantages over other CT methods, including improved image resolution and the ability to rapidly scan large volumes, thus allowing for imaging of the entire pancreas in a single breath-hold by the patient. Ultrasound is another imaging test that is commonly used. During this test, sound waves are bounced off internal organs to produce echoes. The computer creates patterns from these echoes, as normal and abnormal tissues produce different patterns.
EUS and LUS
EUS (endoscopic ultrasound) and LUS (laparoscopic ultrasound) are minimally invasive procedures. EUS is performed using an endoscope, which is a long, thin instrument with a light at the end used to look deep inside the body. During EUS, an endoscope is passed down the esophagus, through the stomach, and into the duodenum. The machine that makes the sound waves is then turned on, and images are created by visualizing the pancreas through the stomach or the duodenum. Advantages of EUS are that the ultrasound probe can be placed immediately adjacent to the pancreas, producing detailed images. It also allows for biopsies of the pancreas to be obtained to confirm the presence of cancer.
Magnetic Resonance Imaging
MRI is a noninvasive, painless imaging method that is commonly used today. MRI uses powerful magnets and radio waves, instead of the X-rays used in a CT scan, to view internal structures and organs. Since it does not involve radiation, MRI may be safer in patients who require repeated imaging over many years, such as patients with pancreatic cysts. The energy from the radio waves is absorbed by the body and then released. A computer translates the patterns formed by this energy release into detailed images of areas inside the body. MRI produces cross-sectional slices like a CT scanner, but it also produces slices that are parallel to the length of the body. MRIs are performed at a special imaging center or at a hospital. If you have any metal in your body, you should check with your doctor prior to undergoing an MRI scan. Some types of metal implants (such as prosthetic hips, prosthetic knees, pacemakers, and heart valves) may cause problems when exposed to high magnetic forces such as those used in MRI.
Positron Emission Tomography (PET) Scan
A PET scan is an imaging test that shows not only anatomy but also biological function. During a PET scan, a small amount of radioactive glucose (sugar) is injected into a vein. Cancer cells take up sugar at higher rates than normal cells. A special camera detects the radioactivity that is taken up by malignant tissue, and a computer creates detailed images. The images created by a PET scan can be used to find cancer cells in the pancreas and in other areas of the body. Recently developed machines combine CT imaging with PET scanning to more accurately identify where cancer is located within the body.
Endoscopic Retrograde Cholangiopancreatography (ERCP)
ERCP is an invasive procedure that is used in conjunction with a dye to view the bile and pancreatic ducts for obstructions. During an ERCP, you will receive an anesthetic to numb the throat and medication for sedation. A thin tube is passed down the throat, through the stomach, and into the small intestine.
From there, the gastroenterologist who is performing the procedure will identify the bile duct and pancreatic duct so that the dye can be injected into them. Then, X-rays are taken. This is an outpatient procedure but also may be performed in the hospital. ERCP is especially helpful in patients with jaundice because a stent can be inserted into the bile duct and left in place to keep the bile duct open, often relieving the jaundice and its associated symptoms. Tissue samples also can be taken during the procedure. ERCP can cause complications and is usually used to help manage symptoms, not for diagnostic purposes. Because the only definitive way to diagnose cancer is to directly visualize cancer cells under a microscope, after the necessary blood tests and scans, a biopsy may be performed when pancreatic cancer is suspected. A biopsy is the process of removing tissue samples, which are then examined under a microscope to check for cancer cells. A biopsy can be performed in an outpatient setting or in the hospital. Biopsy specimens can be obtained in different ways, as listed below.
Fine-Needle Aspiration (FNA) Biopsy
In an FNA biopsy, imaging by CT scan or EUS is used together with a long, thin needle to obtain tissue specimens. The CT scan or EUS imaging method allows the doctor to view the position of the needle to ensure that the needle is in the tumor. EUS also can be used to place the needle directly through the wall of the duodenum or stomach and into the tumor for collection of tissue specimens. A brush biopsy procedure is used with ERCP. A small brush is inserted through an endoscope into the bile and pancreatic ducts. Cells are scraped off the insides of the ducts with the brush. Laparoscopy is a minimally invasive surgical procedure. You will receive general anesthesia during the procedure. A laparoscope is inserted through a small incision in the abdomen. The doctor can then view the tumor and remove tissue samples for examination.
QUESTIONS TO ASK AFTER DIAGNOSIS
Asking good questions will help you get the best care possible for pancreatic cancer. You have a right to have all questions answered to your satisfaction.
- What type of pancreatic cancer do I have, and what is the stage (resectable, borderline resectable, locally advanced, or metastatic)?
- Should I have any additional tests to more accurately stage my cancer?
- What is the treatment plan that you recommend?
- What are the potential benefits, risks, and side effects of that treatment?
- Where will the treatment be given, and how often?
- How will I know if the treatment is working?
- Who will be part of my care team?
- Are clinical trials available for my type and stage of pancreatic cancer?
- If surgery is recommended, is the center that will perform my surgery a high-volume one?
- If I have borderline resectable or locally advanced pancreatic cancer, what will your institution do to try to make my cancer resectable?
- Should I have my tumor or my blood (germline) genetically sequenced?
- Can you estimate the amount of time I may need to recover from surgery?
Signs and Symptoms of Pancreatic Cancer
A Silent Disease
Pancreatic cancer is often called a silent disease because many times there are no signs or symptoms until the cancer is in an advanced stage. Even when there are early signs and symptoms, they may be vague and easily attributed to another disease.
The signs and symptoms also may be confusing to patients and healthcare providers because they vary depending on where the tumor is located in the pancreas (the head, body, or tail). It is important to see your doctor if you have any of the signs or symptoms of pancreatic cancer.
Jaundice
Jaundice is a yellowing of the skin and the whites of the eyes. Symptoms that may occur with jaundice are itching (which may be severe), dark urine, and light or clay-colored stool. Jaundice occurs when bilirubin stains the skin. Bilirubin is a dark yellow-brown substance made in the liver that travels down the bile duct and into the small intestine. When the bile duct is blocked by a tumor, or when a tumor is located in the head of the pancreas near the bile duct, the bile is prevented from reaching the intestines. The bile then accumulates in tissues, blood, and the skin, leading to jaundice. There are other, more common causes of jaundice, such as hepatitis (inflammation of the liver) or obstruction of the bile duct by a gallstone. Skin can start to itch or turn yellow when bilirubin builds up in the skin.
Back Pain
This common sign of advanced pancreatic cancer occurs when the tumor presses on organs and nerves around the pancreas. The pain may be constant or intermittent and can be worse after eating or when lying down. Many conditions other than pancreatic cancer can also cause back pain, which makes this a challenging symptom.
Digestive Problems or Pain
People with pancreatic cancer may lose weight, may have little or no appetite, or may suffer from malnutrition. When pancreatic enzymes cannot be released into the intestine, digesting food, especially high-fat foods, may be difficult. Over time, significant weight loss and malnutrition may result, at which time a doctor should be consulted.
Nausea or Vomiting
If the tumor blocks the upper part of the small intestine (the duodenum), nausea and vomiting may result.
Abdominal Pain
Similar to back pain, abdominal pain is a common sign of advanced pancreatic cancer, occurring when the tumor presses on organs and nerves around the pancreas. Many conditions other than pancreatic cancer can also cause abdominal pain, which makes this a challenging symptom.
Blood Clots
Pancreatic cancer can cause blood to clot more easily, and such clots can be the first sign of the tumor. These clots occur in the veins and can block blood flow. They can occur in the legs (deep vein thrombosis), the lungs (pulmonary embolism), or organs such as the pancreas itself or the liver (portal vein thrombosis).
Pancreatitis
An inflammation of the pancreas called pancreatitis can be a sign of pancreatic cancer when the pancreatitis is chronic, or when it appears for the first time and is not related to either drinking alcohol or gallstones.
Diabetes
Developing diabetes mellitus (sugar diabetes), especially after the age of 50, can be a sign of pancreatic cancer. The majority of patients with diabetes, however, will not develop pancreatic cancer. As noted earlier, long-term diabetes is also a risk factor for pancreatic cancer.
When to See a Doctor
Many other illnesses can cause these signs and symptoms, but it is important to take them seriously and see your doctor as soon as possible. If you have a first-degree relative (parent, sibling, or child) with pancreatic cancer, tell your doctor and consider joining a pancreatic cancer registry. Cancer registries are used to collect accurate and complete data about people with cancer that can be used for cancer control and epidemiological research, public health program planning, and to improve patient care.
Collecting this information also increases the chances of finding a cure, because the data helps physicians and researchers learn more about the causes of cancer and how to detect it earlier. Data from registries may point to environmental risk factors or high-risk behaviors, so that measures to prevent people from getting cancer can be identified. Additionally, local, state, and national cancer agencies and cancer control programs may use registry data from defined areas to make important decisions about public health.
In 3D computer graphics, 3D modeling is the process of developing a mathematical representation of any three-dimensional surface of an object (either inanimate or living) via specialized software. The product is called a 3D model. It can be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena. The model can also be physically created using 3D printing devices. 3D modeling software is a class of 3D computer graphics software used to produce 3D models. Individual programs of this class are called modeling applications or modelers. 3D models represent a 3D object using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data (points and other information), 3D models can be created by hand, algorithmically (procedural modeling), or scanned. 3D models are widely used anywhere 3D graphics are used. In fact, their use predates the widespread use of 3D graphics on personal computers: many computer games used pre-rendered images of 3D models as sprites before computers could render them in real time. Today, 3D models are used in a wide variety of fields. The medical industry uses detailed models of organs; these may be created from multiple 2D image slices from an MRI or CT scan. The movie industry uses them as characters and objects for animated and real-life motion pictures. The video game industry uses them as assets for computer and video games. The science sector uses them as highly detailed models of chemical compounds. The architecture industry uses them to demonstrate proposed buildings and landscapes through software architectural models. The engineering community uses them as designs of new devices, vehicles and structures, as well as for a host of other uses. In recent decades the earth science community has started to construct 3D geological models as standard practice. 3D models can also be the basis for physical devices built with 3D printers or CNC machines. Almost all 3D models can be divided into two categories.
- Solid - These models define the volume of the object they represent (like a rock). They are more realistic but more difficult to build. Solid models are mostly used for nonvisual simulations such as medical and engineering simulations, for CAD, and for specialized visual applications such as ray tracing and constructive solid geometry.
- Shell/boundary - These models represent the surface, i.e. the boundary of the object, not its volume (like an infinitesimally thin eggshell). They are easier to work with than solid models. Almost all visual models used in games and film are shell models.
Because the appearance of an object depends largely on its exterior, boundary representations are common in computer graphics. Two-dimensional surfaces are a good analogy for the objects used in graphics, though quite often these objects are non-manifold. Since surfaces are not finite, a discrete digital approximation is required: polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have been gaining some popularity in recent years. Level sets are a useful representation for deforming surfaces which undergo many topological changes, such as fluids.
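To make the shell/boundary idea concrete, here is a minimal sketch (in Python; the data layout and function names are illustrative choices of ours, not taken from any particular package) of the indexed triangle mesh described above: a list of shared points in 3D space, plus triangles that connect them by index.

```python
import math

# A minimal indexed triangle mesh: shared vertices plus triangles that
# reference them by index. This is the "shell/boundary" representation
# described above -- it stores the surface of an object, not its volume.
vertices = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (0.0, 1.0, 0.0),  # vertex 2
    (0.0, 0.0, 1.0),  # vertex 3
]
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]  # a tetrahedron

def triangle_area(a, b, c):
    """Area of one triangle via the cross product of two edge vectors."""
    u = tuple(b[i] - a[i] for i in range(3))
    v = tuple(c[i] - a[i] for i in range(3))
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return 0.5 * math.sqrt(sum(comp ** 2 for comp in cross))

surface_area = sum(triangle_area(*(vertices[i] for i in tri)) for tri in triangles)
print(f"{len(vertices)} vertices, {len(triangles)} triangles, area {surface_area:.3f}")
```

Sharing vertices by index, rather than storing each triangle's corners separately, is what makes the mesh a connected "net of interconnected triangles" rather than a loose triangle soup.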
The process of transforming representations of objects, such as the center coordinate of a sphere and a point on its circumference, into a polygon representation of a sphere is called tessellation. This step is used in polygon-based rendering, where objects are broken down from abstract representations ("primitives") such as spheres, cones, etc., to so-called meshes, which are nets of interconnected triangles. Meshes of triangles (instead of e.g. squares) are popular because they have proven easy to render using scanline rendering. Polygon representations are not used in all rendering techniques, and in these cases the tessellation step is not included in the transition from abstract representation to rendered scene. There are three popular ways to represent a model:
- Polygonal modeling - Points in 3D space, called vertices, are connected by line segments to form a polygonal mesh. The vast majority of 3D models today are built as textured polygonal models, because they are flexible and because computers can render them quickly. However, polygons are planar and can only approximate curved surfaces using many polygons.
- Curve modeling - Surfaces are defined by curves, which are influenced by weighted control points. The curve follows (but does not necessarily interpolate) the points. Increasing the weight for a point will pull the curve closer to that point. Curve types include nonuniform rational B-splines (NURBS), splines, patches and geometric primitives.
- Digital sculpting - Still a fairly new method of modeling, 3D sculpting has become very popular in the few years it has been around. There are currently three types of digital sculpting: displacement (the most widely used among applications at this moment), volumetric, and dynamic tessellation. Displacement uses a dense model (often generated by subdivision surfaces of a polygon control mesh) and stores new locations for the vertex positions through use of a 32-bit image map that stores the adjusted locations. Volumetric sculpting, based loosely on voxels, has similar capabilities to displacement but does not suffer from polygon stretching when there are not enough polygons in a region to achieve a deformation. Dynamic tessellation is similar to voxel sculpting but divides the surface using triangulation to maintain a smooth surface and allow finer details. These methods allow for very artistic exploration, as the model will have a new topology created over it once the model's form, and possibly details, have been sculpted. The new mesh will usually have the original high-resolution mesh information transferred into displacement data or normal map data if it is intended for a game engine.
The modeling stage consists of shaping individual objects that are later used in the scene. There are a number of modeling techniques, including constructive solid geometry, implicit surfaces and subdivision surfaces. Modeling can be performed by means of a dedicated program (e.g., Cinema 4D, form•Z, Maya, 3DS Max, Blender, Lightwave, Modo, solidThinking), an application component (Shaper, Lofter in 3DS Max), or some scene description language (as in POV-Ray). In some cases, there is no strict distinction between these phases; in such cases modeling is just part of the scene creation process (this is the case, for example, with Caligari trueSpace and Realsoft 3D). Complex materials such as blowing sand, clouds, and liquid sprays are modeled with particle systems, and are a mass of 3D coordinates which have either points, polygons, texture splats, or sprites assigned to them.
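As a rough illustration of the tessellation step described at the start of this section, the sketch below (Python; all names are ours, and this is just one simple latitude/longitude scheme, not how any particular modeler does it) turns the abstract description of a sphere - a center and a radius - into a triangle mesh.

```python
import math

def tessellate_sphere(center, radius, n_lat=8, n_lon=16):
    """Turn an abstract sphere primitive (center, radius) into a triangle mesh.

    Vertices are placed on rings of latitude; each quad between neighboring
    rings is split into two triangles. (The pole rings produce some
    degenerate triangles; real modelers handle the poles specially.)
    """
    cx, cy, cz = center
    vertices = []
    for i in range(n_lat + 1):              # latitude angle: 0 .. pi
        theta = math.pi * i / n_lat
        for j in range(n_lon):              # longitude angle: 0 .. 2*pi
            phi = 2.0 * math.pi * j / n_lon
            vertices.append((cx + radius * math.sin(theta) * math.cos(phi),
                             cy + radius * math.sin(theta) * math.sin(phi),
                             cz + radius * math.cos(theta)))
    triangles = []
    for i in range(n_lat):
        for j in range(n_lon):
            # indices of the quad's four corners on rings i and i + 1
            a = i * n_lon + j
            b = i * n_lon + (j + 1) % n_lon
            c = (i + 1) * n_lon + j
            d = (i + 1) * n_lon + (j + 1) % n_lon
            triangles.append((a, b, c))     # split the quad into
            triangles.append((b, d, c))     # two triangles
    return vertices, triangles

verts, tris = tessellate_sphere(center=(0, 0, 0), radius=1.0)
print(len(verts), "vertices,", len(tris), "triangles")
```

Raising `n_lat` and `n_lon` makes the mesh approximate the curved surface more closely, which is exactly the trade-off noted under polygonal modeling above: planar polygons can only approximate a curved surface, and more of them means a better approximation.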
Compared to 2D methods, 3D photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Advantages of wireframe 3D modeling over exclusively 2D methods include:
- Flexibility: the ability to change angles or animate images with quicker rendering of the changes;
- Ease of rendering: automatic calculation and rendering of photorealistic effects rather than mentally visualizing or estimating them;
- Accurate photorealism: less chance of human error in misplacing, overdoing, or forgetting to include a visual effect.
Disadvantages compared to 2D photorealistic rendering may include a software learning curve and difficulty achieving certain photorealistic effects. Some photorealistic effects may be achieved with special rendering filters included in the 3D modeling software. For the best of both worlds, some artists use a combination of 3D modeling followed by editing the 2D computer-rendered images from the 3D model.
3D model market
A large market for 3D models (as well as 3D-related content, such as textures, scripts, etc.) still exists - either for individual models or large collections. Online marketplaces for 3D content, such as TurboSquid, The3DStudio, CreativeCrash, CGTrader, FlatPyramid, NoneCG, CGPeopleNetwork and DAZ 3D, allow individual artists to sell content that they have created. Often, the artists' goal is to get additional value out of assets they have previously created for projects. By doing so, artists can earn more money from their old content, and companies can save money by buying pre-made models instead of paying an employee to create one from scratch. These marketplaces typically split the sale between themselves and the artist who created the asset; artists receive 40% to 95% of the sale, depending on the marketplace. In most cases, the artist retains ownership of the 3D model; the customer only buys the right to use and present the model. Some artists sell their products directly in their own stores, offering their products at a lower price by not using intermediaries. Over the last several years, numerous marketplaces specializing in 3D printing models have emerged. Some of these 3D printing marketplaces combine model sharing sites with or without built-in e-commerce capability. Some of these platforms also offer 3D printing services on demand, software for model rendering and dynamic viewing of items, and more. Among the most popular 3D printing file sharing platforms are Shapeways, Thingiverse, CGTrader, Threeding and MyMiniFactory. 3D printing is a form of additive manufacturing technology in which a three-dimensional object is created by laying down successive layers of material. The first widely available commercial application of human virtual models appeared in 1998 on the Lands' End web site. The human virtual models were created by the company My Virtual Model Inc. and enabled users to create a model of themselves and try on 3D clothing. There are several modern programs that allow for the creation of virtual human models (Poser being one example). 3D modeling is used in various industries such as film, animation and gaming, interior design and architecture. 3D models are also used in the medical industry for interactive representations of anatomy.
A wide range of 3D software is also used to construct digital representations of mechanical models or parts before they are actually manufactured. CAD/CAM-related software is used in such fields, and with this software you can not only construct parts, but also assemble them and observe their functionality. 3D modelling is also used in the field of industrial design, wherein products are 3D modeled before being presented to clients. In the media and event industries, 3D modelling is used in stage and set design.
- List of 3D modeling software
- List of common 3D test models
- 3D computer graphics software
- 3D printing
- 3D scanner
- Additive Manufacturing File Format
- Cloth modeling
- Computer facial animation
- Digital geometry
- Edge loop
- Evolver, a portal, 3D modeler and marketplace for 3D characters
- Geological modeling
- Industrial CT scanning
- Open CASCADE
- Polygon mesh
- Polygonal modeling
- Scaling (geometry)
- Stanford Bunny
- Triangle mesh
- Utah teapot
- Marching cubes
A Box Plot helps you:
- Visually display data correctly
- Compare multiple data sets
- Determine the spread and central tendencies of a dataset
A Box Plot is a graphical representation of data that displays the distribution, central tendency, and spread of a dataset, making it useful for identifying outliers and comparing multiple datasets. A Box Plot, also known as a box-and-whisker plot, is a visual tool for representing data distributions. It summarizes key statistics, including the median, quartiles, and potential outliers, providing a clear view of how data is spread and any unusual data points.
Components of a Box Plot:
- Box: Represents the interquartile range (IQR), which encompasses the middle 50% of the data. The box is divided into two parts at the median.
- Whiskers: Lines extending from the box indicate the range of the data, excluding potential outliers.
- Outliers: Individual data points that fall significantly outside the whiskers and may indicate unusual values.
How to create a Box Plot:
- Collect Data: Gather the dataset you want to analyze. This data can represent measurements, counts, or observations.
- Sort Data: Arrange the data values in ascending order.
- Calculate Quartiles: Determine the first quartile (Q1), which is the median of the lower half of the data, and the third quartile (Q3), which is the median of the upper half of the data.
- Calculate the Interquartile Range (IQR): Subtract Q1 from Q3 to find the IQR, representing the middle 50% of the data.
- Identify Outliers: Define a range for potential outliers. Points falling outside this range are considered outliers.
- Create the Box Plot: Draw a box from Q1 to Q3, with a line at the median (Q2). Extend whiskers from the box to the minimum and maximum data points within the defined range. Mark any outliers as individual data points.
- Label Axes: Label the x-axis (horizontal) with a description of the data and the y-axis (vertical) with the data values.
- Analyze the Plot: Examine the Box Plot to understand the distribution, central tendency (median), and spread (IQR). Identify any potential outliers.
A Box Plot provides:
- Visual representation of data distribution
- Identification of central tendency and spread
- Detection of potential outliers
- Comparison of multiple datasets
A Box Plot is a powerful tool for visualizing and summarizing data distributions. By examining the key components of the plot, you can quickly grasp the central tendency and spread of the data, as well as identify potential outliers, as shown in the sketch below.
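To make the construction steps concrete, here is a minimal sketch in Python (standard library only; the function names are ours). It computes the five-number summary using the median-of-halves method described above, and flags outliers using the common 1.5 × IQR fence - one usual way to "define a range for potential outliers", though the text above leaves the exact rule open.

```python
def median(sorted_vals):
    """Median of an already-sorted list."""
    n = len(sorted_vals)
    mid = n // 2
    return sorted_vals[mid] if n % 2 else (sorted_vals[mid - 1] + sorted_vals[mid]) / 2

def box_plot_stats(data):
    """Five-number summary plus outliers, using the 1.5 * IQR fences."""
    vals = sorted(data)                       # Step: sort data
    q2 = median(vals)                         # median (Q2)
    half = len(vals) // 2
    q1 = median(vals[:half])                  # Q1: median of lower half
    q3 = median(vals[len(vals) - half:])      # Q3: median of upper half
    iqr = q3 - q1                             # interquartile range
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [v for v in vals if v < lo_fence or v > hi_fence]
    inliers = [v for v in vals if lo_fence <= v <= hi_fence]
    return {"whisker_min": inliers[0], "q1": q1, "median": q2, "q3": q3,
            "whisker_max": inliers[-1], "iqr": iqr, "outliers": outliers}

print(box_plot_stats([2, 4, 4, 5, 6, 7, 8, 9, 30]))
# Q1=4, median=6, Q3=8.5, IQR=4.5, whiskers 2..9, and 30 flagged as an outlier
```

The whiskers then run from `whisker_min` to `whisker_max` (the extreme non-outlier points), with the box drawn from Q1 to Q3 and each outlier plotted individually.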
The thermometer is a very useful scientific instrument for checking body temperature and assessing whether someone has a fever. Every hospital and doctor uses this tool to measure body temperature and any fluctuation over a period of time for correct diagnosis. There are two ways to measure body temperature - core temperature measurement and surface temperature measurement. When we measure the temperature of the deep tissues of the body, it's known as core measurement, while surface measurement involves gauging the surface skin tissues.
- For core temperature, the measurement is taken through one of these body parts: the rectum, oral cavity or ear canal
- For surface temperature, the measurement is taken through the armpit or forehead
Of the two available methods of measurement, core temperature is regarded as more accurate and is recommended by doctors. Surface measurement may not deliver the same accuracy, purely because of constant changes in the surroundings. So, the surface method is only a good choice when the core method is not feasible when using a body temperature thermometer to assess fever.
General guidelines on using thermometers
We use thermometers to get an accurate reading so that it becomes clear whether fever is present. The instrument should be high quality to deliver consistent and correct results. More than that, you should know how to use the device well so as not to go wrong with the measurement itself. Keep in mind a few things while using a thermometer -
- Before buying the instrument, compare accuracy and suitability so that you can get the desired level of convenience and quality in the reading
- The thermometer you choose should suit the age and health condition of the individual or people it's meant for
- In case of any doubt about the method of measurement, consult a doctor to get better results
- If someone carries the risk of infection, make sure not to use the same device for checking other people's fever, as this can help spread the disease further
- It'd be great if you could have personal thermometers for people with communicable diseases, as this will avoid the risk that cross-infection generally carries
- Before using the device, refer to the instructions provided by the manufacturer so that you can understand the right way to use it and the method of reading
- To get accurate and consistent results, stay away from activities that might affect temperature measurement in any way
- Don't drink hot water or hot beverages right before measuring oral temperature, as this can deprive you of the right readings
- Always refer to the use instructions to clean and maintain the thermometer, as each manufacturer may have an entirely different set of procedures to follow
- In cases where temperature comparison is needed, take the reading at a fixed time each use and follow the same method with all your attempts
- The readings may not necessarily be correct, as various factors can have an influence, so consult a doctor about any doubt in this regard
This parallelogram is a rhomboid, as it has no right angles and unequal adjacent sides. Edges and vertices: 4. Symmetry group: C2 (rotational symmetry of order 2). Area: b × h (base × height), or ab sin θ (the product of adjacent sides and the sine of any vertex angle).
In Euclidean geometry, a parallelogram is a simple (non-self-intersecting) quadrilateral with two pairs of parallel sides. The opposite or facing sides of a parallelogram are of equal length and the opposite angles of a parallelogram are of equal measure. The congruence of opposite sides and opposite angles is a direct consequence of the Euclidean parallel postulate, and neither condition can be proven without appealing to the Euclidean parallel postulate or one of its equivalent formulations. By comparison, a quadrilateral with just one pair of parallel sides is a trapezoid in American English or a trapezium in British English. The three-dimensional counterpart of a parallelogram is a parallelepiped. The etymology (in Greek παραλληλ-όγραμμον, a shape "of parallel lines") reflects the definition.
Special cases
- Rhomboid - A quadrilateral whose opposite sides are parallel and adjacent sides are unequal, and whose angles are not right angles
- Rectangle - A parallelogram with four angles of equal size (right angles).
- Rhombus - A parallelogram with four sides of equal length.
- Square - A parallelogram with four sides of equal length and angles of equal size (right angles).
Characterizations
A simple quadrilateral is a parallelogram if and only if any one of the following statements is true:
- Two pairs of opposite sides are parallel (by definition).
- Two pairs of opposite sides are equal in length.
- Two pairs of opposite angles are equal in measure.
- The diagonals bisect each other.
- One pair of opposite sides is parallel and equal in length.
- Adjacent angles are supplementary.
- Each diagonal divides the quadrilateral into two congruent triangles.
- The sum of the squares of the sides equals the sum of the squares of the diagonals. (This is the parallelogram law.)
- It has rotational symmetry of order 2.
- The sum of the distances from any interior point to the sides is independent of the location of the point. (This is an extension of Viviani's theorem.)
- There is a point X in the plane of the quadrilateral with the property that every straight line through X divides the quadrilateral into two regions of equal area.
Thus all parallelograms have all the properties listed above, and conversely, if just one of these statements is true in a simple quadrilateral, then it is a parallelogram.
Other properties
- Opposite sides of a parallelogram are parallel (by definition) and so will never intersect.
- The area of a parallelogram is twice the area of a triangle created by one of its diagonals.
- The area of a parallelogram is also equal to the magnitude of the vector cross product of two adjacent sides.
- Any line through the midpoint of a parallelogram bisects the area.
- Any non-degenerate affine transformation takes a parallelogram to another parallelogram.
- A parallelogram has rotational symmetry of order 2 (through 180°), or order 4 if it is a square. If it also has exactly two lines of reflectional symmetry then it must be a rhombus or an oblong (a non-square rectangle). If it has four lines of reflectional symmetry, it is a square.
- The perimeter of a parallelogram is 2(a + b), where a and b are the lengths of adjacent sides.
- Unlike any other convex polygon, a parallelogram cannot be inscribed in any triangle with less than twice its area.
- The centers of four squares all constructed either internally or externally on the sides of a parallelogram are the vertices of a square.
- If two lines parallel to sides of a parallelogram are constructed concurrent to a diagonal, then the parallelograms formed on opposite sides of that diagonal are equal in area.
- The diagonals of a parallelogram divide it into four triangles of equal area.
Area formula
All of the area formulas for general convex quadrilaterals apply to parallelograms. Further formulas are specific to parallelograms. A parallelogram with base b and height h can be divided into a trapezoid and a right triangle, and rearranged into a rectangle. This means that the area of a parallelogram is the same as that of a rectangle with the same base and height: K = b × h. The base × height formula can also be derived by subtraction: enclose the parallelogram in a rectangle and cut off the two right triangles at its ends. If each triangle has legs a and h, the rectangle has area (b + a) × h, each triangle has area (a × h) / 2, and therefore the area of the parallelogram is K = (b + a)h − 2 × (ah / 2) = bh. Another area formula, for two sides B and C and vertex angle θ, is K = B × C × sin θ. The area of a parallelogram with sides B and C (B ≠ C) and angle γ at the intersection of the diagonals is given by K = (|tan γ| / 2) × |B² − C²|. When the parallelogram is specified from the lengths B and C of two adjacent sides together with the length D1 of either diagonal, then the area can be found from Heron's formula. Specifically it is K = 2 × √(S(S − B)(S − C)(S − D1)), where S = (B + C + D1) / 2, and the leading factor 2 comes from the fact that the chosen diagonal divides the parallelogram into two congruent triangles.
Area in terms of Cartesian coordinates of vertices
Let a and b be vectors in R² and let V denote the 2 × 2 matrix with rows a and b. Then the area of the parallelogram generated by a and b is equal to |det(V)| = |a1·b2 − a2·b1|. More generally, let a and b be vectors in Rⁿ and let V be the 2 × n matrix with rows a and b. Then the area of the parallelogram generated by a and b is equal to √(det(V·Vᵀ)). For points a, b, c in the plane, the area of the parallelogram with vertices at a, b and c is equal to the absolute value of the determinant of a matrix built using a, b and c as rows, with the last column padded with ones: K = |det [[a1, a2, 1], [b1, b2, 1], [c1, c2, 1]]|.
Proof that diagonals bisect each other
To prove that the diagonals of a parallelogram ABCD bisect each other at point E, note that angle ABE and angle CDE are equal in measure, and angle BAE and angle DCE are equal in measure, since these are alternate interior angles that a transversal makes with the parallel lines AB and DC. Also, side AB is equal in length to side DC, since opposite sides of a parallelogram are equal in length. Therefore, triangles ABE and CDE are congruent (ASA postulate: two corresponding angles and the included side). Since the diagonals AC and BD divide each other into segments of equal length, the diagonals bisect each other. Separately, since the diagonals AC and BD bisect each other at point E, point E is the midpoint of each diagonal.
Lattice of parallelograms
Parallelograms can tile the plane by translation. If the edges are equal, or the angles are right, the symmetry of the lattice is higher. Such tilings represent the four Bravais lattices in 2 dimensions; the symmetry groups involved include p4m ([4,4], order 8n) for the square lattice, pmm ([∞,2,∞], order 4n) for the rectangular lattice, and p1 ([∞+,2,∞+], order 2n) for the general parallelogram lattice.
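As a quick numerical check of the determinant formulas above, here is a small Python sketch (the function names are ours) that computes the area of the parallelogram generated by two edge vectors, and the same area from three vertices a, b, c.

```python
def parallelogram_area_from_vectors(u, v):
    """Area generated by 2D vectors u and v: |det([[u1, u2], [v1, v2]])|."""
    return abs(u[0] * v[1] - u[1] * v[0])

def parallelogram_area_from_points(a, b, c):
    """Area of the parallelogram with vertices a, b, c (and implied d = b + c - a).

    Equivalent to the 'rows padded with ones' determinant above: the edge
    vectors of the parallelogram are b - a and c - a.
    """
    u = (b[0] - a[0], b[1] - a[1])
    v = (c[0] - a[0], c[1] - a[1])
    return parallelogram_area_from_vectors(u, v)

# A base-4 parallelogram sheared sideways with height 3: area should be b*h = 12.
print(parallelogram_area_from_vectors((4, 0), (1, 3)))          # 12
print(parallelogram_area_from_points((0, 0), (4, 0), (1, 3)))   # 12
```

The example also illustrates the K = b × h formula: shearing the top edge sideways (here by one unit) changes the vertex angles but not the base, the height, or the determinant.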
Parallelograms arising from other figures
An automedian triangle is one whose medians are in the same proportions as its sides (though in a different order). If ABC is an automedian triangle in which vertex A stands opposite the side a, G is the centroid (where the three medians of ABC intersect), and AL is one of the extended medians of ABC with L lying on the circumcircle of ABC, then BGCL is a parallelogram.
Varignon parallelogram
The midpoints of the sides of an arbitrary quadrilateral are the vertices of a parallelogram, called its Varignon parallelogram. If the quadrilateral is convex or concave (that is, not self-intersecting), then the area of the Varignon parallelogram is half the area of the quadrilateral.
Tangent parallelogram of an ellipse
For an ellipse, two diameters are said to be conjugate if and only if the tangent line to the ellipse at an endpoint of one diameter is parallel to the other diameter. Each pair of conjugate diameters of an ellipse has a corresponding tangent parallelogram, sometimes called a bounding parallelogram, formed by the tangent lines to the ellipse at the four endpoints of the conjugate diameters. All tangent parallelograms for a given ellipse have the same area. It is possible to reconstruct an ellipse from any pair of conjugate diameters, or from any tangent parallelogram.
Faces of a parallelepiped
The faces of a parallelepiped, the three-dimensional counterpart of the parallelogram, are parallelograms.
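Varignon's theorem is easy to verify numerically. A minimal sketch (Python; names are ours) takes a quadrilateral, forms the midpoints of its sides, and checks that opposite sides of the midpoint quadrilateral are equal as vectors - one of the parallelogram characterizations listed earlier.

```python
def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def varignon(quad):
    """Midpoints of the sides of a quadrilateral, in order."""
    return [midpoint(quad[i], quad[(i + 1) % 4]) for i in range(4)]

def is_parallelogram(pts):
    """Characterization check: both pairs of opposite sides are equal as
    vectors (equal length and parallel). Exact here because the sample
    coordinates are halves; use a tolerance for general floats."""
    def edge(i, j):
        return (pts[j][0] - pts[i][0], pts[j][1] - pts[i][1])
    return edge(0, 1) == edge(3, 2) and edge(1, 2) == edge(0, 3)

# An arbitrary, non-special quadrilateral:
quad = [(0, 0), (5, 1), (6, 4), (1, 3)]
print(varignon(quad))                    # the four midpoints
print(is_parallelogram(varignon(quad)))  # True, as Varignon's theorem predicts
```

Changing `quad` to any other non-self-intersecting quadrilateral still yields `True`, which is the content of the theorem.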
This topic will focus on sustainable production and consumption and the role of our land, water, biodiversity and energy resources. Students will investigate and reflect on the issues outlined below.
The challenge of the 21st century is clear: to feed the world's growing population while safeguarding our natural resources in the process. We have a long way to go, considering one billion people are undernourished today. With more people to feed and house, there is growing competition for land, water and energy between housing, farming and mining. Our land, our water and our non-renewable energy are precious resources, and we should all use them wisely. Right now, it takes less than a second to add two people to the world population. In the same second, the farmland available to feed our growing population shrinks by an area about the size of a soccer field. We must produce more food from less land to feed our growing population. Past farming practices have left the soil vulnerable to being swept away by wind and rain. Already, an area large enough to feed Europe has been so severely degraded that it cannot produce food. Better farming practices can halt and even reverse the process of soil degradation. At the same time, farmers are now using existing farmland more efficiently. It is pivotal that governments monitor what is happening to their land and incorporate soil protection measures involving the agriculture, forestry, water management, industry and waste disposal sectors.
- 93% of the food we eat is produced by Australian farmers
- Australian farmers look after 60% of the landscape
- Less than 6% of Australia's landscape is suitable for growing crops, fruit and vegetables
By 2050, our global population is expected to reach 9 billion people and the demand for food will grow by 70%. Even if we convert all remaining land to cropland, we will get nowhere near meeting the future demand for food without increasing agricultural productivity and efficiency. Success requires farmers having access to a range of agricultural solutions, education to gain the necessary skills, and financial incentives. Sustainable farming solutions include not tilling the land, crop rotations, bringing vegetation back to degraded land, planting vegetation around fields to prevent erosion, and transitioning to green energy technology. Resourceful land use also contributes to mitigating climate change: globally, 2 to 3 billion metric tons of carbon can be stored per year in soil. Farmers can produce higher yields on existing farmland, prevent further loss of fertile land, and find innovative ways to make use of marginal land, especially in developing countries. Technology is an important part of the solution, but we must also partner to share knowledge. An unprecedented level of global collaboration must take place between farmers, consumers and entrepreneurs, governments and companies, civil society and multilateral organizations. Governments must support resource use efficiency and environmental stewardship, and the private sector must develop new technologies that enable these practices. People should be able to make informed choices about the crops they grow, the products they buy, and the agricultural systems they use. Agriculture should be viewed as a productive investment that drives economic development and builds long-term economic, political and environmental stability.
Success in agriculture hinges on many factors, but farmers worldwide have perhaps one common fear: lack of water. And for good reason.
According to the Food and Agriculture Organization (FAO), agriculture uses about 70 percent of the world's fresh water, and shortages will have a huge impact on food security. It is imperative that we all make water efficiency a priority if we are to manage water scarcity. Farmers need incentives to implement better water management. They need infrastructure and financial support to explore innovative solutions that produce crops with greater water efficiency. Worldwide, up to 40% of the water used by some farmers is lost due to inefficient practices such as field flooding. A recent study by the 2030 Water Resources Group found that existing agricultural technology can sustainably increase water use efficiency, at reasonable cost and with little investment. For example, improving soil structure can conserve water. Weed control using herbicides lowers the need for tillage, leaves roots in the soil and improves water absorption. Efficient irrigation systems deliver water to roots, and planting grasses around paddocks helps keep water in the soil. In combination, these practices dramatically reduce surface evaporation and water run-off. Stopping run-off can also ensure agricultural chemicals and soil from paddocks don't reach rivers and streams. In addition, we need to broaden our focus to include land productivity and water productivity. We have to get more crop per drop. Use of drought-resistant and water- and fertiliser-efficient crops and pastures can help produce reliable yields even when water is scarce. There is no one solution to deal with water scarcity. Investment is needed to develop innovative water-efficient technologies, drought-tolerant seeds, crop protection products and optimized irrigation systems. But the best solutions can only help when farmers can afford them and understand the advantage of using them. Thus, infrastructure for knowledge sharing and access to technology must be strengthened. Incentives such as access to affordable credit and financial risk-management mechanisms, such as insurance for weather-related crop losses, will also be critical. Access to safe water plays a pivotal role in sustainable development and poverty reduction. To positively alter the way the world uses limited water resources, communities need to understand their options for managing water, make better choices, and take responsibility for them.
With appropriate settings and technologies, projected increases in water demand need not increase pressure on water-limited catchments. Payments for carbon sequestration could be harnessed to reward rural landowners for restoring ecosystems, increasing native habitat by 17% and decreasing extinction risks by 10%, without large additional government outlays. Non-agricultural water use is projected to increase by 65 to 150% by 2050, while the value of national economic output increases by more than 150%. While water use is projected to double by 2050, this growth can be met while enhancing urban water security and avoiding increased environmental pressures through increased water recycling, desalination and integrated catchment management.
Related resources: Water is a Precious Resource; Water Use Efficiency.
Biodiversity can be defined as the variety of all living things together with the "ecosystem services" their functioning provides: clean air, clean water, shelter, food and fibre production, native animals and cultural value. Biodiversity exists in the soil, in the vegetation supported by the soil, and in the wildlife that access soil, vegetation and habitat.
Biodiversity is in the eye of the beholder. For some it is our life support system, for others it is a resource to be used, and for others it is a precious cultural symbol. Australians have long had a sense that our biodiversity is special, but despite our sense of its importance, in many parts of our country biodiversity is in trouble. Values are the lasting beliefs or ideals that influence a person's attitude and serve as broad guidelines for that person's behaviour. Understanding biodiversity, and why it matters, is assisted by comprehending the range of distinctive values that individuals and societies may assign to the living world and the ecosystems it comprises. It is an indication in itself of the complexity of views about biodiversity, and the variety of interactions with it, that at least five separate categories are necessary to cover all possibilities. (Source: CSIRO, Biodiversity: Science and Solutions for Australia)
Our planet is undergoing a biodiversity crisis. Globally, at least 16,000 species are threatened with extinction, including 12 per cent of birds, 23 per cent of mammals and 32 per cent of amphibians. Biologists know what is causing this environmental crisis - human impacts from development, deforestation, pollution and climate change are destroying the homes and habitat of wildlife around the world. More importantly, biologists understand that the trend can be reversed. The greatest threat to biodiversity is the size and rate of growth of the human population. Every day, more people need more space, consume more resources and generate more waste as the world population continues to grow at an alarming rate. Human population growth is reducing biodiversity in several ways. All forms of food production contribute to a loss of biodiversity to varying degrees, and it is important that impacts on biodiversity are managed effectively. There is no denying that farming practices throughout the 1800s and the first half of the 1900s had some detrimental impacts on Australia's biodiversity. This was mainly due to government-mandated land clearing, in the belief that Australia should be farmed using European methods. Land clearing reduced areas of native vegetation which, coupled with some traditional land management practices, resulted in a decline in biodiversity. Many ecosystems have been lost during the past 200 years. Loss of species is a major threat to biodiversity in Australia, and species of animals and plants under threat may be listed in one of several threat categories.
FARMERS ARE BEST KNOWN for growing crops and raising livestock to provide for the food and fibre needs of Australian families, but lately, it's all about the work they do on the farm to look after the environment. Today, Australian farmers strive to protect, manage and enhance biodiversity on their land. For example, planting native trees and shrubs on their properties can help alleviate problems such as erosion and soil structure decline, making the land more productive as well as increasing biodiversity and providing natural shelter. Biodiversity is a priority natural resource management (NRM) issue for farming industries, and farmers nationwide have responded to the challenge of biodiversity conservation in a range of ways. NRM is an important activity on 94% of Australian farms, resulting in improved productivity and sustainability. By applying three principles - retain, restore and revegetate - cattle and sheep farmers can protect and even enhance biodiversity on their farms.
Australian farmers are planting more trees for environmental purposes than a decade ago. In 2001, farmers planted 20.6 million tree seedlings for NRM, compared with nine million in 1991. On average, each Australian farmer plants 150 tree seedlings a year solely for conservation purposes. Many of Australia's farmers are active members of Landcare groups, and have been since Landcare's inception in 1989. Landcare was established by the National Farmers' Federation and the Australian Conservation Foundation to provide a vision for the transformation to ecological sustainability through collective, community-led groups. Initiatives such as Landcare Week are opportunities to recognise the role Australian farmers play as environmental stewards and land managers.
"Today, with support from the federal government, Landcare has grown into an environmental movement. Farmers are Australia's frontline environmentalists, looking after 61% of Australia's valuable land resources. After all, farmers have the most to lose if the environment becomes damaged: we simply cannot farm without healthy soils, healthy water resources and healthy air quality. Farmers know that good environmental outcomes and increased agricultural production go hand in hand, which is why natural resource management is a fundamental activity on Australian farms. According to the Australian Bureau of Statistics, 94% of farmers undertake some form of natural resource management, including planting trees and shrubs, fencing off rivers, streams and gullies to protect regrowth, and restoring wetlands. Australian agriculture has also led the nation in reducing greenhouse gas emissions - a massive 40% reduction between 1990 and 2006. Australian farmers are also investing financially in natural resource management. The Organisation for Economic Co-operation and Development estimates that the management of soil resources, water resources and biodiversity costs $3.5 billion in Australia annually, around 10% of agriculture's GDP, and for every government dollar invested, Australian farmers contribute $2.60 in environmental management and protection," said Jock Laurie, former president of the National Farmers' Federation. Adapted from an NFF media release. For more information, visit the Landcare website: landcareonline.com.au
This material has been adapted from resources created by the CSIRO and Target 100. Related resources: Resource Smart School; Australian Sustainable Schools Initiative; Biodiversity & cotton.
Renewable energy is energy which can be obtained from natural resources that can be constantly replenished. Renewable energy technologies include technologies that use - or enable the use of - one or more renewable energy sources, such as the sun, wind and water, along with hybrid and related technologies. Rapid improvements in technology and pricing present fresh opportunities to replace polluting energy sources like coal and coal seam gas with energy from the sun, sea and wind. By using energy more wisely and harnessing the power of renewable energy, we can create opportunities for new employment and economic growth, foster regional development, and reduce our contribution to global climate change. It is time to make the switch to a clean energy economy.
Australia has the natural and institutional resources to prosper in almost all scenarios for global energy and resource use. Global demand for exports is projected to treble by 2050 as global per capita income also trebles.
We should expect long-term growth of world energy demand, but demand for specific materials and energy exports could vary. Even in scenarios with strong global action to reduce emissions, energy and other resources could remain one of the pillars of the Australian economy, as long as commercially viable technology solutions are developed in a timely fashion to manage environmental impacts. Domestically, energy affordability can improve, especially when we enhance the efficiency and productivity of the energy system. Transport affordability might also improve, especially through the large-scale adoption of electric vehicles. Energy is one of the fastest growing costs for farmers, with electricity and diesel accounting for a significant proportion of total farm costs. Energy use efficiency describes the total amount of energy used on farm (in the form of electricity, diesel, or other sources) compared to the amount of production. If the energy consumed can be reduced while production is maintained or increased, energy use efficiency is improved. This may be one of the fastest and easiest ways to improve profitability, and it will also reduce greenhouse gas (GHG) emissions. Research indicates there are significant opportunities to reduce energy use - and therefore costs - on Australian farms. It is important for farmers to monitor their energy use to estimate use and costs, and to track these costs over time. An audit can also identify energy and cost savings, such as fuel switching and tariff negotiation. As well as being a major cost, diesel and electricity are also significant contributors to GHG emissions. So maximising energy efficiency can not only help farmers be more profitable, it can also help farming communities be more sustainable. Renewable energy and farming are a winning combination. Wind, solar, and biomass energy can be harvested forever, providing farmers with a long-term source of income. Renewable energy can also help reduce pollution, global warming, and dependence on imported fuels. Farms have long used wind power to pump water and generate electricity. Some large organisations have installed large wind turbines on farms to provide power to electricity companies and consumers. Some farmers have also purchased wind turbines; others are starting to form wind power cooperatives. The amount of energy from the sun that reaches Earth each day is enormous. All the energy stored in Earth's reserves of coal, oil, and natural gas is equal to the energy from only 20 days of sunshine. Most areas of farmland in Australia receive enough sunshine to make solar energy practical. Solar energy can be used in agriculture in a number of ways, saving money, increasing self-reliance, and reducing pollution. Solar energy can cut a farm's electricity and heating bills. Solar water heaters can provide hot water for dairy operations and houses. Photovoltaics (solar electric panels) can power farm operations and remote water pumps, lights, and electric fences. Farm buildings can be renovated to capture natural daylight instead of using electric lights. The options that make the most sense for farmers depend on local renewable resources, energy markets, and the types of support available from federal and state governments. Biomass energy is produced from plants and organic wastes - everything from crops, trees, and crop residues to manure. Crops grown for energy could be produced in large quantities, just as food crops are.
Crops and biomass wastes can be converted to energy on the farm, or sold to energy companies that produce fuel for cars and tractors and heat and power for homes and businesses.
Related resources: Farm Case Studies and Renewable Energy; 10 Basic Electricity Facts; Australian Sustainable Schools Initiative; Water Use Efficiency; A Sustainable Cotton Industry.
The Australian textile industry is diverse, with some 680 firms supplying textiles to consumers and other manufacturing sectors. Textile production involves many separate processing steps, each with a potential environmental consequence. One of the many sustainability issues relating to this large and important industry is textile waste sent to landfill. This issue is a great cost to the industry and the economy. Both the industry and consumers produce textile waste.
Related resources: Resource Smart School; Small Steps to Reduce Waste; Redistributing Surplus Fresh Food; Save Money and Our Environment; The Conversation - food waste article search; Find Ethical & Sustainable Food.
There are simple choices and changes we can make in our daily lives that will help us live more sustainably. We need to change the way we live to reduce our over-consuming lifestyles. Australians consume more of just about everything, per person, than people in other countries. We consume lots of food, paper, timber, metals, energy, water, plastic, glass - you name it, as a nation we consume a lot of it. One way to measure our environmental impact is through an ecological footprint. On a global scale, Australia is a big-foot. We are our very own Yeti of consumption of natural resources. Over-consumption is a significant contributor to global warming and climate change, and this consumption is the major cause of Australia's large footprint. The other cause is our reliance on fossil fuels for everything, ranging from the electricity in our homes to the petrol in our cars. Our way of life in Australia is threatening the future of our planet. We all have a part to play, as we all contribute to our ecological footprint and to global warming. This is where we can work together. You have the power to change the way you live and reduce Australia's ecological footprint, one step at a time.
Cotton is both a food and a fibre. Half of the weight of all cotton harvested is seed, which is predominantly made into cottonseed oil for food. Almost all parts of the cotton plant are used in some way, including the lint, cottonseed, linters, stalks and seed hulls. You can approach this topic from the perspective of waste in food or fashion, focusing on cotton. Waste can occur at all stages of a cotton product's lifecycle, from practices on farm (e.g. energy use) to processing, use, care and reuse. Where do you imagine the most waste occurs? Every decision you make as a consumer has a consequence. Take a look at the research to help you determine what decisions you can make when purchasing and caring for your garments to reduce waste. You may also be interested in the areas of research and innovation that explore ways to reuse fashion by-products and parts of the cotton plant.
Students will investigate and reflect on how they can act to encourage themselves and others to have a healthy lifestyle and to value the people behind the food we eat and the clothes we wear. Social and community health is about building resilient individuals and communities.
Personal health means focusing on all aspects of health, including the physical, the emotional and social, and the mental. Good health is about understanding the balance between what we put into our bodies and what we do with them; it's about how we deal with the stresses of everyday life; and it's about how we manage our emotions and interact with others. All aspects of our health are affected by the food we eat, the exercise we undertake and the ways we take care of our mental health. Studies have demonstrated the links between a healthy diet and efficient and effective brain, physical and psychological functioning. The Australian Guide to Healthy Eating recommends eating a variety of nutritious foods, including vegetables, fruit, grain and lean meat, to achieve a balanced healthy diet. This guide indicates the need to increase the consumption of cereals, legumes, vegetables and fruit, while consuming meat, fish and dairy products in lesser quantities. Meat is an important source of protein and certain micro-nutrients, including iron, zinc and vitamins, and milk is a rich source of protein and calcium. That's why the Australian Guide recommends two daily serves of milk and dairy products, along with two serves of meat, fish and eggs. Protein intake is important for a balanced diet, as insufficient protein intake can lead to obesity due to excessive carbohydrate and fat intake to meet energy requirements. Plant foods - including grains, fruits, vegetables and legumes - are also vital for keeping us healthy. That's why the Australian Guide to Healthy Eating recommends a higher consumption of plant foods each day, along the lines of seven serves of cereals, five serves of vegetables and legumes, and two serves of fruit. Doing so can reduce the risk of obesity, diabetes, heart disease and some types of cancer, and increase life expectancy. Regular physical activity is an important contributor to good overall health, promoting healthy weight and reducing chronic disease risk. However, the physical activity levels of many people, both in Australia and around the world, are below the optimal level recommended to gain a health benefit. The World Health Organization attributes the trend toward physical inactivity in part to insufficient participation in physical activity during leisure time (recognised globally as participating in less than 30 minutes of moderate-intensity physical activity on most days of the week), and to an increase in sedentary behaviour as part of the activities undertaken at work and at home. The Double Pyramid, developed by the Barilla Centre for Food and Nutrition in Italy, shows the synergies between food that is good for our health and food that is good for the environment. This model consists of two pyramids: one is a traditional food pyramid like the Australian guide, while the other is an upside-down pyramid ranking the environmental impacts of the same foods. In general, foods at the base of the food pyramid are also those with the lowest environmental impact. Cereal grain crops are primary producers and have a lower water and carbon footprint. Legumes such as chickpea and lentil have less than half the greenhouse emissions of other cereal crops, as they are able to fix nitrogen naturally from the air and do not require any nitrogen fertilisers. Compared with animal products, emissions from vegetables are lower on a per-tonne basis. Most emissions associated with vegetable production come from fertiliser use, electricity use and post-harvest refrigeration and transport.
Even a modest replacement of energy-intensive animal products with less energy-intensive grains, fruits and vegetables would be significant at the global scale. Given that fewer than 3% of people in Australia and the UK are vegetarian, it's unrealistic to suggest a meat-free diet for everyone. It is important to note that in Australia and New Zealand, grazing animals are mainly grass-fed rather than grain-fed (more common in the US), which may play an important role in soil carbon sequestration in grasslands and thereby reduce greenhouse gas emissions. If a healthy and sustainable plant-based diet is better for our health and environment, why is it that consumption of plant foods in many developed countries does not meet recommended levels? In Victoria, for example, fewer than 8% of adults consume the recommended daily intake of five or more serves of vegetables, and fewer than 46% eat the recommended daily intake of two or more serves of fruit. The recent Australian Health Survey found that one in four adults were eating no vegetables on an average day and only 7% were eating the recommended five servings. Our current diets cost more than healthy diets, so factors other than price must be helping drive the preference for unhealthy choices. These likely include the abundant availability, accessibility, advertising and promotion of junk foods that exploit people's vulnerabilities. It's therefore important not to blame victims for responding as expected to unhealthy food environments. Given the rapidly rising costs to all Australians of our growing waistlines - 25% of us are now obese, one of the highest rates in the world - failing to act is already proving extremely expensive, in both personal and economic terms. It has been suggested that the government can help promote a healthier diet by considering educational and policy measures, such as reinstating healthy-food star rating systems and restricting junk food promotion. Before you let total and utter despair get the best of you - we can all work together to break the vicious cycle of rising obesity and ensure nutrition policy actions tackle barriers to healthy eating. Ways to do this include increasing the availability of healthy foods and drinks in schools and hospitals and regulating against "junk" food and drink advertising directed at children. Together, these small steps can help shift the whole population to a healthier diet. These statistics show there is huge room for improvement, and opportunities for you as an individual, as a school and as a community to design and deliver ideas and solutions for action.
Isotopes of plutonium
Plutonium (Pu) is an artificial element, except for trace quantities of primordial 244Pu, and thus a standard atomic mass cannot be given. Like all artificial elements, it has no stable isotopes. It was synthesized long before being found in nature; the first isotope synthesized was 238Pu, in 1940. Twenty plutonium radioisotopes have been characterized. The most stable are Pu-244, with a half-life of 80.8 million years, Pu-242, with a half-life of 373,300 years, and Pu-239, with a half-life of 24,110 years. All of the remaining radioactive isotopes have half-lives of less than 7,000 years. The element also has eight meta states, though none is very stable; all meta states have half-lives of less than one second. The isotopes of plutonium range in atomic weight from 228.0387 u (Pu-228) to 247.074 u (Pu-247). The primary decay modes before the most stable isotope, Pu-244, are spontaneous fission and alpha emission; the primary mode after is beta emission. The primary decay products before Pu-244 are isotopes of uranium and neptunium (neglecting the wide range of daughter nuclei created by fission processes), and the primary products after are isotopes of americium.
- Plutonium-238 has a half-life of 87.74 years and emits alpha particles. Pure Pu-238 for the radioisotope thermoelectric generators that power some spacecraft is produced by neutron capture on neptunium-237, but plutonium from spent nuclear fuel can contain as much as a few percent Pu-238, originating from 237Np, alpha decay of 242Cm, or (n,2n) reactions.
- Plutonium-239 is the most important isotope of plutonium, with a half-life of 24,100 years. Pu-239 and Pu-241 are fissile, meaning that the nuclei of their atoms can break apart when bombarded by slow-moving thermal neutrons, releasing energy, gamma radiation and more neutrons. Pu-239 can therefore sustain a nuclear chain reaction, leading to applications in nuclear weapons and nuclear reactors. It is synthesized by irradiating uranium-238 with neutrons in a nuclear reactor, then recovered via nuclear reprocessing of the fuel. Further neutron capture produces successively heavier isotopes.
- Plutonium-240 has a high rate of spontaneous fission, raising the background neutron radiation of plutonium containing it. Plutonium is graded by the proportion of Pu-240: weapons grade (< 7%), fuel grade (7-19%) and reactor grade (> 19%). Lower grades are less suited for nuclear weapons and thermal reactors but can fuel fast reactors.
- Plutonium-241 is fissile, but also beta decays with a half-life of 14 years to americium-241.
- Plutonium-242 is not fissile and not very fertile (requiring three more neutron captures to become fissile); it has a low neutron capture cross section and a longer half-life than any of the lighter isotopes.
- Plutonium-244 is the most stable isotope of plutonium, with a half-life of about 80 million years, long enough for trace quantities to be found in nature. It is not significantly produced in nuclear reactors because Pu-243 has a short half-life, but some is produced in nuclear explosions.
Production and uses
Pu-239, a fissile isotope that is the second most used nuclear fuel in nuclear reactors after U-235, and the most used fuel in the fission portion of nuclear weapons, is produced from U-238 by neutron capture followed by two beta decays.
Pu-240, Pu-241 and Pu-242 are produced by further neutron capture. The odd-mass isotopes Pu-239 and Pu-241 have about a 3/4 chance of undergoing fission on capture of a thermal neutron and about a 1/4 chance of retaining the neutron and becoming the next heavier isotope. The even-mass isotopes are fertile material but not fissile and also have a lower overall probability (cross section) of neutron capture; therefore, they tend to accumulate in nuclear fuel used in a thermal reactor, the design of nearly all nuclear power plants today. In plutonium that has been used a second time in thermal reactors in MOX fuel, Pu-240 may even be the most common isotope. All plutonium isotopes and other actinides, however, are fissionable with fast neutrons. Pu-240 does have a moderate thermal neutron absorption cross section, so that Pu-241 production in a thermal reactor becomes a significant fraction of Pu-239 production. Pu-241 has a half-life of 14 years, and has slightly higher thermal neutron cross sections than Pu-239 for both fission and absorption. While nuclear fuel is being used in a reactor, a Pu-241 nucleus is much more likely to fission or to capture a neutron than to decay. Pu-241 accounts for a significant proportion of fissions in thermal reactor fuel that has been used for some time. However, in spent nuclear fuel that does not quickly undergo nuclear reprocessing but instead is cooled for years after use, much or most of the Pu-241 will beta decay to americium-241, one of the minor actinides, a strong alpha emitter, and difficult to use in thermal reactors. Pu-242 has a particularly low cross section for thermal neutron capture, and it takes four neutron absorptions to become another fissile isotope (either curium-245 or Pu-241) and fission. Even then, there is a chance either of those two fissile isotopes will fail to fission but instead absorb the fourth neutron, becoming curium-246 (on the way to even heavier actinides like californium, which is a neutron emitter by spontaneous fission and difficult to handle) or becoming Pu-242 again; so the mean number of neutrons absorbed before fission is even higher than 4. Pu-242 is therefore particularly unsuited to recycling in a thermal reactor and would be better used in a fast reactor, where it can be fissioned directly. However, Pu-242's low cross section means that relatively little of it will be transmuted during one cycle in a thermal reactor. Pu-242's half-life is about 15 times as long as Pu-239's; it is therefore 1/15 as radioactive and not one of the larger contributors to nuclear waste radioactivity. Pu-242's gamma ray emissions are also weaker than those of the other isotopes. Pu-243 has a half-life of only 5 hours, beta decaying to americium-243. Because Pu-243 has little opportunity to capture an additional neutron before decay, the nuclear fuel cycle does not produce the extremely long-lived Pu-244 in significant quantity. Pu-238 is not normally produced in as large a quantity by the nuclear fuel cycle, but some is produced from neptunium-237 by neutron capture (this reaction can also be used with purified neptunium to produce Pu-238 relatively free of other plutonium isotopes for use in radioisotope thermoelectric generators), by the (n,2n) reaction of fast neutrons on Pu-239, or by alpha decay of curium-242, which is produced by neutron capture from Am-241. It has a significant thermal neutron cross section for fission, but is more likely to capture a neutron and become Pu-239.
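Two of the quantitative claims above are easy to verify with a few lines of Python (plain arithmetic, not a nuclear data library). First, the "3/4 chance of fission" for Pu-239 follows from the thermal cross sections quoted in the next section (747.9 barns fission, 270.7 barns capture); second, the decay of Pu-241 during cooling follows the standard half-life law N(t) = N0 · (1/2)^(t/T), using the 14.29-year half-life from the isotope table below:

fission_xs = 747.9  # Pu-239 thermal fission cross section, barns
capture_xs = 270.7  # Pu-239 thermal capture cross section, barns
p_fission = fission_xs / (fission_xs + capture_xs)
print(f"P(fission | absorption) = {p_fission:.3f}")  # ~0.734, about 3/4

def fraction_remaining(t_years, half_life_years):
    """Fraction of a radionuclide left after t_years of decay."""
    return 0.5 ** (t_years / half_life_years)

left = fraction_remaining(20, 14.29)  # Pu-241 in fuel cooled for 20 years
print(f"Pu-241 remaining: {left:.2f}, decayed to Am-241: {1 - left:.2f}")  # 0.38 / 0.62

After two decades of cooling, roughly 60% of the Pu-241 has become Am-241, consistent with the "much or most" described above.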
Pu-240, Pu-241 and Pu-242

The fission cross section for 239Pu is 747.9 barns for thermal neutrons, while the activation (capture) cross section is 270.7 barns (the ratio approximates to 11 fissions for every 4 neutron captures). The higher plutonium isotopes are created when uranium fuel is used for a long time: in high-burnup used fuel, the concentrations of the higher plutonium isotopes are greater than in the low-burnup fuel that is reprocessed to obtain weapons-grade plutonium.

Plutonium-239 is one of the three fissile materials used for the production of nuclear weapons and in some nuclear reactors as a source of energy. The other fissile materials are uranium-235 and uranium-233. Plutonium-239 is virtually nonexistent in nature. It is made by bombarding uranium-238 with neutrons in a nuclear reactor. Uranium-238 is present in quantity in most reactor fuel; hence plutonium-239 is continuously made in these reactors. Since plutonium-239 can itself be split by neutrons to release energy, it provides a portion of the energy generation in a nuclear reactor.

| Element | Isotope | Thermal neutron capture cross section (barns) | Thermal neutron fission cross section (barns) | Decay mode | Half-life |
| U | 238 | 2.68 | 5·10^-6 | α | 4.47 × 10^9 years |
| Np | 237 | 165 (capture) | – | α | 2,144,000 years |

There are small amounts of Pu-238 in the plutonium of usual plutonium-producing reactors. However, isotopic separation would be quite expensive compared to another method: when a U-235 atom captures a neutron, it is converted to an excited state of U-236. Some of the excited U-236 nuclei undergo fission, but some decay to the ground state of U-236 by emitting gamma radiation. Further neutron capture creates U-237, which has a half-life of 7 days and thus quickly decays to Np-237. Since nearly all neptunium is produced in this way or consists of isotopes which decay quickly, one gets nearly pure Np-237 by chemical separation of neptunium. After this chemical separation, Np-237 is again irradiated by reactor neutrons to be converted to Np-238, which decays to Pu-238 with a half-life of 2 days.

Pu-240 as an obstacle to nuclear weapons

Pu-240 undergoes spontaneous fission as a secondary decay mode at a small but significant rate. The presence of Pu-240 limits plutonium's nuclear bomb potential because the neutron flux from spontaneous fission initiates the chain reaction prematurely and reduces the bomb's power by exploding the core before full implosion is reached. Plutonium consisting of more than about 90% Pu-239 is called weapons-grade plutonium; plutonium from spent nuclear fuel from commercial power reactors generally contains at least 20% Pu-240 and is called reactor-grade plutonium. However, modern nuclear weapons use fusion boosting, which mitigates the predetonation problem; if the pit can generate a nuclear weapon yield of even a fraction of a kiloton, which is enough to start deuterium-tritium fusion, the resulting burst of neutrons will fission enough plutonium to ensure a yield of tens of kilotons. Pu-240 contamination is the reason plutonium weapons must use the implosion method. Theoretically, pure Pu-239 could be used in a gun-type nuclear weapon, but achieving this level of purity is prohibitively difficult. Pu-240 contamination has proven a mixed blessing to nuclear weapons design.
While it created delays and headaches during the Manhattan Project because of the need to develop implosion technology, those very same difficulties are currently a barrier to nuclear proliferation. Implosion devices are also inherently more efficient and less prone to accidental detonation than gun-type weapons.

Table of selected isotopes. For ground-state nuclides the columns are: nuclide, Z, N, isotopic mass (u), half-life, decay mode, daughter isotope, nuclear spin, and range of natural abundance; for excited isomers (marked m), the excitation energy is listed in place of the mass.

| Nuclide | Z | N | Isotopic mass (u) / excitation | Half-life | Decay mode | Daughter | Spin | Natural abundance |
| 228Pu | 94 | 134 | 228.03874(3) | 1.1(+20-5) s | α (99.9%) | 224U | 0+ | |
| 232Pu | 94 | 138 | 232.041187(19) | 33.7(5) min | EC (89%) | 232Np | 0+ | |
| 233Pu | 94 | 139 | 233.04300(5) | 20.9(4) min | β+ (99.88%) | 233Np | 5/2+# | |
| 234Pu | 94 | 140 | 234.043317(7) | 8.8(1) h | EC (94%) | 234Np | 0+ | |
| 235Pu | 94 | 141 | 235.045286(22) | 25.3(5) min | β+ (99.99%) | 235Np | (5/2+) | |
| 237m1Pu | | | 145.544(10) keV | 180(20) ms | IT | 237Pu | 1/2+ | |
| 237m2Pu | | | 2900(250) keV | 1.1(1) µs | | | | |
| 239Pu [n 3][n 4] | 94 | 145 | 239.0521634(20) | 2.411(3)×10^4 a | α | 235U | 1/2+ | |
| 239m1Pu | | | 391.584(3) keV | 193(4) ns | | | 7/2- | |
| 239m2Pu | | | 3100(200) keV | 7.5(10) µs | | | (5/2+) | |
| 241Pu [n 3] | 94 | 147 | 241.0568515(20) | 14.290(6) a | β- (99.99%) | 241Am | 5/2+ | |
| 241m1Pu | | | 161.6(1) keV | 0.88(5) µs | | | 1/2+ | |
| 241m2Pu | | | 2200(200) keV | 21(3) µs | | | | |
| 243Pu [n 3] | 94 | 149 | 243.062003(3) | 4.956(3) h | β- | 243Am | 7/2+ | |
| 243mPu | | | 383.6(4) keV | 330(30) ns | | | (1/2+) | |
| 244Pu [n 5] | 94 | 150 | 244.064204(5) | 8.00(9)×10^7 a | α (99.88%) | 240U | 0+ | Trace |

Abbreviations: EC: electron capture; IT: isomeric transition (CD: cluster decay and SF: spontaneous fission occur as minor branches).

Table notes:
- [n 3] Fissile nuclide.
- [n 4] Most useful isotope for nuclear weapons.
- [n 5] Primordial radionuclide.
- Values marked # are not purely derived from experimental data, but at least partly from systematic trends. Spins with weak assignment arguments are enclosed in parentheses.
- Uncertainties are given in concise form in parentheses after the corresponding last digits. Uncertainty values denote one standard deviation, except isotopic composition and standard atomic mass from IUPAC, which use expanded uncertainties.

Sources:
- Isotope masses, isotopic compositions and standard atomic masses from: J. R. de Laeter, J. K. Böhlke, P. De Bièvre, H. Hidaka, H. S. Peiser, K. J. R. Rosman and P. D. P. Taylor (2003). "Atomic weights of the elements. Review 2000 (IUPAC Technical Report)". Pure and Applied Chemistry 75 (6): 683–800. doi:10.1351/pac200375060683; and M. E. Wieser (2006). "Atomic weights of the elements 2005 (IUPAC Technical Report)". Pure and Applied Chemistry 78 (11): 2051–2066. doi:10.1351/pac200678112051.
- Half-life, spin, and isomer data selected from: G. Audi, A. H. Wapstra, C. Thibault, J. Blachot and O. Bersillon (2003). "The NUBASE evaluation of nuclear and decay properties". Nuclear Physics A 729: 3–128. doi:10.1016/j.nuclphysa.2003.11.001; National Nuclear Data Center, "NuDat 2.1 database", Brookhaven National Laboratory (retrieved September 2005); and N. E. Holden (2004). "Table of the Isotopes". In D. R. Lide, CRC Handbook of Chemistry and Physics (85th ed.), CRC Press, Section 11. ISBN 978-0-8493-0485-9.
Additional references:
- Sasahara, Akihiro; Matsumura, Tetsuo; Nicolaou, Giorgos; Papaioannou, Dimitri (April 2004). "Neutron and Gamma Ray Source Evaluation of LWR High Burn-up UO2 and MOX Spent Fuels". Journal of Nuclear Science and Technology 41 (4): 448–456. doi:10.3327/jnst.41.448.
- "Plutonium Isotopic Results of Known Samples Using the SNAP Gamma Spectroscopy Analysis Code and the Robwin Spectrum Fitting Routine" (PDF).
- National Nuclear Data Center, Interactive Chart of Nuclides.
- Miner 1968, p. 541.
Tangents Teacher Resources

Find Tangents educational ideas and activities.

Students examine and discuss techniques using trigonometric ratios for right triangles. They observe examples of trigonometric ratios, discuss alternative methods for checking their results, and complete a worksheet.

In this math worksheet, students complete the graphic organizers while filling in the missing values by using the trigonometric ratios.

In this Law of Sines worksheet, students use trigonometric ratios to determine angle measurement and/or the length of the side of a triangle. This one-page worksheet contains ten trigonometric ratio problems.

In this trigonometric ratios worksheet, students solve and complete 10 various types of problems. First, they write the degrees listed in radian measure. Then, students find the exact ratios for each equation. They also find the value of x in each triangle and evaluate.

Learners investigate properties of the unit circle using sine, cosine and tangent. They analyze the answers to questions about the unit circle based on a right triangle.

Students play a game based on the unit circle used in trigonometry. In this lesson on the unit circle, students draw a card and perform a given task, including measuring degrees and radians, locating coordinates, and finding the value of various functions.

Students match points on a unit circle to corresponding angles. In this geometry lesson, students evaluate the six trigonometric values and their angles. Students convert between degrees and radians and find the Cartesian equivalent coordinates.

In this Pre-Calculus worksheet, students solve problems involving angles and the unit circle. The four-page worksheet contains a combination of twenty multiple choice and free response problems. Answers are provided.

Students investigate the law of sines and cosines. In this instructional activity, students construct a 3D sketch of a card table, discuss the laws of sines and cosines with the teacher, use them to help in constructing the card table, and research the cost of materials to determine the cost of their card table.

Students solve problems with circles and their properties. In this geometry lesson, students calculate the diameter, radius, circumference and area of a circle. They identify the secant and tangent lines. They find the measurements of the chords.

Students calculate the tangent ratio given a right triangle. In this trigonometry lesson, students use a right triangle and the Pythagorean Theorem to solve problems.

Eighth graders use trigonometric ratios for finding a missing angle or side of a right triangle. Use of a TI-83 calculator is incorporated within this lesson plan. Students identify and use the trigonometric ratios to solve problems with right triangles.

In this exploring circles activity, 10th graders solve 155 various types of problems that include exploring circles. First, they determine the circumference of a circle with a radius of the given length. Then, students create circle graphs with the given information. They also determine the measure of various angles and arcs of a circle.

In this tangents, secants and chords worksheet, 10th graders identify and solve 48 different problems that include using 3 different theorems for defining circles. First, they determine the area of each circle with C as the center and a given tangent line. Then, students determine the lengths of the given line segments.
Students apply properties of a unit circle to solve triangle problems. In this geometry lesson, students identify the six trigonometric values using the unit circle and the right triangle.

Students identify tangents, chords and secants. In this geometry lesson, students graph circles and identify angles created by secant lines, tangent lines and chords.

Astrolabes have been used by explorers and astronomers throughout the ages. But how exactly do they work, and what can a young mathematician do with one today? High schoolers will build a simple version of this tool and then, using the altitude, measure the height of various objects. Accompanying worksheets guide learners through different discoveries using their knowledge of similar triangles and trigonometric ratios.

For this circles worksheet, 10th graders solve and complete 20 different problems related to various circles. First, they find the radius and diameter of a circle with a given circumference. Then, students determine the measure of each minor arc of a regular decagon inscribed in a circle.

In this circle learning exercise, students complete 8 problem-solving questions regarding circle measurements and angles. Students are asked to show their proofs by using a computer program titled Geometer's Sketchpad.

Students investigate the properties of circles and use them to solve real-world situations. In this geometry lesson, students solve for the radius and diameter of the circle.
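For teachers who want a quick worked answer to the tangent-ratio exercises above, here is a minimal Python sketch (the angle and side length are made-up example values):

import math

# Right triangle: 35-degree angle, adjacent side 10 units.
# tan(angle) = opposite / adjacent, so opposite = adjacent * tan(angle).
angle_deg = 35.0
adjacent = 10.0
opposite = adjacent * math.tan(math.radians(angle_deg))
print(f"opposite side = {opposite:.2f}")  # about 7.00 units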
Understanding Functions in Python - Wakeupcoders

A function is a segment of clean, reusable code that performs a single, related operation. Functions provide a higher level of code reuse and improved program modularity. As you are well aware, Python has a large number of built-in functions, such as print(), but you may also write your own. These are known as user-defined functions.

Definition of a function

You can define functions to provide the required functionality. Here are some straightforward rules for defining a function in Python.

- A function block begins with the keyword "def", followed by the function name and parentheses (()).
- Any arguments or input parameters should be placed within these parentheses.
- The first statement of a function can optionally be its documentation string, also known as the docstring.
- The code block of every function begins with a colon (:) and is indented.
- The return statement exits a function, optionally passing back an expression to the caller. A return statement with no arguments is the same as return None.

How to create a function

Functions are defined using the Python def keyword:

def greet():
    print("Hello from a function")

How to call the function

Use the function name followed by parentheses to invoke the function:

greet()

What are arguments, and how do we pass them to a function?

Functions accept parameters that can contain data. The arguments are listed in the parentheses that follow the function name. Simply separate each argument with a comma to add as many as you like. The function in the following example takes only one parameter (fname). A first name is sent to the function when it is called, and it is utilised there to print a message:

def my_function(fname):
    print(fname + " is the name")

my_function("Alice")

Count of arguments

By default, a function must always be called with the appropriate number of arguments. In other words, if your function expects 2 parameters, you must call it with 2 arguments, not more or fewer.

def test_function(name, place):
    print(name + " " + place)

If you are unsure about the count of arguments you need to pass, you can use arbitrary arguments (*args): just add * before the parameter name, and the function receives the arguments as a tuple. For example:

def list_fruits(*fruits):
    print("the recommended fruits are " + ", ".join(fruits))

list_fruits("apple", "banana", "papaya")

How to pass a list as an argument?

Any data type can be sent as an argument to a function (string, integer, list, dictionary, etc.), and the function will treat it as that data type. For instance, if a list is passed as an argument, it will still be a list when it gets to the function:

def print_vegetables(vegetables):
    for i in vegetables:
        print(i)

print_vegetables(["onion", "potato", "tomato"])

Use the return statement to allow a function to return a value:

def add_three(number):
    return 3 + number

Python also permits function recursion, allowing a defined function to call itself. Recursion is a common idea in math and programming: it means that a function makes a call to itself. This has the advantage of allowing you to loop over data to arrive at a result. The developer should exercise extreme caution when using recursion, since it is quite simple to write a function that never ends or consumes excessive amounts of memory or processing resources. Used properly, however, recursion can be a very effective and mathematically elegant approach to programming.

In the example below, test() is a function that we built to call itself (“recurse”). The data is the i variable, which decreases by one (-1) with each recursion.
When the condition is no longer greater than 0 (i.e. when i reaches 0), the recursion comes to an end. It may take some time for a novice developer to understand how precisely this operates; the best way to do so is to test and alter it.

def test(i):
    if i > 0:
        result = i + test(i - 1)
        print(result)
    else:
        result = 0
    return result

test(6)

Let’s wind up: if you liked this blog and found it informative, do share it with your friends and co-workers to help them. Till then, keep reading and enjoying. If you have any feedback or are looking for a way to connect with us, let’s connect at www.wakeupcoders.com/contact and think beyond everything.
the Physics Education Technology Project This interactive simulation provides four components for exploring balanced and unbalanced forces. In the introductory activity, users choose from among 5 objects of different masses, set the surface with or without friction, then "push" the object along a straight line. The simulation displays force vectors and free body diagrams to match the motion. Record your "push" and replay to see the sum of forces. The second activity focuses on the role of friction when objects are pushed on a wood surface. Set your own gravitational constant and watch the effects on static and kinetic friction. The third activity lets users display simultaneous graphs of applied force, acceleration, velocity, and position. The final activity, "Robot Moving Company", is a game where users apply a force to deliver objects of different mass from one point to another. This resource is part of PhET, the Physics Education Technology Project, a collection of simulation-based learning objects developed for learners of physics, chemistry, math, earth science, and biology. Please note that this resource requires Java Applet Plug-in. 6-8: 4E/M2. Energy can be transferred from one system to another (or from a system to its environment) in different ways: 1) thermally, when a warmer object is in contact with a cooler one; 2) mechanically, when two objects push or pull on each other over a distance; 3) electrically, when an electrical source such as a battery or generator is connected in a complete circuit to an electrical device; or 4) by electromagnetic waves. 3-5: 4F/E1a. Changes in speed or direction of motion are caused by forces. 3-5: 4F/E1bc. The greater the force is, the greater the change in motion will be. The more massive an object is, the less effect a given force will have. 6-8: 4F/M3a. An unbalanced force acting on an object changes its speed or direction of motion, or both. 9-12: 4F/H1. The change in motion (direction or speed) of an object is proportional to the applied force and inversely proportional to the mass. 9-12: 4F/H4. Whenever one thing exerts a force on another, an equal amount of force is exerted back on it. 11. Common Themes 6-8: 11B/M4. Simulations are often useful in modeling events and processes. Next Generation Science Standards Motion and Stability: Forces and Interactions (HS-PS2) Students who demonstrate understanding can: (9-12) Analyze data to support the claim that Newton's second law of motion describes the mathematical relationship among the net force on a macroscopic object, its mass, and its acceleration. (HS-PS2-1) Disciplinary Core Ideas (K-12) Forces and Motion (PS2.A) The motion of an object is determined by the sum of the forces acting on it; if the total force on the object is not zero, its motion will change. The greater the mass of the object, the greater the force needed to achieve the same change in motion. For any given object, a larger force causes a larger change in motion. (6-8) All positions of objects and the directions of forces and motions must be described in an arbitrarily chosen reference frame and arbitrarily chosen units of size. In order to share information with other people, these choices must also be shared. (6-8) Newton's second law accurately predicts changes in the motion of macroscopic objects. (9-12) Crosscutting Concepts (K-12) Cause and Effect (K-12) Cause and effect relationships may be used to predict phenomena in natural or designed systems. 
(6-8) NGSS Science and Engineering Practices (K-12) Analyzing and Interpreting Data (K-12) Analyzing data in 6–8 builds on K–5 and progresses to extending quantitative analysis to investigations, distinguishing between correlation and causation, and basic statistical techniques of data and error analysis. (6-8) Analyze and interpret data to provide evidence for phenomena. (6-8) Developing and Using Models (K-12) Modeling in 6–8 builds on K–5 and progresses to developing, using and revising models to describe, test, and predict more abstract phenomena and design systems. (6-8) Develop and use a model to describe phenomena. (6-8) Modeling in 9–12 builds on K–8 and progresses to using, synthesizing, and developing models to predict and show relationships among variables between systems and their components in the natural and designed worlds. (9-12) Develop and use a model based on evidence to illustrate the relationships between systems or between components of a system. (9-12) Using Mathematics and Computational Thinking (5-12) Mathematical and computational thinking at the 9–12 level builds on K–8 and progresses to using algebraic thinking and analysis, a range of linear and nonlinear functions including trigonometric functions, exponentials and logarithms, and computational tools for statistical analysis to analyze, represent, and model data. Simple computational simulations are created and used based on mathematical models of basic assumptions. (9-12) Use mathematical representations of phenomena to describe explanations. (9-12) Create or revise a simulation of a phenomenon, designed device, process, or system. (9-12)
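In the spirit of the "create or revise a simulation" practice above, here is a minimal sketch of the Newton's-second-law bookkeeping such a simulation performs. This is an illustrative Python toy, not the PhET source code, and the mass, force and friction values are invented:

# Constant applied force with kinetic friction, integrated in Euler steps.
mass = 50.0      # kg
force = 100.0    # N, applied push
friction = 20.0  # N, kinetic friction opposing motion (assumed constant)
dt = 0.1         # s, time step

v, x = 0.0, 0.0
for _ in range(50):                # simulate 5 seconds
    a = (force - friction) / mass  # Newton's second law: a = F_net / m
    v += a * dt
    x += v * dt
print(f"after 5 s: v = {v:.1f} m/s, x = {x:.1f} m")

Doubling the mass halves the acceleration, which is exactly the force-mass-acceleration relationship the standards above ask students to analyze.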
Lesson Plan: The Mode of a Data Set Mathematics • 6th Grade This lesson plan includes the objectives, prerequisites, and exclusions of the lesson teaching students how to find and interpret the mode of a data set. Students will be able to - recognize that the mode is one of the measures of central tendency, - recall that a data set can have a unique mode, more than one mode, or no mode (when no value appears more than another), - calculate the mode of a data set, - calculate the mode for data given in a table or a bar graph, - calculate an unknown value in a data set given its mode. Students should already be familiar with - calculating the mean of a data set, - calculating the median of a data set. Students will not cover - estimating the mode of a data set for grouped data, - the mode of grouped data.
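For teachers checking answers, the three cases the lesson names (a unique mode, more than one mode, no mode) are easy to compute; here is a minimal Python sketch (the helper name modes is our own):

from collections import Counter

def modes(data):
    """Return the mode(s) of data, or [] when no value appears more than another."""
    counts = Counter(data)
    top = max(counts.values())
    if len(counts) > 1 and all(c == top for c in counts.values()):
        return []  # every value equally common: no mode
    return [value for value, c in counts.items() if c == top]

print(modes([3, 7, 7, 2, 9]))   # [7]     - a unique mode
print(modes([1, 1, 2, 2, 5]))   # [1, 2]  - two modes
print(modes([4, 8, 15, 16]))    # []      - no mode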
In mathematics, a ratio is a relationship between two numbers indicating how many times the first number contains the second. For example, if a bowl of fruit contains eight oranges and six lemons, then the ratio of oranges to lemons is eight to six (that is, 8:6, which is equivalent to the ratio 4:3). Thus, a ratio can be a fraction as opposed to a whole number. Also, in this example the ratio of lemons to oranges is 6:8 (or 3:4), and the ratio of oranges to the total amount of fruit is 8:14 (or 4:7). The numbers compared in a ratio can be any quantities of a comparable kind, such as objects, persons, lengths, or spoonfuls. A ratio is written "a to b" or a:b, or sometimes expressed arithmetically as the quotient of the two. When the two quantities have the same units, as is often the case, their ratio is a dimensionless number. A rate is a quotient of variables having different units, but in many applications the word ratio is often used for this more general notion as well.

Notation and terminology

The ratio of numbers A and B can be expressed as:
- the ratio of A to B
- A is to B (followed by "as C is to D")
- the fraction that is the quotient of A divided by B, A/B, which can be expressed as either a simple or a decimal fraction.

The proportion expressing the equality of the ratios A:B and C:D is written A:B = C:D or A:B::C:D. This latter form, when spoken or written in the English language, is often expressed as
- A is to B as C is to D.
A, B, C and D are called the terms of the proportion. A and D are called the extremes, and B and C are called the means. The equality of three or more proportions is called a continued proportion.

Ratios are sometimes used with three or more terms. The ratio of the dimensions of a "two by four" that is ten inches long is 2:4:10. A good concrete mix is sometimes quoted as 1:2:4 for the ratio of cement to sand to gravel. For a mixture of 4/1 cement to water, it could be said that the ratio of cement to water is 4:1, that there is 4 times as much cement as water, or that there is a quarter (1/4) as much water as cement.

History and etymology

It is impossible to trace the origin of the concept of ratio, because the ideas from which it developed would have been familiar to preliterate cultures. For example, the idea of one village being twice as large as another is so basic that it would have been understood in prehistoric society. However, it is possible to trace the origin of the word "ratio" to the Ancient Greek λόγος (logos). Early translators rendered this into Latin as ratio ("reason", as in the word "rational"). (A rational number may be expressed as the quotient of two integers.) A more modern interpretation of Euclid's meaning is closer to computation or reckoning. Medieval writers used the word proportio ("proportion") to indicate ratio and proportionalitas ("proportionality") for the equality of ratios.

Euclid collected the results appearing in the Elements from earlier sources. The Pythagoreans developed a theory of ratio and proportion as applied to numbers. The Pythagoreans' conception of number included only what would today be called rational numbers, casting doubt on the validity of the theory in geometry where, as the Pythagoreans also discovered, incommensurable ratios (corresponding to irrational numbers) exist. The discovery of a theory of ratios that does not assume commensurability is probably due to Eudoxus of Cnidus.
The exposition of the theory of proportions that appears in Book VII of the Elements reflects the earlier theory of ratios of commensurables. The existence of multiple theories seems needlessly complex to modern sensibility, since ratios are, to a large extent, identified with quotients. This is a comparatively recent development, however, as can be seen from the fact that modern geometry textbooks still use distinct terminology and notation for ratios and quotients. The reasons for this are twofold. First, there was the previously mentioned reluctance to accept irrational numbers as true numbers. Second, the lack of a widely used symbolism to replace the already established terminology of ratios delayed the full acceptance of fractions as an alternative until the 16th century.

Book V of Euclid's Elements has 18 definitions, all of which relate to ratios. In addition, Euclid uses ideas that were in such common usage that he did not include definitions for them. The first two definitions say that a part of a quantity is another quantity that "measures" it and, conversely, a multiple of a quantity is another quantity that it measures. In modern terminology, this means that a multiple of a quantity is that quantity multiplied by an integer greater than one, and a part of a quantity (meaning aliquot part) is a part that, when multiplied by an integer greater than one, gives the quantity. Euclid does not define the term "measure" as used here. However, one may infer that if a quantity is taken as a unit of measurement, and a second quantity is given as an integral number of these units, then the first quantity measures the second. Note that these definitions are repeated, nearly word for word, as definitions 3 and 5 in Book VII.

Definition 3 describes what a ratio is in a general way. It is not rigorous in a mathematical sense, and some have ascribed it to Euclid's editors rather than Euclid himself. Euclid defines a ratio as between two quantities of the same type, so by this definition the ratios of two lengths or of two areas are defined, but not the ratio of a length and an area. Definition 4 makes this more rigorous. It states that a ratio of two quantities exists when there is a multiple of each that exceeds the other. In modern notation, a ratio exists between quantities p and q if there exist integers m and n so that mp > q and nq > p. This condition is known as the Archimedes property.

Definition 5 is the most complex and difficult. It defines what it means for two ratios to be equal. Today, this can be done by simply stating that ratios are equal when the quotients of the terms are equal, but Euclid did not accept the existence of quotients of incommensurable quantities, so such a definition would have been meaningless to him. Thus a more subtle definition is needed, one where the quantities involved are not measured directly against one another. Though it may not be possible to assign a rational value to a ratio, it is possible to compare a ratio with a rational number. Specifically, given two quantities p and q and a rational number m/n, we can say that the ratio of p to q is less than, equal to, or greater than m/n when np is less than, equal to, or greater than mq, respectively. Euclid's definition of equality can then be stated as: two ratios are equal when they behave identically with respect to being less than, equal to, or greater than any rational number.
In modern notation this says that, given quantities p, q, r and s, p:q::r:s if for any positive integers m and n, np < mq, np = mq, or np > mq according as nr < ms, nr = ms, or nr > ms, respectively. There is a remarkable similarity between this definition and the theory of Dedekind cuts used in the modern definition of irrational numbers.

Definition 6 says that quantities that have the same ratio are proportional or in proportion. Euclid uses the Greek ἀναλόγον (analogon); this has the same root as λόγος and is related to the English word "analog".

Definition 7 defines what it means for one ratio to be less than or greater than another, and is based on the ideas present in definition 5. In modern notation it says that, given quantities p, q, r and s, p:q > r:s if there are positive integers m and n so that np > mq and nr ≤ ms.

As with definition 3, definition 8 is regarded by some as being a later insertion by Euclid's editors. It defines three terms p, q and r to be in proportion when p:q::q:r. This is extended to four terms p, q, r and s as p:q::q:r::r:s, and so on. Sequences that have the property that the ratios of consecutive terms are equal are called geometric progressions. Definitions 9 and 10 apply this, saying that if p, q and r are in proportion then p:r is the duplicate ratio of p:q, and if p, q, r and s are in proportion then p:s is the triplicate ratio of p:q. If p, q and r are in proportion then q is called a mean proportional to (or the geometric mean of) p and r. Similarly, if p, q, r and s are in proportion then q and r are called two mean proportionals to p and s.

Number of terms and use of fractions

In general, a comparison of the quantities of a two-entity ratio can be expressed as a fraction derived from the ratio. For example, in a ratio of 2:3, the amount, size, volume, or quantity of the first entity is 2/3 that of the second entity. If there are 2 oranges and 3 apples, the ratio of oranges to apples is 2:3, and the ratio of oranges to the total number of pieces of fruit is 2:5. These ratios can also be expressed in fraction form: there are 2/3 as many oranges as apples, and 2/5 of the pieces of fruit are oranges. If orange juice concentrate is to be diluted with water in the ratio 1:4, then one part of concentrate is mixed with four parts of water, giving five parts total; the amount of orange juice concentrate is 1/4 the amount of water, while the amount of orange juice concentrate is 1/5 of the total liquid. In both ratios and fractions, it is important to be clear what is being compared to what, and beginners often make mistakes for this reason.

Fractions can also be inferred from ratios with more than two entities; however, a ratio with more than two entities cannot be completely converted into a single fraction, because a fraction can only compare two quantities. A separate fraction can be used to compare the quantities of any two of the entities covered by the ratio: for example, from a ratio of 2:3:7 we can infer that the quantity of the second entity is 3/7 that of the third entity.

Proportions and percentage ratios

If we multiply all quantities involved in a ratio by the same number, the ratio remains valid. For example, a ratio of 3:2 is the same as 12:8. It is usual either to reduce terms to the lowest common denominator, or to express them in parts per hundred (percent). If a mixture contains substances A, B, C and D in the ratio 5:9:4:2, then there are 5 parts of A for every 9 parts of B, 4 parts of C and 2 parts of D.
As 5+9+4+2 = 20, the total mixture contains 5/20 of A (5 parts out of 20), 9/20 of B, 4/20 of C, and 2/20 of D. If we divide all numbers by the total and multiply by 100%, we have converted to percentages: 25% A, 45% B, 20% C, and 10% D (equivalent to writing the ratio as 25:45:20:10).

If the two or more ratio quantities encompass all of the quantities in a particular situation, it is said that "the whole" contains the sum of the parts: for example, a fruit basket containing two apples and three oranges and no other fruit is made up of two parts apples and three parts oranges. In this case, 2/5, or 40%, of the whole is apples and 3/5, or 60%, of the whole is oranges. This comparison of a specific quantity to "the whole" is called a proportion.

If the ratio consists of only two values, it can be represented as a fraction, in particular as a decimal fraction. For example, older televisions have a 4:3 aspect ratio, which means that the width is 4/3 of the height; this can also be expressed as 1.33:1, or just 1.33 rounded to two decimal places. Modern widescreen TVs have a 16:9 aspect ratio, or 1.78 rounded to two decimal places. One of the popular widescreen movie formats is 2.35:1, or simply 2.35. Representing ratios as decimal fractions simplifies their comparison: when comparing 1.33, 1.78 and 2.35, it is obvious which format offers the wider image. Such a comparison works only when the values being compared are consistent, like always expressing width in relation to height.

Ratios can be reduced (as fractions are) by dividing each quantity by the common factors of all the quantities. As with fractions, the simplest form is considered to be that in which the numbers in the ratio are the smallest possible integers. Thus, the ratio 40:60 is equivalent in meaning to the ratio 2:3, the latter being obtained from the former by dividing both quantities by 20. Mathematically, we write 40:60 = 2:3, or equivalently 40:60::2:3. The verbal equivalent is "40 is to 60 as 2 is to 3." A ratio that has integers for both quantities and that cannot be reduced any further (using integers) is said to be in simplest form or lowest terms.

Sometimes it is useful to write a ratio in the form 1:x or x:1, where x is not necessarily an integer, to enable comparisons of different ratios. For example, the ratio 4:5 can be written as 1:1.25 (dividing both sides by 4). Alternatively, it can be written as 0.8:1 (dividing both sides by 5).

Some ratios are between incommensurable quantities, that is, quantities whose ratio is an irrational number. The earliest discovered example, found by the Pythagoreans, is the ratio of the diagonal to the side of a square, which is the square root of 2. Another well-known example is the golden ratio, which is defined by both sides of the equality a:b = (a+b):a. Writing this in fractional terms as x = 1 + 1/x, where x = a/b, and finding the positive solution gives the golden ratio x = (1 + √5)/2 ≈ 1.618, which is irrational. Thus at least one of a and b has to be irrational for them to be in the golden ratio. An example of an occurrence of the golden ratio is as the limiting value of the ratio of two successive Fibonacci numbers: even though the n-th such ratio is the ratio of two integers and hence is rational, the limit of the sequence of these ratios as n goes to infinity is the irrational golden ratio. Similarly, the silver ratio is defined by both sides of the equality a:b = (2a+b):a.
Again writing it in fractional terms, as x = 2 + 1/x where x = a/b, and taking the positive solution, we obtain x = 1 + √2 ≈ 2.414, which is irrational; so of two quantities a and b in the silver ratio, at least one must be irrational.

Odds (as in gambling) are expressed as a ratio. For example, odds of "7 to 3 against" (7:3) mean that there are seven chances that the event will not happen for every three chances that it will happen. The probability of success is 30%: in every ten trials, there are expected to be three wins and seven losses.

Ratios may be unitless, as when they relate quantities in units of the same dimension, even if their units of measurement are initially different. For example, the ratio 1 minute : 40 seconds can be reduced by changing the first value to 60 seconds. Once the units are the same, they can be omitted, and the ratio can be reduced to 3:2. On the other hand, there are non-dimensionless ratios, also known as rates. In chemistry, mass concentration ratios are usually expressed as weight/volume fractions. For example, a concentration of 3% w/v usually means 3 g of substance in every 100 mL of solution. This cannot be converted to a dimensionless ratio, as a weight/weight or volume/volume fraction can.

The locations of points relative to a triangle with vertices A, B, and C and sides AB, BC, and CA are often expressed in extended ratio form as triangular coordinates. In barycentric coordinates, a point with coordinates α:β:γ is the point upon which a weightless sheet of metal in the shape and size of the triangle would exactly balance if weights were put on the vertices, with the ratio of the weights at A and B being α:β, the ratio of the weights at B and C being β:γ, and therefore the ratio of the weights at A and C being α:γ. In trilinear coordinates, a point with coordinates x:y:z has perpendicular distances to side BC (across from vertex A) and side CA (across from vertex B) in the ratio x:y, distances to side CA and side AB (across from C) in the ratio y:z, and therefore distances to sides BC and AB in the ratio x:z. Since all information is expressed in terms of ratios (the individual numbers denoted by x, y, and z have no meaning by themselves), a triangle analysis using barycentric or trilinear coordinates applies regardless of the size of the triangle.

See also:
- Dilution ratio
- Dimensionless quantity
- Financial ratio
- Fold change
- Interval (music)
- Odds ratio
- Parts-per notation
- Price–performance ratio
- Proportionality (mathematics)
- Ratio distribution
- Ratio estimator
- Rate (mathematics)
- Rate ratio
- Relative risk
- Rule of three (mathematics)
- Sex ratio
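A short Python sketch tying together the reduction, percentage and equality ideas above (the function names are ours; simplify divides by the greatest common divisor, exactly the "dividing by common factors" described earlier):

from math import gcd
from functools import reduce

def simplify(*terms):
    """Reduce a ratio to lowest terms, e.g. 40:60 -> 2:3."""
    g = reduce(gcd, terms)
    return tuple(t // g for t in terms)

def to_percentages(*terms):
    """Express each part of a ratio as a percentage of the whole."""
    total = sum(terms)
    return [100 * t / total for t in terms]

def same_ratio(p, q, r, s):
    """For positive integers, p:q::r:s exactly when the cross products agree."""
    return p * s == q * r

print(simplify(40, 60))            # (2, 3)
print(to_percentages(5, 9, 4, 2))  # [25.0, 45.0, 20.0, 10.0]
print(same_ratio(40, 60, 2, 3))    # True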
Conjoined twins are two babies who are born physically connected to each other. Conjoined twins develop when an early embryo only partially separates to form two individuals. Although two fetuses will develop from this embryo, they will remain physically connected — most often at the chest, abdomen or pelvis. Conjoined twins may also share one or more internal organs. Though many conjoined twins are not alive when born (stillborn) or die shortly after birth, advances in surgery and technology have improved survival rates. Some surviving conjoined twins can be surgically separated. The success of surgery depends on where the twins are joined and how many and which organs are shared, as well as the experience and skill of the surgical team. There are no specific signs or symptoms that indicate a conjoined twin pregnancy. As with other twin pregnancies, the uterus may grow faster than with a single fetus, and there may be more fatigue, nausea and vomiting early in the pregnancy. Conjoined twins can be diagnosed early in the pregnancy using standard ultrasound. How twins are joined Conjoined twins are typically classified according to where they're joined, usually at matching sites, and sometimes at more than one site. They sometimes share organs or other parts of their bodies. The specific anatomy of each pair of conjoined twins is unique. Conjoined twins may be joined at any of these sites: - Chest. Thoracopagus (thor-uh-KOP-uh-gus) twins are joined face to face at the chest. They often have a shared heart and may also share one liver and upper intestine. This is one of the most common sites of conjoined twins. - Abdomen. Omphalopagus (om-fuh-LOP-uh-gus) twins are joined near the bellybutton. Many omphalopagus twins share the liver, and some share the lower part of the small intestine (ileum) and colon. They generally do not share a heart. - Base of spine. Pygopagus (pie-GOP-uh-gus) twins are commonly joined back to back at the base of the spine and the buttocks. Some pygopagus twins share the lower gastrointestinal tract, and a few share the genital and urinary organs. - Length of spine. Rachipagus (ray-KIP-uh-gus), also called rachiopagus (ray-kee-OP-uh-gus), twins are joined back to back along the length of the spine. This type is very rare. - Pelvis. Ischiopagus (is-kee-OP-uh-gus) twins are joined at the pelvis, either face to face or end to end. Many ischiopagus twins share the lower gastrointestinal tract, as well as the liver and genital and urinary tract organs. Each twin may have two legs or, less commonly, the twins share two or three legs. - Trunk. Parapagus (pa-RAP-uh-gus) twins are joined side to side at the pelvis and part or all of the abdomen and chest, but with separate heads. The twins can have two, three or four arms and two or three legs. - Head. Craniopagus (kray-nee-OP-uh-gus) twins are joined at the back, top or side of the head, but not the face. Craniopagus twins share a portion of the skull. But their brains are usually separate, though they may share some brain tissue. - Head and chest. Cephalopagus (sef-uh-LOP-uh-gus) twins are joined at the face and upper body. The faces are on opposite sides of a single shared head, and they share a brain. These twins rarely survive. In rare cases, twins may be conjoined with one twin smaller and less fully formed than the other (asymmetric conjoined twins). In extremely rare cases, one twin may be found partially developed within the other twin (fetus in fetu). 
Identical twins (monozygotic twins) occur when a single fertilized egg splits and develops into two individuals. Eight to 12 days after conception, the embryonic layers that will split to form monozygotic twins begin to develop into specific organs and structures. It's believed that when the embryo splits later than this — usually between 13 and 15 days after conception — separation stops before the process is complete, and the resulting twins are conjoined. An alternative theory suggests that two separate embryos may somehow fuse together in early development. What might cause either scenario to occur is unknown. Because conjoined twins are so rare, and the cause isn't clear, it's unknown what might make some couples more likely to have conjoined twins. Pregnancy with conjoined twins is complex and greatly increases the risk of serious complications. Conjoined babies require surgical delivery by cesarean section (C-section) due to their anatomy. As with twins, conjoined babies are likely to be born prematurely, and one or both could be stillborn or die shortly after birth. Severe health issues for twins can occur immediately — such as trouble breathing or heart problems — and later in life, such as scoliosis, cerebral palsy or learning disabilities. Potential complications depend on where the twins are joined, which organs or other parts of the body they share, and the expertise and experience of the health care team. When conjoined twins are expected, the family and the health care team need to discuss in detail the possible complications and how to prepare for them.
Applications of percent (Worksheets: 4, Study Guides: 1)
Experimental Probability (Free; Worksheets: 3, Study Guides: 1)
Numbers and percents (Worksheets: 3, Study Guides: 1)
Perimeter and area (Worksheets: 4, Study Guides: 1)
Plane figures (Worksheets: 4, Study Guides: 1)
Sequences (Worksheets: 4, Study Guides: 1)
Theoretical probability and counting (Worksheets: 3, Study Guides: 1)

SC.CC.EE.8. Expressions and Equations

Analyze and solve linear equations and pairs of simultaneous linear equations. EE.8.7. Solve linear equations in one variable. EE.8.7(a) Give examples of linear equations in one variable with one solution, infinitely many solutions, or no solutions. Show which of these possibilities is the case by successively transforming the given equation into simpler forms, until an equivalent equation of the form x = a, a = a, or a = b results (where a and b are different numbers). EE.8.7(b) Solve linear equations with rational number coefficients, including equations whose solutions require expanding expressions using the distributive property and collecting like terms.

Understand the connections between proportional relationships, lines, and linear equations. EE.8.5. Graph proportional relationships, interpreting the unit rate as the slope of the graph. Compare two different proportional relationships represented in different ways. For example, compare a distance-time graph to a distance-time equation to determine which of two moving objects has greater speed. EE.8.6. Use similar triangles to explain why the slope m is the same between any two distinct points on a non-vertical line in the coordinate plane; derive the equation y = mx for a line through the origin and the equation y = mx + b for a line intercepting the vertical axis at b.

Work with radicals and integer exponents. EE.8.1. Know and apply the properties of integer exponents to generate equivalent numerical expressions. For example, 3^2 x 3^-5 = 3^-3 = 1/3^3 = 1/27. EE.8.2. Use square root and cube root symbols to represent solutions to equations of the form x^2 = p and x^3 = p, where p is a positive rational number. Evaluate square roots of small perfect squares and cube roots of small perfect cubes. Know that the square root of 2 is irrational. EE.8.3. Use numbers expressed in the form of a single digit times a whole-number power of 10 to estimate very large or very small quantities, and to express how many times as much one is than the other. For example, estimate the population of the United States as 3 times 10^8 and the population of the world as 7 times 10^9, and determine that the world population is more than 20 times larger. EE.8.4. Perform operations with numbers expressed in scientific notation, including problems where both decimal and scientific notation are used. Use scientific notation and choose units of appropriate size for measurements of very large or very small quantities (e.g., use millimeters per year for seafloor spreading). Interpret scientific notation that has been generated by technology.

Define, evaluate, and compare functions. F.8.1. Understand that a function is a rule that assigns to each input exactly one output. The graph of a function is the set of ordered pairs consisting of an input and the corresponding output. F.8.3. Interpret the equation y = mx + b as defining a linear function, whose graph is a straight line; give examples of functions that are not linear.
For example, the function A = s^2 giving the area of a square as a function of its side length is not linear because its graph contains the points (1,1), (2,4) and (3,9), which are not on a straight line.

Use functions to model relationships between quantities. F.8.4. Construct a function to model a linear relationship between two quantities. Determine the rate of change and initial value of the function from a description of a relationship or from two (x, y) values, including reading these from a table or from a graph. Interpret the rate of change and initial value of a linear function in terms of the situation it models, and in terms of its graph or a table of values.

Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres. G.8.9. Know the formulas for the volumes of cones, cylinders, and spheres and use them to solve real-world and mathematical problems.

Understand and apply the Pythagorean Theorem. G.8.7. Apply the Pythagorean Theorem to determine unknown side lengths in right triangles in real-world and mathematical problems in two and three dimensions.

Understand congruence and similarity using physical models, transparencies, or geometry software. G.8.1. Verify experimentally the properties of rotations, reflections, and translations: G.8.1(a) Lines are taken to lines, and line segments to line segments of the same length. G.8.1(b) Angles are taken to angles of the same measure. G.8.1(c) Parallel lines are taken to parallel lines. G.8.2. Understand that a two-dimensional figure is congruent to another if the second can be obtained from the first by a sequence of rotations, reflections, and translations; given two congruent figures, describe a sequence that exhibits the congruence between them.

SC.CC.MP.8. Mathematical Practices MP.8.1. Make sense of problems and persevere in solving them. MP.8.2. Reason abstractly and quantitatively.

SC.CC.NS.8. The Number System Know that there are numbers that are not rational, and approximate them by rational numbers. NS.8.1. Know that numbers that are not rational are called irrational. Understand informally that every number has a decimal expansion; for rational numbers show that the decimal expansion repeats eventually, and convert a decimal expansion which repeats eventually into a rational number.

SC.CC.SP.8. Statistics and Probability Investigate patterns of association in bivariate data. SP.8.1. Construct and interpret scatter plots for bivariate measurement data to investigate patterns of association between two quantities. Describe patterns such as clustering, outliers, positive or negative association, linear association, and nonlinear association. SP.8.2. Know that straight lines are widely used to model relationships between two quantitative variables. For scatter plots that suggest a linear association, informally fit a straight line, and informally assess the model fit by judging the closeness of the data points to the line.

NewPath Learning resources are fully aligned to US Education Standards.
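As a concrete instance of standard F.8.4 above (determining the rate of change and initial value of a linear function from two (x, y) values), here is a minimal Python sketch with made-up points:

def linear_from_points(x1, y1, x2, y2):
    """Rate of change (slope) and initial value (intercept) through two points."""
    m = (y2 - y1) / (x2 - x1)  # rate of change
    b = y1 - m * x1            # initial value
    return m, b

m, b = linear_from_points(1, 3, 4, 9)
print(f"y = {m}x + {b}")  # y = 2.0x + 1.0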
About This Chapter

Fundamentals of Statistics - Chapter Summary

In this chapter, our instructors will help you review several of the equations and processes related to statistics, including descriptive and inferential statistics. You'll cover topics like data gathering for statistic calculation, populations, samples and the organization of statistics in graphs, charts, and tables. Information about data sets will also be reviewed. After completing this chapter, you will have also learned about the following:
- Types of statistics
- Mean, median, mode and range
- Standard deviation
- Components of data sets, including quartiles and percentiles
- Minimums, maximums and outliers
- Frequency tables

In addition to the videos included in each lesson, self-assessment quizzes are included, both printable and interactive, that can be used to check your retention of lesson topics. The lessons you complete and the scores you earn can be viewed via the personal dashboard, which tracks your progression through the chapter. Along with other included features, these can help you stay on schedule to complete this chapter and move on to other topics that you want to learn about.

1. Descriptive & Inferential Statistics: Definition, Differences & Examples
Descriptive and inferential statistics each give different insights into the nature of the data gathered. One alone cannot give the whole picture. Together, they provide a powerful tool for both description and prediction.

2. Difference Between Populations & Samples in Statistics
Before you start collecting any information, it is important to understand the differences between populations and samples. This lesson will show you how!

3. What is Random Sampling? - Definition, Conditions & Measures
Random sampling is used in many research scenarios. In this lesson, you will learn how to use random sampling and find out the benefits and risks of using random samples.

4. How to Calculate Mean, Median, Mode & Range
Measures of central tendency can provide valuable information about a set of data. In this lesson, explore how to calculate the mean, median, mode and range of any given data set.

5. Population & Sample Variance: Definition, Formula & Examples
Population and sample variance can help you describe and analyze data beyond the mean of the data set. In this lesson, learn the differences between population and sample variance.

6. Calculating the Standard Deviation
In this lesson, we will examine the meaning and process of calculating the standard deviation of a data set. Standard deviation can help to determine if the data set is a normal distribution.

7. Maximums, Minimums & Outliers in a Data Set
When analyzing data sets, the first thing to identify is the maximums, minimums, and outliers. This lesson will help you learn how to identify these important items.

8. Quartiles & the Interquartile Range: Definition, Formula & Examples
Quartiles and the interquartile range can be used to group and analyze data sets. In this lesson, learn the definition and steps for finding the quartiles and interquartile range for a given data set.

9. Finding Percentiles in a Data Set: Formula & Examples
Percentiles are often used in academics to compare student scores. Finding percentiles in a data set can be a useful way to organize and compare numbers in a data set.

10. Frequency & Relative Frequency Tables: Definition & Examples
Frequency and relative frequency tables are a good way to visualize information.
This is especially useful for information that is grouped into categories where you are looking for popularity or mode.

11. Cumulative Frequency Tables: Definition, Uses & Examples
Cumulative frequency tables can help you analyze and understand large amounts of information. In this lesson, you practice creating and interpreting cumulative frequency tables.

12. Creating & Interpreting Histograms: Process & Examples
Creating histograms can help you easily identify and interpret data. This lesson will give you several examples to better understand histograms and how to create them.
Elections for the Senate

The powers and operations of the Senate are inextricably linked with the manner of its election, particularly its direct election by the people of the states by a system of proportional representation. This chapter therefore examines the bases of the system of election as well as describing its salient features.

The constitutional framework

The Constitution provides that "The Senate shall be composed of senators for each State, directly chosen by the people of the State, voting, until the Parliament otherwise provides, as one electorate". Each Original State initially had six members of the Senate and now has twelve. The Parliament is authorised to increase the number of senators elected by each state subject to the qualification that "equal representation of the several Original States shall be maintained and that no Original State shall have less than six senators". Senators representing the states are elected for terms of six years, half the Senate retiring at three-yearly intervals except in cases of or following simultaneous dissolution of both Houses. A state may not be deprived of its equal representation in the Senate by any alteration of the Constitution without the consent of the electors of the state.

Bases of the constitutional arrangements

The constitutional foundations for composition of the Senate reflect the federal character of the Commonwealth. Arrangements for the Australian Senate correspond with those for the United States Senate in that each state is represented equally irrespective of geographical size or population, and senators are elected for terms of six years. Both Senates are essentially continuing Houses: in Australia half the Senate retires every three years; in the United States, a third of the Senate is elected at each biennial election. A major distinction is, however, that the United States Senate can never be dissolved, whereas the Australian Senate may be dissolved in the course of seeking to settle disputes over legislation between the two Houses.

An important innovation in Australia was the requirement that senators should be "directly chosen by the people of the State". Direct election of United States senators was provided in that country's constitution by an amendment which took effect in 1913, prior to which they were elected by state legislatures. The innovatory character of Australia's Senate is also illustrated by contrasting it with the Canadian Senate created by the British North America Act 1867. The provinces are not equally represented in the Canadian Senate, and senators are appointed by the national government, initially for life and now until age 75. Composition on this antiquated basis has deprived the Canadian Senate of the legitimacy deriving from popular choice and has meant, in practice, that the Canadian Senate has contributed neither to enhancing the representivity of the Canadian Parliament (the more desirable because of the first-past-the-post method of election used in the House of Commons) nor to assuaging the pressures of Canada's culturally and geographically diverse federation. Prominent proposals for reform of Canada's Senate in recent decades have included equality of representation for provinces and direct election of senators.

The principle of equal representation of the states is vital to the architecture of Australian federalism. It was a necessary inclusion at the time of federation in order to secure popular support for the new Commonwealth in each state, especially the smaller states.
It ensures that a legislative majority in the Senate is geographically distributed across the Commonwealth and prevents a parliamentary majority being formed from the representatives of the three largest cities and their environs alone. In contemporary Australia it acknowledges that the states continue to be the basis of activity in the nation, whether for political, commercial, cultural or sporting purposes. Many organisations in Australia, at the national level, are constituted on the basis of equal state representation or with some modification thereof; this includes the major political parties. By contrast, very few nation-wide bodies are organised on the principle of the election and composition of the House of Representatives. Indeed, in Australia's national life, a body such as the House of Representatives is, if not an aberration, at least relatively unusual. This demonstrates that in Australia federalism is organic and not simply a nominal or contrived feature of government and politics. Constitutional provisions governing composition of the Senate thus remain as valid for Australia in the 21st century as they were in securing support for the Commonwealth in the nation-building final decade of the 19th century.

In addition to senators elected by the people of the states, the Constitution also provides, in section 122, that in respect of territories, the Parliament "may allow the representation of such territory in either House of the Parliament to the extent and on the terms which it thinks fit". Since 1975 the Northern Territory and the Australian Capital Territory have each elected two senators. The particular arrangements for election and terms of territory senators are set out in detail below.

The principles of direct election by the people and equal representation of the states are entrenched in the Constitution and cannot be altered except by means of referendum and with the consent of every state. On the other hand, the principle of choosing senators "by the people of the State, voting ... as one electorate" is susceptible to change by statutory enactment. It is, however, essential to the effectiveness of the Senate as a component of the bicameral Parliament.

Current electoral arrangements and proportional representation

As explained in Chapter 1, the Senate, since proportional representation was introduced in 1948, taking effect from 1949, has been the means of a marked improvement in the representivity of the Parliament. The 1948 electoral settlement for the Senate mitigated the dysfunctions of the single member electorate basis of the House of Representatives by enabling additional, discernible bodies of electoral opinion to be represented in Parliament. The consequence has been that parliamentary government of the Commonwealth is not simply a question of majority rule but one of representation. The Senate, because of the method of composition, is the institution in the Commonwealth which reconciles majority rule, as imperfectly expressed in the House of Representatives, with adequate representation.

Proportional representation applied in each state with the people voting as one electorate has been twice affirmed. In 1977, the people at referendum agreed to an amendment to the Constitution so that in filling a casual vacancy by the parliament of a state (or the state governor as advised by the state executive council), the person chosen will be drawn, where possible, from the party of the senator whose death or resignation has given rise to the vacancy.
A senator so chosen completes the term of the senator whose place has been taken and is not required, as was previously the case, to stand for election at the next general election of the House of Representatives or periodical election of the Senate. The previous arrangement had the defect of, on occasions, distorting the representation of a state as expressed in a periodical election. The Constitution thus reinforces a method of electing senators which is itself only embodied in the statute law. The present combination of statute and constitutional law serves to underline and preserve the representative character of the Senate.

If the statute law were amended so as to abandon the principle of state-wide electorates for the choosing of senators in favour of Senate electorates, this would not only have the defect of replicating the House of Representatives system, which by itself is an inadequate means of even trying to represent electoral opinion fairly, but would invalidate the special method of filling a casual vacancy now provided for in section 15 of the Constitution. Single member constituencies would probably be unconstitutional, as they would result in only part of the people of a state voting in each periodical Senate election. There are grounds for concluding that anything other than state-wide electorates and proportional representation would be unconstitutional.

The second affirmation of state-wide electorates for the purpose of electing the Senate may be found in the decision of the Commonwealth Parliament, on the basis of a private senator's bill, to remove the authority of the Queensland Parliament to make laws dividing Queensland "into divisions and determining the number of senators to be chosen for each division".

The irresistible conclusion of any analysis of basic arrangements for election of senators is that, for reasons of principle and practice, these features are essential: direct election by the people; equality of representation of the states; a distinctive method of election based on proportional representation as embodied in the 1948 electoral settlement for the Senate; elections in which each state votes as one electorate; and filling of casual vacancies according to section 15 of the Constitution.

Terms of service – state senators

Except in cases of simultaneous dissolution, senators representing the states are elected for terms of six years. Terms commence on 1 July following the election. The commencement date was originally 1 January but was altered by referendum in 1906 in an ultimately unsuccessful attempt to avoid the problem of unsynchronised elections for both Houses. The terms of senators elected following a dissolution of the Senate commence on 1 July preceding the date of the general election.

Following a general election for the Senate, senators are divided into two classes. Unless another simultaneous election for both Houses intervenes, those in the first class retire on 30 June two years after the general election; those in the second class retire on 30 June five years after the general election. The method of dividing senators is described below.

Terms of service – territory senators

Territory senators' terms commence on the date of their election and end on the day of the next election. They therefore do not have the fixed six-year terms commencing on 1 July of the senators elected to represent the states. Their terms are, however, unbroken, which is important in ensuring that the Senate has a full complement of members during an election period.
Their elections coincide with general elections for the House of Representatives.

Number of senators

Under the Constitution each Original State is represented by a minimum of six senators. This number has been twice increased: in 1948 (taking effect at the 1949 elections) to 10, and in 1983 (taking effect at the election of 1984) to 12. The Senate's size also increased after 1975 following the election of two senators each by the Australian Capital Territory and the Northern Territory. The size of the Senate was 36 from 1901 until 1949; 60 from 1950 to 1975; 64 from 1976 to 1984; and 76 since 1985. The places of half of the senators for each state are open to election each three years, under the system of rotation. Electoral arrangements for territory senators are described below.

Election timing – periodical elections

Section 13 of the Constitution provides that a periodical election for the Senate must "be made" within one year before the relevant places in the Senate are to become vacant. The relevant places of senators become vacant on 30 June. This means that the election must occur on or after 1 July of the previous year. The question which arises is whether the whole process of election, commencing with the issue of the writs, must occur within one year of the places becoming vacant, or whether only the polling day or subsequent stages must occur within that period, so that the writs for the election could be issued before 1 July. This question has not been definitely decided.

In Vardon v O'Loghlin (1907) 5 CLR 201, the question before the High Court was whether, the election of a senator having been found to be void, this created a vacancy which could be filled by the parliament of the relevant state under section 15 of the Constitution. The Court found that this situation did not create a vacancy which could be filled by that means, but that the senator originally returned as elected was never elected. A contrary argument was raised to the effect that, under section 13 of the Constitution, the term of service of a senator began on 1 January [now 1 July] following the day of his election, and it would lead to confusion if it were held that the subsequent voiding of the election, perhaps a year or more after the commencement of the term, could not be filled as a vacancy under section 15. In dismissing this argument, the Court, in the judgment delivered by Chief Justice Samuel Griffith, made the following observation:

It is plain, however, that sec. 13 was framed alio intuitu, i.e., for the purpose of fixing the term of service of senators elected in ordinary and regular rotation. The term "election" in that section does not mean the day of nomination or the polling day alone, but comprises the whole proceedings from the issue of the writ to the valid return. And the election spoken of is the periodical election prescribed to be held in the year at the expiration of which the places of elected senators become vacant. The words "the first day of January following the day of his election" in this view mean the day on which he was elected during that election. For the purpose of determining his term of service any accidental delay before that election is validly completed is quite immaterial.
This part of the judgment has been taken to indicate that, in interpreting the provision in section 13 whereby the periodical Senate election must be made within one year of the relevant places becoming vacant, the Court would hold that the whole process of election, not simply the polling day or subsequent stages, must occur within that period. This question, however, has not been distinctly decided. It would still be open to the Court to hold that only the polling day or subsequent stages must occur within the prescribed period, and there are various arguments which could be advanced to support this interpretation. The view that the requirement that the election "be made" within the relevant period means only that the election must be completed in that period is quite persuasive. If it were decided, however, to hold a periodical Senate election with only the polling day or subsequent stages occurring within the prescribed period, there would be a risk of the validity of the election being successfully challenged and the election held to be void. This would lead to the major consequence that the whole election process would have to start again. It may be doubted whether the Court would favour an interpretation which would bring about this consequence.

Section 13 of the Constitution, as has been noted, also provides that the term of service of a senator is taken to begin on the first day of July following the day of the election. In this provision, the term "day of ... election" clearly means the polling day for the election. This is in accordance with the finding in Vardon v O'Loghlin. The day of election is polling day provided that the election is valid; if the election is found to be invalid then no election has occurred and the question of what is the day of election does not arise.

Election timing – simultaneous general elections

The provision for dating a senator's term from 1 July preceding simultaneous general elections for both Houses has been seen to be the source of a problem stemming from the preference of governments, for financial reasons as well as others of party advantage, to avoid separate dates for a general election of the House of Representatives (the term of which is governed by the date of the simultaneous dissolution) and an ensuing periodical election for half the Senate. The consequence in most cases has been to hold an "early" general election of the House to coincide with the next periodical Senate election. An instance where an "early" general election for the House was not subsequently held in order to synchronise with the next periodical election for the Senate was May 1953; the 1955 general election for the House is the only occasion when an "early" general election has been called to coincide with election of senators to fill the places of second class (long term) senators elected following simultaneous elections for both Houses.

Elections arising from simultaneous dissolutions, held in August 1914, July 1987 and July 2016, did not give rise in significant form to the issue of keeping elections for the two Houses synchronised, because of the close proximity of the commencing dates for Senate and House terms in the relevant circumstances. However, the simultaneous dissolution of May 2016, only days before the last possible date to dissolve both Houses under section 57, led to a longer than usual campaign period to ensure a July election and minimal backdating of senators' terms.
The early dissolution of the House of Representatives in November 1929 had, in the event, no effect on synchronisation of Senate and House elections because another early dissolution, occasioned by defeat of the Scullin Government on the floor of the House, was needed in December 1931, a date when a periodical election for the Senate was convenient. The House of Representatives was prematurely dissolved in 1963; as a consequence there was a periodical election for the Senate the following year. Subsequently there were general elections for the House in 1966, 1969 and 1972, and periodical elections for the Senate in 1967 and 1970. This sequence of unsynchronised elections ended with the simultaneous dissolutions of April 1974.

The case for synchronisation of elections for the two Houses is more a question of convenience and partisan advantage than one of institutional philosophy. Financial considerations simply buttress arguments of party advantage. In a truly bicameral system there is no requirement at all for synchronisation of elections. Proposals to make this a requirement of the Australian Constitution have four times failed at referendum, even though "expert" opinion continues to favour a constitutional amendment of this character. If there is to be change, a more practical approach would be an alteration of the Constitution to provide that the terms of senators elected in a simultaneous dissolution election should be deemed to commence on 1 July following (rather than preceding) the date of election. Provided that the House of Representatives was not subsequently dissolved within two years of election, synchronisation of a general election for the House and a periodical election for the Senate could be restored with relative ease. Such a proposal, if adopted, would remove the current defect in simultaneous dissolution arrangements of circumscribing the standard six-year term for senators by anything up to one year. This approach would, on the other hand, avoid the two major deficiencies posed by simultaneous election proposals: the augmented power placed in the hands of a prime minister by extending executive government authority over the life of the House of Representatives to half the Senate; and diminished bicameralism through irrevocably tying the electoral schedule for the Senate to that of the House of Representatives. Effective bicameralism requires that the second chamber should have a significant measure of autonomy in its electoral cycle, as well as distinctive electoral arrangements.

Issue of writs

Writs for the election of senators are issued by the state governor under the authority of the relevant state legislation. The practice is for the governors of the states (when the elections are concurrent) to fix times and polling places identical with those for the elections for the House of Representatives, the writs for which are issued by the Governor-General. In practice, the Prime Minister informs the Governor-General of the requirements of section 12 of the Constitution, which provides that writs for the election of senators are issued by the state governors, observes that it would be desirable that the states should adopt the polling date proposed by the Commonwealth, and requests the Governor-General to invite the state governors to adopt a suggested date. Theoretically, a state could fix some date for the Senate poll other than that suggested by the Commonwealth, provided it is a Saturday. Different states, too, could fix different Saturdays for a Senate poll.
This power vested in the states to issue writs for Senate elections, fixing the date of polling, gives expression to the state basis of representation in the Senate. The Constitution provides that, in the case of a dissolution of the Senate, writs are issued within ten days from the proclamation of the dissolution. The Governor-General issues the writs for elections of territory senators.

Under changes introduced at the 2007 election, claims for enrolment or transfer of enrolment could not be considered if lodged after 8 pm on the date of issue of the writs, and the rolls closed on the third working day after the writs were issued. These provisions were ruled invalid by the High Court in Rowe v Electoral Commissioner (2010) 243 CLR 1, and replacement legislation providing for the rolls to close seven days after the date of the writs was enacted in 2011. A claim for enrolment or transfer of enrolment received between the close of rolls and polling day ("the suspension period"), and that was delayed in the post by an industrial dispute, is regarded as having been received before the rolls closed. Claims received during the suspension period are not considered until after polling day. Potential disenfranchisement of claimants for enrolment or transfer during the suspension period was the subject of a challenge before the 2016 election, but the challenge was dismissed by the High Court in Murphy & Anor v Electoral Commissioner HCA Trans 111. In Getup Ltd v Electoral Commissioner FCA 869, the Federal Court held that an online enrolment form signed with a digital pen was in order.

Nominations close at least 10 days but not more than 27 days after the issue of the writ. A candidate for election to either House of the Parliament must be at least 18 years old; an Australian citizen; and an elector entitled to vote, or a person qualified to become such an elector. A person meeting the three qualifications may nevertheless be disqualified for several reasons. Members of the House of Representatives, state parliaments or the legislative assemblies of the Australian Capital Territory or the Northern Territory cannot be chosen or sit as senators. Members of local government bodies, however, are offered some protection by s. 327(3) of the Commonwealth Electoral Act, but the High Court has not ruled conclusively on this matter. Others disqualified under the Constitution, section 44, are:

- anyone who is a citizen or subject of a foreign power;
- anyone convicted and under sentence, or subject to be sentenced, for an offence punishable by Commonwealth or state law by a sentence of 12 months or more;
- anyone who is an undischarged bankrupt;
- anyone who holds an office of profit under the Crown; and
- anyone with a pecuniary interest in any agreement with the Commonwealth Public Service (except as a member of an incorporated company of more than 25 people).

A person convicted of certain electoral-related offences is disqualified for 2 years. For cases of the disqualification of senators and senators elect, see Chapter 6, Senators, Qualifications of senators.

No one may nominate as a candidate for more than one election held on the same day. Hence it is not possible for anyone to nominate for more than one division for the House of Representatives, or more than one state or territory for the Senate, or for both the House and the Senate. Nominations must be made by 12 noon on the day nominations close, and the onus is on candidates to ensure nominations reach the electoral officer in time.
Candidates may withdraw their nominations at any time up to the close of nominations, but cannot do so after nominations have closed. Nominations of candidates for the Senate, made on the appropriate nomination form (or a facsimile of the form), are made to the Australian Electoral Officer for the state or territory for which the election is to be held. A candidate may be nominated by 100 electors or by the registered officer of the registered political party which has endorsed the candidate. Nomination of a candidate of a registered political party not made by the registered officer must be verified. Sitting independent candidates require only one nominee. Nomination forms are not valid unless the persons nominated:

- consent to act if elected;
- declare that they are qualified to be elected and that they are not candidates in any other election to be held on the same day;
- state whether they are Australian citizens by birth or became citizens by other means; and
- provide relevant particulars.

Candidates in a Senate election may make a request on the nomination form to have their names grouped on the ballot paper. A party name or abbreviation (or for a group endorsed by more than one registered party, a composite name) may be printed on the ballot paper adjacent to the group voting square and any party logo.

A deposit must be lodged with each nomination. The deposit, payable in legal tender or banker's cheque only, is $2,000 for a Senate nomination. The deposit is returned in a Senate election if, in the case of un-grouped candidates, the candidate's total number of first preference votes is at least four percent of the total number of formal first preference votes; or, where the candidate's name is included in a group, the sum of the first preference votes polled by all the candidates in the group is at least four percent of the total number of formal first preference votes (a short sketch of this test appears at the end of this passage).

Where the number of nominations does not exceed the number of vacancies, the Australian Electoral Officer, on nomination day, declares the candidates elected. In a Senate election, if any candidate dies between the close of nominations and polling day, and the number of remaining candidates is not greater than the number of candidates to be elected, those candidates are declared elected. However, if the remaining candidates are greater in number than the number of candidates to be elected, the election proceeds. A vote recorded on a Senate ballot paper for a deceased candidate is counted to the candidate for whom the voter has recorded the next preference, and the numbers indicating subsequent preferences are regarded as altered accordingly. In a House of Representatives election, if a candidate dies between the close of nominations and polling day, the election in that division is deemed to have wholly failed and does not proceed. A new writ is issued for another election in that division, but this supplementary election is held using the electoral roll prepared for the original election. The statutory provisions regarding death after the close of nominations of a nominated candidate for the Senate could seriously prejudice the prospects of a political party unless a sufficient number of candidates is nominated to avoid disadvantage in the event of a death.

The constitutionality of the statutory requirements for the registration of a political party (500 members, no overlapping membership with other parties) was upheld in Mulholland v Australian Electoral Commission (2004) 220 CLR 181.
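To make the four percent deposit-return test concrete, here is a minimal Python sketch. The function names, data shapes and figures are illustrative assumptions, not anything drawn from the Electoral Act or AEC practice.

```python
# A rough sketch (illustrative only, not the AEC's) of the four percent
# deposit-return test described above.

DEPOSIT = 2_000        # dollars lodged with each Senate nomination
THRESHOLD = 0.04       # four percent of formal first preference votes

def ungrouped_deposit_returned(candidate_votes: int, total_formal: int) -> bool:
    """An un-grouped candidate's deposit is returned if the candidate's own
    first preference total reaches four percent of the formal total."""
    return candidate_votes >= THRESHOLD * total_formal

def grouped_deposit_returned(group_votes: list[int], total_formal: int) -> bool:
    """For grouped candidates, the test applies to the sum of first
    preference votes polled by all the candidates in the group."""
    return sum(group_votes) >= THRESHOLD * total_formal

# Example: against 1,400,000 formal first preferences, the threshold is
# 56,000 votes. A group polling 58,000 between its members qualifies,
# even though no single member of the group reaches the threshold alone.
print(grouped_deposit_returned([40_000, 12_000, 6_000], 1_400_000))  # True
```

The point of separating the two functions is the asymmetry in the rule itself: grouped candidates are assessed collectively, un-grouped candidates individually.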
Polling takes place on a Saturday between the hours of 8 am and 6 pm. The Divisional Returning Officer for each electoral Division arranges for the appointment of all polling officials for the Division and makes all necessary arrangements for equipping polling places with voting screens, ballot boxes, ballot papers and certified lists of voters. Candidates are prohibited from taking any part in the actual conduct of the polling. They may appoint a scrutineer to represent them at each polling place. The scrutineer has the right to observe the sealing of the empty ballot box before the poll commences at 8 am; observe the questioning of voters by the officer issuing ballot papers; object to the right of any person to vote; and observe all aspects of voting by voters in polling places, hospitals, prisons and remote mobile teams.

Voting is compulsory for all electors with the exception of those living or travelling abroad, itinerant electors and electors located in the Antarctic. Contrary to the widely held belief that an elector only has to attend a polling place and have their name marked off the roll, the electoral Act specifically states that it shall be the duty of every elector to vote in each election, and it is quite specific about how ballot papers must be marked. Because voting is a private act performed in public, electors who deface their ballot papers or place them unmarked in the ballot box can never be identified. Nonetheless, the law is still very clear on this point. Some prisoners are excluded from voting, although some of the relevant provisions of the Commonwealth Electoral Act were ruled invalid in the case of Roach v Electoral Commissioner (2007) 233 CLR 162. Replacement legislation was enacted in 2011. The penalty for failing to vote without a valid and sufficient reason is $20 or, if the matter is dealt with in court, a fine not exceeding $50.

Electors may vote at any polling place in the House of Representatives electorate for which they are enrolled, at any polling place in the same state or territory (absent voting) or at an interstate voting centre if they are travelling interstate on election day. Under prescribed circumstances electors may vote by post or cast a pre-poll vote. Special arrangements are also made for ballots to be cast by eligible voters in hospitals, prisons and remote locations including Antarctica, and those travelling or residing abroad.

The ballot paper

A ballot paper for a Senate election has two parts, each reflecting a particular method of registering a vote. Electors may use only one method. The two parts are separated by a thick horizontal line known as the dividing line, and the two methods are referred to as voting "above the line" or "below the line". Introduced in 1983 to address an increasing proportion of informal votes for the Senate, the provisions for group voting tickets simplified voting for the Senate if electors chose not to indicate their order of preference for all candidates for that state or territory. By placing the number 1 in a box above the line for their chosen party, group or incumbent senator, voters could thereby adopt the registered preferences of the object of their choice. The constitutional validity of this method of voting was upheld in McKenzie v Commonwealth (1984) 57 ALR 747, Abbotto v Australian Electoral Commission (1997) 144 ALR 352 and Ditchburn v Australian Electoral Officer for Queensland (1999) 165 ALR 147.
In due course, however, the potential for the system to be exploited by micro-parties with appealing names, whose exchanges of preferences resulted in the election of candidates with minuscule primary votes, became increasingly apparent. Recommendations by the Joint Standing Committee on Electoral Matters in an interim report on the conduct of the 2013 federal election for the abolition of group and individual voting tickets and the adoption of optional preferential voting both above and below the line were given effect in the Commonwealth Electoral Amendment Act 2016. The new provisions were the subject of an immediate challenge that was unanimously dismissed by the High Court, which found that they did not impinge on the constitutional requirements for there to be one method of choosing senators which shall be uniform for all the States (s. 9) or for senators to be directly chosen by the people of the State (s. 7).

Where groups of candidates or individual incumbent senators have registered as such, a series of boxes is printed on the top part of the Senate ballot paper above the candidates' names. The voter may vote above the line by numbering at least 6 of the boxes in the order of his or her choice, starting with the number 1. Alternatively, where the voter wishes to indicate preferences among individual Senate candidates on the bottom part of the ballot paper, the voter must place a number 1 in the square opposite the name of the candidate most preferred, and give preference votes for at least 11 other candidates by placing the numbers 2, 3, 4 (and so on, as the case requires) in the squares opposite their names so as to indicate an order of preference for them. The top part of the ballot paper is left blank.

Counting the vote

At the close of the poll each polling place becomes a counting centre under the control of an assistant returning officer, who will have been the officer-in-charge of that polling place during the hours of polling. Only ordinary votes (not postal, pre-poll or absentee votes) are counted at the counting centres on election night. Votes for the House of Representatives are counted before Senate ballot papers, as there is widespread community interest in the formation of government and usually considerable time before the Senate terms begin. Furthermore, the nature of the Senate voting system means that a quota cannot be struck on polling night, so only provisional figures can be calculated from the ballot papers counted at polling places. Ballot papers are sorted by the polling officials according to the formal first preference votes marked, and the results are then tabulated and sent to the Divisional Returning Officer. Results are relayed through a computer network to the AEC's Virtual Tally Room, where progressive figures are displayed. When scrutiny of ordinary votes at each counting centre ends, ballot papers are placed in sealed parcels and delivered to the Divisional Returning Officer. Other votes are counted at the office of the Divisional Returning Officer after election night. In recent times, amendments to the electoral Act have permitted the computerised scrutiny of votes in Senate elections, which has reduced the time taken to calculate results, particularly in the larger States.

After the 2013 election, during the course of a recount of the Western Australian Senate vote, it was discovered that 1370 ballot papers had been lost. An official inquiry failed to locate the papers or identify the circumstances of the loss.
Given the closeness of the results and the different outcome from the recount, the AEC itself lodged a petition with the High Court sitting as the Court of Disputed Returns asking for the election result to be declared void. Two other parties lodged similar petitions. The Court declared the election void, holding that it was precluded by the Commonwealth Electoral Act 1918 from reconstructing the result from earlier records of the lost ballot papers, the loss of which, combined with the closeness of the count, inevitably affected the result. The election was held again on 5 April 2014, with a date for the return of the writs that allowed all elected or re-elected senators to begin their terms on 1 July 2014.

Candidates may appoint scrutineers who are entitled to be present throughout the counting of votes. The number of scrutineers for a candidate at each counting centre is limited to the number of officers engaged in the counting.

Formal voting in a Senate election

Following a 2008 decision of the Federal Court sitting as the Court of Disputed Returns, a series of principles has been set out by the Court to be applied to the consideration of the admission or rejection of ballot papers. In summary, these principles are to (i) err in favour of the franchise; (ii) have regard only to what is on the ballot paper; and (iii) construe the ballot paper as a whole. Subsection 268(3) limits the reasons for informality to those specified and requires a ballot paper to be given effect according to the voter's intention, so far as it is clear. However, the tests which apply to acceptance of a Senate ballot paper as formal are complicated because a Senate vote can be recorded either by numbering of preferences for individual candidates below the line or for parties or groups above the line. Additionally, a ballot paper may be accepted as formal even where the voter has erroneously attempted to record both types of votes. Thus three distinct cases may arise.

The first case is a vote above the line. A ballot paper is formal if:

- the numbers 1 to at least 6 are written in the squares printed above the line in order of preference for the parties or groups represented; or
- if there are 6 or fewer squares printed above the line, they are numbered consecutively from 1.

Specific allowances are made for voters who deviate from these requirements. A ballot paper is formal if the voter marks only the number 1 in a box above the line, or the number 1 and one or more higher numbers. In addition, a tick or a cross in a box above the line is accepted as the equivalent of the number 1. If a number is repeated, that number and any higher number are disregarded. If a number is missed, any numbers higher than the missing number are disregarded.

The second case is a vote below the line. A ballot paper is formal if:

- the numbers 1 to at least 12 are written in the squares printed below the line in order of preference for individual candidates; or
- if there are 12 or fewer squares printed below the line, they are numbered consecutively from 1.

Specific allowances are again made for voters who deviate from these requirements. If there are more than 6 squares printed below the line on a ballot paper, a vote is formal if the voter has numbered any of those squares consecutively from 1 to 6. In addition, a tick or a cross in a box below the line is accepted as the equivalent of the number 1. If a number is repeated, that number and any higher number are disregarded. If a number is missed, any numbers higher than the missing number are disregarded.

The third case arises where a ballot has been marked both above and below the line: if each vote would have been formal if recorded on its own, the vote below the line is included in the scrutiny rather than the party or group vote above the line.
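These savings provisions amount to a small algorithm for salvaging preferences from imperfectly marked ballots. The following Python sketch is one hedged reading of the above-the-line rules only; the data shape and function name are invented for illustration, and the authoritative tests are those in the Commonwealth Electoral Act.

```python
# Illustrative sketch (not AEC code) of the above-the-line savings rules:
# a tick or cross counts as a 1, a repeated number and everything higher
# are disregarded, and a gap in the numbering stops the scan rather than
# voiding the whole ballot.

def effective_preferences(marks: dict[str, str]) -> list[str]:
    """marks maps a group label to whatever the voter wrote in its box.
    Returns group labels in usable preference order; an empty list means
    no above-the-line preference can be salvaged (the vote is informal)."""
    numbers: dict[int, list[str]] = {}
    for group, mark in marks.items():
        if mark in ("tick", "cross"):           # equivalent of the number 1
            numbers.setdefault(1, []).append(group)
        elif mark.isdigit():
            numbers.setdefault(int(mark), []).append(group)
    order: list[str] = []
    n = 1
    while n in numbers:
        if len(numbers[n]) > 1:                 # repeated number: disregard
            break                               # it and any higher number
        order.append(numbers[n][0])
        n += 1                                  # a missing number also ends the scan
    return order

# A voter who wrote 1, 2, 2, 3: only the first preference survives.
print(effective_preferences({"A": "1", "B": "2", "C": "2", "D": "3"}))  # ['A']
```

The design mirrors the prose: informality is a last resort, so the function salvages as long a consecutive run of preferences as the markings allow before giving up.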
As noted in Chapter 6, upon the finding that Senator Wood had not been eligible to contest an election for the Senate in July 1987, it was determined that the place should be filled by counting or recounting of ballot papers cast for candidates for election for the Senate at the election. It was held "that the ballot papers for an election to the Senate, conducted under the system of proportional preferential voting prescribed by Part XVIII of the Commonwealth Electoral Act, for which an unqualified person was a candidate, were not invalid but indications of voters' preference for the candidate were ineffective".

Determining the successful candidates

The essential features of the Senate system of election are as follows:

1. To secure election, candidates must secure a quota of votes. The quota is determined by dividing the total number of formal first preference votes in the count by one more than the number of senators to be elected for the state or territory and increasing the result by one (a worked sketch of this arithmetic follows the list). A quota cannot be determined until the total number of formal ballot papers is calculated, which means waiting until the statutory period (13 days) for the receipt of postal votes has passed.
2. Should a candidate gain an exact quota, the candidate is declared elected and those ballot papers are set aside as finally dealt with, as there are no surplus votes.
3. For each candidate elected with a surplus, commencing with the candidate elected first, a transfer value is calculated for all the candidate's ballot papers. All those ballot papers are then re-examined and the number showing a next available preference for each of the continuing candidates is determined. Each of these numbers, ignoring any fractional remainders, is added to the continuing candidates' respective progressive totals of votes. Surplus votes are transferred at less than their full value: the transfer value is calculated by dividing the successful candidate's total surplus by the total number of the candidate's ballot papers.
4. Where a transfer of ballot papers raises the number of votes obtained by a candidate up to a quota, the candidate is declared elected. No more ballot papers are transferred to that elected candidate at any succeeding count.
5. When all surpluses have been distributed and vacancies remain to be filled, and the number of continuing candidates exceeds the number of unfilled vacancies, exclusion of candidates with the lowest numbers of votes commences. Bulk exclusions are proceeded with if possible; otherwise exclusions of single candidates take place. Excluded candidates' votes are transferred at full value in accordance with their next preferences to the remaining candidates. Under certain circumstances the transfer of a surplus may be deferred until after an exclusion or bulk exclusion.
6. Step 5 is continued, as necessary, until either all vacancies are filled or the number of candidates in the count is equal to the number of vacancies remaining to be filled. In the latter case, the remaining candidates are declared elected.
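By way of illustration, here is a minimal Python sketch of the arithmetic in steps 1 and 3 above (the quota and the transfer value). The function names and example figures are invented for the example.

```python
# Minimal sketch of the quota and transfer-value arithmetic in steps 1 and 3.

def quota(total_formal_first_prefs: int, vacancies: int) -> int:
    """Divide the formal first preference total by one more than the number
    of senators to be elected, ignore the fraction, then add one."""
    return total_formal_first_prefs // (vacancies + 1) + 1

def transfer_value(surplus: int, total_ballot_papers: int) -> float:
    """An elected candidate's surplus divided by the total number of the
    candidate's ballot papers; surpluses pass on at this reduced value."""
    return surplus / total_ballot_papers

# Example: 1,400,000 formal votes and 6 vacancies give a quota of 200,001.
q = quota(1_400_000, 6)
# A candidate polling 320,000 first preferences has a surplus of 119,999,
# so each of their ballot papers transfers at a value of about 0.375.
tv = transfer_value(320_000 - q, 320_000)
print(q, round(tv, 3))  # 200001 0.375
```

Note that, consistent with step 3, when transferred papers are credited to continuing candidates the fractional remainders of these products are ignored.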
In counting votes in a Senate election, if only two candidates remain for the last vacancy to be filled and they have an equal number of votes, the Australian Electoral Officer for the state or territory has a casting vote, but does not otherwise vote in the election.

Recounts normally occur only when the result of an election is very close. At any time before the declaration of the result of an election, the officer conducting the election may, at the written request of a candidate or on the officer's own decision, recount some or all of the ballot papers. The Electoral Commissioner or an Australian Electoral Officer may direct a recount. A recount last occurred in 2013, after the result of the count in Western Australia was so close as to raise questions about the safety of the original result. The election was ultimately declared void.

Return of the writ

Writs must be returned within 100 days of issue. Following the declaration of the result in a Senate election, the Australian Electoral Officer for a state or territory certifies the names of the candidates elected for the state or territory, and returns the writ and the certificate to the Governor of the state or, in the case of the ACT and the Northern Territory, to the Governor-General. The state governors forward their respective writs to the Governor-General, whose Official Secretary in turn passes them to the Clerk of the Senate for tabling at the swearing in of new senators.

Meeting of new parliament

Under the Constitution, section 5, after any general election (for the House of Representatives and usually a periodical election for the Senate) the Parliament shall be summoned to meet not later than 30 days after the day appointed for the return of the writs.

Disputed returns and qualifications

Under the Commonwealth Electoral Act the validity of any election or return may be disputed only by petition addressed to the Court of Disputed Returns. The High Court of Australia is the Court of Disputed Returns, and it has jurisdiction either to try the petition or to refer it for trial to the Federal Court. A petition must:

- set out the facts relied on to invalidate the election;
- sufficiently identify the specific matters on which the petition relies;
- detail the relief to which the petitioner claims to be entitled;
- be signed;
- be attested by two witnesses whose occupations and addresses are stated;
- be filed in the Registry of the High Court within 40 days after the return of the writ or the notification of the appointment of a person to fill a vacancy; and
- be accompanied by the sum of $500 as security for costs.

The Court has wide powers, which include power to declare that any person who was returned was not duly elected; to declare any candidate duly elected who was not returned as elected; and to declare any election absolutely void. The requirement for a petition to be lodged within the 40 day limit cannot be set aside. The Court cannot void a whole general election. The Court must sit as an open Court and be guided by the substantial merits and good conscience of each case without regard to legal forms or technicalities, or whether the evidence before it is in accordance with the law of evidence or not. Questions of fact may be remitted to the Federal Court. All decisions of the Court are final and conclusive and without appeal, and cannot be questioned in any way.
If the Court of Disputed Returns finds that a candidate has committed or has attempted to commit bribery or undue influence, and that candidate has been elected, then the election will be declared void. Any question arising in the Senate respecting the qualification of a senator or respecting a vacancy may be referred by resolution to the Court of Disputed Returns. For cases on the qualifications of senators, see Chapter 6, Senators, under that heading.

Division of the Senate following simultaneous general elections

After a general election for the Senate, following simultaneous dissolutions of both Houses, it is necessary for the Senate to divide senators into two classes for the purpose of restoring the rotation of members. On all seven occasions that it has been necessary to divide the Senate for the purposes of rotation, the practice has been to allocate senators according to the order of their election. In 2016, the effective part of the resolution provided as follows:

- Senators listed at positions 7 to 12 on the certificate of election of senators for each state shall be allocated to the first class and receive 3 year terms.
- Senators listed at positions 1 to 6 on the certificate of election of senators for each state shall be allocated to the second class and receive 6 year terms.

In its report of September 1983 the Joint Select Committee on Electoral Reform proposed that "following a double dissolution election, the Australian Electoral Commission conduct a second count of Senate votes, using the half Senate quota, in order to establish the order of election to the Senate, and therefore the terms of election". The committee also recommended that there should be a constitutional referendum on "the practice of ranking senators in accordance with their relative success at the election" so that "the issue is placed beyond doubt and removed from the political arena". The Commonwealth Electoral Act was subsequently amended to authorise a recount of the Senate vote in each state after a dissolution of the Senate to determine who would have been elected in the event of a periodical election for half the Senate.

Following the 1987 dissolution of the Senate, the then Leader of the Government in the Senate, Senator John Button, successfully proposed that the method used following previous elections for the full Senate should again be used in determining senators in the first and second classes respectively. The Opposition on that occasion unsuccessfully moved an amendment to utilise section 282 of the Commonwealth Electoral Act for the purpose of determining the two classes of senators, in accordance with the September 1983 recommendation of the Joint Select Committee on Electoral Reform. According to the leading Opposition speaker, Senator Short, the effect of using the historical rather than the proposed new method was that two National Party senators would be senators in the first (three-year) class rather than the second (six-year) class, whilst two Australian Democrat senators would be senators in the second rather than the first class.

On 29 June 1998 the Senate agreed to a motion, moved by the Leader of the Opposition in the Senate, Senator Faulkner, indicating support for the use of section 282 of the Commonwealth Electoral Act in a future division of the Senate.
The stated reason for the motion was that the new method should not be adopted without the Senate indicating its intention in advance of a simultaneous dissolution, but it was pointed out that the motion could not bind the Senate for the future. An identical motion was moved by Senator Ronaldson (Shadow Special Minister of State) on 22 June 2010 and agreed to without debate. No such resolution preceded the 2016 dissolution and the order of election method was again followed. The recount method would have resulted in two minor party senators being allocated six-year terms at the expense of two major party senators.

Casual vacancies

Casual vacancies in the Senate are created by death, resignation or absence without permission. In the case of resignation, a senator writes to the President, or to the Governor-General if there is no President or the President is absent from the Commonwealth. A resignation may take the following form—

Dear Mr/Madam President
I resign my place as a senator for the State of .........., pursuant to section 19 of the Constitution of the Commonwealth of Australia.

Where the letter of resignation is sent to the Governor-General, the form may be as follows:

Section 19 of the Constitution provides — "A senator may, by writing addressed to the President, or to the Governor-General if there is no President or if the President is absent from the Commonwealth, resign his place, which thereupon shall become vacant." As the President of the Senate is absent from the Commonwealth, I address my resignation to you. I resign my place as a senator for the State of .........., pursuant to section 19 of the Constitution of the Commonwealth of Australia.

If the President resigns as a senator, the resignation is addressed to the Governor-General. The following principles have been observed in relation to the manner in which senators may resign their place:

- a resignation by telegram or other form of unsigned message is not effective;
- a resignation must be in writing signed by the senator who wishes to resign and must be received by the President; whether the writing is sent by post or other means is immaterial;
- it is only upon the receipt of the resignation by the President that the senator's place becomes vacant under section 19 of the Constitution;
- a resignation cannot take effect before its receipt by the President;
- a resignation from a current term may not take effect at a future time;
- the safest procedure is for the resignation, in writing, to be delivered to the President in person in order that the President can be satisfied that the writing is what it purports to be, namely, the resignation of the senator in question; resignations transmitted by facsimile or other electronic means and confirmed by telephone are accepted.

On 5 July 1993 Senator Tate, having just commenced a new term as a senator for Tasmania, resigned before taking his seat in the Senate. The resignation of Senator Tate before his swearing in did not affect the procedure for his replacement. The interesting questions that would have arisen had he resigned before the end of his term were deferred till 2013, when Senator Bob Carr resigned, having just been elected to a new term starting on 1 July 2014. He submitted what was in effect a "double resignation", resigning both from his place in respect of his term ending on 30 June and also in respect of his new term commencing on 1 July. Notification of both vacancies was provided to the Governor of NSW by the President of the Senate pursuant to section 21 of the Constitution.
The resignation of a senator-elect in Senator Bob Carr's case was taken as giving rise to a double vacancy in respect of his current term and the term to which he had been elected. The death of a senator-elect has also been regarded as creating a casual vacancy to be filled in accordance with section 15 of the Constitution. Presumably a senator-elect could become disqualified and similarly create a casual vacancy. The disqualification of a senator at the time of election, however, does not create a vacancy but a failure of election, which is remedied by a recount of ballot papers.

The Constitution, section 20, states that the "place of a senator becomes vacant if for two consecutive months of any session of the Parliament" a senator fails to attend the Senate without its permission. In 1903 the seat of Senator John Ferguson was declared vacant owing to absence without leave for two months. For the purposes of section 20, a record is kept in the Journals of the Senate of senators' attendance.

Method of filling casual vacancies

Casual vacancies are filled in accordance with section 15 of the Constitution. The purpose of the current section 15, inserted by an amendment of the Constitution in 1977, is to preserve as much as possible the proportional representation determined by the electors in elections for the Senate. The main features of the section are as follows:

- When a casual vacancy arises, the Houses of the Parliament, or the House where there is only one House, of the state represented by the vacating senator chooses a person to hold the place until the expiration of the term.
- If the Parliament is not in session, the Governor of the state, with the advice of the Executive Council thereof, may appoint a person to hold the place until the expiration of 14 days from the beginning of the next session of the parliament of the state or the expiration of the term, whichever first happens.

A person chosen is to be, where relevant and possible, a member of the party to which the senator whose death or resignation gave rise to the vacancy belonged. The pertinent paragraph of section 15 states:

Where a vacancy has at any time occurred in the place of a senator chosen by the people of a State and, at the time when he was so chosen, he was publicly recognised by a particular political party as being an endorsed candidate of that party and publicly represented himself to be such a candidate, a person chosen or appointed under this section in consequence of that vacancy, or in consequence of that vacancy and a subsequent vacancy or vacancies, shall, unless there is no member of that party available to be chosen or appointed, be a member of that party.

Section 15 also provides that where:

- in accordance with the last preceding paragraph, a member of a particular political party is chosen or appointed to hold the place of a senator whose place had become vacant; and
- before taking his seat he ceases to be a member of that party (otherwise than by reason of the party having ceased to exist);

he shall be deemed not to have been so chosen or appointed, and the vacancy shall be again notified in accordance with section twenty-one of the Constitution.

Casual vacancies arising in the Senate representation of the Australian Capital Territory or the Northern Territory are filled by the respective territory legislative assemblies.
If the legislature is out of session, a temporary appointment can be made in the case of the Australian Capital Territory by the Chief Minister, and in the case of the Northern Territory by the Administrator. Provisions relating to political parties, similar to those of section 15 of the Constitution, also apply. The term of a senator filling a casual vacancy commences on the date of his or her choice by the appointing body. When a senator is appointed to a vacant place by the governor of a state and the appointment is “confirmed” by the state parliament within the 14 days allowed by section 15, the senator is not regarded as commencing a new term on the appointment by the parliament and is not sworn again. The 14 day period is regarded as commencing on the day after the first day of the session, in accordance with the normal rule of statutory interpretation. If there is a “gap” between the expiration of the 14 day period and the appointment of the senator by the parliament, the senator is sworn again.

The “double resignation” of Senator Bob Carr in 2013 created interesting questions for the Parliament of New South Wales in choosing a replacement. Senator Carr's party nominated one person to fill both the remainder of his current term and the new term to which he had been elected, but the Parliament, after considering advice from the Crown Solicitor, determined that it could fill the current vacancy only and could not act prospectively to fill a future vacancy. The advice was tabled in the New South Wales Legislative Council on 12 November 2013. With the NSW Houses not scheduled to sit between 17 June and 12 August 2014, further advice was sought from the NSW Crown Solicitor about whether an appointment could be made by the Governor and whether a resolution of the Senate encouraging the NSW Parliament to fill the vacancy could somehow act as a “trigger” for the Houses to meet and fill the vacancy. Not surprisingly (NSW having always taken a strict view of when a governor's appointment could be made), the advice on both questions was negative. In any case, the Senate did not contemplate such a resolution. However, the NSW Houses resolved to meet on 2 July 2014 and again chose Senator O'Neill, who had been chosen to fill the first vacancy, to fill the second vacancy created by the resignation of Senator Bob Carr. For the avoidance of doubt, the President, on 1 July 2014, reminded the NSW Governor of his earlier notification of the vacancy existing from that date.

Delay in filling casual vacancies

The 1977 alteration of the Constitution has not entirely solved all problems in the filling of casual vacancies. There is nothing to compel a state parliament to fill a vacancy. This was illustrated in 1987 following the resignation of Tasmanian Senator Grimes, who had been elected to the Senate as an endorsed candidate of the Australian Labor Party. In accordance with the Constitution, section 15, the Parliament of Tasmania met in joint sitting on 8 May 1987. The Leader of the Australian Labor Party in the House of Assembly and Leader of the Opposition, Mr Batt, nominated John Robert Devereux to fill the vacancy. In the ensuing debate it became apparent that government members as well as a number of independent members of the Legislative Council intended to vote against the nomination.
The basis for doing so, in terms of the Constitution, was expressed as follows by Mr Groom, Minister for Forests: It has been suggested by some people that there is a convention which requires us to accept Mr Devereux's nomination without question, but section 15 of the Constitution clearly states that it is for the Parliament to choose the person to fill the vacancy and not the party. We can choose only a person who is a member of the same party as the retired senator — that is well recognised — but we are not bound to accept the nomination of the party concerned. The matter shortly came to a vote. Votes were tied at 26 each. The question was thus resolved in the negative in accordance with the rules adopted for the joint sitting. Subsequently a member of the Legislative Council who had voted “No” in the division nominated William G McKinnon, a financial member of the Australian Labor Party and former member of the Tasmanian Parliament, to fill the vacancy and produced a letter from the nominee agreeing to the nomination. After a brief suspension the chair of the Joint Sitting declared that the “letter is not in order”. He continued: It does not comply with rule 16(6) in that the letter does not declare that the person is eligible to be chosen for the Senate and that the nomination is in accordance with section 15 of the Constitution of the Commonwealth of Australia. Therefore I am in the position of being unable to accept the nomination. The joint sitting adjourned soon afterwards without any further voting. The filling of the casual vacancy was, in the event, overtaken by simultaneous dissolutions of the Senate and the House. In the subsequent election John Devereux was among the endorsed ALP candidates in Tasmania who were elected. In the Senate itself, the Opposition granted a pair to the government following Senator Grimes' resignation so that in party terms relative strengths were maintained. The Opposition's position on the matter was stated in the following terms: “the person appointed to fill casual vacancies of this kind ought to be the person nominated by the retiring senator's political party”. There was no certainty as to the outcome of the dispute. According to Senator Gareth Evans, representing the Attorney-General in the Senate, “we have all the makings, however, of a deadlock, and that is what will prevail in the absence of legal challenge and in the absence of a change of heart in Tasmania at the moment”. Failure to fill a casual vacancy promptly means that a state's representation in the Senate is deficient and the principle of equality of representation infringed. The Senate itself takes a keen interest in prompt filling of casual vacancies and has on several occasions expressed by resolution concern about delay. On 19 March 1987, in the case of the Tasmanian vacancy, the Senate expressed the view that the nominee of the relevant party should be appointed. Because of the delay in filling a casual vacancy created by the resignation of Senator Vallentine on 31 January 1992, the Senate passed a resolution on 5 March 1992 expressing its disapproval “of the action of the Western Australian Government for failing to appoint Christabel Chamarette [the candidate endorsed by the relevant political group] as a Senator for Western Australia, condemns the Western Australian Government for denying electors of that state their rightful representation in the Senate, and condemns the Western Australian Government for the disrespect it has shown to the Senate”. 
On 3 June 1992 the Senate passed the following resolution: That the Senate — - believes that casual vacancies in the Senate should be filled as expeditiously as possible, so that no State is without its full representation in the Senate for any time longer than is necessary; - recognises that under section 15 of the Constitution an appointment to a vacancy in the Senate may be delayed because the Houses of the Parliament of the relevant State are adjourned but have not been prorogued, which, on a strict construction of the section, prevents the Governor of the State making the appointment; and - recommends that all State Parliaments adopt procedures whereby their Houses, if they are adjourned when a casual vacancy in the Senate is notified, are recalled to fill the vacancy, and whereby the vacancy is filled: - within 14 days after the notification of the vacancy, or - where under section 15 of the Constitution the vacancy must be filled by a member of a political party, within 14 days after the nomination by that party is received, whichever is the later. This resolution was passed because the government of Western Australia had adopted the “strict construction” referred to in the resolution, that the state governor could not fill the vacancy because the state Parliament was not prorogued but the Houses had adjourned. Other states from time to time have adopted the view that their governors fill vacancies when their Houses are adjourned. This resolution was reaffirmed in 1997. The Senate passed a resolution on 4 March 1997 calling on two states to fill casual vacancies expeditiously. The resolution was prompted largely by statements by the Premier of Queensland that a casual vacancy in that state caused by a mooted resignation of a senator might not be filled in accordance with section 15 of the Constitution. A resolution of 15 May 1997 referred to the tardiness of the Victorian government in filling vacancies. In 2015, a resolution agreed to on 26 March reaffirmed earlier resolutions and called on NSW to take all necessary steps to fill the vacancy caused by the resignation of Senator Faulkner. Despite the 1991 precedent, a governor's appointment was not made after the state Parliament was prorogued, and the vacancy remained unfilled until after the NSW Houses met following the state election. The obligation on states to fill casual vacancies as expeditiously as possible is matched by an obligation on the Senate to swear in and seat the appointees at the earliest possible time. The Senate has always adhered to this principle. A list of casual vacancies filled under section 15 of the Constitution is contained in Appendix 7. Until 1975 all members of the Senate were elected to represent the people of the states. In the elections in December 1975 following simultaneous dissolution of the two Houses on 11 November 1975 the Australian Capital Territory and the Northern Territory each elected two senators for the first time. Legislation for election of territory senators was enacted in the Senate (Representation of Territories) Act 1973. This legislation was based on the Constitution, section 122, which provides that, in relation to territories, the Parliament “may allow the representation of such territory in either House of the Parliament to the extent and on the terms which it thinks fit”. The provisions for the representation of the territories in the Senate are now contained in the Commonwealth Electoral Act, ss 40-44. The legislation was not enacted without controversy. 
Indeed, it was one of the bills cited as a ground for the simultaneous dissolutions of 1974 and was eventually passed into law at the joint sitting of that year. It was subsequently twice challenged in the High Court, surviving the first challenge by a 4 to 3 majority decision, and the second by a majority of 5 to 2. The principal issue in dispute was the contention that territory senators would undermine the constitutional basis of the Senate as a house representing the people by states and that territory representation would disrupt the numerical balance between large and small states. Other questions related to the voting rights of territory senators; the effect of territory senators on the nexus between the sizes of the two Houses and on quorums in the Senate; and applicable criteria in determining whether a territory should be represented in the Senate. A full account of the matter is contained in ASP, 6th ed. That edition concluded that “the broadest possible representation of all the people of Australia best serves that [the Senate's] checks and balances role”. Given that each territory's representation is currently limited to two senators, the practice of electing both at the one election by proportional representation preserves the Senate's role as a House which enhances the representative capacity of the Parliament and provides a remedy for the defects in the electoral method used for the House of Representatives. As indicated in Chapter 1, since the 1980 general election members of the House of Representatives for ACT electorates have usually been members of the Australian Labor Party. Throughout much of this period, one ACT senator has been a member of the ALP and the other a member of the Liberal Party. One-party representation in the House has also been common for the Northern Territory, so that its two senators are also essential to providing that territory with balanced representation. The writ for election of senators for a territory is issued by the Governor-General and is addressed to the Australian Electoral Officer for that Territory; following declaration of the result of a Senate election in a territory, the writ is returned to the Governor-General.
The study is the first comprehensive effort to directly compare the impacts of biological diversity loss to the anticipated effects of a host of other human-caused environmental changes. The results highlight the need for stronger local, national and international efforts to protect biodiversity and the benefits it provides, according to the researchers, who are based at nine institutions in the United States, Canada and Sweden. “Loss of biological diversity due to species extinctions is going to have major impacts on our planet, and we better prepare ourselves to deal with them,” said University of Michigan ecologist Bradley Cardinale, one of the authors. The study is scheduled for online publication in the journal Nature on May 2. “These extinctions may well rank as one of the top five drivers of global change,” said Cardinale, an assistant professor at the U-M School of Natural Resources and Environment and an assistant professor in the Department of Ecology and Evolutionary Biology. Studies over the last two decades have demonstrated that more biologically diverse ecosystems are more productive. As a result, there has been growing concern that the very high rates of modern extinctions – due to habitat loss, overharvesting and other human-caused environmental changes – could reduce nature’s ability to provide goods and services like food, clean water and a stable climate. But until now, it’s been unclear how biodiversity losses stack up against other human-caused environmental changes that affect ecosystem health and productivity. “Some people have assumed that biodiversity effects are relatively minor compared to other environmental stressors,” said biologist David Hooper of Western Washington University, the lead author of the Nature paper. “Our new results show that future loss of species has the potential to reduce plant production just as much as global warming and pollution.” In their study, Hooper and his colleagues used combined data from a large number of published studies to compare how various global environmental stressors affect two processes important in all ecosystems: plant growth and the decomposition of dead plants by bacteria and fungi. The new study involved the construction of a data base drawn from 192 peer-reviewed publications about experiments that manipulated species richness and examined the impact on ecosystem processes. The global synthesis by Hooper and his colleagues found that in areas where local species loss this century falls within the lower range of projections (loss of 1 to 20 percent of plant species), negligible impacts on ecosystem plant growth will result, and changes in species richness will rank low relative to the impacts projected for other environmental changes. In ecosystems where species losses fall within intermediate projections (21 to 40 percent of species), however, species loss is expected to reduce plant growth by 5 to 10 percent, an effect that is comparable in magnitude to the expected impacts of climate warming and increased ultraviolet radiation due to stratospheric ozone loss. At higher levels of extinction (41 to 60 percent of species), the impacts of species loss ranked with those of many other major drivers of environmental change, such as ozone pollution, acid deposition on forests, and nutrient pollution. “Within the range of expected species losses, we saw average declines in plant growth that were as large as changes seen in experiments simulating several other major environmental changes caused by humans,” Hooper said. 
“I think several of us working on this study were surprised by the comparative strength of those effects.” The strength of the observed biodiversity effects suggests that policymakers searching for solutions to other pressing environmental problems should be aware of potential adverse effects on biodiversity as well, the researchers said. Still to be determined is how diversity loss and other large-scale environmental changes will interact to alter ecosystems. “The biggest challenge looking forward is to predict the combined impacts of these environmental challenges to natural ecosystems and to society,” said J. Emmett Duffy of the Virginia Institute of Marine Science, a co-author of the paper. Authors of the Nature paper, in addition to Hooper, Cardinale and Duffy, are: E. Carol Adair of the University of Vermont and the National Center for Ecological Analysis and Synthesis; Jarrett E.K. Byrnes of the National Center for Ecological Analysis and Synthesis; Bruce Hungate of Northern Arizona University; Kristen Matulich of University of California Irvine; Andrew Gonzalez of McGill University; Lars Gamfeldt of the University of Gothenburg; and Mary O’Connor of the University of British Columbia and the National Center for Ecological Analysis and Synthesis. Funding for the study included grants from the National Science Foundation and the National Center for Ecological Analysis and Synthesis. “This analysis establishes that reduced biodiversity affects ecosystems at levels comparable to those of global warming or air pollution,” said Henry Gholz, program director in the National Science Foundation’s Division of Environmental Biology, which funded the research. Jim Erickson | Newswise Science News
Grade 12 Physics – Circular Motion of a Charged Particle Moving in a Magnetic Field

1. An electron with a mass of 9.1 x 10^-31 kg and charge of 1.6 x 10^-19 C is accelerated to a velocity of 4 x 10^6 m/s, then enters a uniform magnetic field of 5 x 10^-3 T at an angle of 90° to the field. What is the radius of the circular path it follows?

2. An electron with a mass of 9.11 x 10^-31 kg travels at 2 x 10^7 m/s in a plane perpendicular to a 0.1 T magnetic field. Calculate the radius of the circular path the particle takes on.

3. A proton (1.6 x 10^-27 kg) in the diagram below enters a region possessing a uniform magnetic field. The proton's speed is 1 x 10^7 m/s.
a) Determine the magnitude and direction of the magnetic field which will cause the proton to follow a circular path 10 cm in diameter.
b) What force (magnitude and direction) will act on the proton?

[Diagram: proton moving with velocity v into the field region, following a circular path 10 cm in diameter]

4. A negative charge feels a force to the left when moving perpendicular to a magnetic field that is directed into the page. Find the direction of its velocity.

5. A magnetic field is vertically upwards. Charge A moves vertically downwards in the region of this magnetic field. Charge B is stationary within this magnetic field. Which charge feels the greater force? Explain.

6. A charged particle enters a magnetic field directed out of the page, as shown below. Is the particle positively or negatively charged?

7. A particle that has lost one electron is moving at 1.9 x 10^4 m/s at right angles to a uniform magnetic field of 1.0 x 10^-3 T. If the radius of its path is 40 cm, what is its mass?

8. A particle with a charge of +1.0 ec and a mass of 3.9 x 10^-25 kg is accelerated from rest through a potential difference of 1.0 x 10^5 V while passing through parallel plates. It then exits the parallel plates and enters a magnetic field of 0.1 T that is perpendicular to its motion. Find the radius of the path it would follow in the magnetic field. (Hint: before you can find the radius, you need its velocity when it exits the plates.)

9. An ion with a charge of +2.0 ec moves through a magnetic field of 2.0 T along a path with a radius of 5.0 cm. If the magnetic field is at a right angle with its path, find the momentum of the ion.

10. Consider the diagram below. If charge q enters the magnetic field with a kinetic energy of 4.0 J and a radius of path of 0.5 m, what is the force acting on the charge?

Answers: 1. 4.6 x 10^-3 m 2. 1.1 mm 3. (a) B = 2.0 T [out of the page] (b) 3.2 x 10^-12 N [South] 4. South 5. Both feel no force. Charge A is moving parallel to the magnetic field and Charge B has no velocity. 6. Negative 7. 3.4 x 10^-27 kg 8. 7.0 m 9. 3.2 x 10^-20 kg·m/s 10. 16 N

Number 10 is tricky. KE = ½mv² = ½(mv)v, so mv = 2KE/v, and we know mv = qBr, so 2KE/v = qBr, which rearranges to 2KE/r = qvB. But we want force, and F = qvB, so using the above, F = 2KE/r.

Worked solution for 3: a) r = mv/qB, so B = mv/qr = (1.6 x 10^-27 kg)(1 x 10^7 m/s) / (1.6 x 10^-19 C)(0.05 m) = 2.0 T. This field must be pointing out of the page. b) F = qvB = (1.6 x 10^-19 C)(1 x 10^7 m/s)(2.0 T) = 3.2 x 10^-12 N [South].
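For students who want to check these answers numerically, here is a small illustrative Python sketch of the two formulas the worksheet relies on, r = mv/(qB) and F = qvB; the numbers are taken from problems 1, 3 and 10 above.

```python
# Numerical check of the worksheet formulas (SI units assumed throughout).
# For a charge moving perpendicular to a uniform field: r = m*v/(q*B) and F = q*v*B.

def radius(m, v, q, B):
    """Radius (in m) of the circular path of a charge q at speed v in field B."""
    return m * v / (q * B)

def force(q, v, B):
    """Magnitude (in N) of the magnetic force on the charge."""
    return q * v * B

e = 1.6e-19  # elementary charge in coulombs

# Problem 1: electron in a 5 mT field.
print(radius(9.1e-31, 4e6, e, 5e-3))   # ~4.6e-3 m

# Problem 3: path diameter 10 cm, so r = 0.05 m; solve r = mv/(qB) for B.
B = 1.6e-27 * 1e7 / (e * 0.05)
print(B)                                # 2.0 T
print(force(e, 1e7, B))                 # 3.2e-12 N

# Problem 10 shortcut: KE = (1/2)*m*v**2 and m*v = q*B*r together give F = 2*KE/r.
print(2 * 4.0 / 0.5)                    # 16.0 N
```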
A heuristic technique (from the Ancient Greek εὑρίσκω, "find" or "discover"), or a heuristic for short, is any approach to problem solving or self-discovery that employs a practical method that is not guaranteed to be optimal, perfect or rational, but which is nevertheless sufficient for reaching an immediate, short-term goal. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision. Examples that employ heuristics include using trial and error, a rule of thumb, an educated guess, an intuitive judgment, a guesstimate, profiling, or common sense.

Heuristics are strategies derived from previous experiences with similar problems. These strategies depend on using readily accessible, though loosely applicable, information to control problem solving in human beings, machines and abstract issues. The most fundamental heuristic is trial and error, which can be used in everything from matching nuts and bolts to finding the values of variables in algebra problems. In mathematics, some common heuristics involve the use of visual representations, additional assumptions, forward/backward reasoning and simplification. Here are a few commonly used heuristics from George Pólya's 1945 book, How to Solve It:
- If you are having difficulty understanding a problem, try drawing a picture.
- If you can't find a solution, try assuming that you have a solution and seeing what you can derive from that ("working backward").
- If the problem is abstract, try examining a concrete example.
- Try solving a more general problem first (the "inventor's paradox": the more ambitious plan may have more chances of success).

In psychology, heuristics are simple, efficient rules, learned or inculcated by evolutionary processes, that have been proposed to explain how people make decisions, come to judgments, and solve problems, typically when facing complex problems or incomplete information. Researchers test whether people use those rules with various methods. These rules work well under most circumstances, but in certain cases can lead to systematic errors or cognitive biases. The study of heuristics in human decision-making was developed in the 1970s and 80s by the psychologists Amos Tversky and Daniel Kahneman, although the concept was originally introduced by Nobel laureate Herbert A. Simon. Simon's primary object of research was problem solving, and he showed that we operate within what he called bounded rationality. He coined the term "satisficing", which denotes the situation where people seek solutions, or accept choices or judgments, that are "good enough" for their purposes although they could be optimized. Rudolf Groner analyzed the history of heuristics from its roots in ancient Greece up to contemporary work in cognitive psychology and artificial intelligence, and proposed a cognitive style "heuristic versus algorithmic thinking", which can be assessed by means of a validated questionnaire. Gerd Gigerenzer and his research group argued that models of heuristics need to be formal to allow for predictions of behavior that can be tested. They study the fast and frugal heuristics in the "adaptive toolbox" of individuals or institutions, and the ecological rationality of these heuristics, that is, the conditions under which a given heuristic is likely to be successful.
The descriptive study of the "adaptive toolbox" is done by observation and experiment; the prescriptive study of ecological rationality requires mathematical analysis and computer simulation. Heuristics – such as the recognition heuristic, the take-the-best heuristic, and fast-and-frugal trees – have been shown to be effective in predictions, particularly in situations of uncertainty (a minimal illustrative code sketch of such a rule follows the reference list below). It is often said that heuristics trade accuracy for effort, but this is only the case in situations of risk. Risk refers to situations where all possible actions, their outcomes and probabilities are known. In the absence of this information, that is, under uncertainty, heuristics can achieve higher accuracy with lower effort. This finding, known as a less-is-more effect, would not have been found without formal models. The valuable insight of this program is that heuristics are effective not despite their simplicity but because of it. Furthermore, Gigerenzer and Wolfgang Gaissmaier found that both individuals and organizations rely on heuristics in an adaptive way.

Heuristics, through greater refinement and research, have begun to be applied to other theories, or to be explained by them. For example, the cognitive-experiential self-theory (CEST) is also an adaptive view of heuristic processing. CEST distinguishes two systems that process information. At some times, roughly speaking, individuals consider issues rationally, systematically, logically, deliberately, effortfully, and verbally. On other occasions, individuals consider issues intuitively, effortlessly, globally, and emotionally. From this perspective, heuristics are part of a larger experiential processing system that is often adaptive, but vulnerable to error in situations that require logical analysis.

In 2002, Daniel Kahneman and Shane Frederick proposed that cognitive heuristics work by a process called attribute substitution, which happens without conscious awareness. According to this theory, when somebody makes a judgment (of a "target attribute") that is computationally complex, an easier-to-calculate "heuristic attribute" is substituted. In effect, a cognitively difficult problem is dealt with by answering a rather simpler problem, without the person being aware of this happening. This theory explains cases where judgments fail to show regression toward the mean. Heuristics can be considered to reduce the complexity of clinical judgments in health care.

Informal models of heuristics
- Affect heuristic
- Anchoring and adjustment – Describes the common human tendency to rely more heavily on the first piece of information offered (the "anchor") when making decisions. For example, in a study done with children, the children were told to estimate the number of jellybeans in a jar. Groups of children were given either a high or low "base" number (anchor). Children estimated the number of jellybeans to be closer to the anchor number that they were given.
- Availability heuristic – A mental shortcut that occurs when people make judgments about the probability of events by the ease with which examples come to mind. For example, in a 1973 Tversky & Kahneman experiment, the majority of participants reported that there were more words in the English language that start with the letter K than for which K was the third letter. There are actually twice as many words in the English language that have K as the third letter as those that start with K, but words that start with K are much easier to recall and bring to mind.
- Contagion heuristic
- Effort heuristic
- Escalation of commitment – Describes the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the cost, starting today, of continuing the decision outweighs the expected benefit. This is related to the sunk cost fallacy.
- Familiarity heuristic – A mental shortcut applied to various situations in which individuals assume that the circumstances underlying the past behavior still hold true for the present situation and that the past behavior thus can be correctly applied to the new situation. Especially prevalent when the individual experiences a high cognitive load.
- Naïve diversification – When asked to make several choices at once, people tend to diversify more than when making the same type of decision sequentially.
- Peak–end rule
- Representativeness heuristic – A mental shortcut used when making judgments about the probability of an event under uncertainty; or, judging a situation based on how similar the prospects are to the prototypes the person holds in his or her mind. For example, in a 1982 Tversky and Kahneman experiment, participants were given a description of a woman named Linda. Based on the description, it was likely that Linda was a feminist. Eighty to ninety percent of participants, choosing from two options, chose that it was more likely for Linda to be a feminist and a bank teller than only a bank teller. The likelihood of two events together cannot be greater than that of either of the two events individually. For this reason, the representativeness heuristic is exemplary of the conjunction fallacy.
- Scarcity heuristic
- Simulation heuristic
- Social proof

Cognitive maps

Heuristics were also found to be used in the manipulation and creation of cognitive maps. Cognitive maps are internal representations of our physical environment, particularly associated with spatial relationships. These internal representations are used by our memory as a guide in our external environment. It was found that when questioned about map images, distances, etc., people commonly made distortions to images. These distortions took shape in the regularization of images (i.e., images are represented as more like pure abstract geometric images, though they are irregular in shape).

There are several ways that humans form and use cognitive maps, and visual intake is a key part of mapping. The first is by using landmarks, whereby a person uses a mental image to estimate a relationship, usually distance, between two objects. The second is route-road knowledge, which is generally developed after a person has performed a task and is relaying the information of that task to another person. The third is a survey, whereby a person estimates a distance based on a mental image that, to them, might appear like an actual map. This image is generally created when a person's brain begins making image corrections. These are presented in five ways:
- Right-angle bias: when a person straightens out an image, like mapping an intersection, and begins to give everything 90-degree angles, when in reality it may not be that way.
- Symmetry heuristic: when people tend to think of shapes, or buildings, as being more symmetrical than they really are.
- Rotation heuristic: when a person takes a naturally (realistically) distorted image and straightens it out for their mental image.
- Alignment heuristic: similar to the previous, where people align objects mentally to make them straighter than they really are.
- Relative-position heuristic: people do not accurately distance landmarks in their mental image based on how well they remember that particular item.

Another method of creating cognitive maps is by means of auditory intake based on verbal descriptions. Using the mapping based on a person's visual intake, another person can create a mental image, such as directions to a certain location.

"Heuristic device" is used when an entity X exists to enable understanding of, or knowledge concerning, some other entity Y. A good example is a model that, as it is never identical with what it models, is a heuristic device to enable understanding of what it models. Stories, metaphors, etc., can also be termed heuristic in that sense. A classic example is the notion of utopia as described in Plato's best-known work, The Republic. This means that the "ideal city" as depicted in The Republic is not given as something to be pursued, or to present an orientation-point for development; rather, it shows how things would have to be connected, and how one thing would lead to another (often with highly problematic results), if one opted for certain principles and carried them through rigorously. "Heuristic" is also often used as a noun to describe a rule-of-thumb, procedure, or method. Philosophers of science have emphasized the importance of heuristics in creative thought and the construction of scientific theories. (See The Logic of Scientific Discovery by Karl Popper; and philosophers such as Imre Lakatos, Lindley Darden, William C. Wimsatt, and others.)

In legal theory, especially in the theory of law and economics, heuristics are used in the law when case-by-case analysis would be impractical, insofar as "practicality" is defined by the interests of a governing body. The present securities regulation regime largely assumes that all investors act as perfectly rational persons. In truth, actual investors face cognitive limitations from biases, heuristics, and framing effects. For instance, in all states in the United States the legal drinking age for unsupervised persons is 21 years, because it is argued that people need to be mature enough to make decisions involving the risks of alcohol consumption. However, assuming people mature at different rates, the specific age of 21 would be too late for some and too early for others. In this case, the somewhat arbitrary deadline is used because it is impossible or impractical to tell whether an individual is sufficiently mature for society to trust them with that kind of responsibility. Some proposed changes, however, have included the completion of an alcohol education course rather than the attainment of 21 years of age as the criterion for legal alcohol possession. This would put youth alcohol policy more on a case-by-case basis and less on a heuristic one, since the completion of such a course would presumably be voluntary and not uniform across the population. The same reasoning applies to patent law. Patents are justified on the grounds that inventors must be protected so they have incentive to invent. It is therefore argued that it is in society's best interest that inventors receive a temporary government-granted monopoly on their idea, so that they can recoup investment costs and make economic profit for a limited period.
In the United States, the length of this temporary monopoly is 20 years from the date the patent application was filed, though the monopoly does not actually begin until the application has matured into a patent. However, like the drinking-age problem above, the specific length of time would need to be different for every product to be efficient. A 20-year term is used because it is difficult to tell what the number should be for any individual patent. More recently, some, including University of North Dakota law professor Eric E. Johnson, have argued that patents in different kinds of industries – such as software patents – should be protected for different lengths of time.

Stereotyping is a type of heuristic that people use to form opinions or make judgments about things they have never seen or experienced. Stereotypes work as a mental shortcut to assess everything from the social status of a person (based on their actions) to whether a plant is a tree (based on the assumption that it is tall, has a trunk, and has leaves, even though the person making the evaluation might never have seen that particular type of tree before). Stereotypes, as first described by journalist Walter Lippmann in his book Public Opinion (1922), are the pictures we have in our heads that are built around experiences as well as what we are told about the world.

A heuristic can be used in artificial intelligence systems while searching a solution space. The heuristic is derived by using some function that is put into the system by the designer, or by adjusting the weight of branches based on how likely each branch is to lead to a goal node.

References
- Myers, David G. (2010). Social Psychology (10th ed.). New York, NY. ISBN 9780073370668. OCLC 667213323.
- "Heuristics - Explanation and examples". Conceptually. Retrieved 2019-10-23.
- Pearl, Judea (1983). Heuristics: Intelligent Search Strategies for Computer Problem Solving. New York: Addison-Wesley. p. vii. ISBN 978-0-201-05594-8.
- Ippoliti, Emiliano (2015). Heuristic Reasoning: Studies in Applied Philosophy, Epistemology and Rational Ethics. Switzerland: Springer International Publishing. pp. 1–2. ISBN 978-3-319-09159-4.
- "The Definitive Glossary of Higher Mathematical Jargon — Heuristics". Math Vault. 2019-08-01. Retrieved 2019-10-23.
- Pólya, George (1945). How to Solve It: A New Aspect of Mathematical Method. Princeton, NJ: Princeton University Press. ISBN 0-691-02356-5, ISBN 0-691-08097-6.
- Gigerenzer, Gerd (1991). "How to Make Cognitive Illusions Disappear: Beyond "Heuristics and Biases"" (PDF). European Review of Social Psychology. 2: 83–115. CiteSeerX 10.1.1.336.9826. doi:10.1080/14792779143000033. Retrieved 14 October 2012.
- Kahneman, Daniel; Tversky, Amos; Slovic, Paul, eds. (1982). Judgment under Uncertainty: Heuristics & Biases. Cambridge, UK: Cambridge University Press. ISBN 0-521-28414-7.
- "Heuristics and heuristic evaluation". Interaction-design.org. Retrieved 2013-09-01.
- Groner, Rudolf; Groner, Marina; Bischof, Walter F. (1983). Methods of Heuristics. Hillsdale, NJ: Lawrence Erlbaum.
- Groner, Rudolf; Groner, Marina (1991). Heuristische versus algorithmische Orientierung als Dimension des individuellen kognitiven Stils [Heuristic versus algorithmic orientation as a dimension of individual cognitive style]. In K. Grawe, N. Semmer, R. Hänni (eds.), Über die richtige Art, Psychologie zu betreiben. Göttingen: Hogrefe.
- Gigerenzer, Gerd; Todd, Peter M.; the ABC Research Group (1999). Simple Heuristics That Make Us Smart.
Oxford, UK: Oxford University Press. ISBN 0-19-514381-7.
- Gigerenzer, Gerd; Selten, Reinhard, eds. (2002). Bounded Rationality: The Adaptive Toolbox. Cambridge, Massachusetts: MIT Press. ISBN 978-0262571647.
- Gigerenzer, Gerd; Hertwig, Ralph; Pachur, Thorsten (2011-04-15). Heuristics: The Foundations of Adaptive Behavior. Oxford University Press. doi:10.1093/acprof:oso/9780199744282.001.0001. hdl:11858/00-001M-0000-0024-F172-8. ISBN 9780199894727.
- Gigerenzer, Gerd; Gaissmaier, Wolfgang (January 2011). "Heuristic Decision Making". Annual Review of Psychology. 62: 451–482. doi:10.1146/annurev-psych-120709-145346. hdl:11858/00-001M-0000-0024-F16D-5. PMID 21126183. SSRN 1722019.
- De Neys, Wim (2008-10-18). "Cognitive experiential self theory - Psychlopedia". Perspectives on Psychological Science. 7 (1): 28–38. doi:10.1177/1745691611429354. PMID 26168420. Retrieved 2013-09-01.
- Epstein, S.; Pacini, R.; Denes-Raj, V.; Heier, H. (1996). "Individual differences in intuitive-experiential and analytical-rational thinking styles". Journal of Personality and Social Psychology. 71 (2): 390–405. doi:10.1037/0022-3514.71.2.390. PMID 8765488.
- Kahneman, Daniel; Frederick, Shane (2002). "Representativeness Revisited: Attribute Substitution in Intuitive Judgment". In Thomas Gilovich; Dale Griffin; Daniel Kahneman (eds.). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press. pp. 49–81. ISBN 978-0-521-79679-8. OCLC 47364085.
- Kahneman, Daniel (December 2003). "Maps of Bounded Rationality: Psychology for Behavioral Economics" (PDF). American Economic Review. 93 (5): 1449–1475. CiteSeerX 10.1.1.194.6554. doi:10.1257/000282803322655392. ISSN 0002-8282.
- Cioffi, Jane (1997). "Heuristics, servants to intuition, in clinical decision making". Journal of Advanced Nursing. 26: 203–208. doi:10.1046/j.1365-2648.1997.1997026203.x. PMID 9231296.
- Smith, H. (1999). "Use of the anchoring and adjustment heuristic by children". Current Psychology. 18 (3): 294–300. doi:10.1007/s12144-999-1004-4.
- Harvey, N. (2007). "Use of heuristics: Insights from forecasting research". Thinking & Reasoning. 13 (1): 5–24. doi:10.1080/13546780600872502.
- Sternberg, Robert J.; Sternberg, Karin (2012). Cognitive Psychology (6th ed.). Belmont, CA: Wadsworth, Cengage Learning. pp. 310–315. ISBN 978-1-111-34476-4.
- Jaszczolt, K. M. (2006). "Defaults in Semantics and Pragmatics". The Stanford Encyclopedia of Philosophy. ISSN 1095-5054.
- Frigg, Roman; Hartmann, Stephan (2006). "Models in Science". The Stanford Encyclopedia of Philosophy. ISSN 1095-5054.
- Kiss, Olga (2006). "Heuristic, Methodology or Logic of Discovery? Lakatos on Patterns of Thinking". Perspectives on Science. 14 (3): 302–317. doi:10.1162/posc.2006.14.3.302.
- Gigerenzer, Gerd; Engel, Christoph, eds. (2007). Heuristics and the Law. Cambridge: The MIT Press. ISBN 978-0-262-07275-5.
- Johnson, Eric E. (2006). "Calibrating Patent Lifetimes" (PDF). Santa Clara Computer & High Technology Law Journal. 22: 269–314.
- Bodenhausen, Galen V.; et al. (1999). "On the Dialectics of Discrimination: Dual Processes in Social Stereotyping". In Shelly Chaiken; Yaacov Trope (eds.), Dual-Process Theories in Social Psychology. New York: Guilford Press. pp. 271–92. ISBN 978-1572304215. Retrieved 24 March 2015.
- Kleg, Milton (1993). Hate Prejudice and Racism. Albany: State University of New York Press. p. 135. ISBN 978-0791415368. Retrieved 24 March 2015.
- Gökçen, Sinan. "Pictures in Our Heads". European Roma Rights Centre.
Retrieved 24 March 2015.
- Gilovich, Thomas; Griffin, Dale; Kahneman, Daniel, eds. (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press. pp. 8–9.
- Michalewicz, Zbigniew; Fogel, David B. (2000). How To Solve It: Modern Heuristics. Springer Verlag. ISBN 3-540-66061-5.
- Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.
- Diaconis, Persi (2002-12-11). The Problem of Thinking Too Much.
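To make the idea of a formal, testable heuristic concrete, here is the minimal Python sketch of a take-the-best-style rule promised above. The cue names, validities and city data are invented for illustration and are not drawn from the studies cited in the reference list.

```python
# Minimal sketch of a take-the-best-style heuristic (illustrative only).
# Cues are examined in order of validity; the first cue that discriminates
# between the two options decides, and all remaining cues are ignored.

def take_the_best(option_a, option_b, cues):
    """Return the option predicted to score higher, or None if no cue discriminates.

    option_a, option_b: dicts mapping cue name -> 0/1 cue value.
    cues: list of (cue_name, validity) pairs.
    """
    for name, _validity in sorted(cues, key=lambda c: c[1], reverse=True):
        a, b = option_a.get(name, 0), option_b.get(name, 0)
        if a != b:                      # first discriminating cue decides
            return option_a if a > b else option_b
    return None                         # no cue discriminates: guess

# Hypothetical example: which of two cities is larger?
cues = [("has_airport", 0.9), ("is_capital", 0.8), ("has_university", 0.6)]
city_x = {"name": "X", "has_airport": 1, "is_capital": 0, "has_university": 1}
city_y = {"name": "Y", "has_airport": 1, "is_capital": 1, "has_university": 0}

winner = take_the_best(city_x, city_y, cues)
print(winner["name"])  # "Y": the airport cue ties, so the capital cue decides
```

Note how frugal the rule is: once the capital cue discriminates, the university cue is never consulted at all.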
Algebra II is a branch of mathematics that builds upon the concepts introduced in Algebra I. It covers a broad range of topics including polynomials, functions, matrices, logarithms, and trigonometry. It is an essential course for anyone planning to pursue a career in science, technology, engineering, or mathematics (STEM) fields.

In Algebra II, students learn how to solve equations that involve variables, exponents, and logarithms. They also learn how to graph various functions, including polynomial, rational, and exponential functions. The study of matrices and determinants is also introduced in Algebra II, which plays a crucial role in various applications such as solving linear equations and finding the eigenvalues of a matrix.

One of the essential topics in Algebra II is polynomial functions. A polynomial is a mathematical expression consisting of variables and coefficients, combined using the operations of addition, subtraction, multiplication, and exponentiation by non-negative integer powers. In Algebra II, students learn how to factor polynomials, find their zeros, and graph them. They also learn about the fundamental theorem of algebra, which states that every polynomial of degree n ≥ 1 has exactly n complex roots, counted with multiplicity.

Another critical topic in Algebra II is logarithms. The logarithm of a number is the exponent to which a given base must be raised to produce that number. Logarithms are used extensively in science and engineering to represent quantities that vary exponentially. In Algebra II, students learn how to evaluate logarithmic expressions, solve logarithmic equations, and use logarithmic functions to model real-world phenomena.

Trigonometry is also an important topic in Algebra II. Trigonometry deals with the study of triangles and their properties. It includes functions such as sine, cosine, and tangent, which are used to calculate the relationships between the sides and angles of a triangle. In Algebra II, students learn how to use trigonometry to solve problems involving angles, triangles, and circles.

Matrices and determinants are also introduced in Algebra II. Matrices are arrays of numbers or symbols arranged in rows and columns. They are used to represent linear equations and transformations in space. Determinants are used to calculate the area or volume of a parallelogram or parallelepiped, respectively. In Algebra II, students learn how to perform basic operations on matrices, such as addition, subtraction, multiplication, and finding the inverse of a matrix.

Algebra II is a critical course for students planning to pursue a career in STEM fields. It provides them with the necessary foundation to understand the fundamental principles of mathematics and its applications in real-world problems. For example, a computer programmer needs to understand how to write code that involves variables, functions, and matrices. A physicist needs to understand how to solve equations involving variables and exponents. A biologist needs to understand how to use logarithmic functions to model the growth of populations.

Moreover, the problem-solving skills developed in Algebra II are transferable to other areas of study and professions. Students learn how to approach problems systematically, break them down into smaller parts, and apply mathematical concepts to solve them. These skills are valuable in any field that requires critical thinking, analytical reasoning, and problem-solving. In conclusion, Algebra II is an essential course for anyone planning to pursue a career in STEM fields.
It covers a broad range of topics, including polynomials, functions, matrices, logarithms, and trigonometry. The skills developed in Algebra II, such as problem-solving, critical thinking, and analytical reasoning, are transferable to other areas of study and professions. Algebra II provides students with the necessary foundation to understand the fundamental principles of mathematics and its applications in real-world problems.
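To make a few of these topics concrete, here is a small illustrative Python sketch using the NumPy library; the particular polynomial, logarithm and linear system are invented classroom-style examples, not taken from any specific curriculum.

```python
import numpy as np

# Polynomials: x**2 - 5x + 6 has degree 2, so the fundamental theorem of
# algebra guarantees exactly 2 complex roots (here both happen to be real).
print(np.roots([1, -5, 6]))       # [3. 2.]

# Logarithms: log base 2 of 8 is the exponent 2 must be raised to, to get 8.
print(np.log(8) / np.log(2))      # 3.0

# Matrices: solve the linear system 2x + y = 5, x - y = 1.
A = np.array([[2, 1], [1, -1]])
b = np.array([5, 1])
print(np.linalg.solve(A, b))      # [2. 1.]

# The determinant of A is nonzero, so A is invertible and the solution unique.
print(np.linalg.det(A))           # -3.0
```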
Weather on the Earth is driven by multiple factors, including thermal energy from within the Earth's core and from the sun. Certain areas of the Earth are known for specific weather patterns that occur as a result of these factors. One area that scientists, geologists and meteorologists study frequently is the Intertropical Convergence Zone, which is a band near the equator where the southern and northern trade winds meet.

Low Air Pressure

In the Intertropical Convergence Zone, the northern and southern trade winds come together. Because of the rotation of the Earth, the winds cannot really cross the equator without losing energy. Instead of continuing over the Earth horizontally, the winds thus move vertically toward the upper atmosphere. The heating of the Earth's ocean currents by the sun assists in this process, making the air warmer and letting it rise. The result is that the Intertropical Convergence Zone has low air pressure near the Earth's surface. The lack of horizontal wind movement in the region caused sailors to nickname the Intertropical Convergence Zone "the doldrums."

The frequent rising of air in the Intertropical Convergence Zone means that moisture constantly is being brought high enough in the atmosphere to a point cool enough to allow the moisture to condense into clouds. The Intertropical Convergence Zone therefore can see incredible amounts of precipitation and high humidity. Although some areas of the zone do have a dry season, others do not. Afternoon showers are a feature of the zone.

Rainfall in the Intertropical Convergence Zone typically is not gentle rainfall that lasts for long periods. Instead, the high amounts of energy from thermal and solar heating cause moisture to condense quickly into clouds in the hottest part of the day. Circular typhoons thus often form as the air currents move. Some of the strongest winds on the Earth have been recorded in these storms. Thunderstorms with heavy lightning also are common.

Intertropical Convergence Zone Location

The Intertropical Convergence Zone is characterized by inconsistent location around the equator. As the Earth moves with the seasons, the area which receives the highest amount of heat energy from the sun varies. The thermal equator around which the Intertropical Convergence Zone forms thus moves, depending on the season. In some cases, this shift can result in the complete reversal of normal trade wind patterns, particularly in the Indian Ocean.

Impact of the Intertropical Convergence Zone

The characteristics of the Intertropical Convergence Zone have an enormous impact on weather all around the globe. Shifting of wind patterns in the Intertropical Convergence Zone can move thermal energy and moisture to different parts of the Earth than usual and can slow or even stop ocean currents. This affects all plant and animal life either directly or indirectly, since ecosystems are dependent largely on weather patterns and temperature.
Meter Stick Math Lab

Take a meter stick. Create a right-angled triangle anywhere in the room where a right angle is present, using the meter stick as the hypotenuse. Measure the two angles and both shorter sides. Recreate your triangle on the applet 'Triangles that have hypotenuse 1' below to see how accurate your measurements are!

As we know, a triangle has three sides and three angles. On a right-angled triangle, it is given that one angle is 90°. As long as we are given two sides on a right-angled triangle, or one side and one angle, we can calculate the remaining angle(s) and sides using values from the 'triangle with hypotenuse 1'.

Triangles that have hypotenuse length 1

On this applet, the hypotenuse has been set to 1.
(a) What angle makes the red and green side equal in length?
(b) What angle makes the adjacent side (red) 0.5?
(c) What is the relationship between the lengths of the red side, the green side and the hypotenuse?
(d) What does a very large angle do to the red side?
(e) What does a very large angle do to the green side?
(f) When the angle is 20°, the green side is 0.342; the red side is 0.94. At what angle do they swap, that is, the red is 0.342 and the green is 0.94?
(g) What does Pythagoras' Theorem say about the length of the red side, the green side and the hypotenuse? Try it out for any angle.

Create right-angled triangles using one angle and the hypotenuse

Use this applet to create triangles that have the following:

Sines, Cosines, Tangents

The green and red sides of the right-angled triangle with hypotenuse 1 are available on any scientific calculator. On the applet above (hypotenuse 1), change the angle to 24°. On a scientific calculator, ensure the screen says 'DEG' for degree mode, and type in sin(24°), cos(24°) and tan(24°). You may notice that the green length, which is opposite the marked angle, is the sine of the marked angle (rounded to 2 d.p.). The red, adjacent length is the cosine. The tangent? That is the opposite side divided by the adjacent side (green over red). The tangent makes more sense when seen on a circle, as it is a length marked on the tangent of the circle at the angle of reference.

Every other right-angled triangle is simply an enlargement of the right-angled triangle that has hypotenuse = 1. Suppose we have a right-angled triangle with hypotenuse = 8 and given angle = 24°. Type in 8 × sin(24°) and 8 × cos(24°) on your scientific calculator. Verify your answers with the first applet above – change the hypotenuse to 8 and the angle to 24.

On a right-angled triangle with hypotenuse length 1, the sine of an angle is the length of the side opposite that angle; the cosine the adjacent; the tangent the division of opposite over adjacent. For enlargements, the scale factor is the length of the hypotenuse.

Application (Skill #2)

Find the sine and cosine of the angle in the triangle, either using the first applet or a calculator. Multiply by the length of the hypotenuse to find the lengths of the shorter sides. A code sketch of this calculation follows below.
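The relationships described in this lab are easy to check outside the applet. Here is a short Python sketch using only the standard math module; the 24° angle and hypotenuse of 8 are the worked example above.

```python
import math

# The "triangle with hypotenuse 1": for a marked angle A (in degrees),
# the side opposite A has length sin(A) and the adjacent side has length cos(A).
A = math.radians(24)
opposite = math.sin(A)                 # the green side, ~0.41
adjacent = math.cos(A)                 # the red side,  ~0.91
print(round(opposite, 2), round(adjacent, 2))

# Pythagoras holds for every angle: sin^2 + cos^2 = 1.
print(opposite**2 + adjacent**2)       # 1.0 (up to floating-point rounding)

# Every other right-angled triangle is an enlargement: scale by the hypotenuse.
hyp = 8
print(hyp * math.sin(A), hyp * math.cos(A))  # shorter sides of the 24°, hypotenuse-8 triangle
```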
Introduction to the linear growth equation: Linear growth is growth in which the dependent variable changes in direct proportion to the independent variable. For example, let y = mx + b where m > 0. When x increases by 1, x becomes x + 1. Since y = f(x) = mx + b, we have f(x+1) = m(x+1) + b. So the increase in y is f(x+1) − f(x) = m(x+1) + b − (mx + b) = mx + m + b − mx − b = m, and increase of y / increase of x = m/1 = m. Since m > 0, y increases by m for each unit increase of x. Hence a linear growth equation is defined as y = mx + b for all positive values of m; the slope of the line is positive, as m is positive. Examples are: y = x, y = 2x + 1, y = x/2 + 1, y = 3x − 2. In these examples of linear relations the slope is positive, so we get `dy/dx` > 0. In the language of functions, we know that whenever the first derivative is positive, the function is an increasing function. This is an additional proof of the increase in the value of y for an increase in the value of x. Since the growth is linear, y is related to x by a linear relation; in other words, the graph of the function is always a straight line with positive slope. The example below guides us in identifying a linear growth equation.

Find whether the following is a linear growth:
1. y = x − 3. This is the equation of a straight line with slope 1 > 0, so it is a linear growth equation.
2. y = −2x + 5. This is the equation of a straight line with slope −2 < 0, so though linear, it is not a growth.
3. y = x² + 2x + 3. This is not a linear equation, hence not a linear growth equation.

Other forms of the linear relation equation: Linear growth equations can also take the other forms given below.

Standard form: ax + by + c = 0. We find the slope by differentiating: a + b `dy/dx` = 0, so `dy/dx` = -`(a)/(b)`. This value is positive if a and b have different signs: if a > 0 and b < 0, then a/b is negative and thus -a/b is positive, so the equation is a linear growth. We conclude that ax + by + c = 0, where a and b have different signs, is a linear growth equation.

Intercept form: In intercept form we have the equation `(x)/(a)` + `(y)/(b)` = 1. Here the coefficient of x is 1/a and the coefficient of y is 1/b. So, by the previous deduction, if a and b have different signs, this equation is a linear growth equation.

[Diagram: graphs of straight lines with positive slope, illustrating the linear growth equation]

Problems on Linear Growth Equations

We solve some problems here on the linear growth equation.

Prob 1: Find whether the equation of the line passing through (2,3) and (5,9) is a linear growth equation or not.
Sol: We use the slope formula for the line passing through two points. Slope of the line = `(y2-y1)/(x2-x1)` = `(9-3)/(5-2)` = 2 > 0. Since the slope > 0, this is a linear growth equation.

Prob 2: Find a if the equation represented by ax + 2y + 3 = 0 is a linear growth equation.
Sol: To determine whether the equation is a linear growth one or not, we find the slope. Differentiating this equation with respect to x, a + 2 `dy/dx` = 0, so `dy/dx` = -`(a)/(2)`, which is > 0 if a < 0. So the solution for a is (-`oo`, 0).

Prob 3: Find the linear growth equation between x and y if the linear growth rate is 2 and the initial dependent variable is 2.
Sol: Here we have m = 2 and b = 2 = y intercept (the initial dependent variable), so y = 2x + 2 is the equation.

Prob 4: In a business, variable cost = 5 per unit, sales = 10 per unit, and fixed cost = 3000. Find the linear growth equation for costs and sales.
Sol: Let C be the total cost and S the total sales. Then we have S = C + P. So if x is the production in number of units, sales = 10x, variable cost = 5x, fixed cost = 3000.
Prob 4: In a business, the variable cost is 5 per unit, sales are 10 per unit, and the fixed cost is 3000. Find the linear growth equation for costs and sales.
Sol: Let C be the total cost and S the total sales; then S = C + P, where P is the profit. If x is the production in number of units, then sales S = 10x, variable cost = 5x, and fixed cost = 3000. So we have S = 10x = 5x + 3000 + P, or P = 5x − 3000, where P is the profit and x the number of units produced. Since the coefficient of x is 5 > 0, profit grows linearly with production.

Prob 5: Find the velocity of a vehicle at t seconds if the initial velocity is u and the acceleration is a.
Sol: Velocity at time t is v = initial velocity + acceleration × time, or v = u + at, where u is a constant. Thus v and t are linearly related, and v = u + at is a linear growth equation for all a > 0.

Conclusion: In this article we examined the linear growth equation. It has many applications in physics, such as the acceleration-velocity relation, and is also used to find the break-even point in business and to compare business plans in times of competition.
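To connect Prob 4 with the break-even point mentioned in the conclusion, here is a small illustrative sketch (the function name and the sample production figures are our own):

def profit(units, price=10, unit_cost=5, fixed_cost=3000):
    # P = (price - unit_cost) * x - fixed_cost, as derived in Prob 4.
    return (price - unit_cost) * units - fixed_cost

break_even = 3000 / (10 - 5)    # profit is zero where 5x = 3000
print(break_even)               # 600.0 units
print(profit(600))              # 0
print(profit(700))              # 500: past break-even, profit grows by 5 per unit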
Superscript and Subscript in Microsoft Word

A superscript or subscript is a number, figure, symbol, or indicator that is smaller than the normal line of type and is set slightly above it (superscript) or below it (subscript). Superscript text appears half a character above the normal line and is sometimes rendered in a smaller font; it is typically used for exponents in mathematics (E=mc2, with the 2 raised), scientific notation (1,000,000 written as 1×106, with the 6 raised), ordinal suffixes (the 'st' in 1st), trademark and copyright symbols, the size and charge of an ion, and the reference numbers of footnotes, endnotes, and citations. Subscript text appears slightly below the regular text line and is most familiar from chemical formulas, which use subscripts to denote the structures of substances: H2O is written with the 2 lowered. In short, we use these characters whenever we write math equations, chemical equations, citations, or footnotes. Word offers several ways to produce them; use the one most suitable to your liking.

Method 1: the Ribbon. Select the text you want to format with the cursor; Word highlights it. Go to the Home tab on the top menu bar and, in the Font group, click the Superscript icon (X2 with the 2 raised) or the Subscript icon (X2 with the 2 lowered). Word will change the selected characters to a smaller size and position them above or below the regular text line. You can also click the icon first to start superscript (or subscript) mode, type until you have entered all the characters you want, and click the icon again to exit the mode; from then on, whatever you type, Word will display in regular size and in line. Repeat these steps at any other place in the sentence where you want raised or lowered characters to appear. The neighbouring Strikethrough button works the same way if you want to strike through text.

Method 2: keyboard shortcuts. For superscript, press Ctrl, Shift, and the Equal sign (=) at the same time. For subscript, press Ctrl and the Equal sign (do not press Shift). Note that this is the key to the left of the Enter key on the standard keyboard, not the one on the numeric keypad. To undo superscript or subscript formatting, select your text and press Ctrl+Spacebar. These shortcuts work in Word and PowerPoint, so you can quickly create (or remove) superscripts on the fly.

Method 3: the Font dialog box. Select the text, then click the small arrow on the bottom right-hand side of the Font group on the Home tab (the arrow points diagonally to the bottom right). In the Effects section of the Font dialog box, tick the Superscript or Subscript checkbox and click OK. The two settings are mutually exclusive: applying one clears the other. PowerPoint and Outlook offer the same options, along with an Offset percentage (enter a higher percentage for superscript, a lower one for subscript).

Method 4: insert ready-made characters. On the Insert tab, click Symbol (then More Symbols if your app isn't full screen), choose (normal text) from the Font drop-down list, select Superscripts and Subscripts in the Subset drop-down list, pick the symbol you want, press Insert, and then Close. Alternatively, type a character's hexadecimal code and then press the Alt and X keys together, or hold Alt and type a decimal code on the numeric keypad (Alt+8308 makes a superscript four, ⁴); this requires a font that already has superscript characters in it. Because such characters are ordinary text rather than formatting, you can copy them and paste them into programs that have no superscript feature of their own, such as Datawrapper.

Method 5: Find and Replace. If the same notation occurs many times (m2 for square metres, say), format one instance correctly first. Then go to the Home tab, click Replace in the Editing group, enter "m2" in the 'Find what' box of the Find and Replace dialog box, put the correctly formatted version in the 'Replace with' box, and hit Replace All.

What Word does automatically. Type 1st, for example, and normally it will automatically be converted to 1st with the 'st' appearing in superscript; this behaviour can be switched off with the option to turn off superscripting of ordinal numbers. Certain symbols that are almost always superscript, such as ® and ™, are formatted as superscript as soon as they are inserted. When you insert a footnote, Word adds a small superscript number where you placed the insertion point and sets the note at the bottom of the page beneath a short horizontal line; each time you add a footnote on the page, another number is added to the list. Many numeric citation styles likewise cite references in the text with superscript numerals, and each style includes an explanation of its system, just like reference examples.

Superscript and subscript in Excel. Select the characters you want to change in the formula bar, press the Ctrl key + number 1 key to open the Format Cells window, tick Superscript or Subscript under Effects, and click OK. The superscript characters are displayed in the cell, but not in the formula bar. For numbers you can instead insert Unicode characters: entering =CHAR(178) in a cell produces the superscript two, ².

Superscript and subscript in macros. Word's object model exposes the same switches: a range's Font.Superscript and Font.Subscript properties are Booleans, and setting the Superscript property to True sets the Subscript property to False, and vice versa. This VBA example inserts text at the beginning of the active document and formats two characters in the fourth word as superscript:

Set myRange = ActiveDocument.Range(Start:=0, End:=0)
myRange.InsertAfter "Superscript in the 4th word."
ActiveDocument.Range(Start:=20, End:=22).Font.Superscript = True

A similar one-liner formats the current selection, and Excel VBA can likewise make, say, the last character in cell A1 a superscript character.

Superscript and subscript elsewhere. In HTML, the <sup> tag defines superscript text and the <sub> tag defines subscript text, so H<sub>2</sub>O renders with a lowered 2, and superscript markers suit footnote references. In LaTeX, superscripts and subscripts are written with the ^ and _ symbols (as in x^2 or H_2O); in an integral, _ sets the lower bound, ^ sets the upper bound, and the command \limits changes the way the limits are displayed.
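Outside of VBA, the same run-level formatting can be scripted. Here is a minimal sketch using the third-party python-docx package (pip install python-docx); the output file name is our own invention, but font.superscript and font.subscript are the package's real run properties:

# Build a document containing one superscript and one subscript example.
from docx import Document

doc = Document()
p = doc.add_paragraph()
p.add_run("E=mc")
p.add_run("2").font.superscript = True    # raise the exponent above the line
p.add_run(" and H")
p.add_run("2").font.subscript = True      # lower the 2 in the chemical formula
p.add_run("O")
doc.save("super_sub_demo.docx")           # hypothetical output file name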
This page is about Math Dictionary R.

Radian: The angle made by taking the radius and wrapping it along the edge of the circle. One radian is 180/π degrees, or about 57.296°.
Radical: The root of a quantity as indicated by the radical sign.
Radius: A line segment drawn from the center of a circle to a point on the circle.
Random: Without order; not able to be predicted; happening by chance.
Range (statistics): The difference between the lowest and highest values.
Range of a function: The set of all output values of a function.
Rate: A measure of one quantity per unit of another, such as rate = distance traveled / time taken, with units such as m/s or km/h.
Ratio: A ratio shows the relative sizes of two or more values. Ratios can be shown in different ways: using ":" to separate example values, or as a single number by dividing one value by the total.
Rational Expression: An expression that is the ratio of two polynomials.
Rational Number: Any number that can be made by dividing one integer by another. The word comes from "ratio".
Ray: Given any two points A and B, ray AB is the union of segment AB and all points P such that B is between A and P.
Real Number: Positive or negative, large or small, whole numbers or decimal numbers are all real numbers.
Reason: A true statement justifying a step in a proof; the use of logic, examples, etc. to determine a result.
Rectangle: A parallelogram containing one right angle; a quadrilateral with four right angles.
Rectangular coordinates: An ordered pair of real numbers that establishes the location of a point in a coordinate plane using the distances from two perpendicular intersecting lines called the coordinate axes. (See also Cartesian coordinates.)
Rectangular Prism (Cuboid): A solid (3-dimensional) object which has six faces that are rectangles.
Recurring Decimal: A decimal number that has digits that repeat forever.
Reflection: An isometry where, if l is any line and P is any point not on l, then r_l(P) = P′ where l is the perpendicular bisector of segment PP′; and if P ∈ l, then r_l(P) = P.
Reflexive property of equality: A property of real numbers that states a = a.
Reflex angle: A reflex angle is more than 180° but less than 360°.
Regular polygon: A polygon which is both equilateral and equiangular.
Regular pyramid: A pyramid whose base is a regular polygon and whose lateral faces are congruent isosceles triangles.
Remote interior angles: Either interior angle of a triangle that is not adjacent to a given exterior angle of the triangle. Also called non-adjacent interior angles.
Restricted domain: The domain resulting from a restriction placed on a function, based on the context of the problem.
Rhombus: A parallelogram with two adjacent congruent sides; a quadrilateral with four congruent sides.
Right angle: An angle formed by two perpendicular lines, equal to 90°, one quarter of a full revolution.
Right circular cylinder: A cylinder whose bases are circles and whose altitude passes through the center of both bases.
Right circular cone: A cone whose base is a circle and whose altitude passes through the center of its base.
Right pyramid: A pyramid whose lateral faces are isosceles triangles.
Right triangle: A triangle with one right angle.
Roman Numerals: How the ancient Romans used to write numbers; I, V, X, L, C, D, M are the symbols used.
Root: Where a function equals zero.
Rotation: An isometry where, if P is a fixed point in the plane, θ is any angle, and A ≠ P, then R_{P,θ}(A) = A′ where m∠APA′ = θ, and R_{P,θ}(P) = P.
Rotational symmetry: A geometric figure has rotational symmetry if the figure is the image of itself under a rotation about a point through any angle whose measure is not a multiple of 360°. A regular hexagon has rotational symmetry of 60°, 120°, 180°, 240°, and 300°.
Run: How far a line goes along (for a given distance going up). Rise/Run gives you the slope of the line.
An operating system (OS) is a program that manages the computer's resources --- its CPU, primary storage, its input/output devices --- so that the resources can be correctly and fairly used by one or more persons and/or computer programs. The OS is the program that a computer executes when first started, and it is the program that executes when a user program needs help using the computer's devices. The OS executes when there are no other programs to execute. It also executes when something goes wrong. When started, an operating system will initialize the various registers, buffers, and controllers used by the computer. (Please see the lecture on computer architecture to review the concepts of register, buffer, and controller. Indeed, if you have not done it, please study completely the computer-architecture lecture notes.) Here is a partial list of what an OS must manage: The OS also provides the interrupt-handling programs that the processor executes when an input/output device signals an interrupt. The OS must also provide a means by which a program can communicate over the network to another program on another computer. All CPUs have wired into them a tiny start-up program, called an initial program loader (IPL). When the computer is switched on, the CPU immediately starts executing the instructions in its IPL. A typical IPL checks that primary storage, the display, the disk, etc., are operational, and then it looks for other instructions to execute on the disk drive: the IPL loads and executes the instructions that begin at disk address ``track 0, sector 0.'' These instructions from the disk tell the processor to copy the operating system's kernel into primary storage and jump to the first instruction of the kernel. When the kernel starts, it initializes devices, creates buffers, and copies into primary storage helper programs, such as the interrupt handlers for the various storage devices. The kernel uses additional primary storage to help it manage the processor, primary storage, and other devices; the details will be developed in the later sections. After the IPL has successfully started the OS kernel, we find primary storage looking somewhat like this: That is, the OS kernel is loaded into storage, and the buffers and interrupt handler programs described in the lecture on architecture are present as well. (The OS kernel ensures they are placed in storage.) The window manager and other supporting OS programs are present as well, and the remaining free storage space can hold user programs. The OS is now ready to interact with the user. A process (also called a task) is a partly executed program. That is, once a program is copied from disk into primary storage and made eligible to execute, it becomes a process, and while it executes, it remains a process. If it is not finished, but it remains in storage while another process executes, it is still a process. Once it terminates, it is erased from primary storage. Examples of processes are (Note: if you have not done so, it is time to review the material in the computer architecture lectures about interrupt handling.) The CPU executes just one process at a time, and the processes that are ready and waiting for their turn at execution are kept in a queue (a waiting line, like at the post office) in primary storage.
Here is a picture of primary storage where there are five processes: one is executing, three of them are ready and waiting for their turn to execute, and one process is ``blocked'' because it requested a disk read that is underway but not finished: When a program is started, the operating system allocates a segment of primary storage for holding the program's instructions and its data values. The OS also constructs a structure called a process control block (PCB). A PCB remembers the process's name (ID), the instruction where its execution starts, initial values for the CPU registers when the process starts, its priority number (more on this later), and its current state. A newly created process has the state, Ready. Each Ready process is eligible to execute, so the OS places the newly created process's ID at the end of the process queue. The picture shows that three processes are Ready for execution. The OS keeps a table holding all the PCBs, so that the OS knows about all processes in storage. One process is executing, and its state is marked Executing in its PCB. The process that is waiting on the disk to finish work is marked Blocked, and it is not listed in the process queue. While a process executes, the CPU clock is ticking. After some number of ticks (say, about 100 milliseconds' worth), the clock signals the control unit that the executing process's time slice has been completely used --- the clock does this by setting a bit in the interrupt register. When the control unit next checks the interrupt register, it detects the clock interrupt, and it starts the interrupt handler for the clock. The interrupt handler for the clock does a process switch (task switch): Here is a revised picture of storage after the executing process P3 has used all its time slice and has been replaced by the next process, P5, in the process queue: In addition to a clock interrupt, there are two other actions that the OS takes to update the states of processes: Here is the picture that results when the executing process P5 shown above issues a WRITE instruction and must be Blocked: Here is the previous picture revised after process P2's disk READ finishes and the interrupt handler has done its work: It is important to remember that IDs are inserted at the rear of the queue and removed from the front. This arrangement is fair, but it is altered when some processes are ``more important'' than others. Most operating systems assign priority numbers to processes --- higher-priority processes may use the CPU more often than lower-priority ones. For example, non-graphical or ``background'' processes are given lower priorities, whereas user-started, graphical programs are given higher priorities. (The philosophy is that the human user is happier to see progress on the display.) Processes that have been executing for a long time receive lower priorities as time elapses, as do programs that do lots of input/output with secondary-storage devices. The priorities assigned to the processes are saved in the PCBs and are saved with the processes' IDs in the process queue. But the priorities complicate the management of the process queue, because now a Ready process can be inserted at the rear of the queue and then moved forwards in the queue if it has a high priority. The design of the appropriate data structure to implement this so-called priority queue is often studied in a data-structures course. But remember that the user program, once started, is one single process, and it has one PCB.
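As a quick illustration of the process switch just described, here is a minimal Python sketch (the PCB fields and process IDs mirror the pictures above, but the class is our own toy model, not real kernel code):

from collections import deque

class PCB:
    # A toy process control block: just an ID and a state.
    def __init__(self, pid):
        self.pid, self.state = pid, "Ready"

ready_queue = deque(PCB(p) for p in ("P5", "P1", "P4"))   # front ... rear
executing = PCB("P3")
executing.state = "Executing"

def clock_interrupt():
    # Time slice used up: preempt the running process and dispatch the next.
    global executing
    executing.state = "Ready"
    ready_queue.append(executing)        # insert at the rear of the queue
    executing = ready_queue.popleft()    # remove from the front
    executing.state = "Executing"

clock_interrupt()
print(executing.pid)                     # P5, just as in the revised picture
print([p.pid for p in ready_queue])      # ['P1', 'P4', 'P3']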
The process's time slice is shared between its threads, so there is no advantage to writing a program with multiple threads to gain more use of the CPU. Instead, multi-threaded programs are written to provide an elegant solution to a difficult problem. (Example: Implement a ``pipeline'' of two threads that solve a problem of the form, ``for each data item in a sequence, do Step1 then Step2.'' A sketch of such a pipeline appears at the end of this passage.) In Java, you use threads each time you use the javax.swing graphics framework. But you don't see them in your coding --- they are constructed for each window, frame, and dialog you construct. You can start a thread yourself in Java by using new Thread(ob), where ob is an object that implements the Runnable interface. Here are the problems that can arise: When a program is copied from disk storage into primary storage for execution, it is time to convert the variable-name addresses into actual storage-cell addresses. The OS's loader program has the job of copying the instructions from disk to primary storage and inserting the correct storage addresses for the variable names. When a user program is started, a typical operating system will allocate a storage partition large enough to hold all the program's instructions, plus additional space for data values. But the partition size, ultimately, is a guess, and some programs will use all of their partition space to hold data values. (In an object-oriented language, an instruction like new Object(...) uses some of the storage partition for a data value. In a C-like language, malloc(...) does the same. Both of these commands start a helper program in the OS that marks as used some of the unused storage within the program's partition.) When a user program has consumed all its storage, the OS notices and can take two actions: The second solution is the preferred one, and it is based on the observation that all of a program's instructions need not be present in primary storage at the same time. Indeed, we need to keep only the part of the program that is executing now or will execute in the near future. The same viewpoint is true for the cells used to hold data --- not all data is used all the time, and the data not recently used might be copied onto disk storage for later use. So, a process's partition is divided into equal-sized fragments, called pages, and the instructions and data values are addressed relative to the starting point of a page; the loader program makes these adjustments when it loads a page into primary storage. Here is a picture of a process's partition, divided into pages: When the program is first copied (loaded) into storage, perhaps not all of it is copied --- perhaps just its first part, which executes first; the rest is left waiting on the disk. Also, perhaps some of the partition is used for data values: When the program executes, more and more instructions are read, and perhaps all the instructions in the page are executed, and the next instruction resides on the disk and not in storage. This situation is called a page fault, and the OS must find on the disk the page that contains the next instruction. The loader loads that page into unused space in the storage partition, and execution continues: Each time a page from disk is loaded, the loader must update the addresses in the newly loaded page so that they match correctly the addresses in primary storage where the page was loaded.
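Here is the two-thread pipeline promised above, sketched in Python rather than Java (the queue name and the Step1/Step2 bodies are our own illustrations):

import threading, queue

q = queue.Queue()    # hands results from stage 1 to stage 2
DONE = object()      # sentinel marking the end of the data sequence

def stage1(items):
    for x in items:
        q.put(x * 2)    # Step1: double each data item
    q.put(DONE)

def stage2():
    while True:
        x = q.get()
        if x is DONE:
            break
        print(x + 1)    # Step2: add one to what Step1 produced

t1 = threading.Thread(target=stage1, args=([1, 2, 3],))
t2 = threading.Thread(target=stage2)
t1.start(); t2.start()
t1.join(); t2.join()    # prints 3, 5, 7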
Eventually, all the partition's pages fill with instructions and data: Now, if there is a page fault and there is no more free space in the partition, then one of the existing pages must be replaced (``swapped out'') by the needed page: Paging can also let a process construct more and more data values --- when a collection of data values has filled a page and space is needed for more new data values, then an existing page of data can be swapped out (copied to disk) to make room for a page of new data values. Ultimately, a process can grow well beyond its initial storage partition. There is a severe penalty to be paid when a program has saved too many pages on disk and they must be continually swapped --- almost all of the process's time slice is spent on the swapping and almost none is spent on the computation. This is called thrashing. Modern operating systems allocate huge partitions for processes to reduce the possibility of thrashing. Most computers use a memory controller (recall that this is the processor that rests between the system bus and primary storage); the controller can be programmed so that it checks, for each read/write into storage, whether the requested storage address is held within the storage partition owned by the currently executing process. If the answer is no, then the storage reference is not performed, and instead, the memory controller sets a bit in the interrupt register, signalling an address exception. When the CPU's control unit detects the interrupt, it starts an interrupt handler that removes the process from Executing state, terminates it, and starts the next Ready process in the process queue. An error message is constructed and transmitted to whatever output device is used by the erroneous process. To ensure this, for each input/output device, the operating system builds a queue that holds the IDs of the processes that wish to use the device. The queues are necessary because more than one process might wish to use the same device, and a typical device operates so slowly (relative to the speed of the CPU) that it is common for multiple processes to use their time slices and generate requests to use the same device. This means the processes are forced to wait for their turns at using the device. For each device, the OS contains a helper program (called a device driver) that does the physical reads and writes to the device. A user program must start the device driver to use a device; the user cannot use the device directly. (The technical description: the devices are wired so that only processes with executive privilege can start them. The OS has executive privilege; a user program does not.) For example, an executing process might wish to write some data to the disk. The program uses an instruction of the form, WRITE(disk,address,data). This instruction is actually a request to the OS's disk device driver to do a write. The device driver checks the disk to see if another process is already using the disk. If yes, then the executing process's ID is placed into the queue for the disk, and the executing process is Blocked. If no, then the device driver copies the data and its destination address into the disk's address and data buffers and signals the disk's controller to start a write. The executing process is Blocked. When the disk finishes the write, its controller signals the CPU by setting a bit in the interrupt register, and this triggers the actions described earlier in the section on process management.
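The page-fault-and-swap cycle can be simulated in a few lines. Here is a minimal Python sketch (a three-page partition and first-in-first-out replacement are our own simplifying assumptions; real systems use smarter replacement policies):

from collections import deque

FRAMES = 3          # the partition holds only three pages at a time
resident = deque()  # pages currently in primary storage, oldest first

def reference(page):
    # Touch a page; on a page fault, load it, swapping out the oldest if full.
    if page in resident:
        return
    if len(resident) == FRAMES:
        evicted = resident.popleft()    # swap the oldest page out to disk
        print("page fault: swap out", evicted, "load", page)
    else:
        print("page fault: load", page)
    resident.append(page)

for p in ["A", "B", "C", "D", "A", "B"]:
    reference(p)
# Once the partition fills, every new reference here forces a swap; a long
# run of such swaps is the thrashing described above.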
(But now, we also realize that the interrupt handler for the disk not only moves the process from Blocked to Ready, as described earlier, but it also consults the queue for the disk to see if there is another blocked process that is waiting to use the disk. If yes, the disk is restarted for the next process in the disk's queue.) Queues, device drivers, and start-up commands are the basic tools the OS uses for managing all input/output devices. The operating system includes programs that impose a ``filing system'' on computer directories and files so that files on disk can be systematically stored and quickly found. Here are some basic concepts: The operating system uses a tree-like structure to organize the folders and files on disk: The tree is saved on disk in a ``flattened'' form, where the linkages are remembered as disk addresses: Since a disk is divided into a form of pages, called sectors, the sector addresses are used to locate folders and files. Like pages, sectors have fixed size, so if a folder's directory is too large to fit into one sector, or a file is too long to fit into one sector, links to additional sectors are used. Tree structures are a crucial data structure for the OS and many important programs (compilers, databases, learning programs, etc.). File usage follows this sequence of steps: Although disk storage shares concepts with primary storage, there is no notion of ``paging'' that a disk can use when a file is written to the disk and the file grows so large that it does not fit. At best, the disk controller can try to write the large file into several disjoint partitions on the disk and ``chain'' the partitions together. But if a disk fills, it is a disaster. The OS tries to prevent a disk from filling so much that it causes the other processes to stop. Commands like these are part of the OS command language. The command language lets a user program or a person ask the OS for help. The command language can be inserted as instructions inside a user program (say, in a program written in C or Python) or as instructions that a human types at the keyboard. The commands are sometimes known as system calls. Indeed, when you start a command-prompt window on your computer's display, you create a ``connection'' that lets you ``talk'' to the OS in its command language. Almost everyone has used some of the OS's file-management commands in a command window:

dir // list the files in the directory that is opened by the command window
rm filename // delete the file, filename, from the directory
move filename1 filename2 // change the name of (move) filename1 to filename2
...

Within the command window, you can start various OS helper programs, for example, ask the OS to ask the clock for the current time. Or, you can tell the OS to start a user program that you have saved as a file. (This is commonly done by merely stating the program's filename.) The window manager can also be started by an OS command (system call) that requests a window to be (re)painted; a user program uses such system calls to paint its graphical user interface on the display. Indeed, because various processes might paint their windows on the display, the window manager must remember which regions of the display are ``owned'' by which processes. This ``ownership'' becomes important when the mouse is moved and the keyboard or mouse is pressed, because the window manager must direct the information from the keyboard or mouse press to the process that owns the region over which the press occurred.
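To make the tree idea concrete, here is a toy sketch in Python (the folder names and sector numbers are invented for illustration; a real file system stores these links as sector addresses on disk):

# Folders map names to sub-entries; files map to a starting sector address.
fs = {
    "home": {
        "notes.txt": 412,          # file beginning at (invented) sector 412
        "projects": {
            "os_notes.txt": 733,
        },
    },
}

def lookup(path):
    # Walk the tree one path component at a time, as the OS does.
    node = fs
    for part in path.split("/"):
        node = node[part]
    return node

print(lookup("home/projects/os_notes.txt"))   # 733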
For example, perhaps the mouse button is pressed. The mouse hardware sets the appropriate bit in the interrupt register, and the mouse interrupt handler asks the window manager which process ``owns'' the pixel over which the mouse was clicked. That process is then notified that it has received a mouse click as input. (This technique works, assuming that the process where the mouse clicked is waiting to be contacted! If it isn't, then the click is ignored and nothing happens.) The notification is done with a system call, from the process manager to the process awaiting the mouse click. (In Java, the system call is received by the JVM, which constructs a new Event object that it sends to an actionPerformed method.) In a similar way, keyboard input is ``read'' by a process, one key press (interrupt) at a time, with the help of the window manager. The previous section noted that one process might contact another by means of a system call. Indeed, processes might exchange messages using inter-process SEND and RECEIVE operations (which are like READ and WRITE operations, except that a storage device is not involved). For reasons of efficiency, one process might exchange information with another by depositing the information into a disk file and then sending a message to the other process, telling it to look on the disk for the information. This form of communication, by means of a shared resource (here, the disk file), quickly becomes dangerous when information is repeatedly exchanged, because the process that is depositing new information on the disk might be doing so at the same time that another process is retrieving the earlier information from the same place on the disk. This issue arises often within the coding of the programs in the operating system itself. To help ensure correct exchange of information on shared devices, the OS kernel provides system calls for shared use of a device or file. These calls help multiple processes synchronize their actions. A process uses the system calls somewhat like this:

GET_MUTEX(sem)      // requests use of shared resource, where sem is a
                    // cell, called a semaphore, that remembers if the
                    // resource is in use. If it isn't, the execution
                    // proceeds. If it is, this process is Blocked.
...use shared resource...
RELEASE_MUTEX(sem)  // cell sem is reset to remember the resource is free.
                    // If a process is Blocked, waiting on sem, the process
                    // is made Ready to restart.

MUTEX is slang for ``mutual exclusion'' (exclusive use), and sem is slang for ``semaphore,'' after the railway signal that shows whether a stretch of track is in use. Proper use of semaphores is a critical topic of study in operating systems.
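For comparison, here is a minimal sketch of the same GET_MUTEX/RELEASE_MUTEX pattern in Python, using the standard threading module (threads stand in for processes, and a Lock plays the role of the semaphore cell):

import threading

sem = threading.Lock()    # the "semaphore" cell guarding the shared resource
shared_total = 0          # the shared resource: a running total

def deposit():
    global shared_total
    sem.acquire()           # GET_MUTEX: blocks if another thread holds the lock
    try:
        shared_total += 1   # ...use shared resource...
    finally:
        sem.release()       # RELEASE_MUTEX: wakes one blocked waiter, if any

threads = [threading.Thread(target=deposit) for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_total)         # always 100; without the lock, updates could be lost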
Stars are born within clouds of dust and gas scattered throughout most galaxies. A familiar example of such a dust cloud is the Orion Nebula, revealed in vivid detail in the adjacent image, which combines images at visible and infrared wavelengths measured by NASA's Hubble Space Telescope and Spitzer Space Telescope. Turbulence deep within these clouds gives rise to knots with sufficient mass that the gas and dust can begin to collapse under their own gravitational attraction. As the cloud collapses, the material at the center begins to heat up. Known as a protostar, it is this hot core at the heart of the collapsing cloud that will one day become a star. Three-dimensional computer models of star formation predict that the spinning clouds of collapsing gas and dust may break up into two or three blobs; this would explain why the majority of the stars in the Milky Way are paired or in groups of multiple stars. As the cloud collapses, a dense, hot core forms and begins gathering dust and gas. Not all of this material ends up as part of a star - the remaining dust can become planets, asteroids, or comets, or may remain as dust. In some cases, the cloud may not collapse at a steady pace. In January 2004, an amateur astronomer, James McNeil, discovered a small nebula that appeared unexpectedly near the nebula Messier 78, in the constellation of Orion. When observers around the world pointed their instruments at McNeil's Nebula, they found something interesting - its brightness appears to vary. Observations with NASA's Chandra X-ray Observatory provided a likely explanation: the interaction between the young star's magnetic field and the surrounding gas causes episodic increases in brightness.

Main Sequence Stars

A star the size of our Sun requires about 50 million years to mature from the beginning of the collapse to adulthood. Our Sun will stay in this mature phase (on the main sequence, as shown in the Hertzsprung-Russell Diagram) for approximately 10 billion years. Stars are fueled by the nuclear fusion of hydrogen to form helium deep in their interiors. The outflow of energy from the central regions of the star provides the pressure necessary to keep the star from collapsing under its own weight, and the energy by which it shines. As shown in the Hertzsprung-Russell Diagram, main sequence stars span a wide range of luminosities and colors and can be classified according to those characteristics. The smallest stars, known as red dwarfs, may contain as little as 10% of the mass of the Sun and emit only 0.01% as much energy, glowing feebly at temperatures between 3,000 and 4,000 K. Despite their diminutive nature, red dwarfs are by far the most numerous stars in the Universe and have lifespans of tens of billions of years. On the other hand, the most massive stars, known as hypergiants, may be 100 or more times as massive as the Sun and have surface temperatures of more than 30,000 K. Hypergiants emit hundreds of thousands of times more energy than the Sun but have lifetimes of only a few million years. Although extreme stars such as these are believed to have been common in the early Universe, today they are extremely rare - the entire Milky Way galaxy contains only a handful of hypergiants.

Stars and Their Fates

In general, the larger a star, the shorter its life, although all but the most massive stars live for billions of years. When a star has fused all the hydrogen in its core, nuclear reactions cease.
Deprived of the energy production needed to support it, the core begins to collapse into itself and becomes much hotter. Hydrogen is still available outside the core, so hydrogen fusion continues in a shell surrounding the core. The increasingly hot core also pushes the outer layers of the star outward, causing them to expand and cool, transforming the star into a red giant. If the star is sufficiently massive, the collapsing core may become hot enough to support more exotic nuclear reactions that consume helium and produce a variety of heavier elements up to iron. However, such reactions offer only a temporary reprieve. Gradually, the star's internal nuclear fires become increasingly unstable - sometimes burning furiously, other times dying down. These variations cause the star to pulsate and throw off its outer layers, enshrouding itself in a cocoon of gas and dust. What happens next depends on the size of the core.

Average Stars Become White Dwarfs

For average stars like the Sun, the process of ejecting the outer layers continues until the stellar core is exposed. This dead, but still ferociously hot, stellar cinder is called a white dwarf. White dwarfs, which are roughly the size of our Earth despite containing the mass of a star, once puzzled astronomers - why didn't they collapse further? What force supported the mass of the core? Quantum mechanics provided the explanation. Pressure from fast-moving electrons keeps these stars from collapsing. The more massive the core, the denser the white dwarf that is formed. Thus, the smaller a white dwarf is in diameter, the larger it is in mass! These paradoxical stars are very common - our own Sun will be a white dwarf billions of years from now. White dwarfs are intrinsically very faint because they are so small and, lacking a source of energy production, they fade into oblivion as they gradually cool down. This fate awaits only those stars with a mass up to about 1.4 times the mass of our Sun. Above that mass, electron pressure cannot support the core against further collapse. Such stars suffer a different fate, as described below.

White Dwarfs May Become Novae

If a white dwarf forms in a binary or multiple star system, it may experience a more eventful demise as a nova. Nova is Latin for "new" - novae were once thought to be new stars. Today, we understand that they are in fact very old stars - white dwarfs. If a white dwarf is close enough to a companion star, its gravity may drag matter - mostly hydrogen - from the outer layers of that star onto itself, building up its surface layer. When enough hydrogen has accumulated on the surface, a burst of nuclear fusion occurs, causing the white dwarf to brighten substantially and expel the remaining material. Within a few days, the glow subsides and the cycle starts again. Sometimes, particularly massive white dwarfs (those near the 1.4-solar-mass limit mentioned above) may accrete so much mass in this manner that they collapse and explode completely, becoming what is known as a supernova.

Supernovae Leave Behind Neutron Stars or Black Holes

Main sequence stars over eight solar masses are destined to die in a titanic explosion called a supernova. A supernova is not merely a bigger nova. In a nova, only the star's surface explodes. In a supernova, the star's core collapses and then explodes. In massive stars, a complex series of nuclear reactions leads to the production of iron in the core.
Having reached iron, the star has wrung all the energy it can out of nuclear fusion - fusion reactions that form elements heavier than iron actually consume energy rather than produce it. The star no longer has any way to support its own mass, and the iron core collapses. In just a matter of seconds the core shrinks from roughly 5,000 miles across to just a dozen, and the temperature spikes to 100 billion degrees or more. The outer layers of the star initially begin to collapse along with the core, but they rebound with the enormous release of energy and are thrown violently outward. Supernovae release an almost unimaginable amount of energy. For a period of days to weeks, a supernova may outshine an entire galaxy. All the naturally occurring elements and a rich array of subatomic particles are produced in these explosions. On average, a supernova explosion occurs about once every hundred years in a typical galaxy. About 25 to 50 supernovae are discovered each year in other galaxies, but most are too far away to be seen without a telescope. If the collapsing stellar core at the center of a supernova contains between about 1.4 and 3 solar masses, the collapse continues until electrons and protons combine to form neutrons, producing a neutron star. Neutron stars are incredibly dense - similar to the density of an atomic nucleus. Because it contains so much mass packed into such a small volume, the gravitation at the surface of a neutron star is immense. Like the white dwarfs above, if a neutron star forms in a multiple star system, it can accrete gas by stripping it off any nearby companions. The Rossi X-Ray Timing Explorer has captured telltale X-ray emissions of gas swirling just a few miles from the surface of a neutron star. Neutron stars also have powerful magnetic fields, which can accelerate atomic particles around their magnetic poles, producing powerful beams of radiation. Those beams sweep around like massive searchlight beams as the star rotates. If such a beam is oriented so that it periodically points toward the Earth, we observe it as regular pulses of radiation that occur whenever the magnetic pole sweeps past the line of sight. In this case, the neutron star is known as a pulsar.

Black Holes

If the collapsed stellar core is larger than three solar masses, it collapses completely to form a black hole: an object so dense that its gravity prevents anything from escaping its immediate proximity, not even light. Since photons are what our instruments are designed to see, black holes can only be detected indirectly. Indirect observations are possible because the gravitational field of a black hole is so powerful that any nearby material - often the outer layers of a companion star - is caught up and dragged in. As matter spirals into a black hole, it forms a disk that is heated to enormous temperatures, emitting copious quantities of X-rays and gamma rays that betray the presence of the hidden black hole.

From the Remains, New Stars Arise

The dust and debris left behind by novae and supernovae eventually blend with the surrounding interstellar gas and dust, enriching it with the heavy elements and chemical compounds produced during stellar death. Eventually, those materials are recycled, providing the building blocks for a new generation of stars and accompanying planetary systems.
1 Types of angle

Recommended grade: 6.
Object of activity: Differentiation between angles according to type: acute angle, right angle, obtuse angle, and straight angle
Target language: Acute angle, right angle, obtuse angle, straight line / straight angle
Aids: Cards with angle sizes and bag, blackboard
Time allowed: 10 minutes

We familiarize the students with the object of the game, with the rules, and with the expressions used: acute angle, right angle, obtuse angle, straight line, which we write above the four columns on the blackboard. We prepare cards in advance, on which the size of an angle is given in degrees, and we place them in the bag. There must be at least as many cards as there are students, and there should be the same number of cards showing every type of angle, or possibly one extra; e.g. for 22 students we need five complete sets of four, one extra card showing an acute angle, and one extra card showing a right angle. Note: If the number of students is not divisible by four, the incomplete groups also present their missing members. Each student draws a card and represents an angle of the given size. He/she must not disclose the number on the card to anyone. The students walk round the classroom and ask each other, e.g.: Are you an acute angle? Nobody else should hear the questions and answers. The goal is to form groups of four students, where each of them represents a different type of angle. Lastly, each student presents their angle aloud, e.g.: An angle of 162 degrees is an obtuse angle. He/she writes the size of the angle on the blackboard next to the corresponding expression.

Are you an acute/right/obtuse/straight angle? / Jsi ostrý/pravý/tupý/přímý úhel?
An angle of 162 degrees is an obtuse angle. / Úhel 162° je tupý úhel.

2 Reading, writing, and comparing decimal numbers

Recommended grade: 6.
Object of activity: Reading, writing, and comparison of decimal numbers
Target language: Units, tenths, hundredths, thousandths; grading adjectives, small, large
Aids: Worksheets A and B
Time allowed: 15 minutes

Initially, the students are familiarized with the names of the individual decimal places in English (units, tenths, hundredths, thousandths). Students working in pairs receive two versions of the worksheet, version A and version B. Each student reads his/her number to his/her neighbour, and the latter writes it in the first or third column of the table in the worksheet (depending on the version). After they have written down all the missing numbers, they fill in the symbols <, >, = in the centre column and complete the remaining tasks on the worksheet. They compare their results with those of the other pairs.

Version A: Read the decimals from the first column to your neighbour. / Verze A: Přečti desetinná čísla z prvního sloupce svému sousedovi.
Version B: Read the decimals from the third column to your neighbour. / Verze B: Přečti desetinná čísla ze třetího sloupce svému sousedovi.
Write the decimals down in the chart. / Zapište desetinná čísla do tabulky.
Write the signs greater than, less than, equal to. / Doplňte znaménka větší než, menší než, rovná se.
Compare your results with the others. / Své výsledky porovnej s ostatními.

3 Ratio and percentage

Recommended grade: 7.
Object of activity: To write down a ratio as a fraction and to express it as a percentage
Target language: See worksheet; ratio, fractions, percentage
Aids: Worksheets, blackboard
Time allowed: 15 minutes

The students who fill in the ten-question questionnaire first win (part A of the worksheet). We prepare a table on the blackboard with ten rows and three columns.
The first column contains the numbers from the questionnaire, from 1 to 10. When the students have completed the answers, we jointly fill in the number of positive answers in the second column and the number of negative answers in the third column. Using the data from the table, the students complete the remaining tasks B and C on the worksheet. Lastly, we jointly check the results, paying attention to the correct pronunciation of fractions, ratios, and percentages.

Fill in the questionnaire. Tick yes or no. / Vyplňte dotazník. Zaškrtněte ano, nebo ne.
Write down all ratios and fractions. / Zapište všechny poměry a zlomky.
If the number of answers is 27 and the ratio is …, then the fractions are … and …. / Pokud je počet odpovědí 27 a poměr je …, potom zlomky jsou … a ….
What percentage of you …? / Kolik procent z vás …?
the ratio of twelve to fifteen / poměr dvanáct ku patnácti
twelve twenty-sevenths / dvanáct sedmadvacetin
35 % thirty-five percent / 35 % třicet pět procent

4 Map scale

Recommended grade: 7.
Object of activity: Estimation of distance working with a map
Target language: Scale, ratio, distance, map, approximate estimation, in a straight line (air line)
Cross-curricular relationships: Geography, ICT
Aids: Wall map of the Czech Republic, tape measure, worksheet, school atlases, rulers, Internet source, Google Earth program
Time allowed: 45 minutes

The instructor reviews ratio arithmetic: expressing ratios in simplest terms, and increasing and decreasing in a given ratio. We emphasize that we will always be measuring distances during the lesson in a so-called straight line (air line). The students first estimate the distances between cities and record them in the worksheet. Next, they measure the exact distances of the first eight (cities) using their atlases and the last two using the wall map. They again write the data into their worksheet, always including the map scale. They calculate the actual distance using the scale. They check the calculated values using the measure-of-distance feature in the Google Earth program, individually or jointly with the teacher. The students determine the variance between their estimates and the actual distances. They add up the number of kilometres by which their estimates differed from the actual distances. The students may comment: I estimated that the distance between Prague and Liberec was 130 km, but it is 115 km, so I was about 15 km out. Lastly, we announce which students made the most accurate estimates.

Estimate the distances and write them on the worksheet. / Odhadněte vzdálenosti a zapište je do pracovního listu.
Find out the scale of the map and write it down. / Zjistěte měřítko mapy a zapište ho.
Measure the distances on the map, and using the scale, find the real distances. / Změřte vzdálenosti na mapě a pomocí měřítka zjistěte skutečné vzdálenosti.
How different are your estimations from the reality? / Jak moc jsou rozdílné vaše odhady od skutečnosti?
The most accurate estimation. / Nejpřesnější odhad.

5 Fractions, decimals and equivalent percentages

Recommended grade: 7.
Object of activity: Understanding relationships between fractions, decimals, and percentages
Target language: Fraction, decimal, percentage
Aids: Cards with numerical values, data projector, blackboard
Time allowed: 10 minutes

We write FRACTIONS on one piece of large-format paper, DECIMALS on a second, and PERCENTAGE on a third, and post them in the class at an adequate distance from one another. We hand out cards with values to the students.
They divide themselves into three groups according to whether they have a fraction, a decimal number, or a percentage. They stand by the corresponding sign. Each student in the group reads the value on his/her card aloud, correctly. We provide an example on the blackboard of a triad whose values are equal (we can use the first row from the table, or an entirely different example). The students then form triads - a fraction, a decimal number, and a percentage - whose values are equal: Form groups of three so that your fraction, decimal and percentage are all equal. While the students are looking for partners to form a triad, we prepare a table with the correct answers. When the students have finished (a time limit should be set), we display the table using the data projector, or we have it prepared on the blackboard, and we check jointly with the class whether it is correct.

If you've got a fraction on your card, go to the big FRACTION card. / Pokud máte na kartičce zlomek, jděte k velké kartě ZLOMKY.
If you've got a decimal on your card, go to the big DECIMAL card. / Pokud máte na kartičce desetinné číslo, jděte k velké kartě DESETINNÁ ČÍSLA.
If you've got a percentage on your card, go to the big PERCENTAGE card. / Pokud máte na kartičce procento, jděte k velké kartě PROCENTA.
Please, read your number. / Prosím, čtěte své číslo.
These are equal. / Tyto jsou rovnocenné.
Get into groups of three so that your fraction, decimal and percentage are all equal. / Vytvořte skupiny po třech tak, že zlomek, desetinné číslo a procento se rovnají.

Correct answers: a table of matching values with columns Fraction, Decimal, Percentage.

Supplementary activity 1: The students mark fractions, decimal numbers, and percentages on a grid, so that the coloured part corresponds to the given numerical value, the grid being treated as a single whole.

Supplementary activity 2: The students convert the coloured part of the shape they received into fractions (recorded as fractions) as instructed, or they colour a part of the shape according to the specified fraction.

7 Equivalent fractions Bingo

Recommended grade: 7.
Object of activity: Practice of the simplification of fractions using the game Bingo
Target language: Simplify, fraction, equivalent fraction
Aids: Table with simplified fractions, cards with non-simplified fractions, bag, blackboard, or data projector
Time allowed: 10 minutes

We can either magnify the table with the simplified fractions on A3 paper and tape it to the blackboard, rewrite it on the blackboard, or use the data projector. We cut up the non-simplified fractions into individual cards and place them in a bag. We ask the students to sketch a square table on paper. Each of its sides comprises three identical small squares, so the whole table contains nine squares. The students randomly select nine fractions from the table with the simplified fractions on the blackboard and write them into their table: Draw a table of three by three squares, choose nine fractions from the board and write them down in your table. When the students have finished, we explain the next procedure. We show them a bag and tell them that it contains non-simplified fractions: In this bag there are non-simplified fractions. To demonstrate this, we pull out a card, read the fraction, or even show it to them if necessary (depending on the language ability of the given class), and ask them to say how the fraction looks when simplified: What is the simplified form of this fraction?
When the students answer, we ask them if they have this fraction in their table: Is it in your table? If so, they cross it out: If so, cross it out. If not, they do nothing (classic Bingo). The student who crosses out all of the fractions in his/her table first shouts Bingo! Lastly, we jointly go through all of the answers.

Draw a table of three by three squares, choose nine fractions from the board and write them down in your table. / Načrtněte si tabulku tři krát tři čtverečky, vyberte si devět zlomků z tabule a napište je do tabulky.
In this bag there are non-simplified fractions. / V tomto sáčku jsou nezkrácené zlomky.
What's the simplified form of this fraction? / Jaká je zkrácená podoba tohoto zlomku?
Is it in your table? / Je ve tvé tabulce?
If so, cross it out. / Pokud ano, přeškrtněte ho.
When you cross out all the fractions in your table, call out Bingo. / Až přeškrtáte všechny zlomky ve své tabulce, vykřikněte Bingo.

8 Table with simplified fractions: ¾, ¼, … Table with non-simplified fractions and correct answers: 3/4, …

9 Fractions in everyday life

Recommended grade: 7.
Purpose of activity: Practice of adding fractions
Target language: Fractions
Aids: Allocation of tasks, blackboard
Time allowed: 15 minutes

We divide the students into groups and read the first task. We can write the fractions on the blackboard to improve understanding and retention. We read out the task slowly several times. The group answering first and correctly is awarded a point (we record the points on the blackboard). We do the same for the tasks which follow. The group with the highest number of points for correct and quick answers wins.

1) Two pizzas are cut into fifths. Mrs East eats 2/5 of the ham and pineapple pizza and 3/5 of the mushroom pizza. How much did she eat altogether? 5/5 of a pizza = 1 pizza
2) Three cakes are cut into eighths. Mrs Smith ate 2/8 of the chocolate cake, Mrs Evans ate 4/8 of the carrot cake and Mrs Scott ate 3/8 of the lemon cake. How much cake was eaten altogether? 9/8 of a cake
3) Two loaves of bread are sliced into twelfths. 3/12 of the granary and 3/12 of the wholemeal bread was made into sandwiches. How much bread was used? 6/12 of a loaf = ½ a loaf of bread
4) Two apple pies are sliced into a total of 10 slices. 2/10 of one apple pie was eaten with custard, and 7/10 of the other apple pie was eaten with cream. How many pieces of apple pie are left? 1 piece
5) If a running track is 1/4 of a kilometre long, what is the total distance a runner traverses if he runs around the track four times? 1 km

Alternative: We copy the tasks, cut them up, and divide the students into groups. Each group receives the same tasks. The group finding the correct answer first wins.

10 Whole numbers (integers) casino

Recommended grade: 7.
Object of activity: Addition and subtraction of integers
Target language: Integers, negative numbers, natural numbers, addition, subtraction
Aids: Blackboard, worksheet with crossword puzzle and game sheet (can be copied double-sided onto A4 paper), glue
Time allowed: minutes

We introduce the object of the lesson to the students, namely, practising adding and subtracting integers. We review the meaning of integers (whole numbers). We distribute worksheets to the students with integer addition and subtraction problems, including the results; not all of the results are correct. The students must decide which results are correct and which are not. If they believe that a result is wrong, they write the correct one next to it. Decide if the result is correct or not.
If the result is ok, tick the box good. If there is a mistake, tick the box no good and write down the correct answer. The same problems, although with the correct answers, are also written on the blackboard, but the students must not see them yet. On the line under the word bet, the students write how much they wish to bet on their result being correct. If they are sure of the answer, they bet a lot (max. 100 points); if they are unsure, they bet less (min. 10 points): Write down your bet on the bet line. If you are confident, bet a lot. If you aren't so confident, bet a little. When they are ready, we uncover the problems with the correct results on the blackboard and the students check their answers: Check to see if you were correct. For every correct answer, the student adds the points that he/she bet to his/her opening total of 100 points: If you were correct, add the amount you bet to your total score. For every incorrect answer, the student subtracts the points that he/she bet from his/her opening total of 100 points: If you were wrong, subtract the amount you bet from your total score. The student with the highest number of points wins. Note: If any students lose all their points or go into negative numbers, we can lend them an additional 100 points, but at the end of the game they must give us back 200 points.

Decide if the result is correct or not. / Rozhodněte, zda je výsledek správný.
If the result is ok, tick the box under good. / Pokud je výsledek v pořádku, zaškrtněte políčko pod správný.
If there is a mistake, tick the box under no good and write down the correct answer. / Pokud je tam chyba, zaškrtněte políčko pod nesprávný a napište správnou odpověď.
Write down your bet on the bet line. / Zapište svou sázku do řádku sázek.
If you are confident, bet a lot. / Pokud jste si jistí, vsaďte hodně.
If you aren't so confident, bet a little. / Pokud si nejste příliš jistí, vsaďte málo.
Check to see if you were correct. / Zkontrolujte, zda jste měli pravdu.
If you were correct, add the amount of your bet to your total score. / Pokud jste měli pravdu, připočítejte sázku ke svým celkovým bodům.
If you were wrong, subtract the amount of your bet from your total score. / Pokud jste se zmýlili, odečtěte sázku od svých celkových bodů.

Supplementary activity: We prepare a crossword puzzle with a mystery word for the students, in which they write the correct results in words; that is, e.g. if the result is −5, they write minus five into the crossword puzzle. Note: The letters contained in the mystery word must be below one another in the crossword puzzle.

Correct answers recorded in the crossword puzzle (mystery word: WEDNESDAY): 1. minus twelve 2. zero 3. three hundred and fifty 4. minus two 5. minus fifty-three 6. minus five 7. one hundred and fifty 8. minus one hundred and twelve 9. minus thirty-seven

12 Three-dimensional geometric shapes pexeso

Recommended grade:
Object of activity: Figure differentiation
Target language: Cube, cuboid, cylinder, pyramid, sphere, cone
Aids: Cards with words and 3D geometric shapes
Time allowed: 10 minutes

We familiarize the students with the English names of the figures. Cards with illustrated figures and cards with the names of the figures are appropriate aids. We match each figure with its name as a class and ensure that the names are correctly pronounced. We divide the students into groups and provide each group with a sheet with cards to be cut up.
The students quickly cut up the cards, mix them up, and play classic pexeso (mix and match). As in the introduction, they search for the name corresponding to the picture of the figure. When a card is turned over, they read the word and name the figure.

Match a name to each 3D shape. / Přiřaď 3D tvar k jeho názvu.
Please, cut out the cards and shuffle them. / Prosím, nastříhejte karty a zamíchejte je.
Place them on the table face down. / Položte je na lavici lícovou stranou dolů.
Read the word / name of the shape every time. / Pokaždé přečtěte slovo / pojmenujte tvar.

13 Circle the correct number

Recommended grade:
Object of activity: Circling the correct number drawn
Target language: Numbers
Aids: Blackboard, starting line, list of numbers, chalk
Time allowed: 10 minutes

We write various numerals across the whole blackboard (their degree of difficulty depends on the students' level of knowledge). The students divide into a maximum of two or three teams. We mark a chalk line on the floor from which individuals will start racing to the board. The instructor calls out the numbers, or asks a student to stand beside the blackboard and read numbers out at random. When the first number is called, the first person in each team runs to the blackboard and tries to be the first to circle the called number. He/she goes back to his/her team, hands the chalk to the next person in line, and that person runs to the blackboard to circle the next number called, etc. The team circling the highest number of correct numbers wins.

When the first number is called, the first person in each team runs forward and tries to be the first to circle the correct answer. / Když je vyvoláno první číslo, první z každého týmu běží vpřed, aby jako první zakroužkoval správnou odpověď.
Hand the chalk to the next team member, then go to the back of your team's line. / Dejte křídu dalšímu v týmu a poté si stoupněte na konec řady svého týmu.

Alternative: We create a set of numbers for each team. The teams stand directly near the blackboard; as soon as a member of a team circles the correct answer, the next number is called. We do not wait for anyone. The team that circles all of the called numbers first wins.

14 The perimeters of a triangle and a quadrangle

Recommended grade: 6.
Object of activity: Practice of calculation of the perimeter of a triangle and a quadrangle
Target language: Perimeter, side, vertex (pl. vertices), square, rectangle, triangle, rhombus, rhomboid, trapezoid, sketch, formula, calculation; grading adjectives
Aids: Blackboard, worksheets, drawing materials, scissors, quarto paper, bag
Time allowed: minutes

We review with the class the units of length, the names and characteristics of various geometric figures, and how to calculate their perimeters. The students individually complete the worksheet. Then they exchange it with a classmate, who checks the answers. We then jointly check the answers, write them on the blackboard, and read them in English. The students sketch two figures of their choice of various dimensions and cut them out. They mix up all of the figures and place them in a bag. Each student pulls two figures out of the bag, measures them, and calculates their perimeters. They write the perimeters on the figures. The students stand in a circle and introduce their figures to the others. Each names the figure and states its perimeter, all in English. For example: This is a square / I've got a square. Its perimeter is … / The perimeter of my square is 23 centimetres.
The student then places the figure on the floor in a line according to its type and the size of its perimeter, creating a line of triangles, squares, rectangles, etc. The figures in each line are arranged according to size. Lastly, the students compare the figures in the individual rows, e.g.: My rectangle has got the second longest perimeter.

This is a square / I've got a square. / Toto je čtverec / Já mám čtverec.
Its perimeter is … / The perimeter of my square is 23 centimetres. / Jeho obvod / Obvod mého čtverce je 23 cm.
Compare these geometrical shapes. / Porovnejte tyto geometrické tvary.
My rectangle has got the second longest perimeter. / Můj obdélník má druhý nejdelší obvod.

Supplementary activity: The students can arrange a large picture from the figures, or several smaller pictures.

15 Rounding off natural numbers

Recommended grade: 6.
Object of activity: Practice in rounding off natural numbers to tens
Target language: Rounding, tens, units, rule, round up, round down, numbers
Aids: Blackboard, magnetic table, magnets, cards with natural numbers (see attachment), alarm clock
Time allowed: minutes

We review with the students the basic rules of rounding off natural numbers to the nearest ten. If our knowledge of English permits, we try to repeat the rules for rounding off in English (we have the number 48 on a card): We want to round 48 to the nearest ten. Is 48 nearer to 40 or 50? It's nearer to 50, so 48 rounded to the nearest ten is 50. What is the rule? When do we round up? When do we round down? We round up if the units are 5, 6, 7, 8, 9; we round down if the units are 4, 3, 2 or 1. We draw a number line from 0 to 100 on the blackboard and highlight the multiples of ten: 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100. We place the card with the number 48 on the number line. We distribute a card with a number to each student, and he/she places it on the number line. If we have enough space in the classroom, we ask the students to sit in a circle. We set the time on the alarm clock. We let a card with a number circulate (we can use more than one card). When the alarm rings, the student holding the card in his/her hand must read the number in English and correctly round it off. The others repeat both of the numbers after him/her. If he/she correctly rounds it off and reads it in English, he/she receives a reward. If not (even if it is correctly rounded off but incorrectly read), the card circulates further until the alarm clock rings again.

We want to round 48 to the nearest ten. / Chceme zaokrouhlit 48 k nejbližší desítce.
Is 48 nearer to 40 or 50? / Je 48 blíže k 40 nebo 50?
It's nearer to 50, so 48 rounded to the nearest ten is 50. / Je blíže k 50, takže 48 zaokrouhleno k nejbližší desítce je 50.
What is the rule? / Jaké je pravidlo?
When do we round up? / Kdy zaokrouhlujeme nahoru?
When do we round down? / Kdy zaokrouhlujeme dolů?
We round up if units are 5, 6, 7, 8, 9; we round down if units are 4, 3, 2 or 1. / Zaokrouhlujeme nahoru, když jsou jednotky 5, 6, 7, 8, 9, a dolů, když jsou jednotky 4, 3, 2 a 1.

16 Two-dimensional geometric shapes loop

Recommended grade:
Object of activity: Differentiation and naming of two-dimensional figures
Target language: Domino, square, circle, rectangle, ellipse, triangle, pentagon, hexagon, rhombus
Aids: Worksheet, shapes cut out of paper, cards with illustrated shapes, blackboard
Time allowed: 15 minutes

We familiarize the students with the English names of two-dimensional figures.
As a simple aid, we use shapes cut out of coloured paper, which we place on the blackboard, and the students match them to their English names. After the students have familiarized themselves with the names of the shapes, we cover up the aid or take it down. We divide the students into pairs and hand out the cards to them. They place the cards together so that the name of each figure and the corresponding two-dimensional figure are next to one another. The loop should close: the last card links back to the card the game began with. If it does not, the players must go through all of the cards and find the mistake. When the game is over, we name the figures again.

17 Word problems scavenger hunt

Recommended grade:
Object of activity: Practice of word problems (repetition)
Target language: See problem texts
Aids: Cards with problems (see worksheet), writing materials, paper for rough calculations, classroom area
Time allowed: minutes (depending on the number of problems; the mystery word is chosen at random)

The students divide into small groups. We place the cards with problems randomly around the classroom. If the weather is good, we can play the game in a park or in the school yard. The cards contain problems the students must solve. In the top left-hand corner is a number corresponding to the answer of a problem on a different card. The top right-hand corner always contains one letter of the mystery word. Each group selects a problem to start with and stands there. This prevents groups from pushing their way to a single card. They record the solution of the problem and the letter from the right corner on a piece of paper. Then they move to the card whose corner number matches the answer they calculated in the previous problem. They continue in this fashion until they have solved all the problems. The last problem returns them to the first problem. They find the mystery word by arranging the letters collected from the right-hand corners. The first group to solve the mystery word with all of its results correct wins. Mystery word: PYTHAGORAS

18 Decimal system

Recommended grade: 8.
Object of activity: Practice of recording numbers in the decimal system
Target language: Decimal system, contracted notation, extended notation, equals, pair
Aids: Cards with numbers in contracted notation and in extended notation of the decimal system (see worksheet), hat or bag
Time allowed: 10 minutes

We prepare cards with numbers both in contracted notation and in extended notation of the decimal system. The students draw cards with numbers in extended notation from the hat or bag. Each student draws three to five numbers. We crumple up the cards with numbers in contracted notation and throw them around the class. Note: Numbers may be repeated; the number of contracted notation cards must equal the number of extended notation cards. To make it more difficult, there can also be numbers placed around the class which do not fit into either category. The students search round the classroom for their numbers in contracted notation. They leave the numbers they do not need lying where they are. They place the corresponding pairs of numbers together on their desk. The person matching all of his/her numbers first wins. Lastly, everyone reads out his/her pairs of numbers.

Write down three / four / five numbers. / Vylosujte si tři / čtyři čísla / pět čísel.
Your numbers are in an extended notation. / Vaše čísla jsou v rozšířeném zápisu.
The other numbers are in a contracted notation. / Ostatní čísla jsou ve zkráceném zápisu.
Be careful, there are also numbers which don't match with anything. / Pozor, jsou zde také čísla, která neodpovídají žádnému ze zadání.
Find the number that matches yours. / Najděte číslo, které tvoří pár s vaším číslem.

Alternative 1: Instead of drawing numbers in extended notation, the students draw numbers in contracted notation.

Alternative 2: The game can be used to practise other mathematics topics, e.g. divisibility, common multiples, comparison of numbers, volumes, perimeters (values or equations), etc.

19 Square roots

Recommended grade: 8.
Object of activity: Practice of finding square roots of numbers up to 20
Target language: Root (of a number), square root (of), cube root (of)
Aids: Smallish soft ball
Time allowed: 10 minutes

We throw the ball to a particular student and give him/her a problem to solve. The student provides his/her answer and throws the ball back to us. If the student gives a wrong answer, he/she must hand over a forfeit. The student recovers it if he/she solves a mathematical problem the teacher or a classmate provides.

T: The square root of nine is … / Druhá odmocnina z devíti jsou …
S: The square root of nine is three. / Druhá odmocnina z devíti jsou tři.
Give me a forfeit. / Dej mi fant.

Alternative: We throw the ball among the students, and the student who catches it must solve the problem. After the student has answered, he/she throws the ball to one of his/her classmates and gives him/her a problem. We continue this way until all the students in the class have had a turn.

Supplementary activity 1: We give problems involving the square roots of fractions or of decimal numbers. For example:

Supplementary activity 2: We give problems involving partial square roots of natural numbers. For example:

20 Pythagoras' theorem - length of the hypotenuse

Recommended grade: 8.
Object of activity: Practice of calculations using Pythagoras' theorem
Target language: Right-angled triangle, hypotenuse, leg, root of (a number), power of two
Aids: Cut-out right-angled triangles, lengths of hypotenuses, free area on the floor, wall, or blackboard, numbers
Time allowed: 15 minutes

We lay out the cut-out triangles on the classroom floor. If there is not enough space in the classroom, we fasten the triangles to the blackboard and wall using self-adhesive plasticine. We allot numbers representing the lengths of the hypotenuses to the students (orally or in writing). The students walk round the classroom and look for the triangle whose hypotenuse is the same as their allotted number; e.g. the student with the number √7 selects the triangle with legs √3 and 2. Note: We set a time limit for the task. The student who finds the corresponding triangle first receives a point. If one of the students cannot find the corresponding triangle within the time limit, a classmate who has already found his/her triangle can help. Lastly, the students justify why they chose that particular triangle: The square root of three squared plus two squared equals seven, so the length of the hypotenuse is the square root of seven.

Find a triangle whose length of the hypotenuse is equal to the number you have got. / Najděte trojúhelník, jehož délka přepony je stejná jako číslo, které jste dostali.
The square root of three squared plus two squared equals seven, so the length of the hypotenuse is the square root of seven. / Druhá odmocnina ze tří na druhou plus dvě na druhou se rovná sedm, takže délka přepony je druhá odmocnina ze sedmi.
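For teachers preparing the triangle set, the matching can be checked mechanically. The short Python sketch below is illustrative only: apart from the legs √3 and 2 mentioned in the activity, the leg pairs are made up for the example, and the function name is our own. It finds the cut-out triangle whose hypotenuse equals an allotted number.

```python
# Illustrative check for the hypotenuse-matching activity: given the leg
# pairs of the cut-out triangles, find the one matching an allotted
# hypotenuse. Only (sqrt(3), 2) -> sqrt(7) comes from the activity text;
# the other leg pairs are invented sample data.
import math

triangles = [(math.sqrt(3), 2), (3, 4), (1, 1), (2, math.sqrt(5))]

def find_triangle(hypotenuse, legs_list):
    """Return the first leg pair whose hypotenuse matches, or None."""
    for a, b in legs_list:
        if math.isclose(math.hypot(a, b), hypotenuse):
            return (a, b)
    return None

print(find_triangle(math.sqrt(7), triangles))
# -> (1.732..., 2), since sqrt((sqrt(3))**2 + 2**2) = sqrt(7),
# exactly the justification quoted in the activity
```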
Top! And it's gone... How would you measure a time span of less than a trillionth of a trillionth of a second (10⁻²⁴ s)? That's how long top quarks are expected to live after being produced in high-energy collisions. Remarkably, although this fleeting particle was discovered more than 15 years ago and we know its mass, charge, and several other properties, no precise measurement of its lifetime existed; only upper and lower bounds. That was until Prof. Aran Garcia-Bellido and his colleagues on the DZero experiment at the Tevatron collider at Fermilab decided to measure the lifetime of the top quark.

Some subatomic particles, such as the electron and the proton, appear to be completely stable, but others disintegrate into different (lighter) particles after they are produced. For example, muons produced in the upper atmosphere by cosmic rays live around a microsecond, or 10⁻⁶ seconds, and decay into electrons and neutrinos. A particle's observed lifetime, or how long it lives before it decays, depends on its speed relative to the observer (us) and the inherent probability of its coupling (or decay) to lighter particles, which depends on its own mass and on how many different lighter particles it can decay to without breaking conservation laws. In general, the heavier the particle, the faster it will disintegrate. Since the top quark is the heaviest elementary particle yet discovered, we should expect its decay to be quite fast. And, indeed, the top quark is so massive (170 GeV) that it has one of the shortest lifetimes in Nature: half a yoctosecond, or 5 × 10⁻²⁵ s. This time is so short that it is hard to imagine!

So how can particle physicists measure such an astonishingly small period of time? They measure instead a property that is easier to access: the particle's "width" Γ, which, according to the Heisenberg Uncertainty Principle, is inversely proportional to its lifetime. The width of a particle is the inherent uncertainty in the mass that the particle assumes. It has nothing to do with spatial length or width, but rather with the spread in its observed mass. If we could measure the mass of the top quark thousands of times with no experimental error, we would not get a unique value, but rather a distribution described by a mean and a width. This curve is called a Breit-Wigner or a Lorentzian, and represents the spread of measured masses of unstable particles. The width is related to how accurately we can define the mass of any particle at rest: the shorter the lifetime (the wider the curve), the less sure we can be about the value of that mass. Besides this natural width, there is also the experimental uncertainty of the measuring apparatus to contend with. Because top quarks decay to other particles so quickly, we can only infer the existence of a top quark by measuring its decay products: the lighter particles that are stable enough to leave signatures in our detectors. By measuring the energies of these decay products, and adding them up to reconstruct the original top quark, we estimate, though imperfectly, the mass of the top quark and its experimental uncertainty. This can be seen in Fig. 1.

Figure 1: The reconstructed top quark mass distribution. This is the combined mass of the decay products: t→bW→bjj, where the top quark decays to a bottom quark (a partner of the top quark) and a W boson, which immediately decays to two light quarks. The top-quark mass is therefore reconstructed from "a b jet and two light jets," which are the remnants of the three quarks.
The simulated signal expected for decays of top quarks is given by the red histogram, and the green and brown histograms represent the simulated background processes (amounting to less than 30% of the total). The points with error bars represent the DZero data, which agree well with the signal + background predictions. The data peak at 170 GeV but have a broad distribution with a width of several tens of GeV, which can be attributed mainly to the experimental resolution. The last bin contains the overflow events.

Physicists from CDF, a competing experiment at the Tevatron, have also performed detailed measurements of the width of the top quark based purely on the observed mass distribution. However, these measurements are limited by the detector resolution, which is far greater than the expected natural width of about 1 GeV, and can only set upper bounds on the width of the top quark. In a recently published article in Physical Review Letters, Prof. Garcia-Bellido and collaborators get around this experimental hurdle with an indirect extraction of the top quark width: they measure the probability, or "partial width", of the dominant decay mode of top quarks, Γ(t→bW), which can be identified through the production of top quarks in a specific reaction, where a b quark and W boson fuse to produce a single top quark that subsequently breaks up into a b quark and W boson. The total top width is obtained in this case from the ratio of the measured partial width and the fraction of times that the t→bW decay occurs in Nature:

Γ_total = Γ(t→bW) / B(t→bW),

where the branching fraction B(t→bW) was measured previously by DZero. The key element in the new result is that the mass distribution is ignored, and the partial width is obtained from a measurement of the total rate of a rare top quark process (bW→t), which is proportional to the partial width Γ(t→bW). Measuring the rate of "single top quarks" has an uncertainty far smaller than the experimental error on the more direct measurement of the width.

Figure 2: The expected (blue) and observed (red) measurement of the width of the top quark, in GeV. The hatched areas represent 1 standard deviation around the most probable value (the peak).

The result of this technique applied to the DZero data, as seen in Fig. 2, yields a measurement of the width of the top quark of 2.0 ± 0.7 GeV, or a lifetime of (3 ± 1) × 10⁻²⁵ s. This is the most precise determination of the top quark lifetime to date.

--Submitted by Assistant Professor of Physics Aran Garcia-Bellido
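As a rough illustration of the two relations used above - the partial-width ratio and the Heisenberg width-lifetime conversion - here is a minimal Python sketch, not the DZero analysis code. The function names are our own; the value of ħ in GeV·s is a standard physical constant, and the width of 2.0 GeV is the DZero result quoted in the article.

```python
# A minimal sketch of the two relations the article uses:
# Gamma_total = Gamma(t->bW) / B(t->bW), and tau = hbar / Gamma.
# Function names are illustrative, not from the DZero analysis.

HBAR_GEV_S = 6.582119569e-25  # reduced Planck constant in GeV*s

def total_width(partial_width_gev: float, branching_fraction: float) -> float:
    """Total width from a measured partial width and its branching fraction."""
    return partial_width_gev / branching_fraction

def lifetime_from_width(width_gev: float) -> float:
    """Mean lifetime in seconds from a total width in GeV."""
    return HBAR_GEV_S / width_gev

if __name__ == "__main__":
    gamma_top = 2.0  # GeV, the DZero width quoted above
    tau = lifetime_from_width(gamma_top)
    print(f"top quark lifetime ~ {tau:.1e} s")
    # ~3.3e-25 s, matching the quoted (3 +/- 1) x 10^-25 s
```

The conversion also runs the other way: the roughly 1 GeV natural width expected from theory corresponds to a lifetime of order 10⁻²⁴ s, which is why a direct fit to the mass peak, smeared by tens of GeV of detector resolution, can only yield upper bounds.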
NASA researchers at the Johns Hopkins University Applied Physics Laboratory believe powerful meteor showers release water vapour into the Moon's atmosphere. The amount of water released depends on the size and frequency of the impacts, but the discovery could solve a decades-old mystery. Until now, astronomers have been well aware of lunar water at the Moon's chilled polar caps, in some of its permanent surface shadows and within its ancient volcanic craters. And more recently, NASA's Lunar Reconnaissance Orbiter (LRO) has also detected evidence of "bouncing water molecules" on the surface. In March this year, NASA said small batches of water around the surface were excited enough during lunar daytime to break away from the Moon's surface. Now, scientists are thrilled to learn more about how meteor streams help populate the Moon's thin atmosphere with water vapour. The discovery was presented this week in the journal Nature Geoscience. Lead author Mehdi Benna of NASA's Goddard Space Flight Center said the study has also helped identify four previously undetected meteor showers.

Water on the Moon: NASA has found meteor impacts release water into the Moon's atmosphere

The water-releasing impacts occurred on January 9, April 2, April 5 and April 9 in 2014. Dr Benna said: "We traced most of these events to known meteoroid streams, but the really surprising part is that we also found evidence of four meteoroid streams that were previously undiscovered." The spikes in the Moon's atmospheric water levels were all recorded by NASA's Lunar Atmosphere and Dust Environment Explorer (LADEE). The remote robotic instrument was sent to the Moon to study the lunar orb's paper-thin atmosphere. LADEE found "sufficiently large" meteor strikes breached the upper levels of the Moon's soil enough to release water from lower, hydrated levels. For instance, NASA detected a spike in water levels during the prolific Geminid meteor shower in December 2013. The US space agency said there is strong evidence of water (H2O) and hydroxyl (OH) locked away within the Moon. But scientists are still uncertain just how much water there is, where it is stored and how it got there in the first place. One explanation is that ionised hydrogen carried to the Moon on solar winds from the Sun could account for the presence of water. Meteor impacts now present an alternative explanation of how water makes its way into the Moon's atmosphere. Richard Elphic, the LADEE project scientist at NASA, said: "The Moon doesn't have significant amounts of H2O or OH in its atmosphere most of the time. "But when the Moon passed through one of these meteoroid streams, enough vapour was ejected for us to detect it. "And then, when the event was over, the H2O or OH went away." According to NASA, meteors need to penetrate the Moon's surface by at least three inches (eight centimetres) to release water. Right beneath the surface is a thin "transition" layer of soil, followed by a hydrated layer where molecules of water are bound to the Moon's regolith, or rocky soil. NASA said: "From the measurements of water in the exosphere, the researchers calculated that the hydrated layer has a water concentration of about 200 to 500 parts per million, or about 0.02 to 0.05 percent by weight. "This concentration is much drier than the driest terrestrial soil and is consistent with earlier studies.
“It is so dry that one would need to process more than a metric ton of regolith in order to collect 16 ounces of water.” But where does the water come from? According to Dr Benna, the water is most likely ancient in origin. The lunar expert said it dates back to the Moon's formation or was deposited there "early in its history". Whatever the case may be, the new research has ruled out the possibility that the water is simply being delivered by the meteor showers themselves.
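The "metric ton of regolith for 16 ounces of water" figure can be sanity-checked with a few lines of arithmetic. The Python sketch below assumes avoirdupois (weight) ounces, since the article does not say which ounce is meant, and uses the quoted 200-500 ppm concentration range; the function name is our own.

```python
# Hedged back-of-the-envelope check of the regolith-to-water claim above.
# Assumes 16 avoirdupois ounces (the article does not specify the unit).
OUNCE_KG = 0.0283495  # one avoirdupois ounce in kilograms

def regolith_needed_kg(water_kg: float, ppm_by_weight: float) -> float:
    """Regolith mass (kg) needed to yield water_kg of water at the given
    water concentration in parts per million by weight."""
    return water_kg / (ppm_by_weight * 1e-6)

water_kg = 16 * OUNCE_KG  # about 0.45 kg
for ppm in (200, 500):
    tons = regolith_needed_kg(water_kg, ppm) / 1000.0
    print(f"{ppm} ppm -> {tons:.1f} metric tons of regolith")
# 500 ppm -> ~0.9 t; 200 ppm -> ~2.3 t: consistent with "more than a
# metric ton" over most of the quoted concentration range.
```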
Cosmic ray visual phenomena

Cosmic ray visual phenomena, or light flashes (LF), are spontaneous flashes of light visually perceived by some astronauts outside the magnetosphere of the Earth, such as during the Apollo program. While LF may be the result of actual photons of visible light being sensed by the retina, the LF discussed here could also pertain to phosphenes, which are sensations of light produced by the activation of neurons along the visual pathway. Researchers believe that the LF perceived specifically by astronauts in space are due to cosmic rays (high-energy charged particles from beyond the Earth's atmosphere), though the exact mechanism is unknown. Hypotheses include Cherenkov radiation created as the cosmic ray particles pass through the vitreous humour of the astronauts' eyes, direct interaction with the optic nerve, direct interaction with visual centres in the brain, retinal receptor stimulation, and a more general interaction of the retina with radiation.

Conditions under which the light flashes were reported

Astronauts who had recently returned from space missions to the Hubble Space Telescope, the International Space Station and the Mir space station reported seeing the LF under different conditions. In order of decreasing frequency of reporting in a survey, they saw the LF in the dark, in dim light, in bright light, and one reported that he saw them regardless of light level and light adaptation. They were seen mainly before sleeping. Some LF were reported to be clearly visible, while others were not. They manifested in different colors and shapes. How often each type was seen varied across astronauts' experiences, as is evident in a survey of 59 astronauts. On lunar missions, astronauts almost always reported that the flashes were white, with one exception where the astronaut observed "blue with a white cast, like a blue diamond." On other space missions, astronauts reported seeing other colors such as yellow and pale green, though rarely. Others instead reported that the flashes were predominantly yellow, while others reported colors such as orange and red, in addition to the most common colors of white and blue. The main shapes seen are "spots" (or "dots"), "stars" (or "supernovas"), "streaks" (or "stripes"), "blobs" (or "clouds") and "comets". These shapes were seen at varying frequencies across astronauts. On the Moon flights, astronauts reported seeing the "spots" and "stars" 66% of the time, "streaks" 25% of the time, and "clouds" 8% of the time. Astronauts who went on other missions reported mainly "elongated shapes". About 40% of those surveyed reported a "stripe" or "stripes" and about 20% reported a "comet" or "comets". 17% of the reports mentioned a "single dot" and only a handful mentioned "several dots", "blobs" and a "supernova". Reports of motion of the LF were common among astronauts who experienced the flashes. For example, Jerry Linenger reported that during a solar storm, they were directional and that they interfered with sleep, since closing his eyes would not help. Linenger tried shielding himself behind the station's lead-filled batteries, but this was only partly effective. The different types of directions that the LF have been reported to move in vary across reports. Some reported that the LF travel across the visual field, moving from the periphery of the visual field to where the person is fixating, while a couple of others reported motion in the opposite direction.
Terms that have been used to describe the directions are "sideways", "diagonal", "in-out" and "random". In Fuglesang et al. (2006), it was pointed out that there were no reports of vertical motion.

Occurrences and frequencies

There appear to be individual differences across astronauts in terms of whether they reported seeing the LF or not. While these LF were reported by many astronauts, not all astronauts have experienced them on their space missions, even if they have gone on multiple missions. For those who did report seeing these LF, how often they saw them varied across reports. On the Apollo 15 mission all three astronauts recorded the same LF, which James Irwin described as "a brilliant streak across the retina".

Frequency during missions

On lunar missions, once their eyes became adapted to the dark, Apollo astronauts reported seeing this phenomenon once every 2.9 minutes on average. On other space missions, astronauts reported perceiving the LF once every 6.8 minutes on average. The LF were reported to be seen primarily before the astronauts slept and in some cases disrupted sleep, as in the case of Linenger. Some astronauts pointed out that the LF were seemingly perceived more frequently once they had been perceived at least once and attention was directed to them. One astronaut, on his first flight, only took note of the LF after being told to look out for them. These reports are not surprising, considering that the LF may not stand out clearly from the background.

Fluctuations during and across missions

Apollo astronauts reported that they observed the phenomenon more frequently during the transit to the Moon than during the return transit to Earth. Avdeev et al. (2002) suggested that this might be due to a decrease in sensitivity to the LF over time while in space. Astronauts on other missions reported a change in the rate of occurrence and intensity of the LF during the course of a mission. While some noted that the rate and intensity increased, others noted a decrease. These changes were said to take place during the first days of a mission. Other astronauts have reported changes in the rate of occurrence of the LF across missions, instead of during a mission. For example, Avdeev himself was on Mir for six months during one mission, six months during a second mission a few years later, and twelve months during a third mission a couple of years after that. He reported that the LF were seen less frequently with each subsequent flight. Orbital altitude and inclination have also been found to correlate positively with the rate of occurrence of the LF. Fuglesang et al. (2006) suggested that this trend could be due to the increasing particle fluxes at increasing altitudes and inclinations. During the Apollo 16 and Apollo 17 transits, astronauts conducted the Apollo Light Flash Moving Emulsion Detector (ALFMED) experiment, in which an astronaut wore a helmet designed to capture the tracks of cosmic ray particles, to determine whether they coincided with the visual observations. Examination of the results showed that two of fifteen tracks coincided with observation of the flashes. These results, in combination with considerations of geometry and Monte Carlo estimations, led researchers to conclude that the visual phenomena were indeed caused by cosmic rays.
SilEye-Alteino and ALTEA projects

The SilEye-Alteino and Anomalous Long Term Effects in Astronauts' Central Nervous System (ALTEA) projects have investigated the phenomenon aboard the International Space Station, using helmets similar in nature to those in the ALFMED experiment. The SilEye project has also examined the phenomenon on Mir. The purpose of this study was to examine the particle tracks entering the eyes of the astronauts at the moments when the astronauts said they observed an LF. In examining the particles, the researchers hoped to gain a deeper understanding of what particles might be causing the LF. Astronauts wore the SilEye detector over numerous sessions while on Mir. During those sessions, when they detected an LF, they pressed a button on a joystick. After each session, they recorded their comments about the experience. Particle tracks that hit the eye during the time when the astronauts indicated that they detected an LF would have had to pass through silicon layers, which were built to detect protons and nuclei and distinguish between them. The findings show that "a continuous line" and "a line with gaps" were seen a majority of the time. With less frequency, a "shapeless spot", a "spot with a bright nucleus" and "concentric circles" were also reported (p. 518). The data collected also suggested to the researchers that one's sensitivity to the LF tends to decrease during the first couple of weeks of a mission. With regard to the probable cause of the LF, the researchers concluded that nuclei are likely to be the main cause. They based this conclusion on the finding that, in comparison to an "all time" period, an "in-LF time window" period saw the nucleus rate increase by a factor of about six to seven, while the proton rate merely doubled. Hence, the researchers ruled out the Cherenkov effect as a probable cause of the LF observed in space, at least in this case.

Ground experiments in the 1970s

Experiments conducted in the 1970s also studied the phenomenon. These experiments revealed that although several explanations for why the LF were observed by astronauts had been proposed, there may be other causes as well. Charman et al. (1971) asked whether the LF were the result of single cosmic-ray nuclei entering the eye and directly exciting the eyes of the astronauts, as opposed to the result of Cherenkov radiation within the retina. The researchers had observers view a neutron beam, composed of either 3 or 14 MeV monoenergetic neutrons, in several orientations relative to their heads. The composition of these beams ensured that particles generated in the eye were below 500 MeV, which was considered the Cherenkov threshold, thereby allowing the researchers to separate one cause of the LF from the other. Observers viewed the neutron beam after being completely dark-adapted. The 3 MeV neutron beam produced no reports of LF, whether it entered through the front of one eye or through the back of the head. With the 14 MeV neutron beam, however, LF were reported. Lasting for short periods of time, "streaks" were reported when the beam entered one eye from the front. The "streaks" seen had varying lengths (a maximum of 2 degrees of visual angle), and were seen to either have a blueish-white color or be colorless. All but one observer reported seeing fainter but more numerous "points" or short lines in the center of the visual field.
When the beam entered both eyes in a lateral orientation, the number of streaks reported increased. The orientation of the streaks corresponded to the orientation of the beam entering the eye. Unlike in the previous case, the streaks seen were more abundant in the periphery than in the center of the visual field. Lastly, when the beam entered the back of the head, only one person reported seeing the LF. From these results, the researchers concluded that, at least for the LF seen in this case, the flashes could not be due to Cherenkov radiation effects in the eye itself (although they did not rule out the possibility that the Cherenkov radiation explanation was applicable to the case of the astronauts). They also suggested that because the number of LF observed decreased significantly when the beam entered the back of the head, the LF were likely not caused by the visual cortex being directly stimulated, as this decrease suggested that the beam was weakened as it passed through the skull and brain before reaching the retina. The most probable explanation proposed was that the LF were a result of the receptors on the retina being directly stimulated and "turned on" by a particle in the beam.

In another experiment, Tobias et al. (1971) exposed two people to a beam composed of neutrons ranging from 20 to 640 MeV after they were fully dark-adapted. One observer, who was given four exposures ranging in duration from one to 3.5 seconds, observed "pinpoint" flashes. The observer described them as being similar to "luminous balls seen in fireworks, with initial tails fuzzy and heads like tiny stars". The other observer, who was given one exposure lasting three seconds, reported seeing 25 to 50 bright, discrete lights, which he described as "stars, blue-white in color, coming towards him" (p. 596). Based on these results, the researchers, as in Charman et al. (1971), concluded that while the Cherenkov effect may be a plausible explanation for the LF experienced by astronauts, in this case that effect cannot explain the LF seen by the observers. It is possible that the LF observed were the result of interaction of the retina with radiation. They also suggested that the tracks seen may correspond to tracks within the retina itself, with the earlier portions of the streak or track fading as it moves.

Considering the experiments conducted, at least in some cases the LF observed appear to be caused by activation of neurons along the visual pathway, resulting in phosphenes. However, because the researchers cannot definitively rule out Cherenkov radiation effects as a probable cause of the LF experienced by astronauts, it seems likely that some LF may instead be the result of Cherenkov radiation effects in the eye itself. The Cherenkov effect can cause Cherenkov light to be emitted in the vitreous body of the eye and thus allow the person to perceive the LF. Hence, it appears that the LF perceived by astronauts in space have different causes. Some may be the result of actual light stimulating the retina, while others may be the result of activity that occurs in neurons along the visual pathway, producing phosphenes.

- Hecht, Selig; Shlaer, Simon; Pirenne, Maurice Henri (July 1942). "Energy, Quanta, and Vision". Journal of General Physiology. 25 (6): 819–840. doi:10.1085/jgp.25.6.819. PMC 2142545. PMID 19873316.
- Dobelle, W. H.; Mladejovsky, M. G. (December 1974). "Phosphenes produced by electrical stimulation of human occipital cortex, and their application to the development of a prosthesis for the blind".
The Journal of Physiology. 243 (2): 553–576. doi:10.1113/jphysiol.1974.sp010766. PMC 1330721. PMID 4449074. - Mewaldt, R. A. (1996). "Cosmic Rays". In Rigden, John S. (ed.). MacMillan Encyclopedia of Physics. 1. Simon & Schuster MacMillan. ISBN 978-0-02-897359-3. - Narici, L.; Belli, F.; Bidoli, V.; Casolino, M.; De Pascale, M. P.; et al. (January 2004). "The ALTEA/ALTEINO projects: studying functional effects of microgravity and cosmic radiation" (PDF). Advances in Space Research. 33 (8): 1352–1357. Bibcode:2004AdSpR..33.1352N. doi:10.1016/j.asr.2003.09.052. PMID 15803627. - Tendler, Irwin I.; Hartford, Alan; Jermyn, Michael; LaRochelle, Ethan; Cao, Xu; Borza, Victor; Alexander, Daniel; Bruza, Petr; Hoopes, Jack; Moodie, Karen; Marr, Brian P.; Williams, Benjamin B.; Pogue, Brian W.; Gladstone, David J.; Jarvis, Lesley A. (2020). "Experimentally Observed Cherenkov Light Generation in the Eye During Radiation Therapy". International Journal of Radiation Oncology*Biology*Physics. Elsevier BV. 106 (2): 422–429. doi:10.1016/j.ijrobp.2019.10.031. ISSN 0360-3016. PMC 7161418. PMID 31669563. - Narici, L.; Bidoli, V.; Casolino, M.; De Pascale, M. P.; Furano, G.; et al. (2003). "ALTEA: Anomalous long term effects in astronauts. A probe on the influence of cosmic radiation and microgravity on the central nervous system during long flights". Advances in Space Research. 31 (1): 141–146. Bibcode:2003AdSpR..31..141N. doi:10.1016/S0273-1177(02)00881-5. PMID 12577991. - Charman, W. N.; Dennis, J. A.; Fazio, G. G.; Jelley, J. V. (April 1971). "Visual Sensations produced by Single Fast Particles". Nature. 230 (5295): 522–524. Bibcode:1971Natur.230..522C. doi:10.1038/230522a0. PMID 4927751. - Tobias, C. A.; Budinger, T. F.; Lyman, J. T. (April 1971). "Radiation-induced Light Flashes observed by Human Subjects in Fast Neutron, X-ray and Positive Pion Beams". Nature. 230 (5296): 596–598. Bibcode:1971Natur.230..596T. doi:10.1038/230596a0. PMID 4928670. - Fuglesang, Christer; Narici, Livio; Picozza, Piergiorgio; Sannita, Walter G. (April 2006). "Phosphenes in Low Earth Orbit: Survey Responses from 59 Astronauts". Aviation, Space, and Environmental Medicine. 77 (4): 449–452. PMID 16676658. - Sannita, Walter G.; Narici, Livio; Picozza, Piergiorgio (July 2006). "Positive visual phenomena in space: A scientific case and a safety issue in space travel". Vision Research. 46 (14): 2159–2165. doi:10.1016/j.visres.2005.12.002. PMID 16510166. - Linenger, Jerry M. (13 January 2000). Off The Planet: Surviving Five Perilous Months Aboard The Space Station MIR. McGraw-Hill. ISBN 978-0-07-136112-5. - Irwin, James B. (1983). More Than Earthlings. Pickering & Inglis. p. 63. ISBN 978-0-7208-0565-9. - Avdeev, S.; Bidoli, V.; Casolino, M.; De Grandis, E.; Furano, G.; et al. (April 2002). "Eye light flashes on the Mir space station". Acta Astronautica. 50 (8): 511–525. Bibcode:2002AcAau..50..511A. doi:10.1016/S0094-5765(01)00190-4. PMID 11962526. - "Experiment: Light Flashes Experiment Package (Apollo light flash moving emulsion detector)". Experiment Operation During Apollo IVA at 0-g. NASA. 2003. Archived from the original on 11 May 2014. - Osborne, W. Zachary; Pinsky, Lawrence S.; Bailey, J. Vernon (1975). "Apollo Light Flash Investigations". In Johnston, Richard S.; Dietlein, Lawrence F.; Berry, Charles A. (eds.). Biomedical Results of Apollo. NASA. NASA SP-368.
The typical wave motion of a string might be described as shown in Figure 3-1. The "amplitude" is the height or depth of the wave and a "node" is where the amplitude is zero, that is to say, where the curve crosses the horizontal line as shown below. Notice that the ends of the wave are also nodes. The "wavelength" is typically represented by the symbol λ (lambda, the 11th letter of the Greek alphabet) and in this application is the distance between alternate nodes, adjacent maxima, or adjacent minima of the amplitude, as outlined in Figure 3-1. (All of us old folks frequently use "maxima" for the plural of maximum. The old rule for spelling plural words ending in "um" is to replace the um with an "a". Thus maximum becomes maxima, minimum becomes minima, datum becomes data, but languages change with time and these usages may be losing preference.) If you have more than the usual math background you might recognize that the waveform is that of a cosine.

Figure 3-1

Test your understanding of these concepts by working the problems shown below. Please don't look at the answers at the end of the chapter until you have given a serious effort to solving the problems.

Problem 3-1. Sketch a waveform that has the same amplitude but a shorter wavelength than that in Figure 3-1, which shows two wavelengths.

The frequency, f, is related to the pitch of the note; that is, the higher the frequency the higher the pitch. In this discussion, I will use the terms frequency and pitch interchangeably, although for different types of waves there may be a difference. When you think of a wave, think of one wavelength as outlined above, that is, showing one part above the line and one part below. Although the wave concepts we discuss generally apply to all waves, we will focus on sound waves and string vibration waves.

The math

Equation 3-1 can be simplified for our study because, for a particular string at a given temperature, all the variables (tension, T; string length, L; and string mass per unit length, m) can be kept constant. That allows us to absorb all those variables, and the 2 in the denominator, into a "new" constant (uppercase) K, simplifying the equation to the form shown in Equation 3-2. Each string on the guitar would have a different value for K because the mass per unit length, m, would be different. However, two identical strings, such as two A strings on the mandolin, should have identical values for K.

Equation 3-2: f = K/λ

This frequency-wavelength relationship can easily be demonstrated with a graph. If we let K take the value of some arbitrary number that is easy to plot, let's say 10, then we can calculate some values of λ and plot a graph. These results are shown in Figure 3-2 below when K = 10. When λ takes on the values shown in the left column, the value of f in the right column follows from the equation. For example, if λ is 2, f = 10/2, so f = 5.00. The red points on the graph are the values of the frequency f for each value of λ.

Figure 3-2

This graph is a pretty vivid demonstration of the relationship between f and λ: as the wavelength λ increases, the frequency decreases. Or, as the wavelength decreases, the frequency increases, and it is the latter that we see vividly as we analyze string harmonics of increasing complexity.
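If you like to check this sort of thing by computer, here is a minimal Python sketch of Equation 3-2 that reproduces the kind of table plotted in Figure 3-2. The K = 10 value comes from the text; the list of wavelengths is an arbitrary choice for illustration.

```python
# A quick numerical check of Equation 3-2 (f = K / wavelength) with
# K = 10, the value used for Figure 3-2.
K = 10  # arbitrary constant, chosen in the text because it is easy to plot

for wavelength in [0.5, 1, 2, 4, 5, 10]:
    f = K / wavelength
    print(f"wavelength = {wavelength:>4}   f = {f:.2f}")
```

For λ = 2 the sketch prints f = 5.00, matching the worked value in the text, and the printed column makes the inverse relationship easy to see: halving the wavelength doubles the frequency.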
A RUDN University physicist has developed a formula for evaluating the effect of dark matter on the size of the shadow of a black hole. It turned out that the effect would be noticeable only if the concentration of this hypothetical form of matter around black holes in the centers of galaxies is abnormally high. If that is not the case, then it is unlikely that dark matter could be detected using the shadow of a black hole. The work was published in the journal Physics Letters B: Nuclear, Elementary Particle and High Energy Physics.

In April 2019, the Event Horizon Telescope obtained the first-ever image of the shadow of a supermassive black hole, located in the center of the M87 galaxy. To get this shot, astronomers had to combine eight observatories located around the globe. The image does not have sufficient resolution to clearly define the geometry of the central black hole, but researchers hope to achieve higher quality in the future. Determining the shape of its shadow will allow astronomers to test various versions of the theory of gravity and, possibly, find a "bridge" that would combine quantum mechanics and the general theory of relativity.

Roman Konoplya, an associate professor at the Educational and Scientific Institute of Gravity and Cosmology of the RUDN University, wondered whether hypothetical dark matter, which accounts for about 85 percent of all matter in the universe, can have a visible effect on the shape and radius of the shadow of a black hole: a dark spot that appears due to the curvature of the trajectories of photons in the super-powerful gravitational field of such an object. The cosmologist obtained a formula that makes it possible to determine the change in the radius of the shadow depending on the amount of dark matter surrounding it.
Ever wondered how scientists determine the elemental composition of materials? One of the answers lies in XRF analysis, a revolutionary technique that has transformed the world of analytical chemistry. Let's dive in to understand what XRF analysis is and how it works.

Introduction to XRF Analysis

XRF stands for X-ray Fluorescence. It is a non-destructive analytical technique used to determine the elemental composition of materials. XRF analysis can measure elements from beryllium (Be) to uranium (U) in concentration ranges from parts per million (ppm) to 100%.

Understanding the Science Behind XRF Analysis

The science behind XRF analysis is fascinating. When a material is exposed to high-energy X-rays or gamma rays, the atoms in the material become excited. As the atoms return to their ground state, they emit secondary X-rays, or fluorescent X-rays, that are characteristic of the elements present in the material. The energy and intensity of these fluorescent X-rays can be measured to determine the type and quantity of the elements in the sample. This is the principle behind XRF analysis.

Applications of XRF Analysis

XRF analysis has a wide range of applications in various fields. It is used in the mining industry to identify the composition of ore samples. In the environmental sector, XRF analysis can detect and measure pollutants in soil, water, and air samples. In the field of archaeology, it is used to determine the elemental composition of artifacts, helping to reveal information about an artifact's origin and history. In the manufacturing industry, XRF analysis is used to ensure quality control by analyzing the composition of raw materials and finished products.

Advantages of XRF Analysis

One of the main advantages of XRF analysis is that it is non-destructive. The sample is not damaged or altered during the analysis, making the technique ideal for precious samples like archaeological artifacts. XRF analysis is also fast and accurate: it can analyze a wide range of elements in a short period of time, providing results that are accurate and reliable. Furthermore, it is capable of analyzing both solid and liquid samples, making it a versatile analytical tool.

In a nutshell, XRF analysis is a powerful tool in the field of analytical chemistry. It provides a non-destructive, fast, and accurate method for determining the elemental composition of materials. Whether it's revealing the secrets of an ancient artifact, ensuring the quality of products, or monitoring environmental pollutants, XRF analysis plays a crucial role in our understanding of the world around us.

Read more: https://pdinstruments.com/en/fusion-technology/vulcan.html
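To make the identification step described above a little more concrete, here is a toy Python sketch that matches a measured fluorescence peak to the nearest tabulated characteristic line. The K-alpha energies below are approximate published values; the function name and tolerance are illustrative inventions, and a real spectrometer fits whole spectra (peak shapes, overlaps, matrix effects) rather than single peaks.

```python
# Toy sketch of the identification step in XRF analysis: match a measured
# fluorescence peak to the nearest characteristic line energy.
K_ALPHA_KEV = {
    "Fe": 6.40,   # iron,   approximate K-alpha energy in keV
    "Ni": 7.48,   # nickel
    "Cu": 8.05,   # copper
    "Zn": 8.64,   # zinc
}

def identify(peak_kev, tolerance_kev=0.15):
    """Return the element whose K-alpha line is nearest the peak, or None."""
    element, energy = min(K_ALPHA_KEV.items(),
                          key=lambda kv: abs(kv[1] - peak_kev))
    return element if abs(energy - peak_kev) <= tolerance_kev else None

print(identify(8.02))  # Cu: the peak falls near the copper K-alpha line
print(identify(5.00))  # None: no tabulated line within tolerance
```

The peak intensity, which this sketch ignores entirely, is what carries the quantitative ("how much") half of the analysis.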
The Universe is not the same today as it was yesterday. With each moment that goes by, a number of subtle but important changes occur, even if many of them are imperceptible on measurable, human timescales. The Universe is expanding, which means that the distances between the largest cosmic structures are increasing with time. A second ago, the Universe was slightly smaller; a second from now, the Universe will be slightly larger. But those subtle changes both build up over large, cosmic timescales, and affect more than just distances. As the Universe expands, the relative importance of radiation, matter, neutrinos, and dark energy all changes. The temperature of the Universe changes. And what you'd see in the sky would change dramatically as well. All told, there are six different eras we can break the Universe into, and we're already in the final one.

The reason for this can be understood by looking at how each form of energy evolves as the Universe expands. Everything that exists in our Universe has a certain amount of energy in it: matter, radiation, dark energy, etc. As the Universe expands, the volume that these forms of energy occupy changes, and each one will have its energy density evolve differently. In particular, if we define the scale factor of the Universe by the variable a, then:

- matter will have its energy density evolve as 1/a³, since (for matter) density is just mass over volume, and mass can easily be converted to energy via E = mc²,
- radiation will have its energy density evolve as 1/a⁴, since (for radiation) the number density is the number of particles divided by volume, and the energy of each individual photon stretches as the Universe expands, adding an additional factor of 1/a relative to matter,
- and dark energy is a property of space itself, so its energy density remains constant (1/a⁰), irrespective of the Universe's expansion or volume. (A short numerical sketch of these three scaling laws appears at the end of this article.)

A Universe that has been around longer, therefore, will have expanded more. It will be cooler in the future and was hotter in the past; it was gravitationally more uniform in the past and is clumpier now; it was smaller in the past and will be much, much larger in the future. By applying the laws of physics to the Universe, and comparing the possible solutions with the observations and measurements we've obtained, we can determine both where we came from and where we're headed. We can extrapolate our past history all the way back to the beginning of the hot Big Bang and even before, to a period of cosmic inflation. We can extrapolate our current Universe into the far distant future as well, and foresee the ultimate fate that awaits everything that exists.

When we draw the dividing lines based on how the Universe behaves, we find that there are six different eras that will come to pass.

- Inflationary era: which preceded and set up the hot Big Bang.
- Primordial Soup era: from the start of the hot Big Bang until the final transformative nuclear and particle interactions occur in the early Universe.
- Plasma era: from the end of those final nuclear and particle interactions until the Universe cools enough to stably form neutral matter.
- Dark Ages era: from the formation of neutral matter until the first stars and galaxies reionize the intergalactic medium of the Universe completely.
- Stellar era: from the end of reionization until the gravity-driven formation and growth of large-scale structure ceases, when the dark energy density dominates over the matter density.
- Dark Energy era: the final stage of our Universe, where the expansion accelerates and disconnected objects speed irrevocably and irreversibly away from one another.

We already entered this final era billions of years ago. Most of the important events that will define our Universe's history have already occurred.

1.) Inflationary era.

Prior to the hot Big Bang, the Universe wasn't filled with matter, antimatter, dark matter or radiation. It wasn't filled with particles of any type. Instead, it was filled with a form of energy inherent to space itself: a form of energy that caused the Universe to expand both extremely rapidly and relentlessly, in an exponential fashion.

- It stretched the Universe, from whatever geometry it once had, into a state indistinguishable from spatially flat.
- It expanded a small, causally connected patch of the Universe to one much larger than our presently visible Universe: larger than the current causal horizon.
- It took any particles that may have been present and expanded the Universe so rapidly that none of them are left inside a region the size of our visible Universe.
- And the quantum fluctuations that occurred during inflation created the seeds of structure that gave rise to our vast cosmic web today.

And then, abruptly, some 13.8 billion years ago, inflation ended. All of that energy, once inherent to space itself, got converted into particles, antiparticles, and radiation. With this transition, the inflationary era ended, and the hot Big Bang began.

2.) Primordial Soup era.

Once the expanding Universe is filled with matter, antimatter and radiation, it's going to cool. Whenever particles collide, they'll produce whatever particle-antiparticle pairs are allowed by the laws of physics. The primary restriction comes only from the energies of the collisions involved, as the production is governed by E = mc². As the Universe cools, the energy drops, and it becomes harder and harder to create more massive particle-antiparticle pairs, but annihilations and other particle reactions continue unabated. 1-to-3 seconds after the Big Bang, the antimatter is all gone, leaving only matter behind. 3-to-4 minutes after the Big Bang, stable deuterium can form, and nucleosynthesis of the light elements occurs. And after some radioactive decays and a few final nuclear reactions, all we have left is a hot (but cooling) ionized plasma consisting of photons, neutrinos, atomic nuclei and electrons.

3.) Plasma era.

Once those light nuclei form, they're the only positively (electrically) charged objects in the Universe, and they're everywhere. Of course, they're balanced by an equal amount of negative charge in the form of electrons. Nuclei and electrons form atoms, and so it might seem only natural that these two species of particle would find one another immediately, forming atoms and paving the way for stars. Unfortunately for them, they're vastly outnumbered — by more than a billion to one — by photons. Every time an electron and a nucleus bind together, a high-enough energy photon comes along and blasts them apart. It isn't until the Universe cools dramatically, from billions of degrees to just thousands of degrees, that neutral atoms can finally form. (And even then, it's only possible because of a special atomic transition.) At the beginning of the Plasma era, the Universe's energy content is dominated by radiation. By the end, it's dominated by normal and dark matter. This third phase takes us to 380,000 years after the Big Bang.

4.) Dark Ages era.
With the Universe at last filled with neutral atoms, gravitation can begin the process of forming structure. But with all these neutral atoms around, what we presently know as visible light would be invisible all throughout the sky. Why's that? Because neutral atoms, particularly in the form of cosmic dust, are outstanding at blocking visible light. In order to end these dark ages, the intergalactic medium needs to be reionized. That requires enormous amounts of star-formation and tremendous numbers of ultraviolet photons, and that requires time, gravitation, and the start of the cosmic web. The first major regions of reionization take place 200-250 million years after the Big Bang, but reionization doesn't complete, on average, until the Universe is 550 million years old. At this point, the star-formation rate is still increasing, and the first massive galaxy clusters are just beginning to form.

5.) Stellar era.

Once the dark ages are over, the Universe is now transparent to starlight. The great recesses of the cosmos are now accessible, with stars, star clusters, galaxies, galaxy clusters, and the great, growing cosmic web all waiting to be discovered. The Universe is dominated, energy-wise, by dark matter and normal matter, and the gravitationally bound structures continue to grow larger and larger. The star-formation rate rises and rises, peaking about 3 billion years after the Big Bang. At this point, new galaxies continue to form, existing galaxies continue to grow and merge, and galaxy clusters attract more and more matter into them. But the amount of free gas within galaxies begins to drop, as the enormous amounts of star-formation have used up a large amount of it. Slowly but steadily, the star-formation rate drops. As time goes forward, the stellar death rate will outpace the birth rate, a fact made worse by the following surprise: as the matter density drops with the expanding Universe, a new form of energy — dark energy — begins to appear and dominate. 7.8 billion years after the Big Bang, distant galaxies stop slowing down in their recession from one another, and begin speeding up again. The accelerating Universe is upon us. A little bit later, 9.2 billion years after the Big Bang, dark energy becomes the dominant component of energy in the Universe. At this point, we enter the final era.

6.) Dark Energy era.

Once dark energy takes over, something bizarre happens: the large-scale structure in the Universe ceases to grow. The objects that were gravitationally bound to one another before dark energy's takeover will remain bound, but those that were not yet bound by the onset of the dark energy era will never become bound. Instead, they will simply accelerate away from one another, leading lonely existences in the great expanse of nothingness. The individual bound structures, like galaxies and groups/clusters of galaxies, will eventually merge to form one giant elliptical galaxy. The existing stars will die; new star formation will slow down to a trickle and then stop; gravitational interactions will eject most of the stars into the intergalactic abyss. Planets will spiral into their parent stars or stellar remnants, owing to decay by gravitational radiation. Even black holes, with extraordinarily long lifetimes, will eventually decay from Hawking radiation. In the end, only black dwarf stars and isolated masses too small to ignite nuclear fusion will remain, sparsely populated and disconnected from one another in this empty, ever-expanding cosmos.
These final-state corpses will exist even googols of years onward, continuing to persist as dark energy remains the dominant factor in our Universe. This last era, of dark energy domination, has already begun. Dark energy became important for the Universe's expansion 6 billion years ago, and began dominating the Universe's energy content around the time our Sun and Solar System were being born. The Universe may have six unique stages, but for the entirety of Earth's history, we've already been in the final one. Take a good look at the Universe around us. It will never be this rich — or this easy to access — ever again.
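Here is the short numerical sketch promised earlier, illustrating the three scaling laws (matter diluting as 1/a³, radiation as 1/a⁴, dark energy constant) and how they decide which era dominates. The present-day density fractions are rough, commonly quoted values assumed purely for illustration; they are not figures from this article.

```python
# Sketch of the scaling laws: matter dilutes as 1/a^3, radiation as 1/a^4,
# and dark energy stays constant. Density fractions are approximate,
# commonly quoted present-day values (a = 1 means today).
OMEGA_MATTER = 0.31
OMEGA_RADIATION = 9e-5
OMEGA_DARK_ENERGY = 0.69

def dominant_component(a):
    """Return the energy component that dominates at scale factor a."""
    densities = {
        "radiation":   OMEGA_RADIATION / a**4,
        "matter":      OMEGA_MATTER / a**3,
        "dark energy": OMEGA_DARK_ENERGY,
    }
    return max(densities, key=densities.get)

for a in (1e-5, 1e-3, 0.5, 1.0, 100.0):
    print(f"a = {a:g}: {dominant_component(a)} dominates")
```

Run it and the handoffs described above fall out directly: radiation dominates at tiny scale factors, matter takes over in between, and once a grows past roughly (0.31/0.69)^(1/3), a bit below today's size, dark energy wins and keeps winning forever.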
The Solar System and Beyond
Supidsara Duangchuai (702), Ai Thiti Hohirunkul (702)

The Sun

The Sun is the star at the center of the Solar System. It has a diameter of about 1,392,000 kilometers (865,000 mi), about 109 times that of Earth, and its mass (about 2 × 10³⁰ kilograms, 330,000 times that of Earth) accounts for about 99.86% of the total mass of the Solar System. About three quarters of the Sun's mass consists of hydrogen, while the rest is mostly helium. Less than 2% consists of heavier elements, including oxygen, carbon, neon, iron, and others. The Sun's color is white, although from the surface of the Earth it may appear yellow because of atmospheric scattering of blue light. Its stellar classification, based on spectral class, is G2V, and it is informally designated a yellow star because its visible radiation is most intense in the yellow-green portion of the spectrum.

Mercury

Mercury is the innermost and smallest planet in the Solar System, orbiting the Sun once every 87.969 Earth days. The orbit of Mercury has the highest eccentricity of all the Solar System planets, and it has the smallest axial tilt. It completes three rotations about its axis for every two orbits. The perihelion of Mercury's orbit precesses around the Sun at an excess of 43 arcseconds per century, a phenomenon that was explained in the 20th century by Albert Einstein's General Theory of Relativity. Mercury is bright when viewed from Earth, ranging from −2.3 to 5.7 in apparent magnitude, but is not easily seen, as its greatest angular separation from the Sun is only 28.3°. Since Mercury is normally lost in the glare of the Sun, unless there is a solar eclipse it can be viewed only in morning or evening twilight.

Venus

Venus is the second planet from the Sun, orbiting it every 224.7 Earth days. The planet is named after Venus, the Roman goddess of love and beauty. After the Moon, it is the brightest natural object in the night sky, reaching an apparent magnitude of −4.6, bright enough to cast shadows. Because Venus is an inferior planet from Earth, it never appears to venture far from the Sun: its elongation reaches a maximum of 47.8°. Venus reaches its maximum brightness shortly before sunrise or shortly after sunset, for which reason it has been known as the Morning Star or Evening Star.

Earth

Earth (or the Earth) is the third planet from the Sun, and the densest and fifth-largest of the eight planets in the Solar System. It is also the largest of the Solar System's four terrestrial planets. It is sometimes referred to as the World, the Blue Planet, or by its Latin name, Terra. Home to millions of species including humans, Earth is currently the only place in the universe where life is known to exist. The planet formed 4.54 billion years ago, and life appeared on its surface within a billion years.

Mars

Mars is the fourth planet from the Sun in the Solar System. The planet is named after the Roman god of war, Mars. It is often described as the "Red Planet", as the iron oxide prevalent on its surface gives it a reddish appearance. Mars is a terrestrial planet with a thin atmosphere, having surface features reminiscent both of the impact craters of the Moon and the volcanoes, valleys, deserts, and polar ice caps of Earth. The rotational period and seasonal cycles of Mars are likewise similar to those of Earth. Mars is the site of Olympus Mons, the highest known mountain within the Solar System, and of Valles Marineris, the largest canyon.
The smooth Borealis basin in the northern hemisphere covers 40% of the planet and may be a giant impact feature. Unlike Earth, Mars is now geologically and tectonically inactive.

Jupiter

Jupiter is the fifth planet from the Sun and the largest planet within the Solar System. It is a gas giant with a mass slightly less than one-thousandth of the Sun but two and a half times the mass of all the other planets in our Solar System combined. Jupiter is classified as a gas giant along with Saturn, Uranus and Neptune. Together, these four planets are sometimes referred to as the Jovian planets. The planet was known by astronomers of ancient times and was associated with the mythology and religious beliefs of many cultures. The Romans named the planet after the Roman god Jupiter. When viewed from Earth, Jupiter can reach an apparent magnitude of −2.94, making it on average the third-brightest object in the night sky after the Moon and Venus. (Mars can briefly match Jupiter's brightness at certain points in its orbit.)

Saturn

Saturn is the sixth planet from the Sun and the second largest planet in the Solar System, after Jupiter. Saturn is named after the Roman god Saturn, equated to the Greek Cronus (the Titan father of Zeus), the Babylonian Ninurta, and the Hindu Shani. Saturn's symbol represents the Roman god's sickle (Unicode: ♄). Saturn, along with Jupiter, Uranus, and Neptune, is classified as a gas giant. Together, these four planets are sometimes referred to as the Jovian, meaning "Jupiter-like", planets. Saturn has an average radius about nine times larger than the Earth's. While only one-eighth the average density of Earth, due to its larger volume Saturn's mass is just over ninety-five times greater than Earth's.

Uranus

Uranus is the seventh planet from the Sun, and the third-largest and fourth most massive planet in the Solar System. It is named after the ancient Greek deity of the sky Uranus (Ancient Greek: Οὐρανός), the father of Cronus (Saturn) and grandfather of Zeus (Jupiter). Though it is visible to the naked eye like the five classical planets, it was never recognized as a planet by ancient observers because of its dimness and slow orbit. Sir William Herschel announced its discovery on March 13, 1781, expanding the known boundaries of the Solar System for the first time in modern history. Uranus was also the first planet discovered with a telescope.

Neptune

Neptune is the eighth and farthest planet from the Sun in our Solar System. Named for the Roman god of the sea, it is the fourth-largest planet by diameter and the third-largest by mass. Neptune is 17 times the mass of Earth and is slightly more massive than its near-twin Uranus, which is 15 Earth masses and not as dense. On average, Neptune orbits the Sun at a distance of 30.1 AU, approximately 30 times the Earth-Sun distance. Its astronomical symbol is ♆, a stylized version of the god Neptune's trident.

Pluto

Pluto, formal designation 134340 Pluto, is the second-largest known dwarf planet in the Solar System (after Eris) and the tenth-largest body observed directly orbiting the Sun. Originally classified as a planet, Pluto is now considered the largest member of a distinct population known as the Kuiper belt. Like other members of the Kuiper belt, Pluto is composed primarily of rock and ice and is relatively small: approximately a fifth the mass of the Earth's Moon and a third its volume.
It has an eccentric and highly inclined orbit that takes it from 30 to 49 AU (4.4–7.4 billion km) from the Sun. This causes Pluto to periodically come closer to the Sun than Neptune.
The Ebbinghaus illusion or Titchener circles is an optical illusion of relative size perception. Named for its discoverer, the German psychologist Hermann Ebbinghaus (1850–1909), the illusion was popularized in the English-speaking world by Edward B. Titchener in a 1901 textbook of experimental psychology, hence its alternative name. In the best-known version of the illusion, two circles of identical size are placed near to each other, and one is surrounded by large circles while the other is surrounded by small circles. As a result of the juxtaposition of circles, the central circle surrounded by large circles appears smaller than the central circle surrounded by small circles.

Recent work suggests that two other critical factors involved in the perception of the Ebbinghaus illusion are the distance of the surrounding circles from the central circle and the completeness of the annulus, which makes the illusion comparable in nature to the Delboeuf illusion. Regardless of relative size, if the surrounding circles are closer to the central circle, the central circle appears larger, and if the surrounding circles are far away, the central circle appears smaller. While the distance variable appears to be an active factor in the perception of relative size, the size of the surrounding circles limits how close they can be to the central circle, resulting in many studies confounding the two variables.

Possible explanations

The Ebbinghaus illusion has played a crucial role in the debate over the existence of separate pathways in the brain for perception and action (for more details see the Two Streams hypothesis). It has been argued that the Ebbinghaus illusion distorts perception of size, but not action. A study by the neuroscientist Melvyn A. Goodale showed that when a subject is required to respond to a physical model of the illusion by grasping the central circle, the scaling of the grip aperture is unaffected by the perceived size distortion. While some studies confirm the insensitivity of grip scaling to size-contrast illusions like the Ebbinghaus illusion, other work suggests that both action and perception are fooled by the illusion.

Neuroimaging research suggests an inverse correlation between an individual's receptivity to the Ebbinghaus and similar illusions (such as the Ponzo illusion) and the highly variable size of the individual's primary visual cortex. Developmental research suggests that the illusion is dependent on context-sensitivity: the illusion was found more often to cause relative-size deception in university students, who have high context-sensitivity, than in children aged 10 and under. A genome-wide association study found 70 genetic variants linked to the perception of the Ebbinghaus illusion.

The winner of the 2014 Best Illusion of the Year Contest, submitted by Christopher D. Blair, Gideon P. Caplovitz, and Ryan E.B. Mruczek, of the University of Nevada, Reno, animated the Ebbinghaus illusion, putting it in motion.

An exception with opposite visual effects

A new relative size illusion was discovered by the Italian visual researcher Gianni A. Sarcone in 2013. It contradicts the Ebbinghaus illusion (1898), also known as Titchener circles, and the Obonai square illusion (1954). In fact, the central test shape (a cross) surrounded by large squares appears larger instead of smaller. Sarcone's Cross illusion consists of a cross (the test shape) surrounded by sets of squares of distinct size (the inducing shapes).
As shown in the accompanying diagram, the three blue crosses are exactly the same size; however, the one on the left (fig. 1) tends to appear larger. The illusion works even when the small squares completely occlude the blue cross (see fig. 3). In conclusion, there isn't always a correlation between the size of the surrounding shapes and the perceived relative size of the test shape.

- Roberts B, Harris MG, Yates TA (2005). "The roles of inducer size and distance in the Ebbinghaus illusion (Titchener circles)". Perception. 34 (7): 847–56. doi:10.1068/p5273. PMID 16124270. S2CID 26626773.
- M.A. Goodale; A.D. Milner (January 1992). "Separate pathways for perception and action". Trends in Neurosciences. 15 (1): 20–25. CiteSeerX 10.1.1.207.6873. doi:10.1016/0166-2236(92)90344-8. PMID 1374953. S2CID 793980.
- MA Goodale (2011). "Transforming vision into action". Vision Res. 51 (14): 1567–87. doi:10.1016/j.visres.2010.07.027. PMID 20691202.
- V.H. Franz; F. Scharnowski; K.R. Gegenfurtner (2005). "Illusion effects on grasping are temporally constant not dynamic" (PDF). J Exp Psychol Hum Percept Perform. 31 (6): 1359–1378. doi:10.1037/0096-1523.31.6.1359. PMID 16366795.
- D Samuel Schwarzkopf; Chen Song; Geraint Rees (January 2011). "The surface area of human V1 predicts the subjective experience of object size". Nature Neuroscience. 14 (1): 28–30. doi:10.1038/nn.2706. PMC 3012031. PMID 21131954.
- Martin J. Doherty; Nicola M. Campbell; Hiromi Tsuji; William A. Phillips (2010). "The Ebbinghaus illusion deceives adults but not young children" (PDF). Developmental Science. 13 (5): 714–721. doi:10.1111/j.1467-7687.2009.00931.x. hdl:1893/1473. PMID 20712737.
- Zhu, Zijian; Chen, Biqing; Na, Ren; Fang, Wan; Zhang, Wenxia; Zhou, Qin; Zhou, Shanbi; Lei, Han; Huang, Ailong; Chen, Tingmei; Ni, Dongsheng (2020-09-16). "A genome-wide association study reveals a substantial genetic basis underlying the Ebbinghaus illusion". Journal of Human Genetics. 66 (3): 261–271. doi:10.1038/s10038-020-00827-4. ISSN 1435-232X. PMID 32939015. S2CID 221770542.
- Gonzalez, Robbie (21 May 2014). "A New Optical Illusion Demonstrates How Gullible Our Brains Really Are". io9. Retrieved 2015-03-01.
The short answer is phase angle: the time delay between a voltage and a current in a circuit. How can an angle be a time? That's part of what I'll need to explain.

First, consider a resistor. If you apply a voltage to it, a certain current will flow that you can determine by Ohm's law. If you know the instantaneous voltage across the resistor, you can derive the current and you can find the power: how much work that electricity will do. That's fine for DC current through resistors. But components like capacitors and inductors with an AC current don't obey Ohm's law. Take a capacitor. Current only flows when the capacitor is charging or discharging, so the current through it relates to the rate of change of the voltage (i = C dv/dt), not the instantaneous voltage level. That means that if you plot the sine wave voltage against the current, the peak of the voltage will be where the current crosses zero, and the peak of the current will be where the voltage crosses zero. You can see that in this image, where the yellow wave is voltage (V) and the green wave is current (I). See how the green peak is where the yellow curve crosses zero? And the yellow peak is where the green curve crosses zero?

These linked sine and cosine waves might remind you of something — the X and Y coordinates of a point being swept around a circle at a constant rate, and that's our connection to complex numbers. By the end of the post, you'll see it isn't all that complicated and the "imaginary" quantity isn't imaginary at all.

Start with an audio signal of someone speaking and feed that into your circuit. It is awash with different frequencies that change constantly. If you had a circuit with only resistors in it, you could pick a point in time, find all the frequency components present or the instantaneous amplitude, derive the instantaneous currents, and you could use conventional techniques on it. You'd just have to do it over and over and over again. If the circuit involves inductors or capacitors, whose behavior depends on more than just the voltage across them, this becomes very difficult very quickly. Instead, it is easier to start with a sine wave at a single frequency and assume that a complex signal of many different frequencies is just the sum of many single sines.

One way to think of a capacitor is to consider it a resistor that has higher resistance at lower frequencies. An inductor acts like a resistor that gets larger at higher frequencies. Because we are only considering a single frequency, we can convert any capacitance and inductance values to an impedance: a resistance that is only good at the frequency of interest. What's more, we can represent impedance as a complex number so that we can track the phase angle of the circuit, which directly relates to a particular time delay between voltage and current. For a true resistor, the imaginary part is 0. That makes sense because the voltage and current are in phase and therefore there is no time delay at all. For a pure capacitor or inductor, the real part is zero. Real circuits will have combinations and thus will have a combination of real and imaginary parts.

Numbers like that are complex numbers and you can write them in several different ways. The first thing to remember is that the word imaginary is just an arbitrary term. Maybe it is better to forget the normal meaning of the word imaginary. These imaginary quantities are not some kind of magic electricity or resistance. We use imaginary numbers to represent time delays in circuits. That's all.
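If you want to see the rate-of-change idea with numbers instead of a plot, here is a minimal Python sketch that evaluates i = C dv/dt numerically for a sine-wave voltage. The 1 kHz, 1 µF, and 1 V peak values are arbitrary choices for illustration, not values from this article.

```python
import math

# Numerical sketch of i = C * dv/dt for a sine-wave voltage across an
# ideal capacitor. The current comes out as a cosine: it peaks where
# the voltage crosses zero, a quarter cycle (90 degrees) away.
C = 1e-6                      # farads
OMEGA = 2 * math.pi * 1000    # radians/second (1 kHz)

def v(t):
    return math.sin(OMEGA * t)                       # volts

def i(t, dt=1e-9):
    return C * (v(t + dt) - v(t - dt)) / (2 * dt)    # central-difference dv/dt

T = 2 * math.pi / OMEGA  # one period
print(f"t = 0:   v = {v(0):+.3f} V   i = {1000 * i(0):+.3f} mA")      # v zero, i at peak
print(f"t = T/4: v = {v(T/4):+.3f} V   i = {1000 * i(T/4):+.3f} mA")  # v at peak, i near zero
```

At t = 0 the voltage is zero but the current is at its maximum (C times the slope of the sine), and a quarter cycle later the roles have swapped, which is exactly the 90 degree shift the waveform plot shows.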
There is a long story about what imaginary numbers mean in pure math and why they are called imaginary. You can look that up if you are a math-head, but you should know that math books use the symbol i for the imaginary part of a complex number. However, since electrical engineers use i for current, we use j instead. You just have to remember when reading math books, you'll see i and it isn't a current, and it is the same as j in electrical books.

There are several ways to represent a complex number. The simplest way is to write the real part and the imaginary part as being added together along with j. So consider this:

5 + 3j

We say the real part is 5 and the imaginary part is 3. Numbers written in this form are in rectangular format. You can plot such a number on a graph, with the real part along the X-axis and the imaginary part along the Y-axis. That leads to the second way to write a complex number: polar notation. If the point on the graph is 5 + 3j, you can note that a vector can represent the same point. It will have a length or magnitude and an angle (the angle it makes with the X-axis of the graph). In this case, the magnitude is about 5.83 and the angle is just a little under 31 degrees. This is interesting because it is a vector and there are a lot of good math tools to manipulate vectors. It is going to become really important in a minute because the angle can correspond to a phase angle in a circuit, and the magnitude has a direct physical relationship as well.

Remember that I said we do an AC analysis at a single frequency? If you plot the AC voltage across and the current going through a resistor at some frequency, the two sine waves will line up exactly. That's because a resistor doesn't time delay anything. We'd say the phase angle across the resistor is zero degrees. However, for a capacitor, the current will appear to rise before the voltage by some amount of time. This makes sense if you think about your intuition about capacitors at DC. When a capacitor is discharged, it has no voltage across it, but it will consume a lot of current — it temporarily looks like a short circuit. As the charge builds, the voltage rises but the current drops, until the capacitor is fully charged. At that point, the voltage is at a maximum, but the current is zero, or nearly so. Inductors have the opposite arrangement: voltage leads current, so the curves would look the same but the V curve is now the I and the I curve is now the V. You can remember that with the simple mnemonic "ELI the ICE man", where E is voltage just like in Ohm's law.

When you talk about phase shift in a circuit, you really mean how much the current leads or lags the voltage at a given frequency. That's a key idea: phase shift or angle is the amount of time the current leads or lags the voltage. You can also measure phase between other things like two different voltage sources, but generally when you say "this circuit has a phase shift of 22 degrees" you mean the voltage vs the current time delay. Keep in mind a sine wave is like a circle bent to fit a line. So if the start of the sine wave is at 0 degrees, the top of the positive peak is 90 degrees. The second zero crossing is 180 degrees, and the negative peak is 270 degrees, just like the points on a circle. Since the sine wave is at a fixed frequency, putting something at a particular degree mark is the same as expressing a time. In the case of a resistor, the shift is 0 degrees. So in complex notation, a 100 ohm resistor is 100 + 0j. It can also be 100∠0.
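Python's built-in complex type will happily do the rectangular-to-polar arithmetic for you; here is a minimal check of the 5 + 3j example above.

```python
import cmath
import math

# Rectangular <-> polar conversion with Python's complex type,
# checking the 5 + 3j example from the text.
z = 5 + 3j

magnitude, angle = cmath.polar(z)                    # angle comes back in radians
print(f"magnitude = {magnitude:.2f}")                # 5.83
print(f"angle     = {math.degrees(angle):.1f} deg")  # 31.0

# Polar back to rectangular recovers the original (up to rounding).
print(cmath.rect(magnitude, angle))
```

Note that Python, like the math books, spells the imaginary unit with a trailing j in literals but calls the constant 1j, so there is no i-versus-j confusion to manage in code.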
For a capacitor, the current rises before the voltage by 90 degrees, so a capacitor has a phase shift of −90 degrees. But what's the magnitude? You probably learned that the capacitive reactance is equal to 1/(2πfC), where f is the frequency in Hz. That's the magnitude of the polar form. Of course, since −90 degrees is straight down the number line, it is also the imaginary part of the rectangular form (and the real part is zero). If the capacitive reactance (Xc) is equal to 50, for example, then you could write 0 − 50j or 50∠−90. Inductors work the same way, but the reactance (Xl) is 2πfL and the phase angle is 90 degrees. So an inductor with the same reactance would be 0 + 50j or 50∠90.

Finding the Power

Let's look at a quick example of what these phase angles are good for: calculating power. You know that power is voltage times current. So if a capacitor has 1 V across it (peak) and draws 1 A through it (peak), is the power 1 watt? No, because it doesn't draw 1 V at 1 A at the same time. Consider a simulation of a series RC circuit. The traces show the 90 degree phase shift very clearly (the green trace is voltage and the yellow one is current). The peak voltage is 1.85 V and the current peaks at about 4.65 mA. The product of the voltage times the current is 8.6 mW. But that's not the right answer. The power is actually 4.29 mW. In an ideal capacitor, power isn't consumed. It is stored and released, which is why the power goes negative. Real capacitors, of course, exhibit some loss. Note that the power supply doesn't provide 4.29 mW, but much less. That's because the resistor is the only thing consuming power. The voltage and current are in phase for it, and some of the power it dissipates is coming from the capacitor's stored charge.

The magnitude of the vector is usable in Ohm's law. For example, at 40 Hz, the Xc of the example circuit is just under 400 ohms. So the total complex impedance for the RC circuit is 1000 − 400j. If you are adept with vectors you could do polar by writing 1000∠0 + 400∠−90. However, it is usually easier to write the rectangular version and convert to polar (Wolfram Alpha is good at that; just remember to use i instead of j). The magnitude is just the Pythagorean theorem and the angle is simple trig. Where R and J are the real and imaginary parts, respectively: magnitude = √(R² + J²) and angle = arctan(J/R). Our example, then, is 1077∠−21.8.

So what's the power coming out of the voltage source? Power is E²/R (or, actually, E²/Z in this case). With the 5 V peak source, that's 25/1077 = 23 mW peak. The simulation shows 22.29 mW, and since I rounded a few values, that's close enough. That's not all of it, of course, but it is all you need to know for a lot of purposes. Many hobby-level electronic texts skimp on the details and just work with magnitudes. For simple circuits this can work, but for something complex (no pun intended), it gets hairy fast. By the way, this example showed two elements in series. However, you can add reactances in parallel just like you do resistors in parallel.

The key concepts you need to remember are:

- The analysis of an AC circuit mostly occurs at a single frequency with a sine wave input.
- Imaginary numbers aren't imaginary.
- Magnitudes of complex numbers in polar form can be treated like a resistance.
- Phase angle is the time delay between the voltage and the current waveform.

There are a lot of details I glossed over. You probably don't need to know how i is really the square root of negative one.
Or how Euler's number plays into this, and the simplicity of integrating and differentiating sine waves written with an amplitude and a phase angle. If you are interested in math history, imaginary numbers have quite a story behind them. If you want something more practical, Khan Academy has some useful videos. However, what's covered here should be all you need to know to work with AC circuits.
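As a worked check of the series RC example above: the article gives R = 1000 ohms and an Xc just under 400 ohms at 40 Hz. The 10 µF capacitance and the 5 V peak source in this Python sketch are my inferences from those numbers, not values stated outright.

```python
import math

# Worked check of the series RC example: R = 1000 ohms, f = 40 Hz.
R = 1000.0       # ohms
C = 10e-6        # farads (assumed, to make Xc just under 400 ohms)
F = 40.0         # hertz
V_PEAK = 5.0     # volts (implied by the 25/1077 arithmetic in the text)

xc = 1 / (2 * math.pi * F * C)    # capacitive reactance, about 398 ohms
z = complex(R, -xc)               # series impedance, about 1000 - 398j

magnitude = abs(z)                                  # about 1077
angle = math.degrees(math.atan2(z.imag, z.real))    # about -21.7 degrees

print(f"Xc = {xc:.0f} ohms")
print(f"Z  = {magnitude:.0f} ohms at {angle:.1f} degrees")
print(f"P  = {1000 * V_PEAK**2 / magnitude:.1f} mW peak")  # about 23 mW
```

The printed magnitude, angle, and peak power land on the same 1077∠−21.8 and roughly 23 mW figures worked out by hand above, modulo the rounding of Xc to 400.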
In mathematics, spherical harmonics are the angular portion of a set of solutions to Laplace's equation. Represented in a system of spherical coordinates, Laplace's spherical harmonics are a specific set of spherical harmonics that forms an orthogonal system, first introduced by Pierre-Simon de Laplace in 1782. Spherical harmonics are important in many theoretical and practical applications, particularly in the computation of atomic orbital electron configurations, representation of gravitational fields, geoids, and the magnetic fields of planetary bodies and stars, and characterization of the cosmic microwave background radiation. In 3D computer graphics, spherical harmonics play a special role in a wide variety of topics including indirect lighting (ambient occlusion, global illumination, precomputed radiance transfer, etc.) and modelling of 3D shapes.

History

Spherical harmonics were first investigated in connection with the Newtonian potential of Newton's law of universal gravitation in three dimensions. In 1782, Pierre-Simon de Laplace had, in his Mécanique Céleste, determined that the gravitational potential at a point x associated to a set of point masses mi located at points xi was given by

V(x) = Σi mi / |xi − x|.

Each term in the above summation is an individual Newtonian potential for a point mass. Just prior to that time, Adrien-Marie Legendre had investigated the expansion of the Newtonian potential in powers of r = |x| and r1 = |x1|. He discovered that if r ≤ r1 then

1/|x1 − x| = (1/r1) Σℓ (r/r1)^ℓ Pℓ(cos γ), summed over ℓ = 0, 1, 2, ...,

where γ is the angle between the vectors x and x1. The functions Pℓ are the Legendre polynomials, and they are a special case of spherical harmonics. Subsequently, in his 1782 memoir, Laplace investigated these coefficients using spherical coordinates to represent the angle γ between x1 and x. (See Applications of Legendre polynomials in physics for a more detailed analysis.)

In 1867, William Thomson (Lord Kelvin) and Peter Guthrie Tait introduced the solid spherical harmonics in their Treatise on Natural Philosophy, and also first introduced the name of "spherical harmonics" for these functions. The solid harmonics were homogeneous polynomial solutions of Laplace's equation ∇²f = 0. By examining Laplace's equation in spherical coordinates, Thomson and Tait recovered Laplace's spherical harmonics. The term "Laplace's coefficients" was employed by William Whewell to describe the particular system of solutions introduced along these lines, whereas others reserved this designation for the zonal spherical harmonics that had properly been introduced by Laplace and Legendre.

The 19th century development of Fourier series made possible the solution of a wide variety of physical problems in rectangular domains, such as the solution of the heat equation and wave equation. This could be achieved by expansion of functions in series of trigonometric functions. Whereas the trigonometric functions in a Fourier series represent the fundamental modes of vibration in a string, the spherical harmonics represent the fundamental modes of vibration of a sphere in much the same way.
Many aspects of the theory of Fourier series could be generalized by taking expansions in spherical harmonics rather than trigonometric functions. This was a boon for problems possessing spherical symmetry, such as those of celestial mechanics originally studied by Laplace and Legendre. The prevalence of spherical harmonics already in physics set the stage for their later importance in the 20th century birth of quantum mechanics: the spherical harmonics are eigenfunctions of the square of the orbital angular momentum operator.

Laplace's spherical harmonics

Consider the problem of finding solutions of the form f(r,θ,φ) = R(r)Y(θ,φ). By separation of variables, two differential equations result by imposing Laplace's equation:

(1/R) d/dr(r² dR/dr) = λ  and  (1/Y)(1/sin θ) ∂/∂θ(sin θ ∂Y/∂θ) + (1/Y)(1/sin²θ) ∂²Y/∂φ² = −λ.

The second equation can be simplified under the assumption that Y has the form Y(θ,φ) = Θ(θ)Φ(φ). Applying separation of variables again to the second equation gives way to the pair of differential equations

(1/Φ) d²Φ/dφ² = −m²  and  λ sin²θ + (sin θ/Θ) d/dθ(sin θ dΘ/dθ) = m²

for some number m. A priori, m is a complex constant, but because Φ must be a periodic function whose period evenly divides 2π, m is necessarily an integer and Φ is a linear combination of the complex exponentials e±imφ. The solution function Y(θ,φ) is regular at the poles of the sphere, where θ = 0, π. Imposing this regularity in the solution Θ of the second equation at the boundary points of the domain is a Sturm–Liouville problem that forces the parameter λ to be of the form λ = ℓ(ℓ+1) for some non-negative integer with ℓ ≥ |m|; this is also explained below in terms of the orbital angular momentum. Furthermore, a change of variables t = cos θ transforms this equation into the Legendre equation, whose solution is a multiple of the associated Legendre polynomial Pℓm(cos θ). Finally, the equation for R has solutions of the form R(r) = Arℓ + Br−ℓ−1; requiring the solution to be regular throughout R3 forces B = 0.

Here the solution was assumed to have the special form Y(θ,φ) = Θ(θ)Φ(φ). For a given value of ℓ, there are 2ℓ+1 independent solutions of this form, one for each integer m with −ℓ ≤ m ≤ ℓ. These angular solutions are a product of trigonometric functions, here represented as a complex exponential, and associated Legendre polynomials:

Yℓm(θ, φ) = N e^{imφ} Pℓm(cos θ).

Here Yℓm is called a spherical harmonic function of degree ℓ and order m, Pℓm is an associated Legendre polynomial, N is a normalization constant, and θ and φ represent colatitude and longitude, respectively. In particular, the colatitude θ, or polar angle, ranges from 0 at the North Pole to π at the South Pole, assuming the value of π/2 at the Equator, and the longitude φ, or azimuth, may assume all values with 0 ≤ φ < 2π. For a fixed integer ℓ, every solution Y(θ,φ) of the eigenvalue problem is a linear combination of the Yℓm. In fact, for any such solution, rℓY(θ,φ) is the expression in spherical coordinates of a homogeneous polynomial that is harmonic (see below), and so counting dimensions shows that there are 2ℓ+1 linearly independent such polynomials.

The general solution to Laplace's equation in a ball centered at the origin is a linear combination of the spherical harmonic functions multiplied by the appropriate scale factor rℓ:

f(r, θ, φ) = Σℓ Σm fℓm rℓ Yℓm(θ, φ),

where the fℓm are constants.

Orbital angular momentum

In quantum mechanics it is conventional and convenient to work in units in which ℏ = 1. Laplace's spherical harmonics are the joint eigenfunctions of the square of the orbital angular momentum and the generator of rotations about the azimuthal axis:

L² Yℓm = ℓ(ℓ+1) Yℓm  and  Lz Yℓm = m Yℓm.

Furthermore, L² is a positive operator.
If Y is a joint eigenfunction of L² and Lz, then by definition

Lz Y = mY  and  L² Y = λY

for some real numbers m and λ. Here m must in fact be an integer, for Y must be periodic in the coordinate φ with period a number that evenly divides 2π. Furthermore, since L² = Lx² + Ly² + Lz² and each of Lx, Ly, Lz are self-adjoint, it follows that λ ≥ m². Denote this joint eigenspace by Eλ,m, and define the raising and lowering operators by

L+ = Lx + iLy  and  L− = Lx − iLy.

Then L+ and L− commute with L², and the Lie algebra generated by L+, L−, Lz is the special linear Lie algebra, with commutation relations

[Lz, L+] = L+, [Lz, L−] = −L−, [L+, L−] = 2Lz.

Thus L+ : Eλ,m → Eλ,m+1 (it is a "raising operator") and L− : Eλ,m → Eλ,m−1 (it is a "lowering operator"). In particular, L+k : Eλ,m → Eλ,m+k must be zero for k sufficiently large, because the inequality λ ≥ m² must hold in each of the nontrivial joint eigenspaces. Let Y ∈ Eλ,m be a nonzero joint eigenfunction, and let k be the least integer such that L+k Y vanishes. Applying the identity L−L+ = L² − Lz² − Lz to the last nonvanishing vector, it follows that λ = ℓ(ℓ+1) for the positive integer ℓ = m+k.

Orthogonality and normalization

Several different normalizations are in common use for the Laplace spherical harmonic functions. Throughout the section, we use the standard convention that for m > 0 (see associated Legendre polynomials)

Pℓ−m = (−1)^m ((ℓ−m)!/(ℓ+m)!) Pℓm,

which is the natural normalization given by Rodrigues' formula.

In seismology, the Laplace spherical harmonics are generally defined as (this is the convention used in this article)

Yℓm(θ, φ) = √((2ℓ+1)/(4π) · (ℓ−m)!/(ℓ+m)!) Pℓm(cos θ) e^{imφ},

which are orthonormal:

∫ Yℓm Yℓ′m′* dΩ = δℓℓ′ δmm′,

where δij is the Kronecker delta and dΩ = sin θ dφ dθ. This normalization is used in quantum mechanics because it ensures that probability is normalized, i.e. ∫ |Yℓm|² dΩ = 1.

The disciplines of geodesy and spectral analysis use

Yℓm(θ, φ) = √((2ℓ+1) (ℓ−m)!/(ℓ+m)!) Pℓm(cos θ) e^{imφ},

which possess unit power: (1/4π) ∫ Yℓm Yℓ′m′* dΩ = δℓℓ′ δmm′.

The magnetics community, in contrast, uses Schmidt semi-normalized harmonics

Yℓm(θ, φ) = √((ℓ−m)!/(ℓ+m)!) Pℓm(cos θ) e^{imφ},

which have the normalization ∫ Yℓm Yℓ′m′* dΩ = (4π/(2ℓ+1)) δℓℓ′ δmm′. In quantum mechanics this normalization is sometimes used as well, and is named Racah's normalization after Giulio Racah.

It can be shown that all of the above normalized spherical harmonic functions satisfy

Yℓm*(θ, φ) = (−1)^m Yℓ,−m(θ, φ),

where the superscript * denotes complex conjugation. Alternatively, this equation follows from the relation of the spherical harmonic functions with the Wigner D-matrix.

One source of confusion with the definition of the spherical harmonic functions concerns a phase factor of (−1)^m for m > 0, 1 otherwise, commonly referred to as the Condon–Shortley phase in the quantum mechanical literature. In the quantum mechanics community, it is common practice to either include this phase factor in the definition of the associated Legendre polynomials, or to append it to the definition of the spherical harmonic functions. There is no requirement to use the Condon–Shortley phase in the definition of the spherical harmonic functions, but including it can simplify some quantum mechanical operations, especially the application of raising and lowering operators. The geodesy and magnetics communities never include the Condon–Shortley phase factor in their definitions of the spherical harmonic functions nor in the ones of the associated Legendre polynomials.

A real basis of spherical harmonics Yℓm can be defined in terms of their complex analogues Yℓ^m by setting

Yℓm = (i/√2)(Yℓ^m − (−1)^m Yℓ^−m)  for m < 0,
Yℓm = Yℓ^0  for m = 0,
Yℓm = (1/√2)(Yℓ^−m + (−1)^m Yℓ^m)  for m > 0.

The Condon–Shortley phase convention is used here for consistency. The corresponding inverse equations are

Yℓ^m = (1/√2)(Yℓ|m| − i Yℓ,−|m|)  for m < 0,
Yℓ^m = Yℓ0  for m = 0,
Yℓ^m = ((−1)^m/√2)(Yℓm + i Yℓ,−m)  for m > 0.

The real spherical harmonics are sometimes known as tesseral spherical harmonics. These functions have the same orthonormality properties as the complex ones above. The harmonics with m > 0 are said to be of cosine type, and those with m < 0 of sine type.
The reason for this can be seen by writing the functions in terms of the Legendre polynomials: up to normalization, the real harmonics of order m > 0 are proportional to cos(mφ) Pℓm(cos θ), and those of order m < 0 to sin(|m|φ) Pℓ|m|(cos θ). The same sine and cosine factors can be also seen in the following subsection that deals with the Cartesian representation. Published tables of the real spherical harmonics for the first few degrees can be seen to be consistent with the output of the equations above.

Use in quantum chemistry

As is known from the analytic solutions for the hydrogen atom, the eigenfunctions of the angular part of the wave function are spherical harmonics. However, the solutions of the non-relativistic Schrödinger equation without magnetic terms can be made real. This is why the real forms are extensively used in basis functions for quantum chemistry, as the programs don't then need to use complex algebra. Here, it is important to note that the real functions span the same space as the complex ones would.

Spherical harmonics in Cartesian form

Normalized spherical harmonics (with the Condon–Shortley phase) can also be written in Cartesian form, as polynomials in x, y, and z of degree ℓ divided by rℓ; for m = 0 the expression reduces to a polynomial in z/r alone. Forming the real spherical harmonics from these Cartesian expressions, it is seen that for m > 0 only the cosine terms are included, and for m < 0 only the sine terms are included.

Spherical harmonics expansion

The Laplace spherical harmonics form a complete set of orthonormal functions and thus form an orthonormal basis of the Hilbert space of square-integrable functions. On the unit sphere, any square-integrable function f can thus be expanded as a linear combination of these:

f(θ, φ) = Σℓ Σm fℓm Yℓm(θ, φ).

This expansion holds in the sense of mean-square convergence (convergence in L² of the sphere), which is to say that the integral of |f minus the partial sums|² over the sphere tends to zero. The expansion coefficients are the analogs of Fourier coefficients, and can be obtained by multiplying the above equation by the complex conjugate of a spherical harmonic, integrating over the solid angle Ω, and utilizing the above orthogonality relationships. This is justified rigorously by basic Hilbert space theory. For the case of orthonormalized harmonics, this gives:

fℓm = ∫ f(θ, φ) Yℓm*(θ, φ) dΩ.

A square-integrable function f can also be expanded in terms of the real harmonics Yℓm above as a sum of the same form. The convergence of the series holds again in the same sense, but the benefit of the real expansion is that for real functions f the expansion coefficients become real.

Power spectrum in signal processing

The total power of a function f is defined in the signal processing literature as the integral of the function squared, divided by the area of its domain. Using the orthonormality properties of the real unit-power spherical harmonic functions, it is straightforward to verify that the total power of a function defined on the unit sphere is related to its spectral coefficients by a generalization of Parseval's theorem:

(1/4π) ∫ |f(θ, φ)|² dΩ = Σℓ Sff(ℓ),

where

Sff(ℓ) = (1/(2ℓ+1)) Σm fℓm²

is defined as the angular power spectrum. In a similar manner, one can define the cross-power of two functions f and g as

(1/4π) ∫ f(θ, φ) g*(θ, φ) dΩ = Σℓ Sfg(ℓ),

where

Sfg(ℓ) = (1/(2ℓ+1)) Σm fℓm gℓm

is defined as the cross-power spectrum. If the functions f and g have a zero mean (i.e., the spectral coefficients f00 and g00 are zero), then Sff(ℓ) and Sfg(ℓ) represent the contributions to the function's variance and covariance for degree ℓ, respectively. It is common that the (cross-)power spectrum is well approximated by a power law of the form

Sff(ℓ) = C ℓ^β.

When β = 0, the spectrum is "white" as each degree possesses equal power. When β < 0, the spectrum is termed "red" as there is more power at the low degrees with long wavelengths than higher degrees.
Finally, when β > 0, the spectrum is termed "blue". The condition on the order of growth of S_ff(ℓ) is related to the order of differentiability of f in the next section.

Differentiability properties

One can also understand the differentiability properties of the original function f in terms of the asymptotics of S_ff(ℓ). In particular, if S_ff(ℓ) decays faster than any rational function of ℓ as ℓ → ∞, then f is infinitely differentiable. If, furthermore, S_ff(ℓ) decays exponentially, then f is actually real analytic on the sphere. The general technique is to use the theory of Sobolev spaces. Statements relating the growth of the S_ff(ℓ) to differentiability are then similar to analogous results on the growth of the coefficients of Fourier series. Specifically, if

Σ_{ℓ=0}^{∞} (1 + ℓ²)^s S_ff(ℓ) < ∞,

then f is in the Sobolev space H^s(S²). In particular, the Sobolev embedding theorem implies that f is infinitely differentiable provided that

S_ff(ℓ) = O(ℓ^{−s}) as ℓ → ∞

for all s.

Addition theorem

A mathematical result of considerable interest and use is called the addition theorem for spherical harmonics. This is a generalization of the trigonometric identity

cos(θ′ − θ) = cos θ′ cos θ + sin θ′ sin θ,

in which the role of the trigonometric functions appearing on the right-hand side is played by the spherical harmonics and that of the left-hand side is played by the Legendre polynomials.

Consider two unit vectors x and y, having spherical coordinates (θ, φ) and (θ′, φ′), respectively. The addition theorem states

P_ℓ(x · y) = (4π/(2ℓ+1)) Σ_{m=−ℓ}^{ℓ} Y_ℓ^m(θ, φ) Y_ℓ^{m *}(θ′, φ′),   (1)

where P_ℓ is the Legendre polynomial of degree ℓ. This expression is valid for both real and complex harmonics. The result can be proven analytically, using the properties of the Poisson kernel in the unit ball, or geometrically by applying a rotation to the vector y so that it points along the z-axis, and then directly calculating the right-hand side. In particular, when x = y, this gives Unsöld's theorem

Σ_{m=−ℓ}^{ℓ} |Y_ℓ^m(θ, φ)|² = (2ℓ+1)/(4π),

which generalizes the identity cos²θ + sin²θ = 1 to two dimensions.

In the expansion (1), the left-hand side P_ℓ(x · y) is a constant multiple of the degree ℓ zonal spherical harmonic. From this perspective, one has the following generalization to higher dimensions. Let Y_j be an arbitrary orthonormal basis of the space H_ℓ of degree ℓ spherical harmonics on the n-sphere. Then Z_x^{(ℓ)}, the degree ℓ zonal harmonic corresponding to the unit vector x, decomposes as

Z_x^{(ℓ)}(y) = Σ_j Y_j(x)* Y_j(y).

Furthermore, the zonal harmonic is given as a constant multiple of the appropriate Gegenbauer polynomial:

Z_x^{(ℓ)}(y) = ((2ℓ + n − 2)/(n − 2)) (1/ω_{n−1}) C_ℓ^{((n−2)/2)}(x · y),

where ω_{n−1} is the volume of the (n−1)-sphere.

Clebsch–Gordan coefficients

The Clebsch–Gordan coefficients are the coefficients appearing in the expansion of the product of two spherical harmonics in terms of spherical harmonics itself. A variety of techniques are available for doing essentially the same calculation, including the Wigner 3-jm symbol, the Racah coefficients, and the Slater integrals. Abstractly, the Clebsch–Gordan coefficients express the tensor product of two irreducible representations of the rotation group as a sum of irreducible representations: suitably normalized, the coefficients are then the multiplicities.

Parity

The spherical harmonics have well defined parity in the sense that they are either even or odd with respect to reflection about the origin. Reflection about the origin is represented by the operator PΨ(r) = Ψ(−r). For the spherical angles, this corresponds to the replacement (θ, φ) → (π − θ, π + φ). The associated Legendre polynomials give (−1)^{ℓ+m} and the exponential function gives (−1)^m, which together give the spherical harmonics a parity of (−1)^ℓ:

Y_ℓ^m(π − θ, π + φ) = (−1)^ℓ Y_ℓ^m(θ, φ).

This remains true for spherical harmonics in higher dimensions: applying a point reflection to a spherical harmonic of degree ℓ changes the sign by a factor of (−1)^ℓ.
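Both the addition theorem and the parity rule are easy to spot-check numerically. The sketch below is our own addition, not part of the article; it assumes SciPy's sph_harm and eval_legendre with the angle convention noted earlier (azimuth before polar angle), and the helper name unit is ours.

    import numpy as np
    from scipy.special import sph_harm, eval_legendre

    rng = np.random.default_rng(1)
    l = 5
    t1, t2 = rng.uniform(0, np.pi, 2)       # polar angles
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)   # azimuths

    def unit(t, p):
        """Cartesian unit vector with polar angle t and azimuth p."""
        return np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])

    # Addition theorem (1): P_l(x.y) = 4*pi/(2l+1) * sum_m Y_l^m(x) conj(Y_l^m(y)).
    lhs = eval_legendre(l, unit(t1, p1) @ unit(t2, p2))
    rhs = 4 * np.pi / (2 * l + 1) * sum(
        sph_harm(m, l, p1, t1) * np.conj(sph_harm(m, l, p2, t2))
        for m in range(-l, l + 1))
    print(np.isclose(lhs, rhs.real), abs(rhs.imag) < 1e-12)   # True True

    # Parity: a point reflection (theta, phi) -> (pi - theta, pi + phi)
    # flips the sign by (-1)^l.
    m = 3
    print(np.isclose(sph_harm(m, l, p1 + np.pi, np.pi - t1),
                     (-1) ** l * sph_harm(m, l, p1, t1)))     # True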
Visualization of the spherical harmonics

The Laplace spherical harmonics Y_ℓ^m can be visualized by considering their "nodal lines", that is, the set of points on the sphere where Re[Y_ℓ^m] = 0, or alternatively where Im[Y_ℓ^m] = 0. Nodal lines of Y_ℓ^m are composed of ℓ circles: some are latitudes and others are longitudes. One can determine the number of nodal lines of each type by counting the number of zeros of Y_ℓ^m in the latitudinal and longitudinal directions independently. For the latitudinal direction, the real and imaginary components of the associated Legendre polynomials each possess ℓ−|m| zeros, whereas for the longitudinal direction, the trigonometric sin and cos functions possess 2|m| zeros.

When the spherical harmonic order m is zero (upper-left in the figure), the spherical harmonic functions do not depend upon longitude, and are referred to as zonal. Such spherical harmonics are a special case of zonal spherical functions. When ℓ = |m| (bottom-right in the figure), there are no zero crossings in latitude, and the functions are referred to as sectoral. For the other cases, the functions checker the sphere, and they are referred to as tesseral. More general spherical harmonics of degree ℓ are not necessarily those of the Laplace basis Y_ℓ^m, and their nodal sets can be of a fairly general kind.

List of spherical harmonics

Analytic expressions for the first few orthonormalized Laplace spherical harmonics that use the Condon–Shortley phase convention:

Y_0^0(θ, φ) = (1/2) √(1/π)

Y_1^{−1}(θ, φ) = (1/2) √(3/(2π)) sin θ e^{−iφ}
Y_1^0(θ, φ) = (1/2) √(3/π) cos θ
Y_1^1(θ, φ) = −(1/2) √(3/(2π)) sin θ e^{iφ}

Higher dimensions

The classical spherical harmonics are defined as functions on the unit sphere S² inside three-dimensional Euclidean space. Spherical harmonics can be generalized to higher-dimensional Euclidean space R^n as follows. Let P_ℓ denote the space of homogeneous polynomials of degree ℓ in n variables. That is, a polynomial P is in P_ℓ provided that

P(λx) = λ^ℓ P(x)

for every real λ and x ∈ R^n. Let A_ℓ denote the subspace of P_ℓ consisting of harmonic polynomials (those satisfying ΔP = 0), and let H_ℓ denote the space of functions on the unit sphere S^{n−1} obtained by restriction from A_ℓ.

The following properties hold:
- The sum of the spaces H_ℓ is dense in the set of continuous functions on S^{n−1} with respect to the uniform topology, by the Stone–Weierstrass theorem. As a result, the sum of these spaces is also dense in the space L²(S^{n−1}) of square-integrable functions on the sphere. Thus every square-integrable function on the sphere decomposes uniquely into a series of spherical harmonics, where the series converges in the L² sense.
- For all f ∈ H_ℓ, one has

  Δ_{S^{n−1}} f = −ℓ(ℓ + n − 2) f,

  where Δ_{S^{n−1}} is the Laplace–Beltrami operator on S^{n−1}. This operator is the analog of the angular part of the Laplacian in three dimensions; to wit, the Laplacian in n dimensions decomposes as

  Δ = ∂²/∂r² + ((n−1)/r) ∂/∂r + (1/r²) Δ_{S^{n−1}}.

- It follows from the Stokes theorem and the preceding property that the spaces H_ℓ are orthogonal with respect to the inner product from L²(S^{n−1}). That is to say,

  ∫_{S^{n−1}} f g* dΩ = 0

  for f ∈ H_ℓ and g ∈ H_k for k ≠ ℓ.
- Conversely, the spaces H_ℓ are precisely the eigenspaces of Δ_{S^{n−1}}. In particular, an application of the spectral theorem to the Riesz potential gives another proof that the spaces H_ℓ are pairwise orthogonal and complete in L²(S^{n−1}).
- Every homogeneous polynomial P ∈ P_ℓ can be uniquely written in the form

  P(x) = P_{(ℓ)}(x) + |x|² P_{(ℓ−2)}(x) + |x|⁴ P_{(ℓ−4)}(x) + ···,

  where P_{(j)} ∈ A_j. In particular,

  dim H_ℓ = C(n+ℓ−1, n−1) − C(n+ℓ−3, n−1),

  where C(·,·) denotes a binomial coefficient.

An orthogonal basis of spherical harmonics in higher dimensions can be constructed inductively by the method of separation of variables, by solving the Sturm–Liouville problem for the spherical Laplacian

Δ_{S^{n−1}} = sin^{2−n} φ (∂/∂φ)( sin^{n−2} φ (∂/∂φ) ) + sin^{−2} φ Δ_{S^{n−2}},

where φ is the axial coordinate in a spherical coordinate system on S^{n−1}. The end result of such a procedure is a product of functions of the individual angles, where the indices satisfy |ℓ1| ≤ ℓ2 ≤ ... ≤ ℓ_{n−1} and the eigenvalue is −ℓ_{n−1}(ℓ_{n−1} + n − 2). The functions in the product are defined in terms of the Legendre function.
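The decomposition in the last bullet point above gives a quick dimension count, dim H_ℓ = C(n+ℓ−1, n−1) − C(n+ℓ−3, n−1). The short sketch below is our own check of that formula using only the Python standard library; for n = 3 it recovers the familiar 2ℓ + 1 independent harmonics of each degree ℓ.

    from math import comb

    def dim_H(l, n):
        """Dimension of the space H_l of degree-l spherical harmonics on S^(n-1)."""
        if n + l - 3 < 0:          # guard the second binomial for tiny l and n
            return comb(n + l - 1, n - 1)
        return comb(n + l - 1, n - 1) - comb(n + l - 3, n - 1)

    print([dim_H(l, 3) for l in range(6)])   # [1, 3, 5, 7, 9, 11] = 2l + 1
    print([dim_H(l, 4) for l in range(6)])   # [1, 4, 9, 16, 25, 36] = (l + 1)^2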
Connection with representation theory

The space H_ℓ of spherical harmonics of degree ℓ is a representation of the symmetry group of rotations around a point (SO(3)) and its double-cover SU(2). Indeed, rotations act on the two-dimensional sphere, and thus also on H_ℓ by function composition

ψ ↦ ψ ∘ ρ^{−1}

for ψ a spherical harmonic and ρ a rotation. The elements of H_ℓ arise as the restrictions to the sphere of elements of A_ℓ: harmonic polynomials homogeneous of degree ℓ on three-dimensional Euclidean space R³. By polarization of ψ ∈ A_ℓ, there are coefficients c_{i₁···i_ℓ} symmetric on the indices, uniquely determined by the requirement

ψ(x₁, ..., x_n) = Σ c_{i₁···i_ℓ} x_{i₁} ··· x_{i_ℓ}.

The condition that ψ be harmonic is equivalent to the assertion that the tensor c_{i₁···i_ℓ} must be trace free on every pair of indices. Thus as an irreducible representation of SO(3), H_ℓ is isomorphic to the space of traceless symmetric tensors of degree ℓ.

More generally, the analogous statements hold in higher dimensions: the space H_ℓ of spherical harmonics on the n-sphere is the irreducible representation of SO(n+1) corresponding to the traceless symmetric ℓ-tensors. However, whereas every irreducible tensor representation of SO(2) and SO(3) is of this kind, the special orthogonal groups in higher dimensions have additional irreducible representations that do not arise in this manner.

The special orthogonal groups have additional spin representations that are not tensor representations, and are typically not spherical harmonics. An exception is the spin representations of SO(3): strictly speaking these are representations of the double cover SU(2) of SO(3). In turn, SU(2) is identified with the group of unit quaternions, and so coincides with the 3-sphere. The spaces of spherical harmonics on the 3-sphere are certain spin representations of SO(3), with respect to the action by quaternionic multiplication.

The angle-preserving symmetries of the two-sphere are described by the group of Möbius transformations PSL(2,C). With respect to this group, the sphere is equivalent to the usual Riemann sphere. The group PSL(2,C) is isomorphic to the (proper) Lorentz group, and its action on the two-sphere agrees with the action of the Lorentz group on the celestial sphere in Minkowski space. The analog of the spherical harmonics for the Lorentz group is given by the hypergeometric series; furthermore, the spherical harmonics can be re-expressed in terms of the hypergeometric series, as SO(3) = PSU(2) is a subgroup of PSL(2,C).

See also
- Cylindrical harmonics
- Spherical basis
- Spin spherical harmonics
- Spin-weighted spherical harmonics
- Sturm–Liouville theory
- Table of spherical harmonics
- Vector spherical harmonics

Notes
- A historical account of various approaches to spherical harmonics in three dimensions can be found in Chapter IV of MacRobert 1967. The term "Laplace spherical harmonics" is in common use; see Courant & Hilbert 1962 and Meijer & Bauer 2004.
- The approach to spherical harmonics taken here is found in (Courant & Hilbert 1966, §V.8, §VII.5).
- Physical applications often take the solution that vanishes at infinity, making A = 0. This does not affect the angular portion of the spherical harmonics.
- Edmonds 1957, §2.5
- Messiah, Albert (1999). Quantum Mechanics: Two Volumes Bound as One (unabridged reprint ed.). Mineola, NY: Dover. ISBN 9780486409245.
- Cohen-Tannoudji, Claude; Diu, Bernard; Laloë, Franck (1996). Quantum Mechanics. Translated from the French by Susan Reid Hemley et al.
Wiley-Interscience. ISBN 9780471569527.
- Heiskanen and Moritz, Physical Geodesy, 1967, eq. 1-62
- Watson & Whittaker 1927, p. 392.
- This is valid for any orthonormal basis of spherical harmonics of degree ℓ. For unit power harmonics it is necessary to remove the factor of 4π.
- Watson & Whittaker 1927, p. 395
- Unsöld 1927
- Stein & Weiss 1971, §IV.2
- Eremenko, Jakobson & Nadirashvili 2007
- Solomentsev 2001; Stein & Weiss 1971, §IV.2
- Higuchi, Atsushi (1987). "Symmetric tensor spherical harmonics on the N-sphere and their application to the de Sitter group SO(N,1)". Journal of Mathematical Physics 28 (7).
- N. Vilenkin, Special Functions and the Theory of Group Representations, Am. Math. Soc. Transl., vol. 22 (1968).
- J. D. Talman, Special Functions, A Group Theoretic Approach (based on lectures by E. P. Wigner), W. A. Benjamin, New York (1968).
- W. Miller, Symmetry and Separation of Variables, Addison-Wesley, Reading (1977).
- A. Wawrzyńczyk, Group Representations and Special Functions, Polish Scientific Publishers, Warszawa (1984).

Cited references
- Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume I, Wiley-Interscience.
- Edmonds, A. R. (1957), Angular Momentum in Quantum Mechanics, Princeton University Press, ISBN 0-691-07912-9.
- Eremenko, Alexandre; Jakobson, Dmitry; Nadirashvili, Nikolai (2007), "On nodal sets and nodal domains on S² and R²", Université de Grenoble. Annales de l'Institut Fourier 57 (7): 2345–2360, ISSN 0373-0956, MR 2394544.
- MacRobert, T. M. (1967), Spherical Harmonics: An Elementary Treatise on Harmonic Functions, with Applications, Pergamon Press.
- Meijer, Paul Herman Ernst; Bauer, Edmond (2004), Group Theory: The Application to Quantum Mechanics, Dover, ISBN 978-0-486-43798-9.
- Solomentsev, E. D. (2001), "Spherical harmonics", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
- Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9.
- Unsöld, Albrecht (1927), "Beiträge zur Quantenmechanik der Atome", Annalen der Physik 387 (3): 355–393, Bibcode:1927AnP...387..355U, doi:10.1002/andp.19273870304.
- Watson, G. N.; Whittaker, E. T. (1927), A Course of Modern Analysis, Cambridge University Press, p. 392.

General references
- E. W. Hobson, The Theory of Spherical and Ellipsoidal Harmonics, (1955) Chelsea Pub. Co., ISBN 978-0-8284-0104-3.
- C. Müller, Spherical Harmonics, (1966) Springer, Lecture Notes in Mathematics, Vol. 17, ISBN 978-3-540-03600-5.
- E. U. Condon and G. H. Shortley, The Theory of Atomic Spectra, (1970) Cambridge at the University Press, ISBN 0-521-09209-4. See chapter 3.
- J. D. Jackson, Classical Electrodynamics, ISBN 0-471-30932-X.
- Albert Messiah, Quantum Mechanics, volume II. (2000) Dover. ISBN 0-486-40924-4.
- Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 6.7. Spherical Harmonics", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8.
- D. A. Varshalovich, A. N. Moskalev, V. K. Khersonskii, Quantum Theory of Angular Momentum, (1988) World Scientific Publishing Co., Singapore, ISBN 9971-5-0107-4.
- Weisstein, Eric W., "Spherical harmonics", MathWorld.
Computer-based instruction was used by the military to create standardized training and be more cost-effective (Shlechter, 1991). Computer-based instruction allows individual learners to pace the lesson content to meet their needs and provides the environment for self-directed learning (Lowe, 2002). Computer-based instruction can be defined as using computers to deliver, track, and/or manage instruction, with computers serving as the main mode of content delivery. The instruction can include text, images, and feedback. Software advances allow developers to integrate audio narrations, sound clips, graphics, videos, and animation into a single presentation that can be played on a computer (Koroghlanian & Klein, 2000; Moreno & Mayer, 1999). Instruction is classified as multimedia when sound, video, and images are included. Multimedia incorporates audio and visual elements into the instruction (Craig, Gholson, & Driscoll, 2002; Mayer & Moreno, 2003; Mayer & Sims, 1994; Mayer & Johnson, 2008). Audio components include narrations, which use the student's verbal channel of working memory. Visual components include static images, animations using multiple still images, video, and/or on-screen text, which use the student's visual channel of working memory. When the student receives the information from the verbal and visual channels of working memory and relates the information from the two channels, meaningful learning has occurred (Tempelman-Kluit, 2006). Meaningful learning is "developing an understanding of the material, which includes attending to important aspects of the presented material, mentally organizing it into a coherent cognitive structure, and integrating it with relevant existing knowledge" (Mayer & Moreno, 2003). Meaningful learning or understanding occurs when students are able to apply the content they learned and to transfer the information to new situations or create solutions to problems rooted in the content presented (Jamet & Le Bohec, 2007; Mayer & Sims, 1994). Allowing students to process and apply the information is essential for knowledge retention and meaningful learning. "In multimedia learning, active processing requires five cognitive processes: selecting words, selecting images, organizing words, organizing images, and integrating" (Mayer & Moreno, 2003). Multimedia instruction not only incorporates audio and visual elements, it also has the capability of creating nonlinear content. Creating a nonlinear lesson allows learners to take an active role in their learning, bypass sections they have already learned, and go back and review sections if they need reinforcement. It is like putting students in the driver's seat and enabling them to reach the destination through a variety of paths, versus sitting on a bus that stops at each stop until they reach the destination.

Cognitive Learning Theories in Multimedia

Multiple multimedia learning theories and principles guide the creation process for multimedia presentations and facilitate student learning. The two overarching theories are cognitive load and dual coding. Several effects and __ related to the two main theories are: split-attention, redundancy, modality, the spatial contiguity principle, the temporal contiguity principle, and the coherence principle. The four theories that are directly relevant to this study are: ___, ___, ___ and ___.
Add figure of org chart of principles & theories. Paivio, Sweller & Mayer.

Mayer's theory of multimedia learning

The working memory has a finite capacity for processing incoming information on any one channel, visual or auditory. The combined processing, at any particular time, creates the working memory's cognitive load (Baddeley, 1992; Mayer & Moreno, 2003; Chandler & Sweller, 1991). To take advantage of the memory's capability, it is important to reduce redundant and irrelevant information, thus reducing the cognitive load (Sweller, 1994; Ardaç & Unal, 2008; Mayer & Moreno, 2003; Tempelman-Kluit, 2006). To keep the information efficient, the multimedia should eliminate information that does not apply to a lesson or assignment. Content that is nonessential for transfer or retention should also be eliminated. Information needs to be concise: the text and images for the content should be carefully selected so that the information can be presented succinctly and organized in a logical pattern (Mayer & Moreno, 2003). Grouping the information into smaller portions reduces the cognitive load. By chunking the information, the working memory has the opportunity to process the content and make connections with prior learning and knowledge. The information is then stored in long-term memory (Mayer & Moreno, 2003). After presenting a portion of the information, the multimedia presentation should include a brief activity to engage the student in processing and storing the information. Utilizing both the auditory and the visual channels of the working memory also helps with the cognitive load and content retention (Tempelman-Kluit, 2006).

Based on the information above about memory and processing, the Cognitive Load Theory (CLT) was developed by Sweller (1993, 1994, 1998). "The theory assumes that people possess a limited working memory (Miller, 1956) and an immense long-term memory (Chase & Simon, 1973), with learning mechanisms of schema acquisition (Chi et al., 1982; Larkin et al., 1980) and automatic processing (Kotovsky et al., 1985)" (Jeung, Chandler & Sweller, 1997). Cognitive load theory provides a single framework for instructional design based on separate cognitive processing capabilities for visual and auditory information (Jamet & Le Bohec, 2007). A multimedia presentation that conforms to CLT integrates the auditory and visual information on the screen. The CLT presentation design limits the load on any one channel to prevent cognitive overload and increase learning (Kalyuga, Chandler, & Sweller, 1998; Mayer & Moreno, 2002; Tindall-Ford, Chandler, & Sweller, 1997). Further research conducted by ____ ______ _____ identified three separate types of cognitive load: intrinsic, extraneous, and germane.

Intrinsic cognitive load

The first type of cognitive load is intrinsic and is shaped by the learning task and the learning taking place (van Merriënboer and Sweller, 2005). Intrinsic cognitive load occurs between the learner and the content, with the learner's level of knowledge in the content area playing a factor. The other factors are the elements the working memory is processing at one time and element interactivity (van Merriënboer and Sweller, 2005). The element interactivity level depends on the degree to which the learner can understand the element information independently (Paas, Renkl, & Sweller, 2003).
To reduce the total cognitive load (intrinsic + extraneous + germane), one needs to know the elements involved and how each load can be reduced. If the learner needs to understand several elements at once, and how they interact with each other, then the element interactivity is high. However, if the learner can understand each element independently, then the element interactivity is low (Paas, Renkl, & Sweller, 2003). The intrinsic load arises as the learner's working memory constructs meaning from the elements presented. While intrinsic load cannot be adjusted, the extraneous load can be modified.
- Give own example of high and low element interactivity.
- (van Merriënboer and Sweller, 2005) → intrinsic learning: schema construction and automation.
- Content element interactivity directly correlated to intrinsic cognitive load? (Paas, Renkl, & Sweller, 2003), page 1 of article.

Extraneous cognitive load

The second type of cognitive load is extraneous, or ineffective, cognitive load, and it is affected by the format of the information presented and by what is required of the learner. Extraneous cognitive load occurs when information or learning tasks demand high levels of cognitive processing that impede knowledge attainment (Paas, Renkl, & Sweller, 2003). Extraneous cognitive load is also referred to as ineffective cognitive load because the cognitive processing involved does not contribute to the learning process. The working memory has two independent channels for processing audio and visual information. If the instruction uses only one channel instead of utilizing both channels, the learner will experience a higher level of extraneous cognitive load (van Merriënboer and Sweller, 2005). Extraneous cognitive load can be reduced through several effects studied as part of instructional design and cognitive load research, as reported by Sweller et al. (1998), such as split attention, modality, and redundancy (van Merriënboer and Sweller, 2005).

Germane cognitive load

The third type of cognitive load is germane and is also affected by the design of the instruction being presented. While extraneous cognitive load accounts for information impeding learning, germane cognitive load focuses on freeing cognitive resources to increase learning. Germane load is also referred to as effective cognitive load. Germane and extraneous load trade off against each other: designing instruction that lessens the extraneous cognitive load frees additional cognitive processing for germane load and increases students' ability to assimilate the information being presented (Paas, Renkl, & Sweller, 2003). Intrinsic, extraneous, and germane cognitive loads combine into a total cognitive load; this combined load cannot be greater than the memory resources available to a learner.

An experiment conducted by Tindall-Ford, Chandler and Sweller, 1997 had the purpose of measuring cognitive load. The participants were twenty-two first-year apprentices who had completed grade ten of high school. The participants were assigned to one of two treatments: visual-only instructions or audio-visual instructions. The experiment started with an instructional phase, which had two parts and was 100 seconds in length. Part one of the instructional phase was an explanation of how to read an electrical table and was either all visual, or visual with audio played from a cassette player. After part one of the instructional phase, the participants rated their mental effort (load) on a seven-point scale. Then the apprentices took part in a test phase which included three sections.
The first section was a written test where participants filled in the blank headings in an electrical table. The second section contained questions about the format of the table. After the first part of instruction and two parts of testing, participants were given the same electrical table and had to apply information contained in the table to given examples. Participants had 170 seconds to study the information, then completed another subjective mental effort (load) survey. Then the participants completed the final section of the test phase: the apprentices had to apply the information and select the appropriate cable for an installation job with the given parameters. The apprentices then had a two-week break during which they continued with their normal training, after which both the two-part instruction phase and the three-part test phase were repeated. A 2 (group) x 2 (phase) ANOVA was run for the first instruction section and the first two sections of the written test in the test phase, and a significant difference was found, with the audio-visual group performing better than the visual-only group. When the ANOVA was run for the mental load for the two phases, significance was found again, with the audio-visual group rating the mental effort lower than the visual-only group. Similar results were found when analyzing the part-two instruction mental load and section three of the written test for both phases. All test results revealed the audio-visual group outperforming the visual-only group on all tests, along with a lower mental load rating. Therefore the participant performance can be linked back to the cognitive load.

An experiment was conducted by Ardac and Unal, 2008 — finish later —

Based on the experiment above by Tindall-Ford, Chandler and Sweller, 1997, when selecting a format for a presentation, audio-visual is the better choice. This is true not only from a modality theory perspective; it is also better from a cognitive load theory perspective, since visual-only formats cause a higher level of mental effort for participants.

Transition sentence that links the split-attention effect as a part of cognitive load theory.

When images or animations are involved with redundant text, the visual channel has to pay attention to multiple visual elements, and attention is split between the many visual pieces, creating the "split-attention" effect. Having several visual components, such as text and animations, increases the cognitive load, and learning is hampered (Ardac & Unal, 2008). Split-attention occurs when instructional material contains multiple sources of information that are not comprehensible by themselves and need to be integrated either physically or mentally to be understandable (Jeung, Chandler & Sweller, 1997; Kalyuga, Chandler, & Sweller, 1998; Tindall-Ford, Chandler, & Sweller, 1997). The split-attention effect can be minimized by placing related text close in proximity to the image in the presentation, or by using audio narration for an animation instead of on-screen text (Jamet & Le Bohec, 2007).

One experiment conducted to test the split-attention theory was designed by Mayer, Heiser, and Lonn, 2001. In this experiment there were 78 participants selected from a university psychology subject pool. The experiment was a 2 x 2 design with summarized on-screen text as one factor and extraneous details as a second factor.
There were four groups: a no text/no seductive details group with 22 students, a text/no seductive details group with 19 students, a no text/seductive details group with 21 students, and a text/seductive details group with 16 students. The groups had a median age of 18.4 and were 33% male. All participants had little prior knowledge of meteorology, with a score of seven or lower out of eleven questions. Participants viewed a computer-based multimedia presentation. The versions with text included a summary of the narration. The versions with seductive details included additional narrations with real-world examples. The experiment started with participants completing a questionnaire to collect demographic and prior knowledge information. Then participants watched a presentation with one of the treatments at individual computers. At the completion of the video, students completed a retention and a transfer test. Students who received on-screen text scored significantly lower on both the transfer and retention tests than students who did not have on-screen text. These results are consistent with the split-attention theory as it relates to the cognitive theory of multimedia. Students who received seductive details also scored lower on both the transfer and retention tests than students who did not have seductive details. These results indicate that including seductive details in a presentation hampered student learning.

Another experiment was conducted by Tindall-Ford, Chandler, and Sweller, 1997. This experiment had thirty participants who were first-year trade apprentices from Sydney. The participants were randomly assigned to one of three groups, and each group had ten participants. The first group, the visual-only group, received diagrams and related textual statements. The second group, the integrated group, received the same textual statements, but the statements were physically integrated into the diagrams. The third group, the audio-visual group, received the same diagrams, but the textual statements were presented as audio instead of text. The participants first read the instructional materials; the audio group listened to the information from an audio cassette. Then participants completed a written test with three sections (a labeling section, a multiple choice section, and a transfer section), and finally participants completed a practical test. While analysis of the multiple choice section revealed no significant difference, the data indicated the audio-visual group performing better than the visual group. The section three data, the transfer test, showed a significant difference, with the audio-visual and the integrated groups performing better than the visual-only group. The findings revealed that the audio-visual and the integrated formats performed better than the visual-only format. The non-integrated text performed the poorest of the three groups, which supports the split-attention effect.

A set of two experiments was conducted by Mayer & Moreno, 1998 to verify split-attention and dual processing. The first experiment had 78 college students from a university psychology pool with little prior knowledge about meteorology. The participants were randomly assigned to one of two groups. The concurrent narration group (AN) had 40 students and the concurrent on-screen text group (AT) had 38 students. Participants were tested in groups of one to five and were seated at individual cubicles with computers.
The participants first completed a questionnaire, which assessed the students' prior knowledge and collected demographic information. Then the students watched the presentation about lightning formation; the students in the AN group wore headphones. The presentation was 140 seconds long and included an animation of the lightning process. The AN version had narration, and the AT version had on-screen text that was identical to the narration and used the same timings as the narration version. After the presentation the participants had 6 minutes to complete the retention test, where participants had to explain the lightning process. Then they had 3 minutes to complete a transfer test, which consisted of four short essay questions. Finally the participants had 3 minutes to complete a matching test, where the students had to label parts of an image based on the lightning formation statements provided. A split-attention effect occurred for all three tests (retention, matching, and transfer), with the AN group scoring higher than the AT group. These results also align with dual processing.

In the second experiment by Mayer and Moreno, 1998, the content was changed to how a car's braking system operates. This experiment had 68 college students from a university psychology pool with little prior knowledge about car mechanics. The concurrent narration group (AN) had 34 students and the concurrent on-screen text group (AT) had 34 students. Participants were tested in groups of one to five and were seated at individual cubicles with computers. The participants first completed a questionnaire, which assessed the students' prior knowledge and collected demographic information. Then the students watched the presentation about how a car's braking system operates; the students in the AN group wore headphones. The presentation was 45 seconds long, included an animation of a car's braking process, and was broken into 10 segments. The AN version had narration and a brief pause between segments, and the AT version had on-screen text that was identical to the narration and used the same timings as the narration version. The AT group's text appeared under the animation and stayed visible until the next segment started. After the presentation the participants had 5 minutes to complete the retention test, where participants had to explain the braking process. Then they had 2.5 minutes to complete a transfer test, which consisted of four short essay questions. Finally the participants had 2.5 minutes to complete a matching test, where the students were given parts of the braking system and had to identify the parts in an image and label them. A split-attention effect occurred for all three tests (retention, matching, and transfer), with the AN group scoring higher than the AT group. These results also align with dual processing. – CONCLUSION!!! (318-319)

The experiments indicate that adding text in addition to the narration will impede student learning. The second experiment clarifies the split-attention effect: if text is included, it needs to be placed near the relevant part of the diagram. If text is not near the images, an increase in the cognitive load occurs from trying to combine the images and text. The last two experiments further clarify the split-attention effect with three measures in two different experiments.
Therefore narration should be used to accompany animation and images instead of text.

The working memory of a human has two channels: a visual channel that processes information such as text, images, and animation through the eyes, and an auditory channel that processes sounds such as narration through the ears. According to the "modality principle," when information is presented in multimedia explanations, verbal information should ideally be presented auditorily rather than as on-screen text (Craig, Gholson, & Driscoll, 2002; Moreno & Mayer, 1999; Mayer, 2001; Mayer & Johnson, 2008; Mayer, Fennell, et al., 2004). When the information is presented auditorily, the working memory uses both channels, visual and auditory, to process the information being heard and the information on the screen (Tabbers, Martens, & van Merriënboer, 2004). By utilizing both working memory channels, the mind can allocate additional cognitive resources and create relationships between the visual and verbal information (Moreno and Mayer, 1999). When learning occurs using both memory channels, the memory does not become overloaded and the learning becomes embedded, which improves the learner's understanding (Mayer & Moreno, 2002).

Several experiments have been conducted relating to modality theory. One experiment, in a geometry lesson taught in a math class at the elementary school level, focused on the conditions under which the modality effect would occur. The researchers, Jeung, Chandler, and Sweller (1997), created a three-by-two experiment that included three presentation modes and two search modes. The three presentation modes were visual-visual, audio-visual, and audio-visual-flashing. In the visual-visual group, the diagrams and the supporting information were both presented visually, with the supporting information as on-screen text; in the audio-visual group, the supporting information was presented auditorily and the diagrams were presented visually. In the audio-visual-flashing group, the supporting information was likewise presented auditorily and the diagrams visually, but parts of the diagram flashed when the corresponding audio occurred. The two search modes were a high search mode and a low search mode. The high search mode labeled each end of a line separately, so that a line was identified by the letters at each end, such as "AB," whereas the low search mode labeled the entire line with a single letter, such as "C," reducing the search needed to locate the information. The experiment content was geometry; the study population was sixty students from year six in a primary school with no previous geometry experience, creating ten students per group. The students participated in the experiment individually during class time. Students were randomly assigned to one of six groups, and the information was presented to the students on the computer. The experiment had three phases: an introduction phase, where the problem was identified and presented in one of the six modes as assigned to the student; an acquisition phase, which included two worked-out examples on the computer, after each of which students were required to complete a similar problem with pencil and paper; and finally a test phase, which included four problems for students to complete with pencil and paper. In the test phase they found a significant effect of presentation mode but not of search complexity. They performed additional data analysis and discovered that the significance between the presentation modes occurred in the high search group, but not the low search group.
Analysis of the presentation modes for the high search group revealed that the audio-visual-flashing group performed at a higher level than the visual-visual group. The experiment confirmed the modality theory hypothesis that a mixed mode presentation (audio-visual-flashing) would be more effective because the multiple modes increase the working memory capacity. However, these results were only found with the high search group and not the low search group. The group conducted two additional experiments to focus on high search and low search separately.

The second experiment focused on high search. For this experiment, the population included thirty students from a Sydney public primary school who were in year six and had not been taught parallel lines in geometry. The procedure was the same as before, however the geometry content was a complex diagram. The groups were visual-visual, audio-visual, and audio-visual-flashing, with ten students in each group. The results were consistent with modality theory: students in the audio-visual-flashing group performed better than the visual-visual group, and no differences were found between the visual-visual group and the audio-visual group. Therefore, for high search materials, the dual presentation mode increased performance when a visual reference was provided.

The third experiment focused on low search. In this experiment the population included thirty students from a Sydney public primary school who had not been taught parallel lines in geometry. The groups included visual-visual, audio-visual, and audio-visual-flashing, with ten students in each group. The procedure was similar to the first experiment, however the geometry content was a low search diagram and only contained two labels. The results revealed that the modality effect did occur with the transfer problems, and the visual-visual group took more time than the audio-visual and the audio-visual-flashing groups. The difference was that with the low search content the audio-visual group performed better than the visual-visual group, meaning that for low search materials the flashing indicator is not as beneficial.

The three experiments demonstrated that using mixed modes of presentation increases the effectiveness of the working memory and the capacity for learning. The results indicated that when content requires a high level of search, visual indicators need to be included to free up cognitive resources and increase memory capacity. Therefore, based on the work of Jeung, Chandler, and Sweller (1997), when the computer multimedia presentations were created, a visual cue of a yellow box with a red outline was used as a visual indicator to help users locate where the mouse is clicking, so that students are not scanning the entire video screen for the mouse. In addition to visual references, one version of the video included audio only and another version contained text only, to confirm the modality effect. Selecting the most appropriate part of the working memory to disseminate the information, and using the auditory channel to process information via audio instead of visual text, allows the visual channel to use the working memory to focus on the images and animations that coincide with the audio.
It is similar to watching a news program on television: your ears are listening to the news anchor, and the working memory is processing that information while your eyes are watching the corresponding footage and the brain is combining the two pieces of information together. However, if you put closed captioning on, you are reading the same information you are hearing, which is redundant. The redundancy effect can be defined as occurring when the information being presented appears as both an image and as on-screen text, so that the visual channel is responsible for all of the information while the audio channel is not used (Mayer, 2001; Barron & Calandra, 2003). "The distinction between the split-attention and redundancy effects hinges on the distinction between sources of information that are intelligible in isolation and those that are not. If a diagram and the concepts of functions it represents are sufficiently self-contained and intelligible in isolation, then any text explaining the diagram is redundant and should be omitted in order to reduce the cognitive load" (Kalyuga, Chandler, & Sweller, 1998). Redundancy can occur with full text and full audio, full text and partial audio, or partial text and full audio (Barron & Calandra, 2003). The redundant information may be duplicate text and narration, a text description and a diagram, or on-screen text and audio narration. The duplicate information causes an increase in the load on the learner's working memory because the visual channel is processing the same information from multiple sources (Kalyuga, Chandler, & Sweller, 1998; Mayer, Heiser and Lonn, 2001). The redundancy effect is evident when student performance is hindered while redundant information is present, and student performance increases when the redundant information is removed (Kalyuga et al., 1998; Mayer, Heiser and Lonn, 2001; Jamet & Le Bohec, 2007). The redundancy effect can be eliminated by presenting on-screen text as narration, by presenting information as a diagram instead of a lengthy text explanation, and by delivering information in a single mode that works complementarily with the other content being delivered (Mayer, Heiser and Lonn, 2001).

Several experiments have been conducted relating to redundancy theory. One experiment, conducted by Jamet and Le Bohec, 2007, was designed to test the hypothesis that a redundancy effect would be observed with full text and narration, and that presenting sequential text would reduce the redundancy effect. The experiment had 90 undergraduate students from a psychology pool in France, with a median age of 20. The participants were randomly assigned to one of three groups: no text, full text with corresponding audio, and sequential text. The experiment started with a prior knowledge test with four general questions and two specific questions. Then the participants viewed three documents about memory functioning; the presentation lasted about 11 minutes. After the presentation the participants took a retention test with twelve open-ended questions. Then they took a transfer test with twelve inferential open-ended questions. Finally, the participants had to complete a diagram by labeling components. Results revealed a significant difference in the retention scores, with the no-text group performing better than the full-text group and the sequential-text group. Similar results were reported for the diagram completion portion of the experiment and the transfer task. There was no significant effect size to indicate that the redundancy effect would be reduced by presenting redundant text sequentially.
There was a significant effect between the no-text group and the other two groups for the transfer, retention, and diagram tests, which validates the redundancy effect. Based on the findings from the experiment above, having on-screen text in addition to narration overloads the visual channel and decreases learning. The authors did point out that the participants had a difficult time understanding the documents presented and that they could not control the presentation.

Another set of experiments was conducted by Mayer and Johnson, 2008 to test the redundancy theory. The first experiment focused on short redundant text that was displayed on-screen.
- What Is a Python Function
- How to Define a Python Function
- Python Function Example
- Call a Function in Python
- Scope and Lifetime of Python Variables
- Types of Functions

In this section, you will learn
- What a function is
- How to create a function
- Types of functions

What Is a Python Function

A function is a small block of a program that contains a number of statements to perform a specific task. When you have a program of thousands of lines performing different tasks, you should divide the program into small modules (blocks), which increases readability and lowers complexity.

How to Define a Python Function

The following is the syntax to define a function:

    def functionName(arguments):
        """This is the definition of this function"""
        statements
        return returnParam

- The keyword def is used to define a function.
- functionName is the name of the function.
- arguments are optional. Arguments provide values to the function to perform operations on.
- A colon (:) ends the function header.
- """This is the definition of this function""" is a docstring and is optional; it describes what the function does.
- statements refers to the body of the function.
- The statement return optionally returns the result to the caller.

Python Function Example

    def language(p):
        """Function to print a message"""
        print("Programming language:", p)

Here a function language is defined which has one argument p passed from the caller. Inside the function, there is a docstring and a print statement.

Call a Function in Python

A function can be called from anywhere in the program. A function can be called by its name with the required parameters. For example, calling language("Python") prints:

    Programming language: Python

The return statement transfers control back to the code where the function was called. It indicates the ending of the function definition. The syntax of return is as follows:

    return [expression]

If there is no return statement in a function, a None object will be returned.

Example of Using return

    def square(n):
        return n*n

    print("Square of 4=", square(4))

    Square of 4= 16

In this code, the function square is called inside a print statement. The expression n*n is evaluated and the result is returned to where the function was called (the print statement).

Scope and Lifetime of Python Variables

The scope of a variable is where the variable can be accessed. When a variable is declared inside a function, it is not accessible outside of that function. This type of variable is called a local variable and is accessible only within the function where it is declared. The lifetime of a variable is the time during which the variable exists in memory. When a variable is declared inside a function, its memory will be released when control jumps out of the function. See the example below:

    def fun():
        a = 12
        print("Value of a inside function:", a)

    a = 24
    fun()
    print("Value of a outside function:", a)

    Value of a inside function: 12
    Value of a outside function: 24

In this code, the variable a inside the function and the variable a outside the function are different variables. If you try to access a variable declared inside a function from outside, you will come across an error: NameError: name 'x' is not defined. But variables declared outside of functions have a global scope and can be accessed from inside a function.

Types of Functions

Functions in Python can be categorized into two types:

- Built-in functions have a predefined meaning and perform specific tasks.
- User-defined functions are defined by the user and contain any number of statements to perform user-defined tasks.
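To make the distinction between the two types concrete, here is a tiny runnable sketch of our own; the function name total_length and the sample list are made up for illustration, while sum, len, and max are genuine Python built-ins.

    def total_length(words):
        """User-defined: return the combined number of characters in all strings."""
        return sum(len(w) for w in words)   # sum() and len() are built-ins

    langs = ["Python", "Go", "Rust"]
    print("Longest name:", max(langs, key=len))       # built-in max()
    print("Total characters:", total_length(langs))   # user-defined function

Running this prints:

    Longest name: Python
    Total characters: 12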