In a tax system, the tax rate is the ratio (usually expressed as a percentage) at which a business or person is taxed. There are several methods used to present a tax rate: statutory, average, marginal, and effective. These rates can also be presented using different definitions applied to a tax base: inclusive and exclusive.

A statutory tax rate is the legally imposed rate. An income tax could have multiple statutory rates for different income levels, while a sales tax may have a flat statutory rate. The statutory tax rate is expressed as a percentage and is typically higher than the effective tax rate.

An average tax rate is the ratio of the total amount of taxes paid to the total tax base (taxable income or spending), expressed as a percentage. In a proportional tax, the tax rate is fixed and the average tax rate equals this tax rate. In the case of tax brackets, commonly used for progressive taxes, the average tax rate increases as taxable income rises through the brackets, asymptotically approaching the top tax rate. For example, consider a system with three tax brackets, 10%, 20%, and 30%, where the 10% rate applies to income from $1 to $10,000, the 20% rate applies to income from $10,001 to $20,000, and the 30% rate applies to all income above $20,000. Under this system, someone earning $25,000 would pay $1,000 for the first $10,000 of income (10%); $2,000 for the second $10,000 of income (20%); and $1,500 for the last $5,000 of income (30%). In total, they would pay $4,500, an average tax rate of 18%.

A marginal tax rate is the tax rate applied to income in the highest bracket reached. In the United States in 2016, for example, the highest bracket began at $415,050: annual income above that cut-off was taxed at the top marginal rate of 39.6%, while income below it was taxed at 35% or less. The marginal tax rate on income can be expressed mathematically as ∆t / ∆i, where t is the total tax liability, i is total income, and ∆ refers to a numerical change. In accounting practice, the tax numerator in this ratio usually includes taxes at federal, state, provincial, and municipal levels. Marginal tax rates are applied to income in countries with progressive taxation schemes, with incremental increases in income taxed in progressively higher tax brackets, so that the tax burden is distributed towards those who can most easily afford it. Marginal taxes are valuable because they allow governments to raise revenue for social services in a way that weighs least heavily on those least able to pay. One heavily disputed theory in economics is that marginal tax rates affect the incentive to earn additional income, meaning that higher marginal tax rates cause individuals to have less incentive to earn more. This is the basis of the Laffer curve theory, which holds that population-wide taxable income decreases as a function of the marginal tax rate, so that net government tax revenue falls beyond a certain taxation point.

With a flat tax, by comparison, all income is taxed at the same percentage, regardless of amount. An example is a sales tax where all purchases are taxed equally. A poll tax is a flat tax of a set dollar amount per person.
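The bracket arithmetic in the example above is mechanical enough to sketch in a few lines of Python. The schedule below is just the illustrative 10/20/30% system from the text, not any real tax schedule, and the helper names are invented.

```python
# Hypothetical three-bracket schedule from the example above:
# 10% on the first $10,000, 20% on the next $10,000, 30% above $20,000.
BRACKETS = [(10_000, 0.10), (20_000, 0.20), (float("inf"), 0.30)]

def tax_owed(income: float) -> float:
    """Tax due under the illustrative progressive schedule."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def marginal_rate(income: float) -> float:
    """Rate applied to the last dollar earned."""
    for upper, rate in BRACKETS:
        if income <= upper:
            return rate
    return BRACKETS[-1][1]

income = 25_000
t = tax_owed(income)                         # 1,000 + 2,000 + 1,500 = 4,500
print(t, t / income, marginal_rate(income))  # 4500.0 0.18 0.3
```

The printout reproduces the worked example: a $4,500 bill, an 18% average rate, and a 30% marginal rate on the last dollar earned.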
Under a flat sales tax or a poll tax, the marginal tax rate on additional income would be zero; however, both are forms of regressive taxation that place a higher burden on those least able to bear it, and they often leave a government underfunded, leading to increased deficits.

For individuals who receive means-tested benefits, benefits are reduced as more income is earned. This is sometimes described as an implicit tax, and these implicit marginal tax rates can exceed 90% or even 100%. Some economists argue that such rates create a disincentive for work or promotion and may contribute to structural income inequality.

The term effective tax rate has different meanings in different contexts. Generally its calculation attempts to adjust a nominal tax rate to make it more meaningful. It may incorporate econometric, estimated, or assumed adjustments to actual data, or may be based entirely on assumptions or simulations. In financial reporting the term is used to measure the total tax paid as a percentage of the company's accounting income, rather than as a percentage of taxable income. International Accounting Standard 12 defines it as income tax expense or benefit for accounting purposes divided by accounting profit. In Generally Accepted Accounting Principles (United States), the term is used in official guidance only with respect to determining income tax expense for interim (e.g. quarterly) periods by multiplying accounting income by an "estimated annual effective tax rate", the definition of which varies with the reporting entity's circumstances. In U.S. income tax law, the term is used in relation to determining whether a foreign income tax on specific types of income exceeds a certain percentage of the U.S. tax that would apply to such income if U.S. tax had been applicable. The popular press, the Congressional Budget Office, and various think tanks have used the term to mean varying measures of tax divided by varying measures of income, with little consistency in definition.

Investors usually modify a statutory marginal tax rate to create the effective tax rate appropriate for their decision. For example, if capital gains are taxed only when realized by a sale, the effective tax rate is the yearly rate that would have applied to the average yearly gain such that the resulting after-tax profit is the same as when all gains are taxed at statutory rates on sale; it will be lower than the statutory rate because unrealized profits are reinvested without tax. Similarly, when dividends are both taxed as income and generate a tax credit, as in the UK and Canadian systems, the effective tax rate is the net effect of both: the net tax divided by the actual value of the dividend. And when contributions are made to tax-deferred accounts, the reduced tax base results in reduced taxes calculated at the statutory marginal rate; but the reduction in the tax base may also affect qualification for other government benefits, and the difference in those benefits is added to the numerator to increase the effective marginal rate due to the contribution.

Tax rates can be presented differently due to differing definitions of the tax base, which can make comparisons between tax systems confusing. Some tax systems include the taxes owed in the tax base (tax-inclusive, before tax), while others do not include taxes owed as part of the base (tax-exclusive, after tax). In the United States, sales taxes are usually quoted exclusively and income taxes are quoted inclusively.
Most European countries with a value added tax (VAT), as well as Goods and Services Tax (GST) countries such as Australia and New Zealand, include the tax amount when quoting merchandise prices. However, those countries still define their tax rates on a tax-exclusive basis. For direct rate comparisons between exclusive and inclusive taxes, one rate must be manipulated to look like the other.

When a tax system imposes taxes primarily on income, the tax base is a household's pre-tax income. The appropriate income tax rate is applied to the tax base to calculate taxes owed, so taxes to be paid are included in the base on which the tax rate is imposed. If an individual's gross income is $100 and the income tax rate is 20%, taxes owed equal $20. The income tax is taken "off the top", so the individual is left with $80 in after-tax money.

Some tax laws impose taxes on a tax base equal to the pre-tax portion of a good's price. Unlike the income tax example above, these taxes do not include actual taxes owed as part of the base. A good priced at $80 with a 25% exclusive sales tax rate yields $20 in taxes owed. Since the sales tax is added "on the top", the individual pays $20 of tax on $80 of pre-tax goods for a total cost of $100. In either case, the tax base of $100 can be treated as two parts: $80 of after-tax spending money and $20 of taxes owed. A 25% exclusive tax rate thus corresponds to a 20% inclusive tax rate. By including taxes owed in the tax base, an exclusive tax rate can be directly compared to an inclusive tax rate.

A tax is a compulsory financial charge or some other type of levy imposed upon a taxpayer by a governmental organization in order to fund various public expenditures. Failure to pay, along with evasion of or resistance to taxation, is punishable by law. Taxes consist of direct or indirect taxes and may be paid in money or as its labour equivalent. The first known taxation took place in Ancient Egypt around 3000–2800 BC.

A flat tax is a tax system with a constant marginal rate, usually applied to individual or corporate income. A true flat tax would be a proportional tax, but implementations are often progressive and sometimes regressive depending on deductions and exemptions in the tax base. There are various tax systems that are labeled "flat tax" even though they are significantly different.

In finance, the net present value (NPV) or net present worth (NPW) applies to a series of cash flows occurring at different times. The present value of a cash flow depends on the interval of time between now and the cash flow, and on the discount rate. NPV accounts for the time value of money. It provides a method for evaluating and comparing capital projects or financial products with cash flows spread over time, as in loans, investments, payouts from insurance contracts and many other applications.

A progressive tax is a tax in which the tax rate increases as the taxable amount increases. The term "progressive" refers to the way the tax rate progresses from low to high, with the result that a taxpayer's average tax rate is less than the person's marginal tax rate. The term can be applied to individual taxes or to a tax system as a whole, over a year, multiple years, or a lifetime. Progressive taxes are imposed in an attempt to reduce the tax incidence of people with a lower ability to pay, as such taxes shift the incidence increasingly to those with a higher ability to pay.
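The $80 / $100 sales-tax example above amounts to converting between an exclusive and an inclusive rate. The following is a minimal sketch using only that algebra; the function names are invented for illustration.

```python
def inclusive_from_exclusive(t_excl: float) -> float:
    """Exclusive rate (tax / pre-tax price) -> inclusive rate (tax / tax-inclusive price)."""
    return t_excl / (1 + t_excl)

def exclusive_from_inclusive(t_incl: float) -> float:
    """Inclusive rate (tax / tax-inclusive price) -> exclusive rate (tax / pre-tax price)."""
    return t_incl / (1 - t_incl)

print(inclusive_from_exclusive(0.25))  # 0.2  -> the $20 tax on a $100 total
print(exclusive_from_inclusive(0.20))  # 0.25 -> the $20 tax on $80 of pre-tax goods
```

The two functions are inverses of each other, which is exactly the "manipulate one rate to look like the other" step described above.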
The opposite of a progressive tax is a regressive tax, where the average tax rate or burden decreases as an individual's ability to pay increases. Payroll taxes are taxes imposed on employers or employees, and are usually calculated as a percentage of the salaries that employers pay their staff. Payroll taxes generally fall into two categories: deductions from an employee’s wages, and taxes paid by the employer based on the employee's wages. The first kind are taxes that employers are required to withhold from employees' wages, also known as withholding tax, pay-as-you-earn tax (PAYE), or pay-as-you-go tax (PAYG) and often covering advance payment of income tax, social security contributions, and various insurances. The second kind is a tax that is paid from the employer's own funds and that is directly related to employing a worker. These can consist of fixed charges or be proportionally linked to an employee's pay. The charges paid by the employer usually cover the employer's funding of the social security system, medicare, and other insurance programs. It is sometimes claimed that the economic burden of the payroll tax falls almost entirely on the worker, regardless of whether the tax is remitted by the employer or the employee, as the employers’ share of payroll taxes is passed on to employees in the form of lower wages than would otherwise be paid. Because payroll taxes fall exclusively on wages and not on returns to financial or physical investments, payroll taxes may contribute to underinvestment in human capital such as higher education. FairTax is a proposal to reform the federal tax code of the United States. It would replace all federal income taxes, payroll taxes, gift taxes, and estate taxes with a single broad national consumption tax on retail sales. The Fair Tax Act would apply a tax, once, at the point of purchase on all new goods and services for personal consumption. The proposal also calls for a monthly payment to all family households of lawful U.S. residents as an advance rebate, or "prebate", of tax on purchases up to the poverty level. First introduced into the United States Congress in 1999, a number of congressional committees have heard testimony on the bill; however, it has not moved from committee and has yet to have any effect on the tax system. In 2005, a tax reform movement began to form behind the FairTax proposal. Attention increased after talk radio personality Neal Boortz and Georgia Congressman John Linder published The FairTax Book in 2005 and additional visibility was gained in the 2008 presidential campaign. Tax brackets are the divisions at which tax rates change in a progressive tax system. Essentially, they are the cutoff values for taxable income—income past a certain point will be taxed at a higher rate. In business, operating margin—also known as operating income margin, operating profit margin, EBIT margin and return on sales (ROS)—is the ratio of operating income to net sales, usually presented in percent. Operating leverage is a measure of how revenue growth translates into growth in operating income. It is a measure of leverage, and of how risky, or volatile, a company's operating income is. In corporate finance, the return on equity (ROE) is a measure of the profitability of a business in relation to the equity, also known as net assets or assets minus liabilities. ROE is a measure of how well a company uses investments to generate earnings growth. 
In investing, the cash-on-cash return is the ratio of annual before-tax cash flow to the total amount of cash invested, expressed as a percentage. In finance, return is a profit on an investment. It comprises any change in value of the investment, and/or cash flows which the investor receives from the investment, such as interest payments or dividends. It may be measured either in absolute terms or as a percentage of the amount invested. The latter is also called the holding period return. For the Old Age, Survivors and Disability Insurance (OASDI) tax or Social Security tax in the United States, the Social Security Wage Base (SSWB) is the maximum earned gross income or upper threshold on which a wage earner's Social Security tax may be imposed. The Social Security tax is one component of the Federal Insurance Contributions Act tax (FICA) and Self-employment tax, the other component being the Medicare tax. It is also the maximum amount of covered wages that are taken into account when average earnings are calculated in order to determine a worker's Social Security benefit. The Fair Tax Act is a bill in the United States Congress for changing tax laws to replace the Internal Revenue Service (IRS) and all federal income taxes, payroll taxes, corporate taxes, capital gains taxes, gift taxes, and estate taxes with a national retail sales tax, to be levied once at the point of purchase on all new goods and services. The proposal also calls for a monthly payment to households of citizens and legal resident aliens as an advance rebate of tax on purchases up to the poverty level. The impact of the FairTax on the distribution of the tax burden is a point of dispute. The plan's supporters argue that it would decrease tax burdens, broaden the tax base, be progressive, increase purchasing power, and tax wealth, while opponents argue that a national sales tax would be inherently regressive and would decrease tax burdens paid by high-income individuals. Tax deferral refers to instances where a taxpayer can delay paying taxes to some future period. In theory, the net taxes paid should be the same. Taxes can sometimes be deferred indefinitely, or may be taxed at a lower rate in the future, particularly for deferral of income taxes. The marriage penalty in the United States refers to the higher taxes required from some married couples with both partners earning income that would not be required by two otherwise identical single people with exactly the same incomes. There is also a marriage bonus that applies in other cases. Multiple factors are involved, but in general, in the current U.S. system, single-income married couples usually benefit from filing as a married couple, while dual-income married couples are often penalized. The percentage of couples affected has varied over the years, depending on shifts in tax rates. A financial ratio or accounting ratio is a relative magnitude of two selected numerical values taken from an enterprise's financial statements. Often used in accounting, there are many standard ratios used to try to evaluate the overall financial condition of a corporation or other organization. Financial ratios may be used by managers within a firm, by current and potential shareholders (owners) of a firm, and by a firm's creditors. Financial analysts use financial ratios to compare the strengths and weaknesses in various companies. If shares in a company are traded in a financial market, the market price of the shares is used in certain financial ratios. 
In general, the United States federal income tax is progressive, as rates of tax generally increase as taxable income increases, at least with respect to individuals who earn wage income. As a group, the lowest-earning workers, especially those with dependents, pay no income taxes and may actually receive a small subsidy from the federal government.

Tax policy and economic inequality in the United States concerns how tax policy affects the distribution of income and wealth in the United States. Income inequality can be measured before and after tax; this article focuses on the after-tax aspects. Income tax rates applied to various income levels, together with tax expenditures, primarily drive how market results are redistributed to affect after-tax inequality. After-tax inequality has risen markedly in the United States since 1980, following a more egalitarian period after World War II.

The Common Consolidated Corporate Tax Base (CCCTB) is a proposal for a common tax scheme for the European Union, developed by the European Commission and first proposed in March 2011, that provides a single set of rules for how EU corporations calculate EU taxes and provides the ability to consolidate EU taxes. Corporate tax rates in the EU would not be changed by the CCCTB, as EU countries would continue to have their own corporate tax rates.
Until now, we have discussed only positive numbers. These numbers were called "unsigned 8-bit integers". In an 8-bit byte, we can represent a set of 256 positive numbers in the range 0 to 255 (decimal). However, in many operations it is necessary to also have negative numbers. For this purpose, we introduce "signed 8-bit integers". Since we are limited to 8-bit representation, we remain limited to a total of 256 numbers; half of them are negative (-128 through -1) and the rest are non-negative (0 through 127).

The representation of signed (positive and negative) numbers in the computer is done through the so-called 8-bit 2's complement representation. In this representation, the most significant (8th) bit indicates the sign of the number (0 = +, 1 = -). The signed binary numbers must conform to the usual laws of signed arithmetic. For example, in signed decimal arithmetic, -3 + 3 = 0. When performing signed binary arithmetic, the same cancellation law must hold. This is assured when constructing the 2's complement negative binary numbers through the following rule: to find the negative of a number in 8-bit 2's complement representation, simply subtract the number from zero, i.e. -X = 0 - X, using 8-bit binary arithmetic.

Example 1: Use the above rule to represent the decimal number -3 in 8-bit 2's complement.

Solution: Subtract the 8-bit binary representation of 3 from the 8-bit binary representation of 0 using 8-bit arithmetic (8-bit arithmetic implies that you can liberally borrow from, or carry into, the 9th bit, since only the first 8 bits count):

  00000000   (0 decimal)
- 00000011   (3 decimal)
----------
  11111101   (-3 decimal)

Note that, in this operation, a 1 was liberally borrowed from the 9th bit and used in the subtraction.

Verification: We have established that -3 (decimal) = 11111101 (binary). Verify that -3 + 3 = 0 using 8-bit arithmetic:

  11111101   (-3 decimal)
+ 00000011   (3 decimal)
----------
  00000000   (0 decimal)

Note that, in this operation, a carry of 1 was liberally lost into the 9th bit.

Example 2: Given the binary number 01110101, find its 2's complement.

Solution: 00000000 - 01110101 = 10001011, using 8-bit arithmetic.

Verification: 01110101 + 10001011 = (1)00000000. Since the 9th bit is irrelevant, the answer is actually 00000000, as expected.

The rule outlined above can be applied to both binary and hex numbers.

Example 3: Given the hex number 6A, find its 8-bit 2's complement.

Solution: Subtract the number from 00 (hex) using 8-bit arithmetic: 00 - 6A = 96 (hex).

Verification: 6A + 96 = (1)00 (hex). Since the 9th binary bit is irrelevant, the answer is actually 00 (hex), as expected.
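As a quick cross-check of the subtract-from-zero rule, here is a small Python sketch (the helper name is invented) that keeps only 8 bits of the result and verifies that a number plus its 2's complement wraps around to zero once the 9th-bit carry is discarded.

```python
def twos_complement_8bit(x: int) -> int:
    """Negate an 8-bit value using the 0 - X rule, keeping only the low 8 bits."""
    return (0 - x) & 0xFF

for value in (0b00000011, 0b01110101, 0x6A):
    neg = twos_complement_8bit(value)
    # The 9th bit (the carry) is discarded, so value + neg wraps around to 0.
    assert (value + neg) & 0xFF == 0
    print(f"{value:08b} -> {neg:08b} (hex {neg:02X})")
```

Running this prints 11111101 (FD), 10001011 (8B) and 10010110 (96), matching Examples 1 to 3.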
An assessor is a local government official who determines the value of a property for local real estate taxation purposes. Local municipalities base their property tax rates upon the value of owned property, including land. This value is converted into an assessment, which is one component in the computation of real property tax bills. Assessors are trained to determine the fair market value of property.

What Is an Assessor?

An assessor is a local government official who determines the value of a property for local real estate taxation purposes. The figures assessors derive are used to calculate future property taxes. The assessor estimates the value of real property within a city or town's boundaries. This value is converted into an assessment, which is one component in the computation of real property tax bills.

How Assessors Work

Assessors are government officials who maintain annual assessments at a uniform percentage of market value. An assessor signs an oath to this effect when certifying the tentative assessment roll. The assessment roll is a document containing each property assessment. Each year assessors are required to keep current the physical description, inventory, and value estimate of every parcel.

An assessment occurs when an asset's value must be determined for the purpose of taxation. Some assessments are made annually on certain types of property, such as homes, while others may be made only once. For example, homes are often valued every three or four years according to their physical condition and comparable values of surrounding residences.

Assessors are trained to determine the fair market value of property. Fair market value refers to the price that would be agreed upon between a willing and informed buyer and a willing and informed seller under usual and ordinary circumstances. It is the highest price a property would bring if it were for sale on the open market for a reasonable period of time. Many sales occur at prices other than what is considered the fair market value. The sale price is often adjusted due to the time constraints and pressures on the buyer and seller.

Certification for assessors varies across municipalities. In New York State, for example, a person becomes an assessor first by appointment or election. Then this person has to get a basic certification within three years of taking office, although assessors in some states are not required to obtain basic certification. The certification requires successful completion of orientation, which consists of three assessment administration course components and five appraisal components, including farm appraisal for certain agricultural communities. Appointed assessors are required to complete an average of 12 credits of continuing education annually.

Why Assessors Matter

Local municipalities base their property tax rates upon the value of owned property, including land. The assessments made by local assessors provide the basis for the municipality's calculation of property values. The local governing body uses the assessed tax to fund water and sewer improvements, provide law enforcement and fire service, K-12 and higher education, highway construction, and other services that benefit the community at large. Property tax rates and the types of properties taxed vary by jurisdiction, as does assessor certification.
Related terms:
- Ad valorem tax: a tax derived from the value of real estate or personal property.
- Assessed value: the dollar value assigned to a home or other piece of property for tax purposes; it is often a percentage of fair market value.
- Assessment: occurs when an asset's value must be determined for the purpose of taxation.
- Federal income tax: in the U.S., the tax levied by the IRS on the annual earnings of individuals, corporations, trusts, and other legal entities.
- Market value: the price an asset gets in a marketplace; it also refers to the market capitalization of a publicly traded company.
- Property tax: an ad valorem tax assessed on real estate by a local government and paid by the property owner.
- Tax roll: an official breakdown of all property within a given jurisdiction, such as a city or county, that can be taxed.
- Taxation: the act of levying or imposing a tax by a taxing authority; taxes include income, capital gains, or estate taxes.
There are four main operators in arithmetic: addition, subtraction, multiplication and division. Of these, the first encountered by children, and where they begin to show difficulties with procedural mathematics, are addition and subtraction.

An understanding of counting is considered to be an essential foundation for developing arithmetic skills. For example, understanding of addition may begin with counting strategies, as addition generally implies that the total set of items will increase. Similar to counting, addition typically progresses through a number of strategies. Initially children add by counting all the numerals in a sum (e.g. for 2+3, a child would count 'one, two, and then three, four, five'). Addition then progresses to a counting-on strategy where the counting begins from the second number in the sum (e.g. for 2+3 a child would start counting from the two and add on 'three, four, five'). A final counting-on strategy is used when the child realises that it doesn't matter which number you begin with, the answer is the same; this is known as commutativity. When children understand commutativity they count on from the biggest number (e.g. for 2+3 a child would start with three and count on 'four, five' for the two). These strategies may be used with fingers or blocks rather than rote memory. Eventually retrieval becomes the most common strategy and counting is used only as a back-up mechanism.

Although subtraction is often learned around the same time as addition in the first year of formal schooling, it is often considered more difficult to teach. In particular, the language used to describe subtraction, such as 'take away' or 'what is the difference between', makes it harder to acquire. The main types of subtraction questions differ in their wording: subtraction is often described as 'taking away', but it is clear that the language surrounding this operation is much more complex than that. Difficulties can therefore emerge from the formulation of the problem; the actual underlying process of subtraction is the same.

Multiplication is taught after addition and subtraction; although the foundations for multiplication are formed through pattern recognition in Grades 1 and 2 and at Foundation level, it is not part of the US curriculum until Grade 3. One reason for this is that early multiplication builds on knowledge of addition by interpreting multiplication as repeated addition. With a repeated-addition structure, the multiplication sign can be interpreted as meaning 'sets of'; e.g. 6 x 3 means 6 sets of 3, written as 3 + 3 + 3 + 3 + 3 + 3. However, although this makes intuitive sense, multiplication has the property of commutativity (see the explanation for addition); this means that 6 x 3 is equivalent to 3 x 6. If we express this multiplication using repeated addition, 6 + 6 + 6 does not appear the same as 3 + 3 + 3 + 3 + 3 + 3. The property of commutativity can be explored by using rectangular arrays: for example, six rows of three dots, when rotated 90 degrees, becomes three rows of six dots. See below.

Rectangular arrays to represent commutativity for 6 x 3 and 3 x 6

One issue with repeated addition is that it appears to be focused on sets of items, but multiplication and the use of repeated addition in multiplication can be used in other contexts. For example, if we go to the shops and buy three things that cost $6 each, essentially we want to multiply 3 x 6 to find out how much the total cost will be.
This time we do not have three sets of 6 things but three times a measure of a value of goods. Repeated addition can be used in the same way, but the outcome takes into account what the measure is: if it is 3 x $6, the answer will be $18; if it is 3 x 6 inches, the answer will be 18 inches. With practice, most children learn the multiplication tables from 1 x 1 to 10 x 10, and repeated addition is replaced by memory retrieval.

Although repeated addition is a good conceptual way for a child to understand multiplication, this operator also encounters some of the language issues discussed for subtraction, which means that the underlying operation is not always clear. For example, in the word problem 'John has three sweets and Bill has six times more, how many sweets does Bill have?', the connection between the words and 6 x 3 is not as clear, as each boy has his own set of sweets. This question is asking about a quantity scaled by a factor. Nevertheless, to find the solution the actual multiplication remains the same, and once again the language of math can lead to difficulties.

Division is usually introduced in schools through the idea of sharing, as this is a concept that is assumed to be familiar; importantly, in division it must be equal sharing. Thus if we have 12 ÷ 4, we need to share the 12 equally between 4. This can be done with objects to show how many items would be in each of the four groups. However, another way of expressing division is as the inverse of multiplication: 12 ÷ 4 can be interpreted as 'how many fours make twelve?'. This is also known as 'grouping', as it involves working out how many groups of four can be found in twelve. This connection between division and multiplication leads to another interpretation of division. In multiplication, repeated addition was the most common way to interpret the operation; thus it follows that if division can be expressed as inverse multiplication, it can also be thought of as repeated subtraction. To phrase it another way: 'how many times can I take four away from twelve until there is nothing left?' Both inverse multiplication and repeated subtraction can be represented on the number line to illustrate the steps. This is illustrated below:

12 ÷ 4 using reverse multiplication

12 ÷ 4 using repeated subtraction
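The repeated-addition and repeated-subtraction readings described above translate directly into short procedures. The Python sketch below is only an illustration of those two strategies (the function names are invented), not a claim about how children, or computers, actually calculate.

```python
def multiply_by_repeated_addition(groups: int, size: int) -> int:
    """6 x 3 read as "6 sets of 3": add `size` to the total, `groups` times."""
    total = 0
    for _ in range(groups):
        total += size
    return total

def divide_by_repeated_subtraction(total: int, group_size: int) -> int:
    """12 / 4 read as "how many fours can be taken away from twelve?"."""
    count = 0
    while total >= group_size:
        total -= group_size
        count += 1
    return count

print(multiply_by_repeated_addition(6, 3))    # 18
print(divide_by_repeated_subtraction(12, 4))  # 3
```

Swapping the arguments of the first call (3 sets of 6 rather than 6 sets of 3) gives the same answer, which is the commutativity property discussed above.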
Tips for using this guide: This guide is meant to be self-guided for students above grade 8. Students in grades 3-7 might need some guidance (download activity worksheet), and students under grade 3 will need teacher/parent assistance (download activity worksheet). For future challenges, visit the activity series page or sign up for the outreach newsletter to receive email reminders.

A musical instrument is something made to produce music. Building one is a great way for you to explore the physics of sound, and learn a little bit about music along the way. We've put together an explanation of some of the physics behind musical instruments, and will show you lots of suggestions for instruments you can make yourself. Using what you learn, you can also make your own creative design!

Questions? Contact UBC physicists and astronomers if you have any physics questions about building your own musical instrument. We will also showcase your design on this page or via social media – you can share using a drawing, a video of your device, or any other creative ways you would like! Some musicians even choose to build their own instruments with everyday materials – check out the examples below!

Things you need for creating your own instrument:
- This webpage. We've chosen a few key concepts in physics that will help in building your instrument. To show these concepts in action, we've provided explanations of how common instruments produce music.
- Decide what type of instrument to make: choose among string, wind, and percussion instruments.
- Pen and paper to design your instrument. Think about the physics behind what makes a good instrument!
- Supplies. You can use just about anything to make your instrument. We provided some examples below to get you started, but you can use what you can find!
- Safety first. Make sure you use safe materials, and that your instrument is safe to play! If you are not sure, ask adults for help.
- Make it a group project? Connect with friends online and coordinate a group performance! (Thank you local teacher Albert C for your suggestion!)

Build your own!

We have picked some ideas in physics that are helpful in designing an instrument! Applying these concepts allows us to make the organized sound that is music!

Sound and vibration

To know what sound is, we first need to learn about vibrations. When something moves back and forth in a rapid and regular way, it vibrates. You can make a sound through vibrations – using our bodies (try humming), or using a musical instrument. The vibration you create makes the air all around vibrate, and the air then vibrates parts of the ears of the listener – which is how a person can hear a sound. Sound is a wave – it transmits vibrational energy from one place to another, say from a musical instrument to your ears.

Experiment and explore! You can find some sound-related activities on the Exploratorium website. Also, experiment online through the PhET simulation of sound.

Music is made using specific sounds called "tones" or "musical notes". The notes have names: A, B, C, D, E, F and G. This is the sound of an "A" note: And this is the sound of a C note: On a musical instrument, different musical notes can be created in a few ways. For string instruments like an acoustic guitar or a violin, using the finger of one hand to change the vibrating length of the string (called "stopping") can create various notes. On an electric guitar, strings can be of the same length and similar tension, but have different thickness.
When you play a recorder, musical notes are created by allowing different lengths of air in the recorder to vibrate when you cover the tone holes.

Some sounds are louder than others. What is different about the sound waves that have different loudness? Remember that sound comes from things that are vibrating, and travels to our ears as waves. Waves can be big or small, though, and the bigger wave (with more energy) will sound louder than the smaller wave (with less energy). We say a bigger wave has a larger amplitude, and a smaller wave has a smaller amplitude.

A wave with a large amplitude. Image: PhET simulation (wave on a string)
A wave with a small amplitude. Image: PhET simulation (wave on a string)

Experiment and explore: try this PhET online simulation – see how you can adjust the amplitude of the waves!

The different musical notes are sounds with different frequencies. The frequency of a sound is how many times the air vibrates in one second. For example, the A note has a frequency of 440 Hz, which means the air vibrates 440 times in just one second. A few things can affect the frequency of the sound created by a musical instrument. Here is an explanation of how an electric guitar does it.

Experiment and explore: try this PhET online simulation – adjust the frequency to see what happens!

Sounds of instruments (advanced)

Let's dive deeper into the idea of musical notes and frequency. Listen again to the sound of the C note. You might be able to tell that the C note above is played on a piano. Now listen to a C note played on a different instrument, the guitar. Notice how different they sound? Different instruments sound different because the notes they play are not just vibrations at one frequency – they include vibrations at certain higher frequencies called harmonics. Different instruments give different volumes to each harmonic, producing the unique sounds we hear. This video provides a very good visualization of how different instrumental sounds have different combinations of frequencies and intensities.

Experiment and Explore! Do you already play any musical instrument? You can run the visualization yourself using Spectrum Lab (PC only). If you are using a Mac, you might need to use iSpectrum (check to see if your computer already has it).

Resonance is the idea that objects have certain frequencies that they vibrate at most easily. If sound of the right frequency reaches an object, it can make the object vibrate too. Watch how this idea of resonance can be used to break a wine glass or produce beautiful patterns in the videos below. Understanding resonance is important in making musical instruments. Resonance allows the sound created by the instrument to be amplified, resulting in a louder sound so others can hear. As an example, the sound box of a guitar vibrates when you play the guitar (pluck the strings), forcing the air in the sound box to vibrate and producing a louder sound. If you cover the sound box, you cannot hear the sound as much.

How Instruments work – examples

For ideas on designing your own instrument, you might want to learn a little about professional instruments and how they make sound.

Guitar (string instrument)

What is Vibrating? On a guitar, the player plucks the strings to make them vibrate. The guitar strings are too thin to produce loud sounds on their own, but the vibrating strings cause the large soundboard of the guitar to start vibrating too, which creates the sound we hear!

How are different notes produced?
The sound from a guitar can be controlled using the tuning pegs. They pull the strings tighter or let them out looser. The frets can be used to change the vibrating length of the string, which also changes the frequency of the sound.

Drum (percussion instrument)

What is Vibrating? On a drum, the player bangs on the drum skin to create vibrations. The effect of these small vibrations is amplified by the shell of the drum, which makes the loud drum noise!

How are different notes produced? The sound produced by a drum can be controlled by varying how tight the drum skin is. A tighter skin will produce a higher frequency sound. Changing the size of the drum skin will also change the sound. A small drum will produce high frequency sounds.

Trombone (wind instrument)

How are different notes produced? The sound from a trombone is controlled by the length of the tube, which the player controls using the slider.

What is Vibrating? In a trombone, the vibrations are made by the player's lips, which must conform to the natural resonances in the tube. The musician blows into the mouthpiece, creating a sound wave that travels back and forth, and eventually out of the tube.

Make your own instruments

Not sure where to start? Here are a few simple instruments you can make at home right away. Or, if you are adventurous, can you design your own instrument like this guy does – remember to ask us questions if you have any!

Drums (percussion instrument)

Home-made drums are one of the easiest to make – just about anything can be a drum! Try tapping your fingers on the table in front of you. Can you make different sounds? What is shaking to make the sound? Try experimenting with different things to make drums. Pots and pans, pipes, and blocks of wood are great examples you can try. Try tapping using a wooden spoon or other stick for a different sound.

Sound Sandwich (wind instrument)

You can make a sound sandwich at home using simple supplies! You will need:
- Two popsicle sticks (jumbo craft size sticks are ideal)
- A thick rubber band (like the ones used to bundle broccoli)
- Two thin rubber bands
- Paper OR a plastic straw

If you have these supplies and would like to make the sound sandwich, check out the detailed instructions on the Exploratorium website. In the Exploratorium instructions they use a plastic straw, but you can replace the straw with strips of paper. Fold the paper over twice to make it a bit thicker. Below is an instruction session with our graduate student Alex May on how to make a sound sandwich!

Water glasses (percussion instrument)

If you choose to make this instrument, you will need:
- Several glass cups or jars
- A chopstick or spoon
- Food colouring (optional)

Try to find glass containers of the same shape and size, but it's OK if they're all different too. Notice that the sound changes when there is more or less water in the container. Why could this be?

Tissue box guitar (string instrument)

For this instrument, you'll need:
- Two pencils
- 3-5 elastic bands
- One empty tissue box

What vibrates in this instrument? Try stretching an elastic between your fingers and plucking it to make a sound. Is the sound louder or quieter than your tissue box guitar? Why?

After you've built your instrument, you can try to adjust it so that it creates specific musical notes (sounds of different frequencies). This process is called tuning. Each instrument is tuned in its own unique way – try playing around with your instrument to see if you can create different musical notes and how.
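To connect tuning with the frequencies mentioned earlier: on most Western instruments each semitone step multiplies the frequency by the twelfth root of two. The guide itself doesn't give this formula, so treat the sketch below as the standard equal-temperament calculation, anchored to A = 440 Hz, rather than part of the activity instructions.

```python
# Equal-temperament note frequencies relative to A = 440 Hz.
# Each semitone up multiplies the frequency by 2 ** (1/12).
A4 = 440.0

def note_frequency(semitones_from_a4: int) -> float:
    return A4 * 2 ** (semitones_from_a4 / 12)

print(round(note_frequency(0), 1))    # 440.0 -> A
print(round(note_frequency(3), 1))    # 523.3 -> C, three semitones above A
print(round(note_frequency(-12), 1))  # 220.0 -> A, one octave lower
```

Halving or doubling a frequency moves a note down or up by one octave, which is why the -12 semitone case gives exactly 220 Hz.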
To help you identify what frequency your instrument is making, you can use a tuner from an instrument store, or even easier – you can use a tuning app on a smartphone.

For Android, you can try out "Instrument tuner" by Gebauer Matthias. After downloading the app, open the "Chromatic Tuner". Hold your phone close to your instrument and try playing. The app will then tell you what note you're playing. This will work best if you are in a quiet room.

For iPhone, you can try out "Tuner & Metronome" by Soundcorset. After opening the app, click on the "Tuner" button at the top of the screen. Hold your phone close to your instrument and try playing. The app will then tell you what note you're playing. This will work best if you are in a quiet room.

If you have any questions, let us know! You can reach us using the comments section at the end of this page, send us an email at firstname.lastname@example.org, or message us on our Instagram post for this particular challenge. We look forward to hearing from you soon!

Share your design with us! You can submit a video, photo, or drawing of your instrument here (click on "submit your design") or by tagging #PHASMusic @UBCphasoutreach on Instagram. We'll share your post and send you back feedback on your instrument! Remember to tell us about the physics of your instrument in the video. If you'd like an extra challenge, try to:
- Make your instrument play a couple of different notes
- Play a song on your instrument

Submit your design

Check out some cool submissions below

We received our first submission for our musical instrument challenge!! Thank you Saiprasad for sharing photos of this awesome tissue box guitar with us! This tissue box guitar has 4 strings and a very festive theme 🎄🦌. The strings vibrate to create sound, and their sound is made louder by the tissue box through resonance. Moving the pens closer or farther apart will change the notes played by the strings. Thanks for the beautiful submission, we can't wait to see more!

Theresa from UBC Physics & Astronomy Outreach built a musical instrument – a sound sandwich that makes sound using the vibrations of a rubber band. She also tried to play the theme song of a movie – can you guess which song it is? Check out Theresa's explanation on the physics behind her sound sandwich!
National income helps to determine the economic condition of a country. It also plays an important role in observing the developed, underdeveloped or developing status of the nation. It is the income from which a nation manages its basic expenditures. Today we will look at what national income is and at the national income of India.

What is National Income

Income is probably the most frequently used term in economics. In general terms, national income is the sum of the values of all the goods and services produced in a nation during a financial year, including income earned from abroad. It represents the income of a given period of time. For example, to estimate the national income of India, the prices of goods and services produced between April 1 and March 31 are used. This duration is also known as the current financial year of India.

Aspects of national income

Gross Domestic Product (GDP)

Gross domestic product or GDP is the value of all final goods and services produced within the territory of a nation during one year; in India this accounting year is the financial year, from 1st April to 31st March. GDP is also calculated by adding national private consumption, gross investment, government spending and the trade balance (exports minus imports). It is important to note that the exports-minus-imports factor removes expenditure on imports not produced within the nation and adds expenditure on goods and services that are exported but not sold within the country.

Net Domestic Product (NDP)

Net Domestic Product (NDP) is the GDP calculated after adjusting for depreciation. This is the real or net form of GDP. Thus,
NDP = GDP – Depreciation

Gross National Product (GNP)

When we add income from abroad to GDP, the value derived is called Gross National Product (GNP). Thus,
GNP = GDP + Income from Abroad
In the case of India, GNP is generally lower than GDP because net income from abroad is negative, so the formula can be written as GNP = GDP + (−Income from abroad).

Net National Product (NNP)

Net National Product (NNP) is generally viewed in net terms: the NNP of an economy is the GNP after deducting the loss due to depreciation. Thus,
NNP = GNP – Depreciation
or, in terms of GDP,
NNP = GDP + Income from Abroad – Depreciation

National Income of India

The calculation of the national income of India was first undertaken by Dadabhai Naoroji in 1867–1868. According to his assessment, the per capita income of India at that time was 20 rupees; multiplying the population of the period by this figure gives an estimate of the national income of the time.

Methods of Measuring

The product method, the income method and the expenditure method are used to find the national income of a nation, so let us see what these methods are.

1. Product Method or Value Added Method

Under this method, the net value added to goods and services produced at different stages is calculated. It is used in the fields of agriculture, animal husbandry, and industry.
Value Addition = Value of Output – Intermediate Consumption

Value of output refers to the market value of goods produced by an enterprise during a financial year, and intermediate consumption refers to the value of non-factor inputs such as raw materials. This may seem a bit abstract, so let's take an example. A carpenter buys wood worth Rs.100 from a woodcutter and then makes a table worth Rs.120. Here the wood is an intermediate good valued at Rs.100, and its value is regarded as 'intermediate consumption'. The table, an output valued at Rs.120, is regarded as the 'value of output'. Therefore, the difference of Rs.20 is the 'value added', and it is the net value added to the economy by the carpenter. Other examples can be set out in a table with the columns Producers, Stages of Production, Cost Price, Selling Price and Value Added. The value-added method includes the contribution of each stage of production in the calculation; it thus removes the possibility of double counting.

2. Income Method

Under this method, the sum of payments made for the factors of production is taken, and it is used to estimate the GDP of service providers such as transport, governance and industry and trade. Thus,
National Income = Employees' compensation + Net income + Operating surplus (W + R + P + I) + Net factor income generated from abroad
where W = salaries and wages, R = rental income, P = profit, and I = mixed income.
Here we include the four factors of production:
- Land (which receives rent)
- Labour (which receives salary/wages)
- Capital (which receives interest)
- Entrepreneurship (which receives profit as remuneration)

3. Expenditure Method

The expenditure method is one of the most effective ways to calculate national income; it measures national income as a flow of expenditure made up of household consumption, government consumption, gross capital formation and net exports. Thus,
National Income = C + G + I + NX
where C = household consumption, G = government expenditure, I = investment expenditure, and NX = net exports.

Currently, the national income of India is assessed by the Central Statistical Organisation. National income serves to show the structure and status of each nation, and every country endeavours to find ways to increase its national income. An increase in national income is essential for the development of the nation.
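As a quick illustration of the formulas above, here is a small Python sketch. The carpenter figures come from the example in the text; every other number is invented purely for illustration.

```python
# Product (value-added) method, using the carpenter example from the text.
value_of_output = 120           # table sold by the carpenter (Rs.)
intermediate_consumption = 100  # wood bought from the woodcutter (Rs.)
print(value_of_output - intermediate_consumption)  # value added = 20

# Expenditure method, National Income = C + G + I + NX (illustrative figures only).
C, G, I, NX = 600.0, 200.0, 150.0, -30.0
print(C + G + I + NX)  # 920.0

# Aggregates from the definitions above, again with made-up numbers.
gdp, income_from_abroad, depreciation = 1000.0, -50.0, 80.0
gnp = gdp + income_from_abroad  # Gross National Product = 950.0
nnp = gnp - depreciation        # Net National Product   = 870.0
print(gnp, nnp)
```

Note how the negative income from abroad in the last block reproduces the point made earlier that India's GNP is generally lower than its GDP.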
Human activities lead to the emission of greenhouse gases in various ways, including the combustion of fossil fuels for energy, deforestation, the use of fertilisers in agriculture, livestock farming, and the decomposition of organic material in landfills. Of all the long-lived greenhouse gases that are emitted by human activities, the ones that have the largest climate impact are carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O). WHAT ARE GREENHOUSE GAS CONCENTRATIONS AND NET FLUXES? AND WHAT ARE SOURCES AND SINKS? 'CONCENTRATION' - the amount of a gas contained in a certain volume of air. Learn more from our GHG concentration indicator page. 'NET FLUX' - the difference between the amount of a gas added to the atmosphere by emissions from various ‘sources’ (such as the combustion of fossil fuels or industrial processes) and the amount taken up by various ‘sinks’ (such as oceans or land biomass), which remove that gas from the atmosphere. ESTIMATES OF SURFACE FLUXES OF GREENHOUSE GASES There are a number of ways to estimate the net surface fluxes of greenhouse gases. 'INVERSE MODELLING' is used by CAMS to estimate the net fluxes of CO2, CH4 and N2O. This approach combines accurate in-situ measurements of the concentrations of the gases with the knowledge of how they are transported and mixed into the atmosphere in order to model the underlying fluxes. This approach is used for the values presented here. SATELLITE DATA are also used by CAMS to estimate the net surface fluxes of CO2 and CH4. The corresponding results cover a shorter period of time than inversion products and are therefore not presented here. Tg = Teragram = 1000 000 000 000 g Pg = Petagram = 1000 000 000 000 000 g An approach called ‘inverse modelling’ is used by CAMS to estimate the net surface fluxes of CO2, CH4 and N2O through combining accurate measurements of these greenhouse gases from close to the ground with our knowledge of atmospheric dispersion and transport. CAMS also estimates the net surface fluxes of CO2 and CH4 from satellite data, but the corresponding results cover a shorter period of time and are therefore not presented here. There is significant annual variation, but estimated net fluxes of these greenhouse gases into the atmosphere have been increasing over recent decades, without any noticeable sign of a possible curbing of this long-term upward trend. The figure above shows that the current CO2 total net flux is about 5 PgC per year, which is equivalent to an increase in atmospheric concentration of about 2.5 ppm per year. Anthropogenic emissions of CO2 are mainly the result of the combustion of fossil fuels, industrial activities and deforestation. There is also a natural uptake and release of CO2 from oceans and land biomass. For example, vegetation can act as both a source and a sink for carbon, through uptake via photosynthesis and release through respiration. The release of carbon through natural processes thus also affects atmospheric concentrations. Anthropogenic emissions are partly being compensated for by the natural sinks of oceans and land biomass. The land sink associated with vegetation growth shows variation throughout the year as it responds rapidly to meteorological anomalies such as rainfall and soil moisture. The net flux of CH4 at the Earth’s surface is a combination of anthropogenic emissions (for example from agriculture, fossil fuels and waste) and natural emissions (from wetlands and wildfires). 
The mass of CH4 in the atmosphere is also driven by a net sink through oxidation (reaction with OH). In different regions of the world, different sources dominate. Over the last decade, tropical South America and temperate Eurasia have contributed the most CH4, each at around 15% of the global total, while Europe’s contribution is around 8% (see figure in section below). The net surface flux of N2O is largely due to emissions from the microbial processes of nitrification and denitrification, which occur naturally in soils, freshwater systems and the ocean. However, human activities, such as the use of fertilisers in agriculture, have made a substantial contribution to emissions. Further emissions arise from industrial processes, wastewater treatment, and the combustion of fossil fuels and biomass. Trends in N2O emissions are region-dependent, with the largest increasing trend in temperate Eurasia, due to higher use of nitrogen fertiliser. During the past decade, this region contributed 19% to the total global emission level, compared with 6% from Europe (see figure in section below). In Europe it is estimated that the CO2 uptake by land vegetation, through photosynthesis, significantly contributes to an overall reduction in the global net flux. However, the relative scale of this sink has shown considerable variation over the last four decades, most likely due to changes in the atmospheric transport of heat and humidity, itself due to the variability of weather, over Europe. This part of the world currently captures the equivalent of about 20% of the global growth of atmospheric CO2. More generally, regional variations in temperature and precipitation over the globe modify the fluxes from year to year, for instance during El Niño events. On a decadal scale, the relative contribution of regional fluxes to the total global flux is stable in most areas. Land sinks with an increasing impact are seen in tropical South America, however, while land sinks with a decreasing impact are seen in temperate North America. Providing spatial estimates of CO2 fluxes at a fine spatial scale is challenging, because of the sparse surface measurement network and the uncertainty in the satellite CO2 observations. However, at bigger scales, such as the largest countries or groups of countries, for example the European Union (EU), it is possible to distinguish multi-year averages of CO2 fluxes. This is done by identifying the combined human-induced (fossil fuel burning and cement production) and natural effects (vegetation and wildfires) that are larger than the uncertainty of the flux estimate. In some areas, such as China, the EU, India and USA, the variation in fluxes is mainly driven by fossil fuel burning, while for others such as Australia, Brazil, Canada and the Russian Federation, it is the photosynthesis of vegetation that is dominating. The CAMS greenhouse gas product describes the variations, in space and in time, of the surface sources and sinks (fluxes) of the three major greenhouse gases that are directly affected by human activities: carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O). The variations provide information on the underlying emissions and absorption processes of these gases. For CO2, the product distinguishes between natural and anthropogenic contributions. For CH4, the product also distinguishes between four emission types (rice cultivation, natural wetlands, biomass burning and other sources). 
This product primarily exploits high-quality measurements of air samples collected at tens of sites around the world by various laboratories (129 sites for CO2, 31 sites for CH4 and 136 sites for N2O), in combination with a numerical model of atmospheric tracer transport (Chevallier et al. 2010, Bergamaschi et al. 2013, Thompson et al. 2014). The selected air sample measurements themselves have negligible uncertainty and are representative of large areas. The product uncertainty mainly comes from the limited coverage of the measurement network and from errors in atmospheric transport modelling. Expressed in relative terms, the uncertainty can reach over 100% for some years and some regions if the estimated flux is small, and much less for some other years. The inverse modelling system cannot retrieve the fine spatial details of the CO2 fluxes, but it still distinguishes the CO2 fluxes from the largest countries or from groups of countries such as the European Union (EU), on a multi-year average: variations in the net CO2 flux combining both the anthropogenic effects and the natural effects can be identified that are larger than the flux uncertainty. The flux data and the associated atmospheric fields are available to download. CAMS also provides daily forecasts of atmospheric concentrations of CO2 and CH4 globally with a horizontal resolution of about 9 km by 9 km. Observations have been kindly provided by many laboratories around the world, including NOAA, CSIRO, ECCC and ICOS-ATC. Maps and graphs The length of the data flux record is 1979-onwards for CO2, 1996-onwards for N2O and 1990-onwards for CH4. The fluxes are defined so that a positive value indicates a net flux into the atmosphere. The first figure shows the net annual fluxes of these gases averaged over the globe. The conversion factor of 2.086 PgC/ppm is taken from Prather (2012) and accounts for the lag between CO2 variations in the troposphere and in the stratosphere. The regions used in the second figure are from the ‘Transcom’ mask. For CO2 the net flux is related to natural processes only, while for CH4 and N2O the flux shown includes all sources and sinks. All values are expressed as the fraction of the total global mean net flux into the atmosphere. The uncertainty estimate given in the third figure is the 68% uncertainty envelope (one standard deviation) calculated with a robust Monte Carlo approach. The analysis of the change in the European sink for CO2 given here is derived from Bastos et al. (2016). This dataset in context Each product version undergoes careful quality assurance and quality control procedures. The product is evaluated in comparison with other flux estimates and with independent aircraft air sample measurements. An example is given below. This product is consistent with broad current knowledge about the surface sources and sinks of CO2, CH4 and N2O, but, to our knowledge, it is unique in its combination of temporal coverage, spatial resolution and inclusion of recent measurements. Annual updates are available with a half-year to one-year delay. Information on atmospheric CO2 and CH4 concentrations is also available in real time as part of the CAMS global atmospheric analysis and forecast of atmospheric composition at ECMWF (Massart et al., 2014; Massart et al., 2016, Agusti-Panareda et al., 2014, Agusti-Panareda et al., 2019). 
In the atmospheric CO2 analysis and forecast, the modelled CO2 fluxes from vegetation are bias-corrected based on the optimised fluxes from the CAMS flux inversion system (Agusti-Panareda et al., 2016). The atmospheric analysis assimilates the GOSAT XCO2 product from Bremen University (Heymann et al. 2015) and the GOSAT XCH4 product from SRON (Butz et al. 2011), as well as the IASI XCH4 product from LMD (Crevoisier et al. 2009).
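As a rough, hand-written illustration of the unit bookkeeping used above (not part of the CAMS product itself), the sketch below converts a global net CO2 flux in PgC per year into an approximate atmospheric growth rate in ppm per year, using the 2.086 PgC per ppm factor quoted for the global-mean figures.

    # Sketch: convert a global net CO2 flux (PgC/yr) into an approximate growth
    # rate of the atmospheric CO2 concentration (ppm/yr), using the conversion
    # factor of 2.086 PgC per ppm (Prather, 2012) quoted for the global figures.
    PGC_PER_PPM = 2.086

    def flux_to_ppm_per_year(flux_pgc_per_year):
        return flux_pgc_per_year / PGC_PER_PPM

    print(round(flux_to_ppm_per_year(5.0), 1))  # ~2.4 ppm/yr for a 5 PgC/yr net flux

This is consistent, to within rounding, with the statement above that a total net flux of about 5 PgC per year corresponds to an increase of roughly 2.5 ppm per year.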
Want to help your 10th-grader master math? Here are some of the skills your child will be learning in the classroom. For high school students, math skills and understandings are organized not by grade level but by concept. In algebra, students work with creating and reading expressions, rational numbers and polynomials, and the conventions of algebraic notations. They apply these skills and understandings to solve real-world problems. Understand an equation as a mathematical statement that uses letters to represent unknown numbers (such as 2x-6y+z=14) and is a statement of equality between two expressions (“this equals that”). Explain each step in solving a simple equation, and construct a practical argument to justify a solution method. Graph these equations on coordinate axes with labels and scales. Identify ways to rewrite the structure of an expression. Understand that some equations have no solutions in a given number system, but have a solution in a larger system. For example, the solution of x + 1 = 0 is an integer, not a whole number; the solution of 2x + 1 = 0 is a rational number, not an integer; the solutions of x² – 2 = 0 are real numbers, not rational numbers; and the solutions of x² + 2 = 0 are complex numbers, not real numbers. Add, subtract, and multiply polynomials (expressions with multiple terms, such as 5xy² + 2xy - 7). Understand the relationship between the zeros and the factors of polynomials. Use polynomial identities to solve real world problems. A rectangular garden has a length of x + 2 ft. and a width of x + 8 ft. What must x be in order for the garden to have an area of 91 sq. ft.? Create equations and inequalities in one variable, and use them to solve problems, including weighted averages, calculation of mortgage and interest rates, and rate of travel. A plane takes off from Chicago O’Hare airport, heading east and traveling at 580 miles an hour. Another plane takes off from O’Hare at the same time, heading west and traveling at 530 miles an hour. The two planes will be 1000 miles apart in how many hours? Represent, interpret, and solve equations and inequalities on graphs, plotted in the coordinate plane, and using technology to graph the functions and make tables of values. For high school students, math skills and understandings are organized not by grade level but by concept. In geometry, students work primarily with plane, or Euclidean geometry (with and without coordinates). Students build on geometry concepts learned through 8th grade, using more precise definitions and develop careful proofs of theorems (statements that can be proved true). Understand geometric transformation (moving a shape so it is in a different position, but still has same size, area, angles, and lengths) – especially rigid motions: translations, rotations, reflections, and combinations of these – involving angles, circles, perpendicular lines, parallel lines, and line segments. Understand and prove geometric theorems about lines and angles, triangles, parallelograms, and circles. For example, Pythagorean Theorem, Line Intersection Theorem, Exterior Angle Theorem. Understand trigonometry as a measurement of triangles (and circles, such as orbits). Apply trigonometry to general triangles. Define the sine, cosine, and tangent trigonometric ratios. Understand and use algebraic reasoning to prove geometric theorems. Explain volume formulas and use them to solve problems. What is the volume of a cylinder that is 10m high, and has a radius of 9m? 
(Use π = 3.14) Apply geometric concepts to model real-life situations. - Use measures and properties of geometric shapes to describe objects – for example, model a tree trunk or a human torso as a cylinder. - Apply concepts of density based on area and volume – for example, persons per square mile, BTUs per cubic foot. - Design objects or structures to satisfy specific physical constraints or minimize cost. For high school students, math skills and understandings are organized not by grade level but by concept. In High School Math: Number and Quantity, students extend their understanding of number to imaginary numbers and complex numbers, and work with a variety of measurement units in modeling. Emphasis is on using numbers – in calculations, equations, and measurements – to solve real-world problems, including those that students themselves quantify and define. Rational and irrational numbers Understand and explain why: - the sum of two rational numbers is rational (sum can be written as a fraction or decimal) - the sum of a rational number and an irrational number is irrational (sum cannot be written as a fraction; written in decimal form, is non-repeating and unending) Interpreting and converting units Consistently choose and interpret units in formulas; scale drawings and figures in graphs, data displays and maps. Convert rates and measurements (grams to centigrams, inches to feet, meters to kilometers, miles to kilometers, square inches into square feet, etc.). Use measurement units in modeling to solve real-world problems – for example: acceleration, currency conversions, per capita income, safety statistics, disease incidence, batting averages, etc. Understand that complex numbers are formed by real numbers and imaginary numbers – imaginary numbers that, when squared, give a negative result: i² = -1. Use the relation i² = -1 to add, subtract and multiply complex numbers. Understand a vector as a quantity that has both magnitude (length) and direction. Add and subtract vectors. Solve problems involving velocity and other quantities represented by vectors. - Drew leaves home for a morning walk. He goes 13.5 km south and 5.5 km west. What is his velocity relative to his brother, who is still asleep in bed at home? - Jack is doing push-ups. Which requires smaller muscular force – if his hands are 0.25m apart, or his hands are 0.5m apart? For tips to help your 10th-grader in math class, check out our 10th grade math tips page. Parent Toolkit resources were developed by NBC News Learn with the help of subject-matter experts and align with the Common Core State Standards.
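For parents comfortable with a little Python, here is a small sketch working through three of the sample problems above (the numbers come from the problems themselves; the code is simply one way to check the answers):

    import math

    # Garden problem: (x + 2)(x + 8) = 91  ->  x**2 + 10x - 75 = 0
    a, b, c = 1, 10, -75
    x = (-b + math.sqrt(b**2 - 4*a*c)) / (2*a)   # keep the positive root
    print("garden:", x, "ft")                    # 5.0

    # Two planes flying in opposite directions: the gap grows at the sum of the speeds
    hours = 1000 / (580 + 530)
    print("planes:", round(hours, 2), "hours")   # about 0.9 hours (~54 minutes)

    # Cylinder volume, using the suggested approximation pi = 3.14
    volume = 3.14 * 9**2 * 10
    print("cylinder:", volume, "cubic meters")   # 2543.4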
Introduction – Understanding the Relationship between Interior Angles and Number of Sides: A Guide
The relationship between interior angles and the number of sides of a polygon is a concept that many students starting off in geometry find confusing, but it is an interesting and important one to understand. Each interior angle of a polygon sits at one of its vertices, between two adjacent sides, and measures how much of a turn there is between those sides; together the interior angles tell us a great deal about the nature of the shape itself.

Let's start by looking at some examples. A triangle has three sides and three interior angles that add up to 180 degrees. A square has four sides, each with an interior angle measuring 90 degrees, so added together they make 360 degrees. And a pentagon has five sides with five respective interior angles that add up to 540 degrees. This pattern continues onwards as we consider more complex shapes with more sides, such as hexagons, heptagons and octagons: each extra side adds another 180 degrees to the total, following the formula (n − 2) × 180, where n is the number of sides. (A circle is the exception rather than part of the pattern: it has no straight sides and no interior angles, although a full turn around its centre still measures 360°, or 2π radians.)

In terms of why this is important for geometry learners, understanding these relationships helps us establish properties such as "regularity" of certain shapes. For example, if we know that every interior angle of an equilateral triangle is 60°, then we can infer certain characteristics about its size or relative position within other shapes, which is an incredibly powerful tool when constructing mathematical models or simply dealing with questions involving calculation techniques.

How to Find the Number of Sides From an Interior Angle
Finding the number of sides from an interior angle is a simple process requiring only basic geometry knowledge, provided the polygon is regular – that is, a shape whose sides and angles are all equal in size. In a regular polygon with n sides, the interior angles add up to (n − 2) × 180 degrees, so each one measures (n − 2) × 180 / n degrees. Equivalently, each exterior angle measures 360 / n degrees, and the interior and exterior angles at a vertex always add up to 180 degrees.

That gives a simple recipe for working backwards from an angle to the number of sides: subtract the interior angle from 180 to get the exterior angle, then divide 360 by the result. To illustrate this with our four-sided polygon example: each interior angle of a square is 90 degrees, so the exterior angle is 180 − 90 = 90, and 360 ÷ 90 = 4 sides. But if instead we were given 135 degrees as our starting point, we would not necessarily know what kind of polygon it is right away; 180 − 135 = 45, and 360 ÷ 45 = 8, so the polygon must have eight sides – a regular octagon.

So based on this method, any time you are given or find an interior angle of a regular polygon without knowing how many sides it has, just subtract the angle from 180 and divide 360 by the answer; if the result is a whole number, that is how many total sides your polygon contains – easy!

Step-by-Step Instructions for Calculating Number of Sides
In mathematics, the number of sides a polygon has is an important part of its definition. A polygon with three sides is called a triangle; four sides is called a quadrilateral; five sides is called a pentagon; six sides is called a hexagon; and so on. Knowing how to work out the number of sides in any given polygon can be helpful for anyone studying geometry or using it in everyday life. This step-by-step guide will show you how.

Step 1: Understand what a polygon is and what makes it unique. A polygon is any closed planar shape made up of straight line segments connected together at their endpoints, forming one continuous path that completely encloses the inside area without crossing itself. The key factor that makes polygons unique among shapes (unlike circles) is that they are built entirely from straight line segments meeting at angles – no curves allowed.

Step 2: Examine the object's shape to determine how many straight line segments join its edges together; this also determines the number of angles present in its design. For example, if you have an object bounded by four straight lines, then you have a quadrilateral (four angles); if there are six lines joining its edges together, you have a hexagon (six angles).

Step 3: Count only the distinct angles created where two separate straight segments meet. Any curved portion of the boundary means the figure is not a polygon, so curved "sides" are not counted.

Step 4: The number of straight sides you have counted is the number of sides of the polygon, and it always equals the number of vertices (corner points), since every side runs between two vertices and every vertex joins exactly two sides.

FAQs About Interior Angles and Number of Sides
What are interior angles? Interior angles are angles whose vertices lie on the inside of a polygon. These angles can be found inside regular polygons (which have all sides and angles equal), as well as irregular polygons (which have different side lengths and angles). Interior angles help to define the shape of a polygon by connecting two adjacent sides within the figure.

How is the number of interior angles related to the number of sides? For any given polygon, the number of interior angles equals the number of sides. For a regular polygon, the size of each interior angle follows the formula (n − 2) × 180 / n, where n represents the number of sides – or simply put, (number of sides − 2) times 180, divided by that same number of sides. For example, if you had a pentagon with five sides drawn on paper, then your calculation would look like this: (5 − 2) × 180 / 5 = 108°, which goes to show that every interior angle of a regular pentagon measures 108° (and the five of them sum to 540°). The same approach applies to more complicated shapes such as nonagons and decagons, so make sure you brush up on your arithmetic before attempting any geometry-based problem!

Top 5 Facts About the Relationship Between Interior Angles and Number of Sides
1. The interior angles of a polygon with a given number of sides always add up to the same sum. This is due to the fact that polygon shapes are comprised of flat line segments, each sharing an endpoint (a vertex) with the next around the perimeter. For instance, in a three-sided polygon (triangle) such as an equilateral triangle, the three sides meet at three vertices and form three interior angles which always sum to 180°. This rule then applies in a linear fashion for any other shape with more than three sides – such as pentagons, hexagons and octagons – therefore it can be generalized: the total of all interior angles of any polygon, sometimes referred to as the "interior angle sum", always equals (n − 2) × 180, where n represents the number of sides on the polygon.

2. For regular convex polygons – those whose vertices all point outwards and whose sides and angles are equal – each individual interior angle grows as the number of sides grows. That is why every interior angle of an equilateral triangle measures 60 degrees, every angle of a square measures 90 degrees, every angle of a regular octagon measures 135 degrees, and every angle of a regular twelve-sided polygon measures 150 degrees.

3. The opposite holds for the exterior angles: a greater number of sides means smaller exterior angles, because the exterior angles of any convex polygon always add up to 360°, so each exterior angle of a regular polygon measures 360 / n. Adding even one more side to an existing figure therefore immediately changes every angle in it.

Conclusion – Learning More About Interior Angles and Their Corresponding Numbers of Sides
Interior angles are an important part of understanding geometry; each one is the angle between two adjacent sides of a polygon, measured inside the shape. By understanding interior angles and their corresponding numbers of sides, you can use this knowledge to accurately measure polygons in different scenarios. It is also useful for construction projects, such as taking angle measurements that need to be precise when building structures. Furthermore, knowledge of interior angles is essential for recognizing shapes in the world around us.

Knowing how to calculate the measures of interior angles helps you better visualize and understand them; it is not just about memorizing facts, because if you understand the properties and relationships between each kind then you will never have trouble figuring out problems or recognizing odd shapes or patterns. Additionally, knowing how many sides a polygon contains gives us some insight into why certain shapes have the names they do – for example, a hexagon has 6 sides, so its name makes sense!

In conclusion, learning about interior angles and their corresponding numbers of sides is essential for truly mastering geometry concepts. It allows people to more easily recognize figures in the real world and visualize problems that involve those figures – something that makes problem-solving much easier! While memorizing particular facts may help someone temporarily remember information, true understanding requires deeper comprehension, which comes with practice.
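To tie the arithmetic above together, here is a minimal Python sketch of the standard relationships, assuming a regular polygon with n sides: the interior angles sum to (n − 2) × 180, each interior angle is (n − 2) × 180 / n, and the number of sides can be recovered from an interior angle A as 360 / (180 − A).

    def interior_angle_sum(n):
        """Sum of the interior angles of an n-sided polygon, in degrees."""
        return (n - 2) * 180

    def regular_interior_angle(n):
        """Each interior angle of a regular n-sided polygon, in degrees."""
        return (n - 2) * 180 / n

    def sides_from_regular_angle(angle):
        """Number of sides of a regular polygon with the given interior angle (degrees)."""
        return 360 / (180 - angle)

    print(interior_angle_sum(5))          # 540 (pentagon)
    print(regular_interior_angle(5))      # 108.0
    print(sides_from_regular_angle(135))  # 8.0 -> a regular octagon

Note that the last function only returns a whole number when the angle really does belong to a regular polygon.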
A normal is a dotted line drawn perpendicular to the surface of the refracting material, at the point of entry of the light. When light travels from air into a denser medium like water or glass, it will refract towards the normal. When light travels from a denser medium into air, it will refract away from the normal. What is normal line in total internal reflection? The normal line is a line that is perpendicular to the surface where light is entering from one medium to another. All angles are measured from the normal line. The angle of incidence (θi or θ1) is the angle of the incoming ray. The angle of refraction (θr or θ2) is the angle of the refracted (bent) ray. How do you draw a normal line? The arrow is the imaginary line at 90° to the bow. then draw a perpendicular line (at 90°) to the tangent. The Normal line is the arrow that would exist if the curve were the bow. This is the normal line and it is perpendicular to the convex lens surface drawn at the point of contact. Why a normal line is drawn? At the point of incidence where the ray strikes the mirror, an imaginary line can be drawn perpendicular to the surface of the mirror, which is known as a normal line. “The normal ray gives us the perception of understanding when the angle of incidence, angle of reflection, and angle of refraction changes. What is the normal in optics? In optics, a normal ray is a ray that is incident at 90 degrees to a surface. That is, the light ray is perpendicular or normal to the surface. The angle of incidence (angle an incident light ray makes with a normal to the surface) of the normal ray is 0 degrees. What is the normal of a function? The derivative of a function at a point is the slope of the tangent line at this point. The normal line is defined as the line that is perpendicular to the tangent line at the point of tangency. Why is it called total internal reflection? The word “total” in “total internal reflection” is used in the following sense: all of the light that could possibly propagate away from this surface is reflected, and none is refracted. What is total refractive index? The refractive index (also known as the index of refraction) is defined as the quotient of the speed of light as it passes through two media. It is a dimensionless number that depends on the temperature and wavelength of the beam of light. “Refractive index describes how fast a light beam travels through media.” How do you draw a normal in physics? When a line is drawn perpendicular to the reflecting surface at the point of incidence, this line is known as normal. It is the imaginary line which is perpendicular to the reflecting surface. The normal ray is incident at 90 degrees to the reflecting surface. How do you draw a normal line in a circle? What is the law of reflection? Definition of law of reflection : a statement in optics: when light falls upon a plane surface it is so reflected that the angle of reflection is equal to the angle of incidence and that the incident ray, reflected ray, and normal ray all lie in the plane of incidence. Why normal is important in light? We also need the Normal ray because it separates incident and reflected rays into two equal angles, thus we won’t be able to measure angles with respect to the surface without it. Whenever a ray of light reflects from the smooth or a shiny surface, it obeys laws of reflection. What is the angle of normal? Translation: A ray of light hits a surface at a point. 
From that point the line straight up, at 90 degrees to the surface, is called the normal. Is the normal always 90 degrees? Yes – by definition the normal is perpendicular to the surface. (Similarly, in mechanics the normal force always makes a 90 degree angle with the surface; due to Newton's third law, the normal forces on the two touching objects are equal in magnitude and opposite in direction.) What is the normal line in Snell's law? Snell's Law of Refraction: if light hits the surface of a medium at less than a 90° angle, the angle formed between the line representing the path of light and a line that is perpendicular to the surface (the so-called normal line) is called the angle of incidence. What is a normal class 8? Normal: a perpendicular (a line making an angle of 90°) at the point of incidence (where the incident ray strikes the mirror) is known as the normal to the reflecting surface at that point. The angle between the incident ray and the normal is called the angle of incidence (i). What is normal class 10th? Normal: the line which is drawn perpendicular to the reflecting surface, at the point of incidence, is called the normal at that point. Angle of incidence: it is the angle between the incident ray and the normal to the reflecting surface at the point of incidence. What is the normal line equation? The equation of the normal is an ordinary straight-line equation found from its gradient and the point where it meets the curve; in the simplest cases it comes out as something like y = x. What best describes a normal line? It is the line about which the angle of incidence and the angle of reflection are the same. How do you find the normal? - Step 1: find the gradient of the curve at point P(a,b). - Step 2: find the gradient of the normal to the curve at P(a,b). - Step 3: find the normal's equation in the form y=mx+c by making y the subject in the formula: y−b=m(x−a) What is another name for critical angle? In aeronautics, the critical angle is also called the angle of stall, critical angle of attack, or stalling angle: the angle of attack, greater than or equal to the angle of attack for maximum lift, at which there is a sudden change in the airflow around an airfoil with a subsequent decrease in lift and increase in drag. What is critical angle formula? Critical angle = sin⁻¹(refractive index of the second medium ÷ refractive index of the incident medium), i.e. the inverse sine of the ratio of the two refractive indices; the critical angle is the angle of incidence corresponding to an angle of refraction of 90°. What is meant by a critical angle? Critical angle, in optics, the greatest angle at which a ray of light, travelling in one transparent medium, can strike the boundary between that medium and a second of lower refractive index without being totally reflected within the first medium. What is the SI unit of refractive index? The refractive index has no unit as it is the ratio of two similar quantities. What is minimum angle deviation? The smallest angle through which light is bent by an optical element or system. In a prism, the angle of deviation is a minimum if the incident and exiting rays form equal angles with the prism faces.
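Since the critical-angle formula above is stated only in words, here is a small Python sketch of the same relation, θc = sin⁻¹(n₂/n₁), using illustrative refractive indices (about 1.33 for water and 1.00 for air; these are typical textbook values, not figures taken from the text above):

    import math

    def critical_angle_deg(n_incident, n_refracted):
        # Only defined when light travels from the denser medium (n_incident)
        # toward the less dense one (n_refracted), i.e. n_incident > n_refracted.
        return math.degrees(math.asin(n_refracted / n_incident))

    print(round(critical_angle_deg(1.33, 1.00), 1))  # about 48.8 degrees for water -> air

Any ray striking the water–air boundary at more than this angle of incidence undergoes total internal reflection.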
It is commonly believed that a rainy winter is followed by fewer wildfires in the next fire season, but a new NASA study uncovers a paradox: a wet winter actually correlates well with more small wildfires in the next fire season. According to the researchers, a wet winter allows vegetation, especially grasses, to grow copiously, thus providing more dried fuel for small wildfires during the next fire season. Larger wildfires behave in a more intuitive manner, according to the NASA study, with fewer large wildfires occurring after a rainy winter. "This is the most surprising result from our study because we would expect small fires to follow suit with larger fires," said Daniel Jensen, a Ph.D. student from UCLA, who was part of this study. The year 2017 proved to be one of California's worst in terms of wildfires, according to the Los Angeles Times. In October alone, wildfires in Northern California resulted in 44 deaths and the destruction of a huge amount of property. In early December, the Thomas fire burnt nearly 50,000 acres of land and forced thousands of people to leave their houses. This was despite above-average rainfall in California and other regions of the West in 2017. The new NASA research was carried out under the guidance of J.T. Reager, a scientist at NASA's Jet Propulsion Laboratory (JPL) in California. For this study, the team decided to use data from the GRACE (Gravity Recovery and Climate Experiment) satellites to probe the connection between wet winters and wildfire incidents across the U.S. from 2003 through 2012. GRACE, a joint project of the U.S. and Germany, was launched in March 2002 to make precise measurements of the Earth's gravity field. The twin GRACE satellites, GRACE-1 and GRACE-2, remained in orbit for 15 years, allowing scientists to trace the unceasing movement of ice, liquid water, and the solid Earth. The mission ended in 2017 following the decommissioning of the GRACE-2 satellite due to age-related battery problems. NASA researchers analyzed the soil moisture measurements from the GRACE satellites and assimilated these data into the Catchment Land Surface Model created by the Goddard Space Flight Center in Maryland. The final results suggested that for different types of landscapes, the number of small fires increases following a rainy pre-season. The detailed findings of the study were published in the journal Environmental Research Letters.
The Witwatersrand Basin formed as a vast ancient inland lake around 3 billion years ago. Over about 300 million years, successive strata settled down in the lake, and when the water dried up the rocks compacted to form hard quartzites. Some seams or "reefs" contain the richest gold field the world has ever known. Mining activities have yielded over 1.5 billion gold ounces since the late 19th century – a third of all the gold that has ever been produced. Where did the gold come from originally? It was formed, like other elements, in the burning of the universe's first-generation stars – specifically, in the superheated explosions of supernovae. When the resulting space dust coalesced into second-generation stars and planets, gold found its way into the crust of the Earth where humankind would later hunt for and treasure it. Why exactly so much gold was concentrated in the Witwatersrand remains a mystery. We do know that it was brought down into the basin as placer gold (in powder form) from very high mountains. High-energy rivers dropped the heavy gold along with pebbles in deltas on the edge of the lake, forming conglomerates known as "banket" (nut pudding). This is the prized gold ore. Australian prospector George Harrison was one of many who picked over the landscape for years in a frustrating hunt for the elusive yellow metal. When he found signs of gold outcropping along the surface of a ridge of hills, he did not know that the reefs plunged down deeply as a result of the mighty Vredefort asteroid impact. The thin gold-bearing strata dip steeply towards the centre of the crater near Vredefort, some 140km to the south, disappearing under later geological deposits. The impact happened about 700 million years after the Witwatersrand Basin had been fully formed as a stratigraphic series of rock and mineral layers. Outcrops of the gold-bearing reefs were traced downwards by miners who developed large-scale engineering technologies to descend into the hard quartzite. Johannesburg, the city of gold, flourished on the surface while a labour force of black miners (recruited from all over Southern Africa) sweated deep underground. The country's wealth and its apartheid system both developed out of this harsh mining environment. The economic boom continued for well over a century but today the gold deposits appear to be reaching depletion. Meanwhile the people of South Africa have reached a political settlement giving equal rights to all and setting their sights on growth beyond mining. None of this – good and bad – would have happened had the Vredefort impact not occurred, capsizing the reefs and thrusting them deep down. Had the Witwatersrand strata continued to lie flat, as they were initially formed, it is most likely that over the succeeding span of two billion years all the gold would have eroded away into the oceans. - "Why the gold came to the Reef": http://www.joburg.org.za/index.php?option=com_content&task=view&id=274&Itemid=51 - "Mangalisa Geology": http://superiormining.com/properties/south_africa/mangalisa/geology/
If marsquakes do indeed take place, said the scientists who analyzed the high-resolution images, our nearest planetary neighbor may still have active volcanism, which could help create conditions for liquid water. With High Resolution Imaging Science Experiment (HiRISE) imagery, the research team examined boulders along a fault system known as Cerberus Fossae, which cuts across a very young (few million years old) lava surface on Mars. By analyzing boulders that toppled from a martian cliff, some of which left trails in the coarse-grained soils, and comparing the patterns of dislodged rocks to such patterns caused by quakes on Earth, the scientists determined the rocks fell because of seismic activity. The martian patterns were not consistent with how boulders would scatter if they were deposited as ice melted, another means by which rocks are dispersed on Mars. Gerald Roberts, an earthquake geologist with Birkbeck, an institution of the University of London, who led the study, said that the images of Mars included boulders that ranged from two to 20 meters (6.5 to 65 feet) in diameter, which had fallen in avalanches from cliffs. The size and number of boulders decreased over a radius of 100 kilometers (62 miles) centered at a point along the Cerberus Fossae faults. "This is consistent with the hypothesis that boulders had been mobilized by ground-shaking, and that the severity of the ground-shaking decreased away from the epicenters of marsquakes," Roberts said. The study, by Roberts and his colleagues, will be published Thursday in the Journal of Geophysical Research-Planets, a publication of the American Geophysical Union (AGU). The team compared the pattern of boulder falls, and faulting of the martian surface, with those seen after a 2009 earthquake near L'Aquila, in central Italy. In that event, boulder falls occurred up to approximately 50 km (31 miles) from the epicenter. Because the area of displaced boulders in the marsscape stretched across an area approximately 200 km (124 miles) long, the quakes were likely to have had a magnitude greater than 7, the researchers estimated. By looking at the tracks that the falling boulders had left on the dust-covered martian surface, the team determined that the marsquakes were relatively recent - and certainly within the last few percent of the planet's history - because martian winds had not yet erased the boulder tracks. Trails on Mars can quickly disappear - for instance, tracks left by NASA robotic rovers are erased within a few years by martian winds, whereas other, sheltered tracks stick around longer. It is possible, the scientists concluded, that large-magnitude quake activity is still occurring on Mars. The existence of marsquakes could be significant in the ongoing search for life on Mars, the researchers stated. If the faults along the Cerberus Fossae region are active, and the quakes are driven by movements of magma related to the nearby volcano, Elysium Mons, the energy provided in the form of heat from the volcanic activity under the surface of Mars could be able to melt ice. The resulting liquid water, they noted, could provide habitats friendly to life.

Notes for Journalists
Neither the paper nor this press release is under embargo.

Author affiliations: Brian Matthews, Department of Physics and Astronomy, The Open University, Milton Keynes, United Kingdom; Chris Bristow, Department of Earth and Planetary Sciences, Birkbeck, University of London, United Kingdom, and Hyder Consulting, London, United Kingdom; Luca Guerrieri, Geological Survey of Italy, ISPRA - High Institute for the Environmental Protection and Research, Rome, Italy; Joyce Vetterlein, Department of Earth and Planetary Sciences, Birkbeck, University of London, United Kingdom.

Press contact: Kate Ramsayer, American Geophysical Union
Strike and dip ||This article includes a list of references, related reading or external links, but its sources remain unclear because it lacks inline citations. (March 2010)| Strike and dip refer to the orientation or attitude of a geologic feature. The strike line of a bed, fault, or other planar feature, is a line representing the intersection of that feature with a horizontal plane. On a geologic map, this is represented with a short straight line segment oriented parallel to the strike line. Strike (or strike angle) can be given as either a quadrant compass bearing of the strike line (N25°E for example) or in terms of east or west of true north or south, a single three digit number representing the azimuth, where the lower number is usually given (where the example of N25°E would simply be 025), or the azimuth number followed by the degree sign (example of N25°E would be 025°). The dip gives the steepest angle of descent of a tilted bed or feature relative to a horizontal plane, and is given by the number (0°-90°) as well as a letter (N,S,E,W) with rough direction in which the bed is dipping. One technique is to always take the strike so the dip is 90° to the right of the strike, in which case the redundant letter following the dip angle is omitted. The map symbol is a short line attached and at right angles to the strike symbol pointing in the direction which the planar surface is dipping down. The angle of dip is generally included on a geologic map without the degree sign. Beds that are dipping vertically are shown with the dip symbol on both sides of the strike, and beds that are flat are shown like the vertical beds, but with a circle around them. Both vertical and flat beds do not have a number written with them. Another way of representing strike and dip is by dip and dip direction. The dip direction is the azimuth of the direction the dip as projected to the horizontal (like the trend of a linear feature in trend and plunge measurements), which is 90° off the strike angle. For example, a bed dipping 30° to the South, would have an East-West strike (and would be written 090°/30° S using strike and dip), but would be written as 30/180 using the dip and dip direction method. Strike and dip are determined in the field with a compass and clinometer or a combination of the two, such as a Brunton compass named after D.W. Brunton a Colorado miner. Compass-clinometers which measure dip and dip direction in a single operation (as pictured) are often called "stratum" or "Klar" compasses after a German professor. Any planar feature can be described by strike and dip. This includes sedimentary bedding, faults and fractures, cuestas, igneous dikes and sills, metamorphic foliation and any other planar feature in the Earth. Linear features are measured with very similar methods, where "plunge" is the dip angle and "trend" is analogous to the dip direction value. Apparent dip is the name of any dip measured in a vertical plane that is not perpendicular to the strike line. True dip can be calculated from apparent dip using trigonometry if you know the strike. Geologic cross sections use apparent dip when they are drawn at some angle not perpendicular to strike. ||This article includes a list of references, related reading or external links, but its sources remain unclear because it lacks inline citations. (July 2014)| - Compton, Robert R. (1985). Geology in the Field. New York: J. Wiley and Sons. ISBN 978-0-471-82902-7. OCLC 301031779. - Lahee, Frederic Henry (1961) . Field Geology (6th ed.). 
New York: McGraw-Hill. OCLC 500832981. - Tarbuck, Edward J.; Lutgens, Frederick K. (2008). Earth: An Introduction to Physical Geology (9th ed.). Upper Saddle River, N.J.: Pearson Prentice Hall. ISBN 0-13-156684-9. OCLC 70408067. - "Digital Cartographic Standard for Geologic Map Symbolization". FGDC Geological Data Subcommittee. USGS. August 2006. Retrieved 20 March 2010.
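The article notes that true dip can be calculated from an apparent dip using trigonometry once the strike is known. A minimal Python sketch of the standard relation, tan(apparent dip) = tan(true dip) × sin(angle between the section line and the strike), with made-up numbers:

    import math

    def apparent_dip(true_dip_deg, angle_from_strike_deg):
        """Apparent dip (degrees) seen in a vertical section at the given angle to strike."""
        return math.degrees(math.atan(
            math.tan(math.radians(true_dip_deg)) *
            math.sin(math.radians(angle_from_strike_deg))))

    def true_dip(apparent_dip_deg, angle_from_strike_deg):
        """Recover true dip (degrees) from an apparent dip measured off-strike."""
        return math.degrees(math.atan(
            math.tan(math.radians(apparent_dip_deg)) /
            math.sin(math.radians(angle_from_strike_deg))))

    a = apparent_dip(30, 45)        # a bed dipping 30 degrees, section 45 degrees off strike
    print(round(a, 1))              # about 22.2
    print(round(true_dip(a, 45)))   # 30 - the true dip is recovered

A section cut perpendicular to strike shows the true dip, while a section cut along strike shows an apparent dip of zero.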
Students will practice writing the equation of a circle given the center and radius, center and diameter, a graph, the center and a point through which the circle passes, the endpoints of a diameter, the area, or the circumference of the circle. This activity was designed for a high school level geometry class. The answer to each station will give them a piece of a story (who, doing what, with who, where, when, etc.). This is a much more fun approach to multiple choice, and the students adore reading the story to the class. They get very excited to see which of their teachers is the "star" of the story. ALL slides are given in an editable format so you are free to personalize the story for your students. PowerPoint is required to edit. Only story elements can be changed, not the actual problems. This activity works very well in conjunction with my Geometry Circles Unit. This resource is included in the following bundle(s): Geometry Activities Bundle This purchase includes a license for one teacher only for personal use in their classroom. Licenses are non-transferable , meaning they can not be passed from one teacher to another. No part of this resource is to be shared with colleagues or used by an entire grade level, school, or district without purchasing the proper number of licenses. If you are a coach, principal, or district interested in transferable licenses to accommodate yearly staff changes, please contact me for a quote at firstname.lastname@example.org. This resource may not be uploaded to the internet in any form, including classroom/personal websites or network drives, unless the site is password protected and can only be accessed by students. Mathlibs is the registered trademark of kidCourses.com and is used with permission by All Things Algebra, a friend and strategic partner of, and a collaborator and contributor for kidCourses.com.
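As a taste of the kind of problem the activity covers (and assuming nothing about the actual worksheet beyond the skill list above), here is a short Python sketch that writes the equation of a circle from the endpoints of a diameter: the centre is the midpoint of the endpoints and the radius is half the distance between them. The endpoints are made up for illustration.

    import math

    def circle_from_diameter(p1, p2):
        (x1, y1), (x2, y2) = p1, p2
        h, k = (x1 + x2) / 2, (y1 + y2) / 2   # centre = midpoint of the diameter
        r = math.dist(p1, p2) / 2             # radius = half the diameter length
        return h, k, r

    h, k, r = circle_from_diameter((1, 2), (7, 10))
    print(f"(x - {h})^2 + (y - {k})^2 = {r**2}")   # (x - 4.0)^2 + (y - 6.0)^2 = 25.0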
Harvard scientists have unveiled a new laser-measuring device that they say will provide a critical advance in the resolution of current planet-finding techniques, making the discovery of Earth-sized planets possible. The discovery of planets outside of our solar system, called “exoplanets,” is one of the hottest fields in astronomy and holds great promise to increase our understanding of Earth’s solar system and of how life first took hold on this planet. The problem, however, is that the two main techniques to find exoplanets rely on the planet’s very small effect on its star. One measures the star’s “wobble” due to the planet’s gravitational pull as it circles, while the other measures the dimming of a star’s light as a planet passes in front of it. With current technology, both of these techniques can identify relatively large planets that have a noticeable effect on their star. Large planets, however, tend to be gaseous giants, like our solar system’s Jupiter and Saturn, incapable of supporting life. Smaller, rocky planets, like Earth and Mars, are thought to be the likeliest candidates for life, but are too small to be detected by current techniques. The new device, called an astro-comb, uses femto-second (one millionth of one billionth of a second) pulses of laser light linked to an atomic clock to provide a precise standard against which light from a star can be measured. Ronald Walsworth, senior lecturer on physics in the Faculty of Arts and Sciences, senior physicist at the Smithsonian Astrophysical Observatory and in whose lab the astro-comb was developed, said it may increase the resolution of the star “wobble” technique by about 100 times, which would allow detection of a planet the size of Earth. “The existing tools, prior to astro-comb, couldn’t do the job,” Walsworth said. Both Walsworth and Astronomy Professor Dimitar Sasselov, director of the Harvard University Origins of Life Initiative, said that the final resolution of astronomical observations taken using the astro-comb may be somewhat lower than what would be ideally possible because other factors, such as “noise” in stellar atmospheres, may affect the quality of measurements. Still, Sasselov said, planets close to the size of Earth — and that share enough of Earth’s characteristics to harbor the conditions of life — should be detectable within the next few years using the astro-comb. The ability to find and analyze Earthlike planets is an important step in obtaining baseline information with which to understand how life on Earth arose, said Sasselov. The Origins of Life Initiative, he said, brings together experts from a variety of fields whose expertise is pertinent to understanding the planetary roots of life in the universe. By studying conditions on Earthlike planets circling other stars, he said, scientists may be better able to understand what conditions were like on Earth before life arose. That understanding, he said, could inform the research of chemists and molecular biologists seeking to learn how organic chemicals came to create the chemistry of life. "I think this is super-exciting, from the point of view of furthering this new field of science: the exploration of environments out of this solar system,” Sasselov said. 
The astro-comb was developed in a collaboration between physicists and astronomers working at the Harvard-Smithsonian Center for Astrophysics and Harvard’s Department of Physics and Origins of Life Initiative, as well as Massachusetts Institute of Technology’s Department of Electrical Engineering. In particular, both Walsworth and Sasselov credited the interdisciplinary environment of the Center for Astrophysics and the vision of the Origins of Life Initiative for bringing together experts in diverse fields whose work made the advance possible. Three years ago, as a classroom exercise in a physics course he was teaching, Walsworth began mulling ways that an existing instrument, called a laser comb, could be used to solve the knotty problems of astrophysics. At about the same time, Andrew Szentgyorgyi, an associate of the Harvard College Observatory and senior astrophysicist at the Smithsonian Astrophysical Observatory, began hearing about laser combs and was trying to find someone knowledgeable enough about them to tell him if they could be adapted to his astronomy research. Finally, in frustration, he went to Walsworth’s office. “I knocked on his door and asked, ‘Do you know anything about these laser combs?’ He said, ‘Do I know anything? I have one,’” Szentgyorgyi said. Laser combs have been around for a decade and are used in creating extremely precise clocks. The combs work by creating regular spikes of laser light that are evenly spaced in wavelength — like the teeth of a comb — and can be projected onto a light-spectrum measuring device called a spectrograph. Against such a precise background, physicists and astronomers can accurately measure the light from various sources, including the light from stars. Laser combs hadn’t been used in astrophysics before now, however, because of technical problems that made them too precise — the teeth on the light “comb” they created were too close together to be useful in measuring starlight. Walsworth and colleagues added a filtering device that spreads the laser comb’s teeth apart by a factor of about ten, and stabilized the system to an atomic clock, creating the first laser comb appropriate for astrophysical research. “Now we can tune the system and get out exactly the light out that a particular astrophysical spectrograph needs,” Walsworth said. Though its most high-profile application will be the search for exoplanets, the astro-comb can measure other light coming from the heavens as well. In fact, it will get its first tryout in late spring at the Mount Hopkins Observatory in Arizona examining stars in a nearby globular cluster to see if their motion is affected by the theorized presence of dark matter there. Once that trial is complete, a new astro-comb will be constructed as part of the Harvard University Origins of Life Initiative with the aim of deploying it at a project being built in the Canary Islands for exoplanet research, called the New Earths Facility. Szentgyorgyi, who will head the Canary Island team, said they will first take the astro-comb to Geneva to calibrate it with equipment being built by European collaborators and then install it in the Canary Islands. It will be operational sometime in 2010, he said. Source: Harvard University Explore further: Toothpaste fluorine formed in stars
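To see why the "wobble" method demands such precise wavelength calibration, consider the non-relativistic Doppler relation Δλ/λ = v/c. The velocities below are generic order-of-magnitude values (a Jupiter-like planet tugs its star at roughly 10 m/s, an Earth-like one at only about 0.1 m/s); they are illustrative and not figures from the article.

    C = 299_792_458.0  # speed of light, m/s

    def doppler_shift_nm(rest_wavelength_nm, radial_velocity_m_s):
        """Shift of a spectral line, in nanometres, for a given stellar radial velocity."""
        return rest_wavelength_nm * radial_velocity_m_s / C

    line = 550.0                 # nm, a line near the middle of the visible band
    for v in (10.0, 0.1):        # rough wobbles for a Jupiter-like vs an Earth-like planet
        print(v, "m/s ->", doppler_shift_nm(line, v), "nm")

The shifts come out around 10⁻⁵ nm and 10⁻⁷ nm respectively, which is why a wavelength ruler as stable and finely spaced as the astro-comb is needed to push the technique toward Earth-sized planets.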
It all starts from a problem with dust: galaxies with the highest rates of star formation are also the "dustiest", because the violent process of star formation produces gas and heavy molecules. This means that part of the electromagnetic radiation emitted by nascent stars cannot be recorded by the instruments for astronomical observation in the optical and the ultraviolet band, as it is absorbed by dust and gas and re-emitted in the infrared. On top of this, owing to instrument limitations it is even difficult to observe this infrared radiation in the case of very distant, older galaxies. All this complicates things for astrophysicists investigating stellar and galaxy formation, and all studies to date have mostly proposed predictions based on purely theoretical models.

[Figure: montage of the SDP.81 Einstein Ring and the lensed galaxy. Credits: ALMA (NRAO/ESO/NAOJ)/Y. Tamura (The University of Tokyo)/Mark Swinbank (Durham University)]

Claudia Mancuso, PhD student under the supervision of Andrea Lapi and Luigi Danese, SISSA professors in the astrophysics group and co-authors of the study, did the opposite: "we started from the data, available in complete form only for the closer galaxies and in incomplete form for the more distant ones, and we filled the 'gaps' by interpreting and extending the data based on a scenario we devised," comments Mancuso. The analysis also took into account the phenomenon of gravitational lensing, which allows us to observe very distant galaxies belonging to ancient cosmic epochs. In this "direct" manner (i.e., model-independent) the SISSA group obtained an image of the evolution of galaxies even in very ancient epochs (close, on a cosmic timescale, to the epoch of reionization).

This reconstruction demonstrates that elliptical galaxies cannot have formed through the merging of other galaxies, "simply because there wasn't enough time to accumulate the large quantity of stars seen in these galaxies through these processes," comments Mancuso. "This means that the formation of elliptical galaxies occurs through internal, in situ processes of star formation."

"These findings," states Mancuso, "will constitute a necessary starting point for building the future generation of models and numerical simulations and, more importantly, they will provide an unprecedented basis for identifying primordial galaxies in the next generation surveys in the ultraviolet with the future James Webb Space Telescope (JWST), in the millimeter band with the Atacama Large Millimeter Array (ALMA), and in the radio band with the Square Kilometer Array (SKA) interferometer".
Atom
Smallest recognized division of a chemical element
Mass range: 1.67×10⁻²⁷ to 4.52×10⁻²⁵ kg
Electric charge: zero (neutral), or ion charge
Diameter range: 62 pm (He) to 520 pm (Cs)
Components: electrons and a compact nucleus of protons and neutrons

An atom is the smallest unit of ordinary matter that forms a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are extremely small, typically around 100 picometers across. They are so small that accurately predicting their behavior using classical physics – as if they were tennis balls, for example – is not possible due to quantum effects.

Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and a number of neutrons. Only the most common variety of hydrogen has no neutrons. More than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge, the electrons have a negative electric charge, and the neutrons have no electric charge. If the numbers of protons and electrons are equal, then the atom is electrically neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively – such atoms are called ions.

The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay.

The number of protons in the nucleus is the atomic number and it defines to which chemical element the atom belongs. For example, any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules or crystals. The ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature. Chemistry is the discipline that studies these changes.

History of atomic theory
The basic idea that matter is made up of tiny indivisible particles is very old, appearing in many ancient cultures such as Greece and India. The word atom is derived from the ancient Greek word atomos, which means "uncuttable". This ancient idea was based in philosophical reasoning rather than scientific reasoning, and modern atomic theory is not based on these old concepts. That said, the word "atom" itself was used throughout the ages by thinkers who suspected that matter was ultimately granular in nature.

Dalton's law of multiple proportions
In the early 1800s, the English chemist John Dalton compiled experimental data gathered by himself and other scientists and discovered a pattern now known as the "law of multiple proportions". He noticed that in chemical compounds which contain a particular chemical element, the content of that element in these compounds will differ by ratios of small whole numbers. This pattern suggested to Dalton that each chemical element combines with others by some basic and consistent unit of mass.
For example, there are two types of tin oxide: one is a black powder that is 88.1% tin and 11.9% oxygen, and the other is a white powder that is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the black oxide there is about 13.5 g of oxygen for every 100 g of tin, and in the white oxide there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. In these oxides, for every tin atom there are one or two oxygen atoms respectively (SnO and SnO2). As a second example, Dalton considered two iron oxides: a black powder which is 78.1% iron and 21.9% oxygen, and a red powder which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black oxide there is about 28 g of oxygen for every 100 g of iron, and in the red oxide there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. In these respective oxides, for every two atoms of iron, there are two or three atoms of oxygen (Fe2O2 and Fe2O3).[a] As a final example: nitrous oxide is 63.3% nitrogen and 36.7% oxygen, nitric oxide is 44.05% nitrogen and 55.95% oxygen, and nitrogen dioxide is 29.5% nitrogen and 70.5% oxygen. Adjusting these figures, in nitrous oxide there is 80 g of oxygen for every 140 g of nitrogen, in nitric oxide there is about 160 g of oxygen for every 140 g of nitrogen, and in nitrogen dioxide there is 320 g of oxygen for every 140 g of nitrogen. 80, 160, and 320 form a ratio of 1:2:4. The respective formulas for these oxides are N2O, NO, and NO2. Kinetic theory of gases In the late 18th century, a number of scientists found that they could better explain the behavior of gases by describing them as collections of sub-microscopic particles and modelling their behavior using statistics and probability. Unlike Dalton's atomic theory, the kinetic theory of gases describes not how gases react chemically with each other to form compounds, but how they behave physically: diffusion, viscosity, conductivity, pressure, etc. In 1827, botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically, a phenomenon that became known as "Brownian motion". This was thought to be caused by water molecules knocking the grains about. In 1905, Albert Einstein proved the reality of these molecules and their motions by producing the first statistical physics analysis of Brownian motion. French physicist Jean Perrin used Einstein's work to experimentally determine the mass and dimensions of molecules, thereby providing physical evidence for the particle nature of matter. Discovery of the electron In 1897, J. J. Thomson discovered that cathode rays are not electromagnetic waves but made of particles that are 1,800 times lighter than hydrogen (the lightest atom). Thomson concluded that these particles came from the atoms within the cathode — they were subatomic particles. He called these new particles corpuscles but they were later renamed electrons. Thomson also showed that electrons were identical to particles given off by photoelectric and radioactive materials. It was quickly recognized that electrons are the particles that carry electric currents in metal wires. Thomson concluded that these electrons emerged from the very atoms of the cathode in his instruments, which meant that atoms are not indivisible as the name atomos suggests. Discovery of the nucleus J. J. 
Thomson thought that the negatively-charged electrons were distributed throughout a sea of positive charge that filled the whole volume of the atom. This model is sometimes known as the plum pudding model. Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford did not expect to run into the same problem because alpha particles are much heavier than electrons. According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle, and the electrons are so lightweight they should be pushed aside effortlessly by the much heavier alpha particles. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully. Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with alpha particles. They spotted alpha particles being deflected by angles greater than 90°. To explain this, Rutherford proposed that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. Only such an intense concentration of charge could produce an electric field strong enough to deflect the alpha particles as observed. Discovery of isotopes While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table. The term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J. J. Thomson created a technique for isotope separation through his work on ionized gases, which subsequently led to the discovery of stable isotopes. In 1913 the physicist Niels Bohr proposed a model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. This quantization was used to explain why the electrons' orbits are stable (given that, classically, accelerating charges, including those in circular motion, lose kinetic energy that is emitted as electromagnetic radiation; see synchrotron radiation) and why elements absorb and emit electromagnetic radiation in discrete spectra. Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory. These results refined Ernest Rutherford's and Antonius van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity. That it is equal to the atomic nuclear charge remains the accepted atomic model today.
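Bohr's discrete orbits can be illustrated with the textbook hydrogen level formula E_n = −13.6 eV / n² (the 13.6 eV hydrogen ionization energy is quoted later in the article). The sketch below is only an illustration of the idea of quantized jumps; the constants are standard textbook values rather than figures drawn from this article's sources.

```python
# Hydrogen energy levels in the Bohr model: E_n = -13.6 eV / n**2.
E1_EV = 13.6          # hydrogen ground-state binding energy (textbook value)
HC_EV_NM = 1239.84    # Planck constant times speed of light, in eV*nm

def level_energy(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV (negative = bound)."""
    return -E1_EV / n**2

def transition(n_upper, n_lower):
    """Photon energy (eV) and wavelength (nm) for a jump between two orbits."""
    energy = level_energy(n_upper) - level_energy(n_lower)
    return energy, HC_EV_NM / energy

for n in (1, 2, 3):
    print(f"E_{n} = {level_energy(n):+.2f} eV")

e_photon, wavelength = transition(3, 2)
print(f"n=3 -> n=2 emits {e_photon:.2f} eV at {wavelength:.0f} nm (the red H-alpha line)")
```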
Chemical bonds between atoms were explained by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons. As the chemical properties of the elements were known to largely repeat themselves according to the periodic law, in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus. The Bohr model of the atom was the first complete physical model of the atom. It described the overall structure of the atom, how atoms bond to each other, and predicted the spectral lines of hydrogen. Bohr's model was not perfect and was soon superseded by the more accurate Schrödinger model, but it was sufficient to evaporate any remaining doubts that matter is composed of atoms. For chemists, the idea of the atom had been a useful heuristic tool, but physicists had doubts as to whether matter really is made up of atoms as nobody had yet developed a complete physical model of the atom. The Schrödinger model The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of atomic properties. When a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split in a way correlated with the direction of an atom's angular momentum, or spin. As this spin direction is initially random, the beam would be expected to deflect in a random direction. Instead, the beam was split into two directional components, corresponding to the atomic spin being oriented up or down with respect to the magnetic field. In 1925 Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, Louis de Broglie had proposed the de Broglie hypothesis: that all particles behave like waves to some extent, and in 1926 Erwin Schrödinger used this idea to develop the Schrödinger equation, a mathematical model of the atom (wave mechanics) that described the electrons as three-dimensional waveforms rather than point particles. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed. Discovery of the neutron The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy. The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule. 
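The whole number rule can be illustrated with a couple of the isotope masses quoted later in this article (hydrogen-1 at 1.007825 Da and lead-208 at 207.9766521 Da), plus carbon-12, which is exactly 12 Da by definition. The short sketch below simply shows how close these measured masses sit to whole numbers; it is an illustration, not an analysis taken from the article's sources.

```python
# Whole number rule: isotope masses lie close to integer multiples of 1 Da.
ISOTOPE_MASSES_DA = {
    "hydrogen-1": 1.007825,     # quoted later in the article
    "carbon-12": 12.0,          # exactly 12 Da by definition
    "lead-208": 207.9766521,    # quoted later in the article
}

for isotope, mass in ISOTOPE_MASSES_DA.items():
    nearest = round(mass)
    print(f"{isotope:10s}: {mass:12.6f} Da  (nearest integer {nearest}, "
          f"off by {abs(mass - nearest):.4f} Da)")
```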
The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to that of the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus. Fission, high-energy physics and condensed matter In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product. A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's result was the first example of experimental nuclear fission. In 1944, Hahn received the Nobel Prize in Chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized. In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies. Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks. The Standard Model of particle physics was developed, and it has so far successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions. Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron. The electron is by far the least massive of these particles at 9.11×10−31 kg, with a negative electrical charge and a size that is too small to be measured using available techniques. Until the discovery of neutrino mass, it was the lightest particle with a measured positive rest mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details. Protons have a positive charge and a mass 1,836 times that of the electron, at 1.6726×10−27 kg. The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejected what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton. Neutrons have no electrical charge and have a free mass of 1,839 times the mass of the electron, or 1.6749×10−27 kg. Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5×10−15 m—although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick. In the Standard Model of physics, electrons are truly elementary particles with no internal structure, whereas protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3) and one down quark (with a charge of −1/3).
Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles. The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces. All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 ∛A femtometres, where A is the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other. Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay. The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. A nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. As a result, atoms with matching numbers of protons and neutrons are more stable against decay, but with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus. The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3 to 10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus. Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element.
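To put the size of the nucleus in perspective, the empirical radius formula quoted above, r ≈ 1.07 ∛A fm, can be compared with the roughly 10⁵ fm radius of the whole atom. The sketch below is purely illustrative; it reuses the coefficient and order-of-magnitude figure given in the text and nothing else.

```python
# Rough nuclear radius from the empirical formula r ≈ 1.07 * A**(1/3) fm,
# compared with the ~1e5 fm order of magnitude of a whole atom.
ATOM_RADIUS_FM = 1e5  # order-of-magnitude atomic radius quoted in the text

def nuclear_radius_fm(a):
    """Approximate nuclear radius in femtometres for mass number A."""
    return 1.07 * a ** (1 / 3)

for name, a in [("hydrogen-1", 1), ("carbon-12", 12), ("uranium-238", 238)]:
    r = nuclear_radius_fm(a)
    print(f"{name:12s}: r ≈ {r:5.2f} fm, atom/nucleus ≈ {ATOM_RADIUS_FM / r:,.0f}×")
```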
If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, E = mc², where m is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate. The fusion of two nuclei that creates a larger nucleus with an atomic number lower than that of iron or nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease. This means that fusion processes producing nuclei with atomic numbers higher than about 26 and atomic masses higher than about 60 are endothermic. These more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star. The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations. Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured. Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation. Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines. The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 million eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions.
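The 2.23 MeV figure for splitting a deuterium nucleus can be roughly recovered from the mass–energy relation E = mc² discussed above. The sketch below uses the proton and neutron masses quoted earlier in the article; the deuteron mass is an outside textbook value (about 3.3436×10⁻²⁷ kg, not from this article), and the rounding of the inputs limits the result to roughly two significant figures.

```python
# Deuteron binding energy from the mass defect, E = Δm * c**2.
C = 2.998e8              # speed of light, m/s
EV = 1.602e-19           # joules per electronvolt
M_PROTON   = 1.6726e-27  # kg (quoted in the article)
M_NEUTRON  = 1.6749e-27  # kg (quoted in the article)
M_DEUTERON = 3.3436e-27  # kg (textbook value, not from the article)

mass_defect = M_PROTON + M_NEUTRON - M_DEUTERON      # ~3.9e-30 kg
binding_energy_j = mass_defect * C**2
binding_energy_mev = binding_energy_j / EV / 1e6

print(f"mass defect ≈ {mass_defect:.2e} kg")
print(f"binding energy ≈ {binding_energy_mev:.1f} MeV (article quotes 2.23 MeV)")
```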
Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals. By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form, also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element oganesson. All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible. About 339 nuclides occur naturally on Earth, of which 252 (about 74%) have not been observed to decay, and are referred to as "stable isotopes". Only 90 nuclides are stable theoretically, while another 162 (bringing the total to 252) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 34 radioactive nuclides have half-lives longer than 100 million years, and are long-lived enough to have been present since the birth of the solar system. This collection of 286 nuclides is known as the primordial nuclides. Finally, an additional 53 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14).[note 1] For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.2 stable isotopes per element. Twenty-six elements have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes. Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. Of the 252 known stable nuclides, only four have both an odd number of protons and an odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10 and nitrogen-14. Also, only four naturally occurring, radioactive odd-odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138 and tantalum-180m. Most odd-odd nuclei are highly unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects. The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number.
It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons). The actual mass of an atom at rest is often expressed in daltons (Da), also called the unified atomic mass unit (u). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66×10−27 kg. Hydrogen-1 (the lightest isotope of hydrogen, which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 Da. The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 is roughly 14 Da), but this number will not be exactly an integer except (by definition) in the case of carbon-12. The heaviest stable atom is lead-208, with a mass of 207.9766521 Da. As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022×10²³). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 Da, and so a mole of carbon-12 atoms weighs exactly 0.012 kg. Shape and size Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus. This assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin. On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right). Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm. When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have been shown to occur for sulfur ions and chalcogen ions in pyrite-type compounds. Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope, although individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width. A single drop of water contains about 2 sextillion (2×10²¹) atoms of oxygen, and twice the number of hydrogen atoms.
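The relationship between the dalton, the mole, and everyday masses is just multiplication. The sketch below reproduces the 0.012 kg mass of a mole of carbon-12 from the constants quoted above, and then makes a rough, order-of-magnitude check of the drop-of-water figure, assuming (purely for illustration) a drop of about 0.05 g and a water molar mass of about 18 g/mol.

```python
# Moles, daltons and atom counts, using the constants quoted in the text.
AVOGADRO = 6.022e23       # atoms (or molecules) per mole
DALTON_KG = 1.66e-27      # kg per dalton (1/12 the mass of a carbon-12 atom)

# One mole of carbon-12: 12 Da per atom, Avogadro's number of atoms.
mole_of_c12_kg = 12 * DALTON_KG * AVOGADRO
print(f"mass of one mole of carbon-12 ≈ {mole_of_c12_kg:.4f} kg")   # ≈ 0.012 kg

# Order-of-magnitude check of the drop-of-water figure.
DROP_MASS_G = 0.05        # assumed drop size, g (illustrative assumption)
WATER_MOLAR_MASS_G = 18.0 # approximate molar mass of H2O, g/mol
molecules = DROP_MASS_G / WATER_MOLAR_MASS_G * AVOGADRO
print(f"oxygen atoms per drop ≈ {molecules:.1e}")      # ~2e21, i.e. roughly 2 sextillion
print(f"hydrogen atoms per drop ≈ {2 * molecules:.1e}")
```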
A single-carat diamond with a mass of 2×10−4 kg contains about 10 sextillion (10²²) atoms of carbon.[note 2] If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple. Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm. The most common forms of radioactive decay are:
- Alpha decay: this process is caused when the nucleus emits an alpha particle, which is a helium nucleus consisting of two protons and two neutrons. The result of the emission is a new element with a lower atomic number.
- Beta decay (and electron capture): these processes are regulated by the weak force, and result from a transformation of a neutron into a proton, or a proton into a neutron. The neutron to proton transition is accompanied by the emission of an electron and an antineutrino, while proton to neutron transition (except in electron capture) causes the emission of a positron and a neutrino. The electron or positron emissions are called beta particles. Beta decay either increases or decreases the atomic number of the nucleus by one. Electron capture is more common than positron emission, because it requires less energy. In this type of decay, an electron is absorbed by the nucleus, rather than a positron emitted from the nucleus. A neutrino is still emitted in this process, and a proton changes to a neutron.
- Gamma decay: this process results from a change in the energy level of the nucleus to a lower state, resulting in the emission of electromagnetic radiation. The excited state of a nucleus which results in gamma emission usually occurs following the emission of an alpha or a beta particle. Thus, gamma decay usually follows alpha or beta decay.
Other, rarer types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission that allows excited nuclei to lose energy in a different way is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission. Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth. Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin ½ ħ, or "spin-½". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin.
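Returning briefly to radioactive decay: the exponential half-life law described above can be written as N(t) = N₀ · (1/2)^(t/T½). The sketch below is a minimal illustration; carbon-14's roughly 5,730-year half-life is a standard outside value, not a figure from this article.

```python
# Exponential radioactive decay: remaining fraction after t years.
HALF_LIFE_C14_YEARS = 5730.0  # carbon-14 half-life (standard value, assumed here)

def remaining_fraction(t_years, half_life):
    """Fraction of the original isotope left after t_years."""
    return 0.5 ** (t_years / half_life)

for n_half_lives in (1, 2, 3):
    t = n_half_lives * HALF_LIFE_C14_YEARS
    print(f"after {n_half_lives} half-life(s) ({t:.0f} y): "
          f"{remaining_fraction(t, HALF_LIFE_C14_YEARS):.0%} remains")
```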
The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field, but the dominant contribution comes from electron spin. Because electrons obey the Pauli exclusion principle, under which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin-up state and the other in the opposite, spin-down state. Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with an even number of electrons. In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field. The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. Normally nuclei with spin are aligned in random directions because of thermal equilibrium, but for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging. The potential energy of an electron in an atom is negative relative to its value when the electron is infinitely far from the nucleus; it varies roughly in inverse proportion to the distance and reaches its minimum inside the nucleus. In the quantum-mechanical model, a bound electron can occupy only a set of states centered on the nucleus, and each state corresponds to a specific energy level; see time-independent Schrödinger equation for a theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, a stationary state, while an electron transition to a higher level results in an excited state. The electron's energy increases along with the principal quantum number n because the (average) distance to the nucleus increases. The dependence of the energy on the orbital quantum number ℓ is caused not by the electrostatic potential of the nucleus, but by interaction between electrons. For an electron to transition between two different states, e.g. ground state to first excited state, it must absorb or emit a photon at an energy matching the energy difference between those levels, according to the Niels Bohr model; this can be calculated precisely using the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties.
The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum. Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors. When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a direction that does not include the continuous spectrum in the background instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined. Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin-orbit coupling, which is an interaction between the spin and motion of the outermost electron. When an atom is in an external magnetic field, spectral lines become split into three or more components, a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines. The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect. If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band. Valence and bonding behavior Valency is the combining power of an element. It is determined by the number of bonds it can form to other atoms or groups. The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells. For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one electron more than a filled shell and others that are one electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts.
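The idea that valence electrons sit in the outermost shell can be sketched with the idealized Madelung (Aufbau) filling order. The code below is a simplified illustration only, not something drawn from this article's sources; real elements such as chromium and copper deviate from this ideal order.

```python
# Idealized electron configurations via the Madelung (n + l) filling rule.
SUBSHELL_LETTERS = "spdf"

def configuration(z):
    """Fill subshells in (n + l, n) order and return [(label, electrons), ...]."""
    subshells = [(n, l) for n in range(1, 8) for l in range(0, min(n, 4))]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    filled, remaining = [], z
    for n, l in subshells:
        if remaining <= 0:
            break
        e = min(remaining, 2 * (2 * l + 1))   # subshell capacity is 2(2l + 1)
        filled.append((f"{n}{SUBSHELL_LETTERS[l]}", e))
        remaining -= e
    return filled

def valence_electrons(z):
    """Electrons in the outermost occupied shell (highest n), idealized."""
    conf = configuration(z)
    n_max = max(int(label[0]) for label, _ in conf)
    return sum(e for label, e in conf if int(label[0]) == n_max)

for name, z in [("sodium", 11), ("chlorine", 17), ("argon", 18)]:
    conf = " ".join(f"{label}{e}" for label, e in configuration(z))
    print(f"{name:8s} (Z={z}): {conf}  -> {valence_electrons(z)} outer-shell electron(s)")
```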
Many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds. The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases. Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas. Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond. Gaseous allotropes exist as well, such as dioxygen and ozone. At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale. This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior. While atoms are too small to be seen, devices such as the scanning tunneling microscope (STM) enable their visualization at the surfaces of solids. The microscope uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would be insurmountable in the classical perspective. Electrons tunnel through the vacuum between two biased electrodes, providing a tunneling current that is exponentially dependent on their separation. One electrode is a sharp tip ideally ending with a single atom. At each point of the scan of the surface the tip's height is adjusted so as to keep the tunneling current at a set value. How much the tip moves toward and away from the surface is interpreted as the height profile. For low bias, the microscope images the averaged electron orbitals across closely packed energy levels—the local density of the electronic states near the Fermi level. Because of the distances involved, both electrodes need to be extremely stable; only then can periodicities that correspond to individual atoms be observed. The method alone is not chemically specific, and cannot identify the atomic species present at the surface. Atoms can be easily identified by their mass. If an atom is ionized by removing one of its electrons, its trajectory when it passes through a magnetic field will bend. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis.
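The bending of an ion's path in a mass spectrometer follows r = mv/(qB) for an ion of mass m, charge q and speed v in a magnetic field B. The sketch below is illustrative only; the field strength and ion speed are arbitrary assumed values, and the point is simply that heavier isotopes trace a proportionally larger radius.

```python
# Radius of an ion's circular path in a magnetic field: r = m*v / (q*B).
DALTON_KG = 1.66054e-27   # kg per dalton
E_CHARGE  = 1.602e-19     # elementary charge, C
B_FIELD   = 0.50          # tesla   (assumed, illustrative)
SPEED     = 1.0e5         # m/s     (assumed, illustrative)

def bend_radius(mass_da, charge_e=1):
    """Bending radius in metres for a singly (or multiply) charged ion."""
    return (mass_da * DALTON_KG * SPEED) / (charge_e * E_CHARGE * B_FIELD)

for isotope, mass in [("carbon-12", 12.000), ("carbon-13", 13.003)]:
    print(f"{isotope}: r ≈ {100 * bend_radius(mass):.2f} cm")
```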
Electron emission techniques such as X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES), which measure the binding energies of the core electrons, are used to identify the atomic species present in a sample in a non-destructive way. With proper focusing both can be made area-specific. Another such method is electron energy loss spectroscopy (EELS), which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample. Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. These colors can be replicated using a gas-discharge lamp containing the same element. Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth. Origin and current state Baryonic matter forms about 4% of the total energy density of the observable Universe, with an average density of about 0.25 particles/m³ (mostly protons and electrons). Within a galaxy such as the Milky Way, particles have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10⁵ to 10⁹ atoms/m³. The Sun is believed to be inside the Local Bubble, so the density in the solar neighborhood is only about 10³ atoms/m³. Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's baryonic matter is concentrated inside stars, where conditions are unfavorable for atomic matter. The total baryonic mass is about 10% of the mass of the galaxy; the remainder of the mass is an unknown dark matter. High temperature inside stars makes most "atoms" fully ionized, that is, separates all electrons from the nuclei. In stellar remnants—with the exception of their surface layers—immense pressure makes electron shells impossible. Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. In about three minutes Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron. The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly, bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei. Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple alpha process) the sequence of elements from carbon up to iron; see stellar nucleosynthesis for details. Isotopes such as lithium-6, as well as some beryllium and boron, are generated in space through cosmic ray spallation.
This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected. Elements heavier than iron were produced in supernovae and colliding neutron stars through the r-process, and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei. Elements such as lead formed largely through the radioactive decay of heavier elements. Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating. Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay. There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are they the result of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere. Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions. Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth. Transuranic elements have radioactive lifetimes shorter than the current age of the Earth and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust. Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore. The Earth contains approximately 1.33×10⁵⁰ atoms. Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals. This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter. Rare and theoretical forms All nuclides with atomic numbers higher than 82 (lead) are known to be radioactive. No nuclide with an atomic number exceeding 92 (uranium) exists on Earth as a primordial nuclide, and heavier elements generally have shorter half-lives. Nevertheless, an "island of stability" encompassing relatively long-lived isotopes of superheavy elements with atomic numbers 110 to 114 might exist. Predictions for the half-life of the most stable nuclide on the island range from a few minutes to millions of years. In any case, superheavy elements (with Z > 104) would not exist due to increasing Coulomb repulsion (which results in spontaneous fission with increasingly short half-lives) in the absence of any stabilizing effects. Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other.
Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The first causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature. In 1996 the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva. Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test fundamental predictions of physics. - Iron(II) oxide's formula is written here as Fe2O2 rather than the more conventional FeO because this better illustrates the explanation. - Pullman, Bernard (1998). The Atom in the History of Human Thought. Oxford, England: Oxford University Press. pp. 31–33. ISBN 978-0-19-515040-7. - Melsen (1952). From Atomos to Atom, pp. 18-19 - Dalton (1817). A New System of Chemical Philosophy vol. 2, p. 36 - Melsen (1952). From Atomos to Atom, p. 137 - Dalton (1817). A New System of Chemical Philosophy vol. 2, pp. 28 - Millington (1906). John Dalton, p. 113 - Dalton (1808). A New System of Chemical Philosophy vol. 1, pp. 316-319 - Holbrow et al (2010). Modern Introductory Physics, pp. 65-66 - Einstein, Albert (1905). "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" (PDF). Annalen der Physik (in German). 322 (8): 549–560. Bibcode:1905AnP...322..549E. doi:10.1002/andp.19053220806. Archived (PDF) from the original on 18 July 2007. - Mazo, Robert M. (2002). Brownian Motion: Fluctuations, Dynamics, and Applications. Oxford University Press. pp. 1–7. ISBN 978-0-19-851567-8. OCLC 48753074. - Lee, Y.K.; Hoon, K. (1995). "Brownian Motion". Imperial College. Archived from the original on 18 December 2007. - Patterson, G. (2007). "Jean Perrin and the triumph of the atomic doctrine". Endeavour. 31 (2): 50–53. doi:10.1016/j.endeavour.2007.05.003. PMID 17602746. - Thomson, J.J. (August 1901). "On bodies smaller than atoms". The Popular Science Monthly: 323–335. Retrieved 21 June 2009. - Navarro (2012). A History of the Electron, p. 94 - Heilbron (2003). Ernest Rutheford and the Explosion of Atoms, pp. 64-68 - "Frederick Soddy, The Nobel Prize in Chemistry 1921". Nobel Foundation. Archived from the original on 9 April 2008. Retrieved 18 January 2008. - Thomson, Joseph John (1913). "Rays of positive electricity". Proceedings of the Royal Society. A. 89 (607): 1–20. Bibcode:1913RSPSA..89....1T. doi:10.1098/rspa.1913.0057. Archived from the original on 4 November 2016. - Stern, David P. (16 May 2005). "The Atomic Nucleus and Bohr's Early Model of the Atom". NASA/Goddard Space Flight Center. Archived from the original on 20 August 2007. - Bohr, Niels (11 December 1922). "Niels Bohr, The Nobel Prize in Physics 1922, Nobel Lecture". Nobel Foundation. Archived from the original on 15 April 2008. - Pais, Abraham (1986). Inward Bound: Of Matter and Forces in the Physical World. New York: Oxford University Press. pp. 228–230. ISBN 978-0-19-851971-3. - Lewis, Gilbert N. (1916). "The Atom and the Molecule". Journal of the American Chemical Society. 38 (4): 762–786. doi:10.1021/ja02261a002. Archived (PDF) from the original on 25 August 2019. - Scerri, Eric R. (2007). 
Huguenots were French Protestants who held to the Reformed, or Calvinist, tradition of Protestantism. The term has its origin in early-16th-century France. It was frequently used in reference to those of the Reformed Church of France from the time of the Protestant Reformation. By contrast, the Protestant populations of eastern France, in Alsace, Moselle, and Montbéliard, were mainly German Lutherans. In his Encyclopedia of Protestantism, Hans Hillerbrand said that, on the eve of the St. Bartholomew's Day massacre in 1572, the Huguenot community included as much as 10% of the French population. By 1600 it had declined to 7–8%, and was reduced further after the return of severe persecution in 1685 under Louis XIV's Edict of Fontainebleau. The Huguenot population was concentrated in the southern and western parts of the Kingdom of France. As Huguenots gained influence and more openly displayed their faith, Catholic hostility grew. A series of religious conflicts followed, known as the French Wars of Religion, fought intermittently from 1562 to 1598. The Huguenots were led by Jeanne d'Albret, her son, the future Henry IV (who would later convert to Catholicism in order to become king), and the princes of Condé. The wars ended with the Edict of Nantes, which granted the Huguenots substantial religious, political and military autonomy. Huguenot rebellions in the 1620s resulted in the abolition of their political and military privileges. They retained the religious provisions of the Edict of Nantes until the rule of Louis XIV, who gradually increased persecution of Protestantism until he issued the Edict of Fontainebleau (1685). This ended legal recognition of Protestantism in France, and the Huguenots were forced either to convert to Catholicism (possibly as Nicodemites) or to flee as refugees; they were subject to violent dragonnades. Louis XIV claimed that the French Huguenot population was reduced from about 800,000 or 900,000 adherents to just 1,000 or 1,500. He exaggerated the decline, but the dragonnades were devastating for the French Protestant community. The remaining Huguenots faced continued persecution under Louis XV. By the time of his death in 1774, Calvinism had been nearly eliminated from France. Persecution of Protestants officially ended with the Edict of Versailles, signed by Louis XVI in 1787. Two years later, with the Revolutionary Declaration of the Rights of Man and of the Citizen of 1789, Protestants gained equal rights as citizens. A term used originally in derision, Huguenot has unclear origins. Various hypotheses have been promoted. The term may have been a combined reference to the Swiss politician Besançon Hugues (died 1532) and the religiously conflicted nature of Swiss republicanism in his time, using a derogatory pun on the name Hugues by way of the Dutch word Huisgenoten (literally "housemates"), referring to the connotations of a somewhat related German word, Eidgenosse (Confederate in the sense of "a citizen of one of the states of the Swiss Confederacy"). Geneva was John Calvin's adopted home and the centre of the Calvinist movement. In Geneva, Hugues, though Catholic, was a leader of the "Confederate Party", so called because it favoured independence from the Duke of Savoy. It sought an alliance between the city-state of Geneva and the Swiss Confederation.
The label Huguenot was purportedly first applied in France to those conspirators (all of them aristocratic members of the Reformed Church) who were involved in the Amboise plot of 1560: a foiled attempt to wrest power in France from the influential and zealously Catholic House of Guise. This action would have fostered relations with the Swiss. O. I. A. Roche promoted this idea among historians. He wrote in his book, The Days of the Upright, A History of the Huguenots (1965), that "Huguenot" is: a combination of a Dutch and a German word. In the Dutch-speaking North of France, Bible students who gathered in each other's houses to study secretly were called Huis Genooten ("housemates") while on the Swiss and German borders they were termed Eid Genossen, or "oath fellows", that is, persons bound to each other by an oath. Gallicised into "Huguenot", often used deprecatingly, the word became, during two and a half centuries of terror and triumph, a badge of enduring honour and courage. Some disagree with such double or triple non-French linguistic origins. Janet Gray argues that for the word to have spread into common use in France, it must have originated there in French. The "Hugues hypothesis" argues that the name was derived by association with Hugues Capet, king of France, who reigned long before the Reformation. He was regarded by the Gallicans as a noble man who respected people's dignity and lives. Janet Gray and other supporters of the hypothesis suggest that the name huguenote would be roughly equivalent to little Hugos, or those who want Hugo. In this last connection, the name could suggest the derogatory inference of superstitious worship; popular fancy held that Huguon, the gate of King Hugo, was haunted by the ghost of le roi Huguet (regarded by Roman Catholics as an infamous scoundrel) and other spirits. Instead of being in Purgatory after death, according to Catholic doctrine, they came back to harm the living at night. The prétendus réformés ("these supposedly 'reformed'") were said to gather at night at Tours, both for political purposes, and for prayer and singing psalms. Reguier de la Plancha (d. 1560) in his De l'Estat de France offered the following account as to the origin of the name, as cited by The Cape Monthly: Reguier de la Plancha accounts for it [the name] as follows: "The name huguenand was given to those of the religion during the affair of Amboyse, and they were to retain it ever since. I'll say a word about it to settle the doubts of those who have strayed in seeking its origin. The superstition of our ancestors, to within twenty or thirty years thereabouts, was such that in almost all the towns in the kingdom they had a notion that certain spirits underwent their Purgatory in this world after death, and that they went about the town at night, striking and outraging many people whom they found in the streets. But the light of the Gospel has made them vanish, and teaches us that these spirits were street-strollers and ruffians. In Paris the spirit was called le moine bourré; at Orléans, le mulet odet; at Blois le loup garon; at Tours, le Roy Huguet; and so on in other places.
Now, it happens that those whom they called Lutherans were at that time so narrowly watched during the day that they were forced to wait till night to assemble, for the purpose of praying God, for preaching and receiving the Holy Sacrament; so that although they did not frighten nor hurt anybody, the priests, through mockery, made them the successors of those spirits which roam the night; and thus that name being quite common in the mouth of the populace, to designate the evangelical huguenands in the country of Tourraine and Amboyse, it became in vogue after that enterprise." Some have suggested the name was derived, with similar intended scorn, from les guenon de Hus (the monkeys or apes of Jan Hus). By 1911, there was still no consensus in the United States on this interpretation. The Huguenot cross is the distinctive emblem of the Huguenots (croix huguenote). It is now an official symbol of the Église des Protestants réformés (French Protestant church). Huguenot descendants sometimes display this symbol as a sign of reconnaissance (recognition) between them. The issue of demographic strength and geographical spread of the Reformed tradition in France has been covered in a variety of sources. Most of them agree that the Huguenot population reached as many as 10% of the total population, or roughly 2 million people, on the eve of the St. Bartholomew's Day massacre in 1572. The new teaching of John Calvin attracted sizeable portions of the nobility and urban bourgeoisie. After John Calvin introduced the Reformation in France, the number of French Protestants steadily swelled to ten percent of the population, or roughly 1.8 million people, in the decade between 1560 and 1570. During the same period there were some 1,400 Reformed churches operating in France. Hans J. Hillerbrand, an expert on the subject, in his Encyclopedia of Protestantism: 4-volume Set claims the Huguenot community reached as much as 10% of the French population on the eve of the St. Bartholomew's Day massacre, declining to 7 to 8% by the end of the 16th century, and further after heavy persecution began once again with the Revocation of the Edict of Nantes by Louis XIV of France in 1685. Among the nobles, Calvinism peaked on the eve of the St. Bartholomew's Day massacre. Thereafter it declined sharply, as the Huguenots were no longer tolerated by either the French royalty or the Catholic masses. By the end of the sixteenth century Huguenots constituted 7–8% of the whole population, or 1.2 million people. By the time Louis XIV revoked the Edict of Nantes in 1685, Huguenots accounted for 800,000 to 1 million people. Huguenots controlled sizeable areas in southern and western France. In addition, many areas, especially in the central part of the country, were also contested between the French Reformed and Catholic nobles. Demographically, there were some areas in which the whole populations had been Reformed. These included villages in and around the Massif Central, as well as the area around Dordogne, which was also almost entirely Reformed. John Calvin was a Frenchman and himself largely responsible for the introduction and spread of the Reformed tradition in France. He wrote in French, but unlike in Germany, where Lutheran writings were widely distributed and could be read by the common man, in France the new faith was adopted mainly by the nobility while the common people remained Catholic. This is true for many areas in the west and south controlled by the Huguenot nobility.
Although relatively large portions of the peasant population became Reformed there, the population as a whole still remained majority Catholic. Overall, Huguenot presence was heavily concentrated in the western and southern portions of the French kingdom, as nobles there secured the practice of the new faith. These included Languedoc-Roussillon, Gascony and even a strip of land that stretched into the Dauphiné. Huguenots lived on the Atlantic coast in La Rochelle, and also spread across the provinces of Normandy and Poitou. In the south, towns like Castres, Montauban, Montpellier and Nimes were Huguenot strongholds. In addition, a dense network of Protestant villages permeated the rural mountainous region of the Cevennes. Inhabited by Camisards, it continues to be the backbone of French Protestantism. Historians estimate that roughly 80% of all Huguenots lived in the western and southern areas of France. Today, there are some Reformed communities around the world that still retain their Huguenot identity. In France, Calvinists in the United Protestant Church of France and also some in the Protestant Reformed Church of Alsace and Lorraine consider themselves Huguenots. A rural Huguenot community in the Cevennes that rebelled in 1702 is still known as the Camisards, especially in historical contexts. Huguenot exiles in the United Kingdom, the United States, South Africa, Australia, and a number of other countries still retain their identity.
Emigration and diaspora
The bulk of Huguenot émigrés relocated to Protestant states such as the Dutch Republic, England and Wales, Protestant-controlled Ireland, the Channel Islands, Scotland, Denmark, Sweden, Switzerland, the Electorate of Brandenburg and Electorate of the Palatinate in the Holy Roman Empire, and the Duchy of Prussia. Some fled as refugees to the Dutch Cape Colony in South Africa, the Dutch East Indies, the Caribbean colonies, and several of the Dutch and English colonies in North America. A few families went to Orthodox Russia and Catholic Quebec. After centuries, most Huguenots have assimilated into the various societies and cultures where they settled. Remnant communities of Camisards in the Cévennes, most Reformed members of the United Protestant Church of France, French members of the largely German Protestant Reformed Church of Alsace and Lorraine, and the Huguenot diaspora in England and Australia, all still retain their beliefs and Huguenot designation.
|Year||Number of Huguenots in France|
|1700||100,000 or less|
The availability of the Bible in vernacular languages was important to the spread of the Protestant movement and development of the Reformed church in France. The country had a long history of struggles with the papacy (see the Avignon Papacy, for example) by the time the Protestant Reformation finally arrived. Around 1294, a French version of the Scriptures was prepared by the Roman Catholic priest, Guyard des Moulins. A two-volume illustrated folio paraphrase version based on his manuscript, by Jean de Rély, was printed in Paris in 1487. The first known translation of the Bible into one of France's regional languages, Arpitan or Franco-Provençal, had been prepared by the 12th-century pre-Protestant reformer Peter Waldo (Pierre de Vaux). The Waldensians became more militant, creating fortified areas, as in Cabrières, perhaps attacking an abbey. They were suppressed by Francis I in 1545 in the Massacre of Mérindol.
Other predecessors of the Reformed church included the pro-reform and Gallican Roman Catholics, such as Jacques Lefevre (c. 1455–1536). The Gallicans briefly achieved independence for the French church, on the principle that the religion of France could not be controlled by the Bishop of Rome, a foreign power. During the Protestant Reformation, Lefevre, a professor at the University of Paris, published his French translation of the New Testament in 1523, followed by the whole Bible in the French language in 1530. William Farel was a student of Lefevre who went on to become a leader of the Swiss Reformation, establishing a Protestant republican government in Geneva. Jean Cauvin (John Calvin), another student at the University of Paris, also converted to Protestantism. Long after the sect was suppressed by Francis I, the remaining French Waldensians, then mostly in the Luberon region, sought to join Farel, Calvin and the Reformation, and Olivétan published a French Bible for them. The French Confession of 1559 shows a decidedly Calvinistic influence. Although Huguenots are usually treated as one group, two distinct types emerged. Since the Huguenots had political and religious goals, it was commonplace to refer to the Calvinists as "Huguenots of religion" and those who opposed the monarchy as "Huguenots of the state", who were mostly nobles.
- The Huguenots of religion were influenced by John Calvin's works and established Calvinist synods. They were determined to end religious oppression.
- The Huguenots of the state opposed the Guise family's monopoly of power and wanted to attack the authority of the crown. This group of Huguenots from southern France had frequent issues with the strict Calvinist tenets that are outlined in many of John Calvin's letters to the synods of the Languedoc.
Criticism and conflict with the Catholic Church
Like other religious reformers of the time, Huguenots felt that the Catholic Church needed a radical cleansing of its impurities, and that the Pope represented a worldly kingdom, which sat in mocking tyranny over the things of God, and was ultimately doomed. Rhetoric like this became fiercer as events unfolded, and eventually stirred up a reaction in the Catholic establishment. Fanatically opposed to the Catholic Church, the Huguenots attacked priests, monks, nuns, monasticism, images, and church buildings. Most of the cities in which the Huguenots gained a hold saw iconoclast riots in which altars and images in churches, and sometimes the buildings themselves, were torn down. Ancient relics and texts were destroyed; the bodies of saints exhumed and burned. The cities of Bourges, Montauban and Orléans saw substantial activity in this regard. The Huguenots transformed themselves into a definitive political movement thereafter. Protestant preachers rallied a considerable army and a formidable cavalry, which came under the leadership of Admiral Gaspard de Coligny. Henry of Navarre and the House of Bourbon allied themselves to the Huguenots, adding wealth and territorial holdings to the Protestant strength, which at its height grew to sixty fortified cities, and posed a serious and continuous threat to the Catholic crown and Paris over the next three decades. The Catholic Church in France and many of its members opposed the Huguenots. Some Huguenot preachers and congregants were attacked as they attempted to meet for worship. The height of this persecution was
the St. Bartholomew's Day massacre in August 1572, when 5,000 to 30,000 were killed, although there were also underlying political reasons, as some of the Huguenots were nobles trying to establish separate centres of power in southern France. Retaliating against the French Catholics, the Huguenots had their own militia.
Reformation and growth
Early in his reign, Francis I (reigned 1515–1547) persecuted the old, pre-Protestant movement of Waldensians in southeastern France. Francis initially protected the Huguenot dissidents from Parlementary measures seeking to exterminate them. After the 1534 Affair of the Placards, however, he distanced himself from Huguenots and their protection. Huguenot numbers grew rapidly between 1555 and 1561, chiefly amongst nobles and city dwellers. During this time, their opponents first dubbed the Protestants Huguenots; but they called themselves reformés, or "Reformed". They organised their first national synod in 1558 in Paris. By 1562, the estimated number of Huguenots peaked at approximately two million, concentrated mainly in the western, southern, and some central parts of France, compared to approximately sixteen million Catholics during the same period. Persecution diminished the number of Huguenots who remained in France.
Wars of religion
As the Huguenots gained influence and displayed their faith more openly, Roman Catholic hostility towards them grew, even though the French crown offered increasingly liberal political concessions and edicts of toleration. Following the accidental death of Henry II in 1559, his son succeeded as King Francis II, alongside his wife and queen consort, Mary, Queen of Scots. During the eighteen months of the reign of Francis II, Mary encouraged a policy of rounding up French Huguenots on charges of heresy, putting them in front of Catholic judges, and employing torture and burning as punishments for dissenters. Mary returned to Scotland a widow, in the summer of 1561. In 1561, the Edict of Orléans declared an end to the persecution, and the Edict of Saint-Germain of January 1562 formally recognised the Huguenots for the first time. However, these measures disguised the growing tensions between Protestants and Catholics. These tensions spurred eight civil wars, interrupted by periods of relative calm, between 1562 and 1598. With each break in peace, the Huguenots' trust in the Catholic throne diminished, the violence became more severe, and Protestant demands became grander, until a lasting cessation of open hostility finally occurred in 1598. The wars gradually took on a dynastic character, developing into an extended feud between the Houses of Bourbon and Guise, both of which—in addition to holding rival religious views—staked a claim to the French throne. The crown, occupied by the House of Valois, generally supported the Catholic side, but on occasion switched over to the Protestant cause when politically expedient. The French Wars of Religion began with the Massacre of Vassy on 1 March 1562, when dozens (some sources say hundreds) of Huguenots were killed, and about 200 were wounded. It was in this year that some Huguenots destroyed the tomb and remains of Saint Irenaeus (d. 202), an early Church father and bishop who was a disciple of Polycarp. The Michelade, a massacre of Catholics by Huguenots, followed on 29 September 1567.
St. Bartholomew's Day massacre
In what became known as the St. Bartholomew's Day Massacre of 24 August – 3 October 1572, Catholics killed thousands of Huguenots in Paris, and similar massacres took place in other towns in the following weeks. The main provincial towns and cities experiencing massacres were Aix, Bordeaux, Bourges, Lyons, Meaux, Orléans, Rouen, Toulouse, and Troyes. Although the exact number of fatalities throughout the country is not known, on 23–24 August, between 2,000 and 3,000 Protestants were killed in Paris and a further 3,000 to 7,000 more in the French provinces. By 17 September, almost 25,000 Protestants had been massacred in Paris alone. Beyond Paris, the killings continued until 3 October. An amnesty granted in 1573 pardoned the perpetrators.
Edict of Nantes
The pattern of warfare followed by brief periods of peace continued for nearly another quarter-century. The warfare was definitively quelled in 1598, when Henry of Navarre, having succeeded to the French throne as Henry IV, and having recanted Protestantism in favour of Roman Catholicism in order to obtain the French crown, issued the Edict of Nantes. The Edict reaffirmed Roman Catholicism as the state religion of France, but granted the Protestants equality with Catholics under the throne and a degree of religious and political freedom within their domains. The Edict simultaneously protected Catholic interests by discouraging the founding of new Protestant churches in Catholic-controlled regions. With the proclamation of the Edict of Nantes, and the subsequent protection of Huguenot rights, pressures to leave France abated. However, enforcement of the Edict grew increasingly irregular over time, making life so intolerable that many fled the country. The Huguenot population of France dropped to 856,000 by the mid-1660s, of which a plurality lived in rural areas. The greatest concentrations of Huguenots at this time resided in the regions of Guienne, Saintonge-Aunis-Angoumois and Poitou. Montpellier was among the most important of the 66 "villes de sûreté" (cities of protection/protected cities) that the Edict of 1598 granted to the Huguenots. The city's political institutions and the university were all handed over to the Huguenots. Tension with Paris led to a siege by the royal army in 1622. Peace terms called for the dismantling of the city's fortifications. A royal citadel was built, and the university and consulate were taken over by the Catholic party. Even before the Edict of Alès (1629), Protestant rule was dead and the ville de sûreté was no more. By 1620, the Huguenots were on the defensive, and the government increasingly applied pressure. A series of three small civil wars known as the Huguenot rebellions broke out, mainly in southwestern France, between 1621 and 1629, in which the Reformed areas revolted against royal authority. The uprisings occurred a decade after the death of Henry IV, a Huguenot before his conversion to Roman Catholicism, who had protected Protestants through the Edict of Nantes. His successor Louis XIII, under the regency of his Italian Catholic mother Marie de' Medici, was more intolerant of Protestantism. The Huguenots responded by establishing independent political and military structures, establishing diplomatic contacts with foreign powers, and openly revolting against central power. The rebellions were implacably suppressed by the French crown.
Edict of Fontainebleau
Louis XIV gained the throne in 1643 and acted increasingly aggressively to force the Huguenots to convert.
At first he sent missionaries, backed by a fund to financially reward converts to Roman Catholicism. Then he imposed penalties, closed Huguenot schools and excluded Huguenots from favoured professions. Escalating the pressure, he instituted dragonnades, which included the occupation and looting of Huguenot homes by military troops, in an effort to forcibly convert them. In 1685, he issued the Edict of Fontainebleau, revoking the Edict of Nantes and declaring Protestantism illegal. The revocation forbade Protestant services, required education of children as Catholics, and prohibited emigration. It proved disastrous to the Huguenots and costly for France. It precipitated civil bloodshed, ruined commerce, and resulted in the illegal flight from the country of hundreds of thousands of Protestants, many of whom were intellectuals, doctors and business leaders whose skills were transferred to Britain as well as Holland, Prussia, South Africa and the other places to which they fled. Some 4,000 emigrated to the Thirteen Colonies, where they settled, especially in New York, the Delaware River Valley in Eastern Pennsylvania, New Jersey, and Virginia. The English authorities welcomed the French refugees, providing money from both government and private agencies to aid their relocation. Those Huguenots who stayed in France were subsequently forcibly converted to Roman Catholicism and were called "new converts". After this, the Huguenots (with estimates ranging from 200,000 to 1,000,000) fled to Protestant countries: England, the Netherlands, Switzerland, Norway, Denmark, and Prussia—whose Calvinist Great Elector Frederick William welcomed them to help rebuild his war-ravaged and underpopulated country. Following this exodus, Huguenots remained in large numbers in only one region of France: the rugged Cévennes region in the south. There were also some Calvinists in the Alsace region, which then belonged to the Holy Roman Empire of the German Nation. In the early 18th century, a regional group known as the Camisards (who were Huguenots of the mountainous Massif Central region) rioted against the Catholic Church, burning churches and killing the clergy. It took French troops years to hunt down and destroy all the bands of Camisards, between 1702 and 1709.
End of persecution
By the 1760s, Protestants numbered about 700,000 in France, or 3% of the population. Protestantism was no longer a favourite religion of the elite. By then, most Protestants were Cevennes peasants. Protestantism was still illegal, and, although the law was seldom enforced, it could be a threat or a nuisance to Protestants. Calvinists lived primarily in the Midi; about 200,000 Lutherans, along with some Calvinists, lived in the newly acquired Alsace, where the 1648 Treaty of Westphalia effectively protected them. Persecution of Protestants diminished in France after 1724, finally ending with the Edict of Versailles, commonly called the Edict of Tolerance, signed by Louis XVI in 1787. Two years later, with the Declaration of the Rights of Man and Citizen of 1789, Protestants gained equal rights as citizens.
Right of return to France in the 19th and 20th centuries
The government encouraged descendants of exiles to return, offering them French citizenship in a 15 December 1790 law: All persons born in a foreign country and descending in any degree of a French man or woman expatriated for religious reason are declared French nationals (naturels français) and will benefit from rights attached to that quality if they come back to France, establish their domicile there and take the civic oath. Article 4 of the 26 June 1889 Nationality Law stated: "Descendants of families proscribed by the revocation of the Edict of Nantes will continue to benefit from the benefit of 15 December 1790 Law, but on the condition that a nominal decree should be issued for every petitioner. That decree will only produce its effects for the future." Foreign descendants of Huguenots lost the automatic right to French citizenship in 1945 (by force of the Ordonnance n° 45-2441 du 19 octobre 1945, which revoked the 1889 Nationality Law). It states in article 3: "This application does not, however, affect the validity of past acts by the person or rights acquired by third parties on the basis of previous laws." In the 1920s and 1930s, members of the extreme-right Action Française movement expressed strong animus against Huguenots and other Protestants in general, as well as against Jews and Freemasons. They were regarded as groups supporting the French Republic, which Action Française sought to overthrow. In World War II, Huguenots led by André Trocmé in the village of Le Chambon-sur-Lignon in the Cévennes helped save many Jews. They hid them in secret places or helped them get out of Vichy France. André Trocmé preached against discrimination as the Nazis were gaining power in neighbouring Germany and urged his Protestant Huguenot congregation to hide Jewish refugees from the Holocaust. In the early 21st century, there were approximately one million Protestants in France, representing some 2% of its population. Most are concentrated in Alsace in northeast France and in the Cévennes mountain region in the south, where many still regard themselves as Huguenots to this day. Surveys suggest that Protestantism has grown in recent years, though this is due primarily to the expansion of evangelical Protestant churches, which draw adherents particularly from immigrant groups generally considered distinct from the French Huguenot population. A diaspora of French Australians still considers itself Huguenot, even after centuries of exile. Long integrated into Australian society, it is encouraged by the Huguenot Society of Australia to embrace and conserve its cultural heritage, aided by the Society's genealogical research services. In the United States there are several Huguenot worship groups and societies. The Huguenot Society of America has headquarters in New York City and has a broad national membership. One of the most active Huguenot groups is in Charleston, South Carolina. While many American Huguenot groups worship in borrowed churches, the congregation in Charleston has its own church. Although services are conducted largely in English, every year the church holds an Annual French Service, which is conducted entirely in French using an adaptation of the Liturgies of Neufchatel (1737) and Vallangin (1772). Typically the Annual French Service takes place on the first or second Sunday after Easter in commemoration of the signing of the Edict of Nantes.
Most French Huguenots were either unable or unwilling to emigrate to avoid forced conversion to Roman Catholicism. As a result, more than three-quarters of the Protestant population of 2 million converted, 1 million, and 500,000 fled in exodus.[clarification needed]
Early emigration to colonies
The first Huguenots to leave France sought freedom from persecution in Switzerland and the Netherlands. A group of Huguenots was part of the French colonisers who arrived in Brazil in 1555 to found France Antarctique. A couple of ships with around 500 people arrived at Guanabara Bay, present-day Rio de Janeiro, and settled on a small island. A fort, named Fort Coligny, was built to protect them from attack by Portuguese troops and Brazilian natives. It was an attempt to establish a French colony in South America. The fort was destroyed in 1560 by the Portuguese, who captured some of the Huguenots. The Portuguese threatened their Protestant prisoners with death if they did not convert to Roman Catholicism. The Huguenots of Guanabara, as they are now known, produced what is known as the Guanabara Confession of Faith to explain their beliefs. The Portuguese executed them. Individual Huguenots settled at the Cape of Good Hope from as early as 1671; the first documented was François Villion (Viljoen). The first Huguenot to arrive at the Cape of Good Hope was Maria de la Quellerie, wife of commander Jan van Riebeeck (and daughter of a Walloon church minister), who arrived on 6 April 1652 to establish a settlement at what is today Cape Town. The couple left for Batavia ten years later. But it was not until 31 December 1687 that the first organised group of Huguenots set sail from the Netherlands to the Dutch East India Company post at the Cape of Good Hope. The largest portion of the Huguenots to settle in the Cape arrived between 1688 and 1689 in seven ships as part of the organised migration, but quite a few arrived as late as 1700; thereafter, the numbers declined and only small groups arrived at a time. Many of these settlers were given land in an area that was later called Franschhoek (Dutch for "French Corner"), in the present-day Western Cape province of South Africa. A large monument to commemorate the arrival of the Huguenots in South Africa was inaugurated on 7 April 1948 at Franschhoek. The Huguenot Memorial Museum was also erected there and opened in 1957. The official policy of the Dutch East India governors was to integrate the Huguenot and the Dutch communities. When Paul Roux, a pastor who arrived with the main group of Huguenots, died in 1724, the Dutch administration, as a special concession, permitted another French cleric to take his place "for the benefit of the elderly who spoke only French". But with assimilation, within three generations the Huguenots had generally adopted Dutch as their first and home language. Many of the farms in the Western Cape province in South Africa still bear French names. Many families today, mostly Afrikaans-speaking, have surnames indicating their French Huguenot ancestry.
Examples include: Blignaut, Cilliers, Cronje (Cronier), de Klerk (Le Clercq), de Villiers, du Plessis, Du Preez (Des Pres), du Randt (Durand), du Toit, Duvenhage (Du Vinage), Franck, Fouché, Fourie (Fleurit), Gervais, Giliomee (Guilliaume), Gous/Gouws (Gauch), Hugo, Jordaan (Jourdan), Joubert, Kriek, Labuschagne (la Buscagne), le Roux, Lombard, Malan, Malherbe, Marais, Maree, Minnaar (Mesnard), Nel (Nell), Naudé, Nortjé (Nortier), Pienaar (Pinard), Retief (Retif), Rossouw (Rousseau), Taljaard (Taillard), TerBlanche, Theron, Viljoen (Villion) and Visagie (Visage). The wine industry in South Africa owes a significant debt to the Huguenots, some of whom had vineyards in France, or were brandy distillers, and used their skills in their new home. French Huguenots made two attempts to establish a haven in North America. In 1562, naval officer Jean Ribault led an expedition that explored Florida and the present-day Southeastern US, and founded the outpost of Charlesfort on Parris Island, South Carolina. The French Wars of Religion precluded a return voyage, and the outpost was abandoned. In 1564, Ribault's former lieutenant René Goulaine de Laudonnière launched a second voyage to build a colony; he established Fort Caroline in what is now Jacksonville, Florida. War at home again precluded a resupply mission, and the colony struggled. In 1565 the Spanish decided to enforce their claim to La Florida, and sent Pedro Menéndez de Avilés, who established the settlement of St. Augustine near Fort Caroline. Menéndez' forces routed the French and executed most of the Protestant captives. Barred by the government from settling in New France, Huguenots led by Jessé de Forest sailed to North America in 1624 and settled instead in the Dutch colony of New Netherland (later incorporated into New York and New Jersey), as well as Great Britain's colonies, including Nova Scotia. A number of New Amsterdam's families were of Huguenot origin, often having emigrated as refugees to the Netherlands in the previous century. In 1628 the Huguenots established a congregation as L'Église française à la Nouvelle-Amsterdam (the French church in New Amsterdam). This parish continues today as L'Eglise du Saint-Esprit, now a part of the Episcopal Church USA (Anglican) communion, and welcomes Francophone New Yorkers from all over the world. Upon their arrival in New Amsterdam, Huguenots were offered land directly across from Manhattan on Long Island for a permanent settlement and chose the harbour at the end of Newtown Creek, becoming the first Europeans to live in Brooklyn, then known as Boschwick, in the neighbourhood now known as Bushwick. Huguenot immigrants did not disperse or settle in different parts of the country, but rather formed three societies or congregations: one in the city of New York, another 21 miles north of New York in a town which they named New Rochelle, and a third further upstate in New Paltz. The "Huguenot Street Historic District" in New Paltz has been designated a National Historic Landmark site and contains one of the oldest streets in the United States of America. A small group of Huguenots also settled on the south shore of Staten Island along the New York Harbor, for which the current neighbourhood of Huguenot was named. Huguenot refugees also settled in the Delaware River Valley of Eastern Pennsylvania and Hunterdon County, New Jersey in 1725. Frenchtown in New Jersey bears the mark of early settlers.
New Rochelle, located in the county of Westchester on the north shore of Long Island Sound, became the principal Huguenot settlement in New York. It is said that they landed on the coastline peninsula of Davenports Neck, called "Bauffet's Point", after travelling from England, where they had previously taken refuge on account of religious persecution, four years before the revocation of the Edict of Nantes. They purchased from John Pell, Lord of Pelham Manor, a tract of land of six thousand one hundred acres, with the help of Jacob Leisler. It was named New Rochelle after La Rochelle, their former stronghold in France. A small wooden church was first erected in the community, followed by a second church that was built of stone. Before it was erected, the strong men would often walk twenty-three miles on Saturday evening, the distance by road from New Rochelle to New York, to attend the Sunday service. The church was eventually replaced by a third, Trinity-St. Paul's Episcopal Church, which contains heirlooms including the original bell from the French Huguenot Church "Eglise du St. Esperit" on Pine Street in New York City, preserved as a relic in the tower room. The Huguenot cemetery, or the "Huguenot Burial Ground", has since been recognised as a historic cemetery that is the final resting place for a wide range of the Huguenot founders, early settlers and prominent citizens dating back more than three centuries. Some Huguenot immigrants settled in central and eastern Pennsylvania. They assimilated with the predominantly Pennsylvania German settlers of the area. In 1700 several hundred French Huguenots migrated from England to the colony of Virginia, where King William III of England had promised them land grants in Lower Norfolk County. When they arrived, colonial authorities offered them instead land 20 miles above the falls of the James River, at the abandoned Monacan village known as Manakin Town, now in Goochland County. Some settlers landed in present-day Chesterfield County. On 12 May 1705, the Virginia General Assembly passed an act to naturalise the 148 Huguenots still resident at Manakintown. Of the original 390 settlers in the isolated settlement, many had died; others lived outside town on farms in the English style; and others moved to different areas. Gradually they intermarried with their English neighbours. Through the 18th and 19th centuries, descendants of the French migrated west into the Piedmont, and across the Appalachian Mountains into the West of what became Kentucky, Tennessee, Missouri, and other states. In the Manakintown area, the Huguenot Memorial Bridge across the James River and Huguenot Road were named in their honour, as were many local features, including several schools, among them Huguenot High School. In the early years, many Huguenots also settled in the area of present-day Charleston, South Carolina. In 1685, Rev. Elie Prioleau from the town of Pons in France was among the first to settle there. He became pastor of the first Huguenot church in North America in that city. After the Revocation of the Edict of Nantes in 1685, several Huguenots, including Edmund Bohun of Suffolk, England, Pierre Bacot of Touraine, France, Jean Postell of Dieppe, France, Alexander Pepin, Antoine Poitevin of Orsement, France, and Jacques de Bordeaux of Grenoble, immigrated to the Charleston Orange district. They were very successful at marriage and property speculation.
After petitioning the British Crown in 1697 for the right to own land in the Baronies, they prospered as slave owners on the Cooper, Ashepoo, Ashley and Santee River plantations they purchased from the British Landgrave Edmund Bellinger. Some of their descendants moved into the Deep South and Texas, where they developed new plantations. The French Huguenot Church of Charleston, which remains independent, is the oldest continuously active Huguenot congregation in the United States. L'Eglise du Saint-Esprit in New York, founded in 1628, is older, but it left the French Reformed movement in 1804 to become part of the Episcopal Church.

Most of the Huguenot congregations (or individuals) in North America eventually affiliated with other Protestant denominations with more numerous members. The Huguenots adapted quickly and often married outside their immediate French communities, which led to their assimilation. Their descendants in many families continued to use French first names and surnames for their children well into the nineteenth century. Assimilated, the French made numerous contributions to United States economic life, especially as merchants and artisans in the late Colonial and early Federal periods. For example, E.I. du Pont, a former student of Lavoisier, established the Eleutherian gunpowder mills. Howard Hughes, famed investor, pilot, film director, and philanthropist, was also of Huguenot descent and a descendant of Rev. John Gano. Paul Revere was descended from Huguenot refugees, as were Henry Laurens, who signed the Articles of Confederation for South Carolina; Jack Jouett, who made the ride from Cuckoo Tavern to warn Thomas Jefferson and others that Tarleton and his men were on their way to arrest him for crimes against the king; Reverend John Gano, a Revolutionary War chaplain and spiritual advisor to George Washington; Francis Marion; and a number of other leaders of the American Revolution and later statesmen.

The last active Huguenot congregation in North America worships in Charleston, South Carolina, at a church that dates to 1844. The Huguenot Society of America maintains the Manakin Episcopal Church in Virginia as a historic shrine with occasional services. The Society has chapters in numerous states, with the one in Texas being the largest. The Huguenots originally spoke French on their arrival in the American colonies, but after two or three generations, they had switched to English. They did not promote French-language schools or publications and "lost" their historic identity. In upstate New York they merged with the Dutch Reformed community and switched first to Dutch and then in the early 19th century to English. In colonial New York City they switched from French to English or Dutch by 1730.

Some Huguenots fought in the Low Countries alongside the Dutch against Spain during the first years of the Dutch Revolt (1568–1609). The Dutch Republic rapidly became a destination for Huguenot exiles. Early ties were already visible in the "Apologie" of William the Silent, condemning the Spanish Inquisition, which was written by his court minister, the Huguenot Pierre L'Oyseleur, lord of Villiers. Louise de Coligny, daughter of the murdered Huguenot leader Gaspard de Coligny, married William the Silent, leader of the Dutch (Calvinist) revolt against Spanish (Catholic) rule. As both spoke French in daily life, their court church in the Prinsenhof in Delft held services in French. The practice has continued to the present day.
The Prinsenhof is one of the 14 active Walloon churches of the Dutch Reformed Church (now of the Protestant Church in the Netherlands). The ties between Huguenots and the Dutch Republic's military and political leadership, the House of Orange-Nassau, which existed since the early days of the Dutch Revolt, helped support the many early settlements of Huguenots in the Dutch Republic's colonies. They settled at the Cape of Good Hope in South Africa and New Netherland in North America. Stadtholder William III of Orange, who later became King of England, emerged as the strongest opponent of king Louis XIV after the French attacked the Dutch Republic in 1672. William formed the League of Augsburg as a coalition to oppose Louis and the French state. Consequently, many Huguenots considered the wealthy and Calvinist-controlled Dutch Republic, which also happened to lead the opposition to Louis XIV, as the most attractive country for exile after the revocation of the Edict of Nantes. They also found many French-speaking Calvinist churches there (which were called the "Walloon churches"). After the revocation of the Edict of Nantes, the Dutch Republic received the largest group of Huguenot refugees, an estimated total of 75,000 to 100,000 people. Amongst them were 200 pastors. Many came from the region of the Cévennes, for instance, the village of Fraissinet-de-Lozère. This was a huge influx as the entire population of the Dutch Republic amounted to ca. 2 million at that time. Around 1700, it is estimated that nearly 25% of the Amsterdam population was Huguenot. In 1705, Amsterdam and the area of West Frisia were the first areas to provide full citizens rights to Huguenot immigrants, followed by the whole Dutch Republic in 1715. Huguenots intermarried with Dutch from the outset. One of the most prominent Huguenot refugees in the Netherlands was Pierre Bayle. He started teaching in Rotterdam, where he finished writing and publishing his multi-volume masterpiece, Historical and Critical Dictionary. It became one of the 100 foundational texts of the US Library of Congress. Some Huguenot descendants in the Netherlands may be noted by French family names, although they typically use Dutch given names. Due to the Huguenots' early ties with the leadership of the Dutch Revolt and their own participation, some of the Dutch patriciate are of part-Huguenot descent. Some Huguenot families have kept alive various traditions, such as the celebration and feast of their patron Saint Nicolas, similar to the Dutch Sint Nicolaas (Sinterklaas) feast. Britain and Ireland As a major Protestant nation, England patronized and help protect Huguenots, starting with Queen Elizabeth in 1562. There was a small naval Anglo-French War (1627–1629) , in which the England supported the French Huguenots against King Louis XIII of France. London financed the emigration of many to England and its colonies around 1700. Some 40,000-50,000 settled in England, mostly in towns near the sea in the southern districts, with the largest concentration in London where they constituted about 5% of the total population in 1700. Many others went to the American colonies, especially South Carolina. The immigrants included many skilled craftsmen and entrepreneurs who facilitated the economic modernization of their new home, in an era when economic innovations were transferred by people rather than through printed works. The British government ignored the complaints made by local craftsmen about the favouritism shown to foreigners. 
The immigrants assimilated well in terms of using English, joining the Church of England, intermarriage and business success. They founded the silk industry in England. Many became private tutors, schoolmasters, travelling tutors and owners of riding schools, where they were hired by the upper class. Both before and after the 1708 passage of the Foreign Protestants Naturalization Act, an estimated 50,000 Protestant Walloons and French Huguenots fled to England, with many moving on to Ireland and elsewhere. In relative terms, this was one of the largest waves of immigration ever of a single ethnic community to Britain. Andrew Lortie (born André Lortie), a leading Huguenot theologian and writer who led the exiled community in London, became known for articulating their criticism of the Pope and the doctrine of transubstantiation during Mass. Of the refugees who arrived on the Kent coast, many gravitated towards Canterbury, then the county's Calvinist hub. Many Walloon and Huguenot families were granted asylum there. Edward VI granted them the whole of the western crypt of Canterbury Cathedral for worship. In 1825, this privilege was reduced to the south aisle and in 1895 to the former chantry chapel of the Black Prince. Services are still held there in French according to the Reformed tradition every Sunday at 3 pm. Other evidence of the Walloons and Huguenots in Canterbury includes a block of houses in Turnagain Lane, where weavers' windows survive on the top floor, as many Huguenots worked as weavers. The Weavers, a half-timbered house by the river, was the site of a weaving school from the late 16th century to about 1830. (It has been adapted as a restaurant—see illustration above. The house derives its name from a weaving school which was moved there in the last years of the 19th century, reviving an earlier use.) Other refugees practised the variety of occupations necessary to sustain the community as distinct from the indigenous population. Such economic separation was the condition of the refugees' initial acceptance in the city. They also settled elsewhere in Kent, particularly Sandwich, Faversham and Maidstone—towns in which there used to be refugee churches. The French Protestant Church of London was established by Royal Charter in 1550. It is now located at Soho Square. Huguenot refugees flocked to Shoreditch, London. They established a major weaving industry in and around Spitalfields (see Petticoat Lane and the Tenterground) in East London. In Wandsworth, their gardening skills benefited the Battersea market gardens. The flight of Huguenot refugees from Tours, France drew off most of the workers of its great silk mills which they had built. Some of these immigrants moved to Norwich, which had accommodated an earlier settlement of Walloon weavers. The French added to the existing immigrant population, then comprising about a third of the population of the city. Some Huguenots settled in Bedfordshire, one of the main centres of the British lace industry at the time. Although 19th-century sources have asserted that some of these refugees were lacemakers and contributed to the East Midlands lace industry, this is contentious. The only reference to immigrant lacemakers in this period is of twenty-five widows who settled in Dover, and there is no contemporary documentation to support there being Huguenot lacemakers in Bedfordshire. 
The implication that the style of lace known as 'Bucks Point' demonstrates a Huguenot influence, being a "combination of Mechlin patterns on Lille ground", is fallacious: what is now known as Mechlin lace did not develop until the first half of the eighteenth century, and lace with Mechlin patterns and Lille ground did not appear until the end of the 18th century, when it was widely copied throughout Europe.

Many Huguenots from the Lorraine region also eventually settled in the area around Stourbridge in the modern-day West Midlands, where they found the raw materials and fuel to continue their glassmaking tradition. Anglicised names such as Tyzack, Henzey and Tittery are regularly found amongst the early glassmakers, and the region went on to become one of the most important glass regions in the country.

Following the French crown's revocation of the Edict of Nantes, many Huguenots settled in Ireland in the late 17th and early 18th centuries, encouraged by an act of parliament for Protestants' settling in Ireland. Huguenot regiments fought for William of Orange in the Williamite War in Ireland, for which they were rewarded with land grants and titles, many settling in Dublin. Significant Huguenot settlements were in Dublin, Cork, Portarlington, Lisburn, Waterford and Youghal. Smaller settlements, which included Killeshandra in County Cavan, contributed to the expansion of flax cultivation and the growth of the Irish linen industry.

For over 150 years, Huguenots were allowed to hold their services in the Lady Chapel in St. Patrick's Cathedral. A Huguenot cemetery is located in the centre of Dublin, off St. Stephen's Green. Prior to its establishment, Huguenots used the Cabbage Garden near the Cathedral. Another Huguenot cemetery is located off French Church Street in Cork. A number of Huguenots served as mayors in Dublin, Cork, Youghal and Waterford in the 17th and 18th centuries. Numerous signs of Huguenot presence can still be seen, with names still in use and with areas of the main towns and cities named after the people who settled there. Examples include the Huguenot District and French Church Street in Cork City, and D'Olier Street in Dublin, named after a High Sheriff and one of the founders of the Bank of Ireland. A French church in Portarlington dates back to 1696, and was built to serve the significant new Huguenot community in the town. At the time, they constituted the majority of the townspeople.

With the precedent of a historical alliance, the Auld Alliance, between Scotland and France, Huguenots were mostly welcomed to, and found refuge in, the nation from around the year 1700. Although they did not settle in Scotland in such significant numbers as in other regions of Britain and Ireland, Huguenots have been romanticized, and are generally considered to have contributed greatly to Scottish culture. John Arnold Fleming wrote extensively of the French Protestant group's impact on the nation in his 1953 Huguenot Influence in Scotland, while sociologist Abraham Lavender, who has explored how the ethnic group transformed over generations "from Mediterranean Catholics to White Anglo-Saxon Protestants", has analyzed how Huguenot adherence to Calvinist customs helped facilitate compatibility with the Scottish people.

A number of French Huguenots settled in Wales, in the upper Rhymney valley of the current Caerphilly County Borough.
The community they created there is still known as Fleur de Lys (the symbol of France), an unusual French village name in the heart of the valleys of Wales. Nearby villages are Hengoed, and Ystrad Mynach. Apart from the French village name and that of the local rugby team, Fleur De Lys RFC, little remains of the French heritage. Around 1685, Huguenot refugees found a safe haven in the Lutheran and Reformed states in Germany and Scandinavia. Nearly 50,000 Huguenots established themselves in Germany, 20,000 of whom were welcomed in Brandenburg-Prussia, where Frederick William, Elector of Brandenburg and Duke of Prussia (r. 1649–1688), granted them special privileges (Edict of Potsdam of 1685) and churches in which to worship (such as the Church of St. Peter and St. Paul, Angermünde and the French Cathedral, Berlin). The Huguenots furnished two new regiments of his army: the Altpreußische Infantry Regiments No. 13 (Regiment on foot Varenne) and 15 (Regiment on foot Wylich). Another 4,000 Huguenots settled in the German territories of Baden, Franconia (Principality of Bayreuth, Principality of Ansbach), Landgraviate of Hesse-Kassel, Duchy of Württemberg, in the Wetterau Association of Imperial Counts, in the Palatinate and Palatinate-Zweibrücken, in the Rhine-Main-Area (Frankfurt), in modern-day Saarland; and 1,500 found refuge in Hamburg, Bremen and Lower Saxony. Three hundred refugees were granted asylum at the court of George William, Duke of Brunswick-Lüneburg in Celle. In Berlin the Huguenots created two new neighbourhoods: Dorotheenstadt and Friedrichstadt. By 1700 one fifth of the city's population was French-speaking. The Berlin Huguenots preserved the French language in their church services for nearly a century. They ultimately decided to switch to German in protest against the occupation of Prussia by Napoleon in 1806–07. Many of their descendants rose to positions of prominence. Several congregations were founded throughout Germany and Scandinavia, such as those of Fredericia (Denmark), Berlin, Stockholm, Hamburg, Frankfurt, Helsinki, and Emden. Prince Louis de Condé, along with his sons Daniel and Osias, arranged with Count Ludwig von Nassau-Saarbrücken to establish a Huguenot community in present-day Saarland in 1604. The Count supported mercantilism and welcomed technically skilled immigrants into his lands, regardless of their religion. The Condés established a thriving glass-making works, which provided wealth to the principality for many years. Other founding families created enterprises based on textiles and such traditional Huguenot occupations in France. The community and its congregation remain active to this day, with descendants of many of the founding families still living in the region. Some members of this community emigrated to the United States in the 1890s. In Bad Karlshafen, Hessen, Germany is the Huguenot Museum and Huguenot archive. The collection includes family histories, a library, and a picture archive. The exodus of Huguenots from France created a brain drain, as many of them had occupied important places in society. The kingdom did not fully recover for years. The French crown's refusal to allow non-Catholics to settle in New France may help to explain that colony's low population compared to that of the neighbouring British colonies, which opened settlement to religious dissenters. 
By the start of the French and Indian War, the North American front of the Seven Years' War, a sizeable population of Huguenot descent lived in the British colonies, and many participated in the British defeat of New France in 1759–1760.

Frederick William, Elector of Brandenburg, invited Huguenots to settle in his realms, and a number of their descendants rose to positions of prominence in Prussia. Several prominent German military, cultural and political figures were of Huguenot descent, including the poet Theodor Fontane; General Hermann von François, the hero of the First World War's Battle of Tannenberg; Luftwaffe General and fighter ace Adolf Galland; the Luftwaffe flying ace Hans-Joachim Marseille; and the famed U-boat captains Lothar von Arnauld de la Perière and Wilhelm Souchon. The last prime minister of East Germany, Lothar de Maizière, is also a descendant of a Huguenot family, as is the German Federal Minister of the Interior, Thomas de Maizière.

The persecution and the flight of the Huguenots greatly damaged the reputation of Louis XIV abroad, particularly in England. Both kingdoms, which had enjoyed peaceful relations until 1685, became bitter enemies and fought each other in a series of wars, called the "Second Hundred Years' War" by some historians, from 1689 onward.

In October 1985, to commemorate the tricentenary of the Revocation of the Edict of Nantes, President François Mitterrand of France announced a formal apology to the descendants of Huguenots around the world. At the same time, the government released a special postage stamp in their honour reading "France is the home of the Huguenots" (Accueil des Huguenots).

Huguenot legacy persists both in France and abroad. Several French Protestant churches are descended from or tied to the Huguenots, including:
- Reformed Church of France (l'Église Réformée de France), founded in 1559, the historical and principal Reformed church in France from the Protestant Reformation until its 2013 merger into the United Protestant Church of France
- Evangelical Reformed Church of France (Union nationale des églises protestantes réformées évangéliques de France), founded in 1938
- some French members of the largely German Protestant Reformed Church of Alsace and Lorraine

Other traces of the Huguenot legacy include:
- Bayonne, New Jersey
- Four-term Republican United States Representative Howard Homan Buffett was of Huguenot descent.
- Charleston, South Carolina, is home to the only active Huguenot congregation in the United States.
- In 1924, the US issued a commemorative half dollar, known as the "Huguenot-Walloon half dollar", to celebrate the 300th anniversary of the Huguenots' settlement in what is now the United States.
- Frenchtown, New Jersey, part of the larger Delaware River Valley, was a settling area in the early 1700s.
- The Huguenot neighbourhood in New York City's borough of Staten Island, straddling Huguenot Avenue
- Huguenot Memorial Park in Jacksonville, Florida.
- The early leaders John Jay and Paul Revere were of Huguenot descent.
- The Manakintown Church serves as a National Huguenot Memorial.
- Francis Marion, an American Revolutionary War guerrilla fighter in South Carolina, was of predominantly Huguenot ancestry.
- New Paltz, New York
- New Rochelle, New York, named for the city of La Rochelle, a known former Huguenot stronghold in France. The Huguenot and Historical Association of New Rochelle was organised in 1885 for the purpose of perpetuating the history of its original Huguenot settlers. The mascot of New Rochelle High School is the Huguenot, and one of the main streets in the city is called Huguenot Street.
- John Pintard (1759–1854), a descendant of Huguenots and a prosperous New York City merchant who was involved in various New York City organizations. Pintard was credited with establishing the modern conception of Santa Claus.
- In Richmond, Virginia and the neighbouring Chesterfield County, there is a Huguenot Road. A Huguenot High School in Richmond and Huguenot Park in Chesterfield County, along with several other uses of the name throughout the region, commemorate the early refugee settlers.
- The Walloon Settlers Memorial (located in Battery Park) is a monument given to the City of New York by the Belgian Province of Hainaut in honour of the inspiration of Jessé de Forest in founding New York City. Baron de Cartier de Marchienne, representing the government and Albert I, King of Belgium, presented the monument to Mayor John F. Hylan for the City of New York on 18 May 1924.
- There is a Huguenot society in London, as well as a French Protestant Church of London, founded in 1550 in Soho Square, which is still active and has also been a registered charity since 1926.
- Huguenots of Spitalfields is a registered charity promoting public understanding of the Huguenot heritage and culture in Spitalfields, the City of London and beyond. They arrange tours, talks, events and schools programmes to raise the Huguenot profile in Spitalfields and raise funds for a permanent memorial to the Huguenots.
- Huguenot Place in Wandsworth is named after the Huguenot Burial Site or Mount Nod Cemetery, which was used by the Huguenots living in the area. The site was in use from 1687 to 1854 and graves can still be observed today.
- Canterbury Cathedral retains a Huguenot Chapel in the 'Black Prince's Chantry', part of the Crypt, which is accessible from the exterior of the cathedral. The chapel was granted to Huguenot refugees on the orders of Queen Elizabeth I in 1575. To this day, the chapel still holds services in French every Sunday at 3pm.
- Strangers' Hall in Norwich got its name from the Protestant refugees from the Spanish Netherlands who settled in the city from the 16th century onwards and were referred to by the locals as the 'Strangers'. The Strangers brought with them their pet canaries, and over the centuries the birds became synonymous with the city. In the early 20th century, Norwich City F.C. adopted the canary as their emblem and nickname.
- Huguenot refugees in Prussia are thought to have contributed significantly to the development of the textile industry in that country. One notable example was Marthe de Roucoulle, governess of the Prussian kings Frederick William I and Frederick the Great.
- Most South African Huguenots settled in the Cape Colony, where they became assimilated into the Afrikaner and Afrikaans population. Many modern Afrikaners have French surnames, which are given Afrikaans pronunciation and orthography. The early immigrants settled in Franschhoek ("French Corner") near Cape Town. The Huguenots contributed greatly to the wine industry in South Africa.
- The majority of Australians with French ancestry are descended from Huguenots. Some of the earliest to arrive in Australia held prominent positions in English society, notably Jane Franklin and Charles La Trobe.
- Others who came later were from poorer families, migrating from England in the 19th and early 20th centuries to escape the poverty of London's East End Huguenot enclaves of Spitalfields and Bethnal Green. Their impoverishment had been brought on by the Industrial Revolution, which caused the collapse of the Huguenot-dominated silk-weaving industry. Many French Australian descendants of Huguenots still consider themselves very much Huguenots or French, even in the twenty-first century.
In geometry, two lines or planes (or a line and a plane) are considered perpendicular (or orthogonal) to each other if they form congruent adjacent angles. The term may be used as a noun or adjective. Thus, referring to Figure 1, the line AB is the perpendicular to CD through the point B.

Note that, by definition, a line is infinitely long, and strictly speaking AB and CD in this example represent line segments of two infinitely long lines. Hence the line segment AB does not have to intersect line segment CD for the lines to be considered perpendicular, because if the line segments are extended out to infinity, they would still form congruent adjacent angles.

If a line is perpendicular to another, as in Figure 1, all of the angles created by their intersection are called right angles (right angles measure ½π radians, or 90°). Conversely, any lines that meet to form right angles are perpendicular. In a coordinate plane, perpendicular lines have opposite reciprocal slopes. A horizontal line has slope equal to zero, while the slope of a vertical line is described as undefined or sometimes ±infinity. Two lines that are perpendicular are denoted AB ⊥ CD.

In terms of slopes

In a Cartesian coordinate system, two straight lines L and M may be described by equations
L : y = ax + b,
M : y = cx + d,
as long as neither is vertical. Then a and c are the slopes of the two lines. The lines L and M are perpendicular if and only if the product of their slopes is -1, that is, if ac = -1. The perpendiculars to vertical lines are always horizontal lines, and the perpendiculars to horizontal lines are always vertical lines. All horizontal lines are perpendicular to all vertical lines; that is, for any vertical line P : x = J and horizontal line Q : y = K, where J and K are constants, P ⊥ Q.

Construction of the perpendicular

To construct the perpendicular to the line AB through the point P using compass and straightedge, proceed as follows (see Figure 2).
- Step 1 (red): construct a circle with center at P to create points A' and B' on the line AB, which are equidistant from P.
- Step 2 (green): construct circles centered at A' and B', both passing through P. Let Q be the other point of intersection of these two circles.
- Step 3 (blue): connect P and Q to construct the desired perpendicular PQ.
To prove that PQ is perpendicular to AB, let O be the point where PQ intersects AB. Use the SSS congruence theorem for triangles QPA' and QPB' to conclude that angles OPA' and OPB' are equal. Then use the SAS congruence theorem for triangles OPA' and OPB' to conclude that angles POA' and POB' are equal; since these two angles are supplementary, each is a right angle, so PQ is perpendicular to AB.

In relationship to parallel lines

As shown in Figure 3, if two lines (a and b) are both perpendicular to a third line (c), all of the angles formed on the third line are right angles. Therefore, in Euclidean geometry, any two lines that are both perpendicular to a third line are parallel to each other, because of the parallel postulate. Conversely, if one line is perpendicular to a second line, it is also perpendicular to any line parallel to that second line. In Figure 3, all of the orange-shaded angles are congruent to each other and all of the green-shaded angles are congruent to each other, because vertical angles are congruent and alternate interior angles formed by a transversal cutting parallel lines are congruent.
Therefore, if lines a and b are parallel, any of the following conclusions leads to all of the others:
- One of the angles in the diagram is a right angle.
- One of the orange-shaded angles is congruent to one of the green-shaded angles.
- Line 'c' is perpendicular to line 'a'.
- Line 'c' is perpendicular to line 'b'.

Finding the perpendiculars of a function

In algebra, for any linear equation y = mx + b, the perpendiculars will all have a slope of (-1/m), the opposite reciprocal of the original slope. It is helpful to memorize the slogan "to find the slope of the perpendicular line, flip the fraction and change the sign." Recall that any whole number a is itself over one, and can be written as (a/1).

To find the perpendicular of a given line which also passes through a particular point (x, y), solve the equation y = (-1/m)x + b, substituting in the known values of m, x, and y to solve for b.

For a curve rather than a straight line, first find the derivative of the function; its value is the slope (m) of the curve at the particular point (x, y). Then, as above, solve the equation y = (-1/m)x + b, substituting in the known values of m, x, and y to solve for b.
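The slope rules above can be checked numerically. The following short Python sketch is illustrative only (the function names are mine, not from any standard library): it tests the ac = -1 criterion and computes the perpendicular through a given point by "flipping the fraction and changing the sign".

```python
def are_perpendicular(a: float, c: float, tol: float = 1e-9) -> bool:
    """Return True if non-vertical lines with slopes a and c are perpendicular (a*c = -1)."""
    return abs(a * c + 1.0) < tol


def perpendicular_through(m: float, x0: float, y0: float) -> tuple[float, float]:
    """Slope and intercept of the perpendicular to a line of slope m through (x0, y0).

    Assumes m != 0; the vertical/horizontal special cases mentioned in the text
    must be handled separately.
    """
    perp_slope = -1.0 / m             # "flip the fraction and change the sign"
    intercept = y0 - perp_slope * x0  # solve y0 = perp_slope * x0 + b for b
    return perp_slope, intercept


if __name__ == "__main__":
    # y = 2x + 1 and y = -0.5x + 3 are perpendicular because 2 * (-1/2) = -1.
    print(are_perpendicular(2.0, -0.5))           # True
    # Perpendicular to y = 2x + 1 through the point (4, 9).
    print(perpendicular_through(2.0, 4.0, 9.0))   # (-0.5, 11.0)
```

Running it prints True and (-0.5, 11.0), matching the hand calculation: the perpendicular to y = 2x + 1 through (4, 9) is y = -0.5x + 11.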
Deep Learning is a method of information processing that analyzes large amounts of data using neural networks. This approach is largely modeled on the biological processes in the human brain, with the difference that processing such data sets would be nearly impossible for our brain.

How does Deep Learning work?

Deep Learning includes algorithms that are programmed to learn without human intervention. The technical basis of these programs is neural networks. These consist of many layers of neurons, just like our brain. All the information that is to be processed arrives in the input layer. In our biological example, this would be the sensory impressions from eyes, fingers, etc. At the end of the network, the output layer produces one or more responses, depending on the inputs. For example, if we see a lion in the immediate vicinity, our reaction is to quickly get to safety. In order for this appropriate response to occur, we must process the inputs correctly. This happens in the layers between the input and output layers, the so-called hidden layers. Based on past experience, stronger or weaker connections form between neurons from different layers. The more intermediate layers a network has, the "deeper" it is. This is where the term "deep" learning comes from.

This example can be transferred almost one-to-one to the technical algorithm. We define a neural network with a certain number of layers and neurons. In most cases, more neurons can be used to learn more complex facts. So the more complex the use case, the larger the neural network. With the help of training data, the model then learns to link the correct neurons with each other, so that the correct relationship between model input and output is created. From the outside, we only specify what the correct prediction should look like. The model learns to make the right connections within the network on its own.
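Before turning to applications, here is a minimal numerical sketch of the layered idea described above. It is purely illustrative and not part of the original article: a tiny network with one hidden layer, in which the "stronger or weaker connections" between layers are simply numeric weights. Real systems learn these weights from training data; here they are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    """A common activation function: negative signals are suppressed."""
    return np.maximum(0.0, x)


# Network shape: 4 input neurons -> 8 hidden neurons -> 2 output neurons.
w_hidden = rng.normal(size=(4, 8))   # connections from input layer to hidden layer
w_output = rng.normal(size=(8, 2))   # connections from hidden layer to output layer


def forward(inputs: np.ndarray) -> np.ndarray:
    """One forward pass: the inputs flow through the hidden layer to the output layer."""
    hidden = relu(inputs @ w_hidden)
    return hidden @ w_output


# Example: a single observation with 4 input values produces 2 output values.
print(forward(np.array([0.2, -1.0, 0.5, 3.0])))
```

In a real deep learning library such as TensorFlow or PyTorch, a training step would repeatedly adjust w_hidden and w_output so that the network's outputs move closer to the known correct predictions.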
Deep Learning algorithms, on the other hand, can automatically convert unstructured data into numerical values, incorporate these into their predictions, and recognize structures without any human interaction taking place. In addition, deep learning algorithms are able to process significantly larger amounts of data and thus tackle more complex tasks than conventional machine learning models. However, this comes at the expense of a significantly longer training time for deep learning models. At the same time, these models are also very difficult to interpret; that is, we cannot easily trace how a neural network arrived at a given prediction.

This is what you should take with you
- Deep Learning is a subarea of Machine Learning and describes a method of information processing.
- Neural networks in particular are used to exploit correlations in large data sets, which can then be applied in future situations.
- Deep Learning is already used today, for example, in product recommendations in e-commerce and in fraud detection in the banking sector.
- Deep Learning differs from Machine Learning primarily in that it can also handle unstructured data such as images, videos, or audio recordings.

Other articles on the topic of Deep Learning
- Explanation of Recurrent Neural Networks and LSTM models, with an example.
- IBM has an article describing other Deep Learning applications.
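As a rough sketch of the layered structure described above, the snippet below passes an input through two hidden layers to an output layer. The layer sizes and random weights are arbitrary illustrative choices, and the training step (learning the right connections from data) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)                       # a common activation function

layer_sizes = [4, 8, 8, 2]                        # input layer, two hidden layers, output layer
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Information flows from the input layer through the hidden layers to the output layer.
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    return x @ weights[-1] + biases[-1]           # raw output scores

print(forward(np.array([0.2, -1.0, 0.5, 0.3])))
```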
Addition & Subtraction The National Curriculum aims to make sure that children are fluent in the maths necessary for everyday life. A good understanding of addition and subtraction, and how they relate to each other, is essential for solving all sorts of calculations and problems. At the end of primary school, children will need to apply their addition and subtraction skills in arithmetic and reasoning tests. This may seem a bit daunting, but don’t worry – your child will build up their skills gradually, from simple number bonds to written and mental methods that involve increasingly large numbers. There are lots of things you can do at home to support your child’s developing addition and subtraction skills. What your child will learn Follow the links below for an overview of each year, with lots of information, support, and practice activities: Addition & subtraction in Year 1 (age 5–6) In Year 1, your child will be expected to be able to read, write, and understand mathematical ideas using addition (+), subtraction (–) and equals (=) signs. This includes: - making and using number bonds to 10 and then to 20 - adding and subtracting one-digit and two-digit numbers to 20, including 0 - solving simple problems using objects, drawings, diagrams and symbols, including missing number problems such as 7 = ? – 9. Addition & subtraction in Year 2 (age 6–7) In Year 2, your child will be expected to be able to solve addition and subtraction problems using numbers with one and two digits. This includes: - knowing and using addition and subtraction facts up to 20 and working out related addition and subtraction facts up to 100 - adding and subtracting using objects, pictures, and drawings, and also solving problems mentally - understanding that addition and subtraction have an inverse relationship (i.e. they undo each other), and using this to check calculations. Addition & subtraction in Year 3 (age 7–8) In Year 3, your child will be expected to use a range of strategies to solve problems mentally, and to learn formal written methods for column addition and column subtraction. This includes: - adding and subtracting numbers with up to three digits - estimating answers to problems before working them out accurately and checking using the inverse operation (i.e. using addition to check subtraction and vice versa) - explaining how they have solved a problem and why they chose a particular method. Addition & subtraction in Year 4 (age 8–9) In Year 4, your child will be expected to be able to solve addition and subtraction problems involving numbers up to four digits. This includes: - choosing from a variety of methods, including mental calculations, using objects, diagrams and drawings such as number lines, the area/grid method, and written column addition and subtraction - estimating answers before calculating accurately and checking answers by understanding that addition and subtraction are inverse operations - solving two-step word problems that require them to solve two different calculations to get the answer. Addition & subtraction in Year 5 (age 9–10) In Year 5, your child will be expected to be able to solve addition and subtraction problems involving numbers with more than four digits. 
This includes: - practising a range of mental calculation strategies and a variety of formal calculation methods, like using objects, diagrams and drawings such as number lines, the area/grid method, and written column addition and subtraction - using rounding to estimate answers and checking that their answers are sensible and accurate - solving multi-step word problems that involve multiple calculations before coming to the final answer. Calculation in Year 6 (age 10–11) In Year 6, your child will be expected to be able to solve problems, including multi-step word problems, involving adding, taking away, multiplying, and dividing with large numbers. This includes: - choosing efficient methods to solve problems and checking their answer using a different method - exploring the order of operations using brackets - rounding answers to a specified degree of accuracy (for example, to the nearest 10, 20, 50, and so on). How to help at home You don’t need to be an expert to support your child with maths! Here are three simple but effective ways to help your child develop their addition and subtraction skills. 1. Use the language of addition and subtraction Encourage your child to use mathematical language when talking about calculations, such as add, altogether, more, plus, total, sum for addition and take away, subtract, minus, less, fewer, difference for subtraction. For example, 7 – 3 = 4 can be read as ‘the difference between 7 and 3 is 4’. Practise adding numbers with these counters. 2. Go shopping Shopping provides great opportunities to practise skills. When buying items, ask your child to round prices to the nearest pound before adding mentally. Challenge your child to check shopping totals using subtraction. Encourage them to estimate, for example: I have £15. We need chicken for £4.50, vegetables costing £4.75, and the bus is £3.50. Will I have enough? 3. Explore different methods When adding or subtracting, ask your child to explain each stage of their sum and why they chose that method. They might partition numbers into hundreds, tens, and ones, draw pictures to represent how they added or subtracted, use number lines, use objects, or try written column methods. Encourage them to check with a different strategy. Early maths skills: addition Early maths skills: subtraction Use these quick links or explore our education glossary for definitions and examples of mathematical terms. - The area/grid method method is a way of visualising multiplication, as part of formal calculations. It involves breaking numbers up and multiplying their parts separately in a grid, before adding them back together. - Inverse operations are operations that can ‘undo’ each other. Addition is the inverse of subtraction. - Number bonds are pairs that make up a total. For example, the number bonds for 4 are 0 + 4, 1 + 3, and 2 + 2. - Number lines are images used to help children grasp number relationships. You can use them to count on or back to solve addition or subtraction problems. - Partitioning means to split a number into smaller chunks. It is often used to break down larger numbers to make calculations easier.
Types of graphs and curves in physics

Data is a collection of numbers or values, and it must be organized before it becomes useful. Graphs and diagrams are terrific tools for understanding physics, and they are especially helpful for studying motion, a phenomenon that we are used to perceiving visually. A graph needs an x-axis label, which typically corresponds to the independent variable, and a y-axis label, which typically corresponds to the dependent variable. Two quantities are independent if one has no effect on the other; this is a relationship called independence, and on a graph it appears as a horizontal, straight line of the general form y = k, because the y variable stays the same no matter what value the x variable takes. Otherwise, relationships between two variables can be classified into two major categories: direct and inverse. A direct relationship is a positive relationship in which both variables increase or decrease together; an example would be the relationship between the number of people attending a party and the food consumed at the party.

Types of charts and graphs
There are several different types of charts and graphs, and their uses vary widely; the four most common are probably line graphs, bar graphs and histograms, pie charts, and Cartesian graphs.
- Bar charts help you to see how two or more separate entities (such as grades, years, different metals, animals, fish, or countries) compare to each other, and they measure the frequency of categorical data. For example, chocolate lovers might develop a bar graph based on the types of chocolate a class likes.
- Pie charts are for displaying categoric variables: they help you to see how the 'whole' is made up of various entities, that is, the proportion of each contribution.
- A histogram often looks similar to a bar graph, but they are different because of the level of measurement of the data: a histogram is used with quantitative data. Ranges of values, called classes, are listed at the bottom, and the classes with greater frequencies have taller bars. A frequency distribution gives the number of occurrences of the distinct values in a data set, in a list, table or graphical representation; grouped and ungrouped are the two types of frequency distribution.
- Line and Cartesian graphs plot one quantity against another and are the graphs most often used for experimental data in physics.
The word 'graph' is also used in other fields. In economics, simple supply and demand curves relate supply and demand to price, and the two graphs can be used together to determine the economic equilibrium (essentially, to solve an equation). In mathematics there are many different types of graphs in the graph-theory sense, such as connected and disconnected graphs, bipartite graphs, weighted graphs, directed and undirected graphs, and simple graphs.

Motion graphs
Graphical analysis of motion can be used to describe both specific and general characteristics of kinematics. The three most common types of motion graphs are displacement-time (d-t), velocity-time (v-t) and acceleration-time (a-t) graphs, and a common mistake is to assume that all three types of graphs work in exactly the same way. They are related, but you do not read them the same way:
- The slope at any point on a position-versus-time graph is the instantaneous velocity at that point. An object moving at constant velocity produces a straight line whose position increases linearly with time; a position-time graph that is a curve rather than a straight line means the velocity is changing, i.e. the object is accelerating, and the slope of the curve becomes steeper as time progresses when the velocity is increasing.
- The slope of the velocity-time graph at any instant gives the acceleration at that time, and the area under the graph is the change in the displacement of the object. If the acceleration of a particle is a function of time and the initial velocity is zero, the velocity-time graph will be a curve.
- On an acceleration-time graph, an object undergoing constant acceleration traces a horizontal line (zero slope), and the area under the curve equals the change in velocity. When two curves coincide, the two objects have the same acceleration at that time.
These slope and area relationships are the mathematical transformations between the graphs of motion. For projectile motion, once the initial velocity vector has been broken into components, the x and y dimensions can be treated independently, and position, velocity and acceleration graphs can be drawn for each direction.
Practice: draw the velocity-time graph for a car moving along a level road from the data below, and use it to find the average speed of the car.
time (s): 0, 5, 10, 15, 20, 25, 30
velocity (m/s): 0, 10, 20, 30, 20, 10, 0

Other graphs in physics
Graphs can also be used for other topics in physics. The current-voltage (I-V) graph for a wire at a constant temperature is a straight line, the signature of a linear circuit element; non-linear circuit elements give curved I-V graphs. A wave y(x, t) cannot be captured by a single graph: to convey all of the information it contains you need both a displacement-versus-position graph at a known time and a displacement-versus-time graph at a known position (the latter shows that a particular piece of the medium is undergoing simple harmonic motion). Alternatively, almost all of the same information can be conveyed using two displacement-versus-position graphs for two different known times, or two displacement-versus-time graphs for two different known positions. Polar graphs give yet another family of curves and patterns.

Working with experimental data
Most graphs of experimental data in physics are linear and are not drawn as "connect the dots". To analyse such data: identify a trend or a relationship between the independent and dependent variables; remove any outliers from consideration; draw a line of best fit or a curve of best fit that best describes the identified trend; and calculate the gradient of the graph where it is needed. If the graph is linear (a line with constant slope), the slope is the same at every point and is easy to find.
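The slope-and-area relationships above can be checked numerically. The following sketch uses made-up numbers for an object starting from rest with a constant acceleration of 2 m/s^2.

```python
import numpy as np

t = np.linspace(0, 10, 1001)        # time (s)
x = 0.5 * 2 * t**2                  # position (m): a curve on the position-time graph

v = np.gradient(x, t)               # slope of the position-time graph = velocity
a = np.gradient(v, t)               # slope of the velocity-time graph = acceleration

# Area under the velocity-time graph (trapezoid rule) = change in displacement.
displacement = np.sum((v[1:] + v[:-1]) / 2 * np.diff(t))

print(v[-1])                        # ~20 m/s at t = 10 s
print(a[500])                       # ~2 m/s^2 throughout
print(displacement)                 # ~100 m, matching the change in position
```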
The x86 instructions use five different operand forms: registers, constants, and three memory addressing schemes. Each form is called an addressing mode. The x86 processors support the register addressing mode, the immediate addressing mode, the indirect addressing mode, the indexed addressing mode and the direct addressing mode. One approach to processor design places an emphasis on flexibility of addressing modes; some engineers and programmers believe that the real power of a processor lies in its addressing modes. Most addressing modes can be created by combining two or more basic addressing modes, although building the combination in software will usually take more time than if the combination addressing mode existed in hardware (and there is a trade-off that slows down all operations to allow for more complexity). In a purely orthogonal instruction set, every addressing mode would be available for every instruction. In practice, this isn't the case. Virtual memory, memory pages, and other hardware mapping methods may be layered on top of the addressing modes.

7.1 ADDRESSING MODES WITH REGISTER OPERANDS
Register operands are the easiest to understand. Consider the following forms of the mov instruction:
mov ax, ax
mov ax, bx
mov ax, cx
mov ax, dx
In these instructions, the first operand is the destination register and the second operand is the source register. The first instruction does nothing: it copies the value from the ax register back into the ax register. The remaining three instructions copy the values of bx, cx, and dx into ax. Note that the original values of bx, cx and dx remain the same. The first operand (the destination register) is not limited to ax; you can move values into any of these registers. This mode of addressing is the register addressing mode.

7.2 ADDRESSING MODES WITH CONSTANTS
Constants are also easy to deal with. Consider the following instructions:
mov ax, 25
mov bx, 195
mov cx, 2056
mov dx, 1000
These instructions are all straightforward; they load their respective registers with the specified hexadecimal constant. This mode of addressing is called the immediate addressing mode.

7.3 ADDRESSING MODES WITH MEMORY STRUCTURES
There are three addressing modes which deal with accessing data in memory: the direct addressing mode, the indirect addressing mode and the indexed addressing mode. These addressing modes take the following forms:
mov ax, [1000]
mov ax, [bx]
mov ax, [1000+bx]
The first instruction uses the direct addressing mode to load ax with the 16-bit value stored in memory starting at location 1000 hex. The second instruction loads ax from the memory location specified by the contents of the bx register; this is the indirect addressing mode. Rather than using the value in bx, the instruction accesses the memory location whose address appears in bx. There are many cases where the use of indirection is faster, shorter and better. The last memory addressing mode is the indexed addressing mode, for example mov ax, [1000+bx]. The instruction adds the contents of bx to 1000 to produce the address of the memory value to fetch. This mode is useful for accessing elements of arrays, records and other data structures.
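As an informal illustration (not part of the original text), the Python sketch below models how each of the five mov forms above locates its source value. The register contents, memory contents and addresses are invented for the example.

```python
# Toy register file and memory; all values and addresses are made up.
registers = {"ax": 0, "bx": 0x0006, "cx": 0, "dx": 0}
memory = {0x1000: 0x1234, 0x1006: 0x5678}

def mov_register(dst, src):              # mov ax, bx        (register addressing)
    registers[dst] = registers[src]

def mov_immediate(dst, constant):        # mov ax, 25        (immediate addressing)
    registers[dst] = constant

def mov_direct(dst, address):            # mov ax, [1000]    (direct addressing)
    registers[dst] = memory[address]

def mov_indirect(dst, pointer_reg):      # mov ax, [bx]      (indirect addressing)
    registers[dst] = memory[registers[pointer_reg]]

def mov_indexed(dst, base, index_reg):   # mov ax, [1000+bx] (indexed addressing)
    registers[dst] = memory[base + registers[index_reg]]

mov_indexed("ax", 0x1000, "bx")
print(hex(registers["ax"]))              # 0x5678: the word at address 1000h + bx
```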
7.4 ADDRESSING MODES WITH STACK MEMORY
The stack plays an important role in all microprocessors. It holds data temporarily and stores return addresses for procedures. The stack is a LIFO (last in, first out) memory, which describes the way data are stored on and removed from the stack. Data are placed onto the stack with the PUSH instruction and removed with the POP instruction. The stack is maintained by two registers: the stack pointer (SP or ESP) and the stack segment register. The stack pointer always points to an area of memory located within the stack segment; in real mode, the stack memory address is formed by adding the stack pointer to the stack segment register multiplied by 10h.
- PUSH ax: copies ax onto the stack
- POP cx: removes a word from the stack and places it in cx
- PUSH dx: copies dx onto the stack
- PUSH 123: copies the constant 123 onto the stack
- PUSHA: copies the word contents of all the general registers onto the stack (AX, BX, CX, DX, SI, DI, SP, BP)
- POPA: removes the data from the stack and places it back into the 16-bit registers
Example:
.code
start:
mov ax, 23
mov bx, 44
mov cx, 13
push ax ; copies 23 onto the stack
push bx ; copies 44 onto the stack
push cx ; copies 13 onto the stack
pop cx ; removes 13 from the stack and places it back into cx
pop bx ; removes 44 from the stack and places it back into bx
pop ax ; removes 23 from the stack and places it back into ax

CHAPTER EIGHT: INSTRUCTION SETS
Like any other programming language, there are going to be several instructions you use all the time, some you use occasionally, and some you will rarely, if ever, use. These are called the 80x86 instruction sets.

8.1 THE 80X86 INSTRUCTION SETS
80x86 instructions can be roughly divided into eight different classes:
1. Data movement instructions: mov, lea, les, push, pop, pushf, popf
2. Conversions: cbw, cwd, xlat
3. Arithmetic instructions: add, inc, sub, dec, cmp, neg, mul, imul, div, idiv
4. Logical and shift instructions: and, or, xor, not, shl, shr, rcl, rcr
5. I/O instructions: in, out
6. String instructions: movs, stos, lods
7. Program flow instructions: jmp, call, ret, conditional jumps
8. Miscellaneous instructions: clc, stc, cmc
The most commonly used of all these classes are the data movement instructions and the arithmetic instructions. The mov instruction is the most commonly used of all the data movement instructions. The mov instruction is actually two instructions merged into the same instruction. The two forms of the mov instruction are:
mov reg, reg/memory/constant
mov memory, reg
where reg (i.e. register) is any of ax, bx, cx, or dx; constant is a numeric constant (using hexadecimal notation); and memory is an operand specifying a memory location. The next section describes the possible forms the memory operand can take. The "reg/memory/constant" operand tells you that this particular operand may be a register, a memory location, or a constant.
The arithmetic and logical instructions take the following forms:
add reg, reg/memory/constant
sub reg, reg/memory/constant
cmp reg, reg/memory/constant
and reg, reg/memory/constant
or reg, reg/memory/constant
not reg/memory
The following sections describe some of the instructions in these groups and how they operate. The 80x86 instructions have simple semantics. The add instruction adds the value of the second operand to the first (register) operand, leaving the sum in the first operand.
The sub instruction subtracts the value of the second operand from the first, leaving the difference in the first operand. The cmp instruction also subtracts the value of the second operand from the first, but it only sets the flags according to the result; it does not store the difference. The and and or instructions compute the corresponding bitwise logical operation on the two operands and store the result in the first operand. The not instruction inverts the bits in its single memory or register operand.

8.2 CONTROL TRANSFER INSTRUCTIONS
The control transfer instructions interrupt the sequential execution of instructions in memory and transfer control to some other point in memory, either unconditionally or conditionally after testing the result of the previous cmp instruction. These instructions include the following:
ja dest -- jump if above
jae dest -- jump if above or equal
jb dest -- jump if below
jbe dest -- jump if below or equal
je dest -- jump if equal
jne dest -- jump if not equal
jmp dest -- unconditional jump
iret -- return from an interrupt
The first six instructions in this class let you check the result of the previous cmp instruction for greater than, greater than or equal, less than, less than or equal, equality, or inequality. For example, if you compare the ax and bx registers with the cmp instruction and then execute the ja instruction, the x86 CPU will jump to the specified destination location if ax was greater than bx. If ax is not greater than bx, control will fall through to the next instruction in the program. The jmp instruction unconditionally transfers control to the instruction at the destination address. The iret instruction returns control from an interrupt service routine. The get and put instructions let you read and write integer values: get will stop and prompt the user for a hexadecimal value and then store that value into the ax register, and put displays (in hexadecimal) the value of the ax register. The remaining instructions do not require any operands; they are halt and brk. Halt terminates program execution, and brk stops the program in a state from which it can be restarted.

8.3 THE STANDARD INPUT ROUTINES
While the standard library provides several input routines, there are three in particular that will be used most of the time:
- Getc (get a character)
- Gets (get a string)
- Getsm
Getc reads a single character from the keyboard and returns that character in a register. It does not modify any other registers. As usual, the carry flag returns the error status. You do not need to pass getc any values in the registers. Getc does not echo the input character to the display screen; you must explicitly print the character if you want it to appear on the output monitor.
The gets routine reads an entire line of text from the keyboard. It stores each successive character of the input line into a byte array whose base address is in the es:di register pair. This array must have room for 128 bytes. The gets routine reads each character and places it in the array, except for the carriage return character, and terminates the input line with a zero byte. Gets echoes each character you type to the display device; it also handles simple line editing functions such as backspace.
The getsm routine reads a string from the keyboard and returns a pointer to that string in the es:di register pair. The difference between gets and getsm is that you do not have to pass the address of an input buffer in es:di: getsm automatically allocates storage on the heap with a call to malloc and returns a pointer to the buffer in es:di.
8.4 THE STANDARD OUTPUT ROUTINES
The basic standard output routines are PUTC, PUTCR, PUTS, PUTH, PUTI, PRINT and PRINTF.
Putc outputs a single character to the display device. It outputs the character appearing in the al register and does not affect any registers unless there is an error on output (the carry flag denotes error/no error). Putcr outputs a "newline" to the standard output. The puts (put a string) routine prints the zero-terminated string at which es:di points; puts does not automatically output a new line after printing the string. The puth routine prints the value in the al register as exactly two hexadecimal digits, including a leading zero if the value is in the range 0..Fh. The puti routine prints the value in ax as a signed 16-bit integer. The print routine is one of the most often called procedures in the library; it prints the zero-terminated string that immediately follows the call to print. Printf uses the escape character ("\") to print special characters in a fashion similar to, but not identical to, C's printf.

8.5 MACROS
Many assemblers support macros: programmer-defined symbols that stand for some sequence of text lines. This sequence of text lines may include a sequence of instructions or a sequence of data storage pseudo-ops. Once a macro has been defined using the appropriate pseudo-op, its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them just as though they had appeared in the source code file all along (including, in better assemblers, expansion of any macros appearing in the replacement text). Since macros can have "short" names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be much shorter (requiring fewer lines of source code from the application programmer, as with a higher-level language). They can also be used to add higher levels of structure to assembly programs and, optionally, to introduce embedded debugging code via parameters and other similar features. Many assemblers have built-in macros for system calls and other special code sequences.
Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the expansion of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate a large number of assembly language instructions or data definitions based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language, since such programmers are not working with the computer's lowest-level conceptual elements.
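To illustrate the text-substitution idea behind macros, here is a deliberately simplified Python sketch (not a real assembler): the SWAP macro, its parameters and its body are invented for the example, and expansion is plain string substitution.

```python
# A macro maps a name to (parameter list, body); {a} and {b} are substituted on expansion.
macros = {
    "SWAP": ("a b", "push {a}\npush {b}\npop {a}\npop {b}")
}

def expand(line):
    parts = line.split()
    if parts and parts[0] in macros:
        params, body = macros[parts[0]]
        mapping = dict(zip(params.split(), parts[1:]))
        return body.format(**mapping)          # replace the macro name with its text lines
    return line

source = ["mov ax, 1", "mov bx, 2", "SWAP ax bx"]
for line in source:
    print(expand(line))
```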
Extreme values of a polynomial are the peaks and valleys of the polynomial - the points where direction changes.
- On a graph, you find extreme values by looking for a mountain top ("peak") or a valley floor.
- Mathematically, you find them by looking at the derivative. At an extreme point, where there is a direction change, the derivative of the function is zero.

The Number of Extreme Values of a Polynomial
Polynomials can be classified by degree, which comes in handy when finding extreme values. A polynomial of degree n can have as many as n - 1 extreme values. For example, a 4th degree polynomial has at most 4 - 1 = 3 extremes. This follows directly from the fact that at an extremum the derivative of the function is zero: if a polynomial has degree n, its derivative has degree n - 1. For example, take the 2nd degree polynomial 3x²; its derivative (using the power rule) is the first degree polynomial 6x. This reflects an important result of the fundamental theorem of algebra: a polynomial of degree n has at most n roots. Roots (or zeros of a function) are where the function crosses the x-axis; for a derivative, these are the extrema of its parent polynomial. Note that a polynomial of degree n doesn't necessarily have n - 1 extreme values - that's just the upper limit. The actual number of extreme values will always be n - a, where a is an odd number.

Absolute Extreme Values of Polynomials
The absolute extreme values (also known as the global extreme values) of a polynomial are the absolute maxima and minima of the polynomial. These are the points where the function takes its largest and smallest values, period. An absolute extreme value is also a relative extreme value. To find the absolute extreme values of a polynomial on a given range:
- Find all extreme values for the entire range.
- Calculate the value of the polynomial at each of the extremes.
- Find the value of the polynomial at the endpoints of the range.
The point at which the polynomial is largest is the absolute maximum value; the point at which the polynomial is smallest is the absolute minimum value. You can also simply graph the polynomial and make a visual judgement. For example, a polynomial might have a relative maximum at x = 2 and relative minima at x = 4 and x = -2; the relative minimum at x = -2 would also be a global minimum, while an absolute maximum would not exist because the value of the polynomial goes toward positive infinity at both ends.

The extreme value theorem tells us that a continuous function attains both a maximum value and a minimum value as long as the function is:
- continuous, and
- defined on a closed interval, I.
Another way of saying this is that the continuous, real-valued function, f, attains its maximum value and its minimum value each at least once on the interval. This theorem (sometimes called the maximum value theorem) is an "existence theorem": it tells us that a max and min exist, but doesn't tell us how to find them. It may seem too simple to be really important or significant, but it is actually the foundation for other theorems and is very significant in the groundwork of calculus. It is used to prove Rolle's theorem, among other things.

Examples of the Extreme Value Theorem in Action
Since the function f(x) = x² is continuous and real-valued on the closed interval [0, 1], we know that it will attain both a maximum and a minimum on this interval.
In order for the extreme value theorem to work, you do need to make sure that the function satisfies the requirements: it must be continuous, and its domain must be a closed interval. For example, the function f(x) = 1/x on the half-open interval (0, 1] doesn't attain a maximum; that's because the interval is not closed. The extreme value theorem itself was first proved by the Bohemian mathematician and philosopher Bernard Bolzano (of Bolzano's theorem fame) in 1830, but his book, Function Theory, was only published a hundred years later, in 1930. Another mathematician, Weierstrass, also discovered a proof of the theorem in 1860.
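The steps above for absolute extreme values can also be carried out numerically. This sketch uses a made-up 4th-degree polynomial, p(x) = x^4 - 2x^2, on the interval [-2, 2]; it finds where the derivative is zero and then compares those points with the endpoints.

```python
import numpy as np

p = np.polynomial.Polynomial([0, 0, -2, 0, 1])   # coefficients of -2x^2 + x^4, lowest degree first
dp = p.deriv()                                   # derivative: 4x^3 - 4x

critical = [r.real for r in dp.roots() if abs(r.imag) < 1e-9]      # where p'(x) = 0: -1, 0, 1
candidates = [c for c in critical if -2 <= c <= 2] + [-2.0, 2.0]   # include the endpoints

values = {x: p(x) for x in candidates}
print("absolute minimum at x =", min(values, key=values.get))      # x = -1 or 1, where p = -1
print("absolute maximum at x =", max(values, key=values.get))      # x = -2 or 2, where p = 8
```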
3. Permutations (Ordered Arrangements)

An arrangement (or ordering) of a set of objects is called a permutation. (We can also arrange just part of the set of objects.) In a permutation, the order in which we arrange the objects is important. Consider arranging 3 letters: A, B, C. How many ways can this be done?

Reminder - Factorial Notation
Recall from the Factorial section that n factorial (written `n!`) is defined as:
`n! = n × (n - 1) × (n - 2) ... 3 × 2 × 1`
Each of the theorems in this section uses factorial notation.

Theorem 1 - Arranging n Objects
In general, n distinct objects can be arranged in `n!` ways.
Example: In how many ways can `4` different resistors be arranged in series?

Theorem 2 - Number of Permutations
The number of permutations of n distinct objects taken r at a time, denoted by `P_r^n`, where repetitions are not allowed, is given by
`P_r^n = (n!)/((n - r)!)`
In particular, `P_n^n = n!` (since `0! = 1`). Some books write the number of permutations as `nPr`, and others as `P(n, r)`.
Example: In how many ways can a supermarket manager display `5` brands of cereals in `3` spaces on a shelf?
Example: How many different number-plates for cars can be made if each number-plate contains four of the digits `0` to `9` followed by a letter A to Z, assuming that (a) no repetition of digits is allowed? (b) repetition of digits is allowed?

Theorem 3 - Permutations of Different Kinds of Objects
The number of different permutations of n objects of which n1 are of one kind, n2 are of a second kind, ..., nk are of a k-th kind is
`(n!)/(n_1! × n_2! × ... × n_k!)`
Example: In how many ways can the six letters of the word "mammal" be arranged in a row?

Theorem 4 - Arranging Objects in a Circle
There are `(n - 1)!` ways to arrange n distinct objects in a circle (where the clockwise and anti-clockwise arrangements are regarded as distinct).
Example: In how many ways can `5` people be arranged in a circle?

Exercises
1. In how many ways can `6` girls and `2` boys be arranged in a row (a) without restriction? (b) such that the `2` boys are together? (c) such that the `2` boys are not together?
2. How many numbers greater than `1000` can be formed with the digits `3, 4, 6, 8, 9` if a digit cannot occur more than once in a number?
3. How many different ways can `3` red, `4` yellow and `2` blue bulbs be arranged in a string of Christmas tree lights with `9` sockets?
4. In how many ways can `5` people be arranged in a circle such that two people must sit together?
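A quick way to check the worked examples above is with Python's factorial function; the numbers below follow the questions in the text.

```python
from math import factorial

def nPr(n, r):
    # Theorem 2: permutations of n distinct objects taken r at a time
    return factorial(n) // factorial(n - r)

print(factorial(4))          # Theorem 1: 4 different resistors in series -> 24 arrangements
print(nPr(5, 3))             # 5 cereal brands in 3 shelf spaces -> 60 displays
print(nPr(10, 4) * 26)       # number-plates, no repeated digits, then one letter -> 131040
print(10**4 * 26)            # number-plates when repetition of digits is allowed -> 260000
print(factorial(6) // (factorial(3) * factorial(2) * factorial(1)))  # "mammal" -> 60
print(factorial(5 - 1))      # Theorem 4: 5 people in a circle -> 24
```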
Time: 93 hours College Credit Recommended The purpose of this course is to introduce you to the subject of statistics as a science of data. There is data abound in this information age; how to extract useful knowledge and gain a sound understanding of complex data sets has been more of a challenge. In this course, we will focus on the fundamentals of statistics, which may be broadly described as the techniques to collect, clarify, summarize, organize, analyze, and interpret numerical information. This course will begin with a brief overview of the discipline of statistics and will then quickly focus on descriptive statistics, introducing graphical methods of describing data. You will learn about combinatorial probability and random distributions, the latter of which serves as the foundation for statistical inference. On the side of inference, we will focus on both estimation and hypothesis testing issues. We will also examine the techniques to study the relationship between two or more variables; this is known as regression. By the end of this course, you should gain a sound understanding of what statistics represent, how to use statistics to organize and display data, and how to draw valid inferences based on data by using appropriate statistical tools. First, read the course syllabus. Then, enroll in the course by clicking "Enroll me in this course". Click Unit 1 to read its introduction and learning outcomes. You will then see the learning materials and instructions on how to use them. In today's technologically advanced world, we have access to large volumes of data. The first step of data analysis is to accurately summarize all of this data, both graphically and numerically, so that we can understand what the data reveals. To be able to use and interpret the data correctly is essential to making informed decisions. For instance, when you see a survey of opinion about a certain TV program, you may be interested in the proportion of those people who indeed like the program. In this unit, you will learn about descriptive statistics, which are used to summarize and display data. After completing this unit, you will know how to present your findings once you have collected data. For example, suppose you want to buy a new mobile phone with a particular type of a camera. Suppose you are not sure about the prices of any of the phones with this feature, so you access a website that provides you with a sample data set of prices, given your desired features. Looking at all of the prices in a sample can sometimes be confusing. A better way to compare this data might be to look at the median price and the variation of prices. The median and variation are two ways out of several ways that you can describe data. You can also graph the data so that it is easier to see what the price distribution looks like. In this unit, you will study precisely this; namely, you will learn both numerical and graphical ways to describe and display your data. You will understand the essentials of calculating common descriptive statistics for measuring center, variability, and skewness in data. You will learn to calculate and interpret these measurements and graphs. Descriptive statistics are, as their name suggests, descriptive. They do not generalize beyond the data considered. Descriptive statistics illustrate what the data shows. Numerical descriptive measures computed from data are called statistics. Numerical descriptive measures of the population are called parameters. 
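As a small illustration of the kind of summary described above, the snippet below computes the median and two simple measures of variation for a made-up sample of phone prices.

```python
import numpy as np

prices = np.array([299, 329, 349, 399, 429, 449, 499, 899])   # hypothetical sample data

print(np.median(prices))            # a typical price, little affected by the one extreme value
print(prices.mean())                # the average, pulled upward by the 899 outlier
print(prices.std(ddof=1))           # sample standard deviation: how spread out the prices are
print(prices.max() - prices.min())  # the range, another simple measure of variation
```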
Inferential statistics can be used to generalize the findings from sample data to a broader population. Completing this unit should take you approximately 22 hours. Probabilities affect our everyday lives. In this unit, you will learn about probability and its properties, how probability behaves, and how to calculate and use it. You will study the fundamentals of probability and will work through examples that cover different types of probability questions. These basic probability concepts will provide a foundation for understanding more statistical concepts, for example, interpreting polling results. Though you may have already encountered concepts of probability, after this unit, you will be able to formally and precisely predict the likelihood of an event occurring given certain constraints. Probability theory is a discipline that was created to deal with chance phenomena. For instance, before getting a surgery, a patient wants to know the chances that the surgery might fail; before taking medication, you want to know the chances that there will be side effects; before leaving your house, you want to know the chance that it will rain today. Probability is a measure of likelihood that takes on values between 0 and 1, inclusive, with 0 representing impossible events and 1 representing certainty. The chances of events occurring fall between these two values. The skill of calculating probability allows us to make better decisions. Whether you are evaluating how likely it is to get more than 50% of the questions correct on a quiz if you guess randomly; predicting the chance that the next storm will arrive by the end of the week; or exploring the relationship between the number of hours students spend at the gym and their performance on an exam, an understanding of the fundamentals of probability is crucial. We will also talk about random variables. A random variable describes the outcomes of a random experiment. A statistical distribution describes the numbers of times each possible outcome occurs in a sample. The values of a random variable can vary with each repetition of an experiment. Intuitively, a random variable, summarizing certain chance phenomenon, takes on values with certain probabilities. A random variable can be classified as being either discrete or continuous, depending on the values it assumes. Suppose you count the number of people who go to a coffee shop between 4 p.m. and 5 p.m. and the amount of waiting time that they spend in that hour. In this case, the number of people is an example of a discrete random variable and the amount of waiting time they spend is an example of a continuous random variable. Completing this unit should take you approximately 25 hours. The concept of sampling distribution lies at the very foundation of statistical inference. It is best to introduce sampling distribution using an example here. Suppose you want to estimate a parameter of a population, say the population mean. There are two natural estimators: 1. sample mean, which is the average value of the data set; and 2. median, which is the middle number when the measurements are arranged in ascending (or descending) order. In particular, for a sample of even size n, the median is the mean of the middle two numbers. But which one is better, and in what sense? This involves repeated sampling, and you want to choose the estimator that would do better on average. 
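For instance, the chance of getting more than 50% of the questions right on a quiz by guessing at random, mentioned above, can be computed directly. The sketch below assumes a 10-question true/false quiz, which is an invented setup for illustration.

```python
from math import comb

n, p = 10, 0.5          # 10 questions, probability 0.5 of guessing each one correctly
# P(more than 50% correct) = P(6 or more correct) on a 10-question quiz
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(6, n + 1))
print(round(prob, 4))   # about 0.377
```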
It is clear that different samples may give different sample means and medians; some of them may be closer to the truth than the others. Consequently, we cannot compare these two sample statistics or, in general, any two sample statistics on the basis of their performance with a single sample. Instead, you should recognize that sample statistics are themselves random variables; therefore, sample statistics should have frequency distributions by taking into account all possible samples. In this unit, you will study the sampling distribution of several sample statistics. This unit will show you how the central limit theorem can help to approximate sampling distributions in general. Completing this unit should take you approximately 15 hours. In this unit, you will learn how to use the central limit theorem and confidence intervals, the latter of which enables you to estimate unknown population parameters. The central limit theorem provides us with a way to make inferences from samples of non-normal populations. This theorem states that given any population, as the sample size increases, the sampling distribution of the means approaches a normal distribution. This powerful theorem allows us to assume that given a large enough sample, the sampling distribution will be normally distributed. You will also learn about confidence intervals, which provide you with a way to estimate a population parameter. Instead of giving just a one-number estimate of a variable, a confidence interval gives a range of likely values for it. This is useful, because point estimates will vary from sample to sample, so an interval with certain confidence level is better than a single point estimate. After completing this unit, you will know how to construct such confidence intervals and the level of confidence. Completing this unit should take you approximately 10 hours. A hypothesis test involves collecting and evaluating data from a sample. The data gathered and evaluated is then used to make a decision as to whether or not the data supports the claim that is made about the population. This unit will teach you how to conduct hypothesis tests and how to identify and differentiate between the errors associated with them. Many times, you need answers to questions in order to make efficient decisions. For example, a restaurant owner might claim that his restaurant's food costs 30% less than other restaurants in the area, or a phone company might claim that its phones last at least one year more than phones from other companies. In order to decide whether it would be more affordable to eat at the restaurant that "costs 30% less" or another restaurant in the area, or in order to decide which phone company to choose based on the durability of the phone, you will have to collect data to justify these claims. The process of hypothesis testing is a way of decision-making. In this unit, you will learn to establish your assumptions through null and alternative hypotheses. The null hypothesis is the hypothesis that is assumed to be true and the hypothesis you hope to nullify, while the alternative hypothesis is the research hypothesis that you claim to be true. This means that you need to conduct the correct tests to be able to accept or reject the null hypothesis. You will learn how to compare sample characteristics to see whether there is enough data to accept or reject the null hypothesis. Completing this unit should take you approximately 12 hours. 
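The comparison between the sample mean and the sample median described above can be explored with a small simulation. The population below (normal with mean 50) and the sample size are made-up choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(loc=50, scale=10, size=(10_000, 25))   # 10,000 repeated samples of size 25

means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print(means.mean(), means.std())      # centred near 50, roughly normal (central limit theorem)
print(medians.mean(), medians.std())  # also near 50, but more variable: the mean does better here
```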
In this unit, we will discuss situations in which the mean of a population, treated as a variable, depends on the value of another variable. One of the main reasons why we conduct such analyses is to understand how two variables are related to each other. The most common type of relationship is a linear relationship. For example, you may want to know what happens to one variable when you increase or decrease the other variable. You want to answer questions such as, "Does one variable increase as the other increases, or does the variable decrease?” For example, you may want to determine how the mean reaction time of rats depends on the amount of drug in bloodstream. In this unit, you will also learn to measure the degree of a relationship between two or more variables. Both correlation and regression are measures for comparing variables. Correlation quantifies the strength of a relationship between two variables and is a measure of existing data. On the other hand, regression is the study of the strength of a linear relationship between an independent and dependent variable and can be used to predict the value of the dependent variable when the value of the independent variable is known. Completing this unit should take you approximately 12 hours. This study guide will help you get ready for the final exam. It discusses the key topics in each unit, walks through the learning outcomes, and lists important vocabulary terms. It is not meant to replace the course materials! Course Feedback Survey Please take a few minutes to give us feedback about this course. We appreciate your feedback, whether you completed the whole course or even just a few resources. Your feedback will help us make our courses better, and we use your feedback each time we make updates to our courses. If you come across any urgent problems, email firstname.lastname@example.org or post in our discussion forum. Certificate Final Exam Take this exam if you want to earn a free Course Completion Certificate. To receive a free Course Completion Certificate, you will need to earn a grade of 70% or higher on this final exam. Your grade for the exam will be calculated as soon as you complete it. If you do not pass the exam on your first try, you can take it again as many times as you want, with a 7-day waiting period between each attempt. Once you pass this final exam, you will be awarded a free Course Completion Certificate. - Receive a grade Saylor Direct Credit Take this exam if you want to earn college credit for this course. This course is eligible for college credit through Saylor Academy's Saylor Direct Credit Program. The Saylor Direct Credit Final Exam requires a proctoring fee of $5. To pass this course and earn a Proctor-Verified Course Certificate and official transcript, you will need to earn a grade of 70% or higher on the Saylor Direct Credit Final Exam. Your grade for this exam will be calculated as soon as you complete it. If you do not pass the exam on your first try, you can take it again a maximum of 3 times, with a 14-day waiting period between each attempt. We are partnering with SmarterProctoring to help make the proctoring fee more affordable. We will be recording you, your screen, and the audio in your room during the exam. This is an automated proctoring service, but no decisions are automated; recordings are only viewed by our staff with the purpose of making sure it is you taking the exam and verifying any questions about exam integrity. 
We understand that there are challenges with learning at home - we won't invalidate your exam just because your child ran into the room! You will need:
- Desktop Computer
- Chrome (v74+)
- Webcam + Microphone
- 1mbps+ Internet Connection
Once you pass this final exam, you will be awarded a Credit-Recommended Course Completion Certificate and can request an official transcript.
Studying development is about measuring how developed one country is compared to other countries, or to the same country in the past. Development measures how economically, socially, culturally or technologically advanced a country is. The two most important ways of measuring development are economic development and human development.
- Economic development is a measure of a country's wealth and how it is generated (for example, agriculture is considered less economically advanced than banking).
- Human development measures the access the population has to wealth, jobs, education, nutrition, health, leisure and safety, as well as political and cultural freedom. Material elements, such as wealth and nutrition, are described as the standard of living. Health and leisure are often referred to as quality of life.
Let's first examine economic growth. A country's economic growth is usually indicated by an increase in that country's gross domestic product, or GDP. Generally speaking, gross domestic product is an economic model that reflects the value of a country's output. In other words, a country's GDP is the total monetary value of the goods and services produced by that country over a specific period of time. To assess the economic development of a country, geographers use economic indicators including:
- Gross Domestic Product (GDP) is the total value of goods and services produced by a country in a year.
- Gross National Product (GNP) measures the total economic output of a country, including earnings from foreign investments.
- GNP per capita is a country's GNP divided by its population. (Per capita means per person.)
- Unemployment is the number of people who cannot find work.
- Economic structure shows the division of a country's economy between the primary, secondary and tertiary sectors.
Now let's take a look at economic development. A country's economic development is usually indicated by an increase in citizens' quality of life. 'Quality of life' is often measured using the Human Development Index, which is an economic model that considers intrinsic personal factors not considered in economic growth, such as literacy rates, life expectancy and poverty rates. While economic growth often leads to economic development, it's important to note that a country's GDP doesn't include intrinsic development factors, such as leisure time, environmental quality or freedom from oppression. Using the Human Development Index, factors like literacy rates and life expectancy generally imply a higher per capita income and therefore indicate economic development. Human development indicators include:
- Life expectancy – the average age to which a person lives, e.g. this is 79 in the UK and 48 in Kenya.
- Infant mortality rate – counts the number of babies, per 1000 live births, who die under the age of one. This is 5 in the UK and 61 in Kenya.
- Access to basic services – the availability of services necessary for a healthy life, such as clean water and sanitation.
- Access to healthcare – takes into account statistics such as how many doctors there are for every patient.
- Access to education – measures how many people attend primary school, secondary school and higher education.
- Literacy rate – the percentage of adults who can read and write. This is 99 per cent in the UK, 85 per cent in Kenya and 60 per cent in India.
- Access to technology – includes statistics such as the percentage of people with access to phones, mobile phones, television and the internet.
- Male/female equality – compares statistics such as the literacy rates and employment between the sexes. - Government spending priorities – compares health and education expenditure with military expenditure and paying off debts.
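The per-capita idea referred to above is simply a national total divided by population. The sketch below (in Python) is a minimal illustration of that calculation; the country names and figures are invented for demonstration and are not real statistics.

# Illustrative sketch: GNP per capita = total GNP / population.
# Country names and figures are hypothetical, for demonstration only.

countries = {
    # name: (GNP in US dollars, population)
    "Country A": (2_500_000_000_000, 60_000_000),
    "Country B": (90_000_000_000, 45_000_000),
}

for name, (gnp, population) in countries.items():
    per_capita = gnp / population  # dollars per person
    print(f"{name}: GNP per capita = ${per_capita:,.0f}")

Comparing the two per-capita figures, rather than the raw totals, is what lets a small wealthy country and a large poorer one be placed on the same scale.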
This artist's impression depicts the accretion disc surrounding a black hole, in which the inner region of the disc precesses. "Precession" means that the orbit of material surrounding the black hole changes orientation around the central object. In these three views, the precessing inner disc shines high-energy radiation that strikes the matter in the surrounding accretion disc. This causes the iron atoms in that disc to emit X-rays, depicted as the glow on the accretion disc to the right (in view a), to the front (in view b) and to the left (in view c) (see Figure 1). Image credit: ESA/ATG medialab

In a study published in July 2016, astronomers used data from ESA's XMM-Newton X-ray Observatory and NASA's NuSTAR telescope to measure this "wobble" in X-ray emission from excited iron atoms. Scientists interpreted this as evidence for the Lense-Thirring effect — a name for the precession phenomenon — in the strong gravitational field of a black hole.

The European Space Agency's X-ray Multi-Mirror Mission, XMM-Newton, was launched in December 1999. The largest scientific satellite to have been built in Europe, it is also one of the most sensitive X-ray observatories ever flown. More than 170 wafer-thin, cylindrical mirrors direct incoming radiation into three high-throughput X-ray telescopes. XMM-Newton's orbit takes it almost a third of the way to the moon, allowing for long, uninterrupted views of celestial objects.

Black Hole Makes Material Wobble Around It

The European Space Agency's orbiting X-ray observatory, XMM-Newton, has proved the existence of a "gravitational vortex" around a black hole. The discovery, aided by NASA's Nuclear Spectroscopic Telescope Array (NuSTAR) mission, solves a mystery that has eluded astronomers for more than 30 years, and will allow them to map the behavior of matter very close to black holes. It could also open the door to future investigations of Albert Einstein's general relativity.

Matter falling into a black hole heats up as it plunges to its doom. Before it passes into the black hole and is lost from view forever, it can reach millions of degrees. At that temperature it shines X-rays into space.

In the 1980s, pioneering astronomers using early X-ray telescopes discovered that the X-rays coming from stellar-mass black holes in our galaxy flicker. The changes follow a set pattern. When the flickering begins, the dimming and re-brightening can take 10 seconds to complete. As the days, weeks and then months progress, the period shortens until the oscillation takes place 10 times every second. Then, the flickering suddenly stops altogether. The phenomenon was dubbed the Quasi Periodic Oscillation (QPO).

"It was immediately recognized to be something fascinating because it is coming from something very close to a black hole," said Adam Ingram, University of Amsterdam, the Netherlands, who began working to understand QPOs for his doctoral thesis in 2009.

During the 1990s, astronomers had begun to suspect that the QPOs were associated with a gravitational effect predicted by Einstein's general relativity: that a spinning object will create a kind of gravitational vortex.

"It is a bit like twisting a spoon in honey. Imagine that the honey is space and anything embedded in the honey will be 'dragged' around by the twisting spoon," explained Ingram.
“In reality, this means that anything orbiting a spinning object will have its motion affected.” In the case of an inclined orbit, it will “precess.” This means that the whole orbit will change orientation around the central object. The time for the orbit to return to its initial condition is known as a precession cycle. In 2004, NASA launched Gravity Probe B to measure this so-called Lense-Thirring effect around Earth. After painstaking analysis, scientists confirmed that the spacecraft would turn through a complete precession cycle once every 33 million years. Around a black hole, however, the effect would be much more noticeable because of the stronger gravitational field. The precession cycle would take just a matter of seconds or less to complete. This is so close to the periods of the QPOs that astronomers began to suspect a link. Ingram began working on the problem by looking at what happened in the flat disc of matter surrounding a black hole. Known as an accretion disc, it is the place where material gradually spirals inwards towards the black hole. Scientists had already suggested that, close to the black hole, the flat accretion disc puffs up into a hot plasma, in which electrons are stripped from their host atoms. Termed the hot inner flow, it shrinks in size over weeks and months as it is eaten by the black hole. Together with colleagues, Ingram published a paper in 2009 suggesting that the QPO is driven by the Lense-Thirring precession of this hot flow. This is because the smaller the inner flow becomes, the closer to the black hole it would approach and so the faster its Lense-Thirring precession cycle would be. The question was: how to prove it? “We have spent a lot of time trying to find smoking gun evidence for this behavior,” said Ingram. The answer is that the inner flow is releasing high-energy radiation that strikes the matter in the surrounding accretion disc, making the iron atoms in the disc shine like a fluorescent light tube. The iron releases X-rays of a single wavelength — referred to as “a spectral line.” Because the accretion disc is rotating, the iron line has its wavelength distorted by the Doppler effect. Line emission from the approaching side of the disc is squashed — blue shifted — and line emission from the receding disc material is stretched — red shifted. If the inner flow really is precessing, it will sometimes shine on the approaching disc material and sometimes on the receding material, making the line wobble back and forth over the course of a precession cycle. Seeing this wobbling is where XMM-Newton came in. Ingram and colleagues from Amsterdam, Cambridge, Southampton and Tokyo applied for a long-duration observation that would allow them to watch the QPO repeatedly. They chose black hole H 1743-322, which was exhibiting a four-second QPO at the time. They watched it for 260,000 seconds with XMM-Newton. They also observed it for 70,000 seconds with NASA’s NuSTAR X-ray observatory. “The high-energy capability of NuSTAR was very important,” Ingram said. “NuSTAR confirmed the wobbling of the iron line, and additionally saw a feature in the spectrum called a ‘reflection hump’ that added evidence for precession.” After a rigorous analysis process of adding all the observational data together, they saw that the iron line was wobbling in accordance with the predictions of general relativity. “We are directly measuring the motion of matter in a strong gravitational field near to a black hole,” says Ingram. 
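To make the red shift/blue shift argument concrete, here is a minimal sketch of the longitudinal Doppler shift applied to an iron fluorescence line. It assumes the commonly quoted 6.4 keV rest energy for the neutral iron K-alpha line and an illustrative line-of-sight speed of 10 per cent of the speed of light; gravitational redshift and light bending close to the black hole are deliberately ignored, so this is a toy calculation rather than a model of the actual analysis.

import math

E_REST_KEV = 6.4   # rest-frame energy of the iron K-alpha line (keV), the usual quoted value
BETA = 0.1         # assumed line-of-sight speed as a fraction of the speed of light (illustrative)

def observed_energy(e_rest, beta_los):
    """Longitudinal relativistic Doppler shift.
    beta_los > 0: emitter approaches the observer (blueshift);
    beta_los < 0: emitter recedes (redshift)."""
    return e_rest * math.sqrt((1 + beta_los) / (1 - beta_los))

print(f"approaching side of the disc: {observed_energy(E_REST_KEV, +BETA):.2f} keV")
print(f"receding side of the disc:    {observed_energy(E_REST_KEV, -BETA):.2f} keV")

As the precessing inner flow illuminates first the approaching and then the receding side of the disc, the measured line energy swings between values like these over one precession cycle; that swing is the "wobble" the observations were designed to catch.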
This is the first time that the Lense-Thirring effect has been measured in a strong gravitational field. The technique will allow astronomers to map matter in the inner regions of accretion discs around black holes. It also hints at a powerful new tool with which to test general relativity.

Einstein's theory is largely untested in such strong gravitational fields. So if astronomers can understand the physics of the matter that is flowing into the black hole, they can use it to test the predictions of general relativity as never before – but only if the movement of the matter in the accretion disc can be completely understood.

"If you can get to the bottom of the astrophysics, then you can really test the general relativity," says Ingram. A deviation from the predictions of general relativity would be welcomed by a lot of astronomers and physicists. It would be a concrete signal that a deeper theory of gravity exists.

Larger X-ray telescopes in the future could help in the search because they are more powerful and could more efficiently collect X-rays. This would allow astronomers to investigate the QPO phenomenon in more detail. But for now, astronomers can be content with having seen Einstein's gravity at play around a black hole.

"This is a major breakthrough since the study combines information about the timing and energy of X-ray photons to settle the 30-year debate around the origin of QPOs. The photon-collecting capability of XMM-Newton was instrumental in this work," said Norbert Schartel, ESA Project Scientist for XMM-Newton.

Source: NASA – Jet Propulsion Laboratory – California Institute of Technology
Parliamentary government, or cabinet government, is the form of constitutional democracy in which executive authority emerges from, and is responsible to, legislative authority. It differs from the arrangement of independently elected executive and legislative agencies found in the United States. Developed in western Europe and particularly in Great Britain, parliamentary government provides the pattern usually assumed by democratic experiments in eastern Europe, Asia, and Africa. In common usage the term "parliamentary government" is reserved for those political systems that not only are parliamentary but are based on free and competitive elections. This excludes one-party dictatorships exercising power within a formal parliamentary structure.

The essential union of executive and legislative branches is accompanied by the constitutional principle that the legislative body, or parliament, is supreme. Usually the principal executive, the prime minister, is appointed by a monarchical or presidential head of state. The prime minister, in turn, chooses the executive heads of government departments, the most important of whom are in the prime minister's cabinet. Both the prime minister and his cabinet, known together as the government, are ordinarily members of parliament. They hold ministerial office only as long as they have majority support in parliament. In a bicameral legislature, this requirement usually means majority support in the more popularly elected house. Occasionally a government may be made responsible to both houses. In any case, the rule of continuous legislative confidence is regularly demonstrated in the government's submission of its program and record for parliamentary approval. A defeat for the government through an adverse legislative vote, on a plainly important issue, indicates a lack of confidence requiring the government either to resign or to attempt, by means of a general election, to secure a new parliamentary majority. A government can stay in office only temporarily without parliamentary support for its policies.

Stalemate between an executive of one persuasion and a legislature of another, as occurs with the American system of separated powers, is meant to be impossible in the parliamentary system. Instability of executive authority, on the other hand, is entirely possible. The accepted way to avoid it is by the development in parliament of a strong partisan majority prepared to support a prime minister and his cabinet during the several years between parliamentary elections. The executive authority then becomes the effective policy maker.

More than most working governmental forms, the parliamentary system is not so much an invention as it is an evolutionary product. Its essential union of executive and legislative authority is not simply a deliberate constitutional design. It is more significantly the result of the process by which representative assemblies successfully challenged monarchs in the course of modern history. This process, it must be said, was European, although attempts have been made to transfer its result to other parts of the world. It can even be argued that the process was characteristically British rather than European, and that, therefore, parliamentary regimes in continental Europe represented institutional transfers. The outstanding feature of the historical experience was the evolution of parliament from a monarch's council to a supremacy of its own.
Assembled originally, as early as the medieval period, to provide advice and especially to give financial support to the monarch, parliament became in modern history the means by which first an established landowning oligarchy, then a commercial class, and finally representatives of the bulk of the population secured control of the machinery of government. The development was a long one, covering three to five centuries, and it coincided with what now seems, by comparison with new nations, a most gradual change in European society. In particular, parliamentary government developed where there was a substantial historical interval of capitalist, middle-class ascendancy between the era of dominance by court and nobility and the era of mass democracy. Especially in the nineteenth century, parliament seemed to be the agency of the substantial middle class produced by commercial and industrial capitalism.

In Britain, Parliament's supremacy over the monarch dates from 1688, when Parliament asserted its authority to determine the monarchical succession. This authority was made effective by the eighteenth-century development of a cabinet in which those who were nominally ministers of the crown became responsible in fact to Parliament. The monarch, losing control of his ministers, ceased to be the effective executive. In an increasingly rationalist and subsequently democratic age, the hereditary principle did not provide a likely basis for independent executive authority. The claims of a representative body were not successfully resisted by a monarch. Where the monarch resisted too stubbornly, he was likely to be dethroned in favor of another monarch or of a president. The latter, even if not popularly elected, might be a stronger claimant to independent executive authority, but usually his powers were cast in the mold of a constitutional monarch's. The presidential innovation was a late one in the parliamentary system; the French only introduced the presidency after 1870. This change may make the parliamentary order less smoothly evolutionary, as indeed was the case in France, than the retention of the monarchy while its power is reduced. The parliamentary system with a president has been successful in a few nations and has been introduced in many new nations. [See Monarchy.]

Parliamentary government developed its more essential features in Britain before the emergence of mass democracy. The larger part of the British population did not even have the right to vote until about the last quarter of the nineteenth century, by which time parliamentary government, including the cabinet system, was well established. Elsewhere, however, the establishment of parliamentary institutions and the achievement of universal suffrage more nearly coincided. The results in such instances were not always so favorable for the stability of the governmental system as the British phasing seems to have been. The evolutionary length of British experience bears emphasis because it may not necessarily be typical of nations attempting to practice parliamentary government. This raises the important general question whether parliamentary government can be dissociated from conditions chiefly typical of Britain, the British-settled territories, and those smaller Continental nations which closely resemble them.

The parliamentary system has been differently conceived, not only over time but also from country to country.
The view that the parliamentary body itself, rather than the cabinet, was the effective policy-making authority remained a prevailing French conception, in the form of "government by assembly," long after it had lost meaning in the British system. By the middle of the nineteenth century, often viewed as the classical period of parliamentary government, the British cabinet had assumed a central importance. This was reflected in Bagehot's famous appraisal (1865-1867). Still, Bagehot did not move Parliament off stage. The cabinet, although it exercised leadership, remained an agency of Parliament in the sense that the House of Commons decided whether to turn out a government after discussion of its policy. Bagehot regarded this elective function, rather than the legislative function, as the most important one that Parliament performed. In this way, Parliament remained the locus of power despite the cabinet's admittedly crucial role in the whole system.

In the twentieth century, however, the original British model changed as the cabinet increasingly had the support of a cohesive majority and thus stayed in office from one general election to the next. The British Parliament was no longer expected to exercise its power to dismiss a government. The elective function was transferred from the Commons to the general public. Parliament remained to register the electorate's decision as to which party's leadership was to form the cabinet, but parliamentary debates lost the impact they had formerly had on the life of the government. Accompanying the increasingly direct relation of the cabinet to the electorate was a strengthening of the prime minister's role. As the leader of a majority party, he has come, in effect, to be chosen as a chief executive when voters elect parliamentary representatives of his party. He bears individually a responsibility to the country. His cabinet has tended to become more a changing team of ministers carrying out the leader's program than a genuinely collegial policy-making body.

The effect of this tendency, along with that which has made the government more directly responsible to the electorate than to Parliament, has been to give the British system an appearance that is less parliamentary and more presidential. It might even seem more presidential than the American system, since the prime minister is less restrained by a parliament, in which his party has a cohesive majority, than is a president of the United States by the Congress. On the other hand, the prime minister's cohesive majority, ordinarily so supportive, may decide to displace him in admittedly rare but important circumstances. Moreover, conceiving of the parliamentary system in near-presidential terms is also somewhat hazardous in that a cohesive majority party may not always exist as it has in Britain during the middle years of the twentieth century [see Majority rule].

Yet the tendencies that have changed parliamentary government in Britain are observable elsewhere, and usually in association with a strong parliamentary party possessing a majority or a near-majority of seats. An important case in point is the working of the West German system after World War II. Here the chancellor, as a German counterpart of the British prime minister, established an ascendancy based on his party leadership and his popularity with the electorate.
Most other Continental nations in the parliamentary mold have not developed strong executive leadership to the same degree as West Germany; in the Fourth French Republic, parliamentary government broke down in the absence of a stable cabinet system. The smaller European nations have also been successful in strengthening their cabinets while retaining the parliamentary system. The English-speaking nations of the overseas Commonwealth even more closely resemble the British pattern.

Cohesion and party government

Political parties have not always provided a popular base for executive leadership. Earlier they were simply groups of representatives tending loosely to support ministers or potential ministers. Although they began to have followings in the electorate before mass enfranchisement, modern, large-scale organization of political parties dates only from the last decades of the nineteenth century. So does the crucial cohesiveness of the parliamentary party in support of its leadership. Such support represents a commitment to the electorate, and this commitment makes it possible for parliamentary government to be effective party government.

The vital party is the one in parliament, rather than the extraparliamentary, mass-membership organization. The latter may be divided in its support of a leadership and its policy, but as long as the parliamentary party is able to unite, it serves as the base of stable executive authority. The parliamentary party may be influenced by the external organization and its apparatus, but decisions are made by the publicly elected members of the parliamentary party, who ordinarily support their leadership. This is a residual aspect of parliamentary supremacy, transferred from its operation in the body as a whole to operation within a party. As such, it has been strong enough in Britain and in other parliamentary nations to resist the pressures, especially of social-democratic political movements, to make parliamentary parties into agents of an outside, dues-paying, activist membership. Members of parliament feel responsible to their party's broader electorate, not just to active party workers. This is consistent with the willing acceptance by parliamentary members of the policy positions of their leaders, who also regard themselves as directly responsible to the broader electorate.

Coalition and competition

Party cohesion alone is not the key to the achievement of executive stability in the parliamentary system. Equally important is the presence of a party, or perhaps a combination of parties, commanding a majority in the legislative body. The simplest case, barring the noncompetitive one-party arrangement, is the two-party competition that largely characterizes British parliamentary politics. Given only two major parties, the probability is high that one party will have a parliamentary majority, even a comfortable working majority. This regularly happened in Britain during the three decades following 1931. Minor parties, as opposed to a third party or other substantial parties in a multiparty system, do not ordinarily win enough seats to reduce one of the large parties to only plurality strength. Even a third party, or any number of other competing parties, would not necessarily prevent one party from gaining a majority. The Christian Democratic parties of Germany and Italy secured narrow and fairly brief majorities in such multiparty circumstances during the post-World War II years.
However, absent from these unusual successes was the other ingredient of the British model of party government: a single opposition party large enough to be a potential majority and so an alternative governing party, to the point of providing a leadership core of potential ministers called the "shadow cabinet."

Despite the practical success of the two-party model, it is not necessarily an essential feature of the parliamentary system. The British themselves had a three-party system, often without a majority party, as recently as the 1920s, and they may have it again. Moreover, there are other nations that have almost consistently had more than two important parties and yet maintained parliamentary government. The leading examples, it is true, are the Scandinavian countries and a few English-speaking nations. Their particular multiparty systems have often produced a majority or near-majority party and have seldom been as fragmented as the French system, which provides the main instance of the difficulty of maintaining parliamentary government in a multiparty system.

Without a majority party, various means are used to try to secure majority legislative support, as required by parliamentary government. A large, but not always the largest, plurality party can form a cabinet of its own members while seeking the votes of another party (or of other parties) so as to produce a working majority. This was British practice in the 1920s. Elsewhere, it has been usual to form a cabinet that is itself a coalition representing enough parties for a parliamentary majority. Such a coalition might be headed by a prime minister acting as a leader either of the largest single party or of a smaller but key center party. The task of achieving stability is likely to be facilitated if one party is fairly near a majority on its own and can therefore manage with one or two minor coalition partners. A broader coalition is also feasible, but if it includes all or most of the large parties, the effect is to eliminate the possibility of an opposition presenting itself as an alternative government. The very broad coalition, therefore, has usually been regarded as suited only to wartime or other special circumstances, when it is deemed appropriate in two-party as well as in multiparty systems.

Special circumstances, however, can become institutionalized, as in Austria after World War II. Regularly the two major Austrian parties united in a coalition, excluding minor parties, and yet fought competitive elections against each other. In this novel arrangement, the chief function of elections was to decide which of the two parties would increase its share of cabinet positions, not to decide which of the two should form a cabinet. Elections may serve a similarly limited purpose in a more clearly multiparty nation when a coalition cabinet is established over a fairly broad political spectrum, embracing perhaps two-thirds but not all of a parliament. In this case the voters, by electing more or fewer representatives of a given party, help determine the relative strength of the several parties regularly composing the cabinet. It means a more limited form of electoral competition than that between two parties, or two groups of parties, each contending to form a government of its own. But it is competition nonetheless, and of a kind that seems relevant where clear-cut alternatives are simply not available.
The result is an operation of parliamentary government different from the standard British method, but it is not incompatible with democracy. The same cannot be said so surely for those systems in which there is one-party domination. Much depends on both the degree and the method of domination. Clearly, when one party has a legal monopoly, as in communist states, there can be no parliamentary government in the Western sense. But where party competition is legal, even if socially discouraged, the problem is more elusive. It is conceivable that a nation, soon after independence, for instance, could regard as legitimate only the party that led the national independence movement. This seems to be the case, in varying degrees, in Africa and Asia during the immediate post-imperial period. Parties other than the governing party often have too little support to furnish serious parliamentary opposition. Political competition takes place mainly within the one major party. Given freedom of expression, in and out of parliament, and given free choice of parliamentary candidates at local party levels, intraparty competition can be substantial. But when, as often happens, the leader of the single party regards open criticism as illegitimate, whether from within his party or from outside, the result tends to resemble the deliberately one-party dictatorships of communist nations.

Another kind of difficulty about the role of the opposition arises when substantial competition comes from parties that are not in fact democratic, such as communist and fascist parties. Their opposition raises the question of their legitimacy in any democratic system. This question stems from the assumption that such parties would, if in power, overturn the very parliamentary regime under which they had operated. This is exactly what the National Socialist party did to the Weimar Republic. Even without actually coming to power, a fascist or communist party can adversely affect the working of the parliamentary system by securing enough votes to become the principal opposition. The electorate then has no choice except to vote for the government, unless it is willing to support an opposition dedicated to a radical transformation of the democratic constitutional order. The French and Italian Communist parties came close to creating such limiting alternatives after World War II.

The executive and dissolution

Strengthening the cabinet and prime minister has changed parliamentary government from its nineteenth-century character, but it has not violated the basic principle of executive responsibility to parliament. On the other hand, increasing the power of the head of state, monarchical or presidential, must be understood as a step away from parliamentary government. An important and independent policymaking role for an elected president, for example, seems incompatible with the parliamentary system. The consequence of such an increase of power, as exemplified by the constitution of the Fifth French Republic and especially by the practices of President de Gaulle, is not enough to create a full-fledged presidential system but is enough to produce a hybrid parliamentary system. The counterparliamentary tendency is likely to be all the stronger when a president, equipped with constitutional authority, is popularly elected and so can claim a popular mandate to rival parliament and its chosen cabinet.
Such a president ceases to be the nonpartisan, dignified head of state typified by a modern constitutional monarch or by a president chosen by parliament to play the monarch's role [see Delegation of powers]. Even a constitutional monarch, however, has a political function by virtue of his power to appoint the prime minister. In principle, this involves choosing a man whom a parliamentary majority would elect. And, in practice, the choice often falls automatically on the known leader of the majority party. Matters are more complicated if there is no clearly known leader of the majority party, perhaps because of a sudden death or resignation. Potentially still more complicated is the situation where there is no majority party. But even in this contingency, which regularly occurs in multiparty systems, the head of state is effectively limited in his choice by the hard fact that the prime minister, in order to remain in office, must have the support of a parliamentary majority. For the head of state to insist on his own preference, rather than parliament's, would violate a principle of the system.

The same applies to the power of dissolving parliament. Technically, like the choice of prime minister, this is in the hands of the head of state. But for a head of state to dissolve parliament against the wishes of its prime minister and cabinet, or to refuse to dissolve when the prime minister advises him to dissolve, would interfere with the regular working of parliamentary government. Originally, as with so many other powers, dissolution was in fact the prerogative of the head of state, notably of the British monarch. But the exercise of authority over this important political matter is now the prime minister's. In some systems, e.g., the Third French Republic, parliament retained the power to dissolve itself after a given number of years. This weakens the executive authority because the prime minister needs the power to dissolve parliament as a means of retaining the support of a majority. Members of the prime minister's majority, in this view, will be more likely to continue voting for him if they believe that his parliamentary defeat might mean not his resignation but a new general election. Ordinary members will not want to risk their seats.

This line of argument seems applicable in a multiparty parliament, where a prime minister might effectively threaten dissolution to keep coalition members in line. But the threat in relation to one's own party seems irrelevant in a two-party parliament, where a prime minister unquestionably has the effective power to dissolve but does not use it as a means to discipline his own followers. They generally have sufficient reasons to support him without the threat of dissolution. Rather, the British prime minister arranges to dissolve Parliament at a time calculated to be of maximum party advantage—for him as well as his parliamentary followers. Therefore, the least desirable occasion to dissolve would be just after a desertion by parliamentary members of his party. Going to the country with a divided party would only be to the advantage of the opposition. In summary, the power of dissolution strengthens a prime minister's position, as it is meant to do in a parliamentary system; but with a highly developed party structure, like Britain's, it is used primarily to time a general election for party advantage and not for retaliation against a loss of parliamentary confidence.
Individual members and the legislative function

Given the tendency of the parliamentary system, in its British form, to develop strong executive authority based on a cohesive party majority, it follows that the legislature is not a policy-making body in the manner of the U.S. Congress—more or less coequal with the executive. Individual nonministerial members of the House of Commons, for example, do not directly legislate, as do American congressmen when deciding whether to accept governmental proposals or to substitute proposals of their own. Members of the British Parliament may, especially in private party councils, influence what their leadership presents by way of policy, and they certainly question ministers (in a daily question hour) about policy particulars, in addition to debating policy generally. But they do not make policy as members of a coordinate branch of government; they lack the legislative facilities to do so. British parliamentary committees are not independent loci of power providing nonministerial members with opportunities to overturn the government's program. In this respect the British situation is extreme, since the House of Commons avoids official subject-matter committees altogether. Other parliamentary governments, even if otherwise in the British mold, do not usually go this far in guarding against a challenge to the theoretical supremacy of the whole house or to the practical supremacy of the government, whose authority rests on the confidence of a majority in the whole house. Nevertheless, wherever parliamentary government has developed a strong and stable executive, there is necessarily an important limitation on the exercise of independent policy making by the nonministerial membership. This means that parliament's public importance rests heavily on its public debates, but those may now fail to secure as much attention as is given party leaders on radio and television.

Only where parliamentary government has not developed strong executive leadership do legislative activity and organization resemble the American congressional pattern. The extreme examples are provided by the Third and Fourth French republics, both of which had effective subject-matter committees whose leaders could substitute their policies for the government's. These committee leaders were often rivals of governmental leaders, who could be driven from office by adverse committee action and subsequent adverse parliamentary action. Responsibility, in this situation, is not firmly fixed in the cabinet, which usually has no stable majority in the legislature. Even on foreign policy questions, where greater executive authority has been usual in all systems, the French style of parliamentary government imposed limits on cabinet leadership. The absence of a coherent parliamentary majority, which enabled French representatives to play more active and more direct roles, is closely associated with the instability of governments in the Third and Fourth French republics.

Parliamentary government, notably its British form, has long been carefully studied by historical and legal scholars. There are standard treatises describing the laws and customs—the constitution—of the British Parliament. Not all of the historical and legal work has been by Englishmen, but it has generally expressed admiration for British institutions.
This was true of earlier Continental scholars, who hoped that their nations would emulate the British system, and it was true also of late nineteenth-century and early twentieth-century American scholars, who saw much in the British system worth adapting to the United States. For instance, Woodrow Wilson admired the British system, in Walter Bagehot's classic description, as superior to the American presidential-congressional system. Bagehot, it should be noted, was not a conventional historical or legal scholar but a most insightful journalist. Of a different order is the standard twentieth-century analysis by Sir Ivor Jennings. In his volumes Parliament (1939) and Cabinet Government (1936), Jennings uses a wealth of historical material to illustrate the operating constitutional principles. He presents these principles as greatly modified by democratic usages since Bagehot's time.

Since World War II there has been considerable new research in the less formal elements of parliamentary government. The research, it is true, has not been aimed directly at understanding parliamentary government as opposed to another form of government; but any effort to learn more generally about legislative and executive behavior has added to the knowledge of parliamentary institutions. The new literature involves roll-call analysis, systematic accounts of backgrounds of legislators, effects of external party organizations, intraparty and intracabinet decision making, and the participation by interest groups in the parliamentary process. The last of these subjects was hardly studied at all in most parliamentary democracies before the 1950s; indeed, until then interest groups, or pressure groups, tended to be regarded as only American political phenomena. Consequently, there was a significant gap in knowledge of even the otherwise much-studied British system.

More can be learned by detailed studies of particular elements in the established systems. But there is also a special need to understand, in comparative perspective, the common factors associated with stability and effectiveness of parliamentary government or with failure of such government. Western Europe and the English-speaking Commonwealth nations provide the main laboratories, since their historical experience with parliamentary institutions is long and varied. From this experience, there is some hope of developing hypotheses of more general applicability with respect to parliamentary government in developing nations. For such hypotheses to have validity, however, they must also be examined in the environment of the developing nations themselves. This lies in the future, since so far there has been an understandable scholarly tendency to regard the new constitutional arrangements of developing nations as less durable and so less meaningful than the broader problems of nation-building and informal group processes.

The evidence is not yet conclusive as to the adaptability of parliamentary government outside of its limited Western homelands. What is known, however, gives little cause for optimism, since no developing nation with a non-Western background has a long period of experience with parliamentary government. The record of parliamentary government is somewhat discouraging even in the advanced nations of western Europe. None of the three major Continental nations—France, Germany, and Italy—consistently maintained a parliamentary system through the first six decades of this century.
Yet the parliamentary system provided the pattern for most new twentieth-century democratic governments. This held for the embryonic political institutions of the supranational European community and for the initial stages, at least, of constitutions in many non-Western nations. The new African and Asian nations emerging from British control generally adopted parliamentary constitutions without monarchs. So did several nations that had been under the rule of other imperial powers. Japan, the most developed of non-Western nations, also established a parliamentary system. Whether many of these nations, especially the new ones, would be able to remain for long within the rules of parliamentary government was uncertain despite the impressive scale of the Indian effort. French-speaking African nations, for example, just as they were becoming independent, tended to move away from the parliamentary form when France itself did so in changing from the Fourth Republic to the Fifth Republic.

Maintaining a strong and effective executive, yet one responsible to a legislative body, has proved difficult, if not impossible, even in many European circumstances. And without such an executive—that is, without parliamentary government that is also cabinet government—the system seems unable to cope with all of the domestic and foreign problems of a state in the modern world.

Leon D. Epstein

[See also Constitutions and constitutionalism; Government; Local government; Monarchy; Parties, political. Other relevant material may be found in Coalitions; Democracy; Legislation; Presidential government; and the biographies of Bagehot; Bentham; Burke; Dicey.]

Bagehot, Walter (1865-1867) 1964 The English Constitution. London: Watts.
Campion, Gilbert F.; and Lidderdale, D. W. S. 1953 European Parliamentary Procedure: A Comparative Handbook. London: Allen & Unwin.
Dawson, Robert M. (1947) 1963 The Government of Canada. 4th ed., rev. Univ. of Toronto Press.
Emerson, Rupert 1955 Representative Government in Southeast Asia. Cambridge, Mass.: Harvard Univ. Press; London: Allen & Unwin.
Encel, Solomon 1962 Cabinet Government in Australia. Melbourne Univ. Press; London: Cambridge Univ. Press.
Friedrich, Carl J. (1937) 1950 Constitutional Government and Democracy: Theory and Practice in Europe and America. Rev. ed. Boston: Ginn. → Originally published as Constitutional Government and Politics: Nature and Development.
Hansard Society of Parliamentary Government 1958 What Are the Problems of Parliamentary Government in West Africa? London: The Society.
Hiscocks, Richard 1957 Democracy in Western Germany. Oxford Univ. Press.
Inter-Parliamentary Union (1961) 1962 Parliaments: A Comparative Study on the Structure and Functioning of Representative Institutions in Forty-one Countries. London: Cassell. → First published in French.
Jenks, Edward 1903 Parliamentary England (Story of the Nations). New York: Putnam.
Jennings, William Ivor (1936) 1959 Cabinet Government. 3d ed. Cambridge Univ. Press.
Jennings, William Ivor (1939) 1957 Parliament. 2d ed. Cambridge Univ. Press.
Mackintosh, John P. 1962 The British Cabinet. Univ. of Toronto Press.
Maki, John M. 1962 Government and Politics in Japan: The Road to Democracy. New York: Praeger.
Morris-Jones, Wyndraeth H. 1957 Parliament in India. Philadelphia: Univ. of Pennsylvania Press.
Ullman, Richard K.; and King-Hall, Stephen 1955 German Parliaments: A Study of the Development of Representative Institutions in Germany. New York: Praeger.
Wahlke, John C.; and Eulau, Heinz (editors) 1959 Legislative Behavior: A Reader in Theory and Research. Glencoe, Ill.: Free Press.
Williams, Philip M. (1954) 1964 Crisis and Compromise: Politics in the Fourth Republic. 3d ed. Hamden, Conn.: Shoe String Press. → First published as Politics in Post-war France: Parties and the Constitution in the Fourth Republic.
Wiseman, Herbert V. 1958 The Cabinet in the Commonwealth: Post-war Developments in Africa, the West Indies and South-East Asia. London: Stevens.

Source: "Parliamentary Government." International Encyclopedia of the Social Sciences. Encyclopedia.com. Retrieved December 11, 2017, from http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/parliamentary-government

PARLIAMENT

Between 1450 and 1700 the English Parliament developed from a medieval institution dominated by the monarch to one whose role, function, and procedure is still recognizable today. During this transition, Parliament developed omnicompetence in statutory matters; expanded its membership dramatically (particularly in the House of Commons); revived the early medieval process of impeachment; and became a permanent and essential part of the government structure in England. Parliament during the period of the English Civil War and Interregnum (1642–1660) assumed the role of the executive and ordered the trial and execution of Charles I (ruled 1625–1649) in January 1649 before internal dissension and political circumstances brought about the restoration of the monarchy in 1660. Although the Restoration Settlement again limited the power of Parliament, its growing role in fiscal matters and the highly charged political and religious atmosphere of the late seventeenth century enabled it to play a role in deposing another monarch, James II (ruled 1685–1688), in the Glorious Revolution of 1688. The subsequent passage of the Bill of Rights (1689) and the Triennial Act (1694) gave Parliament a more closely knit relationship with the monarchy and the governance of England. This period also saw the rise of the political parties and the increasing reliance of the monarch on Parliament for financial support.

Parliament was called and dissolved at the whim of the monarch until the enactment of the Triennial Act of 1641. Forty days before the start of the Parliament, individual writs of summons were sent to all the peers of the realm, except those disqualified by lunacy, poverty, or minority of age (usually under 21). Sometimes, as in 1626 with the case of the earl of Bristol, who was imprisoned due to his bitter dispute over foreign policy with Charles I, political confrontation with the monarch also determined whether a writ was received. The senior judges in the land were also summoned to act as legal advisers to the monarch.

The membership of the House of Commons was determined by elections (under widely varying rules) held among the enfranchised in the constituencies. Towns and boroughs normally elected two members of Parliament (although a few single-member constituencies existed, primarily in Wales), while two Knights of the Shire were elected for each county. The elections were determined by the vote of 40-shilling freeholders—those men who were resident in the county and held 40 shillings per annum in freehold land.
The borough franchise, however, was not so clear-cut and ranged from the most common arrangement, voting by the freemen of the borough, to oligarchic control by the town corporation, and, on occasion, to a vote restricted to those resident in the borough. This led to a select few controlling the vote in certain areas. An extreme example of this was the Aylesbury, Buckinghamshire, election of 1572, where one person selected the two M.P.s. The elections were further complicated by the interference of both the crown and noble patrons. The crown certainly enjoyed considerable influence, particularly in areas in which it controlled the majority of the property, while powerful magnates, such as William Herbert, third earl of Pembroke, supposedly influenced favorably at least 98 seats between 1614 and 1628.

Until 1540, membership of the House of Lords consisted of the nobility, bishops, and representatives of the regular clergy (abbots and priors). Throughout the early Tudor period the spiritual peers reached a maximum of 48, and they easily outnumbered the temporal peers, whose numbers fluctuated between 34 and 45. However, Henry VIII's (ruled 1509–1547) break with Rome in the mid-1530s signaled dramatic changes in membership. With the dissolution of the monasteries in 1540, the parliamentary careers of abbots and priors ended, thereby removing 27 spiritual peers. Even with the creation of six new bishoprics between 1540 and 1542, the temporal peers now outnumbered their spiritual colleagues—a situation that was never reversed.

For the next 100 years, the nobility summoned to Parliament continued to fluctuate. Elizabeth I (ruled 1558–1603), who was notoriously parsimonious in handing out favors, only elevated two commoners to the peerage, and the natural attrition through the failure of peers to produce male heirs, as well as nobles executed for treason, actually caused the numbers to fall from 57 to 55 over the course of her reign. The accession of James VI of Scotland to the English throne as James I (ruled 1603–1625) changed this situation dramatically. In part, James was anxious to make up for years of Elizabethan parsimony by creating new peers, but he also saw the peerage as a money-making device. Elevation through both deserving recognition and the sale of titles meant that by the end of James's reign in 1625, the peers eligible to attend Parliament numbered 104. This process continued under Charles I (ruled 1625–1649) until the nobility reached 123 at the start of the Short Parliament (April 1640).

However, during the political turmoil of the early 1640s, Charles attempted to use the bishops to ensure he always had a loyal voting bloc. This led to the exclusion of the bishops in 1642, and the numbers of the nobility attending the Lords dropped even further when the Civil War broke out and Royalist peers deserted the Parliament. By late 1642, the number in the Lords had fallen to 30. In March 1649, after Charles had lost the Civil War and been executed, the monarchy and the House of Lords were abolished. With the Restoration in 1660, the Lords returned in its familiar pre–Civil War guise with the bishops taking their place alongside the nobility. The temporal peerage continued to grow and exceeded 150 by the turn of the century.

Changes in the Commons membership were not as drastic as those in the Lords, except during the Civil War and Interregnum. Before 1640, the number of M.P.s steadily increased, from 296 in 1485 to 493 in 1628 and 513 in 1689.
In a similar fashion to the Lords, the king's supporters deserted Parliament after 1642, and over 100 attended a rival Royalist Parliament that convened in Oxford in early 1644. The numbers dropped further in December 1648 when Colonel Thomas Pride, in what has come to be known as "Pride's Purge," arrested 45 members and excluded 186 more. Other M.P.s stayed away of their own volition, leaving the "Rump Parliament" with a little over 200. Further changes in membership occurred during the Protectorate Parliaments before the Commons was restored to its pre–Civil War state in 1660.

The three major functions of Parliament were legislation, advice, and supply. To this may be added the revival in 1621 of Parliament as the highest court in the land. During the medieval period, Parliament had acted as a law court. This role fell into abeyance during the sixteenth century, but in 1621 charges of impeachment were presented against the Lord Chancellor, Sir Francis Bacon (1561–1626). This process continued throughout the 1620s and later. It was supplemented by a similar revival of the role of the Lords as the highest appellate court.

The legislative aspect of Parliament also changed. The medieval House of Commons was not an equal part of the parliamentary trinity of King, Lords, and Commons, but precedents in the fifteenth century saw it grow into a constitutionally equal partner. In 1489, the judges ruled that legislation did not have the force of law unless the Commons and the Lords assented to it. The Commons had the right to initiate legislation, like the Lords, and throughout the sixteenth century the three-reading procedure became the norm. This required each bill to be read three times in both the Lords and the Commons before it was presented for the monarch's assent, or, occasionally, veto. Equally, it became more common for each bill to be committed for detailed scrutiny and amendment after the second reading. During the 1530s it was accepted that statute law could regulate every sphere of life, including religious and spiritual matters and property rights. This omnicompetence of statute law increased the monarch's need for Parliament through this extension of legislative jurisdiction.

Parliament's conciliar or advice function grew out of its origins in the king's great council, which was called together to advise the king on matters of national importance, such as war. Although Parliament was primarily called for matters of taxation, it also offered the governing elite a chance to present grievances to the king and to offer advice. For example, James I in 1624 asked Parliament to advise him on England's reaction to the Thirty Years' War (1618–1648).

The supply side of parliamentary operation was its most important role. During times of peace, monarchs were expected to live off their own revenues, although this became increasingly difficult after the inflationary years of the first half of the sixteenth century. In practice, monarchs became more accustomed to requesting taxes from Parliament for day-to-day fiscal matters. Supply was passed by act of Parliament in two distinct forms: lay and clerical taxation. The clergy voted a clerical tax and the Commons initiated a tax based both on income and movable property. Both forms were enacted as statutes and required the assent of the parliamentary trinity.
Because of drastic underassessment of income and the failure of an effective collection method, England remained one of the most lightly taxed nations in Europe, while the amount brought into the crown declined dramatically during the period.

HISTORIOGRAPHY OF CROWN AND PARLIAMENT

The relationship between the English crown and Parliament in early modern England has been the subject of major debate in British history. Until the 1970s, the dominant historiography saw the House of Commons marching onward from an embryonic power under Henry VIII to executive power in the mid-seventeenth century and then to a Glorious Revolution led by Parliament, before the late Victorian model of parliamentary government eventually emerged. This Whig view of parliamentary history, most eloquently championed by S. R. Gardiner, was challenged first by Marxist historians, who viewed the Civil War and parliamentary tensions as a bourgeois revolution. However, the Marxist interpretation foundered because the Civil War can be better explained as an aristocratic and/or religious rebellion and because no widespread or lasting social revolution occurred. Furthermore, relations between Parliament and the crown returned in 1660 to their pre–Civil War status.

The more fundamental challenge to the Whig interpretation was led by a diverse group of revisionists, in particular, Geoffrey Elton, Conrad Russell, and Kevin Sharpe. They emphasized consensus, not conflict, as the primary mode of interpreting the relationship between crown and Parliament. Elton and Russell, especially, saw the Parliament as an effective, businesslike institution in which conflict was often more the result of misunderstanding than hostility or the competition for power. Sharpe, on the other hand, saw what conflict there was in Parliament as the result of competing factions. Since the late 1980s, this revisionist view has been nuanced by the work of scholars such as Thomas Cogswell, Ann Hughes, and Richard Cust. In their "postrevisionist" view, an underlying tension and conflict was ever present, but it usually only manifested itself in times of political crisis—for example, during the mismanagement of the war against France and Spain by Charles I in the late 1620s.

CROWN AND PARLIAMENT RELATIONS

Henry VII (ruled 1485–1509) and Henry VIII both needed Parliament to achieve their objectives. Henry VII solidified his hold on the throne by calling and consulting seven Parliaments between 1485 and 1509, while Henry VIII enacted the Reformation through Parliament. Although there was some parliamentary opposition to the policies of both monarchs, generally relations between the crown and Parliament in the early Tudor period were good. Henry VIII, in particular, adopted a style of personal intervention in parliamentary affairs, even appearing in the Commons on occasion to use his physical presence to sway M.P.s toward royal policies. There was opposition in Parliament, especially in the Lords, to the religious reformation of the 1530s, but this was defeated without a significant crisis or breakdown in relations. The mid-Tudor Parliaments of Edward VI (ruled 1547–1553) and Mary I (ruled 1553–1558) likewise witnessed some opposition to the Protestant Reformation and Catholic Counter-Reformation (both carried out through parliamentary statute), but again those opposed to government policy were in the minority.
That changes in England's official religion, including the introduction of the Protestant Book of Common Prayer (1549) and the return of England to Roman Catholicism (1553–1554), were enacted through Parliament was testimony to its increased role in the governance of the nation and the newfound awareness of the omnicompetence of statute. Under Elizabeth I, both Parliament and the Privy Council attempted to persuade the queen to marry or, later, to name a successor. Elizabeth had no particular liking for Parliaments and avoided calling them whenever possible: Parliament assembled on only 13 occasions between 1559 and 1601. Furthermore, these sessions were short and relatively harmonious. No constitutional crisis erupted during the period, and Elizabeth effectively managed her Parliaments by curtailing discussions on her marital status and on further Protestant reformation. Although her policy of granting manufacturing monopolies to individuals and companies came in for severe criticism in the Parliaments of 1597–1598 and 1601, her "golden speech" of 30 November 1601, in which she promised to abolish the monopoly grants, won her abundant praise. At the end of the Tudor dynasty, relations between Parliament and the crown were in good shape. The policies of the first Stuart monarch, James I and VI, did cause friction between the crown and Commons in his first Parliament (1604–1610). In particular, James's desire to enact a union between England and his native Scotland aroused the ire of many M.P.s and provoked anti-Scottish hysteria in the Lower House. James was forced to abandon his plans for union in 1607. Similarly, disagreement arose in 1610 over the Great Contract, a scheme to reform the English financial system, but neither the Commons nor James could agree to the terms stipulated by the other party. Relations between the king and Parliament sank lower in 1614, during the "Addled Parliament." No legislation was enacted, and the bitter session was dissolved by the king after claims of undue royal influence on the elections. Although this Parliament has since been interpreted as an example of two factions competing for influence, the episode certainly discouraged James from relying on the goodwill of Parliament. In the next Parliament (1621), the king once again dissolved Parliament in anger after it defied his instruction not to meddle in foreign policy and the marriage of his son, Prince Charles. However, in the final Jacobean Parliament (1624), both the crown and Parliament worked together to enact legislation and debate the impending crisis with Spain. This legacy of relative goodwill, if punctuated by friction and occasional moments of high tension, was rapidly dissipated by Charles I. His first Parliament of 1625 ended in acrimony over money and religion; the 1626 Parliament was dissolved in similar circumstances, and in 1628 both Houses forced Charles to accept the Petition of Right—a statement of the freedom, liberties, and privileges of Parliament. With relations at a low point in 1629, Charles vowed to live without Parliaments. The political reality of a Scottish army camped in northern England saw Charles once again turn to Parliament for financing to fight a campaign in 1640. However, he found Parliament even less inclined to his policies in 1640 than eleven years earlier. In the subsequent struggle between Charles and his Parliament, the king was forced to cede some of his authority to Parliament, but he refused to give up the right to control the army.
The conflict culminated in war between Parliament and king—a war won by Parliament—and Charles was executed in 1649, the House of Lords was abolished, and a republic declared. The parliamentary trinity of King, Lords, and Commons had been destroyed. Parliament during the 1640s had gradually assumed executive powers, taxing the populace, fielding an army, and effectively running the country. Parliament continued in this role and acted as the sole legal governing authority until 1653, when Oliver Cromwell (1599–1658) was named Lord Protector. After the establishment of the Protectorate, Parliament sat only intermittently until 1659. The relationship between Parliament and Cromwell was often fractious and they never managed to establish an effective working relationship. This contributed to the ineffectiveness of the republic, and Parliament finally voted in early 1660 for the restoration of the monarchy. The next major constitutional crisis between Parliament and the crown arose during the Exclusion Crisis. Between 1679 and 1681, a majority in the Commons assisted by a substantial minority in the Lords attempted to exclude Charles II's brother, the Catholic Duke of York, from the succession to the throne. Although this movement failed, it left Charles at odds with substantial sections of his Parliament. The crisis spilled over into James's reign, and after a series of pro-Catholic policies championed by the king, an Assembly of Peers invited the Dutchman William of Orange (ruled 1689–1702) to take over the throne. James fled England, and when Parliament met in 1689 it enacted the Revolution Settlement. The situation was complicated by the emergence in the previous twenty years of embryonic political parties. The Whigs believed in a contractual form of government and the right to resist a tyrannical monarch. In contrast, the Tories favored the view of a monarch's divine right to rule, where civil authority descended directly from God. Negotiations between the two parties and the king led to a compromise in which William agreed to rule jointly with his wife, Mary Stuart (ruled 1689–1694). It also led to fundamental changes in the relationship between Parliament and the crown. The Bill of Rights (1689) stipulated the "undoubted rights and liberties" of Parliament and that it was required to meet frequently. The revised coronation oath stated that monarchs ruled according to the statutes made by Parliament and the Protestant religion established by law, thus excluding Catholics from the succession. Furthermore, the 1689 Mutiny Act established that a standing army could only be raised in the kingdom with the consent of Parliament. Finally, the financial settlement imposed on William and Mary ensured that the crown revenue was forever tied to parliamentary taxation. This in turn assured that Parliament would meet every year from 1689. The settlement witnessed the establishment of Parliament as a permanent institution of government, and in it we can see the structures and actions of the modern Westminster Parliament. See also Charles I (England); Church of England; Cromwell, Oliver; Edward VI (England); Elizabeth I (England); England; English Civil War and Interregnum; Exclusion Crisis; Glorious Revolution (Britain); Henry VII (England); Henry VIII (England); James I and VI (England and Scotland); James II (England); Mary I (England); Political Parties in England; Representative Institutions; William and Mary.
Cogswell, Thomas.
The Blessed Revolution: English Politics and the Coming of War, 1621–1624. Cambridge, U.K., 1989. Cust, Richard, and Ann Hughes, eds. Conflict in Early Stuart England: Studies in Religion and Politics, 1603–1642. London, 1989. Elton, G. R. The Parliament of England, 1559–1581. Cambridge, U.K., 1986. Foster, Elizabeth Read. The House of Lords, 1603–1649: Structure, Procedure and the Nature of its Business. Chapel Hill, N.C., 1983. Gardiner, S. R. History of England from the Accession of James to the Outbreak of the Civil War. 10 vols. London, 1883–1884. Graves, Michael A. R. The Tudor Parliaments: Crown, Lords and Commons, 1485–1603. London, 1985. Kenyon, J. P., ed. The Stuart Constitution, 1603–1688: Documents and Commentary. 2nd ed. Cambridge, U.K., 1986. Kishlansky, Mark A. Parliamentary Selection: Social and Political Choice in Early Modern England. Cambridge, U.K., 1986. Kyle, Chris R., and Jason Peacey, eds. Parliament at Work: Parliamentary Committees, Political Power, and Public Access in Early Modern England. Rochester, N.Y., 2002. Russell, Conrad. Parliaments and English Politics, 1621–1629. Oxford, 1979. Sharpe, Kevin. "Re-writing the History of Parliament in Seventeenth-Century England." In Remapping Early Modern England: The Culture of Seventeenth-Century Politics, edited by Kevin Sharpe, pp. 269–293. Cambridge, U.K., 2000. Smith, David L. The Stuart Parliaments, 1603–1689. London, 1999.
Chris R. Kyle
"Parliament." Europe, 1450 to 1789: Encyclopedia of the Early Modern World. Retrieved December 11, 2017, from Encyclopedia.com: http://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/parliament-0
English Parliament
Parliament is a servant which became a master. It originated with three royal needs: the need of monarchs to obtain advice and information; the realization that subjects were more likely to pay taxes if they knew what they were for; and the need to find some way of dealing with complaints, grievances, and petitions from all over the realm. The third function of Parliament gradually atrophied as, in the Middle Ages, an elaborate network of local and national courts was established, though the concept of the High Court of Parliament survives in the appellate jurisdiction of the House of Lords and petitions are still submitted. Two other characteristics which have survived are that the advice is not always palatable, nor the taxation paid cheerfully even after explanation given. Representative institutions developed for similar reasons in many other European countries, though they varied in composition and powers according to local circumstances. In a general sense, Parliament may be traced back to the Saxon witan and the Norman Council, each of which included the chief men of the realm, lay and clerical. But the development of Parliament as a wider, national body, with a representative element, reflects the incessant demands of government for more money, and a change in the distribution of wealth brought about by the spread of commerce and the growth of towns. Feudal dues were intended to be exceptional—for the king's marriage, his ransom, or the knighting of his son—but chronic warfare, particularly against France, demanded ever-increasing taxation and made it impossible for the king to ‘live of his own’.
Consequently, Parliament developed at moments of crisis, usually associated with a disputed succession, or domestic or foreign war. Any institution which survives over eight centuries must have adapted and changed its functions. In Saxon and Norman times, a good deal of public business was done at crown-wearings, ceremonial occasions at Christmas, Easter, and Whitsun. Since the great men were expected to attend to show respect, it was easy to consult them. Charters often referred to the consent of the barons, since it was to the king's advantage that his policies should be known to have the support of all important subjects. In the course of the 13th cent., these meetings came to be referred to as discussions—colloquia or parliamenta. But though their purpose was to assist the king, they could also be turned against an unpopular or unsuccessful monarch. In December 1203 John left Normandy to seek urgent help from his barons at Oxford in saving the duchy: they promised obedience but demanded ‘the rights of the kingdom inviolate’. In 1234, the council at Gloucester forced Henry III to dismiss his unpopular foreign adviser Peter des Roches. In 1257, when the king was absent fighting in Gascony, his regents called another council to appeal for money. Though they augmented the barons with representatives of the lower clergy and two knights from each shire, the money was not forthcoming. During the conflict between Simon de Montfort's party and the king, each side used Parliament in turn: de Montfort's Parliament in January 1265 included both knights and members from certain boroughs. By this time, Parliament was becoming a familiar institution, usually, but not invariably, meeting at Westminster. But its composition still varied considerably. The lesser clergy, summoned for the first time in 1257, attended irregularly thereafter, and then dropped out, using convocation instead. Edward I's ‘Model’ Parliament of 1295, called to provide funds for war against the Scots, included 2 archbishops, 18 bishops, 67 abbots, 3 heads of religious orders, 48 lay barons, the lower clergy, 2 knights from each shire, and 2 burgesses from 110 boroughs—a total of more than 400 members. Though not a model in the sense that its composition was subsequently adhered to, it was very different from a small council of 40 to 50 members. For some years, composition and procedures remained flexible. In 1305 all members not of the council were sent home early, though the Parliament continued. In 1372, the burgesses were held back after the knights had been dismissed to see whether they would make a separate grant. The next important step in the evolution of Parliament was the separation into houses. Previously there had been only one chamber, with groups of committees breaking off for discussions: the burgesses had a largely silent role as spectators. At first the knights of the shire tended to identify with the barons as the landed or aristocratic interest, but in the course of the 14th cent. they sat increasingly with the burgesses. The lay lords and the greater clergy then came to form the upper house. We must not however exaggerate the importance of Parliament at this stage in the regular business of government. Attendances were not always good, partly because travel was difficult, partly because involvement was not always welcome. Sessions were short—sometimes no more than a week, often a month or so. But the Commons were beginning to assert themselves. 
Taxation, which had been voted jointly, was said in the reign of Henry IV to be by the Commons ‘with the assent of the Lords’—a significant change. The early part of the 15th cent. saw further advances. The Hundred Years War against France led to incessant demands for supply, and in the Wars of the Roses which followed, each side made use of parliaments as an instrument and to demonstrate support. With the return of more stable conditions, the use of parliaments diminished. Edward IV summoned only one parliament in the last five years of his reign and Henry VII only one in the last twelve years of his. The Tudor period saw a great leap forward, the power of Parliament and that of the monarchy advancing together. Henry VIII's use of Parliament to regulate the succession and to reform the church strengthened its authority and the elimination of the abbots from the Upper House left the lay lords in a strong majority. In 1536, the Act of Union brought the principality of Wales into Parliament's range. Yet, by and large, it remained under royal control. During Elizabeth's reign there were signs of restiveness, but in the last ten years of her reign, Parliament was in existence for only some seven months. In the course of the 17th cent., Parliament made a decisive breakthrough. The ineptitude of James I and Charles I lost them control and lack of trust led in 1642 to civil war. But the result was stalemate. The restoration of the monarchy in 1660 could be seen as proof that, as kings had always argued, it was the bulwark against anarchy or despotism. Few vital royal prerogatives were lost. Yet Parliament in 1660 was far from discredited. It had demonstrated a remarkable capacity to improvise in government and to wage war, and an important part of Charles II's appeal from exile had been his promise to summon a free parliament: none of his predecessors, he assured the speaker rather excessively, had greater esteem for parliaments than he had. Even so, relations with parliaments during the rest of his reign were often fraught. The balance tipped in 1688. After James II's flight, the House of Commons took advantage of the situation to improve its position in relation to the new monarchs. The financial settlement given William III was deliberately ungenerous: ‘when princes have not needed money,’ declared Sir Joseph Williamson, with great candour, ‘they have not needed us.’ Twenty-five years of almost continuous warfare, on a scale never before seen, guaranteed annual sessions and assured Parliament of a regular and inescapable place in the machinery of government. Ministers like Harley and Walpole learned how to control Parliament through patronage and cajolery and made reputations as managers. The ‘corruption’ of Hanoverian politics, which used to be greatly deplored, is no more than a testimony to Parliament's enhanced position, since no one bribes when they can ignore or intimidate. They were helped in their task by the Act of Union with Scotland in 1707 since the 45 MPs and 16 representative peers who arrived at Westminster were, by and large, penurious and purchasable. In many ways, Parliament after the revolution was at its zenith. The government of aristocracy and gentry, who had a near monopoly of wealth, leisure, and education, seemed natural and inevitable and could boast of notable achievements. The constitution was greatly admired, at home and abroad. The standard of debate was high, with orators like Pulteney, Murray, Chatham, North, Fox, Burke, Sheridan, Pitt the Younger, and Canning. 
In 1801, the Act of Union with Ireland meant that, for the first time, Parliament could claim total sovereignty over the British Isles, though the result was not an unmixed blessing. Yet even when Parliament was at its strongest, there were tremors. The breakaway of the Americans in 1776 foreshadowed the time when Canada, Australia, India, New Zealand, Ireland, and the colonies would follow suit. At the same time, Parliament, with great reluctance, allowed reports of its proceedings to appear in newspapers. ‘This’, Pulteney had once declared, ‘looks very like making us accountable without doors for what we say within.’ He was right, and through that gap public opinion forced an entrance. The movement of population, the growth of great unrepresented towns, and the development of a more critical, utilitarian attitude gnawed at the foundations of aristocratic rule. In 1832 the first great reform took place. As its opponents gloomily forecast, it led, by stages, to full democracy, though not at the speed which they had envisaged. A continuous series of adjustments, many of them piecemeal, changed the nature of Parliament—the abolition of religious tests, more equal electoral areas, payment for MPs, extension of the franchise through to 1948. Though the Parliament Act of 1911 stripped the House of Lords of much of its remaining power, the introduction of life peerages in 1958 gave it an unexpected and new lease of life. A further reform in 1999 deprived most of the hereditary peers of their seats. The institution of referenda—on the European Economic Community and on devolution—took some powers away from Parliament itself, handing them directly to the electors, and critics of the EEC argued that the very sovereignty of Parliament had been surrendered. There is still much criticism of Parliament as an institution, though less than in the 1930s. The domination of party is deplored by many people who would never dream of voting for an independent. The introduction of TV does not seem to have much effect in improving decorum. But the familiar accusation that Parliament is a talking-shop is based upon a misunderstanding. It is not, and never has been, a governing body, but a check upon government. Whether it does that well is much debated. In the prime minister, the Commons found a master more powerful than kings in the past, even if his ultimate deterrent, a dissolution, is little more than a threat of mass suicide. But events in many countries remind us that there are worse things than talking-shops: there are civil wars. See also Commons, House of; Lords, House of.
J. A. Cannon
Irish Parliament
The Irish Parliament was instituted at much the same time as the English, Sir John Wogan summoning an assembly in 1295 to Kilkenny, which included the lords and two knights from certain counties. Burgesses were added in 1311. The native Irish were excluded as ‘not fit to be trusted with the counsel of the realm’. Though an Act of 1542 allowed the native Irish to take part, Parliament remained an Anglo-Irish institution. Control was exercised through Poynings's law (1494), which subjected the Irish Parliament to the English Privy Council. More counties and boroughs were brought in during the 17th cent., and after the Glorious Revolution the Commons consisted of 64 knights, 234 burgesses, and 2 representatives from Trinity College, Dublin. There were some 80 peers in the House of Lords.
Though the Irish Parliament had a splendid building on College Green, begun in 1729, real power was in the hands of the lord-lieutenant and the English government. Debates were often eloquent and the castle government paid much attention to management, but Parliament itself did not have its hands directly on the levers of power. Until the Octennial Act of 1768 parliaments lasted the length of the reign: there was no parliament between 1666 and 1692 (save for James II's Assembly of 1689), and the first Parliament of George II in 1727 lasted until 1760. Sessions were held every other year. Throughout much of the 18th cent. there were repeated attempts to wriggle free from English control. Not until England began to run into difficulties after the Seven Years War were concessions forthcoming. The granting of the Octennial Act in 1768 came at a time when the English were anxious to increase the Irish army to cut military expense, and the repeal of Poynings's law in 1782 came when the Volunteers carried a clear threat in the midst of the American War. The grant of legislative independence ushered in the final phase of the Irish Parliament, which has been bathed in a golden light as ‘Grattan's Parliament’. Pitt's commercial propositions had to be withdrawn in 1785 and the Irish Parliament cut loose during the Regency crisis of 1789. But in the end the decisive factor was that law and order broke down in the great rising of 1798. Without a union, Ireland would, wrote the lord-lieutenant Camden, be ‘dreadfully vulnerable in all future wars’, and Pitt seems to have resolved on a union the very day he ordered 5,000 more troops to Ireland to put down the rebellion. By the Act of Union of 1801 the Irish Parliament was suppressed and representation transferred to Westminster. The new parliament house in Dublin, no longer required, became the Bank of Ireland.
J. A. Cannon
Scottish Parliament
The Scottish Parliament differed significantly from its English counterpart. No equivalent of the Houses of Commons and Lords ever existed; instead, the three estates—clergy, barons, and burgh commissioners—assembled in one chamber. Legislation, from the early 15th cent., was drafted by the lords of the Articles, a smaller committee elected by the estates, before being passed in full Parliament. Likewise many judicial matters were delegated to a committee of lords auditors. Parliament was supplemented by the institutions of general council, until the late 15th cent., and from the 16th cent. by the Convention of Estates, effectively parliaments without judicial powers. In the past these bodies were accused of making the Scottish Parliament constitutionally defective—simply a ‘rubber stamp’ for royal decisions. This opinion is now substantially discredited. Evolving from the king's council of bishops and earls, Parliament is first recorded in 1235, referred to as a colloquium. In the early 14th cent. the presence of knights and freeholders became important, and from 1326 burgh commissioners attended, because of the need to secure their consent for taxation. In the 15th cent. Parliament was often willing to defy the king, repeatedly opposing taxes for James I (1406–37), and frequently openly critical of James III (1460–88). By refusing to forfeit the duke of Albany (d. 1485) between 1479 and 1481, it seriously undermined the king's authority. Called in this period on average more than once a year, Parliament was expected to provide support for many crown policies.
However, it could be a dangerous place for a monarch, and James IV (1488–1513) avoided meetings after 1509. The composition of Parliament remained the same in the 16th cent., although following the Reformation many opposed the presence of the clergy, particularly as they were essentially crown nominees. Shire commissioners attended Parliament from 1594, again as a result of the need to collect tax. By James VI's reign (1567–1625), the Committee of the Articles was heavily dominated by crown supporters, creating parliamentary weakness. With the Scottish constitutional settlement (1640–1), the royal prerogative was curtailed, and Parliament took control of the executive, a precedent for the English Long Parliament. The Interregnum saw a union of parliaments (1657), but the Scottish Parliament returned strongly after the Restoration (1660). In 1689 the attendance of clergy was abolished, followed by the Committee of the Articles (1690). Parliament's strength was such that the crown turned to corruption to undermine its autonomy. Bribery and parliamentary division, rather than dominant unionism, best explain the crown's ability to secure a parliamentary majority in favour of incorporating union with England (16 January 1707). Finally dissolved on 28 April 1707, the Scottish Parliament has remained important to Scottish national identity, and in 1999, after a referendum, it was restored.
Welsh Parliament
Though there is no evidence of a Welsh parliament as a regular part of government, there was a tradition of consultation. Llywelyn called an assembly of magnates at Aberdovey in 1216 to decide on the territorial divisions of south Wales. Glyndŵr is said to have summoned two parliaments—at Machynlleth in 1404 and at Harlech in 1405—to the second of which four influential men from each commote (hundred) were summoned. Since Glyndŵr was anxious to assume the trappings of monarchy, there is no reason to disbelieve the reports. Some representatives from Wales were summoned to the English Parliament in 1322 and 1327 but Wales was not included in the regular representation until after 1536. In 1999, after a referendum, a Welsh Assembly was instituted.
J. A. Cannon
Butt, R., A History of Parliament: The Middle Ages (1989); Davies, R. G., and Denton, J. H. (eds.), The English Parliament in the Middle Ages (Manchester, 1981); Donaldson, G., Scotland: James V to James VII (Edinburgh, 1965); Ferguson, W., Scotland: 1689 to the Present (Edinburgh, 1968); Graves, M. A. R., The Tudor Parliament (1985); ——, Early Tudor Parliaments 1485–1558 (1990); Johnston, E. M., Great Britain and Ireland, 1760–1800 (1963); Nicholson, R., Scotland: The Later Middle Ages (Edinburgh, 1974); Porritt, E. and A. G., The Unreformed House of Commons (2 vols., 1903); Rait, R., The Parliaments of Scotland (Glasgow, 1924); Richardson, H. G., and Sayles, G. O., The English Parliament in the Middle Ages (1981).
"Parliament." The Oxford Companion to British History. Retrieved December 11, 2017, from Encyclopedia.com: http://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/parliament
Parliament, legislative assembly of the United Kingdom of Great Britain and Northern Ireland.
Over the centuries it has become more than a legislative body; it is the sovereign power of Great Britain, whereas the monarch remains sovereign in name only. Parliament consists, technically, of the monarch, the House of Commons, and the House of Lords, but the word in common usage refers to the members of the two houses or, more specifically, to Commons alone. The great power of the House of Commons lies, historically, in its control of government finances. The powers of the House of Lords have been negligible since 1911. Parliament is housed in Westminster Palace. The House of Lords was formerly composed of the hereditary peers of the realm, life peers, Scottish peers, all peeresses in their own right, and 26 Anglican prelates. In 1999 both houses voted to strip most hereditary peers and peeresses of their right to a seat in the House of Lords; 92 of them remained, some by virtue of offices they hold from the monarch or were elected to by the House, the rest (75) as a result of their election to the body by the hereditary peers. The vast majority of the House consists of life peers proposed by the prime minister and created by the monarch; the titles of life peers cannot be inherited. The membership of the body is not fixed. Formerly headed by the lord chancellor, Lords is now presided over by a lord speaker, a post that was created (2006) when the lord chancellor's duties were reorganized. Commons is a democratically elected body, currently composed of 650 members: 533 from England, 40 from Wales, 59 from Scotland, and 18 from Northern Ireland. The membership is elected from single-member constituencies that are periodically redrawn and increased or decreased. The speaker, a generally nonpartisan presiding officer, is elected by the members of the House. The prime minister must, by modern tradition, be a member of Commons; all other ministers of the cabinet may be from either house. Although two parties have tended to predominate, a third party has often been important, yet coalition governments have occurred only rarely. The party or coalition controlling a majority chooses the prime minister—the executive head of government—while the largest minority party not in the government functions in Parliament as "Her Majesty's loyal opposition." When the government party is unable to obtain a parliamentary majority on important issues, it is obliged to call a general election for a new Parliament. Elections must be called every five years at the latest, but the government may call an election earlier, at a time of its choosing. Unlike in the U.S. system, there is no clear separation of legislative and executive branches of the government; the executive branch is, structurally, a committee of the legislature, but because of party discipline, the cabinet, as leadership of the majority party, controls Parliament, while being answerable to it. The British Parliament has had great influence as a model for legislative bodies in other democratic countries.
The Origins of Parliament
There was no historical continuity between the Anglo-Saxon witenagemot and the British Parliament. The first steps in the genesis of the modern parliament occurred in the 13th cent. The long, slow process of evolution began with the Curia Regis, the king's feudal council to which he summoned his tenants in chief, the great barons, and the great prelates. This was the kernel from which Parliament and, more specifically, the House of Lords developed.
The Curia Regis, more commonly called the great council, had merely quasi-legislative powers and was primarily a judicial and executive body. The development of the heritable right of certain barons (the peerage) to be summoned to the council, originally composed at the king's will, was not at all secure until the mid-14th cent., and even then was far from inviolable. The House of Commons originated in the 13th cent. in the occasional convocation of representatives of other social classes of the state—knights and burgesses—usually to report the "consent" of the counties and towns to taxes imposed by the king. Its meetings were often held in conjunction with a meeting of the great council, for the early 13th cent. recognized no constitutional difference between the two bodies; the formalization of Parliament as a distinct organ of government took at least another century to complete. During the Barons' War, Simon de Montfort summoned representatives of the counties, towns, and lesser clergy in an attempt to gain support from the middle classes. His famous Parliament of 1265 included two representative burgesses from each borough and four knights from each shire, admitted, at least theoretically, to full standing with the great council. Although Edward I's so-called Model Parliament of 1295 (which contained prelates, magnates, two knights from each county, two burgesses from each town, and representatives of the lower clergy) seemed to formalize a representative principle of composition, great irregularities of membership in fact continued well into the 14th cent. Nor did the division of Parliament into two houses coalesce until the 14th cent. Before the middle of the century the clerical representatives withdrew to their own convocations, leaving only two estates in Parliament (in contrast to the French States-General). The knights of the shires, who, as a minor landholding aristocracy, might have associated themselves with the great barons in the House of Lords, nevertheless felt their true interest to lie with the burgesses, and with the burgesses developed that corporate sense that marked the House of Commons by the end of the century.
The Growth of Parliamentary Sovereignty
The constitutional position of Parliament was at first undifferentiated from that of the great council. Large assemblies were called only occasionally, to support the king's requests for revenue and other important matters of policy, but not to legislate or "consent to taxation" in the modern sense. In the 14th cent., Parliament began to gain greater control over grants of revenue to the king. From Parliament's judicial authority (derived, through the Lords, from the judicial powers of the great council) to consider petitions for the redress of grievances and to submit such petitions to the king, developed the practice of withholding financial supplies until the king accepted and acted on the petitions. Statute legislation arose as the petition form was gradually replaced by the drafting of bills sent to the king and ultimately enacted by Commons, Lords, and king together. Impeachment of the king's ministers, another means for securing control over administrative policy, also derived from Parliament's judicial authority and was first used late in the 14th cent. In the 15th cent., through these devices, Parliament wielded wide administrative and legislative powers.
In addition, a strong self-consciousness on the part of its members led to claims of parliamentary "privilege," notably freedom from arrest and freedom of debate. With the growth of a stronger monarchy under the Yorkists and especially under the Tudors, Parliament became essentially an instrument of the monarch's will. The House of Lords with its lord chancellor (now the lord speaker) and the House of Commons with its speaker appeared in their modern form in the 16th cent. The English Reformation greatly increased the powers of Parliament because it was through the nominal agency of Parliament that the Church of England was established. Yet throughout the Tudor period Parliament's legislative supremacy was challenged by the crown's legislative authority through the privy council, a descendant of part of the old feudal council. With the accession (1603) of the Stuart kings, inept in their dealings with Parliament after the wily Tudors, Parliament was able to exercise its claims, drawing on precedents established but not exploited over the preceding 200 years. In the course of the English civil war, Parliament voiced demands not only for collateral power but for actual sovereignty. Although parliamentary authority was reduced to a mere travesty under Oliver Cromwell and the Protectorate, the Restoration brought Parliament back into power—secure in its claims to legislative supremacy, to full authority over taxation and expenditures, and to a voice in public policy through partial control (by impeachment) over the king's choice of ministers. Charles II set about learning to manage Parliament, rather than opposing or circumventing it. James II's refusal to do so led to the Glorious Revolution of 1688, which permanently affirmed parliamentary sovereignty and forced William III to accept great limitations on the powers of the crown. During the reign of Queen Anne even the royal veto on legislation disappeared.
The Ascendancy of Commons
Despite a general division into Whig and Tory parties toward the end of the 17th cent., political groupings in Parliament were more inclined to form about a particular personality or issue. Although members had considerable freedom to make temporary political alliances without regard to their constituencies, control over members was exercised by the ministry and the crown through patronage, which rested on the purchase of parliamentary seats and tight control over a narrow electorate. As members were paid no salaries, private wealth and liberal patronage were prerequisites to a seat in Commons; as a result, Parliament represented only the propertied upper classes, and private legislation took precedence over public acts throughout the 18th cent. The parliamentary skills of Sir Robert Walpole, in many respects the first prime minister, both signified and contributed to the growing importance of Commons. The crown retained the theoretical power to appoint a ministry of its choice, but the resignation (1782) of George III's minister Lord North established, once and for all, a tendency that had developed gradually since the Glorious Revolution—that the prime minister could not function without the support and confidence of the House of Commons. The complexion of Parliament changed rapidly after 1800. The union (1800) of Ireland and England dissolved the Irish Parliament and added to the British Parliament 100 Irish members, who functioned as an important political bloc throughout the 19th cent.
With the appearance of powerful new classes created by the Industrial Revolution and with the currency of democratic doctrines grew demands for extension of suffrage, reform of flagrant abuses of patronage, and reorganization of the entire representative basis of Commons. The first step was achieved by the great Reform Bill of 1832 (see Reform Acts), followed by the Reform Bills of 1867 and 1884 and the eventual establishment of universal suffrage by the Representation of the People Acts in 1948. Parliamentary committees, appointed to investigate social conditions and recommend legislation, played an enlarged role. The tendency toward consolidation of parties was accelerated as public opinion became a factor in elections free from patronage. Although the Liberals and the Conservatives were known to stand for certain general policies, it was not until near the end of the 19th cent. that William E. Gladstone began the practice of making national campaign tours to pledge the party to a program for the coming Parliament. With the development of the party caucus, at about the same time, freedom of action by individual members was reduced. By the late 19th cent. members of working-class origin (later organized into the Labour party) were being elected to the House of Commons. Concomitantly, the class represented in the House of Lords began to lose power in British society, and through long conflict with the Commons, particularly on matters of social legislation, the House of Lords itself was weakened. Commons was at first able to intimidate Lords by threatening the creation of enough new peers to override any opposition by the upper house. The contest over the financial bill of 1909 finally led Commons to a more drastic solution. The Parliament Act of 1911 stripped the House of Lords of its veto power on money bills, and on other bills provided that a measure should become law if passed by Commons in three successive sessions, even if vetoed by Lords, provided two years had elapsed between the bill's first and final passage through Commons. The Parliament Act of 1949 reduced the requirement to two sessions and one year. The 1911 act also provided for the payment of salaries to members, thus opening participation to representatives of all classes. Party discipline became increasingly strong as the 20th cent. progressed, to the extent that a member may be ejected from the parliamentary party if he or she does not vote the party line on specified issues. The result has been to eliminate choice for most MPs on most issues. Long periods of loyal party service in Commons have become nearly required for achieving ministerial status. The rise of socialism in Great Britain after World War II did not greatly affect parliamentary structure, although increased delegation of important functions to the civil service reduced Parliament's immediate control of many governmental activities. Toward the end of the century Parliament implemented some fundamental changes by moving to redefine the role of the House of Lords and by accepting Scotland's desire to create its own parliament for the governing of domestic affairs; a Welsh assembly was also established. The removal of many hereditary peers from the House of Lords strengthened the remaining members' belief that they had a legitimate constitutional right to challenge those laws passed by the Commons that they regarded as bad law. See K. R. Mackenzie, The English Parliament (1950, repr. 1963); A. F. Pollard, The Evolution of Parliament (2d ed. 1926, repr. 1964); G. D. Sayles, The King's Parliament of England (1974); D.C.
Bank, How Things Get Done (1979); E. Cruikshanks, Parliamentary History (4 vol., 1985); M. S. Ryan, Parliamentary Procedure (1985); G. Jones, Parliamentary Procedure at a Glance (1989).
"Parliament." The Columbia Encyclopedia, 6th ed. Retrieved December 11, 2017, from Encyclopedia.com: http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/parliament
parliamentary law, rules under which deliberative bodies conduct their proceedings. In English-speaking countries these are based on the practice of the British Parliament, chiefly in the House of Commons. British parliamentary law is conventional, rather than statutory, including traditions and precedents as well as the Standing Orders of the House. Thomas Jefferson, when presiding over the U.S. Senate, prepared a manual of parliamentary law based on the practice of the House of Commons, and this practice has generally been followed in the House of Representatives as well. Robert's Rules of Order, first compiled by Henry Martyn Robert in 1876 and drawn from the usages of all three bodies, is the usually accepted authority on parliamentary law in the United States. Parliamentary law includes the rules necessary for the efficient and equitable conduct of business by an assembly. In Britain the effective interpreter of parliamentary law is the speaker of the House of Commons; in the United States the role is shared by the speaker of the House and the president of the Senate, who are partisan figures, unlike their British counterpart. See H. A. Bosmajian, ed., Readings in Parliamentary Procedure (1968); H. E. Hellman, Parliamentary Procedure (1968).
"parliamentary law." The Columbia Encyclopedia, 6th ed. Retrieved December 11, 2017, from Encyclopedia.com: http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/parliamentary-law
par·lia·ment / ˈpärləmənt/ • n. (Parliament) (in the UK) the highest legislature, consisting of the sovereign, the House of Lords, and the House of Commons: the Secretary of State will lay proposals before Parliament. ∎ the members of this legislature for a particular period, esp. between one dissolution and the next: the act was passed by the last parliament of the reign. ∎ a similar legislature in other nations and states: the Russian parliament.
"parliament." The Oxford Pocket Dictionary of Current English. Retrieved December 11, 2017, from Encyclopedia.com: http://www.encyclopedia.com/humanities/dictionaries-thesauruses-pictures-and-press-releases/parliament-0
The general body of enacted rules and recognized usages governing the procedure of legislative assemblies and other deliberative sessions such as meetings of stockholders and directors of corporations, town meetings, and board meetings. Robert's Rules of Order are an example of such rules.
"Parliamentary Law." West's Encyclopedia of American Law. Retrieved December 11, 2017, from Encyclopedia.com: http://www.encyclopedia.com/law/encyclopedias-almanacs-transcripts-and-maps/parliamentary-law
a legislative body and consultative assembly. Also, cricket parliament at Lords, 1903; Pimlico parliament (i.e., the mob), 1799. Examples: parliament of bees, 1640; of brides, 1400; of fools, 1727; of fowls; of men, 1842; of owls; of religions, 1893; of tinners, 1686; of women, 1741.
"Parliament." Dictionary of Collective Nouns and Group Terms. Retrieved December 11, 2017, from Encyclopedia.com: http://www.encyclopedia.com/education/dictionaries-thesauruses-pictures-and-press-releases/parliament
Hence parliamentarian sb., parliamentary XVII.
"parliament." The Concise Oxford Dictionary of English Etymology. Retrieved December 11, 2017, from Encyclopedia.com: http://www.encyclopedia.com/humanities/dictionaries-thesauruses-pictures-and-press-releases/parliament-1
A few hundred million years after the Big Bang, the universe entered a period called the "Epoch of Reionization." During this time, the universe, which had cooled from its hot early state, saw matter clump together to form the first stars and galaxies. As these stars and galaxies emerged, their energy heated the surrounding environment, reionizing some of the remaining neutral hydrogen in the universe. The universe's reionization is well known, but determining how it happened has been tricky. To learn more, astronomers have peered beyond our Milky Way galaxy for clues. In a new study, astronomers at the University of Iowa identified a source in a suite of galaxies called Lyman continuum galaxies that may hold clues about how the universe was reionized. In the study, the Iowa astronomers identified a black hole, a million times as bright as our sun, that may have been similar to the sources that powered the universe's reionization. That black hole, the astronomers report from observations made in February 2021 with NASA's flagship Chandra X-ray Observatory, is powerful enough to punch channels through its host galaxy, allowing ultraviolet photons to escape and be observed. "The implication is that outflows from black holes may be important to enable escape of the ultraviolet radiation from galaxies that reionized the intergalactic medium," says Phil Kaaret, professor and chair in the Department of Physics and Astronomy and the study's corresponding author. "We can't yet see the sources that actually powered the universe's reionization because they are too far away," Kaaret says. "We looked at a nearby galaxy with properties similar to the galaxies that formed in the early universe. One of the primary reasons that the James Webb Space Telescope was built was to try to see the galaxies hosting the sources that actually powered the universe's reionization." Reference: "Rapid turn-on of a luminous X-ray source in the candidate Lyman continuum emitting galaxy Tol 0440-381" by P Kaaret, J Bluem and A H Prestwich, 14 December 2021, Monthly Notices of the Royal Astronomical Society. Jesse Bluem, a graduate research assistant at Iowa, and Andrea Prestwich, with the Harvard-Smithsonian Center for Astrophysics, are co-authors.
An enormous meteorite impact and then a rocky flight from Mars. Is that how life appeared on Earth? Cornelia Meyer takes us on a space trip through the lithopanspermia theory and describes how she is putting it to the test with the help of student colleagues.
On 7 August 1996, scientists at NASA announced they had identified structures that looked like microscopic fossil bacteria in the Martian meteorite ALH84001, found in Allan Hills, Antarctica. Although scientists disagree about the significance of the Allan Hills meteorite, the question remains: was there life on Mars? When comets and asteroids strike planets, they can dislodge rock fragments that are catapulted into space and – like the Allan Hills meteorite – sometimes land on other planets as meteorites (see glossary). This has caused much speculation. Could the first life forms have arisen not on Earth but on Mars, or perhaps another distant planet? If so, could meteorites then have transported life to Earth? In 2007, three other postgraduate students – Ralf Moeller, Thomas Berger and Jean-Pierre de Vera – and I decided to investigate this idea, known as the lithopanspermia theory, in three steps:
- The ejection of living organisms into space, on board a meteorite.
- The effect of space travel on living organisms.
- Their survival as they enter Earth's atmosphere and land.
The lithopanspermia theory – solid as a rock?
The lithopanspermia theory (from the Greek: lithos = rock, pan = all, sperma = seed) was proposed in 1903 by the Swedish scientist Svante Arrhenius. Although the idea is not widely accepted, there is some evidence to support it:
- The existence of lunar and Martian meteorites on Earth
- The presence of organic material and (possibly) microbial fossils on the Allan Hills meteorite
- The fact that large comets or asteroids hitting a planet launch pieces of rock with enough velocity to overcome gravity and leave the planet (as meteoroids)
- The ability of bacterial spores to survive the shock waves caused by such an impact
- The UV-resistance of micro-organisms at the low temperatures found in space
- The survival for millions of years, in amber or salt, of bacterial spores
- The survival of bacterial spores in space for up to six years
- The palaeogeochemical evidence for ancient microbial ecosystems on Earth, leaving only about 400 million years for the evolution from simple precursor molecules to cellular life.
Asteroid: One of the numerous small rocky bodies in orbit around the Sun. Most asteroids reside in the ‘main belt’ between Mars and Jupiter, but some have orbits that cross Earth's orbit and could strike its surface.
Comet: One of the primitive icy bodies originating in the outer reaches of the Solar System that are in elliptical orbits around the Sun. Near the Sun, icy material vaporises and streams off the comet, forming a glowing tail.
Meteorite: An extraterrestrial rock that has fallen to Earth. Most meteorites are pieces of asteroids and are made of stone, stone and iron, or iron.
Meteoroid: A small solid body moving through interplanetary space. After falling to Earth it is called a meteorite.
1. The journey begins
As part of our Masters and PhD theses, we investigated the feasibility of the first step: the ejection phase, in which living material is launched into space by a meteorite impact (Horneck et al, 2008; Stöffler et al, 2007).
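A useful piece of context for this ejection step (not part of the original article, but standard orbital mechanics): to leave Mars at all, an ejected fragment must be accelerated beyond the planet's escape velocity, v_esc = sqrt(2GM/R), which works out to roughly 5 km/s. The short Python sketch below illustrates the calculation; the planetary constants are textbook values inserted here purely for illustration.

import math

# Escape velocity from a body's surface: v_esc = sqrt(2 * G * M / R).
# The constants below are standard textbook values (an illustrative assumption,
# not figures taken from the article itself).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_MARS = 6.417e23    # mass of Mars, kg
R_MARS = 3.390e6     # mean radius of Mars, m
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def escape_velocity(mass_kg, radius_m):
    """Minimum speed needed to escape a body's gravity from its surface."""
    return math.sqrt(2 * G * mass_kg / radius_m)

print(f"Mars escape velocity:  {escape_velocity(M_MARS, R_MARS) / 1000:.1f} km/s")    # ~5.0 km/s
print(f"Earth escape velocity: {escape_velocity(M_EARTH, R_EARTH) / 1000:.1f} km/s")  # ~11.2 km/s

Only very large impacts can accelerate near-surface rock to several kilometres per second while keeping the shock and heating brief, which is the regime the blast experiments described below try to reproduce; the lower Martian escape velocity also suggests why a Mars-to-Earth transfer is easier than the reverse.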
To simulate the event, we took two slices of rock thought to be similar to the rocks on Mars, put a layer of micro-organisms between them, placed this ‘sandwich’ in an iron cylinder and blasted it with TNT. We had good reasons for using micro-organisms in this experiment. On Earth, only microbes are known to survive in extremely hostile environments, so they were more likely to endure the experiment. Also, as simple organisms, they may be similar to the putative first life forms on Mars. The particular micro-organisms chosen for the experiment were bacterial spores, cyanobacteria and lichens that live inside or on rocks, and are known to survive simulated space conditions. We also chose the rocks carefully. To discover whether a meteorite originates from Mars, its composition is compared with that of rocks studied on the surface of Mars. The most frequent Martian meteorites found on Earth are known as basaltic Shergottites, and are formed by volcanic activity. For our experiment, therefore, we used basalt: readily available on Earth and similar to Martian rock. In repeated experiments, the TNT explosions exposed the micro-organisms to pressures of between 50 000 and 500 000 bar (5–50 gigapascals). These are similar to the pressures that would be generated by meteorite impacts on Mars, causing craters of more than 75 km in diameter and launching Martian rocks into space. The compression of the blast also exposed the micro-organisms to temperatures of up to 1000 °C. Although such conditions might be expected to destroy all life, at 400 000 bar (400 000 times normal air pressure), 0.02% of the micro-organisms survived. Today, temperatures on Mars range from -143 °C at the poles to +27 °C at the equator. Although early Mars would have been warmer than it is today, it would have cooled down faster than Earth because it lost its atmosphere. This means that by the time of the proposed transfer of life from Mars to Earth (up to 20 million years ago), Mars would already have reached the low temperatures that exist there today. Therefore, in a second experiment to better reflect conditions on Mars, we used dry ice (solid carbon dioxide) to cool the apparatus to -80 °C before blasting it, and found that some of the micro-organisms survived even at 500 000 bar. In the previous, uncooled, experiment, none had survived at that pressure. During the experiments, the micro-organisms were exposed to the high temperatures and pressures only for a few microseconds, as would happen with a real meteorite impact on Mars. This may have been the key to their survival. So, the first part of the lithopanspermia theory appears to be plausible: organisms on rocks could survive the launch into space.
2. Space travel: the ESA SUCCESS student contest
Next, we decided to compete for the opportunity to investigate step two of the lithopanspermia theory: could living organisms survive the extreme cold, cosmic radiation and vacuum during a long space journey? In the SUCCESS student contestw1, organised by the European Space Agency (ESA), we were offered the opportunity to fly an experiment on board the International Space Station (ISS) in November 2009. Since the 1980s, several experiments have shown that micro-organisms are capable of surviving in space (e.g. Mileikowsky et al, 2000). However, the micro-organisms in those tests either were shielded from radiation by aluminium, or only spent a few days in space. So how long could they survive in space?
We want to use the ISS for a more realistic investigation into the effect of space conditions on living organisms. We suggested building an artificial meteorite packed with micro-organisms as well as sensors to measure cosmic rays and temperature. A piece of basaltic rock will be cut into eight slices, with holes for the micro-organisms and sensors. The holes will be sealed with rock, and the slices fitted back together into an airtight structure. The artificial meteorite will then be transported to the ISS, mounted onto an aluminium platform outside the Station and exposed to space conditions for six months. As a control, a second artificial meteorite will remain on Earth. Once the meteorite returns to Earth, the biologists Ralf and Jean-Pierre will determine the survival rate of the micro-organisms and look for physiological changes induced by the conditions in space. As the mineralogist on the team, I will investigate the effects of space weathering on the artificial meteorite. Space weathering is a blanket term for processes, including cosmic radiation, solar winds and meteorite bombardment, that act on bodies in the harsh space environment. We will also compare the physical properties of the artificial meteorite with those of the rock that remained on Earth. Besides providing evidence that may support the lithopanspermia theory, these results could supply information about the effect of space weathering on the optical properties of rock. These properties are important for the observation of asteroids, as optical spectroscopy is used to determine their elemental composition. Knowing more about the effects of space weathering, therefore, could help scientists to determine whether meteorites found on Earth and asteroids observed in space come from the same parent bodies. 3. A new experiment? The soft landing Even if first two parts of the lithopanspermia theory are plausible – micro-organisms could survive take-off from their home planet and a lengthy journey through space – could they survive on another planet? Astrobiologists at the Deutsches Zentrum für Luft- und Raumfahrt have suggested that Earth micro-organisms could survive for some time on Marsw2. This suggests that Martian life forms might also be able to survive on Earth, assuming they could withstand the impact. So far, however, we know very little about what would happen if a meteorite carrying living organisms landed on Earth. We do, however, have information that enables us to speculate. When objects enter Earth’s atmosphere at high speed, their surfaces are exposed to very high temperatures due to friction. However, although temperatures in the outer layers of the meteorite are high enough to melt – or even vaporise – the rock, the inside of the meteorite remains closer to the -273 °C (0 K) found in space. Very often, meteorites break up when they hit the ground. If any organisms survived inside the meteorite – protected from the very high temperatures at the surface – they would thus be released and could begin to colonise Earth. They would experience a temperature shock, coming from -273 °C in the meteorite core to the ambient Earth temperature, but micro-organisms are known to be able to survive rapid changes in temperature. The evidence for lithopanspermia? Although micro-organisms could have survived all three steps described in the lithopanspermia theory, this is not evidence that life on Earth has an extraterrestrial origin. 
Above all, we don't actually know whether life exists beyond our planet – but the search for extraterrestrial life continues. And speculation about our origins continues, too.

The ESA SUCCESS contest

SUCCESS, the Space station Utilisation Contest Calls for European Student initiativeS, organised by ESA, aims to make today's students into tomorrow's users of the International Space Station (ISS). European university students up to Master's level or equivalent, from any discipline, are invited to propose an experiment to fly on board the ISS. The first prize is a one-year internship at ESA's space research and technology centre ESTEC in the Netherlands. The winner will be able to work on their experiment, to enable it to fly to the ISS. The contest is currently closed for new contestants. A new SUCCESS Student Contest is foreseen for 2010.

- Horneck G et al (2008) Microbial rock inhabitants survive impact and ejection from host planet: first phase of lithopanspermia experimentally tested. Astrobiology 8: 17-44
- Mileikowsky C et al (2000) Natural transfer of viable microbes in space. Part 1: From Mars to Earth and Earth to Mars. Icarus 145: 391-427
- Stöffler D et al (2007) Experimental evidence for the potential impact ejection of viable micro-organisms from Mars and Mars-like planets. Icarus 186: 585-588
Government expenditure refers to spending on goods and services by the government. Examples are purchasing goods for operations and investing in public goods. Some expenses, such as transfer payments, do not involve an exchange of goods and services at all. If spending exceeds revenue, the government runs a fiscal deficit. Conversely, if revenue exceeds expenditure, the government runs a fiscal surplus. When expenditure equals revenue, the budget is balanced.

Types of government expenditure

Government spending consists of three main categories:
- Transfer payments involve monetary payments to the private sector. Examples are unemployment benefits, capital transfers, and job search benefits. In the expenditure approach, GDP excludes this component: transfer payments do not involve exchanging goods and services, even though the government hands over money.
- Current expenditures cover routine spending on operations.
- Capital expenditures include spending on infrastructure, such as roads. These expenditures are vital to increasing the capital stock in the economy.

The effect of government spending on the economy

Spending contributes to increasing potential GDP. Investment in infrastructure creates a multiplier effect on the economy and increases its productive capacity in the long run. By changing its expenditure, the government can influence economic activity, and such policies can minimize the adverse effects of the economic cycle. To prevent a recession, the government increases its spending. Increased spending stimulates higher aggregate demand, which ultimately stimulates production and drives real GDP. When production expands, the unemployment rate falls. Some spending, such as unemployment benefits, also provides monetary income for households; such programs help the unemployed maintain a minimum standard of living and reduce extreme poverty.

As a fiscal tool

Besides taxes, spending is the government's other main fiscal tool for influencing economic activity. Under expansionary policies, governments increase spending; under contractionary policies, they reduce it. Expansionary policy aims to revive economic growth, usually during a recession. Higher spending leads to increased demand for goods and services in the economy. Higher demand stimulates businesses to increase their production and recruit new workers. As a result, the economy grows and the unemployment rate declines. A growing economy improves the prospects for household income; because households have more money, they spend more on goods and services, which once again encourages businesses to increase production. This process continues, and hence government spending has a multiplier effect on the economy. Conversely, the government implements a contractionary policy to avoid an overheating economy, reducing spending to lower aggregate demand and moderate inflation.

Fiscal deficit and the crowding-out effect

Fiscal deficits do not always result in higher economic growth. It depends on how much influence government spending has on gross domestic product (GDP). Recall the following equation:

GDP = C + I + G + NX
- C: household consumption
- I: gross business investment
- G: government expenditure
- NX: net exports

So, government spending is not the only contributor to GDP; there are three other contributors.
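To make the expenditure approach concrete, here is a minimal sketch in Python. The figures are illustrative placeholders rather than data for any real economy; the point is simply that transfer payments are left out of G so that nothing is double-counted.

```python
# Minimal sketch of the expenditure approach to GDP.
# All figures are illustrative placeholders, not real data.
consumption = 650.0        # C: household consumption
investment = 200.0         # I: gross business investment
gov_purchases = 180.0      # G: government purchases of goods and services
transfer_payments = 90.0   # excluded from G, since nothing is produced in exchange
net_exports = -30.0        # NX: exports minus imports

gdp = consumption + investment + gov_purchases + net_exports
print(f"GDP = C + I + G + NX = {gdp}")   # 1000.0

# Adding transfer_payments here would double-count them: once recipients spend
# the money, that spending already shows up in consumption (C).
```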
Consider the effect of government spending on gross private investment. When it runs a fiscal deficit, the government borrows to cover the shortfall, for example by issuing bonds. To attract investors, it may offer higher interest rates, which ultimately pushes up interest rates across the economy. Borrowing then becomes more expensive, and the private sector may respond by delaying investment, so private investment falls. The situation in which rising government deficits reduce private investment is called the crowding-out effect. The net effect on the economy depends on which change weighs more heavily on aggregate demand: the increase in government spending or the fall in private investment. Sometimes the decline in private investment is far larger than the increase in government spending. Transfer payments – such as unemployment benefits – act as an automatic stabilizer. These expenditures move counter-cyclically: they fall when the business cycle turns up and rise when it turns down. During an economic boom, payments decline because unemployment falls in line with increased production. Conversely, payments rise during a recession, when economic activity shrinks and the unemployment rate climbs as businesses cut production and shed labor.
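The multiplier logic described above can also be sketched numerically. The marginal propensity to consume (MPC) below is an assumed value chosen only for illustration, and the closed-form multiplier 1/(1 - MPC) is the textbook simplification for a simple closed economy, not a forecast.

```python
# Simple spending-multiplier sketch: an extra unit of government spending becomes
# someone's income, part of which (the MPC) is spent again, and so on.
mpc = 0.8                      # assumed marginal propensity to consume
initial_spending = 100.0       # illustrative increase in government spending

total_demand = 0.0
injection = initial_spending
for _ in range(20):            # sum the first 20 rounds of re-spending
    total_demand += injection
    injection *= mpc

print(round(total_demand, 1))           # ~494.2 after 20 rounds, approaching the limit
print(initial_spending / (1 - mpc))     # closed-form limit of the process: 500.0
```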
The Internet has become an indispensable tool to us, more so with the advent of the World Wide Web (www). The web pages displayed on the World Wide Web are written in a markup language that allows a web browser to decipher the manner in which the page is to be displayed. HTML and XML are two of the most common markup languages.
- HTML is used to design and display web content, while XML is for storing, carrying, and exchanging data.
- HTML has predefined tags, whereas XML tags are customizable.
- HTML focuses on presentation, while XML focuses on data structure and organization.

HTML vs XML

HTML is a markup language used for creating and structuring content on the internet. It is designed to display content as text, images, and other elements. XML is a markup language that is used for data interchange between systems and allows for the creation of custom tags.

HTML, or Hypertext Markup Language, was developed by computer scientist Tim Berners-Lee in 1991. It is a basic markup language that helps in creating both web pages and applications. HTML uses fixed control tags to design web pages, which makes it simpler for those with no programming experience or for publishing low-cost information. It is also HTML's simplicity that renders it limited in its approach, thus paving the way for XML.

XML, or Extensible Markup Language, was developed by the World Wide Web Consortium (W3C) in 1998 to store and exchange data between organizations and systems. It is designed in a fashion that makes it readable by both humans and computers. Since it embodies elements of Standard Generalized Markup Language (SGML) and HTML, XML is multi-browser compatible and can format data, thereby generating faster and more reliable search results on the World Wide Web.

| Parameters of Comparison | HTML | XML |
| Abbreviation for | Hypertext Markup Language | Extensible Markup Language |
| Meaning | A markup language used to create web pages and web applications | A markup language that allows data or information to be exchanged between platforms and programs |
| Purpose | Helps in designing the structure of web pages for the presentation of data | Helps in cross-platform data sharing |
| Language Type | Case insensitive | Case sensitive |
| Tags | Pre-defined tags; not every tag needs a closing tag | Custom tags defined by the programmer; every tag used must be closed |

What is HTML?

HTML is a markup language that is described in Standard Generalized Markup Language (SGML), but it isn't as complex as SGML. HTML uses tags that design a web page and describe the presentation of data on the page. These tags are predefined and limited in number. Tags generally come in pairs, meaning that if an opening tag is used at the beginning, a closing tag must be used at the end; however, in HTML, some tags need not have a closing tag. In addition, HTML tags are case insensitive, which implies that <Heading> is the same as <HEADING> or <HeaDing>. A web page created using HTML consists of a head and a body, which are enclosed within <html> tags. Choosing a suitable heading is of utmost importance because it is the first thing that surfaces after an Internet search. HTML also supports three kinds of lists: unnumbered or unordered lists, numbered or ordered lists, and description lists. Using tags specific to the unnumbered or numbered list, the generated text is presented either as bullet points or as numbered items, respectively.
A description list, when used with its specific tag, can contain multiple terms together with their descriptions. A distinctive feature of HTML is that it can link regions of text or images to an anchor either in the same document or in an external document. These linked regions are highlighted by the browser to catch the reader's attention.

What is XML?

The markup language of HTML was not equipped to facilitate data sharing; therefore, XML was developed to ensure the exchange of information between programs and platforms. Unlike in HTML, programmers can create new tags, and by doing so they can describe the information in the document more precisely; consequently, XML provides faster, more structured and more accurate search results on the Web. XML tags are case sensitive; therefore, <Heading> is not the same as <HEADING> or <HeaDing>. Every tag that is used in XML must be closed. Furthermore, in order to create elements in XML, one has to abide by the set of rules defined for XML. If the programmer errs in syntax or punctuation, the document will not parse. An XML document consists of a prologue and a body. The prologue comprises administrative metadata, a document-type declaration and comments. The body in XML can be divided into structure and content.

Main Differences Between HTML and XML
- HTML is employed for designing a web page and structuring the information therein. In contrast, XML was introduced to ensure that data or information can be shared between programs and platforms.
- The primary focus of HTML is on the presentation of the data, whereas the primary focus of XML is on the data itself, giving the programmer the liberty to present the data in whatever manner is desirable.
- HTML provides a set of predefined tags, but the programmer can create custom tags in XML in adherence to the given rules.
- While in HTML minute errors are not a problem, in XML a document containing errors cannot be parsed.
- In HTML, extra whitespace is removed automatically when the text is displayed, so whitespace can largely be ignored. XML, on the other hand, takes every character into consideration, so whitespace may be used only for specific purposes.
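As a small illustration of the well-formedness and case-sensitivity rules described above, here is a sketch using Python's standard xml.etree.ElementTree parser; the <note>, <to> and <body> tags are invented for the example, not taken from any real schema.

```python
# Sketch: XML demands well-formed, case-consistent, fully closed tags.
import xml.etree.ElementTree as ET

good = "<note><to>Class</to><body>Science fair on Friday</body></note>"
root = ET.fromstring(good)            # parses fine: custom tags, all closed
print(root.find("body").text)         # -> "Science fair on Friday"

bad = "<note><To>Class</to></note>"   # <To> vs </to>: case mismatch
try:
    ET.fromstring(bad)
except ET.ParseError as err:
    print("Rejected:", err)           # an XML parser refuses the document outright

# A web browser, by contrast, silently tolerates things like <B>bold</b> or an
# unclosed <p>, which is why minute errors "are not a problem" in HTML.
```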
Adding Doubles Worksheet Bank on our printable adding doubles worksheets to familiarize grade and grade kids with this strategy or for a quick strategy-refresh and help kids demonstrate fluency in addition and subtraction by creating equivalent but easier or known sums. whether it is adding doubles, completing the doubles facts, adding doubles plus, or plus, minus, or minus, or adding near doubles the exercise options are endless to build fluency in adding identical single-digit numbers. Worksheets math grade addition adding doubles. adding doubles practice worksheets. doubles (, and so on) are key math facts to memorize they can be used to quickly solve other addition math facts in two steps solving the doubles fact and then adding or subtracting from that sum. List of Adding Doubles Worksheet Similar adding doubles plus adding near doubles. Addition doubles worksheets once your students have a solid understanding of the concept of addition, they can memorize specific facts. learning about addition doubles is key to speeding up calculations needed in the coming years. print our free addition doubles facts worksheets for extra practice with your students. Worksheets for practicing simple addition. print the addition of doubles worksheet of in format. print the addition of doubles worksheet of in format. print the addition of doubles worksheet of in format. 1. Double Worksheet Adding Doubles Number names worksheets adding doubles worksheet free from doubles worksheet, sourceoverage.info. ladybug addition math game from doubles worksheet, sourcehomeschoolme.com. here s a rap for helping students remember the doubles facts from doubles worksheet, sourcepinterest. 2. Double Digit Adding Free Worksheets Doubles Worksheet Use the doubles to find the answers the math worksheet shows your child how to make addition problems a little easier by doubling a -digit number and adding one to quickly find their answer. for example, is faster as, then. Free worksheets and resources to help children learn doubles to double and halves to half of. our printable resources (the basics) doubles (to double ) mad maths minutes sets a b. Worksheet double-digit addition and subtraction. practice makes perfect children work on their multi-digit addition and subtraction skills by solving problems in this worksheet. 3. Free Math Worksheets Printouts Beginning Grade Printable Graph Doubles Worksheet Puzzles Plans Brain Games Adults Rules Solving Adding 4. Doubles Facts Addition Game Measured Mom Adding Worksheet Www.ndgradeworksheets.net name adding doubles directions solve by finding the sums. ccss. fluently add and subtract within. - number sums - adding doubles this -themed maths worksheet tasks pupils with finding the total of doubles and near doubles. 5. Adding Doubles Addition Worksheet Grade Math Worksheets Fact Teaching Kids Decimals Punctuation Estimation Basic Using adding doubles worksheet, students write doubles facts in the provided charts by filling in the missing equations. knowing doubles facts is an important skill for students to learn. this skill helps students to improve their mental math abilities. this worksheet gives a doubles chart for students to fill in up to. This set of worksheets focuses on the doubles and doubles plus one concept of addition. included in this page set are doubles and doubles plus one facts with ten frames, dominoes, and number bonds where the students will complete the addition sentences. 6. Doubles 2 Worksheets Adding Worksheet Com.au. 
Use these worksheets to practice adding double digits with your child there are various ways to solve these math problems practice different ones to see which your child likes best. download all click on a worksheet in the set below to see more info or download the. 7. Sentence Structure Building Worksheets Subject Practice Adding Doubles Worksheet Addition Pictures Kindergarten Substitution Math Problems 2 Digit Games 8. Personal Pronouns Pronoun Worksheets Grammar Adding Doubles Games Reading Comprehension Math Free Playground Grade 3 Worksheet 9. Add Doubles Worksheet Grade Lesson Planet Adding Adding doubles worksheets use the worksheets below to practice adding doubles. i recommend having worksheets per day only. the first four worksheets below are good for days of practice. Welcome to the adding doubles (small numbers) (a) math worksheet from the addition worksheets page at math-drills. com. this math worksheet was created on -- and has been viewed times this week and times this month. it may be printed, downloaded or saved and used in your classroom, home school, or other educational environment to help someone learn math. Kindergarten worksheets preschool worksheets first grade worksheets kindergarten addition worksheets kindergarten subtraction worksheets doubles doubles plus one more addition doubles worksheet addition - doubles - worksheet - download,,, more a. 11. Grade Math Worksheets Coloring Kids Print Graders Adding Doubles Step Algebra Equation Solver Expressions Worksheet 12. Math Worksheet Addition Worksheets Grade Super Teacher Tools Free Printable Adding Doubles 13. Addition Worksheets Print Elementary Math Adding Doubles Worksheet Geometry Reference Sheet Blaster Games Times Decimal Numbers These worksheets are appropriate for first grade math. Doubles facts to - displaying top worksheets found for this concept. some of the worksheets for this concept are doubles, math fact fluency work, adding doubles, addition, doubles in subtraction, domino doubles addition to, facts using doubles, doubles . Adding doubles worksheet for kindergarten children. this is a math printable activity sheet with several exercises. it has an answer key attached on the second page. this worksheet is a supplementary kindergarten resource to help teachers, parents and children at home and in school. 14. Addition Facts Teaching Squared Basic Worksheets Math Fact Double Digit Worksheet Adding Doubles Horizons Subtraction Games Grade There may be fewer problems, depending on what is selected below. multiple worksheets. create different worksheets using these selections. language french. memo line. include answer key. Adding doubles to (vertical questions - full page) this basic adding doubles worksheet is designed to help kids practice adding doubles, for numbers through with addition questions that change each time you visit. this math worksheet is printable and displays a full page math sheet with vertical addition questions. Practice adding doubles horizontally with these math addition worksheets. these are addition worksheets for st, and rd grade level kids who want to practice and improve their addition skills. 15. Free Math Worksheets Excel Addition Elementary Adding Doubles Grade Geometry Subtraction Problems Probability Answers Estimation Worksheet This is a fun worksheet to use during as a plenary to assess pupils understanding, or as a lesson starter recap. 16. 
Math Facts Worksheets Addition Single Digit Subtraction Times Tables Printable Multiplication Division Word Problems Year 2 Adding Doubles Grade Rules Worksheet 18. Adding Doubles Worksheet Worksheets Numbers will add up to no. Download and print turtle adding doubles worksheet. our large collection of math worksheets are a great study tool for all ages. Addition worksheets subtraction worksheets regrouping addition and subtraction fraction worksheets multiplication worksheets times table worksheets brain teaser worksheets picture analogies cut and paste worksheets pattern worksheets dot to dot worksheets preschool and kindergarten mazes size comparison worksheets. top worksheets new. More adding doubles interactive worksheets. use doubles to add by use doubles to add by use doubles to add by doubles by addition with doubles set by doubles by doubles by week - mental math strategy by double. Adding doubles plus one (a) welcome to the adding doubles plus one (a) math worksheet from the addition worksheets page at math-drills. 19. Addition Worksheets Special Ed Kindergarten Grade 1 Adding Doubles Worksheet Practice adding doubles vertically with these math addition worksheets. these are addition worksheets for st, and rd grade level students. they can use these worksheets as practice material and improve their addition skills. teachers may use these worksheets to take class test or give students classroom assignment. The various resources listed below are aligned to the same standard, () taken from the (common core standards for mathematics) as the addition and subtraction worksheet shown above. add and subtract within, demonstrating fluency for addition and subtraction within. 21. Free Single Digit Addition Worksheets Adding Doubles Learn Grade Math Tutoring Preschool Activities Multiplication Division Word Problems Year 6 Worksheet 22. Adding Doubles Create Math Worksheets Worksheet There are also blank templates of ten frame. subjects arithmetic, basic operations, numbers. The various resources listed below are aligned to the same standard, () taken from the (common core standards for mathematics) as the addition and subtraction worksheet shown above. add and subtract within, demonstrating fluency for addition and subtraction within. Adding doubles worksheets. adding doubles worksheets to help students commit the basic addition facts to memory. adding doubles helps with memorizing the basic addition facts. 23. Double Digit Addition Worksheets Adding Digits Printable Coins Math Geometry Area Book Grade Kids Doubles Worksheet Designed by educators, this worksheet supports second and third graders who are learning or reviewing double-digit addition and subtraction with regrouping. Title doubles addition author t. smith publishing subject doubles addition with sums to keywords addition worksheet help with math adding doubles. 24. Worksheet Free Math Worksheets Grade Addition Adding Double Digit Answer Questions Graph Paper Print Show Work Arithmetic Doubles 25. Adding Doubles Activities Fractions Worksheets Number Tracing Grade Math Review Printable Fraction Worksheet Solver Algebra 2 Steps Adding doubles to (horizontal questions - full page) this basic adding doubles worksheet is designed to help kids practice adding doubles, for numbers through with addition questions that change each time you visit. this math worksheet is printable and displays a full page math sheet with horizontal addition questions. Adding doubles worksheets. 
this time, we are adding to our repository of math worksheets, some with very simple sums, of double numbers. we start with the first ten (from to ) and so on, until we reach number. with these cards our kids will work on the concept of double, and thereby improve both mental calculation and addition algorithms. 26. Addition Strategies Kids Adding Doubles Worksheet Teachers may use these worksheets to take class test or give students classroom assignment. The addition worksheets on this page have no regrouping or carrying. approx. levels st grade, grade. -digit addition (with regrouping) the double-digit addition worksheets on this page require student to carry ones, or regroup. includes graph paper math, a scoot game, and word problem worksheets. approx. levels st grade, grade. math plus worksheets worksheet adding doubles small from doubles plus one worksheet, image source pinterest.com. Adding doubles, ten, or zero addition first grade math worksheets below, you will find a wide range of our printable worksheets in chapter adding doubles of section addition. 27. Add Doubles Worksheets Adding Worksheet Practice addition worksheets for doubling values. includes worksheets for single digit addition, two digit addition that require no borrowing (mental addition) and for addition worksheets for factor of five. these addition worksheets are great tools for place-value concepts. Addition worksheets doubles and near doubles. these grade addition worksheets mix doubles (e.g. ) and near doubles (e.g., ) problems. students should use their instant recall of doubles math facts to solve all the problems. similar adding doubles adding doubles minus. 28. Doubles Add Worksheet Adding Ready to print adding doubles worksheet with answer sheet. for more dynamically created addition worksheets go to math-aids.com. math worksheets provided by math-aids.com. rd. worksheets. order of operations. go ad-free. x., inc capital of hwy south bldg, suite,. 29. Adding Doubles Worksheet Fun Teaching Doubles mean, etc. these are addition facts that should be committed to memory. basic addition fact worksheets for numeracy. Adding doubles addition worksheets horizontal format - addends these addition worksheets may be configured for adding doubles, double, and double addition number sets in a horizontal format. the addends for the worksheets may be selected from a number range of to., or digits addition worksheets vertical format -, or. Adding doubles to worksheet worksheets double digit addition worksheet for st and grade kids - best doubles doubles plus one images addition, subtraction. 30. Adding Subtracting Doubles Worksheet Com. this math worksheet was created on -- and has been viewed times this week and times this month. it may be printed, downloaded or saved and used in your classroom, home school, or other educational environment to help someone learn math. Adding doubles - displaying top worksheets found for this concept. some of the worksheets for this concept are doubles, adding doubles, adding doubles a, math fact fluency work, directions solve by finding the, adding doubles, adding doubles plus a, name class. Adding doubles. number of problems. problems problems problems problems problems. 31. Adding Doubles Worksheet Grade Free Spreadsheet Below, you will find a wide range of our printable worksheets in chapter doubles and doubles plus one of section addition single digit. these worksheets are appropriate for second grade math. 
we have crafted many worksheets covering various aspects of this topic, and many more. we hope you find them very useful and interesting. Add the doubles that are up to. worksheetplace.com for great educators. menu. behavior worksheets grammar phonics and letters science worksheets setting goals worksheets writing worksheets worksheets adding doubles worksheets adding doubles to worksheet have your gr s practice adding doubles. 32. Digit Addition Worksheets Double Adding Math Large Free Super Teacher Graph Middle School Classes Paper Print Doubles Worksheet Adding doubles worksheet for st grade children. this is a math printable activity sheet with several exercises. it has an answer key attached on the second page. this worksheet is a supplementary first grade resource to help teachers, parents and children at home and in school. These doubles to worksheets feature simple equations accompanied by visual aids such as dice and ladybirds to help your class learn to double numbers. this is a great mental math exercise and a great way to practice simple multiplication with your students. Print the addition of doubles worksheet of in format. print the addition of doubles worksheet of in format. Doubles addition pack includes different doubles addition worksheets. use these simple math worksheet with your k- grade classroom. these worksheets are perfect for centers, stations, test prep, seat work, morning work, review, problem solving etc. - doubles -. subjects. Adding doubles adding doubles id language school subject math elementary age - main content addition more addition interactive worksheets. -digit addition with regrouping by addition without regrouping by addition up to by use pictures to add to.
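For teachers who prefer to generate fresh problem sets rather than reuse a fixed sheet, the following is a rough sketch of a doubles and doubles-plus-one generator in Python; the layout and the range of numbers are arbitrary choices, not a prescribed worksheet format.

```python
# Sketch: generate "doubles" and "doubles plus one" addition facts for 1-9.
import random

def doubles_facts(max_n=9):
    facts = []
    for n in range(1, max_n + 1):
        facts.append(f"{n} + {n} = ____")        # doubles, e.g. 7 + 7
        facts.append(f"{n} + {n + 1} = ____")    # doubles plus one, e.g. 7 + 8
    return facts

worksheet = doubles_facts()
random.shuffle(worksheet)        # mix the problems so the pattern isn't obvious
for line in worksheet[:10]:      # print a ten-question mini worksheet
    print(line)
```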
Calculating the perimeter has several practical applications. A calculated perimeter is the length of fence required to surround a yard or garden. The perimeter of a wheel/circle (its circumference) describes how far it will roll in one revolution. Similarly, the amount of string wound around a spool is related to the spool's perimeter; if the length of the string was exact, it would equal the perimeter.

| circle | $P = 2\pi r = \pi d$, where $r$ is the radius of the circle and $d$ is the diameter. |
| triangle | $P = a + b + c$, where $a$, $b$ and $c$ are the lengths of the sides of the triangle. |
| square/rhombus | $P = 4a$, where $a$ is the side length. |
| rectangle | $P = 2(\ell + w)$, where $\ell$ is the length and $w$ is the width. |
| equilateral polygon | $P = n \times a$, where $n$ is the number of sides and $a$ is the length of one of the sides. |
| regular polygon | $P = 2nR\sin\left(\tfrac{\pi}{n}\right)$, where $n$ is the number of sides and $R$ is the distance between the center of the polygon and one of the vertices of the polygon. |
| general polygon | $P = a_1 + a_2 + \cdots + a_n = \sum_{i=1}^{n} a_i$, where $a_i$ is the length of the $i$-th (1st, 2nd, 3rd ... nth) side of an $n$-sided polygon. |

The perimeter is the distance around a shape. Perimeters for more general shapes can be calculated, as for any path, as $P = \int_0^L \mathrm{d}s$, where $L$ is the length of the path and $\mathrm{d}s$ is an infinitesimal line element. Both of these must be replaced by algebraic forms in order to be practically calculated. If the perimeter is given as a closed piecewise smooth plane curve $\gamma : [a,b] \to \mathbb{R}^2$ with $\gamma(t) = (x(t), y(t))$, then its length $L$ can be computed as follows:

$L = \int_a^b \sqrt{x'(t)^2 + y'(t)^2}\,\mathrm{d}t$

Polygons are fundamental to determining perimeters, not only because they are the simplest shapes but also because the perimeters of many shapes are calculated by approximating them with sequences of polygons tending to these shapes. The first mathematician known to have used this kind of reasoning is Archimedes, who approximated the perimeter of a circle by surrounding it with regular polygons.

An equilateral polygon is a polygon which has all sides of the same length (for example, a rhombus is a 4-sided equilateral polygon). To calculate the perimeter of an equilateral polygon, one must multiply the common length of the sides by the number of sides.

A regular polygon may be characterized by the number of its sides and by its circumradius, that is to say, the constant distance between its centre and each of its vertices. The length of its sides can be calculated using trigonometry. If $R$ is a regular polygon's circumradius and $n$ is the number of its sides, then its perimeter is

$P = 2nR\sin\left(\tfrac{\pi}{n}\right)$

A splitter of a triangle is a cevian (a segment from a vertex to the opposite side) that divides the perimeter into two equal lengths, this common length being called the semiperimeter of the triangle. The three splitters of a triangle all intersect each other at the Nagel point of the triangle. A cleaver of a triangle is a segment from the midpoint of a side of a triangle to the opposite side such that the perimeter is divided into two equal lengths. The three cleavers of a triangle all intersect each other at the triangle's Spieker center.

Circumference of a circle

The perimeter of a circle, often called the circumference, is proportional to its diameter and its radius. That is to say, there exists a constant number pi, π (the Greek p for perimeter), such that if $P$ is the circle's perimeter and $D$ its diameter then

$P = \pi D$

In terms of the radius $r$ of the circle, this formula becomes

$P = 2\pi r$

To calculate a circle's perimeter, knowledge of its radius or diameter and the number π suffices. The problem is that π is not rational (it cannot be expressed as the quotient of two integers), nor is it algebraic (it is not a root of a polynomial equation with rational coefficients).
So, obtaining an accurate approximation of π is important in the calculation. The computation of the digits of π is relevant to many fields, such as mathematical analysis, algorithmics and computer science.

Perception of perimeter

The perimeter and the area are two main measures of geometric figures. Confusing them is a common error, as is believing that the greater one of them is, the greater the other must be. Indeed, a commonplace observation is that an enlargement (or a reduction) of a shape makes its area grow (or decrease) as well as its perimeter. For example, if a field is drawn on a 1/10,000 scale map, the actual field perimeter can be calculated by multiplying the drawing perimeter by 10,000. The real area is 10,000² times the area of the shape on the map. Nevertheless, there is no general relation between the area and the perimeter of an ordinary shape. For example, the perimeter of a rectangle of width 0.001 and length 1000 is slightly above 2000, while the perimeter of a rectangle of width 0.5 and length 2 is 5; both areas equal 1.

Proclus (5th century) reported that Greek peasants "fairly" parted fields relying on their perimeters. However, a field's production is proportional to its area, not to its perimeter, so many naive peasants may have gotten fields with long perimeters but small areas (thus, few crops). If one removes a piece from a figure, its area decreases but its perimeter may not.

In the case of very irregular shapes, confusion between the perimeter and the convex hull may arise. The convex hull of a figure may be visualized as the shape formed by a rubber band stretched around it; quite different figures can share the same convex hull.

The isoperimetric problem is to determine a figure with the largest area amongst those having a given perimeter. The solution is intuitive: it is the circle. In particular, this can be used to explain why drops of fat on a broth surface are circular. This problem may seem simple, but its mathematical proof requires some sophisticated theorems. The isoperimetric problem is sometimes simplified by restricting the type of figures to be used, for example to find the quadrilateral, or the triangle, or another particular class of figure, with the largest area amongst those of that class having a given perimeter. The solution to the quadrilateral isoperimetric problem is the square, and the solution to the triangle problem is the equilateral triangle. In general, the polygon with n sides having the largest area and a given perimeter is the regular polygon, which is closer to being a circle than is any irregular polygon with the same number of sides.

The word comes from the Greek περίμετρος perimetros, from περί peri "around" and μέτρον metron "measure".
- Coastline paradox
- Girth (geometry)
- Pythagorean theorem
- Surface area
- Wetted perimeter
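The formulas above translate directly into code. The sketch below implements the regular-polygon and general-polygon cases and checks that a many-sided regular polygon approaches the circumference of its circumscribed circle, in the spirit of Archimedes' approximation; the function names are ours, not from any library.

```python
# Sketch of the perimeter formulas discussed above.
import math

def perimeter_regular_polygon(n, circumradius):
    """P = 2 n R sin(pi / n) for a regular n-gon with circumradius R."""
    return 2 * n * circumradius * math.sin(math.pi / n)

def perimeter_polygon(sides):
    """General polygon: simply sum the side lengths."""
    return sum(sides)

print(perimeter_polygon([3, 4, 5]))                   # triangle: 12
print(perimeter_regular_polygon(4, math.sqrt(2) / 2)) # unit square (R = sqrt(2)/2): ~4.0
print(perimeter_regular_polygon(10_000, 1.0))         # ~6.2832, close to the circle below
print(2 * math.pi * 1.0)                              # circle of radius 1: 6.2832...
```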
The size of the atom is significant in governing its properties. If the atom is assumed to be spherical, then the radius of the sphere gives the atomic radius. But it is difficult to determine the radius of the atom exactly because:
- The probability of finding the electron is never zero, even at large distances from the nucleus, and so the atom does not have a well-defined boundary.
- It is not possible to isolate an atom and measure its radius.
- The size of the atom changes in going from one environment to another and from one bonded state to another.

So, one can arbitrarily define the atomic radius as the effective size, which is the distance of closest approach of one atom to another atom in a given bonding situation. This approximate radius can be determined by measuring the inter-nuclear distance between the centres of neighbouring atoms in a covalent molecule. This is usually done by diffraction and spectroscopic techniques.

Fig: 4.3 - Calculation of atomic radius

The inter-nuclear distance corresponds to the diameter of the atom, and therefore half of this distance gives the atomic radius for a homonuclear molecule like Cl-Cl or Br-Br. Hence, it may be defined as one half of the distance between the centres of the nuclei of two similar atoms bonded by a single covalent bond. This is also called the covalent radius. For a hetero-nuclear molecule, the covalent radius is the distance between the centre of the nucleus of the atom and the mean position of the shared pair of electrons between the bonded atoms. The covalent radii are smaller than the atomic radii of the uncombined atoms because the overlap region between the atomic orbitals of the two atoms becomes common in a covalent bond. The forces of attraction (Van der Waals forces) existing between non-bonded atoms and molecules are weak, and the atoms are held at larger inter-nuclear distances. Thus these radii, known as Van der Waals radii, are always larger than covalent radii. The Van der Waals radius is defined as one half of the inter-nuclear distance between two adjacent atoms belonging to the two nearest neighbouring molecules of the substance in the solid state.

Variation of Atomic Radii

Variation in a period

Atomic radii, in general, decrease with increasing atomic number going from left to right in a period. This is explained on the basis of the increasing nuclear charge along a period. The nuclear charge increases progressively by one unit while the corresponding addition of one electron takes place in the same principal shell. As the electrons in the same shell do not screen each other from the nucleus, the nuclear charge is not neutralized by the extra valence electron. Consequently the electrons are pulled closer to the nucleus by the increased effective nuclear charge, resulting in a decrease in the size of the atom. In this way the atomic size goes on decreasing across the period.

Fig: 4.4 - Variation of atomic radius with atomic number in a period

The atomic radius abruptly increases in the case of the noble gas element neon, as it does not form covalent bonds. So the value quoted for neon is a Van der Waals radius, which is considerably higher than the covalent radii of the other elements.

Variation in a group

The atomic radii of elements increase from top to bottom in a group, even though the nuclear charge increases with increasing atomic number. Although there is an increase in the principal quantum number from one atom to the next, the number of electrons in the valence shell remains the same.
The effect of the increase in the size of the electron cloud outweighs the effect of the increased nuclear charge, and so the distance of the valence electron from the nucleus increases down the group. Thus the size of the atom goes on increasing down the group in spite of the increasing nuclear charge.

Fig: 4.5 - Variation of atomic radius with atomic number in a group

4. The atomic mass of germanium is 72.6 and its density is 5.47 g cm⁻³. What is the atomic volume of germanium?
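A short sketch of the arithmetic behind the problem above, using only the figures quoted in the text: the atomic (molar) volume is the atomic mass divided by the density.

```python
# Problem 4: atomic volume of germanium from the values quoted above.
molar_mass = 72.6    # g/mol
density = 5.47       # g/cm^3

atomic_volume = molar_mass / density
print(f"{atomic_volume:.2f} cm^3/mol")   # ~13.27 cm^3 per mole of Ge atoms
```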
In May 2011, the National Science Teachers Association (NSTA) reported that 75 percent of the surveyed members hosted a science fair during the past school year and 68 percent were planning a science fair for the 2011–2012 school year. Although many educators value science fairs as opportunities for students to “explore new ideas, apply and develop new skills, and demonstrate their learning” (NSTA 2011), an effective and interesting science fair can be challenging to pull off! However, with the help of your school librarian, science fairs can be “made over” into events that engage the entire school and showcase the positive outcomes of inquiry-based learning in the context of your state and local standards! As we’ve discussed in prior columns, school librarians work toward achieving the American Association of School Librarians’ Standards for the 21st Century Learner. These standards encompass a number of learning activities relating to the school library, but in the case of science fairs, aspects of Standard 1 (inquire, think critically, and gain knowledge) particularly apply: 1.1.1 Follow an inquiry-based process in seeking knowledge in curricular subjects, and make the real-world connection for using this process in own life. 1.1.3 Develop and refine a range of questions to frame the search for new understanding. 1.2.1 Display initiative and engagement by posing questions and investigating the answers beyond the collection of superficial facts. 1.2.4 Maintain a critical stance by questioning the validity and accuracy of all information. 1.2.5 Demonstrate adaptability by changing the inquiry focus, questions, resources, or strategies when necessary to achieve success. School librarians have long histories with science fair. At the elementary level, school librarians report that ideas for science fair projects are key components of their science collections (Mardis and Hoffman 2007) and that science fairs are a great opportunity for them to connect with their science teachers (Mardis 2011). Because school libraries are often the largest instructional spaces in the school, they are ideal places for displaying science fair projects. Beyond space and idea books, school librarians have much to offer science fairs. A study of children’s questions to the Internet Public Library reference service showed that kids between third and eighth grade primarily asked questions about science fairs. Moreover, the questions were not just about science fair topics, they were about how to conduct experiments and how to display results (Mardis 2009). Another study showed that when children do not understand how to conduct their science fair experiments, their parents take over (Watson 2003)! Students and teachers have found the science fair particularly frustrating because the topics range widely and it’s difficult to organize projects on so many different topics. Using your state or local standards as a guide, you can develop a theme for your science fair and work closely with your school librarian to ensure that this focus is reflected in the topics your students choose. Let’s follow an example of how you might work with your school librarian on a science fair project relating to the theme of this issue of Beyond Weather and the Water Cycle: “Earth’s Climate Changes.” Because this theme encompasses many topics, let’s focus our example project on tree rings. Step 1: Help children to understand why the science fair is important. 
Although this step seems obvious, many science teachers do not spend adequate time helping their students understand why the science fair is an important part of their learning. While videos like Treeline Elementary’s Fourth Grade Science Fair, a digital poster such as “Mrs. Hunt’s Science Fair Hints,” or Christine Little’s “Why Participate in a Science Fair?” presentation, can help kids to envision the outcome of their projects, it’s always helpful to help students understand what a tremendous opportunity science fair can be to make friends and learn about future careers. The Science News for Kids article “The Science Fair Circuit” can show kids how they can enjoy science fair activities throughout their school years and even use the fair as a way to get to know people across the country and learn about college scholarships. Answers to the Internet Public Library’s question Why should I do a science fair project? can help you set expectations and build excitement. Step 2: Identify an appropriate topic and a testable question for the science fair project. As most science teachers know, the Internet teems with sites that list project ideas for science fairs. Perhaps you’ve already seen the Internet Public Library’s directory of science fair idea sites or Science Buddies’ lists of science fair topics that just focus on weather. Your school librarian can quickly assemble a list of appropriate topics once he or she is aware of your science fair’s theme. Helping children choose questions for science fair projects is a place where your school librarian can shine! As we saw in the Standards listed above, the ability to phrase compelling questions is a key part of learning in the library. Working with your school librarian, you can help children focus their explorations of productive questions for their science fair projects. The answers generated by productive questions are derived from first-hand experiences and raise children’s awareness of the possibility of more than one correct answer to a question. Productive questions cannot be answered by using a simple yes or no because they require children to test theories, before responding, through attention, focus, measuring or counting, comparison, action, problem solving, or reasoning—features of a great science fair project! With a good question, working on the rest of the science fair project is focused: background research is easier to conduct, results are easier to analyze, and conclusions come more naturally. Vickie Harry of the Mediterranean Association for International Schools discusses types of testable questions in depth on her web site.She describes some varieties of questions that are especially well suited to younger students’ skills. Measuring and Counting Questions Quantitative questions encourage sharp observations and communications. Carefully phrased measuring and counting questions help children organize their thinking and unify similar concepts or ideas through the use of grouping or sets. Children use the science process skills of measuring and classifying as they check accuracy and use new instruments. Examples of measuring and counting questions include “How many…?”, “How often…?”, “How long…?”, and “How much…?” Comparison questions ask children to identify number relationships, develop concepts of alike and different, quantify the number of ways things are alike or different, and describe how things fit together. 
The science processes of observing, measuring, classifying, and communicating are used by children as they answer comparison questions. Comparison question starters include: “How do…fit together?”, “How are…different?”, “In how many ways are…alike?”, and “In how many ways do…differ?” Action questions involve children in the science process skills of predicting, investigating, and experimenting. Finding the answers to “What happens if…?” and “What would happen if you…?” engages children in the process of inquiry to discover an answer through investigation and experimentation. Asking children to make predictions about the outcomes of investigations or experiments stimulates thinking about variables, hypotheses, and conclusions affecting the investigation before it begins. Often, choosing the right question is a matter of phrasing. The Discovery Channel’s Science Fair Central web site presents examples of questions rephrased from unproductive to testable, productive questions. For this article, we’ve chosen tree rings as our example, but for your students, choosing their own topic is much more challenging and can mean the difference between a great science fair experience and a frustrating one! Work with your librarian to be prepared with a list of possible topics and questions. Step 3: Review the scientific method and processes for conducting science experiments. A good science fair project involves much more than a topic. Grasp of the process is important for making the science fair an experience that translates to other learning activities. For example, teacher Mark Holmes gives an overview of all of the necessary things to consider before undertaking a science fair project in his TeacherTube video Successful Science Fair Projects. A good project takes a lot of preparation and that’s when the student-teacher ratio in the classroom can become difficult. As an instructional collaborator, your school librarian can facilitate the planning process with you, using a number of methods like the Know-Want to Know-Learn (K-W-L) process worksheet. Step 4: Ground the topic in appropriate research and references. Grounding the testable questions in known research is an essential part of drawing conclusions from collected data. In the first column in this series, Questions, Questions: Taking Energy Inquiry Further in the School Library, we reviewed a number of research processes appropriate for elementary grades. For the science fair, you may want to focus on the location of information in books and web sites and allow children to focus on analysis and synthesis of information. As an information specialist, your school librarian can recommend or gather a variety of high quality informational sources for children to review before beginning their experiments. In addition to the many print resources available through the school library, your school librarian can focus kids’ searching and reading by assembling links to digital magazine articles and videos. For our tree rings experiment, the school librarian might lead us to sources such as Fossil Forests by Emily Sohn at Science News for Kids Life of a Tree by the National Arbor Day Foundation Tree Ring Analysis by Earth Day Canada Real Trees 4 Kids by the National Christmas Tree Association Talking Trees: A living diary of climate from the National Science Foundation’s Why Files Note that each of these sources includes both text and pictures and most are of an appropriate elementary reading level. 
Imagine how long it would take you to help your students find appropriate resources like these for each of their projects using a web search engine! Give children time to review their resources and write down one or two main ideas from each source. Schedule one-on-one meetings during your library time for you and your school librarian to review each child’s notes and make sure that they have a clearly defined question and good background sources. Be sure that students record their sources in a bibliography. Your school librarian will have a range of resources for this activity, such as the formats presented at the Science Buddies site or web-based formatting tools like the free and easy-to-use NoodleBib Express. Step 5: Conduct the experiment and gather data. To continue with our example, tree-ring analysis can be conducted outdoors or with slices of trees that are brought in. The North Dakota Education Network has a lesson plan for tree-ring analysis that you can adapt for the data-gathering portion of this science fair project. Basically, the rings are counted backward from the bark. It may be helpful to mark every decade with a dot. Then, when rings are particularly wide, narrow, oddly shaped, or discolored, that means the tree experienced an unusual weather or physical event. Step 6: Analyze data and draw conclusions. Of course, data analysis will depend on the project. For this phase, you and your school librarian may want to check in with students as they collect and analyze their data, providing what library guru Joan Frye Williams calls “bookend service” (Mardis 2011). That is, check in on the children as they get started, occasionally as they work, and at the end of their activity to ensure that they are comfortable with the process. For our example, one way to determine if a climate event may have influenced the tree’s development is to check the weather recorded for the year. The Weather Underground has a free weather history tool. Although different research processes use different terms, drawing conclusions for a research question is really the result of a simple formula: A (findings of prior research; known facts) + B (finding from experiment) = C (conclusion) Be sure to emphasize that conclusions can affirm prior research, deny prior research, or present unexpected conclusions. Not all experiments result in “yes” or “no” answers! Step 7: Display and communicate research findings in an understandable format. Again, your school librarian can be a valuable partner here whether you decide you would like students to present their work on a traditional tabletop triptych or you’d like them to use a multimedia method like Glogster or Prezi. Digital formats can allow you to have multimedia products to show parents, administrators, and future students. Glogster allows students to create interactive posters like the one below: Prezi is similar to PowerPoint but with many more exciting features. We’ve already seen a Prezi about the reasons to participate in a science fair above, but a Prezi for our tree rings science fair project might look like this: Your school librarian can be a valuable ally and collaborator in helping to get kids excited about and see the value in the science fair; develop good research questions and a sound, replicable process to pursue them; understand the link between their research results and research done by others; and present their project in a clear and dynamic way. 
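To make the data-analysis step (Step 6) concrete for the tree-ring example, here is a hedged sketch in Python; the ring widths are invented placeholder values, and the rule of flagging rings more than 1.5 standard deviations from the mean is just one reasonable convention, not part of the cited lesson plan.

```python
# Sketch for Step 6: flag unusually wide or narrow tree rings as candidate
# "unusual weather" years to check against the recorded weather history.
import statistics

# Placeholder ring widths in millimetres, newest ring (nearest the bark) first.
ring_widths = [2.1, 2.3, 0.8, 2.0, 2.2, 3.9, 2.1, 1.9, 2.0, 2.2]
most_recent_year = 2010            # hypothetical year of the outermost ring

mean = statistics.mean(ring_widths)
stdev = statistics.stdev(ring_widths)

for rings_back, width in enumerate(ring_widths):
    year = most_recent_year - rings_back
    if abs(width - mean) > 1.5 * stdev:
        print(f"{year}: width {width} mm looks unusual - check the weather record")
```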
With your school librarian’s help, the science fair can be, as one Prezi noted, Not Your Mother’s Science Fair. Mardis, M. A. 2009. Children’s Questions about Science Fair: Preliminary Results of an Analysis of Digital Library Reference Questions. Proceedings of the American Society for Information Science and Technology, 45 (1), 1-10. Mardis, M.A. 2011. Reflections on School Library as Place, School Library as Space. School Libraries Worldwide 17 (1), pp.i-iii. Retrieved from http://findarticles.com/p/articles/mi_7728/is_201101/ai_n56829368/ Mardis, M. A., and Hoffman, E. S. 2007. Collection and Collaboration: Science in Michigan Middle School Media Centers. School Library Media Research, 10. Retrieved from http://www.ala.org/ala/mgrps/divs/aasl/aaslpubsandjournals/slmrb/slmrcontents/volume10/mardis_collectionandcollaboration.cfm National Science Teachers Association. 2011. NSTA Members Don’t See Increasing Popularity for Science Fairs. Retrieved from http://www.nsta.org/publications/news/story.aspx?id=58461 Watson, J. S. 2003. Examining Perceptions of the Science Fair Project: Content or Process? School Library Media Research 6. Retrieved from http://www.ala.org/ala/aasl/aaslpubsandjournals/slmrb/slmrcontents/volume62003/sciencefair.cfm Marcia Mardis, EdD, assistant professor, College of Information Science, Florida State University, wrote this article. Marcia is a former school librarian, school administrator, and educational digital library director. Email Marcia at email@example.com. Copyright July 2011 – The Ohio State University. This material is based upon work supported by the National Science Foundation under Grant No. 1034922. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This work is licensed under an Attribution-ShareAlike 3.0 Unported Creative Commons license.
Making and Using Compost Compost is partially decomposed organic matter. It is dark and easily crumbled and has an earthy aroma. It is created by biological processes in which soil-inhabiting organisms break down plant tissue. When decomposition is complete, compost has turned to a dark-brown powdery material called humus. The processes occurring in a compost pile are similar to those that break down organic matter in soil. However, decomposition occurs much more rapidly in the compost pile because the environment can be made ideal for the microbes to do their work (Figure 1). A compost pile encourages natural decomposition of organic materials. Gardeners often have difficulty disposing of leaves, grass clippings and other garden refuse, particularly in urban areas. Missouri law bans these materials from landfills, so finding environmentally sound ways to dispose of them has become even more important. These byproducts of the garden and landscape can be turned into useful compost with little more effort than it takes to bag and haul them away. Home composters avoid hauling or utility costs associated with centralized composting facilities and end up with a valuable soil conditioner or mulch for the landscape and garden. Good compost consists primarily of decomposed or partially decomposed plant and animal residues but may also contain a small amount of soil. Compost improves both the physical condition and the fertility of the soil when added to the landscape or garden. It is especially useful for improving soils low in organic matter. Organic matter in compost improves heavy clay soils by binding soil particles together into “crumbs,” making the soil easier to work. Binding soil particles also helps improve aeration, root penetration and water infiltration and reduces crusting of the soil surface. In sandy soils, additional organic matter also helps with nutrient and water retention. Compost also increases the activity of soil microorganisms that release nutrients and other growth-promoting materials into the soil. Although compost contains nutrients, its greatest benefit is in improving soil characteristics. You should consider compost a valuable soil amendment rather than a fertilizer because additional fertilization may be necessary to obtain acceptable growth and yields. Compost also is a valuable mulching material for garden and landscape plants. It may be used as a top-dressing for lawns and, when it contains a small amount of soil, as part of a growing medium for houseplants or for starting seedlings. Composting is a method of speeding natural decomposition under controlled conditions. Raw organic materials are converted to compost by a succession of organisms (Figure 2). During the first stages of composting, bacteria increase rapidly. Later, actinomycetes (filamentous bacteria), fungi and protozoans go to work. After much of the carbon in the compost has been used and the temperature of the pile has fallen, centipedes, millipedes, sowbugs, earthworms and other organisms continue the decomposition. As microorganisms decompose the organic materials, their heat of respiration causes the temperature in the pile to rise dramatically. The center of a properly made heap should reach a temperature of 110 to 140 degrees F in four to five days. At this time, the pile begins “settling,” which is a sign that the pile is working properly. The pH of the pile will be very acidic at first, at a level of 4.0 to 4.5. By the time the process is complete, the pH should rise to about 7.0 to 7.2. 
The heating in the pile kills some of the weed seeds and disease organisms. However, this happens only in areas where the most intense temperatures develop. In cooler sections toward the outside of the pile, some weed seeds or disease organisms may survive. So, proper turning is important to heat all parts of the pile. The organisms that break down the organic materials require large quantities of nitrogen. So, adding nitrogen fertilizer, or other materials that supply nitrogen, is necessary for rapid and thorough decomposition. During the breakdown period, the nitrogen is incorporated into the bodies of the microbes and is not available for plant use. This nitrogen is released when the decomposition is completed and the compost is returned to the garden. A succession of organisms decompose organic matter in compost. Many types of organic materials can be used for compost. Possible materials include sod, grass clippings, leaves, hay, straw, weeds, manure, chopped corncobs, cornstalks, sawdust, shredded newspaper, wood ashes, hedge clippings and many kinds of plant refuse from the garden. If the compost is to be returned to the garden, leave out weed plants heavily laden with seeds. Even though some seeds will be killed during composting, those that survive might create a weed problem. Most kitchen scraps also may be used in the compost heap. Some items that should not be used are grease, fat, meat scraps and bones. These materials may attract dogs, rats or other animals. They also may develop an unpleasant odor during decomposition. Fats are slow to break down and greatly increase the time required before the compost can be used. Unless compost is either completely and thoroughly turned during its formation or allowed to remain unused for several years, diseased plants from the flower or vegetable garden should not be placed on the compost heap. Even though the heating during the compost formation may kill some diseases, some disease organisms may survive to be returned to the garden. Although animal manures are often good sources of nitrogen, they should be used with caution. Always wash your hands thoroughly after handling compost containing manure of any kind. The length of time necessary for the composting process depends on several conditions: - Carbon-to-nitrogen ratio - Surface area of particles Of the above, the carbon-to-nitrogen ratio is especially important. All organic material contains carbon and nitrogen. Carbon is a major component of the cellulose and lignin that give cell walls their strength. Nitrogen is found in proteins and many other compounds inside plant cells. The carbon-to-nitrogen ratio (C:N) of a material is an estimate of the relative amounts of these two elements it contains. The ratio is usually based on the percent dry weight of carbon and nitrogen in the material. A ratio of about 30:1 is ideal for the activity of the microbes in the compost. This balance can be achieved by controlling the materials included in the compost or by adding nitrogen either from fertilizer or from organic materials high in nitrogen, such as manure or grass clippings. Table 1 shows the approximate ratios for some materials commonly added to compost piles. The items at the top of the list are highest in nitrogen, and those at the bottom are highest in carbon. These ratios represent comparative weights. 
So, in the first example, 5 to 7 pounds of dry pig manure would contain about 1 pound of nitrogen, and near the other extreme, 500 pounds of sawdust might contain only 1 pound of nitrogen. The 30:1 ratio in compost is the most desirable to supply the microorganisms with the amount of both the carbon they need for energy and the nitrogen they need for protein synthesis so they can function efficiently and quickly. To estimate the C:N of a mixture, average the ratios of the individual materials. For example, a mixture of equal parts grass clippings and leaves might have a C:N of (20 + 50) ÷ 2 = 35.
Table 1. Carbon-to-nitrogen ratios in various materials.
- Hog manure: 5 to 7:1
- Poultry manure (fresh): 10:1
- Poultry manure (with litter): 13 to 18:1
- Vegetable wastes: 12 to 20:1
- Grass clippings: 12 to 25:1
- Horse manure (fresh): 25:1
- Horse manure (with litter): 30 to 60:1
- Straw: 40 to 100:1
- Bark: 100 to 130:1
- Paper: 150 to 200:1
- Wood chips, sawdust: 200 to 500:1
Before a compost pile is constructed, the decision must be made where to locate it and whether it will be contained in a structure or just heaped. Once those decisions have been made and the area is ready, the process of layering compost materials, as described below, may begin.
Build the pile in a convenient but inconspicuous place. If the compost will be used mainly in the garden, then a location near the garden would be logical. Because the compost pile may need to be kept moist during dry weather, a convenient source of water should be available. But don’t locate the pile where water may stand. Excess moisture in the bottom of the pile can cause the process to stop or lead to odor problems. Locate the pile where occasional earthy odors are not likely to offend neighbors. A shaded area is generally desirable for best composting. If possible, however, do not locate the pile or structure close to trees. Tree roots may be attracted to the loose, moist organic material in the bottom. During summer, roots of some trees may invade the lower areas of the bin and make the compost difficult to dig and use.
Containing the pile
Although compost can be stacked in a heap, decomposition is best and space is used more efficiently when the compost is placed in some type of bin or enclosure (Figure 3). Air should be able to move through the sides of the structure. The pile may be round, square, rectangular or any other convenient shape. Pile turning and removal of finished compost will be easier if the structure has an open, removable or hinged side. A composter may be built from available materials. If the material is porous (e.g., wire mesh wrap), wrap the bin with weed-barrier fabric or perforated plastic sheeting to reduce moisture loss. Making compost does not require a structure and can be done simply in a heap. However, heaps require more space. The minimum size of a heap should be 5 feet by 5 feet and 3 feet high. Materials can be added as they become available, but when the first heap is high enough, a second one should be started until the first has decomposed enough to be used. Heaps may be turned regularly or not at all. If they are not turned, the upper portions will not be totally decomposed and will have to be pulled off when the compost is used. Compost piles develop best when built in layers (Figure 4). Layering is a good way to ensure that the materials are added in the proper proportion. Once several layers are formed, however, composting will be most rapid if the layers are mixed before making new layers.
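Returning to the carbon-to-nitrogen arithmetic above, the weighted-average estimate can be written out as a short calculation. This is only an illustrative sketch: the C:N values are approximate midpoints from Table 1, and the material names and weights in the example mixture are hypothetical.

```python
# Rough C:N estimate for a compost mixture, using approximate midpoint
# ratios from Table 1 (parts carbon per 1 part nitrogen, by dry weight).
CN_RATIOS = {
    "grass clippings": 20,        # midpoint of 12 to 25:1
    "straw": 70,                  # midpoint of 40 to 100:1
    "sawdust": 350,               # midpoint of 200 to 500:1
    "horse manure (fresh)": 25,
}

def mixture_cn(parts):
    """Weight-average the C:N ratios of the ingredients.

    `parts` maps material name -> weight (any consistent unit).
    """
    total = sum(parts.values())
    return sum(CN_RATIOS[m] * w for m, w in parts.items()) / total

# Equal parts grass clippings and straw (hypothetical 50 lb of each):
mix = {"grass clippings": 50, "straw": 50}
ratio = mixture_cn(mix)
print(f"Estimated C:N of mixture: {ratio:.0f}:1")   # about 45:1
if ratio > 35:
    print("Above the ~30:1 target; add a nitrogen source such as manure or fertilizer.")
elif ratio < 25:
    print("Below the target; add a carbon source such as straw or sawdust.")
else:
    print("Close to the ~30:1 target for rapid decomposition.")
```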
If available materials are limited, building a pile in this way may not be practical. When organic materials are accumulated rather slowly, stockpile them until enough are available to layer properly. The pile normally may be started directly on the ground. However, to provide the best aeration to the base and improve drainage, dig a trench across the center of the base and cover it with stiff hardware cloth before you begin the layers. Branches or brush may be placed on the bottom as another means of improving lower aeration, but because they will decompose more slowly than finer materials, they may interfere with removal of the finished compost. Firm each layer of organic material as it is added, but do not compact it so much that air cannot move freely through it. Lightly water each layer as it is added. The entire pile should be as wet as a well-wrung sponge. Achieving this result is easier if you water each layer of dry material while building the pile rather than trying to wet the entire pile after it is built. Every two to three layers, use a tool such as a pitchfork or spading fork to mix the layers thoroughly so the materials are evenly distributed. This practice speeds up decomposition. During construction of the pile, remember the C:N ratios and that the pile needs about one pound of actual nitrogen for each 30 pounds of lightly moist organic matter for best decomposition. Proper layering in a compost bin. First layer — organic materials Begin the pile by placing a 6- to 8-inch layer of organic matter in the enclosed area. Shredded or chopped materials decompose faster, so if a shredder is available, run coarse organic matter through it. A machete and chopping block are useful for processing brushy yard trimmings. Materials that tend to mat, such as grass clippings, should either be placed in layers only 2 to 3 inches thick or mixed with coarser materials for thicker additions. After the organic layer is built, moisten but do not soak it. Second layer — fertilizer or manure Over the layer of plant material, add a layer of a material high in nitrogen or a sprinkling of a high-nitrogen garden fertilizer. If animal manure is used, a layer 1 to 2 inches thick should be satisfactory. If organic materials high in nitrogen, such as grass clippings, are used, layer them about 4 inches thick. Although adding grass clippings or other materials that have been treated with herbicides may cause concern, most pesticides break down quickly in a compost pile. If garden fertilizers such as 12-12-12 are used as a nitrogen source, use about 1 cup per 25 square feet of the top surface of each layer. When using fertilizer materials, about 0.8 ounce of actual nitrogen per bushel of organic matter such as leaves is needed. For example, one cubic yard (3 feet x 3 feet x 3 feet) of leaves contains about 23 bushels and thus would require about 18 ounces (1.1 pounds) of nitrogen or about 5.5 pounds of a fertilizer containing 20 percent N. To avoid overwhelming the microorganisms, add fertilizer to the pile in several doses as the pile is turned. More uniform distribution on each layer can be obtained if a water-soluble fertilizer is mixed with water and sprinkled over the surface. Table 2 shows the amount of each material needed to apply 1 pound of actual nitrogen. Do not add lime to the pile. Adding ground limestone to a compost pile was once thought necessary, but it is no longer considered to be so because the organisms function well with a pH of between 4.2 and 7.2. 
Compost naturally becomes less acid as it matures. Adding lime helps convert ammonium nitrogen to ammonia gas, which can create an odor problem as it escapes from the pile and can reduce the nutrient content of the finished compost. Adding lime may also cause the pH of the finished compost to be higher than optimal for plant growth. Quantities of various nitrogen sources required to provide 1 pound of nitrogen |Nitrogen source||% Nitrogen||Ounces to apply for 1 pound N| Third layer — soil. Next, add a layer of soil or sod about 1 inch thick Soil contains microorganisms that help start the decomposition process. If an adequate source of soil is not available, a layer of finished compost may be used as a soil substitute. Compost activators may also be used to introduce organisms into the pile. Continue to develop and alternate the layers — organic, fertilizer/manure, soil — until the pile is 3 to 5 feet high. Remember, after every two to three layers, mix the layers thoroughly to evenly distribute the materials. The speed at which compost forms depends on the conditions already discussed. Controlling these factors, along with frequent turning of the compost, speeds up the process. But many gardeners are content with the slower, more traditional methods that require less attention. Fast composting methods depend on the use of turning units. They can create good compost in as little as six weeks, depending on how the compost pile is managed. Materials that can be used include nonwoody yard waste, nonfat kitchen waste and similar materials. Structures or containers that allow frequent, easy turning are essential. Turning units for the fast method are of two general types: a series of bins (usually three) that allow manual turning of the compost from one bin into the next (Figures 5 and 6); or a rotating, horizontally mounted drum, such as a 55-gallon barrel. The materials for fast composting should be added in larger amounts rather than frequent additions of small amounts. Thus, organic matter should be collected until there is enough to properly fill a barrel composter or other unit, such as a 3-square-foot bin. To reduce odor problems, grass clippings should be spread to dry before stockpiling, and food wastes should be covered or buried in the compost. Compost bin constructed from landscape timbers. To turn the compost, disassemble the bin and restack the timbers close by; then fork the compost into the new enclosure. Traditional or slow method In the traditional, slower method of composting, material may be added to the enclosure at any time. Turning can help but is not required. When only one unit is developed, finished compost may be taken from the bottom while new materials are still being added to the top. Two bins are always better where space permits because one bin can be allowed to mature while new materials are being added to the other. Woven wire fencing, chicken wire, chain link, hardware cloth, wood slat fencing (snow fence), concrete blocks, bricks or lumber can be used to enclose the compost heap. Fencing wires need corner supports although some can be used to make cylinders that need little or no support. If woven wire fencing is too loose to contain finer materials, line the enclosure with plastic that contains some aeration holes to keep the pile neat and speed decomposition. The plastic lining will also prevent excessive drying of the vertical pile surfaces. Bricks or concrete blocks may be stacked without mortar. 
Leave 1/2-inch spaces between them to allow adequate air movement through the sides. Line up the holes facing upward as you stack, and drive metal posts down through a few of the holes to make the bin more stable (Figure 6). A three-compartment turning unit constructed with concrete blocks and metal ties. Lumber, whether new or scrap, is suitable for sides of compost bins. Allow enough space between the boards for air movement. Lumber is gradually ruined by exposure to the damp compost, and boards occasionally have to be replaced as they decay. Discarded pallets can be used to make an inexpensive yet durable composting enclosure. Decomposition will occur even if a compost pile is ignored after it has been built, but it will occur at a slower rate. Adding water to maintain moist conditions and turning the pile to improve aeration will speed the process. To check the moisture content of the pile, squeeze a handful of compost. If a few drops of water can be squeezed out, moisture is about right. If no drops fall, the pile is too dry. If water trickles out, the pile is too wet. Cover the pile with plastic or other materials during wet weather to avoid excessive moisture buildup. A properly built pile should develop a temperature of at least 110 degrees F at the center in about a week during summer or up to a month in cooler seasons. When that temperature is reached, the pile should be opened, compacted materials should be loosened, and the pile should be turned or stirred so that the material previously on the top and sides is moved to the center. During warm weather, the pile may need another turning after a second week. The optimum temperature in an active compost pile is 135 to 140 degrees F. Compost piles occasionally reach temperatures as high as 170 degrees F — hot enough to kill some of the microorganisms. This usually happens when excessive amounts of wet, high-nitrogen materials are added to the pile. The rate of heat buildup and decomposition also will depend on external temperatures. In winter, little decomposition occurs except in the center of large piles. Piles may be turned by slicing through them with a spade and turning over each slice. The main objectives of turning are to aerate the pile and to shift materials from the outside closer to the center, where they may also be heated and decomposed. Moisten dry spots in the pile by spraying with water during turning. As materials decompose, the pile heats up and should also shrink, eventually becoming no more than half its original height. Often, the pile’s volume may shrink by 70 to 80 percent. Compost is ready to use when it is dark brown and crumbly and has an earthy smell. For a very fine product, run the compost through a ½-inch screen and either use the coarser material for mulch or return it to the pile for continued decomposition with other materials. - The pile is producing a bad odor. Cause: The pile may be too wet, too tight, or both. Turn it to loosen and allow better air exchange in the pile. If the pile is too wet, add dry new materials as you turn it. Odors also may indicate that animal products are in the compost pile. - No decomposition seems to be taking place. Cause: The pile is too dry. Moisten the materials while turning the pile. - The compost is moist enough and the center is warm but not hot enough for complete breakdown. Cause: The pile is probably too small. If the pile is not small, more nitrogen may be needed. If the pile is small, collect more materials or add those available to make a larger pile. 
Turn and mix the old ingredients that may have only slightly decomposed into the new pile. - The heap is moist and sweet smelling, with some decomposition, but still does not heat enough. Cause: There is not enough nitrogen available for proper decomposition. Mix a nitrogen source such as fresh grass clippings, manure or fertilizer into the pile. When compost is ready to use, it should be dark and crumbly, and you should not be able to recognize the original composted items (Figure 7). If compost is not used promptly, it still makes a good soil amendment, but nitrogen may be lost through leaching. Well-decomposed compost ready for use. Fast composting may produce good compost in three to eight weeks. Traditional composting methods will produce a product in three to nine months, depending on the types of organic materials used, temperatures, and how often the compost is turned. In some cases, screening compost through a 1-inch wire mesh will help sort out incompletely decomposed materials before use. Twigs decompose slowly, and if they have become a part of the debris, they may have to be removed from finished compost to be returned to the heap. Compost is also very suitable to use for potting houseplants or starting many types of seeds. Recent research has shown that microorganisms found in mature compost can actually suppress plant diseases such as those causing “damping off” as effectively as fungicides. Generally, best results are obtained when compost is mixed with other materials such as perlite and vermiculite, with about 30 percent of the volume being compost. Compost should be added annually if you are using it to build good soil. The best time to add compost to the vegetable or flower garden is during fall or spring tilling. You can add it to the soil when planting trees, shrubs, annuals or perennials. Compost also is an excellent mulch or top-dressing around flowers, vegetables, shrubs and trees. If used as mulch, the compost need not be completely finished. Compost may be used as a lawn top-dressing, but it should not be applied more than a ¼ inch thick. For this purpose, the compost should be screened so that only the finer particles are used.
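Looking back at the fertilizer arithmetic in the layering section, the leaf-pile example can be checked with a short calculation. This is only a sketch of that back-of-envelope figure, built from the guideline of about 0.8 ounce of actual nitrogen per bushel of leaves and roughly 23 bushels per cubic yard.

```python
# Nitrogen needed for a leaf pile, per the layering guideline above.
OZ_N_PER_BUSHEL = 0.8       # ounces of actual nitrogen per bushel of leaves
BUSHELS_PER_CU_YD = 23      # approximate bushels in one cubic yard of leaves

def fertilizer_needed(cubic_yards, pct_n_in_fertilizer):
    """Return (oz of N, lb of N, lb of fertilizer) for a given leaf volume."""
    oz_n = cubic_yards * BUSHELS_PER_CU_YD * OZ_N_PER_BUSHEL
    lb_n = oz_n / 16
    lb_fert = lb_n / (pct_n_in_fertilizer / 100)
    return oz_n, lb_n, lb_fert

oz_n, lb_n, lb_fert = fertilizer_needed(cubic_yards=1, pct_n_in_fertilizer=20)
print(f"{oz_n:.0f} oz N  =  {lb_n:.1f} lb N  =  {lb_fert:.1f} lb of 20% N fertilizer")
# Prints roughly 18 oz, 1.1 lb of N, and about 5.5-6 lb of fertilizer,
# matching the rounded figures quoted in the text.
```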
On Feb. 15, a 13,000-ton rock plunged through the skies above Chelyabinsk, Russia. It shone 30 times brighter than the sun, and hurtled at 42,000 miles per hour toward a city of more than a million people. As the rock broke apart during its fiery descent, it dispersed energy equivalent to 500 kilotons of TNT, shattering thousands of windows. Some 1,500 people were injured, and shock waves caused a ribbon of damage extending 55 miles on either side of the meteor’s path. Witnesses thought a nuclear war was upon them. Luckily, no one died. This incident alone might not be cause to start worrying about death by asteroid. But new research suggests that space rocks as large as the one that fell over Chelyabinsk – about 19 meters (62 feet) across – are three to five times more numerous than scientists had realized. The study, led by Peter Brown at the University of Western Ontario, also found that larger and more dangerous ones are unexpectedly abundant. In other words, alien projectiles pose a serious threat. Is it a manageable one? For decades, astronomers have focused on the dangers posed by very large asteroids. Starting in 1998, NASA led an effort to catalog ‘near-Earth objects’ at least a kilometer in diameter – big enough to cause a global catastrophe if they collided with Earth. About 90 percent of these have been identified. Yet smaller, Chelyabinsk-sized objects are harder to find. Scientists estimate there are more than a million of them nearby, and only about a thousand have so far been found. Locating and tracking every one isn’t practical. So what to do? Astronomers at the University of Hawaii are working on one solution. Their system, called ATLAS, will deploy eight small telescopes to survey the entire sky twice each night in search of incoming asteroids, starting in 2015. ATLAS won’t peer as deeply into space as other detection systems – such as the Pan- STARRS facility, which hunts for ‘killer asteroids’ that are decades from colliding with Earth – but it should find imminent threats more efficiently. If it works as planned, an international network of such systems could offer low-cost early warning. The next question is how to prepare for any strike we see coming. Another new study, which analyzed the Chelyabinsk meteor’s trajectory and behavior in detail, should help. With better knowledge of how alien objects fall when they enter Earth’s atmosphere, it should be possible to determine the likely path of the next meteor, ensure people in harm’s way are informed or evacuated, and prepare an emergency response. With a few days’ warning in Chelyabinsk, officials could have urged residents simply to stay away from windows; shattered glass caused almost all the reported injuries. Larger asteroids, though, remain a different story. NASA and other space watchers are making progress on their next goal of cataloging near-Earth objects larger than 140 meters, but the cosmos is still full of nasty surprises. In just the past few weeks, NASA has discovered three huge new space rocks, two of them initially estimated to be 20 kilometers wide. Although these won’t threaten Earth any time soon, the find was alarming. If such a behemoth was on a collision course with Earth, disaster planning wouldn’t be enough: We’d need a strategy for deflecting it. With enough warning, that actually wouldn’t be so hard. NASA has suggested several plausible options, including hitting an incoming asteroid with a spacecraft or a nuclear weapon. Unfortunately, it’s hard to get people to take this idea seriously. 
(For one thing, it suffers by association with a lamentable Bruce Willis film.) And Congress has yet to designate responsibility for such a mission to any agency or take any other substantial steps to ensure the U.S. could react quickly to a potential threat. The United Nations is starting to take the lead on that front. It’s helping countries coordinate their detection efforts, and its Committee on the Peaceful Uses of Outer Space is making plans to coordinate an international deflection mission. That’s precisely the kind of nuts-and-bolts bureaucratic preparation that’s needed to enable the world to respond rapidly. And in a cosmos roiling with hazardous and unpredictable threats, preparation is the best defense we have.
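As a back-of-envelope check, the quoted mass and entry speed do line up with the 500-kiloton energy estimate. The sketch below assumes the mass is in short tons and uses standard unit conversions; it is an order-of-magnitude illustration, not the published analysis.

```python
# Rough kinetic-energy check for the Chelyabinsk figures quoted above.
mass_tons = 13_000             # quoted mass (assumed short tons here)
speed_mph = 42_000             # quoted entry speed

KG_PER_TON = 907.18            # kilograms per short ton
MS_PER_MPH = 0.44704           # metres per second per mile per hour
J_PER_KT_TNT = 4.184e12        # joules per kiloton of TNT

mass_kg = mass_tons * KG_PER_TON
speed_ms = speed_mph * MS_PER_MPH

kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
kilotons = kinetic_energy_j / J_PER_KT_TNT

print(f"Kinetic energy = {kinetic_energy_j:.2e} J = {kilotons:.0f} kt of TNT")
# About 2.1e15 J, or roughly 500 kilotons, consistent with the figure in the text.
```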
Theories of Aggregate Supply
What do you mean by the Theory of Supply in Economics?
Supply is the quantity of a commodity that sellers are willing to offer for sale at various prices during a given period of time. There is a direct relationship between the price of a commodity and the quantity a seller offers for sale over a specified period. If the price of the commodity rises while other factors remain constant, the quantity offered for sale increases; when the price falls, the quantity offered for sale decreases. This relationship between the price of the commodity and the quantity the supplier is willing to sell is called the Theory of Supply.
Law of Supply
In simple words, the law of supply states that sellers supply more goods at higher prices and fewer goods at lower prices. The supply function is illustrated by the supply schedule and supply curve described below.
Market Supply Schedule of a Commodity: The schedule shows that the seller is willing to sell 100 units of the good at $4. Observe that as the price falls, the quantity the seller is willing to sell also falls; at $1, only 40 units are offered for sale.
In the accompanying figure, the price of the commodity is on the Y-axis and the quantity of the commodity is on the X-axis. The four points, d, c, b, and a, show each price paired with the specific quantity supplied at that price. The supply curve slopes upward from left to right, indicating that a smaller quantity is offered for sale at a lower price. Economics students study the theory of supply in detail in Class 12 Economics.
Theories of Aggregate Supply Explained
The theory of aggregate supply and aggregate demand was developed by John Maynard Keynes and presented in his work The General Theory of Employment, Interest, and Money. In macroeconomics, aggregate supply (AS), also termed domestic final supply (DFS), is the total supply of goods and services that producers in an economy plan to sell during a specified period of time. In simple words, aggregate supply corresponds to an economy's total output, its Gross Domestic Product (GDP). Typically, a positive relationship is observed between the price level and aggregate supply. In this framework, the main components of aggregate supply are consumption and saving, so aggregate supply is the sum of consumption expenditure and saving:
Aggregate Supply (AS) = Consumption Expenditure + Saving (S)
The Formula for the Theory of Supply:
Qx = Φ(Px, Tech, F, X)
where Qx is the quantity supplied, Φ means "a function of," Px is the price, Tech is technology, F is features of nature, and X is taxes and subsidies. [It is assumed that the variables other than price remain constant.]
Difference Between the Theory of Supply and the Theory of Aggregate Supply
The theory of supply is a concept of microeconomics, while aggregate supply is a concept of macroeconomics. The law of supply and demand is a fundamental economic theory that establishes a relation between what producers sell and what consumers demand, whereas aggregate supply is the total supply in an economy, that is, the total amount a nation produces and sells.
Demand and Supply Theory of Wages
Wages are the price of the services rendered by labour to the employer. Just as product prices are determined by their supply and demand curves, wages are determined with the help of the demand for and supply of labour. The modern theory of wages was given by J.R. Hicks.
Did You Know?
The shift in a supply curve is caused by a change in the cost of production, a change in the number of producers, a change in tax rates, or changes in the state of production technology in use. Q. The supply schedule for firm A and firm B is given below. Compute the market supply schedule for the same. A1. The market supply at each price is the sum of the quantities supplied by firm A and firm B at that price.
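Since the firms' schedules themselves are not reproduced here, the sketch below uses hypothetical quantities purely to illustrate the rule stated in the answer: at each price, market supply is the horizontal sum of the quantities supplied by firm A and firm B.

```python
# Hypothetical supply schedules for firm A and firm B (price -> quantity).
firm_a = {1: 10, 2: 20, 3: 30, 4: 40}
firm_b = {1: 5,  2: 15, 3: 25, 4: 35}

# Market supply at each price is simply the sum of the two firms' quantities.
market = {price: firm_a[price] + firm_b[price] for price in firm_a}

print("Price  Firm A  Firm B  Market")
for price in sorted(market):
    print(f"${price:<6}{firm_a[price]:<8}{firm_b[price]:<8}{market[price]}")
```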
the Physics Education Technology Project This simulation promotes understanding of isotopes by providing a simple way to model isotopes of the first 10 elements in the Periodic Table. In the most basic model, users click on an atomic symbol. The simulation displays a stable isotope for that atom. (For example, choose Helium and view a nucleus with two protons and two neutrons.) Now, drag neutrons into the nucleus and watch to see if the atom becomes unstable. Students may be surprised to see that Beryllium and Flourine, for example, are unstable with equal numbers of protons and neutrons in the nucleus. Click on "Abundance in Nature" to see how common or rare a particular isotope is in nature. Mass number and Atomic Mass (amu) are displayed in real time. See Related Materials for a lesson plan developed specifically for use with the "Isotopes and Atomic Mass" simulation. This item is part of a larger collection of simulations developed by the Physics Education Technology project (PhET). Please note that this resource requires Java Applet Plug-in. atom, atom simulation, atomic mass, elements, isotope, nuclear properties, radioactivity, stable element, unstable element Metadata instance created July 18, 2011 by Caroline Hall July 18, 2011 by Caroline Hall Last Update when Cataloged: July 8, 2011 AAAS Benchmark Alignments (2008 Version) 4. The Physical Setting 4D. The Structure of Matter 6-8: 4D/M1a. All matter is made up of atoms, which are far too small to see directly through a microscope. 6-8: 4D/M1b. The atoms of any element are like other atoms of the same element, but are different from the atoms of other elements. 9-12: 4D/H1. Atoms are made of a positively charged nucleus surrounded by negatively charged electrons. The nucleus is a tiny fraction of the volume of an atom but makes up almost all of its mass. The nucleus is composed of protons and neutrons which have roughly the same mass but differ in that protons are positively charged while neutrons have no electric charge. 9-12: 4D/H2. The number of protons in the nucleus determines what an atom's electron configuration can be and so defines the element. An atom's electron configuration, particularly the outermost electrons, determines how the atom can interact with other atoms. Atoms form bonds to other atoms by transferring or sharing electrons. 9-12: 4D/H3. Although neutrons have little effect on how an atom interacts with other atoms, the number of neutrons does affect the mass and stability of the nucleus. Isotopes of the same element have the same number of protons (and therefore of electrons) but differ in the number of neutrons. 9-12: 4D/H4. The nucleus of radioactive isotopes is unstable and spontaneously decays, emitting particles and/or wavelike radiation. It cannot be predicted exactly when, if ever, an unstable nucleus will decay, but a large group of identical nuclei decay at a predictable rate. This predictability of decay rate allows radioactivity to be used for estimating the age of materials that contain radioactive substances. 9-12: 4D/H5. Scientists continue to investigate atoms and have discovered even smaller constituents of which neutrons and protons are made. 11. Common Themes 6-8: 11B/M1. Models are often used to think about processes that happen too slowly, too quickly, or on too small a scale to observe directly. They are also used for processes that are too vast, too complex, or too dangerous to study. 6-8: 11B/M4. Simulations are often useful in modeling events and processes. 6-8: 11D/M3. 
Natural phenomena often involve sizes, durations, and speeds that are extremely small or extremely large. These phenomena may be difficult to appreciate because they involve magnitudes far outside human experience. This resource is part of a Physics Front Topical Unit. Topic: Particles and Interactions and the Standard Model Unit Title: Molecular Structures and Bonding This simulation promotes understanding of isotopes by providing a simple way to model isotopes of the first 10 elements in the Periodic Table. In the most basic model, users click on an atomic symbol. The simulation displays a stable isotope for that atom. (For example, choose Helium and view a nucleus with two protons and two neutrons.) Now, drag neutrons into the nucleus and watch to see if the atom becomes unstable. %0 Electronic Source %D July 8, 2011 %T PhET Simulation: Isotopes and Atomic Mass %I Physics Education Technology Project %V 2014 %N 21 November 2014 %8 July 8, 2011 %9 application/java %U http://phet.colorado.edu/en/simulation/isotopes-and-atomic-mass Disclaimer: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications. This is a lesson plan appropriate for Grades 7-10, created specifically to accompany the "Isotopes and Atomic Mass" simulation. It is very effective at guiding beginning students through an exploration of the atomic nucleus and factors that affect the stability of an atom.
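The "Abundance in Nature" and atomic-mass displays in the simulation amount to an abundance-weighted average over isotopes. The sketch below shows that calculation for boron, one of the first 10 elements the simulation covers; the isotope masses and abundances are standard published values, included here only as an illustration and not taken from the simulation itself.

```python
# Average atomic mass as the abundance-weighted mean of isotope masses.
# Boron isotopes: (mass in amu, natural abundance as a fraction).
isotopes = [
    (10.013, 0.199),   # boron-10
    (11.009, 0.801),   # boron-11
]

average_mass = sum(mass * abundance for mass, abundance in isotopes)
print(f"Average atomic mass of boron = {average_mass:.2f} amu")  # about 10.81 amu
```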
How to find Probability and Expected Value in Mathematics?
Mathematics is the science that helps us understand the basic concepts of numbers, counting, measurement, the shapes of objects, and so on. It also has branches that deal with different aspects of calculation and possibility. Here we summarize the concepts of probability and expected value. Probability is a measure of how likely something is to happen; it describes the possible outcomes of an event before the event occurs. A related idea is the expected value. It comes from probability theory and is defined as the average outcome we expect from a random process in response to the inputs or actions we put in. This article discusses how to find probability and expected value in mathematics and their important features.
What is probability?
To begin, probability is defined as the likelihood that something will happen. It describes the possible outcomes of an event. The outcome of an event cannot be known before it happens; it can only be described by one thing, "chance." For example, when rolling a die there are 6 possible outcomes, so the probability of any particular number on the first attempt is 1 in 6. On every trial, whether the desired number appears is determined by chance.
How to find probability?
Probability is used in many situations; businesses and firms use it when developing strategies, marketing plans, and estimates of revenue, sales, and expenditure that are used to run a business. When several events are involved, the probability can be found by breaking the problem down into separate calculations and then multiplying the individual probabilities together (for independent events) to obtain the probability of the combined outcome. The steps that help in finding or calculating a probability are as follows:
- Identify a single event with a single outcome.
- Recognize the total number of possible outcomes.
- Divide the number of favorable outcomes by the total number of possible outcomes.
These are the steps to follow to find a probability; an online probability calculator with steps can also be used.
Formula for probability
The formula for probability refers to the possibility that an event happens, and it equals the ratio of the number of favorable outcomes to the total number of outcomes:
Probability P(E) = Number of favorable outcomes / Total number of outcomes
In this formula, P(E) is the probability that the event E happens. With this formula, the probability of an event or situation can easily be calculated.
What is the expected value?
The expected value is found by multiplying each possible outcome by its probability. As its name indicates, it is the estimated or average value used in the case of random variables. The probability distribution of the variable is used to compute this average, or mean. Expected value is calculated in different ways for different kinds of variables; in particular, integrals are needed in the situation of continuous variables. Through this calculation we can find the centre of the variable, or the average of the random variable.
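As a quick check of the probability formula given above, here is a small illustrative sketch using the familiar die example.

```python
from fractions import Fraction

# P(E) = favorable outcomes / total outcomes, for one roll of a fair die.
outcomes = [1, 2, 3, 4, 5, 6]
favorable = [n for n in outcomes if n % 2 == 0]   # event E: "roll an even number"

p_even = Fraction(len(favorable), len(outcomes))
print(f"P(even) = {p_even}")                               # 1/2
print(f"P(rolling a 6) = {Fraction(1, len(outcomes))}")    # 1/6
```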
Categories of expected value
Expected values are calculated for different kinds of variables, which can be grouped as:
- Single discrete variables
- Single continuous variables
- Multiple discrete variables
- Multiple continuous variables
How to find expected value?
In simple words, the expected value is measured by multiplying every possible outcome by the probability that it will occur. After the multiplication, all of the values are added up to give the average, or expected, result. The probability of each outcome reflects the chance of that outcome arising in the process. For example, consider a game with 3 bottles and 3 turns to win a prize. Suppose the first turn wins with probability 1/3, the second with probability 2/3, and the third with probability 3/3. The expected number of wins is the sum of these values: 1/3 + 2/3 + 3/3 = 2. So, after a proper estimation of expected values, the expected number of wins in the bottle game is 2.
Related: You can also check the answer to a problem online, as there are many expected value calculators available that compute the expected value within seconds and show all the steps of the calculation.
Formula for expected value
The general formula for the expected value can be written as:
EV = Σ P(xᵢ) · xᵢ, summed over i = 1 to n
The terms in the expected value formula are:
EV = the expected value
Σ = the sigma (summation) notation
P(xᵢ) = the probability of outcome xᵢ
xᵢ = the value of each outcome being multiplied
n = the number of possible outcomes
The expected value formula is helpful in daily life for estimating investments, currency, travelling expenditures, and also the chances of winning prize money or games.
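To tie the formula to a concrete calculation, the sketch below computes EV = Σ P(x)·x for a fair die and for the bottle game described above; it is an illustration only.

```python
from fractions import Fraction

def expected_value(distribution):
    """EV = sum of P(x) * x over every outcome x."""
    return sum(p * x for x, p in distribution.items())

# Fair six-sided die: each face has probability 1/6.
die = {face: Fraction(1, 6) for face in range(1, 7)}
print("E[die roll] =", expected_value(die))            # 7/2, i.e. 3.5

# Bottle game: expected number of wins over three turns, with win
# probabilities 1/3, 2/3 and 3/3 as in the example above.
win_probabilities = [Fraction(1, 3), Fraction(2, 3), Fraction(3, 3)]
expected_wins = sum(win_probabilities)   # each win contributes 1 if it occurs
print("Expected wins in the bottle game =", expected_wins)   # 2
```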
In this video, we will learn how to find general term or a recursive formula for a sequence and how to use these to work out terms in the sequence. We will also learn what an alternating sequence is and what increasing or decreasing sequences are. Let’s begin with an example of a sequence. A sequence like this one — two, four, six, eight, and so on — can be described in terms of a position index. For example, the term with index one is two, the term with index two is four, and the term with index three is six, and so on. The term with index one can also be written as 𝑎 sub one. And we can read that as the first term. The second term can be referred to as 𝑎 sub two and the third term as 𝑎 sub three, and so on. When it comes to writing the rules of sequences, we’re really trying to make life a little easier. For example, if we needed to find the 290th term, we wouldn’t want to list all the terms up to the 290th term. What we want is a relationship between the index and the term value that will allow us to very quickly write the term for any index. The shorthand for any general term is the letter 𝑛. For a sequence, we want to find the 𝑛th term given an index 𝑛. You might have already worked out the relationship in this sequence between the index and the term. Every index is multiplied by two to give us the term. So for a term with index 𝑛, the 𝑛th term is two 𝑛. So the term that has the index of 290 would in fact be 580. And so for the sequence two, four, six, eight, and so on, the 𝑛th term, 𝑎 sub 𝑛, is equal to two 𝑛. When we’re working with sequences, we can also be given the 𝑛th term and asked to work out the first few terms of a sequence. In the first example, we’ll see how we can do this. Let’s have a look. Find the first five terms of the sequence whose general term is given by 𝑎 sub 𝑛 equals 𝑛 times 𝑛 minus 34, where 𝑛 is greater than or equal to one. In this question, we’re given the general or the 𝑛th term of a sequence for an index 𝑛. To find any term in the sequence, we would substitute that value for 𝑛 into the general term. For example, if we wanted to find the 20th term, we would substitute 𝑛 equals 20 into the 𝑛th term. However, in this question, we need to find the first five terms. We’re told that the index 𝑛 is greater than or equal to one. So that means we’re going to start by substituting 𝑛 equals one then 𝑛 equals two, three, four, and five into the 𝑛th term to find the first five terms. Let’s start by finding 𝑎 sub one, the first term, which occurs when 𝑛 is equal to one. This means that we’d have the first term 𝑎 sub one equals one times one minus 34. One minus 34 is negative 33. And when we multiply that by one, we get negative 33. And so the first term is negative 33. Now we can substitute 𝑛 equals two into the 𝑛th term. This time, we’ll have the second term, 𝑎 sub two, is equal to two times two minus 34. Simplifying this, we have two multiplied by negative 32, which gives us the second term as negative 64. For the third term, we’ll follow the same process, only this time we’re substituting in 𝑛 equals three. This gives us that the third term is equal to three times three minus 34, which is negative 93. When 𝑛 is equal to four, the fourth term is negative 120. Finally, when 𝑛 is equal to five, the fifth term is equal to negative 145. And so we can give the answer for the first five terms of the sequence. And we’ve found that by substituting the five different values into the 𝑛th term formula. We’ll now have a look at how we find a recursive formula for a sequence. 
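The substitution we just carried out by hand is easy to check with a short script; this is only an illustrative sketch of the same arithmetic.

```python
# General term of the sequence from the example: a_n = n * (n - 34), for n >= 1.
def a(n):
    return n * (n - 34)

first_five = [a(n) for n in range(1, 6)]
print(first_five)   # [-33, -64, -93, -120, -145]
```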
Let’s have a look at this example sequence: one, four, seven, 10, and so on. We can compare the values of the index 𝑛 is greater than or equal to one to the values of three 𝑛. These values of three 𝑛 do not give us the same values as we have in the sequence. But if we subtracted two from each of the values of three 𝑛, we would get the terms of the sequence. In fact, we could write that the 𝑛th term of this sequence is three 𝑛 minus two. However, there’s also another way we could describe this sequence. We might have noticed that the pattern between terms is to add three. For example, if we wanted to find the fifth term, we would take the fourth term and add three. So if we wanted to find any 𝑛th term, we would take the term before it and add three. Using the same form of notation, the term before the 𝑛th term — that’s the term with index 𝑛 — would be the term with index 𝑛 minus one. And so a different way to describe this sequence would be to say that the 𝑛th term 𝑎 sub 𝑛 is equal to 𝑎 sub 𝑛 minus one plus three. When we’re given a formula for a sequence in this way, we also need to say what the first term is. We can write it as a list like this, so we have 𝑎 sub one is equal to one, and then we have the 𝑛th term. Notice that we’ve also given the values of the index as 𝑛 is greater than or equal to two. In this case, the index has to start with two. It can’t start with one, as we’ve been given the first term. And if we did substitute one into this part, we’d be trying to find the term with index zero. A rule that’s written in this way is called a recursive formula for a sequence. A recursive formula is a formula in which the terms of a sequence are defined using one or more of the previous terms. In this case, our term with index 𝑛 is defined by the term before it. Before we finish with recursive formulas, there’s just one other point to note. In this case, we wrote the formula for 𝑎 sub 𝑛. But it could also have been given as a formula to find the term with index 𝑛 plus one. Notice, however, that we still have this relationship that it’s the term before it plus three. The first term will still be the same both times. But notice that the index is different. Because we’re given the formula 𝑎 sub 𝑛 plus one, we can start with a first value of 𝑛 as one to find the term with We’ll now see an example of how we can find a specific term in a sequence when we’re given a recursive formula. If 𝑎 sub 𝑛 is a sequence defined as 𝑎 sub one equals 11 and 𝑎 sub 𝑛 plus one equals 𝑎 sub 𝑛 minus three, where 𝑛 is greater than or equal to one, then the fourth term equals what. We’re given four answer options: two, four, five, or eight. In this question, we’re given a formula for a sequence. This type of formula is called a recursive formula. And that’s when the terms of a sequence are defined using one or more previous terms. If we wanted to describe this term in words, we would say that for any term with index 𝑛 plus one, we take the term before it — that’s the one with index 𝑛 — and we subtract three. And so if we wanted to find the fourth term — that’s the term with index four — that means that 𝑛 plus one must be equal to four, and so 𝑛 must be three. And so the fourth term must be equal to the third term minus three. But how do we find the third Well, the third term — that’s the term with index three — must happen when 𝑛 plus one is three. And so 𝑛 must be equal to two. So the third term is equal to the second term minus three. Of course, we don’t know the second term either. But you’ve guessed it! 
It’s going to be the first term minus three. And this is also one of the disadvantages of recursive formulas because we need to work out every term up to the term that we need. We do get a little bit of relief here because we’re actually given the first term. 𝑎 sub one is equal to 11. So now we can work forwards through the sequence. If 𝑎 sub one is equal to 11 and 𝑎 sub two is equal to 𝑎 sub one minus three, then 𝑎 sub two, the second term, is equal to 11 minus three. And that’s equal to eight. As the third term is equal to the second term minus three, then our third term must be equal to eight minus three, which is five. And finally then, the fourth term is the third term minus three. And so five minus three is equal to two. We can therefore give the answer that the fourth term of the sequence is that given in option (A). It’s the term two. In this example, the terms of the sequence went from 11, eight, five, two, and so on. This type of sequence would be called a decreasing sequence. We’ll now define more formulae, what we mean by increasing, decreasing, or constant sequences along with the term A sequence of real numbers 𝑎 sub 𝑛 is said to be increasing if 𝑎 sub 𝑛 plus one is greater than 𝑎 sub 𝑛 for all values of 𝑛 in the natural numbers. The terminology here really means that every term in the sequence must be greater than the term before it in order for the sequence to be increasing. For example, if we took the sequence of square numbers one, four, nine, 16, and so on, every value in that sequence is larger than the term before it. So the sequence of square numbers is an increasing sequence. Notice that this has to be true for all values of 𝑛. If we had another sequence that went one, two, three, one, and so on, this would not be increasing because although we have an increasing portion of the sequence, it’s not all increasing. We can define a decreasing sequence in a similar way. This time, every term in the sequence must be less than the term before it. An example of a decreasing sequence could be the sequence one, one-half, one-third, one-quarter, and so on. When we have a sequence in which every term is equal to the term before it, then it’s called a constant sequence. An example of this type of sequence might be the sequence of all twos. If a sequence is one of these three types, that is, increasing, decreasing, or constant, then it’s called a monotonic sequence. In the next example, we’ll identify if a sequence is increasing, decreasing, or neither. Is the sequence 𝑎 sub 𝑛 equals negative one to the power of 𝑛 over 11𝑛 minus 22 increasing, decreasing, or When we’re considering if a sequence is increasing or decreasing, we’re comparing any term to the term before it. If a sequence is increasing, then any term 𝑎 sub 𝑛 must be greater than 𝑎 sub 𝑛 minus one. That must be true for all values of 𝑛. Similarly, if a sequence is decreasing, then any term of index 𝑛 in a sequence must be less than the term before it. What we can do is to work out the first few terms of the sequence and see if the values are increasing, decreasing, or So we could take the 𝑛th term, and we’ll start by substituting 𝑛 is equal to one. So for the first term 𝑎 sub one, we have negative one to the power of one over 11 times one minus 22. When we simplify this, we get the fraction negative 243 over 11. Now that we’ve found the first term, we can find the second term by substituting in 𝑛 is equal to two. When we simplify negative one squared over 11 times two minus 22, we get the fraction negative 483 over 22. 
We can find the third term in the same way by substituting 𝑛 is equal to three. This means that we get a third term, 𝑎 sub three, equal to negative 727 over 33. We have at this point got three terms in the sequence, but it’s a little difficult to see if they’re increasing or decreasing. So it might be helpful to find their decimal equivalents. The first term is approximately negative 22.09, the second term approximately negative 21.95, and the third term is approximately negative 22.03. We notice that the second term is greater than the first term. However, the third term is less than the second term. That means that we can’t say that for all values either 𝑎 sub 𝑛 is bigger than 𝑎 sub 𝑛 minus one or 𝑎 sub 𝑛 is less than 𝑎 sub 𝑛 minus one. And that means that the sequence isn’t increasing or decreasing, so it must be neither. The answer is that 𝑎 sub 𝑛 is neither increasing nor decreasing. One other piece of terminology to introduce is that of alternating sequences. An alternating sequence is one where the terms of the sequence alternate between positive and negative. For example, the sequence negative two, three, negative four, five, negative six, and so on would be an alternating sequence. The values switch between positive and negative. We’ll now see an example of how we can find a general term of an alternating sequence. The general term of the sequence three, negative six, nine, negative 12, 15 is 𝑎 sub 𝑛 equals what. And we’re given four answer options. We might notice that the terms of this sequence alternate between positive and negative values. This type of sequence is defined as an alternating sequence. We can take the sequence and consider if we just had the absolute values of the sequence, then we would have the terms three, six, nine, 12, and 15. If we took the index in this case as 𝑛 is greater than or equal to one, then for any index 𝑛, the 𝑛th term of these absolute values would be 𝑎 sub 𝑛 is equal to three 𝑛. But as we don’t have just three, six, nine, 12, and so on — we have three, negative six, nine, negative 12, and so on — then the 𝑛th term of this sequence is not three 𝑛. Furthermore, we can also say that the 𝑛th term is not negative three 𝑛 either. In this case, the sequence would have the values negative three, negative six, negative nine, negative 12, and so on. However, we do have a sequence that does very closely match three 𝑛. So one way to find a general term of a sequence that includes three 𝑛 but which alternates between positive and negative is to multiply three 𝑛 by a power of negative one. We notice that options (A) and (B) present two alternatives. Let’s have a look at the 𝑛th term option given in (A). In order to find the first term, we would substitute in 𝑛 is equal to one. Negative one to the power of one is negative one, and three times one is three. Multiplying these gives us the first term of negative three. However, if we look at the first term in the given sequence, it’s three and not negative three. Therefore, the 𝑛th term in option (A) is incorrect. The 𝑛th term given in option (B) is different because the exponent of negative one is 𝑛 plus one. When we substitute in 𝑛 is equal to one to find the first term, we have negative one to the power of one plus one, which is two, and negative one squared gives us one, which when multiplied by three gives us three. This matches the given first term. Substituting in 𝑛 is equal to two, we get that the second term is equal to negative six. We can observe the pattern. 
When we have an even index, like we did here when 𝑛 is equal to two, then the exponent of negative one will be odd. Negative one with an odd power will give us the value of negative one. The result of this is that every even index gives us a term value which is negative. If we continued by substituting an odd index of three, we would get an even value of nine. We can therefore give the answer that it’s option (B). 𝑎 sub 𝑛 is equal to negative one to the power of 𝑛 plus one times three 𝑛. We can now summarize the key points of this video. Firstly, we saw that to find the terms of a sequence given a general term, we substitute values of 𝑛 is greater than or equal to one into the formula for the general term. We defined recursive formulas and saw that sometimes we might need to apply the formula several times in order to find the values of preceding terms. Finally, we defined increasing, decreasing, constant, and alternating sequences.
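As a wrap-up, the ideas from the video can be checked numerically. The sketch below evaluates the recursive sequence from the second example, the alternating general term from the last example, and the "neither increasing nor decreasing" sequence (with the parenthesization that matches the worked values in the video); it is an illustration of the definitions rather than part of the lesson.

```python
# Recursive sequence: a_1 = 11, a_(n+1) = a_n - 3.
def recursive_terms(count):
    terms = [11]
    while len(terms) < count:
        terms.append(terms[-1] - 3)
    return terms

print(recursive_terms(4))          # [11, 8, 5, 2]  -> the fourth term is 2

# Alternating sequence: a_n = (-1)**(n + 1) * 3 * n.
print([(-1) ** (n + 1) * 3 * n for n in range(1, 6)])   # [3, -6, 9, -12, 15]

# Neither increasing nor decreasing: a_n = (-1)**n / (11 * n) - 22.
terms = [(-1) ** n / (11 * n) - 22 for n in range(1, 4)]
print([round(t, 2) for t in terms])   # [-22.09, -21.95, -22.03]

def classify(seq):
    """Label a finite run of terms as increasing, decreasing, or neither."""
    if all(later > earlier for earlier, later in zip(seq, seq[1:])):
        return "increasing"
    if all(later < earlier for earlier, later in zip(seq, seq[1:])):
        return "decreasing"
    return "neither"

print(classify(terms))   # neither
```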
The term monetary policy refers to the process by which a central bank controls the supply of money in an economy. Monetary policy often attempts to control an overarching economic variable such as inflation or interest rates. A central bank is the financial institution that oversees monetary policy and regulates the commercial banking system of a country. In the United States, the Federal Reserve Act provides for a central banking system, which is ultimately established to serve and protect the public interest. Monetary policy is the strategy the central bank uses to influence economic variables such as interest rates and inflation. While there are several strategies within monetary policy, the overall goal is to foster a healthy rate of economic growth while maintaining a stable exchange rate between the local currency and those of foreign nations. There are a number of monetary policy strategies a central bank can use, including inflation targeting, price level targeting, fixed exchange rates, and even adopting a gold standard. Generally, all of these strategies steer the economy in one of two ways: - Expansionary: if the central bank would like to stimulate the economy of its country, it increases the money supply, thereby boosting borrowing and consumer spending. - Contractionary: if the central bank would like to slow down its country's rate of growth, it decreases the money supply, thereby decreasing borrowing and consumer spending.
Argument Analysis Worksheet

Part I: Terms and Definitions
• A statement is any unambiguous declarative sentence about a fact (or non-fact) about the world. It says that something is (or isn’t) the case.
• An argument is a series of statements meant to establish a claim.
• A claim or conclusion is the statement whose truth an argument is meant to establish.
• A statement’s truth value is either true or false.
  o All statements have a truth value. A statement is false when what it says about the world is not actually the case. A statement is true when what it says about the world is actually the case.
• A premise is a statement that is used in an argument to establish a conclusion.

What we can say about an argument:
• An argument is valid if its premises necessarily lead to its conclusion. That is, if you accept that the premises are all true, you must accept that the conclusion is true.
• An argument is sound if it is valid and you accept that all its premises are true.
• A good, convincing argument is a sound argument. That is, since you accept that all the premises are true, you must accept that the conclusion is true (because the argument is valid).
• A bad argument is any other kind of argument.

Examples:
• “Every animal needs to breathe in order to live. Fish are animals. Fish cannot breathe in the air. Therefore, fish cannot live in the air.” Here, the claim is that “fish cannot live in the air.” The premises are “Every animal needs to breathe in order to live,” “Fish are animals,” and “Fish cannot breathe in the air.” The argument is valid – the premises necessarily lead to the conclusion. The argument is also sound – the premises are true. It is a good argument.
• “Oranges are green. All green things make me sick. Therefore, oranges make me sick.” The claim is “oranges make me sick.” The premises are “Oranges are green,” and “All green things make me sick.” The argument is valid – if we accept
Airbreathing jet engine

An airbreathing jet engine (or ducted jet engine) is a jet engine that emits a jet of hot exhaust gases formed from air that is forced into the engine by several stages of centrifugal, axial or ram compression, which is then heated and expanded through a nozzle. They are typically gas turbine engines. The majority of the mass flow through an airbreathing jet engine is provided by air taken from outside of the engine and heated internally, using energy stored in the form of fuel. All practical airbreathing jet engines are internal combustion engines that directly heat the air by burning fuel, with the resultant hot gases used for propulsion via a propulsive nozzle, although other techniques for heating the air have been experimented with (such as nuclear jet engines). Most modern jet engine designs are turbofans, which have largely replaced turbojets. These modern engines use a gas turbine engine core with high overall pressure ratio (about 40:1 in 1995) and high turbine entry temperature (about 1800 K in 1995), and provide a great deal of their thrust with a turbine-powered fan stage, rather than with pure exhaust thrust as in a turbojet. These features combine to give a high efficiency, relative to a turbojet. A few jet engines use simple ram effect (ramjet) or pulse combustion (pulsejet) to give compression.

The original air-breathing gas turbine jet engine was the turbojet. It was a concept brought to life by two engineers, Frank Whittle in the United Kingdom and Hans von Ohain in Germany. The turbojet compresses and heats air and then exhausts it as a high speed, high temperature jet to create thrust. While these engines are capable of giving high thrust levels, they are most efficient at very high speeds (over Mach 1), due to the low-mass-flow, high speed nature of the jet exhaust.

Modern turbofans are a development of the turbojet; they are basically a turbojet that includes a new section called the fan stage. Rather than using all of its exhaust gases to provide direct thrust like a turbojet, the turbofan engine extracts some of the power from the exhaust gases inside the engine and uses it to power the fan stage. The fan stage accelerates a large volume of air through a duct, bypassing the engine core (the actual gas turbine component of the engine), and expelling it at the rear as a jet, creating thrust. A proportion of the air that comes through the fan stage enters the engine core rather than being ducted to the rear, and is thus compressed and heated; some of the energy is extracted to power the compressors and fans, while the remainder is exhausted at the rear. This high-speed, hot-gas exhaust blends with the low speed, cool-air exhaust from the fan stage, and both contribute to the overall thrust of the engine. Depending on what proportion of cool air is bypassed around the engine core, a turbofan can be described as a low-bypass, high-bypass, or very-high-bypass engine.

Low bypass engines were the first turbofan engines produced, and provide the majority of their thrust from the hot core exhaust gases, while the fan stage only supplements this. These engines are still commonly seen on military fighter aircraft, because they have a smaller frontal area, which creates less ram drag at supersonic speeds, leaving more of the thrust produced by the engine to propel the aircraft.
Their comparatively high noise levels and subsonic fuel consumption are deemed acceptable in such an application; although the first generation of turbofan airliners used low-bypass engines, their high noise levels and fuel consumption mean they have since fallen out of favor for large aircraft. High bypass engines have a much larger fan stage, and provide most of their thrust from the ducted air of the fan; the engine core provides power to the fan stage, and only a proportion of the overall thrust comes from the engine core exhaust stream. A high-bypass turbofan functions very similarly to a turboprop engine, except it uses a many-bladed fan rather than a multi-blade propeller, and relies on a duct to properly vector the airflow to create thrust.

Over the last several decades, there has been a move towards very high bypass engines, which use fans far larger than the engine core itself, which is typically a modern, high efficiency two- or three-spool design. This high efficiency and power is what allows such large fans to be viable, and the increased thrust available (up to 75,000 lbf per engine in engines such as the Rolls-Royce Trent XWB or General Electric GEnx) has allowed a move to large twin-engine aircraft, such as the Airbus A350 or Boeing 777, as well as allowing twin-engine aircraft to operate on long overwater routes, previously the domain of three- or four-engine aircraft.

Jet engines were designed to power aircraft, but have been used to power jet cars and jet boats for speed record attempts, and even for commercial uses such as by railroads for clearing snow and ice from switches in railyards (mounted in special rail cars), and by race tracks for drying off track surfaces after rain (mounted in special trucks with the jet exhaust blowing onto the track surface).

Types of airbreathing jet engines

Airbreathing jet engines are nearly always internal combustion engines that obtain propulsion from the combustion of fuel inside the engine. Oxygen present in the atmosphere is used to oxidise a fuel source, typically a hydrocarbon-based jet fuel. The burning mixture expands greatly in volume, driving heated air through a propelling nozzle. The main categories are gas turbine powered engines, ram powered jet engines, and pulsed combustion jet engines.

Turbojets consist of an inlet, a compressor, a combustor, a turbine (that drives the compressor) and a propelling nozzle. The compressed air is heated in the combustor and passes through the turbine, then expands in the nozzle to produce a high speed propelling jet. Turbojets have a low propulsive efficiency below about Mach 2 and produce a lot of jet noise, both a result of the very high velocity of the exhaust. Modern jet propelled aircraft are powered by turbofans. These engines, with their lower exhaust velocities, produce less jet noise and use less fuel. Turbojets are still used to power medium range cruise missiles due to their high exhaust speed, low frontal area, which reduces drag, and relative simplicity, which reduces cost.

Most modern jet engines are turbofans. The low pressure compressor (LPC), usually known as a fan, compresses air into a bypass duct whilst its inner portion supercharges the core compressor. The fan is often an integral part of a multi-stage core LPC. The bypass airflow either passes to a separate 'cold nozzle' or mixes with low pressure turbine exhaust gases, before expanding through a 'mixed flow nozzle'.
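The contrast between a turbojet's small, fast exhaust and a turbofan's larger, slower airflow can be made concrete with the standard momentum relations for net thrust, F = ṁ(v_jet − v_flight), and ideal (Froude) propulsive efficiency, η_p = 2 / (1 + v_jet/v_flight). The sketch below is illustrative only; the mass flows and velocities are assumed round numbers, not figures from the article.

```python
# Illustrative sketch: momentum thrust and ideal propulsive efficiency for a
# "turbojet-like" and a "turbofan-like" exhaust. All numbers are assumed.
def net_thrust(mass_flow, v_jet, v_flight):
    return mass_flow * (v_jet - v_flight)          # newtons

def propulsive_efficiency(v_jet, v_flight):
    return 2.0 / (1.0 + v_jet / v_flight)          # ideal (Froude) efficiency

v_flight = 250.0                                    # m/s, roughly airliner cruise
for name, m_dot, v_jet in [("turbojet-like", 80.0, 600.0),
                           ("turbofan-like", 500.0, 320.0)]:
    thrust_kn = net_thrust(m_dot, v_jet, v_flight) / 1000.0
    eta = propulsive_efficiency(v_jet, v_flight)
    print(f"{name}: thrust ~ {thrust_kn:.0f} kN, eta_p ~ {eta:.2f}")
```

With these made-up numbers, the larger, slower flow produces comparable thrust at noticeably higher propulsive efficiency, which is the trade-off the following paragraphs describe.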
In the 1960s there was little difference between civil and military jet engines, apart from the use of afterburning in some (supersonic) applications. Today, turbofans are used for airliners because they have an exhaust speed that is better matched to the subsonic flight speed of the airliner. At airliner flight speeds, the exhaust speed from a turbojet engine is excessively high and wastes energy. The lower exhaust speed from a turbofan gives better fuel consumption. The increased airflow from the fan gives higher thrust at low speeds. The lower exhaust speed also gives much lower jet noise. The comparatively large frontal fan has several effects. Compared to a turbojet of identical thrust, a turbofan has a much larger air mass flow rate and the flow through the bypass duct generates a significant fraction of the thrust. The additional duct air has not been ignited, which gives it a slow speed, but no extra fuel is needed to provide this thrust. Instead, the energy is taken from the central core, which also reduces the core's exhaust speed. The average velocity of the mixed exhaust air is thus reduced (low specific thrust), which is less wasteful of energy but reduces the top speed. Overall, a turbofan can be much more fuel efficient and quieter, and it turns out that the fan also allows greater net thrust to be available at slow speeds.

Thus civil turbofans today have a low exhaust speed (low specific thrust – net thrust divided by airflow) to keep jet noise to a minimum and to improve fuel efficiency. Consequently, the bypass ratio (bypass flow divided by core flow) is relatively high (ratios from 4:1 up to 8:1 are common), with the Rolls-Royce Trent XWB approaching 10:1. Only a single fan stage is required, because a low specific thrust implies a low fan pressure ratio. Turbofans in civilian aircraft usually have a pronounced large front area to accommodate a very large fan, as their design involves a much larger mass of air bypassing the core so they can benefit from these effects, while in military aircraft, where noise and efficiency are less important compared to performance and drag, a smaller amount of air typically bypasses the core. Turbofans designed for subsonic civilian aircraft also usually have just a single front fan, because their additional thrust is generated by a large additional mass of air which is only moderately compressed, rather than a smaller amount of air which is greatly compressed.

Military turbofans, however, have a relatively high specific thrust, to maximize the thrust for a given frontal area, jet noise being of less concern in military uses relative to civil uses. Multistage fans are normally needed to reach the relatively high fan pressure ratio needed for high specific thrust. Although high turbine inlet temperatures are often employed, the bypass ratio tends to be low, usually significantly less than 2.0.

Turboprop and turboshaft

Turboprop engines are jet engine derivatives, still gas turbines, that extract work from the hot-exhaust jet to turn a rotating shaft, which is then used to produce thrust by some other means. While not strictly jet engines in that they rely on an auxiliary mechanism to produce thrust, turboprops are very similar to other turbine-based jet engines, and are often described as such. In turboprop engines, a portion of the engine's thrust is produced by spinning a propeller, rather than relying solely on high-speed jet exhaust.
Producing thrust both ways, turboprops are occasionally referred to as a type of hybrid jet engine. They differ from turbofans in that a traditional propeller, rather than a ducted fan, provides the majority of thrust. Most turboprops use gear reduction between the turbine and the propeller. (Geared turbofans also feature gear reduction, but they are less common.) The hot-jet exhaust is an important minority of the thrust, and maximum thrust is obtained by matching the two thrust contributions. Turboprops generally have better performance than turbojets or turbofans at low speeds where propeller efficiency is high, but become increasingly noisy and inefficient at high speeds. Turboshaft engines are very similar to turboprops, differing in that nearly all energy in the exhaust is extracted to spin the rotating shaft, which is used to power machinery rather than a propeller. They therefore generate little to no jet thrust and are often used to power helicopters.

A propfan engine (also called "unducted fan", "open rotor", or "ultra-high bypass") is a jet engine that uses its gas generator to power an exposed fan, similar to turboprop engines. Like turboprop engines, propfans generate most of their thrust from the propeller and not the exhaust jet. The primary difference between turboprop and propfan design is that the propeller blades on a propfan are highly swept to allow them to operate at speeds around Mach 0.8, which is competitive with modern commercial turbofans. These engines have the fuel efficiency advantages of turboprops with the performance capability of commercial turbofans. While significant research and testing (including flight testing) has been conducted on propfans, none have entered production.

Major components of a turbojet, including references to turbofans, turboprops and turboshafts:
- Air intake (Inlet) – For subsonic aircraft, the inlet is a duct which is required to ensure smooth airflow into the engine despite air approaching the inlet from directions other than straight ahead. This occurs on the ground from cross winds and in flight with aircraft pitch and yaw motions. The duct length is minimised to reduce drag and weight. Air enters the compressor at about half the speed of sound, so at flight speeds lower than this the flow will accelerate along the inlet and at higher flight speeds it will slow down. Thus the internal profile of the inlet has to accommodate both accelerating and diffusing flow without undue losses. For supersonic aircraft, the inlet has features such as cones and ramps to produce the most efficient series of shockwaves which form when supersonic flow slows down. The air slows down from the flight speed to subsonic velocity through the shockwaves, then to about half the speed of sound at the compressor through the subsonic part of the inlet. The particular system of shockwaves is chosen, with regard to many constraints such as cost and operational needs, to minimise losses, which in turn maximises the pressure recovery at the compressor.
- Compressor or Fan – The compressor is made up of stages. Each stage consists of rotating blades and stationary stators or vanes. As the air moves through the compressor, its pressure and temperature increase. The power to drive the compressor comes from the turbine (see below), as shaft torque and speed.
- Bypass ducts deliver the flow from the fan with minimum losses to the bypass propelling nozzle. Alternatively the fan flow may be mixed with the turbine exhaust before entering a single propelling nozzle.
In another arrangement an afterburner may be installed between the mixer and nozzle.
- Shaft – The shaft connects the turbine to the compressor, and runs most of the length of the engine. There may be as many as three concentric shafts, rotating at independent speeds, with as many sets of turbines and compressors. Cooling air for the turbines may flow through the shaft from the compressor.
- Diffuser section – The diffuser slows down the compressor delivery air to reduce flow losses in the combustor. Slower air is also required to help stabilize the combustion flame, and the higher static pressure improves the combustion efficiency.
- Combustor or Combustion Chamber – Fuel is burned continuously after initially being ignited during the engine start.
- Turbine – The turbine is a series of bladed discs that act like a windmill, extracting energy from the hot gases leaving the combustor. Some of this energy is used to drive the compressor. Turboprop, turboshaft and turbofan engines have additional turbine stages to drive a propeller, bypass fan or helicopter rotor. In a free turbine, the turbine driving the compressor rotates independently of that which powers the propeller or helicopter rotor. Cooling air, bled from the compressor, may be used to cool the turbine blades, vanes and discs to allow higher turbine entry gas temperatures for the same turbine material temperatures.
- Afterburner or reheat (British) – (mainly military) Produces extra thrust by burning fuel in the jetpipe. This reheating of the turbine exhaust gas raises the propelling nozzle entry temperature and exhaust velocity. The nozzle area is increased to accommodate the higher specific volume of the exhaust gas. This maintains the same airflow through the engine to ensure no change in its operating characteristics.
- Exhaust or Nozzle – Turbine exhaust gases pass through the propelling nozzle to produce a high velocity jet. The nozzle is usually convergent with a fixed flow area.
- Supersonic nozzle – For high nozzle pressure ratios (nozzle entry pressure/ambient pressure) a convergent-divergent (de Laval) nozzle is used. The expansion to atmospheric pressure and supersonic gas velocity continues downstream of the throat and produces more thrust.

The various components named above have constraints on how they are put together to generate the most efficiency or performance. The performance and efficiency of an engine can never be taken in isolation; for example, the fuel/distance efficiency of a supersonic jet engine maximises at about Mach 2, whereas the drag of the vehicle carrying it increases as a square law and has much extra drag in the transonic region. The highest fuel efficiency for the overall vehicle is thus typically at Mach ~0.85. When optimising the engine for its intended use, important factors include the air intake design, overall size, number of compressor stages (sets of blades), fuel type, number of exhaust stages, metallurgy of components, amount of bypass air used, where the bypass air is introduced, and many other factors. An example is the design of the air intake.

The thermodynamics of a typical air-breathing jet engine are modeled approximately by a Brayton cycle, a thermodynamic cycle that describes the workings of the gas turbine engine, which is the basis of the airbreathing jet engine and others. It is named after George Brayton (1830–1892), the American engineer who developed it, although it was originally proposed and patented by Englishman John Barber in 1791. It is also sometimes known as the Joule cycle.
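For the ideal Brayton cycle, the thermal efficiency depends only on the overall pressure ratio: η = 1 − r_p^(−(γ−1)/γ). The snippet below is a rough illustration of that relation with assumed pressure ratios (the 40:1 figure mentioned earlier is one of them); it is not a model of any particular engine.

```python
# Illustrative sketch: ideal Brayton-cycle thermal efficiency vs. overall
# pressure ratio (OPR). gamma = 1.4 for air; the OPR values are assumed.
gamma = 1.4
for opr in (10, 20, 40):
    eta = 1.0 - opr ** (-(gamma - 1.0) / gamma)
    print(f"OPR {opr:>2}:1 -> ideal thermal efficiency ~ {eta:.2f}")
```

Real engines fall short of this ideal figure because of component losses and cooling-air penalties, but the benefit of the trend towards higher overall pressure ratios is visible in the numbers.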
The nominal net thrust quoted for a jet engine usually refers to the Sea Level Static (SLS) condition, either for the International Standard Atmosphere (ISA) or a hot day condition (e.g. ISA+10 °C). As an example, the GE90-76B has a take-off static thrust of 76,000 lbf (360 kN) at SLS, ISA+15 °C. Naturally, net thrust will decrease with altitude, because of the lower air density. There is also, however, a flight speed effect. Initially, as the aircraft gains speed down the runway, there will be little increase in nozzle pressure and temperature, because the ram rise in the intake is very small. There will also be little change in mass flow. Consequently, nozzle gross thrust initially only increases marginally with flight speed. However, being an air breathing engine (unlike a conventional rocket) there is a penalty for taking on-board air from the atmosphere. This is known as ram drag. Although the penalty is zero at static conditions, it rapidly increases with flight speed, causing the net thrust to be eroded. As flight speed builds up after take-off, the ram rise in the intake starts to have a significant effect upon nozzle pressure/temperature and intake airflow, causing nozzle gross thrust to climb more rapidly. This term now starts to offset the still increasing ram drag, eventually causing net thrust to start to increase. In some engines, the net thrust at, say, Mach 1.0 at sea level can even be slightly greater than the static thrust. Above Mach 1.0, with a subsonic inlet design, shock losses tend to decrease net thrust; however, a suitably designed supersonic inlet can give a lower reduction in intake pressure recovery, allowing net thrust to continue to climb in the supersonic regime.

Safety and reliability

Jet engines are usually very reliable and have a very good safety record. However, failures do sometimes occur. In some cases, the conditions in the engine due to the airflow entering it, or other variations, can cause the compressor blades to stall. When this occurs, the pressure in the engine blows out past the blades, and the stall is maintained until the pressure has decreased and the engine has lost all thrust. The compressor blades will then usually come out of stall and re-pressurize the engine. If conditions are not corrected, the cycle will usually repeat. This is called surge. Depending on the engine, this can be highly damaging to the engine and creates worrying vibrations for the crew.

Fan, compressor or turbine blade failures have to be contained within the engine casing. To do this the engine has to be designed to pass blade containment tests as specified by certification authorities.

Bird ingestion is the term used when birds enter the intake of a jet engine. It is a common aircraft safety hazard and has caused fatal accidents. In 1988 an Ethiopian Airlines Boeing 737 ingested pigeons into both engines during take-off and then crashed in an attempt to return to the Bahir Dar airport; of the 104 people aboard, 35 died and 21 were injured. In another incident in 1995, a Dassault Falcon 20 crashed at a Paris airport during an emergency landing attempt after ingesting lapwings into an engine, which caused an engine failure and a fire in the airplane fuselage; all 10 people on board were killed. Jet engines have to be designed to withstand the ingestion of birds of a specified weight and number, and to not lose more than a specified amount of thrust.
The weight and numbers of birds that can be ingested without hazarding the safe flight of the aircraft are related to the engine intake area. In 2009, an Airbus A320 aircraft, US Airways Flight 1549, ingested one Canada goose into each engine. The plane ditched in the Hudson River after taking off from LaGuardia Airport in New York City. There were no fatalities. The incident illustrated the hazards of ingesting birds beyond the "designed-for" limit. The outcome of an ingestion event, and whether it causes an accident, be it on a small fast plane, such as a military jet fighter, or a large transport, depends on the number and weight of birds and where they strike the fan blade span or the nose cone. Core damage usually results from impacts near the blade root or on the nose cone. Few birds fly high, so the greatest risk of a bird ingestion is during takeoff and landing and during low-level flying.

If a jet plane is flying through air contaminated with volcanic ash, there is a risk that ingested ash will cause erosion damage to the compressor blades, blockage of fuel nozzle air holes and blockage of the turbine cooling passages. Some of these effects may cause the engine to surge or flame out during the flight. Re-lights are usually successful after flame-outs, but with considerable loss of altitude. This was the case with British Airways Flight 9, which flew through volcanic dust at 37,000 ft; all four engines flamed out, and re-light attempts were successful at about 13,000 ft.

One class of failure that has caused accidents is the uncontained failure, where rotating parts of the engine break off and exit through the case. These high-energy parts can cut fuel and control lines, and can penetrate the cabin. Although fuel and control lines are usually duplicated for reliability, the crash of United Airlines Flight 232 was caused when hydraulic fluid lines for all three independent hydraulic systems were simultaneously severed by shrapnel from an uncontained engine failure. Prior to the United 232 crash, the probability of a simultaneous failure of all three hydraulic systems was considered to be as remote as a billion to one. However, the statistical models used to come up with this figure did not account for the fact that the number-two engine was mounted at the tail close to all the hydraulic lines, nor the possibility that an engine failure would release many fragments in many directions. Since then, more modern aircraft engine designs have focused on keeping shrapnel from penetrating the cowling or ductwork, and have increasingly utilized high-strength composite materials to achieve the required penetration resistance while keeping the weight low.

Jet engines are usually run on fossil fuels and are thus a source of carbon dioxide in the atmosphere. Jet engines can also run on biofuels or hydrogen, although hydrogen is usually produced from fossil fuels. About 7.2% of the oil used in 2004 was consumed by jet engines. Nitrogen compounds are also formed during the combustion process from reactions with atmospheric nitrogen. At low altitudes this is not thought to be especially harmful, but for supersonic aircraft that fly in the stratosphere some destruction of ozone may occur. Sulphates are also emitted if the fuel contains sulphur.

A ramjet is a form of airbreathing jet engine using the engine's forward motion to compress incoming air, without a rotary compressor. Ramjets cannot produce thrust at zero airspeed and thus cannot move an aircraft from a standstill.
Ramjets require considerable forward speed to operate well, and as a class work most efficiently at speeds around Mach 3. This type of jet can operate up to speeds of Mach 6. They consist of three sections: an inlet to compress incoming air, a combustor to inject and combust fuel, and a nozzle to expel the hot gases and produce thrust. Ramjets require a relatively high speed to efficiently compress the incoming air, so ramjets cannot operate at a standstill and they are most efficient at supersonic speeds. A key trait of ramjet engines is that combustion is done at subsonic speeds. The supersonic incoming air is dramatically slowed through the inlet, where it is then combusted at the much slower, subsonic, speeds. The faster the incoming air is, however, the less efficient it becomes to slow it to subsonic speeds. Therefore, ramjet engines are limited to approximately Mach 5. Ramjets can be particularly useful in applications requiring a small and simple engine for high speed use, such as missiles, while weapon designers are looking to use ramjet technology in artillery shells to give added range: it is anticipated that a 120-mm mortar shell, if assisted by a ramjet, could attain a range of 22 mi (35 km). They have also been used successfully, though not efficiently, as tip jets on helicopter rotors. Ramjets are frequently confused with pulsejets, which use an intermittent combustion, but ramjets employ a continuous combustion process, and are a quite distinct type of jet engine.

Scramjets are an evolution of ramjets that are able to operate at much higher speeds than any other kind of airbreathing engine. They share a similar structure with ramjets, being a specially shaped tube that compresses air with no moving parts through ram-air compression. They consist of an inlet, a combustor, and a nozzle. The primary difference between ramjets and scramjets is that scramjets do not slow the oncoming airflow to subsonic speeds for combustion. Thus, scramjets do not have the diffuser required by ramjets to slow the incoming airflow to subsonic speeds. They use supersonic combustion instead, and the name "scramjet" comes from "supersonic combustion ramjet." Scramjets start working at speeds of at least Mach 4, and have a maximum useful speed of approximately Mach 17. Due to aerodynamic heating at these high speeds, cooling poses a challenge to engineers. Since scramjets use supersonic combustion they can operate at speeds above Mach 6 where traditional ramjets are too inefficient. Another difference between ramjets and scramjets comes from how each type of engine compresses the oncoming airflow: while the inlet provides most of the compression for ramjets, the high speeds at which scramjets operate allow them to take advantage of the compression generated by shock waves, primarily oblique shocks.

P&W J58 Mach 3+ afterburning turbojet

Turbojet operation over the complete flight envelope from zero to Mach 3+ requires features to allow the compressor to function properly at the high inlet temperatures beyond Mach 2.5 as well as at low flight speeds. The J58 compressor solution was to bleed airflow from the 4th compressor stage at speeds above about Mach 2. The bleed flow, 20% at Mach 3, was returned to the engine via six external tubes to cool the afterburner liner and primary nozzle as well as to provide extra air for combustion. The J58 was the only operational turbojet designed to operate continuously, even at maximum afterburning, for Mach 3.2 cruise.
An alternative solution is seen in a contemporary installation which did not reach operational status, the Mach 3 GE YJ93/XB-70. It used a variable stator compressor. Yet another solution was specified in a proposal for a Mach 3 reconnaissance Phantom: pre-compressor cooling, albeit available for relatively short duration.

Hydrogen-fuelled air-breathing jet engines

Jet engines can be run on almost any fuel. Hydrogen is a highly desirable fuel, as, although the energy per mole is not unusually high, the molecule is very much lighter than other molecules. The energy per kg of hydrogen is twice that of more common fuels, and this gives twice the specific impulse. In addition, jet engines running on hydrogen are quite easy to build—the first ever turbojet was run on hydrogen. Also, although not duct engines, hydrogen-fueled rocket engines have seen extensive use. However, in almost every other way, hydrogen is problematic. The downside of hydrogen is its density: in gaseous form the tanks are impractical for flight, and even in the form of liquid hydrogen it has a density one fourteenth that of water. It is also deeply cryogenic and requires very significant insulation, which precludes it being stored in wings. The overall vehicle would end up being very large and difficult for most airports to accommodate. Finally, pure hydrogen is not found in nature, and must be manufactured either via steam reforming or expensive electrolysis. A few experimental hydrogen-powered aircraft have flown with propellers, and jets have been proposed that may be feasible.

Precooled jet engines

An idea originated by Robert P. Carmichael in 1955 is that hydrogen-fueled engines could theoretically have much higher performance than hydrocarbon-fueled engines if a heat exchanger were used to cool the incoming air. The low temperature allows lighter materials to be used, a higher mass-flow through the engines, and permits combustors to inject more fuel without overheating the engine. This idea leads to plausible designs like Reaction Engines SABRE, which might permit single-stage-to-orbit launch vehicles, and ATREX, which could permit jet engines to be used up to hypersonic speeds and high altitudes as boosters for launch vehicles. The idea is also being researched by the EU for a concept to achieve non-stop antipodal supersonic passenger travel at Mach 5 (Reaction Engines A2).

The air turborocket is a form of combined-cycle jet engine. The basic layout includes a gas generator, which produces high pressure gas, that drives a turbine/compressor assembly which compresses atmospheric air into a combustion chamber. This mixture is then combusted before leaving the device through a nozzle and creating thrust. There are many different types of air turborockets. The various types generally differ in how the gas generator section of the engine functions. Air turborockets are often referred to as turboramjets, turboramjet rockets, turborocket expanders, and many others. As there is no consensus on which names apply to which specific concepts, various sources may use the same name for two different concepts.

To specify the RPM, or rotor speeds, of a jet engine, abbreviations are commonly used:
- For a turboprop engine, Np refers to the RPM of the propeller shaft. For example, a common Np would be about 2200 RPM for a constant speed propeller.
- N1 or Ng refers to the RPM of the gas generator section.
Each engine manufacturer will pick between those two abbreviations. N1 is also used for the fan speed on a turbofan, in which case N2 is the gas generator speed (two-shaft engine). Ng is mainly used for turboprop or turboshaft engines. For example, a common Ng would be on the order of 30,000 RPM.
- N2 or Nf refers to the speed of the power turbine section. Each engine manufacturer will pick between those two abbreviations, but N2 is mainly used for turbofan engines whereas Nf is mainly used for turboprop or turboshaft engines. In many cases, even for free turbine engines, the N1 and N2 may be very similar.
- Ns refers to the speed of the reduction gear box (RGB) output shaft for turboshaft engines.

In many cases, instead of expressing rotor speeds (N1, N2) as RPM on cockpit displays, pilots are provided with the speeds expressed as a percentage of the design point speed. For example, at full power, the N1 might be 101.5% or 100%. This user interface decision has been made as a human factors consideration, since pilots are more likely to notice a problem with a two- or three-digit percentage (where 100% implies a nominal value) than with a five-digit RPM figure.
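As a rough illustration of that percent-of-design-point display, the conversion is just a ratio; the design-point value below is an assumed example, loosely echoing the 30,000 RPM ballpark mentioned above.

```python
# Illustrative sketch: converting a rotor speed to the percentage-of-design-point
# figure shown to pilots. The design-point RPM is an assumed example value.
design_point_rpm = 30_000
for current_rpm in (30_000, 30_450):
    pct = 100.0 * current_rpm / design_point_rpm
    print(f"{current_rpm} RPM -> {pct:.1f}% N")      # 100.0% and 101.5%
```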
Chapter 1: Design

What is a study? A study seeks to establish whether there is an association between a dependent and an independent variable. Statisticians use the method of comparison to find the effect of a treatment/exposure on a disease/response: compare the responses of a treatment group to a control group.
- If the control group is similar to the treatment group, apart from the treatment, the differences in response are likely to be due to the effect of the exposure.
- If not, then other effects could be "confounded" with the results of the treatment. These are called confounders.
- Confounders must be associated with both the exposure and the response.
- Confounding is minimized through randomized controls. Objective: ensure similarity between the treatment and control groups.
- Put subjects into treatment and control at random.
- If possible, give the control group a placebo: neutral, but resembling the treatment, so the response is to the treatment itself and not to the idea of it.
- Double-blinding: subjects and evaluators do not know whether a subject is in the treatment or control group. This prevents bias in analysis.

In controlled experiments, investigators decide who will be in the treatment group and who will be in the control group. In observational studies, subjects assign themselves to the different groups. To see if confounding is a problem, look at how the exposed and non-exposed groups are selected. One way to control for confounders is to make comparisons for smaller and more homogeneous groups (e.g., by age, sex). This is called "slicing" (not an official term). Observational studies can establish association, but association does not imply causation.

Smoking is an example of a _discrete_ variable (a.k.a. categorical variable); e.g., smoking has two categories (binary categorical): you smoke or you don't. The contrasting type is the continuous variable (a.k.a. numerical or measurement variable).

2x2 contingency table: A and Y are associated if (1) rate(A|Y) != rate(A|!Y) or (2) rate(Y|A) != rate(Y|!A). The consistency rule states that (1) iff (2), and vice versa. In terms of the cell counts of a table with rows A, not A and columns Y, not Y: a/(a+c) != b/(b+d) and a/(a+b) != c/(c+d).

- Assignment may be random, but adherence is not.
- Side effects (e.g., a drug with negative side effects) can give clues about the success of blinding.
- Relationships between percentages in subgroups can be reversed if the subgroups are combined (Simpson's paradox).
- Observational studies are not controlled.

Randomized and controlled studies minimize confounding. Suppose units are randomly assigned to be exposed or not. If the sample size is very large, then it is almost certain that any given variable C is not associated with the exposure.

2x2 table of counts (rows: exposure; columns: outcome):

|       | A | not A | Row total |
|-------|---|-------|-----------|
| B     | x | y     | x + y     |
| not B | a | b     | a + b     |

risk(A|B) = x / (x + y)
risk(A|!B) = a / (a + b)
RR = risk(A|B) / risk(A|!B)
- RR = 1 means no association.
- RR > 1 => the first group has a higher risk.
- Population risk cannot be estimated in case-control studies, even with random samples.

odds(A|B) = x / y
odds(A|!B) = a / b
OR = bx / (ay)
odds = risk / (1 - risk)

- Population vs estimated RR: when the population is too large, the calculation is done based on samples.

| Study        | Samples from | Advantage                    |
|--------------|--------------|------------------------------|
| Cohort       | Exposure     | Risk and RR can be estimated |
| Case-control | Response     | Good for rare diseases       |

Chapter 2: Association
- Deterministic relationship: the value of a variable can be determined if we know the value of the other variable.
- Statistical relationship: natural variability exists in measurements; the average pattern of one variable can be described given the value of the other variable.
- Categorical data: data that consists of group or category names. Measurements can be grouped too.
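The risk and odds formulas above can be turned into a short calculation; the sketch below uses invented counts for the 2x2 table, purely for illustration.

```python
# Illustrative sketch: risk ratio (RR) and odds ratio (OR) from a 2x2 table
# laid out as in the notes (rows: exposed B / not B; columns: outcome A / not A).
# The counts are invented example data.
x, y = 30, 70    # exposed:     A, not A
a, b = 10, 90    # not exposed: A, not A

risk_exposed   = x / (x + y)
risk_unexposed = a / (a + b)
rr = risk_exposed / risk_unexposed

odds_exposed   = x / y
odds_unexposed = a / b
or_ = odds_exposed / odds_unexposed          # equals (b * x) / (a * y)

print(f"RR = {rr:.2f}, OR = {or_:.2f}")      # RR = 3.00, OR = 3.86
```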
- Measurements of association: RR and OR
  - RR and OR can be accurately estimated from a cohort study.
  - RR is intuitively clearer but can only be estimated from cohort studies.
  - OR applies to both cohort and case-control studies.

Bivariate data and the scatter diagram
- Average: e.g., sons' average height compared with their fathers' height.
- Association: positive gradient? Linear or exponential relationship?
- Standard deviation: spread or variability of the data.

- Correlation coefficient: summarizes the direction and strength of linear association, -1 <= r <= 1.
  - r > 0: positive association
  - r < 0: negative association
  - r = 0: no association
  - r = 1: perfect positive association
  - r close to 0: weak association

```text
weak        moderate        strong
0      0.3            0.7        1
```

Not affected by:
1. Interchanging the two variables
2. Adding a number to all values of a variable
3. Multiplying all values of a variable by a (positive) number

- Standard units:

```text
SU = (X - X_bar) / sd_x
```

To obtain r, take the product of the standard units of each father-son pair, then take the average of the products.

- Causation: a change in one variable produces a change in the other variable.
- Outliers: data points that are unusually far away from the bulk of the data. It is dangerous to exclude outliers without understanding the cause of the occurrence.
- Non-linear association: zero correlation only says there is no "linear association"; a high correlation doesn't mean the association is linear.

Ecological correlation: correlation based on aggregated data, such as group averages or rates. In general, when the associations for both individuals and aggregates are in the same direction, the ecological correlation, based on the aggregates, will typically overstate the strength of the association in individuals. Variability among individuals is eliminated during aggregation.
- Ecological fallacy: deducing inferences about individuals based on aggregate data.
- Atomistic fallacy: generalizing a correlation based on individuals to the aggregate level.
- Attenuation effect:
  > Due to range restriction in one variable, the correlation coefficient obtained tends to understate the strength of association between the two variables.
  Range restriction: a bivariate data set formed based on criteria on one variable, so that data for the other variable is only available for a limited range. Range restriction tends to diminish the apparent strength of the association; this is called the attenuation effect.
- Regression fallacy:
  > In virtual test-retest situations, the bottom group on the first test will, on average, show some improvement on the second test, and the top group will, on average, fall back.

Prediction with linear regression: Y = a + bX, with slope and intercept determined using the least-squares method. It predicts the "average", not the exact value. It is also dangerous to predict beyond the observed range.
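A small sketch of the two procedures just described: computing r as the average product of standard units, and using it to build the least-squares prediction line. The father-son heights below are invented example data.

```python
# Illustrative sketch: correlation via standard units, and the least-squares
# prediction line Y = a + bX described in the notes. Data values are invented.
import statistics as st

fathers = [165, 170, 175, 180, 185]
sons    = [168, 171, 176, 178, 186]

def standard_units(xs):
    mean, sd = st.mean(xs), st.pstdev(xs)
    return [(x - mean) / sd for x in xs]

# r = average of the products of the paired standard units
r = st.mean(su_f * su_s for su_f, su_s in zip(standard_units(fathers), standard_units(sons)))

# slope and intercept of the regression line Y = a + bX
b = r * st.pstdev(sons) / st.pstdev(fathers)
a = st.mean(sons) - b * st.mean(fathers)
print(f"r = {r:.3f}, predicted height for a 178 cm father: {a + b * 178:.1f} cm")
```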
Chapter 3: Sampling
- Unit: object/individual
- Population: collection of units
- Sample: subset of a population
- Sampling frame: list of sampling units intended to identify all units in the population; good coverage means up-to-date and complete
- Probability sampling: every unit must have a known probability of being sampled
  - Simple random sampling: all units have equal probability
  - Systematic sampling: selecting units from a list through the application of a selection interval K, so that every Kth unit following a random start is included in the sample
    - treated as simple random sampling when the sampling units are arranged randomly
    - might obtain an undesirable sample if the sampling units and K have a cyclical effect
    - can be used when the number of sampling units is unknown
  - Stratified sampling: first divide the population of units into strata, then take a probability sample from each group

Difficulties in sampling
- Imperfect sampling frame: a perfect sampling frame consists of all units in the population; otherwise, it might include unwanted units (increased cost of study) or exclude desired units (need to redefine the target population).
- Non-response: not all units are contactable or willing to take part. Non-respondents typically differ from respondents, and this effect needs to be studied.
- Volunteer sample (biased)
- Convenience sample (biased)
- Judgement sample (uses own discretion, biased)
- Quota sample (having the right proportions of categories does not make extension of results to the population any better)

Chapter 4: Probability

| Relative Frequency | Personal Probability |
|---|---|
| Will you win the lottery? | Will you be working overseas once you graduate? |
| Can be quantified exactly | Cannot be quantified exactly |
| Based on repeated observation of outcomes | Based on personal belief |

Odds of having disease = P(disease) / P(no disease)
Average value = expected value
- p-value: the probability of obtaining an outcome equivalent to, or more extreme than, the observed one
- Null hypothesis: the assumption used to calculate the p-value (e.g., the coin is fair)
- If the p-value is small, it is unlikely for the observed outcome to occur by chance and unlikely for the null hypothesis to be true; the observed effect in the sample is likely to reflect an effect in the population. The converse holds for a large p-value.
- p-value > 0.05: do not reject the null hypothesis at the 5% significance level. We cannot conclude that the coin is not fair.

Testing rare events (medical screening)
- Base rate: P(disease)
- Sensitivity: P(positive | disease)
- Specificity: P(negative | no disease)

| To test | Not to test |
|---|---|
| No alternative test | An alternative, more reliable test exists |
| Test is inexpensive and a more expensive 2nd test exists | Test is expensive |
| Good chance of successful treatment | Unreliable treatment |

Chapter 5: Networks
- A network is a collection of objects and well-defined relations between the objects
- Degree: number of other vertices in the network a node is adjacent to
- Order: number of vertices
- Size: number of edges
- Distance d(X, Y): distance between X and Y

| Centrality | Definition |
|---|---|
| Closeness | Ccen(u) = sum[ d(u, vi) ] / (n - 1) |
| Degree | Dcen(u) = deg(u) / (n - 1) |

Betweenness: for a vertex Z in any graph, how many shortest paths are there, between any pair of two vertices, passing through Z? If there are 2 shortest paths between a and b and only 1 passes through Z, add 1/2.

Appendix: Answering Questions
- exposure (potential cause)
- response (potential effect)
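The three screening quantities defined under "Testing rare events" above combine, via Bayes' rule, into the probability of disease given a positive test. The sketch below uses invented numbers for the base rate, sensitivity, and specificity.

```python
# Illustrative sketch: probability of disease given a positive screening test,
# from base rate, sensitivity and specificity (Bayes' rule). Numbers are invented.
base_rate   = 0.01    # P(disease)
sensitivity = 0.95    # P(positive | disease)
specificity = 0.90    # P(negative | no disease)

p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
p_disease_given_positive = sensitivity * base_rate / p_positive
print(f"P(disease | positive) = {p_disease_given_positive:.2%}")   # about 8.8%
```

Even with a fairly accurate test, the low base rate keeps the post-test probability small, which is exactly why the "to test / not to test" trade-offs above matter.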
Module 2: Networking Fundamentals

Topics covered:
• Networking terminology
• Some network architectures
• The importance of bandwidth
• Networking models: OSI vs TCP/IP

Networking Devices
• Equipment that connects directly to a network segment is referred to as a device.
• There are two types of devices: end-user devices and network devices.
• Repeater: regenerates a signal.
• Hub: concentrates connections and may regenerate a signal.
• Bridge: converts network transmission data formats and performs basic data transmission management.
• Switch: adds more intelligence to data transfer management.
• Router: provides routing and other services.

Network Topology
• The physical topology is the actual layout of the wire or media.
• The logical topology defines how the media is accessed by the hosts for sending data.
• Logical topology – broadcast: each host sends its data to all other hosts on the network medium; first-come, first-served (e.g., Ethernet).
• Logical topology – token passing: access to the media is controlled by an electronic token; possession of the token gives the host the right to pass data to its destination (e.g., Token Ring, FDDI).

Network Protocols
• A protocol is a formal description of a set of rules and conventions that govern a particular aspect of how devices on a network communicate.
• Protocol suites are collections of protocols that enable network communication from one host through the network to another host.
• Protocols control all aspects of data communication, including: how the physical network is built, how computers connect to the network, how the data is formatted for transmission, how that data is sent, and how to deal with errors.

LANs
• Operate within a limited geographic area.
• Allow many users to access high-bandwidth media.
• Provide full-time connectivity to local services.
• Connect physically adjacent devices.
• Common LAN technologies: Ethernet, Token Ring, FDDI.

WANs
• Operate over large, geographically separated areas.
• Provide full-time remote resources connected to local services.
• Common WAN technologies: analog modems, Integrated Services Digital Network (ISDN), Digital Subscriber Line (DSL), Frame Relay, Asynchronous Transfer Mode (ATM), the T (US) and E (Europe) carrier series (T1, E1, T3, E3), and Synchronous Optical Network (SONET).

Metropolitan-Area Networks (MANs)
• A MAN is a network that spans a metropolitan area such as a city or suburban area.
• A MAN usually consists of two or more LANs in a common geographic area.

Storage-Area Networks (SANs)
• A SAN is a dedicated, high-performance network used to move data between servers and storage resources.

Virtual Private Networks (VPNs)
• A VPN is a private network constructed within a public network infrastructure such as the global Internet.
• A VPN is the most cost-effective method of establishing a secured connection.
• There are three main types of VPNs: access VPNs, intranet VPNs, and extranet VPNs.

Bandwidth
• Bandwidth is the measure of how much information, or bits, can flow from one place to another in a given amount of time, or seconds.
• Bandwidth is limited by both LAN and WAN technologies.

Throughput
• Throughput refers to actual measured bandwidth, at a specific time of day, using specific Internet routes, and while a specific set of data is transmitted on the network.
• Factors that determine throughput: internetworking devices, type of data being transferred, network topology, number of users on the network, user computer, server computer, and power conditions.
• Data transfer calculation: calculate an estimate of network performance.

Networking Models: Analyzing the Network in Layers
• What is flowing? Data.
• What different forms flow? Text, graphics, video.
• Where does the flow occur? Cable, atmosphere.
• What rules govern the flow? Standards, protocols.

Communication Characteristics
• Addresses: what are the source and the destination of a communication process?
• Media: where does the communication take place (cable, fiber, atmosphere)?
• Protocols: how to make the communication process effective (format, procedure)?
• A packet carries a source address, a destination address, and data, and travels over a medium.

Evolution of Networking Standards
• Proprietary architectures (SNA, DECNET) gave way to open standards (TCP/IP, OSI) through interconnection, development, and simplification.
• The OSI model is a framework within which networking standards can be developed. It provided vendors with a set of standards that ensured greater compatibility and interoperability between the various types of network technologies produced by the many companies around the world.

Why a Layered Model?
• Reduces complexity.
• Standardizes interfaces.
• Facilitates modular engineering.
• Ensures technology compatibility.
• Accelerates evolution.
• Simplifies teaching and learning.

The 7 Layers of the OSI Reference Model ("All People Seem To Need Data Processing")
• Layer 7 – Application (network processes to applications): the OSI layer closest to the user; it provides network services to the user's applications, such as file transfer, electronic mail, and terminal access.
• Layer 6 – Presentation (data representation): ensures that the information the application layer of one system sends out is readable by the application layer of another system; data formats, data conversion, data compression, data encryption.
• Layer 5 – Session (interhost communication): establishes, manages, and terminates sessions between two communicating hosts; sessions, dialogs, conversations, data exchange.
• Layer 4 – Transport (end-to-end connections): provides reliable, transparent transfer of data over networks; segments, data streams, datagrams; end-to-end flow control; error detection and recovery; segmentation and reassembly.
• Layer 3 – Network (address and best path): provides connectivity and path selection between two host systems that may be located on geographically separated networks; packets, routes, routing tables, logical addresses, fragmentation.
• Layer 2 – Data Link (direct link control, access to media): provides for the reliable transfer of data across a physical link; frames, physical addresses, network topology, line discipline.
• Layer 1 – Physical (binary transmission): transmission of an unstructured bit stream over a physical link between end systems; electrical, mechanical, procedural and functional specifications; physical data rate; distances; physical connector.

Peer-to-Peer Communication, Encapsulation, and De-encapsulation
• The protocols of each layer exchange information, called protocol data units (PDUs), between peer layers.
• Encapsulation: the lower layers put the PDU from the upper layer into their data field and add headers and trailers that the layer can use to perform its function, thereby creating a datagram (e-mail is a common example).
• De-encapsulation: when the data link layer receives a frame, it reads the physical address and other control information provided by the directly connected peer data link layer, strips the control information from the frame, and passes the datagram up to the next layer, following the instructions that appeared in the control portion of the frame.
• Layer-to-layer communication: each layer provides services to the layer above and requests services from the layer below.

The TCP/IP Reference Model
• In the late 1960s, the Defense Advanced Research Projects Agency (DARPA) originally developed Transmission Control Protocol/Internet Protocol (TCP/IP) to interconnect various defense department computer networks.
• The Internet, an international wide area network, uses TCP/IP to connect networks across the world.
• The TCP/IP model has four layers. It is important to note that some of the layers in the TCP/IP model have the same name as layers in the OSI model; do not confuse the layers of the two models.
• (Diagram slides: TCP/IP protocol stack; OSI model and TCP/IP model compared; focus of the CCNA curriculum.)

Summary
• Networking devices
• Some of the common network types
• Intranet and extranet
• Bandwidth and throughput
• The layered communication model
• OSI reference model
• TCP/IP networking model
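The "data transfer calculation" slide above estimates network performance as transfer time = amount of data / transfer rate, computed once with the nominal bandwidth (best case) and once with the measured throughput (typical case). A small sketch with assumed numbers:

```python
# Illustrative sketch of the slides' data-transfer estimate: best case uses the
# link's nominal bandwidth, the realistic case uses measured throughput.
# All figures below are assumed for the example.
file_size_bits = 500 * 8 * 1024 * 1024        # a 500 MB file, expressed in bits
bandwidth_bps  = 100_000_000                  # 100 Mbit/s link
throughput_bps = 35_000_000                   # 35 Mbit/s measured throughput

print(f"best case : {file_size_bits / bandwidth_bps:.0f} s")
print(f"typical   : {file_size_bits / throughput_bps:.0f} s")
```

The gap between the two figures is exactly the bandwidth-versus-throughput distinction the slides draw.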
Earlier in May, during Black Hole Week, NASA released an eerie sound clip of a black hole, showing that space makes a lot of noise, depending on where you look and how you process it. The clip carries the sounds of a massive black hole located more than 200 million light-years away from Earth in the Perseus galaxy cluster. The Perseus galaxy cluster is an 11 million-light-year-wide grouping of galaxies packed in hot gas. Those clouds of hot gas are where these sound waves shared in a clip by NASA come from. Scientists discovered that pressure waves come from Perseus' interior. These waves ripple through the hot gas surrounding the galaxy cluster and can be translated into sound. Space, unlike Earth, does not have air particles to carry sound vibrations. However, that doesn't mean the vibrations aren't there. In the case of Perseus' black hole, the cosmic giant is so close to the cluster's gas clouds that it can create sound wave vibrations in the form of gas ripples. In 2003, a team of astronomers from NASA's Chandra X-ray Observatory took astronomical data on these ripples and translated them into sounds. Unfortunately, those sounds were a massive 57 octaves below middle C, meaning they couldn't be heard by human ears. To make the sounds audible for humans, NASA scaled the sound data up by 57 and 58 octaves so we can all listen to the massive black hole at the center of the Perseus galaxy cluster. In a blog post, NASA wrote that the sound waves "are being heard 144 quadrillion and 288 quadrillion times higher than their original frequency." Earlier this year, a separate team of researchers also used a new tool called the "Reverberation Machine" to make their own black hole sound clip. They identified black hole echoes in data from NICER, a telescope aboard the International Space Station, and converted them into sound waves for another eerie sample that could've come out of a classic prog rock tune.
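Since raising a sound by one octave doubles its frequency, scaling the signal up by 57 and 58 octaves multiplies the original frequency by 2^57 and 2^58. A quick arithmetic check reproduces the figures NASA quoted:

```python
# One octave up doubles the frequency, so n octaves up scales it by 2**n.
for octaves in (57, 58):
    factor = 2 ** octaves
    print(f"{octaves} octaves up -> {factor:,} times the original frequency "
          f"(about {factor / 1e15:.0f} quadrillion)")

# 57 octaves -> ~144 quadrillion, 58 octaves -> ~288 quadrillion,
# matching the 144 and 288 quadrillion figures in NASA's post.
```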
Agricultural Literacy Curriculum Matrix

Homes on the Range

Students design a board game that reinforces how rangelands provide habitat for livestock and wildlife while benefiting humans, animals, and plants, and explore the responsibilities of a range manager.

Grades 3-5

- Homes on the Range T-chart handout
- Homes on the Range PowerPoint Slides (optional)
- Rangeland Rescue Game (Instructions, Rescue Cards, Gameboard Spaces, Fact Cards, and Grading Rubric), 1 per group
- 6-sided die, 1 per group
- File folder, 1 per group
- Colored paper
- Crayons, colored pencils, or markers
- Index cards

Essential Files (maps, charts, pictures, or documents)

Vocabulary

habitat: the natural home or environment of an animal, plant, or other organism

pasture lands: land used for grazing livestock that is planted and maintained by farmers and ranchers

range managers: a career with responsibility in maintaining, protecting, and improving range resources such as soil, water, plants, and animal life

rangeland: large, mostly unimproved section of land that is primarily used for grazing livestock

Did You Know? (Ag Facts)

- Rangelands occupy 40-50% of the land area of the Earth.1
- Rangelands support livestock for farmers and ranchers in addition to the wildlife indigenous to the area.1
- Rangeland managers typically have a bachelor's degree.2

Background Agricultural Connections

This lesson is one in a series of five related lessons to promote the development of STEM abilities and critical thinking skills, while fostering an appreciation for the people involved in livestock production. For more information about what STEM is, why it's important, and how it can be implemented in your classroom, watch the video, What is STEM? The curriculum includes real-life challenges for students to investigate, inquiry-based labs, and opportunities to plan and construct models. Featured careers include:

- Animal Physiologist: Significant Surroundings
- Agricultural Engineer: Build it Better
- Animal Geneticist: Roll of the Genes
- Animal Nutritionist: Got Guts?
- Range Manager: Homes on the Range

Rangelands are vast natural landscapes that include grasslands, shrublands, woodlands, forests, tundra, wetlands, and deserts. Rangelands do not include barren desert, farmland, or land covered by bare soil, solid rock, concrete, or glaciers. Rangelands are uncultivated lands that provide the necessities of life for grazing and browsing animals. Rangelands are distinguished from pasture lands because they grow naturally occurring vegetation rather than plants cultivated by humans with irrigation, fertilizers, and other additions. From the wide open spaces of Northern California to the vast plains of Africa, rangelands are found all over the world, encompassing more than half of the Earth's land surface.

Rangelands also provide important habitat for domestic livestock, including cattle, sheep, goats, and horses. These animals graze the land, feeding on plants such as grasses. Grazing is important in agriculture because domestic livestock convert grass and other forage into meat, milk, and other products. There are many benefits to livestock grazing, including reducing fire hazards, promoting plant life, and encouraging wildlife species. Properly managed livestock grazing helps reduce fire hazards by controlling the amount and distribution of grasses and other potential fuel.
Additionally, livestock grazing controls the growth of non-native grasses and herbs so that desirable plants (wildflowers and native grasses) can regenerate and coexist with them. Many species, including several threatened species, benefit from the vegetation management performed by livestock.

Rangelands are an important resource. They preserve open space and provide recreational uses, natural beauty, wildlife habitat, water purification, and clean air. Approximately 70 percent of the planet and 50 percent of the United States is rangeland. Range managers care for our country's vast rangelands. They maintain plants for forage; wildlife for aesthetics and hunting; livestock for meat, milk, and fiber production; and clean water.

In this lesson, students will learn the basics about rangelands and use their acquired knowledge and research skills to design an educational game. Refer to the Answers to Commonly Asked Questions for more background information.

Interest Approach - Engagement

- Begin a discussion with your students by asking them, "What is a rangeland?" Allow students to offer answers using their prior knowledge. Guide the discussion using the information found in the Background Agricultural Connections section and help students distinguish between a rangeland and pasture lands. Ask students:
  - How are natural resources, like public rangelands, used in agriculture?
  - How can agricultural use benefit rangelands?
- Inform your students that they will:
  - design a board game about rangelands;
  - learn about the responsibilities of a range manager;
  - predict the outcome of an investigation;
  - conduct an experiment to test a hypothesis;
  - create a double bar graph; and
  - use appropriate tools to measure length, width, depth, and perimeter.
- Distribute the Homes on the Range T-chart handout to students and project a copy onto a large screen. Demonstrate how students can use the graphic organizer to record notes.
- Lead a discussion with the students to build background information about rangelands in your state. Optional: Use the PowerPoint, Homes on the Range: An Introduction to Rangelands, to help students visualize and better understand what rangelands are. The classroom discussion should include the following points:
  - "What are rangelands?" (Rangelands are vast natural landscapes that include grasslands, shrublands, woodlands, forests, tundra, wetlands, and deserts. It is land that can be used for grazing, foraging, wildlife habitat, aesthetics, hunting, and a clean water supply.)
  - "Who uses rangeland?" (Ranchers, hunters, hikers, scientists, wildlife, and livestock. Ask students to share ways they have personally used rangeland, emphasizing the value of rangeland to humans, animals, and plants.)
  - How can grazing animals improve rangeland? Grazing animals…
    - Reduce the amount of fuel (grasses and shrubs) for wildfires. Land that is grazed is less likely to experience severe fires.
    - Increase aeration of the soil, facilitating better water absorption. Their hooves break up hard ground, adding beneficial air to the soil.
    - Control the growth of the non-native grasses and plants so that other desirable plants (wildflowers and native grasses) can thrive.
    - Increase the diversity of habitats available to wildlife species. Many species, including several threatened species, benefit from livestock controlling the growth of invasive plants.
  - What role does a range manager have in the health of our land?
The range manager makes decisions about how to carefully use and manage rangeland resources (plants, animals, soil, and water) to meet the needs and desires of society. When managed properly, rangelands provide habitat for livestock and wildlife while benefiting humans, animals, and plants.

- What does a range manager do? A range manager may work with ranchers, scientists, and others to monitor plant growth, create agreements among rangeland users, develop conservation plans to meet land goals, manage private livestock operations, and develop methods to protect the range from fire, unwanted wildlife, and poisonous plants.
- Introduce the students to the game, Rangeland Rescue. Explain that in this activity, students will take on the role of range managers to help a game board manufacturer create a realistic board game about rangelands. The manufacturer has provided instructions and game board spaces. Students must use these resources to design their game board. Review the Rangeland Rescue Game Instructions out loud as a class. Tell students that this handout is their instructions for playing the game.
- Distribute and review the Rangeland Rescue Game Design handout. Tell students that this handout is their instructions for designing the game. Show students an example of a completed game board. Divide the class into groups of four. Distribute the necessary materials.
- Once game boards are complete, each student will evaluate another group's game board, using the Rangeland Rescue Game Design Grading Rubric. Students should play the game completely prior to filling out the rubric. The teacher will review the completed rubrics and average the student-determined evaluation scores for grading.
- Debrief the activity to highlight significant discoveries. Use the Range Fact Cards to guide discussion and quiz students on the information they learned about rangelands and range managers.

Concept Elaboration and Evaluation

After conducting these activities, use the following questions to review and summarize key concepts:

- Which Range Fact Card surprised you? Why?
- What skills are important for a range manager to have?
- Why is rangeland important?
- What would life be like without rangelands?

- Have students use library, classroom, and Web resources to design their own Range Fact Cards. Each Range Fact Card must feature a question about rangelands. Questions may be true/false, multiple choice, or short answer. Students should print questions and answers neatly on index cards for use in the game. Use an electronic presentation to introduce the topic.
- Distribute the Homes on the Range T-Chart electronically and have students fill it out on their tablet computers.
- Give students a copy of the lesson's Background Agricultural Connections, which provides additional information about rangelands and range managers. Lead students in highlighting and annotating the text to identify important information.
- Use games such as Pictionary or Bingo to reinforce challenging new vocabulary words.
- The "Think-Pair-Share" technique increases student engagement and is an effective way to encourage English language learners to express new concepts in English. Give students time to write a response to a question on paper, additional time to discuss their ideas with their neighbor, and then solicit responses from the entire class.

Design product packaging and a commercial for the game. Include the box, instructions, and optional add-on packs. Create a video of actual game play to help build interest.
Have students explore the educational background and skills required to be a range manager.

This unit was funded in 2012 by the United States Department of Agriculture's National Institute of Food and Agriculture through the Secondary Education, Two-Year Postsecondary Education, and Agriculture in the K-12 Classroom Challenge Grants Program (SPECA). Images submitted by California Foundation for Agricultural Education.

Executive Director: Judy Culbertson
Illustrator: Erik Davison
Layout and Design: Nina Danner

Suggested Companion Resources
Asteroid impact prediction The process of impact prediction follows three major steps: - Discovery of an asteroid and initial assessment of its orbit which generally includes a short observation arc of less than 14 days. - Follow up observations to improve the orbit determination - Calculating if, when and where the orbit may intersect with Earth at some point in the future. In addition, although not strictly part of the prediction process, once an impact has been predicted, an appropriate response needs to be made. Most asteroids are discovered by a camera on a telescope with a wide field of view. Image differencing software compares a recent photograph with earlier ones of the same part of the sky, detecting objects that have moved, brightened, or appeared. Follow up can be carried out by any telescope powerful enough to see the newly detected object. Orbit intersection calculations are then carried out by two independent systems, one (Sentry) run by NASA and the other (NEODyS) by ESA. A few near misses have been predicted, years in advance, with a tiny chance of actually striking Earth. A handful of actual impactors have been detected hours in advance. They were small, struck wilderness or ocean, and hurt nobody. Current systems only detect an arriving object when several factors are just right, mainly the direction of approach, weather, and phase of the Moon. The result is a low rate of success. Performance is improving as existing systems are upgraded and new ones come on line, but some of the issues the current systems face can only be overcome by a dedicated space based system. - 1 History - 2 Discovery of Near Earth Asteroids - 2.1 Cataloging vs Warning - 2.2 Surveys - 3 Follow up observations - 4 Impact calculation - 5 Response to predicted impact - 6 Effectiveness of the current system - 7 Improving impact prediction - 8 List of successfully predicted asteroid impacts - 9 Notes - 10 See also - 11 References - 12 External links In 1992 a report to NASA recommended a coordinated survey (christened Spaceguard) to discover, verify and provide follow-up observations for Earth-crossing asteroids. This survey was scaled to discover 90% of all objects larger than one kilometer within 25 years. Three years later, a further NASA report recommended search surveys that would discover 60–70% of the short-period, near-Earth objects larger than one kilometer within ten years and obtain 90% completeness within five more years. In 1998, NASA formally embraced the goal of finding and cataloging, by 2008, 90% of all near-Earth objects (NEOs) with diameters of 1 km or larger that could represent a collision risk to Earth. The 1 km diameter metric was chosen after considerable study indicated that an impact of an object smaller than 1 km could cause significant local or regional damage but is unlikely to cause a worldwide catastrophe. The impact of an object much larger than 1 km diameter could well result in worldwide damage up to, and potentially including, extinction of the human race. The NASA commitment has resulted in the funding of a number of NEO search efforts, which made considerable progress toward the 90% goal by the target date of 2008 and also produced the first ever successful prediction of an asteroid impact (the 4-meter 2008 TC3 was detected 19 hours before impact). However the 2009 discovery of several NEOs approximately 2 to 3 kilometers in diameter (e.g. 2009 CR2, 2009 HC82, 2009 KJ, 2009 MS and 2009 OG) demonstrated there were still large objects to be detected. 
Three years later, in 2012, the small asteroid 367943 Duende was discovered and successfully predicted to be on close but non-colliding approach to Earth again just 11 months later. This was a landmark prediction as the object was only 20 m × 40 m, and it was closely monitored as a result. On the day of its closest approach and by coincidence, a smaller asteroid was also approaching Earth, unpredicted and undetected, from a direction close to the Sun. Unlike 367943 Duende it was on a collision course and it impacted Earth 16 hours before 367943 Duende passed, becoming the Chelyabinsk meteor. It injured 1,500 people and damaged over 7,000 buildings, raising the profile of the dangers of even small asteroid impacts if they occur over populated areas. The asteroid is estimated to have been 17 m across. In April 2018, the B612 Foundation stated "It's 100 per cent certain we'll be hit [by a devastating asteroid], but we're not 100 per cent sure when." Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. Discovery of Near Earth Asteroids The first step in predicting impacts is detecting asteroids and determining their orbits. Finding faint Near-Earth objects against the background stars is very much a needle in a haystack search. It is achieved by sky surveys that are designed to discover near Earth asteroids. Unlike the majority of telescopes that have a narrow field of view and high magnification, survey telescopes have a wide field of view to scan the entire sky in a reasonable amount of time with enough sensitivity to pick up the faint Near-Earth objects they are searching for. NEO focused surveys revisit the same area of sky several times in succession. Movement can then be detected using image differencing techniques. Anything that moves from image to image against the background of stars is compared to a catalogue of all known objects, and if it is not already known is reported as a new discovery along with its precise position and the observation time. This then allows other observers to confirm and add to the data about the newly discovered object. Cataloging vs Warning The existing asteroid surveys have a fairly clear-cut division between 'cataloging surveys' which use larger telescopes to mostly identify larger asteroids well before they come very close to Earth, and 'warning surveys' which use smaller telescopes to mostly look for smaller asteroids on their final approach. Cataloging systems focus on finding larger asteroids years in advance and they scan the sky slowly (of the order of once per month), but deeply. Warning systems focus on scanning the sky relatively quickly (of the order of once per night) and typically cannot detect objects that are as faint as cataloging systems. Some systems compromise and scan the sky say once per week. For larger asteroids (> 100m to 1 km across), prediction is based on cataloging the asteroid, years to centuries before it could impact. This technique is possible as they can be seen from a long distance due to their large size. Their orbits therefore can be measured and any future impacts predicted long before they are on their final approach to Earth. 
This long period of warning is necessary as an impact from a 1 km object would cause world-wide damage. As of 2018, the inventory is nearly complete for the kilometer-size objects (around 900) which would cause global damage, and approximately one third complete for 140 meter objects (around 8500).[note 1][note 2] Smaller near-Earth objects are far more numerous (millions) and the vast majority remain undiscovered. They seldom pass close enough to Earth on a previous approach that they become bright enough to observe, and so most can only be observed on final approach. They therefore cannot usually be cataloged well in advance and can only be warned about, a few weeks to days in advance. This is much too late to deflect them away from Earth, but is enough time to mitigate the consequences of the impact by evacuating and otherwise preparing the affected area. Current mechanisms for detecting asteroids on final approach rely on ground based telescopes with wide fields of view which currently can monitor the sky at most every second night. They therefore still miss most of the smaller asteroids that more commonly impact Earth, which are bright enough to detect for less than two days. Ground-based telescopes also cannot detect most of the asteroids which impact the day side of the planet. These and other problems mean very few impacts are successfully predicted (see §Effectiveness of the current system and §Improving impact prediction). The main NEO focussed surveys are listed below, along with future telescopes that are already funded. They have a fairly clear-cut division between 'cataloging surveys' and 'warning surveys'. The existing warning surveys have enough capacity between them to scan the northern sky once per clear night. However, they are concentrated in a relatively small part of the planet. Two surveys (Pan-STARRS and ATLAS) are in Hawaii, which means they see the same parts of the sky at the same time of day, and are affected by the same weather. Two others (Catalina Sky Survey and Zwicky Transient Facility) are located in the southwestern United States and so suffer from similar overlap. These surveys do complement each other to an extent in that some are cataloging surveys and some are warning surveys. However, the resulting coverage across the globe is imperfect. In particular, there are currently no major surveys in the Southern Hemisphere. This clustering together of the sky surveys in the Northern hemisphere means that around 15% of the sky at extreme Southern declination is never monitored, and that the rest of the Southern sky is observed over a shorter season than the Northern sky. Moreover as the hours of darkness are fewer in summertime, the lack of a balance of surveys between North and South means that the sky is scanned less often in the Northern summer. Once it is completed, the Large Synoptic Survey Telescope will cover the southern sky, but being at a similar longitude to the other surveys there will still be times every day when it will be in daylight along with all the others. The 3.5 m Space Surveillance Telescope, which was originally also in the southwest United States, was dismantled and moved to Western Australia in 2017. When completed, this would make a significant difference to the global coverage. However, due to the new site being in a cyclone region, construction has not been completed. 
Unless this issue is resolved, the two new ATLAS telescopes planned for construction in the Southern hemisphere (including one at the South African Astronomical Observatory) by 2020 are currently the only ones which will cover this gap in monitoring the skies (the south-east of the globe).

| Survey | Aperture (m) | Number of telescopes | Time to scan entire visible sky (when clear) [note 3] | Limiting magnitude [note 4] | Hemisphere | Activity | Peak yearly observations [note 5] | Survey type |
|---|---|---|---|---|---|---|---|---|
| ATLAS | 0.5 | 2 | 2 nights | 19 | Northern | 2016–present | 1,908,828 | Warning survey |
| ATLAS | 0.5 | 2 | 1 night | 19 | Southern | 2020 | NA | Warning survey |
| Catalina Sky Survey | 1.5 | 1 | 30 nights | 21.5 | Northern | 1998–present | see Mount Lemmon Survey | Cataloging survey |
| Catalina Sky Survey | 0.7 | 1 | 7 nights | 19.5 | Northern | 1998–present | 1,934,824 | Cataloging survey |
| Lincoln Near-Earth Asteroid Research | 1.0 | 2 | ? | ? | Northern | 1998–2012 | 3,346,181 | Cataloging survey |
| Lowell Observatory Near-Earth-Object Search | 0.6 | 1 | 41 nights | 19.5 | Northern | 1998–2008 | 836,844 | Cataloging survey |
| Mount Lemmon Survey | 1.52 | 1 | ? | ~21 | Northern | 2005–present | 2,920,211 | Cataloging survey |
| Near-Earth Asteroid Tracking | ? | 2 | ? | ? | Northern | 1995–2007 | 1,214,008 | Cataloging survey |
| NEO Survey Telescope | 1 | 1 | 1 night | 21 | Northern | 2020 | NA | Warning survey |
| NEOWISE | 0.4 | 1 | ~6 months | ~22 | Earth orbit | 2009–present | 2,279,598 | Cataloging survey |
| Pan-STARRS | 1.8 | 2 | 30 nights | 23 | Northern | 2010–present | 5,254,605 | Cataloging survey |
| Space Surveillance Telescope | 3.5 | 1 | 6 nights | 20.5 | Northern | 2014–2017 | 6,973,249 | Warning survey |
| Spacewatch | 1.8 | 1 | ? | ? | Northern | 1980–1998 [note 6] | 1,532,613 | Cataloging survey |
| Zwicky Transient Facility | 1.2 | 1 | 3 nights | 20.5 | Northern | 2018–present | 483,822 | Warning survey |

ATLAS

ATLAS, the "Asteroid Terrestrial-impact Last Alert System", uses two 0.5 metre telescopes located at Haleakala and Mauna Loa on two of the Hawaiian Islands. With a field of view of 30 square degrees each, the telescopes survey the observable sky down to apparent magnitude 19 with 4 exposures every two clear nights. The survey has been fully operational with these two telescopes since 2017, and in 2018 obtained NASA funding for two additional telescopes. Both will be sited in the Southern hemisphere, with one at the South African Astronomical Observatory, and they are expected to take 18 months to build. Their southern locations will provide coverage of the 15% of the sky that cannot be observed from Hawaii, and the doubling of its observing resources will allow ATLAS to survey the observable sky with 4 exposures every clear night rather than every two nights.

Catalina Sky Survey (including Mount Lemmon Survey)

In 1998, the Catalina Sky Survey (CSS) took over from Spacewatch in surveying the sky for the University of Arizona. It uses two telescopes, a 1.5 m Cassegrain reflector telescope on the peak of Mount Lemmon (also known as a survey in its own right, the Mount Lemmon Survey), and a 0.7 m Schmidt telescope near Mount Bigelow (both in the Tucson, Arizona area in the southwest of the United States). Both sites use identical cameras, which provide a field of view of 5 square degrees on the 1.5 m telescope and 19 square degrees on the Catalina Schmidt. The Cassegrain reflector telescope takes three to four weeks to survey the entire sky, detecting objects fainter than apparent magnitude 21.5. The 0.7 m telescope takes a week to complete a survey of the sky, detecting objects fainter than apparent magnitude 19. This combination of telescopes, one slow and one medium, has so far detected more near-Earth objects than any other single survey.
This shows the need for a combination of different types of telescopes.

Large Synoptic Survey Telescope

The Large Synoptic Survey Telescope (LSST) is a wide-field survey reflecting telescope with an 8.4-meter primary mirror, currently under construction on Cerro Pachón in Chile. It will survey the entire available sky around every three nights. Science operations are due to begin in 2022. Scanning the sky relatively fast but also being able to detect objects down to apparent magnitude 27, it should be good at detecting nearby, fast-moving objects as well as excellent for larger, slower objects that are currently further away.

NEO Survey Telescope

The Near Earth Object Survey TELescope (NEOSTEL) is an ESA-funded project, starting with an initial prototype currently under construction. The telescope is of a new "fly-eye" design that combines a single reflector with multiple sets of optics and CCDs, giving a very wide field of view (around 45 square degrees). When complete it will have the widest field of view of any telescope and will be able to survey the majority of the visible sky in a single night. If the initial prototype is successful, three more telescopes are planned for installation around the globe. Because of the novel design, the size of the primary mirror is not directly comparable to that of more conventional telescopes, but is equivalent to a conventional 1 metre telescope.

NEOWISE

The Wide-field Infrared Survey Explorer is a 0.4 m infrared-wavelength space telescope launched in December 2009 and placed in hibernation in February 2011. It was re-activated in 2013 specifically to search for near-Earth objects under the NEOWISE mission. By this stage, the spacecraft's cryogenic coolant had been depleted, and so only two of the spacecraft's four sensors could be used. Whilst this has still led to new discoveries of asteroids not previously seen from ground-based telescopes, the productivity has dropped significantly. In its peak year, when all four sensors were operational, WISE made 2.28 million asteroid observations. In recent years, with no cryogen, NEOWISE typically makes approximately 0.15 million asteroid observations annually. The next generation of infrared space telescopes has been designed so that they do not need cryogenic cooling.

Pan-STARRS

Pan-STARRS, the "Panoramic Survey Telescope And Rapid Response System", currently (2018) consists of two 1.8 m Ritchey–Chrétien telescopes located at Haleakala in Hawaii. It has discovered a large number of new asteroids, comets, variable stars, supernovae, and other celestial objects. Its primary mission is now to detect near-Earth objects that threaten impact events, and it is expected to create a database of all objects visible from Hawaii (three-quarters of the entire sky) down to apparent magnitude 24. The Pan-STARRS NEO survey searches all the sky north of declination −47.5°. It takes three to four weeks to survey the entire sky.

Space Surveillance Telescope

The Space Surveillance Telescope (SST) is a 3.5 m telescope that detects, tracks, and can discern small, obscure objects in deep space with a wide-field-of-view system. The SST mount uses advanced servo-control technology that makes it one of the quickest and most agile telescopes of its size. It has a field of view of 6 square degrees and can scan the visible sky in 6 clear nights down to apparent magnitude 20.5. Its primary mission is tracking orbital debris. This task is similar to that of spotting near-Earth asteroids, and so it is capable of both.
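As a rough plausibility check on field-of-view and scan-time figures like those quoted above, one can estimate how many pointings are needed to tile the visible sky. The per-pointing time and usable hours per night below are assumptions made purely for illustration, not published values for any survey:

```python
# Rough estimate of how many pointings a survey needs to cover the visible sky,
# and how many nights that takes. The field of view matches the SST figure in
# the text; the per-pointing time and hours per night are illustrative guesses.

WHOLE_SKY_SQ_DEG = 41_253        # total sky area in square degrees
VISIBLE_FRACTION = 0.75          # assume roughly three-quarters of the sky is observable from one site
FIELD_OF_VIEW_SQ_DEG = 6         # SST field of view quoted above
SECONDS_PER_POINTING = 30        # assumed exposure plus slew overhead
USABLE_HOURS_PER_NIGHT = 8       # assumed clear, dark hours per night

pointings = WHOLE_SKY_SQ_DEG * VISIBLE_FRACTION / FIELD_OF_VIEW_SQ_DEG
hours = pointings * SECONDS_PER_POINTING / 3600
nights = hours / USABLE_HOURS_PER_NIGHT
print(f"{pointings:.0f} pointings, about {hours:.0f} hours of observing (~{nights:.1f} nights)")

# With these assumptions the result (~5 nights) is the same order of magnitude
# as the six clear nights quoted for the SST above.
```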
The SST was initially deployed for testing and evaluation at the White Sands Missile Range in New Mexico. On December 6, 2013, it was announced that the telescope system would be moved to the Naval Communication Station Harold E. Holt in Exmouth, Western Australia. The SST was moved to Australia in 2017, but due to the new site being in a cyclone region, construction has been delayed, pending a redesign that can withstand cyclone-force winds.

Spacewatch

Spacewatch was an early sky survey focused on finding near-Earth asteroids, originally founded in 1980. It was the first to use CCD image sensors to search for them, and the first to develop software to detect moving objects automatically in real time. This led to a huge increase in productivity. Before 1990, a few hundred observations were made each year. After automation, annual productivity jumped by a factor of 100, leading to tens of thousands of observations per year. This paved the way for the surveys we have today. Although the survey is still in operation, in 1998 it was superseded by the Catalina Sky Survey. Since then it has focused on following up on discoveries by other surveys, rather than making new discoveries itself. In particular, it aims to prevent high-priority PHOs from being lost after their discovery. The survey telescopes are 1.8 m and 0.9 m. The two follow-up telescopes are 2.3 m and 4 m.

Zwicky Transient Facility

The Zwicky Transient Facility (ZTF) was commissioned in 2018, superseding the Intermediate Palomar Transient Factory (2009–2017). It is designed to detect transient objects that rapidly change in brightness as well as moving objects, for example supernovae, gamma-ray bursts, collisions between two neutron stars, comets, and asteroids. The ZTF is a 1.2 m telescope that has a field of view of 47 square degrees, designed to image the entire northern sky in three nights and scan the plane of the Milky Way twice each night to a limiting magnitude of 20.5. The amount of data produced by ZTF is expected to be 10 times larger than that produced by its predecessor.

Follow up observations

Once a new asteroid has been discovered and reported, other observers can confirm the finding and help define the orbit of the newly discovered object. The International Astronomical Union Minor Planet Center (MPC) acts as the global clearing house for information on asteroid orbits. It publishes lists of new discoveries that need verifying and accepts the resulting follow-up observations from around the world. Unlike the initial discovery, which typically requires unusual and expensive wide-field telescopes, ordinary telescopes can be used to confirm the object, as its position is now approximately known. There are far more of these around the globe, and even a well-equipped amateur astronomer can contribute valuable follow-up observations of moderately bright asteroids. For example, the Great Shefford Observatory in the back garden of amateur Peter Birtwhistle typically submits thousands of observations to the Minor Planet Center every year. Nonetheless, some surveys (for example CSS and Spacewatch) have their own dedicated follow-up telescopes. Follow-up observations are important because once a sky survey has reported a discovery, it may not return to observe the object again for days or weeks. By this time it may be too faint to detect, and in danger of becoming a lost asteroid. The more observations and the longer the observation arc, the greater the accuracy of the orbit model.
This is important for two reasons:

- For imminent impacts, it helps to make a better prediction of where the impact will occur and whether there is any danger of hitting a populated area.
- For asteroids that will miss Earth this time round, the more accurate the orbit model is, the further into the future its position can be predicted. This allows impacts to be predicted years in advance.

Estimating size and impact severity

Assessing the size of the asteroid is important for predicting the severity of the impact, and therefore the actions that need to be taken (if any). Because of this, one key follow-up observation is to view the asteroid in the thermal infrared spectrum (long-wavelength infrared), using an infrared telescope. The amount of thermal radiation given off by an asteroid allows a much more accurate assessment of its size than how bright it appears (apparent magnitude) to an ordinary telescope that operates in the visible spectrum. Using thermal infrared, it is possible to estimate the size to within about 10% of the true size. With reflected visible light observed by a conventional telescope, the object could be anything from 50% to 200% of the estimated diameter, and anything from one eighth to eight times the estimated volume and mass. One example of such a follow-up observation was for 3671 Dionysus by UKIRT, the world's largest infrared telescope at the time (1997). However, such follow-ups are rare. The size estimates of most near-Earth asteroids are based on visible light only. If the object was discovered by an infrared survey telescope initially, then an accurate size estimate will already be available, and infrared follow-up will not be needed. However, none of the ground-based survey telescopes listed above operate at thermal infrared wavelengths. The NEOWISE satellite had two thermal infrared sensors, but they stopped working when the cryogen ran out. There are therefore currently no active or planned thermal infrared sky surveys which are focused on discovering near-Earth objects.

Minimum orbit intersection distance

The minimum orbit intersection distance (MOID) between an asteroid and the Earth is the distance between the closest points of their orbits. This first check is a coarse measure that does not allow an impact prediction to be made, but is based solely on the orbit parameters and gives an initial measure of how close to Earth the asteroid could come. If the MOID is large, then the two objects never come near each other. In this case, unless the orbit of the asteroid is perturbed so that the MOID is reduced at some point in the future, it will never impact Earth and can be ignored. However, if the MOID is small, then it is necessary to carry out more detailed calculations to determine whether an impact will happen in the future. NASA considers asteroids with a MOID of less than 0.05 AU and an absolute magnitude brighter than 22 to be potentially hazardous asteroids.

Projecting into the future

Once the initial orbit is known, the potential positions can be forecast years into the future and compared to the future position of Earth. If the distance between the asteroid and the centre of the Earth is less than the Earth's radius, then a potential impact is predicted. To take account of the uncertainties in the orbit of the asteroid, several future projections are made (simulations). Each simulation has slightly different parameters within the range of the uncertainty. This allows a percentage chance of impact to be estimated.
For example, if 1,000 simulations are carried out and 73 result in an impact, then the prediction would be a 7.3% chance of impact.

NEODyS

NEODyS (Near Earth Objects Dynamic Site) is a European Space Agency service that provides information on near-Earth objects. It is based on a continually and (almost) automatically maintained database of near-Earth asteroid orbits. The site provides a number of services to the NEO community. The main service is an impact monitoring system (CLOMON2) of all near-Earth asteroids covering a period until the year 2100. The NEODyS website includes a Risk Page where all NEOs with probabilities of hitting the Earth greater than 10⁻¹¹ from now until 2100 are shown in a risk list. In the risk list table, the NEOs are divided into:

- "special", as is the case for (99942) Apophis;
- "observable", objects which are presently observable and which critically need follow-up in order to improve their orbit;
- "possible recovery", objects which are not visible at present, but which may be recovered in the near future;
- "lost", objects which have an absolute magnitude (H) brighter than 25 but which are virtually lost, their orbit being too uncertain; and
- "small", objects with an absolute magnitude fainter than 25 which, even if they are "lost", are considered too small to result in heavy damage on the ground (though it should be noted that the Chelyabinsk meteor would have been fainter than this).

Each object has its own impactor table (IT) which shows many parameters useful to determine the risk assessment.

Sentry prediction system

NASA's Sentry System continually scans the MPC catalog of known asteroids, analyzing their orbits for any possible future impacts. Like ESA's NEODyS, it gives a MOID for each near-Earth object, and a list of possible future impacts, along with the probability of each. It uses a slightly different algorithm from NEODyS, and so provides a useful cross-check and corroboration. Currently, no impacts are predicted (the single highest-probability impact currently listed is the ~7 m asteroid 2010 RF12, which is due to pass Earth in September 2095 with only a 5% predicted chance of impacting).

Impact probability calculation pattern

The uncertainty in a prediction can be pictured as an error ellipse around the predicted position of an asteroid at its closest approach to Earth. At first, with only a few asteroid observations, the error ellipse is very large and includes the Earth. Further observations shrink the error ellipse, but it still includes the Earth. This raises the predicted impact probability, since the Earth now covers a larger fraction of the error region. Finally, yet more observations (often radar observations, or discovery of a previous sighting of the same asteroid on archival images) shrink the ellipse, revealing that the Earth is outside the error region, and the impact probability is near zero. For asteroids that are actually on track to hit Earth, the predicted probability of impact continues to increase as more observations are made. This very similar pattern makes it difficult to differentiate between asteroids which will be millions of kilometres from Earth and those which will actually hit it. This in turn makes it difficult to decide when to raise an alarm, as gaining more certainty takes time, which reduces the time available to react to a predicted impact. However, raising the alarm too soon has the danger of causing a false alarm and creating a Boy Who Cried Wolf effect if the asteroid in fact misses Earth.
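A toy Monte Carlo sketch of the percentage-of-simulations idea described above. Real systems such as Sentry and CLOMON2 propagate full orbital solutions; here the whole orbital uncertainty is collapsed into a single Gaussian miss distance at closest approach, with made-up numbers:

```python
import random

# Toy impact-probability estimate: sample many "virtual asteroids" from the
# orbit uncertainty and count the fraction that pass closer than one Earth
# radius. All numbers below are illustrative assumptions only.

EARTH_RADIUS_KM = 6371
NOMINAL_MISS_KM = 15_000      # assumed best-estimate closest approach to Earth's centre
UNCERTAINTY_KM = 12_000       # assumed 1-sigma uncertainty from a short observation arc
N_SIMULATIONS = 100_000

random.seed(42)
hits = sum(
    abs(random.gauss(NOMINAL_MISS_KM, UNCERTAINTY_KM)) < EARTH_RADIUS_KM
    for _ in range(N_SIMULATIONS)
)
print(f"Estimated impact probability: {hits / N_SIMULATIONS:.1%}")

# As follow-up observations shrink UNCERTAINTY_KM, the estimate moves towards
# 0% (Earth falls outside the error region) or towards 100% (a real impactor),
# mirroring the pattern described above.
```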
Response to predicted impact

Once an impact has been predicted, the potential severity needs to be assessed, and a response plan formed. Depending on the time to impact and the predicted severity, this may be as simple as giving a warning to citizens. For example, although unpredicted, the 2013 impact at Chelyabinsk was spotted through the window by teacher Yulia Karbysheva. She thought it prudent to take precautionary measures by ordering her students to stay away from the room's windows and to perform a duck-and-cover maneuver. The teacher, who remained standing, was seriously lacerated when the blast arrived and window glass severed a tendon in one of her arms and left thigh, but none of her students, whom she ordered to hide under their desks, suffered cuts. Children who were not in her class were injured. If the impact had been predicted and a warning had been given to the entire population, similar simple precautionary actions could have vastly reduced the number of injuries. If a more severe impact is predicted, the response may require evacuation of the area, or an avoidance mission to repel the asteroid. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched.

Effectiveness of the current system

The chart below shows the number of successfully predicted impacts each year compared to the number of unpredicted asteroid impacts recorded by infrasound sensors designed to detect detonation of nuclear devices:

[Chart: asteroid impacts per year — successfully predicted impacts vs. unpredicted impacts]

Another way to assess the effectiveness of the current system is to look at warning times for asteroids which did not impact Earth, but came reasonably close. Looking at asteroids which came closer than the Moon, the chart below shows how far in advance of closest approach the asteroids were first detected. Unlike actual asteroid impacts, where, by using infrasound sensors, it is possible to assess how many were undetected, there is no ground truth for close approaches. The chart therefore does not include any statistics for asteroids which were undetected. It can be seen, however, that the number of detections is increasing as more survey sites come on line (for example ATLAS in 2016 and ZTF in 2018).

[Chart: close approaches per year, by warning time — discovered more than 1 year in advance; more than 1 week in advance; up to 1 week in advance; less than 24 hours warning; discovered after closest approach]

One final statistic which casts some light on the effectiveness of the current system is the average warning time for an asteroid impact. Based on the few successfully predicted asteroid impacts, the average time between initial detection and impact is currently around 14 hours. Note, however, that there is some delay between the initial observation of the asteroid and the follow-up calculations which lead to an impact prediction being made.

Improving impact prediction

In addition to the already-funded telescopes mentioned above, two separate approaches have been suggested by NASA to improve impact prediction. Both approaches focus on the first step in impact prediction (discovering near-Earth asteroids), as this is the largest weakness in the current system. The first approach uses more powerful ground-based telescopes similar to the LSST. Being ground-based, such telescopes will still only observe part of the sky around Earth.
In particular, all ground-based telescopes have a large blind spot for any asteroids coming from the direction of the Sun. In addition, they are affected by weather conditions, airglow, and the phase of the Moon. To get around all of these issues, the second approach suggested is the use of space-based telescopes, which can observe a much larger region of the sky around Earth. Although they still cannot point directly towards the Sun, they do not have the problem of a bright daytime sky to overcome and so can detect asteroids much closer in the sky to the Sun than ground-based telescopes. Unaffected by weather or airglow, they can also operate 24 hours per day all year round. Finally, telescopes in space have the advantage of being able to use infrared sensors without the interference of the Earth's atmosphere. These sensors are better for detecting asteroids than optical sensors, and although there are some ground-based infrared telescopes such as UKIRT, they are not designed for detecting asteroids. Space-based telescopes are more expensive, however, and tend to have a shorter lifespan. Therefore, Earth-based and space-based technologies complement each other to an extent. Although the majority of the IR spectrum is blocked by Earth's atmosphere, the very useful thermal (long-wavelength infrared) frequency band is not blocked (there is an atmospheric window at around 10 μm). This allows for the possibility of ground-based thermal imaging surveys designed for detecting near-Earth asteroids, though none are currently planned.

There is a further issue that even telescopes in Earth orbit do not overcome (unless they operate in the thermal infrared spectrum). This is the issue of illumination. Asteroids go through phases similar to the lunar phases. Even though a telescope in orbit may have an unobstructed view of an object that is close in the sky to the Sun, it will still be looking at the dark side of the object. This is because the Sun is shining primarily on the side facing away from the Earth, as is the case with the Moon when it is in a crescent phase. Because of the opposition effect, objects are far less bright in these phases than when fully illuminated, which makes them difficult to detect. This problem can be solved by the use of thermal infrared surveys (either ground-based or space-based). Ordinary telescopes depend on observing light reflected from the Sun, which is why the opposition effect occurs. Telescopes which detect thermal infrared light depend only on the temperature of the object. Its thermal glow can be detected from any angle, and is particularly useful for differentiating asteroids from the background stars, which have a different thermal signature.

This problem can also be solved without using thermal infrared, by positioning a space telescope away from Earth, closer to the Sun. The telescope can then look back towards Earth from the same direction as the Sun, and any asteroids closer to Earth than the telescope will then be in opposition, and much better illuminated. There is a point between the Earth and Sun where the gravities of the two bodies are perfectly in balance, called the Sun-Earth L1 Lagrange point (SEL1). It is approximately 1 million miles from Earth, about four times as far away as the Moon, and is ideally suited for placing such a space telescope. One problem with this position is Earth glare. Looking outward from SEL1, Earth itself is at full brightness, which prevents a telescope situated there from seeing that area of sky.
Fortunately, this is the same area of sky that ground-based telescopes are best at spotting asteroids in, so the two complement each other. Another possible position for a space telescope would be even closer to the Sun, for example in a Venus-like orbit. This would give a wider view of Earth orbit, but at a greater distance. Unlike a telescope at the SEL1 Lagrange point, it would not stay in sync with Earth but would orbit the Sun at a similar rate to Venus. Because of this, it would not often be in a position to provide any warning of asteroids shortly before impact, but it would be in a good position to catalog objects before they are on final approach, especially those which primarily orbit closer to the Sun. One issue with being as close to the Sun as Venus is that the craft may be too warm to use infrared wavelengths. A second issue would be communications. As the telescope will be a long way from Earth for most of the year (and even behind the Sun at some points) communication would often be slow and at times impossible, without expensive improvements to the Deep Space Network. Solutions to problems: summary table This table summarises which of the various problems encountered by current telescopes are solved by the various different solutions. |Geographically separated ground based survey telescopes||✓| |More powerful ground based survey telescopes||✓| |Infrared ground based NEO survey telescopes[note 10]||✓||✓| |Telescope in Earth orbit||✓||✓||✓||✓ |Infrared Telescope in Earth orbit||✓||✓||✓||✓ |Telescope at SEL1||✓||✓||✓||✓ |Infrared Telescope at SEL1||✓||✓||✓||✓ |Telescope in Venus-like orbit||✓||✓||✓||✓||[note 13]||✓| In 2017 NASA proposed a number of alternative solutions to detect 90% of near-Earth objects of size 140 m or larger over the next few decades, which will also improve detection rates for the smaller objects which impact Earth more often. Several of the proposals use a combination of an improved ground based telescope and a space based telescope positioned at the SEL1 Lagrange point such as NEOCAM. However none of these proposals have yet been funded. As this is a global issue, and noting that to date NASA-sponsored surveys have contributed over 95% of all near earth object discoveries, in 2018 the Trump administration asked NASA to find international partners to help fund the improvements. List of successfully predicted asteroid impacts Below is the list of all near-Earth objects which may have impacted the Earth and which were successfully predicted beforehand. This list would also include any objects identified as having greater than 50% chance of impacting in the future, but none of the future impacts have been predicted at this time. As asteroid detection ability increases it is expected that prediction will become more successful in the future. |2014-01-02||2014-01-01||2014 AA||69||0.8||No||2–4||30.9||35.0||unknown||unknown[note 15]| |2018-06-02||2018-06-02||2018 LA[note 17]||227||0.3||No||2.6–3.8||30.6||17||28.7||1 | - Completeness refers to the number of undiscovered asteroids, not the amount of time remaining to achieve completeness. The asteroids remaining to be discovered are the ones which are hardest to find. - The exact percentage of objects discovered is uncertain but is estimated using statistical techniques. 2018 estimates for objects at least 1 km in size put the figure somewhere between 89% and 99%, with an expected value of 94%. 
This matches the figure from a 2017 NASA report, which was estimated independently using a different technique.
- Because of daylight, telescopes cannot see the portion of the sky around the Sun, and because the Earth is in the way, they can only see so far north and south of the latitude at which they are positioned. The time given is the time it takes for a survey to complete coverage of the part of the sky that it can see from where it is located, assuming good weather.
- The limiting magnitude indicates how bright an object needs to be before the telescope can detect it; larger numbers are better (fainter objects can be detected).
- This total includes all asteroid observations, not just near-Earth asteroids.
- Spacewatch is still operational; however, in 1998 the Catalina Sky Survey (which is also run by the University of Arizona) took over survey duties. Since then, Spacewatch has focused on follow-up observations.
- Around the time of the full Moon, the Moon is so bright that it lights up the atmosphere, making faint objects impossible to see for several days per month.
- This refers to the opposition effect as seen from Earth, the fact that objects outside of the narrow cone centered on Earth are much fainter and harder to spot without using thermal infrared.
- Use of thermal infrared allows objects to be seen at all angles, as detection doesn't depend on reflected sunlight. It also allows an accurate size estimate of the object, which is important for predicting the severity of an impact.
- Although many IR wavelengths are blocked by the atmosphere, there is a window from 8 μm to 14 μm that allows detection of IR at useful wavelengths such as 12 μm. A 12 μm sensor was used by WISE to detect asteroids during its space-based mission. Although some ground-based IR surveys exist which can detect 12 μm (such as the UKIRT Infrared Deep Sky Survey), none are designed to detect moving objects such as asteroids.
- Telescopes in Earth orbit are affected to an extent by the glow of the Moon, but not in the same way as ground-based telescopes, where the light from the Moon is scattered across the sky by the atmosphere.
- Telescopes at SEL1 are primarily affected by the glare of the Earth rather than the Moon, but not in the same way as ground-based telescopes, where the light from the Moon is scattered across the sky by the atmosphere.
- Telescopes in a Venus-like orbit have no problems with atmosphere but, being closer to the Sun, may be too warm to effectively use thermal infrared sensors. This problem could be overcome by using cryogenic coolant, but this increases cost and gives the telescope a limited lifespan due to the coolant running out.
- There are two main strategies for predicting asteroid impacts with Earth, the Cataloging Strategy and the Warning Strategy. The Cataloging Strategy aims to detect all near-Earth objects which could at some point in the future impact Earth. Accurate orbit predictions are made which can then anticipate any future impact years in advance. The larger and therefore most dangerous objects are amenable to this strategy, as they can be observed from a sufficient distance. The more numerous but less dangerous smaller objects cannot so easily be detected this way, as they are fainter and cannot be seen until they are relatively close by.
The Warning Strategy aims to detect impactors months or days before they reach Earth (NASA 2017 Update on Enhancing the Search and Characterization of Near Earth Objects) - 2014 AA exploded over the mid-Atlantic, far from the nearest infrasound detectors. Although some detections were made, reliable figures are not known - an object with the temporary designation A106fgF was discovered by the ATLAS survey and only has an observation arc of 39 minutes. Using the observation arc, it was only possible to estimate a 9% chance of impact between the South Atlantic, southern Africa, the Indian ocean, Indonesia, or the Pacific ocean. Whether the asteroid did impact Earth or not remains uncertain due to its small size. - 2018 LA was estimated to have an 82% chance of impacting Earth somewhere between the central Pacific ocean and Africa (Impact path). Several reports from South Africa and Botswana confirmed that it did indeed impact in South-central Africa and additional observations that came in after the impact post-predicted a consistent impact location. - Asteroid impact avoidance - Earth-grazing fireball - Impact event - List of asteroid close approaches to Earth - List of bolides - asteroids and meteoroids that impacted Earth - on YouTube - "Federal Government Releases National Near-Earth Object Preparedness Plan". Centre for NEO Studies. Interagency Working Group for Detecting and Mitigating the Impact of Earth-Bound Near-Earth Objects. Retrieved 24 August 2018. - Morrison, D., 25 January 1992, The Spaceguard Survey: Report of the NASA International Near-Earth-Object Detection Workshop Archived October 13, 2016, at the Wayback Machine, NASA, Washington, D.C. - Shoemaker, E.M., 1995, Report of the Near-Earth Objects Survey Working Group, NASA Office of Space Science, Solar System Exploration Office - Harper, Paul (28 April 2018). "Earth will be hit by asteroid with 100% CERTAINTY – space experts warn - EXPERTS have warned it is "100pc certain" Earth will be devastated by an asteroid as millions are hurling towards the planet undetected". Daily Star. Retrieved 24 November 2018. - Homer, Aaron (28 April 2018). "Earth Will Be Hit By An Asteroid With 100 Percent Certainty, Says Space-Watching Group B612 - The group of scientists and former astronauts is devoted to defending the planet from a space apocalypse". Inquisitr. Retrieved 24 November 2018. - Stanley-Becker, Isaac (15 October 2018). "Stephen Hawking feared race of 'superhumans' able to manipulate their own DNA". The Washington Post. Retrieved 24 November 2018. - Haldevang, Max de (14 October 2018). "Stephen Hawking left us bold predictions on AI, superhumans, and aliens". Quartz. Retrieved 24 November 2018. - Bogdan, Dennis (18 June 2018). "Comment - Better Way To Avoid Devastating Asteroids Needed?". The New York Times. Retrieved 24 November 2018. - Staff (21 June 2018). "National Near-Earth Object Preparedness Strategy Action Plan" (PDF). White House. Retrieved 24 November 2018. - Mandelbaum, Ryan F. (21 June 2018). "America Isn't Ready to Handle a Catastrophic Asteroid Impact, New Report Warns". Gizmodo. Retrieved 24 November 2018. - Myhrvold, Nathan (22 May 2018). "An empirical examination of WISE/NEOWISE asteroid analysis and results". Icarus. 314: 64–97. Bibcode:2018Icar..314...64M. doi:10.1016/j.icarus.2018.05.004. Retrieved 24 November 2018. - Chang, Kenneth (14 June 2018). 
"Asteroids and Adversaries: Challenging What NASA Knows About Space Rocks - Two years ago, NASA dismissed and mocked an amateur's criticisms of its asteroids database. Now Nathan Myhrvold is back, and his papers have passed peer review". The New York Times. Retrieved 24 November 2018. - Chang, Kenneth (14 June 2018). "Asteroids and Adversaries: Challenging What NASA Knows About Space Rocks - Relevant Comments". The New York Times. Retrieved 24 November 2018. - "Update to Determine the Feasibility of Enhancing the Search and Characterization of NEOs" (PDF). Near-Earth Object Science Definition Team Report 2017. NASA. Retrieved 7 July 2018. - Granvik, Mikael; Morbidelli, Alessandro; Jedicke, Robert; Bolin, Bryce; Bottke, William F.; Beshore, Edward; Vokrouhlický, David; Nesvorný, David; Michel, Patrick (25 April 2018). "Debiased orbit and absolute-magnitude distributions for near-Earth objects". Icarus International Journal of Solar System Studies. Elsevier / Science Direct. 312: 181–207. Retrieved 14 December 2018. - David, Rich (22 June 2018). "The "Threat" of Asteroid Impacts - Breaking Down the Comprehensive Chart by the US Government". Asteroid Analytics. Retrieved 14 December 2018. - Makoni, Munyaradzi (4 September 2018). "NASA's next-generation asteroid telescope set for South Africa". Physics World. IOP Publishing. Retrieved 10 December 2018. - Terán, José; Hill, Derek; Ortega Gutiérrez, Alan; Lindh, Cory (2018-07-06). "Design and construction of the SST Australia Observatory in a cyclonic region". SPIE. The international society for optics and photonics. 10700: 1070007. doi:10.1117/12.2314722.short (inactive 2018-11-05). Retrieved 20 October 2018. - Watson, Traci (2018-08-14). "Project that spots city-killing asteroids expands to Southern Hemisphere". nature international journal of science. Springer Nature Limited. Retrieved 17 October 2018. - "Residuals". Minor Planet Center. International Astronomical Uniion. Retrieved 22 October 2018. - "Spacewatch". UA Lunar & Planetary Lab. University of Arizona. Retrieved 7 December 2018. - Heinze, Aren (Ari). "The Last Alert: A New Battle Front in Asteroid Defense". CSEG Recorder. Canadian Society of Exploration Geophysicists. Retrieved 17 October 2018. - Tonry; et al. (28 March 2018). "ATLAS: A High-Cadence All-Sky Survey System". Publications of the Astronomical Society of the Pacific. 130 (988): 064505. arXiv:1802.00879. Bibcode:2018PASP..130f4505T. doi:10.1088/1538-3873/aabadf. Accessed 2018-04-14. - "ATLAS specifications". Retrieved 9 December 2018. - UA Lunar & Planetary Laboratory. "Catalina Sky Survey Telescopes". Catalina Sky Survey. The University of Arizona. Retrieved 17 October 2018. - Safi, Michael (20 October 2014). "Earth at risk after cuts close comet-spotting program, scientists warn". The Guardian. Retrieved 25 November 2015. - LSST Project Office. "LSST PROJECT SUMMARY". Large Synoptic Survey Telescope. Retrieved 17 October 2018. - "Flyeye Telescope". ESA. European Space Agency. Retrieved 10 December 2018. - "ESA's bug-eyed telescope to spot risky asteroids". ESA Space Situational Awareness. European Space Agency. Retrieved 10 December 2018. - Hugo, Kristin. "EUROPEAN SPACE AGENCY'S 'FLYEYE' TELESCOPE COULD SPOT ASTEROIDS BEFORE THEY DESTROY LIFE ON EARTH". Newsweek Tech & Science. Newsweek. Retrieved 10 December 2018. - Ray, Justin (December 14, 2008). "Mission Status Center: Delta/WISE". Spaceflight Now. Retrieved December 26, 2009. - Rebecca Whatmore; Brian Dunbar (December 14, 2009). "WISE". NASA. 
Retrieved December 26, 2009.
- Clavin, Whitney (December 14, 2009). "NASA's WISE Eye on the Universe Begins All-Sky Survey Mission". NASA Jet Propulsion Laboratory. Retrieved December 26, 2009.
- "Wide-field Infrared Survey Explorer". Astro.ucla.edu. Retrieved August 24, 2013.
- Reuters (August 22, 2013). "NASA space telescope rebooted as asteroid hunter". CBC News. Retrieved August 22, 2013.
- "NEOCam Instrument". Jet Propulsion Laboratory. NASA. Retrieved 23 October 2018.
- "Minor Planet Discoverers (by number)". IAU Minor Planet Center. 12 March 2017. Retrieved 28 March 2017.
- Michele Bannister [@astrokiwi] (30 Jun 2014). "Twitter" (Tweet). Retrieved 1 May 2016 – via Twitter.
- "Pan-STARRS". UoH Institute for Astronomy. University of Hawaii. Retrieved 17 October 2018.
- University of Hawaii at Manoa's Institute for Astronomy (18 February 2013). "ATLAS: The Asteroid Terrestrial-impact Last Alert System". Astronomy Magazine. Retrieved 2018-10-17.
- Pike, John (2010). "Space Surveillance Telescope" (Basic overview). GlobalSecurity.org. Retrieved 2010-05-20.
- Major Travis Blake, Ph.D., USAF, Program Manager (2010). "Space Surveillance Telescope (SST)" (Public Domain see Notes section). DARPA. Retrieved 2010-05-20.
- Ruprecht, Jessica D; Ushomirsky, Greg; Woods, Deborah F; Viggh, Herbert E M; Varey, Jacob; Cornell, Mark E; Stokes, Grant. "Asteroid Detection Results Using the Space Surveillance Telescope" (PDF). Defense Technical Information Center. DTIC. Retrieved 20 October 2018.
- Smith, Roger M.; Dekany, Richard G.; Bebek, Christopher; Bellm, Eric; Bui, Khanh; Cromer, John; Gardner, Paul; Hoff, Matthew; Kaye, Stephen (2014-07-14). "The Zwicky transient facility observing system" (PDF). Ground-based and Airborne Instrumentation for Astronomy V. 9147: 914779. doi:10.1117/12.2070014.
- Cao, Yi; Nugent, Peter E.; Kasliwal, Mansi M. (2016). "Intermediate Palomar Transient Factory: Realtime Image Subtraction Pipeline". Publications of the Astronomical Society of the Pacific. 128 (969): 114502. arXiv:1608.01006. Bibcode:2016PASP..128k4502C. doi:10.1088/1538-3873/128/969/114502.
- Birtwhistle, Peter. "Great Shefford Location and situation". Great Shefford Observatory. Retrieved 24 October 2018.
- "NEOCam Infrared". Jet Propulsion Laboratory. NASA. Retrieved 30 October 2018.
- "Discovery of a Satellite Around a Near-Earth Asteroid". European Southern Observatory. 22 July 1997. Retrieved 30 October 2018.
- "NEO Earth Close Approach data". NASA JPL. NASA. Retrieved 7 July 2018.
- "NEO Basics - NEO Groups". Center for Near Earth Object Studies. NASA JPL. Retrieved 25 October 2018.
- "Sentry: Earth Impact Monitoring Introduction". Center for Near Earth Object Studies. NASA JPL. Retrieved 25 October 2018.
- "Near Earth Objects - Dynamic Site". NEODyS-2. European Space Agency. Retrieved 25 October 2018.
- "NEODyS-2 Risk List". NEODyS-2. European Space Agency. Retrieved 25 October 2018.
- "Sentry: Earth Impact Monitoring". Jet Propulsion Laboratory. NASA. Retrieved 25 August 2018.
- "How NASA hunts the asteroids that could smash into Earth". Vox.com. Vox Media Inc. 2017-06-30. Retrieved 4 September 2018.
- "Why we have Asteroid "Scares"". Spaceguard UK. Archived from the original on December 22, 2007. (Original site is no longer available; see archived site at )
- Kramer, Andrew E. (17 February 2013). "After Assault From the Heavens, Russians Search for Clues and Count Blessings". New York Times.
Archived from the original on 17 February 2013.
- "Челябинская учительница спасла при падении метеорита более 40 детей" [Chelyabinsk teacher saved more than 40 children during the meteorite fall]. Интерфакс-Украина (in Russian). Retrieved 2018-09-28.
- Bidder, Benjamin (15 February 2013). "Meteoriten-Hagel in Russland: "Ein Knall, Splittern von Glas"" [Meteorite hail in Russia: "A blast, splinters of glass"]. Der Spiegel (in German). Archived from the original on 18 February 2013.
- U.S. Congress (19 March 2013). "Threats From Space: a Review of U.S. Government Efforts to Track and mitigate Asteroids and Meteors (Part I and Part II) – Hearing Before the Committee on Science, Space, and Technology House of Representatives One Hundred Thirteenth Congress First Session" (PDF). United States Congress. p. 147. Retrieved 24 November 2018.
- "JPL - Fireball and bolide reports". Jet Propulsion Laboratory. NASA. Retrieved 1 Feb 2019.
- "LSST Project Schedule". Retrieved 24 August 2018.
- UKIDSS Home Page. Retrieved April 30, 2007.
- "Impact Risk Data". Sentry: Earth Impact Monitoring. Jet Propulsion Lab. Retrieved 7 July 2018.
- Jenniskens, P.; et al. (2009). "The impact and recovery of asteroid 2008 TC3". Nature. 458 (7237): 485–488. Bibcode:2009Natur.458..485J. doi:10.1038/nature07920. PMID 19325630.
- Farnocchia, Davide; Chesley, Steven R.; Brown, Peter G.; Chodas, Paul W. (1 August 2016). "The trajectory and atmospheric impact of asteroid 2014 AA". Icarus. 274: 327–333. Bibcode:2016Icar..274..327F. doi:10.1016/j.icarus.2016.02.056.
- "Tiny Asteroid Discovered Saturday Disintegrates Hours Later Over Southern Africa". NASA/JPL. Jet Propulsion Laboratory. Retrieved 4 June 2018.
Properties of the Centroid
By: Sydney Roberts

A median is a line segment that extends from a vertex of a triangle to the midpoint of the opposite side. Therefore, each triangle will have three medians, and they intersect at a point that is exactly two-thirds of the way along each median from its vertex. We call this point the centroid. How do we know this is true?

Consider the following triangle XYZ. We can think of each vertex of this triangle as a vector. Hence, we can label each side as such, where x is the vector extending from O to X, y is the vector extending from O to Y, and similarly z is the vector extending from O to Z. Now, hiding the original vectors in order to avoid clutter, we can also find the midpoints of each side of the triangle and label them as such. Now that we have the midpoints, we can construct the medians.

Now we want to show that these medians intersect at a common point; specifically, that they intersect at the point we refer to as the centroid of the triangle. As mentioned before, we want to show that this centroid lies two-thirds of the way down each median from the vertex. Now that we have vectors for each side, we can label the medians accordingly. We just need to show that the point two-thirds of the way from each vertex along its median is the same point (a short derivation is sketched below). The same algebra holds for all three medians, and we see that the centroid does indeed lie at a point two-thirds of the way along each median from the vertex.

The centroid is commonly referred to as the “center of mass” or “center of gravity” of a triangle. This idea comes from the fact that if you had a 2D triangle with the centroid labeled, the centroid would be the balancing point: the triangle could rest on this point alone without toppling. This only works if the area around this point is evenly dispersed. Therefore, consider the three triangles that are formed by joining the centroid to the vertices. Now we want to show that the area of triangle ABG equals the area of triangles CGA and BGC. Conveniently, we can use Geometer’s Sketchpad to calculate this, and we can see that this is true. If you want to change the triangle and see that the areas remain equal, use the following GSP file:
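The algebra behind this claim fits in a few lines. The sketch below uses the vertex vectors x, y, and z from the text; the midpoint labels M_X, M_Y, and M_Z are introduced here for convenience and may not match the labels in the original figures.

\[ M_X = \tfrac{1}{2}(y + z), \qquad M_Y = \tfrac{1}{2}(x + z), \qquad M_Z = \tfrac{1}{2}(x + y) \]

\[ x + \tfrac{2}{3}\,(M_X - x) \;=\; x + \tfrac{2}{3}\!\left(\tfrac{y + z}{2} - x\right) \;=\; \tfrac{1}{3}(x + y + z) \]

By symmetry, starting from Y or Z gives the same result, so all three medians pass through the single point G = (x + y + z)/3, which lies two-thirds of the way along each median from its vertex.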
Lesson 2 - Dynamic memory management in C++

In the last lesson, Introduction to pointers in C++, we introduced pointers in C++. We already know that C++ lets us work with memory, and we have learned to pass parameters by reference to functions. Today, however, we will really start programming in C++: we will understand how memory allocation works and break free from the length limits of static arrays.

Static and dynamic memory allocation

As we know, our program has to call the operating system to allocate memory, which is not entirely easy, which is why C++ tries to do as much as possible for us.

Statically allocated memory

When our program is compiled, the compiler can in many cases simply work out how much memory will be needed to run the program. When we create a variable of the int type, the compiler knows to set aside 32 bits for it. When we create an array of 100 characters, C++ again knows it needs to reserve 800 bits. If no data needs to be added while the program is running, this automatic allocation will suffice. That's basically the way we've programmed so far.

Dynamically allocated memory on the stack

You may have wondered how memory allocation for local variables (those defined inside functions) works in C++. After all, C++ doesn't know how many times we will call a function and therefore how many variables will be needed in the end. This memory really is allocated dynamically at runtime, but everything happens fully automatically: when we call a function, C++ asks for memory, and when the function ends, that memory is freed. This is why a function can't return an array. As we already know, an array isn't copied (it is not passed by value like an int); it's treated as if it were a pointer. Since local variables disappear after the function ends, we would get a pointer to a place where the array may no longer exist.

Dynamically allocated memory on the heap

So far, it seems that C++ is doing everything for us. So where's the problem? Imagine that you're programming an application that records, for example, items in a warehouse. Do you know in advance how big an array you need to create for the items? Will there be 100, 1000, millions? If we declare a static array of some structures, we will always either waste space or risk that the reserved array will no longer suffice. The problem is that sometimes we don't know how much memory will be needed until the program is running, and therefore C++ can't allocate it for us in advance. Fortunately, it gives us the tools to obtain any amount of memory while the program is running.

Note: The terms stack and heap were mentioned above. These are the 2 types of memory in RAM that the program works with. Simply put, working with the stack is faster, but it's limited in size. The heap is intended primarily for larger data, e.g. for the mentioned items in the warehouse. When we allocate memory ourselves, it'll always be allocated on the heap.

Dynamic memory allocation

The focus of work with dynamic memory in C++ is a pair of keywords - new and delete. The new keyword asks the operating system for enough memory for the specific type we are going to allocate, and it returns a pointer to the beginning of the address where our new memory lies. We already know that each pointer has a certain type, or rather points to some type. If the memory allocation fails (for example, we run out of memory, which theoretically shouldn't happen today, but we should take this case into account), a plain new throws a std::bad_alloc exception, while the nothrow form, new (std::nothrow), instead returns NULL (i.e.
a pointer to nowhere). Each call to new must eventually (perhaps only at the end of the program) be followed by a call to delete, which marks the memory as free again. This memory is fully under our control and no one but us will release it. As soon as we no longer need dynamically allocated memory, we should free it immediately.

Let's allocate space for an int and a double while the program is running:

int* number = new int;
double* floating_number = new double;

// ... the program works with the values ...

delete number;
number = NULL;
delete floating_number;
floating_number = NULL;

In the program, we work with the pointers in the same way as we showed in the previous part. At the end of the program, we must free the memory using delete. Notice the subsequent assignment of NULL back to the pointer. If you call delete twice on the same pointer, the program will probably crash: it will try to release memory that no longer belongs to it and the operating system will terminate it. When we assign NULL, delete doesn't delete anything, so it's safer to assign NULL to a pointer after each call to delete.

Common mistakes when working with pointers

Working with pointers is quite dangerous, because nobody watches over us programmers, and making a mistake is very easy even if you are not a beginner. Let's mention a few points that deserve attention when working with pointers.

- Not freeing memory - If we forget to free some memory once, basically nothing will happen. The problem is when we forget to free memory inside a function that is called several times during the program run. The worst situation is when we don't free memory allocated in a loop. With this error we'll eventually run out of memory, the application will crash, and the user will lose data and pay the competition for a working application. :)
- Exceeding memory boundaries - As was the case with arrays, no one monitors what we store in this memory. If we save something larger than the space we have reserved, we'll corrupt the memory of another part of the application. This error can occur anywhere, and we will probably look for it for a very long time, because the resulting failure is not logically related to the place in the program where our memory overflowed. It can really show up in any way. :)
- Working with freed memory - It may happen that we free some memory and then try to write something to that address again. At that moment we are again writing to memory that doesn't belong to us, with the consequences described above. This is another reason why it's a good idea to store NULL in a pointer after freeing its memory.

Dynamically allocated arrays

We allocate an array the same way as a single value, only with the number of elements in square brackets. For example, we will allocate an array of 10 integers as follows:

int* array = new int[10];
array[0] = 125;

First, notice that we can actually treat the pointer as an array; you will learn more in the next part. The rule is still that C++ doesn't check array bounds. C++ allows us to access, for example, the fifteenth element (array[15]), but then we are again reaching into memory that doesn't belong to us. In the best case the program reads random data, in the worse case it crashes. What is different is freeing the memory. This time we have to tell the compiler that we're freeing an array, which we do using square brackets with delete:

delete[] array;

Thanks to the brackets, C++ knows it needs to delete an array. (A complete, runnable sketch putting these pieces together follows below.)
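To tie the pieces together, here is a minimal, self-contained sketch of the warehouse example from the beginning of the lesson: the number of items is known only at runtime, so the array is allocated on the heap. The variable names (item_count, prices) and the use of the nothrow form of new are choices made for this sketch, not something prescribed by the lesson.

#include <iostream>
#include <cstddef>   // NULL
#include <new>       // std::nothrow

int main()
{
    int item_count;
    std::cout << "How many items are in the warehouse? ";
    std::cin >> item_count;
    if (item_count <= 0)
        return 1;

    // Allocate the array on the heap; its size is known only at runtime.
    // The nothrow form returns NULL on failure instead of throwing std::bad_alloc.
    double* prices = new (std::nothrow) double[item_count];
    if (prices == NULL)
    {
        std::cout << "Allocation failed" << std::endl;
        return 1;
    }

    // Work with the pointer as if it were an array.
    for (int i = 0; i < item_count; i++)
        prices[i] = i * 2.5;   // sample data

    std::cout << "Last price: " << prices[item_count - 1] << std::endl;

    // Free the array with delete[] and reset the pointer so a repeated
    // delete would be harmless.
    delete[] prices;
    prices = NULL;
    return 0;
}

In real programs this manual bookkeeping is usually handled by std::vector or smart pointers, but practicing the manual version is exactly the point of this lesson.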
It's also important that the pointer's type matches the type of the stored data (pointers can be cast to a different type). If you change the pointer type and then try to delete through it, the result is undefined: the program may crash or free the wrong amount of memory. Either way, the program gets into a situation that should never happen. In the next lesson, Arithmetic of pointers in C++, we will learn pointer arithmetic and find that pointers in C++ are even more similar to arrays than we thought.
Divergent roosting habits of Rafinesque's big-eared bat and southeastern myotis during winter floods. Tree roosts are a fundamental resource for forest dwelling bats, providing protection from the elements and predators, and serving as a location for many social and reproductive functions (Kunz and Lumsden, 2003). Although much effort has been devoted to describing summer tree roosts, few studies have described trees used as roosts during winter (Cryan and Veilleux, 2007). Winter roosts may be particularly important to survival because bats must cope with greater thermoregulatory challenges at a time of reduced food abundance (Speakman and Thomas, 2003) and because they are more susceptible to predators (Estok et al., 2010), disease (Bouma et al., 2010), and accidents (Cope, 1977) while hibernating. Given that the ecological constraints facing tree roosting bats are compounded by these thermoregulatory challenges during winter, winter roost selection strategies may differ from summer strategies. The limited information available indicates that a number of tree roosting species use different roosts in winter than in summer. For example, some lasiurine species use tree roosts in winter but move to leaf litter during periods of extreme cold (Mormann and Robbins, 2007; Hein et al., 2008). Low winter temperatures can also prompt silver-haired bats (Lasionycteris noctivagans) and evening bats (Nycticeius humeralis) to switch from tree roosts to rock crevices or burrows (Boyles et al., 2005; Perry et al., 2010). Understanding these seasonal changes in roost selection is vital to developing year round management strategies for tree roosting bats (Brigham, 2007). Rafinesque's big-eared bat (Corynorhinus rafinesquii) and southeastern myotis (Myotis austroriparius) are restricted to the Southeast and lower Midwest regions of the United States (Jones, 1977; Jones and Manning, 1979). In the Coastal Plain, both species typically roost in hollow trees in baldcypress (Taxodium distichum)--tupelo (Nyssa spp.) swamps (Gooding and Langford, 2004). During summer both species commonly roost in large diameter water tupelo (N. aquatica) with basal openings, although Rafinesque's big-eared bat uses trees with a larger mean diameter (Carver and Ashley, 2008). During winter cypress-gum swamps are subject to prolonged flooding, which can cover basal openings (Battle and Golladay, 2001). Therefore, trees used as roosts in summer may be unavailable for winter use and ground roosting strategies used by some other bats are untenable. Previous roost searches during winter were limited to times or locations with minimal flooding (Mirowsky and Homer, 1997; Stevenson, 2008; Rice, 2009; Loeb and Zarnoch, 2011). In those surveys, bats were commonly absent from summer roosts during winter. A telemetry study documented alternate Rafinesque's big-eared bat roosts in trees without basal openings in fall and early winter (Rice, 2009). However, roost searches have not been conducted during prolonged winter flooding. Therefore, our objectives were to locate winter roosts of Rafinesque's big-eared bat and southeastern myotis in an area subject to extensive flooding, evaluate the basis of roost selection in winter, and compare winter roosts to previously described summer roosts.

We conducted our research on River Bend Wildlife Management Area (WMA; 32°28'N, 82°50'W), a state-owned property in Laurens County, Georgia. During winter typical high temperatures are 14 to 21 C and typical low temperatures are 2 to 6 C.
The site consists of pine (Pinus spp.) dominated uplands and a hardwood dominated river floodplain along the Oconee River. The floodplain is intermittently flooded by up to 0.5 m of water during winter, but surface water is limited to several oxbow lakes during summer. Accordingly, the floodplain supports diverse water tolerant tree species, including oaks (Quercus spp.), hickories (Carya spp.), red maple (Acer rubrum), and sweetgum (Liquidambar styraciflua), while the oxbow lakes are dominated by two hydrophilic tree species, water tupelo and baldcypress, that stand in 1-5 m of water throughout winter. We previously surveyed the study area during summer 2008 and determined that large diameter hollow trees were concentrated around two oxbow lakes, Troup Lake and Beacham Lake (Clement, 2011). Therefore, we limited our winter surveys to those lakes. We conducted visual searches of potential roosts by boat from 23 Jan. to 16 Mar., 2010. We visually searched the interior of all trees with basal openings using a spotlight and mirror to determine if bats were present. Based on the high estimated probability of detection for roosting bats (Clement and Castleberry, 2013a), we assumed bats were not overlooked when present. We used standard arborist techniques to climb additional hollow trees that lacked basal openings but had higher openings (Jepson, 2000). We searched all trees that could be climbed safely and had an opening large enough (>25 cm) to allow visual inspection for roosting bats. Trees suitable for climbing were searched three times during the study to minimize false negatives due to bats switching roosts. Trees with basal openings were searched only once or twice because openings were covered by flood waters during most of the study. We captured bats in single high mist nets (50 denier weight, 2 ply nylon, 38 mm mesh, Avinet Inc., Dryden, N.Y.) placed near Troup Lake from 3 Feb. to 7 Mar., 2010. Netting sites were selected on an ad hoc basis with the goal of maximizing captures. We attached 0.4 g radio transmitters (Lotek Wireless, Newmarket, Ontario) to bats with Torbot Liquid Bonding Cement (Torbot Group Inc., Cranston, RI). Radiotransmitters averaged 5.0% of body weight (range: 4.3-5.7%) for Rafinesque's big-eared bats and 6.2% of body weight (range: 6.1-6.3%) for southeastern myotis. Capture and handling protocols were approved by the University of Georgia Institutional Animal Care and Use Committee (approval no. A2007-10046-cl) and Georgia Scientific Collecting Permit #29-WCH-06-104. We radio tracked bats daily, beginning the day after capture on Feb. 3 and continued until Mar. 18. We located bats using a portable telemetry receiver (TRX 2000S; Wildlife Materials Inc., Murphysboro, IL) and a 3-element Yagi antenna (Advanced Telemetry Systems, Inc., Isanti, MI). For all searched trees and roosts located by telemetry, we recorded tree species and whether the tree was alive or dead. We recorded tree diameter at 0.5 m above the surface of the water using a diameter at breast height (dbh) tape (Spencer Products Co., Seattle, WA). The measurement typically was above the buttress swell common in hydrophilic trees, but the height of measurement varied with water level. We measured tree height above the water using a 400 LH laser hypsometer (Opti-Logic Corp., Tullahoma, TN), although height varied with water depth. We counted the number of visible cavity openings (>7.5 cm) on each tree, recorded height above water, height and width of each opening, and calculated the area of the opening. 
We recorded UTM coordinates of trees with an eTrex Venture HC global positioning system unit (Garmin Ltd., Olathe, KS). We measured interior cavity diameter of hollow trees with a tape measure and interior cavity height using a tape measure or hypsometer. Cavity volume was calculated from cavity height and average diameter assuming a cylindrical shape (V = πr²h). We estimated solid tree volume by subtracting estimated cavity volume from total bole volume, which we estimated from bole height and diameter, assuming a conical shape (V = πr²h/3). We recorded whether or not the cavity had a chimney-like opening at the top. We also characterized the interior surface of the cavity as rough (>50% of cavity surface covered with projections >2 cm) or smooth. We also obtained previously collected data on summer tree roosts of both bat species in River Bend WMA and seven other study sites in the Coastal Plain of Georgia (Clement, 2011). We returned to the study site and remeasured characteristics of winter roosts after flood waters had receded to allow comparisons between winter and summer roost trees. For Rafinesque's big-eared bat, we used logistic regression to predict which trees were occupied in winter based on the measured characteristics of hollow trees (Hosmer and Lemeshow, 2000). We considered trees that contained bats during any roost search or telemetry session to be occupied and trees that did not contain bats during any roost search to be unoccupied. Due to sample size constraints and the lack of prior knowledge about winter roosts, we only evaluated univariate models (Hosmer and Lemeshow, 2000). In addition to nine univariate models, we also evaluated a global model including all uncorrelated (Pearson R² < 0.25) predictor variables and a model with no predictors, indicating roost usage was random with respect to tree characteristics. Before performing logistic regression on tree data for Rafinesque's big-eared bat, we transformed the predictor variables "tree cavity volume" and "solid tree volume" using the natural logarithm to ensure linearity in the logit function (Hosmer and Lemeshow, 2000). We also eliminated tree species as a predictor due to data separation (Hosmer and Lemeshow, 2000). For statistical analysis, we considered the tree to be the experimental unit. To compare trees that were occupied or unoccupied in winter, we used tree measurements with flood waters present to represent the trees as they would be encountered by bats during winter. We evaluated goodness-of-fit of our global model using the Hosmer-Lemeshow statistic (Hosmer and Lemeshow, 2000). We transformed log odds regression coefficients to odds ratios, which express how much more or less likely an outcome (i.e., bat occupancy) is as the predictor variable changes (Hosmer and Lemeshow, 2000). Because data were collected in a case control sampling scheme in which analyzed trees were not proportional to their presence in the population, analysis yielded unbiased estimates of coefficients and odds ratios, but biased intercept terms (Keating and Cherry, 2004). We used Akaike's Information Criterion corrected for small sample bias (AICc) to assess the fit of the candidate models, with the lowest AICc indicating the best supported model (Burnham and Anderson, 2002). We calculated a composite model from the best supported models, where appropriate (Burnham and Anderson, 2002). We calculated Nagelkerke's R² to quantify the variation explained by each model (Nagelkerke, 1991).
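For readers who want the formulas behind the odds ratios, AICc values, and Akaike weights referred to above and reported in Tables 2 and 4, the standard definitions are restated here only as a reader aid; the symbols β, L, k, n, and Δ are generic and are not taken from the paper itself.

\[ \operatorname{logit}(p) = \ln\frac{p}{1-p} = \beta_0 + \beta_1 x, \qquad \mathrm{OR} = e^{\beta_1} \]

\[ \mathrm{AIC} = 2k - 2\ln L, \qquad \mathrm{AIC}_c = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1}, \qquad w_i = \frac{e^{-\Delta_i/2}}{\sum_j e^{-\Delta_j/2}} \]

Here p is the probability that a tree is occupied, x is a predictor, k is the number of model parameters, n is the sample size, L is the maximized likelihood, and Δi is the AICc difference between model i and the best model. Each one-unit increase in x therefore multiplies the odds p/(1 − p) by e^β1, which is the odds ratio reported in the Results.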
We used leave-one-out cross-validation to estimate the top model's ability to predict bat presence (Efron, 1983). We selected a prediction cutoff equal to the proportion of trees occupied during winter (0.43), so that a predicted probability greater than 0.43 was considered a prediction of presence. The process was repeated for every data point and model predictions were compared to the actual states to calculate prediction and classification error rates. For southeastern myotis roosts, our sample was smaller than the minimum recommendation of 10 observations per independent variable for logistic regression (Hosmer and Lemeshow, 2000). Therefore, we used t-tests and Fisher's exact test to identify continuous and binary tree characteristics that differed between occupied and unoccupied trees in winter. We conducted all analyses in Program R 2.11.1 (R Development Core Team, 2010). We also examined differences between trees used in winter and trees used in summer using previously collected summer roost data from River Bend WMA and seven other sites. Because combining sites violated assumptions of many statistical tests, we qualitatively compared winter roosts to summer roosts from all eight sites by examining box-and-whisker plots. We then compared summer roosts from only River Bend WMA to winter roosts using the same statistical analyses described above (i.e., logistic regression, cross-validation, t-tests, and Fisher's exact tests). To compare winter and summer roosts, we used tree measurements with flood waters absent to obtain valid comparisons. For this analysis, we selected a prediction cutoff equal to the proportion of roosts that were winter roosts (0.58), so that a predicted probability greater than 0.58 was considered a prediction of a winter roost. We identified 149 hollow trees with visible cavity openings during our winter surveys. We searched all 18 trees (12%) that had a basal opening exposed at some point during our surveys and climbed an additional 29 trees (19%) lacking basal openings, but with higher openings. Eleven (61%) of the trees with a basal opening also had an opening >4 m above the water, while the rest only had openings <2 m above the water. The remaining hollow trees (68%) were unsafe to climb or had small entrances and were not searched. Both searched and unsearched trees were dominated by large water tupelo but searched trees were larger in diameter (113 cm versus 84 cm), less likely to be dead (2% versus 6%), and less likely to be baldcypress (9% versus 13%). Three (17%) of the trees with basal openings were occupied by one or two Rafinesque's big-eared bats during ≥1 survey. Two of these also had elevated openings, while one did not. Fifteen (52%) of the trees we climbed were occupied by one to nine Rafinesque's big-eared bats during ≥1 survey. None of the trees searched held southeastern myotis. We radio tagged eight Rafinesque's big-eared bats (five adult males with enlarged epididymides, three adult females) and two southeastern myotis (adult males with enlarged epididymides) captured in mist nets. We located the Rafinesque's big-eared bats on 112 of 122 tracking days (92%) for an average of 14.0 d per bat. For six bats, the distance from capture site to the initial roost was <150 m, while two bats moved 2.5 km to private land on the opposite (west) side of the Oconee River, yielding a mean of 718 m. On average, Rafinesque's big-eared bats switched roosts every 6.9 d (range: 1-22 d).
Mean distance between successive roosts was 100 m (range: 3-210 m). Each bat used an average of 1.8 roosts, although radio tagged bats only used eight unique roosts because some were used by more than one bat. Six of these roosts only had elevated openings, one had both basal and elevated openings, and one only had a basal opening. Combining all search techniques and accounting for three roosts located by both telemetry and roost searches, we found 23 total roosts, all of which were water tupelo in the oxbow lakes or similar habitat on private property. We located the two southeastern myotis on 17 of 20 tracking days (85%) for an average of 8.5 d each. The average distance from capture site to the initial roost was 208 m, while the mean distance between successive roosts was 716 m (range: 15-2,237 m). These bats switched roosts every 2.8 d (range: 1-11 d), and used six unique roosts, although we could not locate one roost on private property. In contrast to the Rafinesque's big-eared bat roosts, all southeastern myotis roosts were located in the floodplain surrounding Troup Lake. One roost had a large cavity with a basal opening that was occasionally submerged by flood waters. The other four roosts we located were small crevices at various heights above the ground. The only differences between Rafinesque's big-eared bat winter roost trees and unoccupied trees were that occupied trees were less likely to have a low opening and more likely to be water tupelo (Table 1). The Hosmer-Lemeshow statistic was not significant (χ² = 11.95, d.f. = 8, P = 0.153) indicating adequate goodness-of-fit of the logistic regression models. The best supported model of winter roost occupancy by Rafinesque's big-eared bat, receiving 43% of the AICc weight, indicated that trees with low openings were less likely to be occupied (Table 2). The intercept-only model also received support and we discarded more complex models with less support from the data (Grueber et al., 2011), which left two models in the confidence set. In the composite model, every 1 m increase in height of the lowest opening made the odds of bat presence 1.05 times higher (90% confidence limits: 0.91-1.21). The top model was able to correctly identify 62% of winter roosts and 71% of unoccupied trees. Characteristics of Rafinesque's big-eared bat winter roosts largely overlapped those of summer roosts (n = 170) across eight study sites (Fig. 1). One difference was that trees with no elevated entrance were commonly used in summer but rarely used in winter (Fig. 1). In addition, winter roosts occurred in a narrower range of dbh and cavity volume sizes, with relatively small trees unused in winter. Winter roosts were also more likely to have a chimney opening and a rough interior. Summer roost data only from River Bend WMA (n = 15) confirmed many of these results, with significantly higher openings and more chimney openings on winter roosts (Table 3). The Hosmer-Lemeshow statistic was not significant (χ² = 7.28, d.f. = 8, P = 0.506) indicating adequate goodness-of-fit of the global logistic regression model. The best supported model distinguishing between winter and summer roosts, receiving 62% of the AICc weight, indicated that winter roosts had higher openings (Table 4). The composite model also included chimney opening as a predictor variable.
In the composite model, every 1 m increase in highest opening height made the odds of a roost being a winter roost 1.27 times higher (90% confidence limits: 0.97-1.66), while the presence of a chimney opening made the odds of a roost being a winter roost 1.54 times higher (90% confidence limits: 0.48-4.90). The top model identified 71% of winter roosts and 53% of summer roosts. In contrast, southeastern myotis winter roosts had smaller diameters, cavities, and openings than unoccupied trees (Table 1). Roost trees also lacked chimney openings and consisted of a sweetgum, a red maple, a water hickory (C. aquatica), an overcup oak (Q. lyrata), and an unidentifiable snag. Compared to summer roosts at eight field sites (n = 25), winter roosts generally had smaller diameters, heights, and cavity volumes, and had higher, but smaller openings (Fig. 2). Chimney trees were not used as roosts in either season. Southeastern myotis changed from using only water tupelo roosts in summer, to a variety of tree species, which did not include water tupelo, in winter. Considering summer roost tree data only from River Bend WMA (n = 2), dbh, cavity volume, and tree species differed between winter and summer roosts (Table 3). Rafinesque's big-eared bats and southeastern myotis use roosts that differ subtly in diameter and cavity dimensions during summer (Carver and Ashley, 2008; Stevenson, 2008; Rice, 2009). However, during winter, roosting bats face different selection pressures due to colder temperatures and reduced food resources (Speakman and Thomas, 2003), which may induce bats to select different types of roosts during colder periods (Boyles and Robbins, 2006; Hein et al., 2008). In cypress-gum swamps, bats must also deal with flood waters that can limit roost availability or trap roosting bats (Rice, 2009). Presumably as a result of these environmental factors interacting with different roosting preferences of the species, we observed species-specific seasonal patterns of roost tree selection. For Rafinesque's big-eared bat during winter, the primary difference between used and unused trees was that used trees were less likely to have low openings. Bats may have avoided low openings due to the risk of flooding. However, the effect on occupancy was weak, with only a 5% increase in the odds of occupancy in the composite model for every 1 m in height of the lowest opening. The weak effect may be because most trees had an elevated opening, reducing risks posed by flooding. Other measured features did not differ between used and unused trees. Similarly, no differences between used and unused trees during winter were identified in Mississippi bottomlands, although entrance height was not examined (Stevenson, 2008). The similarity of used and unused trees suggests that many of the large trees in the oxbow lakes provided adequate roosting conditions and some trees that were unused during the study period may be used at other times during the winter. Although there were few differences between used and unused trees on the oxbow lakes, the large (mean dbh = 106 cm) trees used as roosts are rare on the landscape (Stevenson, 2008), indicating that roost trees were not typical of all trees at our site. The floodplains surrounding the lakes were within the range of roosting bats (0-2.5 km), but our telemetry data indicated they were unused, suggesting that large, hollow water tupelo were strongly preferred to other tree types.
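As a quick reading aid (this arithmetic is not in the paper itself), odds ratios for a continuous predictor compound multiplicatively, so the 1.27-per-metre figure above implies, for example,

\[ \mathrm{OR}(\Delta x = 3\ \text{m}) = 1.27^{3} \approx 2.05, \]

i.e., under the composite model a roost whose highest opening is 3 m higher than another's has roughly twice the odds of being a winter roost, all else being equal.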
Winter roosts were distinguished from summer roosts by the highest opening height, rather than the lowest. Trees lacking elevated entrances were commonly used in summer, but rarely in winter. The use of roosts with higher openings could be due to flood waters obscuring basal openings or because trees with chimney openings provide a more favorable microclimate during winter (Rice, 2009). If trees without basal openings provide a superior microclimate in winter, we would expect bats to avoid trees with basal openings in winter, regardless of water levels, and to avoid trees without basal hollows during summer. However, Rafinesque's big-eared bat regularly uses trees without basal hollows during summer (Trousdale and Beckett, 2005; Clement and Castleberry, 2013a). Furthermore, bats use trees with basal openings during winter, when not flooded (Stevenson, 2008; Rice, 2009). We suggest that any shift, during winter, from trees with basal openings to elevated openings is primarily due to flood waters submerging basal openings and the attendant risk of being trapped. Another seasonal difference was that Rafinesque's big-eared bats avoided trees with rough cavity interiors in summer (Clement and Castleberry, 2013a), but were indifferent to interior surface in winter. During summer, smooth interior surfaces may be important as a barrier to snakes and other predators. Several bird species nest in trees with smooth bark that provide protection against snakes (Rudolph et al., 1990; Mullin and Cooper, 2002). During winter, however, snakes in the Coastal Plain are generally less active (e.g., Glaudas et al., 2007; Rudolph et al., 2007) and may pose less of a threat to roosting bats. If selection pressure exerted by snakes is reduced during winter, bats may not be constrained to select cavities with rough interiors for winter roosts. In contrast to Rafinesque's big eared bats, southeastern myotis winter roosts differed from available trees on the oxbow lakes and from summer roosts. Summer roosts in Georgia were large diameter hydrophilic trees (Clement, 2011) consistent with those used at other sites (Hofmann et al., 1999; Gooding and Langford, 2004; Carver and Ashley, 2008; Stevenson, 2008). However, winter roosts at our site were small hardwood trees that shared few characters with summer roosts. Our results contrast with other studies that found winter roosts similar to summer roosts, with some trees used in both seasons. In Louisiana, almost all summer roosts were also used in winter and roost tree diameter was similar in both seasons (Rice, 2009). In Mississippi, individual trees also were used in both seasons, and diameter increased from summer to winter (Stevenson, 2008). Although southeastern myotis roosts differ among seasons and locations, they never have chimney openings (Mirowsky and Homer, 1997; Hoffman, 1999; Gooding and Langford, 2004; Carver and Ashley, 2008; Stevenson, 2008; Rice, 2009; Clement, 2011). Southeastern myotis may prefer non chimney trees because reduced air flow provides a better microclimate (Stevenson, 2008; Rice, 2009). Although a chimney has a small effect on microclimate (Clement and Castleberry, 2013b), winter roosts differed in several other aspects known to affect microclimate, including tree size (Coombs et al., 2010), species (Kalcounis and Brigham, 1998), and health (Hosken, 1996). 
We expect that if microclimate were the primary factor in roost selection, at least some chimney trees would provide an acceptable microclimate, but the universal rejection of chimney trees suggests that another factor drives southeastern myotis roost selection. We hypothesize that the preference for non chimney trees is related to the roosting substrate. In contrast to Rafinesque's big-eared bats that usually roost on cavity walls, southeastern myotis always roosts on the cavity ceiling (Mirowsky and Homer, 1997; Carver and Ashley, 2008; Stevenson, 2008; Rice, 2009; Clement, 2011). Therefore, only non chimney trees provide a roosting substrate for southeastern myotis. Roosting on cavity ceilings may help southeastern myotis avoid predators, take flight, maintain a grip on the roosting substrate, or provide some other unknown benefit. The preference of southeastern myotis for non chimney trees likely explains the differences between winter and summer roost trees at our study site. The basal openings of nearly all large water tupelo that we located in summer were submerged and unavailable during winter. Therefore, southeastern myotis likely used the only available non chimney trees, even though they differed in size, species, and other characteristics. In other surveys that focused on times or locations with less flooding, summer roosts remained available in winter and roosts showed fewer seasonal differences (Stevenson, 2008; Rice, 2009). We suggest that southeastern myotis share a similar preference for large water tupelo with basal openings throughout the Coastal Plain, but they will use other non chimney trees if preferred trees are flooded. We identified seasonal differences in roost selection for Rafinesque's big-eared bat and southeastern myotis that are likely due to differences in roost entrances and preferred roosting substrate. Due to their reliance on non chimney trees, southeastern myotis changed roosts substantially in winter, moving from the lake to the floodplain, switching tree species, and using smaller trees. In contrast the ability of Rafinesque's big-eared bat to use roosts with chimney openings allowed them to use a subset of summer roosts and maintain a presence in a flooded oxbow lake in winter. Due to our small sample of southeastern myotis roosts, we likely did not capture the full variation of winter roosts. Nonetheless, we demonstrated that under some circumstances, southeastern myotis use smaller trees and cavities than reported elsewhere and that they may move from cypress-gum swamps to floodplains supporting a different suite of tree species. Accordingly, management that focuses solely on conserving summer roosting habitat may be adequate for Rafinesque's big-eared bat, but inadequate for southeastern myotis. Acknowledgments.--Funding was provided by the Georgia Department of Natural Resources Wildlife Resources Division and the Daniel B. Warnell School of Forestry and Natural Resources at the University of Georgia. The Georgia Department of Natural Resources provided access and housing at River Bend WMA. We thank C. Carpenter, V. Kinney, and C. Bland for field assistance. SUBMITTED 7 AUGUST 2012 ACCEPTED 7 FEBRUARY 2013 BATTLE, J. AND S. GOLLADAY. 2001. Hydroperiod influence on breakdown of leaf litter in cypress-gum wetlands. Am. Midl, Nat., 146:128-145. BOUMA, H. R., H. V. CAREY, AND F. G. M. KROESE. 2010. Hibernation: the immune system at rest. J Leukocyte Biol, 88:619-624. BOYLES, J., B. MORMANN, AND L. ROBBINS. 2005. 
Use of an underground winter roost by a male evening bat (Nycticeius humeralis). Southeast. Nat., 4:375-377. -- AND L. ROBBINS. 2006. Characteristics of summer and winter roost trees used by evening bats (Nycticeius huraeralis) in southwestern Missouri. Am. Midl. Nat., 155:210-220. BRIGHAM, R. M. 2007. Bats in forests: what we know and what we need to learn, p. 1-16. In: M.J. Lacki, J. P. Hayes, and A. Kurta (eds.). Bats in forests: conservation and management. Johns Hopkins University Press, Baltimore, Maryland. 329 p. BURNHAM, K. P. AND D. R. ANDERSON. 2002. Model selection and multimodel inference: a practical information-theoretic approach. 2nd ed. Springer-Verlag, New York. 488 p. CARVER, B. D. AND N. ASHLEY. 2008. Roost tree use by sympatric Rafinesque's big-eared bats (Corynorhinus rafinesquii) and southeastern myotis ( Myotis austroriparius). Am. Midl. Nat., 160:364-373. CLEMENT, M. J. 2011. Roosting ecology of Rafinesque's big-eared bat and southeastern myotis in the coastal plain of Georgia. Ph.D., Dissertation, University of Georgia, Athens. 185 p. -- AND S. B. CASTLEBERRY. 2013a. Summer tree roost selection by Rafinesque's big-eared bat. J. Wildl. Manag., 77:414-422. -- AND --. 2013b. Tree structure and cavity microclimate: implications for bats and birds. Int. J. Biometeorol, 57:437-450. COOMBS, A. B., J. BOWMAN, AND C.J. GARROWAY. 2010. Thermal properties of tree cavities during winter in a northern hardwood forest. J. Wildl. Manage., 74:1875-1881. COPE, J. B. AND S. R. HUMPHREY. 1977. Spring and autumn swarming behavior in Indiana bat, Myotis sodalis. J. Mammal., 58:93-95. CRYAN, P. M. AND J. P. VEILLEUX. 2007. Migration and use of autumn, winter, and spring roosts by tree bats, p. 153-176. In: N.J. Lacki, J. P. Hayes, and A. Kurta (eds.). Bats in forests: conservation and management. Johns Hopkins University Press, Baltimore, Maryland. 329 p. EFRON, B. 1983. Estimating the error rate of a prediction rule--improvement on cross-validation. J. Am. Stat. Assoc., 78:316-331. ESTOK, P., S. ZSEBOK, AND B. M. SIEMERS. 2010. Great tits search for, capture, kill and eat hibernating bats. Biol. Letters, 6:59-62. GLAUDAS, X., K M. ANDREWS, J. D. WILLSON, AND J. W. GIBBONS. 2007. Migration patterns in a population of cottonmouths (Agkistrodon piscivorus) inhabiting an isolated wetland. J. Zool., 271:119-124. GOODING, G. AND J. R. LANGFORD. 2004. Characteristics of tree roosts of Rafinesque's big-eared bat and southeastern bat in northeastern Louisiana. Southwest. Nat., 49:61-67. GRUEBER, C. E., S. NAKAGAWA, R. J. LAWS, AND I. G. JAMIESON. 2011. Multimodel inference in ecology and evolution: challenges and solutions. J. Evol. Biol., 24:699-711. HEIN, C. D., S. B. CASTLEBERRY, AND K. V. MILLER. 2008. Male Seminole bat winter roost-site selection in a managed forest. J. Wildl. Manage., 72:1756-1764. HOFFMAN, V. E., III. 1999. Roosting and relative abundance of the southeastern myotis, Myotis austroriparius, in a bottomland hardwood forest. M.Sc. Thesis, Arkansas State University, Jonesboro. 35 p. HOFMAN, J. E., J. E. GARDNER, J. K. KREJCA, AND J. D. GARNER. 1999. Summer records and a maternity roost of the southeastern myotis (Myotis austroriparius) in Illinois. Trans. Illinois State Acad. Sci., 92:95-107. HOSKEN, D.J. 1996. Roost selection by the lesser long-eared bat, Nyctophilus geoffroyi, and the greater longeared bat, N. major (Chiroptera: Vespertilionidae) in Banksia woodlands. J. R. Soc. West. Aust., 79:211-216. HOSMER, D. W. AND S. LEMESHOW. 2000. Applied logistic regression. 
2nd ed. Wiley, New York. 392 p. JEPSON, J. 2000. The tree climber's companion. 2nd ed. Access Publications, 104 p. JONES, C. 1977. Plecotus rafinesquii. Mammal. Spec., 69:1-4. -- AND R. W. MANNING. 1989. Myotis austroriparius. Mammal. Spec., 332:1-3. KALCOUNIS, M. C. AND R. M. BRIGHAM. 1998. Secondary use of aspen cavities by tree-roosting big brown bats. J. Wildl. Manage., 62:603-611. KEATING, K. A. AND S. CHERRY. 2004. Use and interpretation of logistic regression in habitat selection studies. J. Wildl. Manage., 68:774-789. KUNZ, T. H. AND L. F. LUMSDEN. 2003. Ecology of cavity and foliage roosting bats, p. 3-89. In: T. H. Kunz and M. B. Fenton (eds.). Bat ecology. University of Chicago Press, Chicago, Illinois. 798 p. LOEB, S. C. AND S.J. ZARNOCH. 2011. Seasonal and multiannual roost use by Rafinesque's big-eared bats in the coastal plain of South Carolina, p. 111-122. In: S. C. Loeb, N.J. Lacki, and D. A. Miller (eds.). Conservation and management of eastern big-eared bats. General Technical Report SRS-145 edition. U.S. Department of Agriculture, Forest Service, Southern Research Station, Asheville, North Carolina. 157 p. MIROWSKY, K. M. AND P. HORNER. 1997. Roosting ecology of two rare vespertilionid bats, the southeastern myotis and Rafinesque's big-eared bat, in east Texas. 1996 Annual Report (20 Jun 1997), Texas Parks and Wildlife Department, Resource Protection Division, Austin. 45 p. MORMANN, B. M. AND L. W. ROBBINS. 2007. Winter roosting ecology of eastern red bats in southwest Missouri. J. Wildl. Manage., 71:213-217. MULLIN, S.J. AND R.J. COOPER. 2002. Barking up the wrong tree: climbing performance of rat snakes and its implications for depredation of avian nests. Can. J. Zool., 80:591-595. NAGELKERKE, N.J.D. 1991. A note on a general definition of the coefficient of determination. Biometrika, 78:691-692. PERRY, R. W., D. A. SAUGEY, AND B. G. CRUMP. 2010. Winter roosting ecology of silver-haired bats in an Arkansas forest. Southeast. Nat., 9:563-572. R DEVELOPMENT COPE TEAM. 2010. R: a language and environment for statistical computing, ver. 2.11.1. R Foundation for Statistical Computing, Vienna, Austria. RICE, C. L. 2009. Roosting ecology of Corynorhinus rafinesquii (Rafinesque's big-eared bat) and Myotis austroriparius (southeastern myotis) in tree cavities found in a northeastern Louisiana bottomland hardwood forest streambed. M.Sc. Thesis, University of Louisiana at Monroe, Monroe. 124 p. RUDOLPH, D. C., H. KYLE, AND R. N. CONNER. 1990. Red-cockaded woodpeckers vs rat snakes: the effectiveness of the resin barrier. Wilson Bull., 102:14-22. --, R. R. SCHAEFER, S.J. BURGDORF, M. DURAN, AND R. N. CONNER. 2007. Pine snake (Pituophis ruthveni and Pituophis melanoleucus lodingi) hibernacula. J. Herpetol., 41:560-565. SPEAKMAN, J. R. AND D. W. THOMAS. 2003. Physiological ecology and energetics of bats, p. 430-492. In: T. H. Kunz and M. B. Fenton (eds.). Bat ecology. University of Chicago Press, Chicago, Illinois. 798 p. STEVENSON, C. L. 2008. Availability and seasonal use of diurnal roosts by Rafinesque's big-eared bat and southeastern myotis in bottomland hardwoods of Mississippi. M.Sc. Thesis, Mississippi State University, Starkville, Mississippi. 109 p. TROUSDALE, A. W. AND D. C. BECKETT. 2005. Characteristics of tree roosts of Rafmesque's big-eared bat (Corynorhinus rafinesquii) in southeastern Mississippi. Am. Midl. Nat., 154:442-449. MATTHEW J. CLEMENT (1,2) AND STEVEN B. CASTLEBERRY D.B. 
Warnell School of Forestry and Natural Resources, University of Georgia, Athens 30602
(1) Present address: USGS Patuxent Wildlife Research Center, Laurel, MD 20708
(2) Corresponding author: e-mail: firstname.lastname@example.org

TABLE 1.--Mean ± standard deviation of characteristics of unoccupied hollow trees, trees occupied by Rafinesque's big-eared bat (Corynorhinus rafinesquii), and trees occupied by southeastern myotis (Myotis austroriparius) at River Bend Wildlife Management Area, Georgia during winter 2010. P-values indicate results of t-tests or Fisher's exact tests compared to unoccupied trees

Unoccupied trees (n = 30), mean ± SD:
Tree height (m): 16.9 ± 5.6
dbh (cm): 117.8 ± 48.7
Cavity volume (L): 1,157 ± 1,509
Solid tree volume (L): 7,841 ± 11,554
Area of openings (m²): 0.22 ± 0.23
Widest opening (cm): 32.5 ± 11.9
Highest opening (m): 6.08 ± 5.16
Lowest opening (m): 1.73 ± 2.61
Total openings (no.): 2.23 ± 1.28
Live (Y/N): 0.97 ± 0.18
Tupelo tree (Y/N): 0.80 ± 0.41
Chimney (Y/N): 0.63 ± 0.49
Rough interior (Y/N): 0.50 ± 0.51

Rafinesque's big-eared bat roosts (n = 23), mean ± SD and P vs. unoccupied:
Tree height (m): 16.3 ± 4.1, P = 0.673
dbh (cm): 105.9 ± 34.8, P = 0.327
Cavity volume (L): 1,038 ± 604, P = 0.252
Solid tree volume (L): 4,341 ± 5,945, P = 0.613
Area of openings (m²): 0.19 ± 0.12, P = 0.623
Widest opening (cm): 36.4 ± 13.1, P = 0.256
Highest opening (m): 5.88 ± 2.70, P = 0.866
Lowest opening (m): 3.56 ± 2.50, P = 0.013
Total openings (no.): 2.26 ± 1.66, P = 0.946
Live (Y/N): 0.96 ± 0.21, P = 0.999
Tupelo tree (Y/N): 1.00 ± 0.00, P = 0.030
Chimney (Y/N): 0.83 ± 0.39, P = 0.140
Rough interior (Y/N): 0.41 ± 0.50, P = 0.577

Southeastern myotis roosts (n = 5), mean ± SD and P vs. unoccupied:
Tree height (m): 13.4 ± 8.0, P = 0.187
dbh (cm): 33.2 ± 18.5, P < 0.001
Cavity volume (L): 26.7 ± 55.3, P < 0.001
Solid tree volume (L): 1138 ± 1069, P = 0.007
Area of openings (m²): 0.03 ± 0.03, P = 0.050
Widest opening (cm): 13.9 ± 10.4, P = 0.001
Highest opening (m): 3.95 ± 4.73, P = 0.281
Lowest opening (m): 3.82 ± 4.85, P = 0.362
Total openings (no.): 1.20 ± 0.45, P = 0.107
Live (Y/N): 0.80 ± 0.45, P = 0.245
Tupelo tree (Y/N): 0.00 ± 0.00, P < 0.001
Chimney (Y/N): 0.00 ± 0.00, P = 0.003
Rough interior (Y/N): 0.33 ± 0.58, P = 0.999

TABLE 2.--Predictor variables, number of model parameters (K), Akaike's Information Criterion adjusted for small sample size (AICc), difference between model and top model (ΔAICc), model weight (w_i), and Nagelkerke's R² for logistic regression models of winter 2010 roost use by Rafinesque's big-eared bat (Corynorhinus rafinesquii) at River Bend Wildlife Management Area, Georgia

Variables            K   AICc    ΔAICc   w_i     R²
Lowest opening       2   66.15   0.00    0.434   0.131
Intercept-only       1   69.01   2.86    0.104   0.000
Cavity volume        2   69.78   3.63    0.071   0.038
Widest opening       2   69.94   3.79    0.065   0.034
Diameter             2   70.01   3.85    0.063   0.032
Chimney              2   70.10   3.95    0.060   0.029
Rough interior       2   70.49   4.34    0.050   0.019
Solid tree volume    2   70.92   4.76    0.040   0.007
Opening area         2   71.00   4.85    0.039   0.005
Highest opening      2   71.17   5.02    0.035   0.000
Height               2   71.19   5.03    0.035   0.000
Global               7   75.05   8.90    0.005   0.216

TABLE 3.--Mean ± standard deviation of roost characteristics for Rafinesque's big-eared bat (Corynorhinus rafinesquii) and southeastern myotis (Myotis austroriparius) at River Bend Wildlife Management Area, Georgia.
Winter roosts were measured in summer 2011 and summer roosts were measured in summer 2008 (Clement, 2011). P-values indicate results of t-tests or Fisher's exact tests

Rafinesque's big-eared bat
Variable                 Winter (n = 23)    Summer (n = 15)    P
Tree height (m)          18.7 ± 3.9         17.7 ± 6.2         0.571
dbh (cm)                 147.5 ± 29.8       142.9 ± 34.3       0.670
Cavity volume (L)        3,229 ± 1,301      2,773 ± 1,486      0.267
Solid tree volume (L)    8,037 ± 5,474      8,767 ± 8,554      0.946
Area of openings (m²)    0.37 ± 0.36        0.20 ± 0.17        0.101
Widest opening (cm)      46.0 ± 26.8        32.4 ± 18.3        0.098
Highest opening (m)      7.51 ± 2.74        3.97 ± 3.95        0.003
Lowest opening (m)       1.96 ± 3.35        0.99 ± 2.00        0.328
Total openings (no.)     3.52 ± 2.04        2.93 ± 2.34        0.427
Live (Y/N)               0.95 ± 0.22        0.93 ± 0.26        0.999
Tupelo tree (Y/N)        1.00 ± 0.00        0.93 ± 0.26        0.417
Chimney (Y/N)            0.81 ± 0.40        0.40 ± 0.51        0.017
Rough interior (Y/N)     0.38 ± 0.50        0.13 ± 0.35        0.142

Southeastern myotis
Variable                 Winter (n = 5)     Summer             P
Tree height (m)          13.4 ± 8.0         18.5 ± 2.1         0.436
dbh (cm)                 33.2 ± 18.5        111.4 ± 31.8       0.008
Cavity volume (L)        26.7 ± 55.3        2,242 ± 1,922      0.022
Solid tree volume (L)    1,138 ± 1,069      4,000 ± 1,048      0.182
Area of openings (m²)    0.03 ± 0.03        0.09 ± 0.02        0.082
Widest opening (cm)      13.9 ± 10.4        19.0 ± 8.5         0.574
Highest opening (m)      3.95 ± 4.73        0.05 ± 0.07        0.321
Lowest opening (m)       3.95 ± 4.73        0.05 ± 0.07        0.321
Total openings (no.)     1.20 ± 0.45        2.00 ± 0.00        0.062
Live (Y/N)               0.80 ± 0.45        1.00 ± 0.00        0.999
Tupelo tree (Y/N)        0.00 ± 0.00        1.00 ± 0.00        0.048
Chimney (Y/N)            0.00 ± 0.00        0.00 ± 0.00        0.999
Rough interior (Y/N)     0.33 ± 0.59        0.00 ± 0.00        0.999

TABLE 4.--Predictor variables, number of model parameters (K), Akaike's Information Criterion adjusted for small sample size (AICc), difference between model and top model (ΔAICc), model weight (w_i), and Nagelkerke's R² for logistic regression models distinguishing winter and summer roost use by Rafinesque's big-eared bat (Corynorhinus rafinesquii) at River Bend Wildlife Management Area, Georgia, 2007-2010

Variables            K   AICc    ΔAICc   w_i     R²
Highest opening      2   44.42   0.00    0.620   0.293
Chimney              2   46.81   2.39    0.187   0.221
Opening area         2   49.69   5.27    0.044   0.127
Widest opening       2   49.89   5.47    0.040   0.121
Rough interior       2   50.42   5.99    0.031   0.102
Intercept-only       1   51.02   6.60    0.023   0.000
Cavity volume        2   51.97   7.55    0.014   0.048
Low                  2   52.18   7.76    0.012   0.040
Height               2   52.92   8.50    0.009   0.013
Diameter             2   53.07   8.65    0.008   0.007
Solid tree volume    2   53.26   8.84    0.007   0.000
Global               7   55.32   10.90   0.003   0.370

Author: Clement, Matthew J.; Castleberry, Steven B. Publication: The American Midland Naturalist. Date: Jul 1, 2013.
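For readers unfamiliar with the model-selection columns in Tables 2 and 4: ΔAICc is each model's AICc minus the smallest AICc among the candidates, and the Akaike weight w_i is exp(-ΔAICc/2) normalized over all candidate models. The minimal sketch below is not the authors' code (their analysis was run in R, per the reference list); it simply recomputes the ΔAICc and weight columns from the AICc values reported in Table 2.

```python
import math

# AICc values copied from Table 2 (winter roost-use models, C. rafinesquii).
aicc = {
    "Lowest opening": 66.15, "Intercept-only": 69.01, "Cavity volume": 69.78,
    "Widest opening": 69.94, "Diameter": 70.01, "Chimney": 70.10,
    "Rough interior": 70.49, "Solid tree volume": 70.92, "Opening area": 71.00,
    "Highest opening": 71.17, "Height": 71.19, "Global": 75.05,
}

best = min(aicc.values())
delta = {m: a - best for m, a in aicc.items()}               # delta-AICc column
rel_lik = {m: math.exp(-d / 2.0) for m, d in delta.items()}  # relative likelihoods
total = sum(rel_lik.values())
weights = {m: rl / total for m, rl in rel_lik.items()}       # Akaike weights (w_i)

for model in sorted(aicc, key=aicc.get):
    print(f"{model:18s}  dAICc = {delta[model]:5.2f}  w = {weights[model]:.3f}")
```

Running this reproduces the published ΔAICc and w_i values to within rounding, which is a quick sanity check that the table's weights were normalized over all twelve candidate models.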
Social psychology is a fascinating field that studies how people’s thoughts, feelings, and behaviors are influenced by the presence of others. Over the years, researchers have conducted various experiments to gain insights into different aspects of social psychology. In this article, we will discuss some of the most famous social psychology experiments that have shaped our understanding of human behavior. The Asch Conformity Experiment One of the most well-known experiments in social psychology is the Asch conformity experiment. Conducted by psychologist Solomon Asch in the 1950s, this experiment aimed to understand how people conform to group norms. In this study, participants were shown a line and asked to match it with one of three comparison lines. However, all but one participant in each group were confederates who purposely gave incorrect answers. The results showed that participants often conformed to the incorrect answers given by the confederates, even when it went against their own judgment. This experiment demonstrated the power of social pressure and conformity. The Milgram Obedience Experiment Another famous experiment in social psychology is the Milgram obedience experiment conducted by Stanley Milgram in 1963. This study aimed to understand how far individuals would go when ordered to do something by an authority figure. Participants were asked to deliver electric shocks to another person (who was actually a confederate) whenever they answered a question incorrectly. The shocks increased in intensity with each wrong answer. Despite hearing screams and protests from the supposed victim, most participants continued delivering shocks up to 450 volts (which was labeled as “XXX” on the machine). This experiment highlighted how easily individuals can be influenced by authority figures. The Zimbardo Stanford Prison Experiment The Zimbardo Stanford Prison Experiment conducted by Philip Zimbardo in 1971 aimed at studying how people’s behavior changes when placed in positions of power or subordination. In this study, participants were randomly assigned the roles of prisoners or guards in a simulated prison environment. The guards quickly began to abuse their power and mistreat the prisoners, while the prisoners became passive and obedient. The experiment was terminated after just six days due to the extreme nature of the behaviors displayed by both groups. This study demonstrated how individuals can quickly adapt and conform to social roles assigned to them. The Robber’s Cave Experiment The Robber’s Cave Experiment conducted by Muzafer Sherif in 1954 aimed at understanding intergroup conflict and prejudice. In this study, two groups of boys were brought together at a summer camp but kept apart from each other. The researchers then created situations that would cause competition between the groups, such as sports tournaments. As a result, the boys began showing signs of hostility towards each other. However, when given tasks that required both groups to work together, such as fixing a water supply issue, the hostility dissipated. This experiment demonstrated how competition can lead to intergroup conflict but also showed that cooperation can help reduce prejudice and promote positive intergroup relations. These are just a few examples of some of the most famous social psychology experiments that have helped shape our understanding of human behavior. 
By analyzing these experiments and their results, we can gain insights into how individuals behave in certain situations and how social norms influence our thoughts and actions.
Edublox Online Tutor (EOT) houses several multisensory cognitive training programs to enhance cognitive skills such as attention, visual memory, auditory memory, and logical thinking. EOT also provides a free online assessment to measure several cognitive skills, specifically visual sequential memory, auditory memory, iconic memory, and logical reasoning. Sixty-four Grade 2 students of an inner-city school took the test, after which their test scores were correlated with their academic grades using the Pearson Correlation.

What are cognitive skills?
The word "cognition" is defined as "the act or process of knowing". Cognitive skills, therefore, refer to those skills that make it possible for us to know. They have more to do with the mechanisms of how we learn than with any actual knowledge. Cognitive skills include perception, attention, memory, and logical reasoning.

Sensation is the pickup of information by our sensory receptors, for example, the eyes, ears, skin, nostrils, and tongue. In vision, sensation occurs as rays of light are collected by the two eyes and focused on the retina. In hearing, sensation occurs as waves of pulsating air are collected by the outer ear and transmitted through the bones of the middle ear to the cochlear nerve. On the other hand, perception (also called processing) is the interpretation of what is sensed. The physical events transmitted to the retina may be interpreted as a particular color, pattern, or shape. The physical events picked up by the ear may be interpreted as musical sounds, a human voice, noise, and so forth. In essence, then, perception means interpretation. Visual perception refers to the brain's ability to make sense of what the eyes see, while auditory perception is the ability to identify, interpret, and attach meaning to sound. A lack of experience may cause a person to misinterpret what he has seen or heard. In other words, perception represents our apprehension of a present situation in terms of our past experiences, or, as stated by the philosopher Immanuel Kant in Critique of Pure Reason (1781): "We see things not as they are but as we are."

The process of perception is very much affected by attention, a phenomenon that involves filtering incoming stimuli. Human beings do not pay attention to everything in their environments, nor attend to all the stimuli impinging on their sense organs. Rather than becoming overwhelmed by the enormous complexity of the physical world, we attend to some stimuli and do not notice others. William James (1842-1910) recognized the importance of attention very early. "A thing may be present to a man a hundred times, but if he persistently fails to notice it, it cannot be said to enter his experience," he wrote in his book Psychology: The Briefer Course. Attention can be divided into focused, sustained, and divided attention. Focused attention enables one to stay focused on a task despite distractions and sustained attention to stay focused for a sustained period of time. Divided attention is a higher-level skill where one has to perform two (or more) tasks simultaneously, and attention is required to perform both (or all) the tasks.

Memory is the process by which knowledge is encoded, stored, and later retrieved. Although the word memory may conjure up an image of a singular, "all-or-none" process, it is clear that there are actually many kinds of memory, each of which may be somewhat independent of the others. The distinction between short-term memory and working memory is an ongoing debate, and the terms are often used interchangeably. Some scholars claim that some kind of manipulation of remembered information is needed to qualify the task as one of working memory. Repeating digits in the same order they were presented would thus be a short-term memory task, while repeating them backward would be a working memory task. Another viewpoint is that of Nelson Cowan, who says short-term memory refers to the passive storage of information when rehearsal is prevented, with a storage capacity of around four items. When rehearsal is allowed and controlled attention is involved, it is a working memory task, and the capacity is closer to seven items.

When it comes to memory, one's senses are involved too. Visual memory involves storing and retrieving previously experienced visual sensations and perceptions when the stimuli that initially evoked them are no longer present. Various researchers have stated that as much as eighty percent of all learning occurs through the eye – with visual memory as a crucial aspect of learning. Auditory memory, on the other hand, involves being able to take in information that is presented orally, process that information, store it in one's mind, and then recall what one has heard. Basically, it involves the skills of attending, listening, processing, storing, and recalling.

Sequential memory requires items to be recalled in a specific order. In saying the days of the week, months of the year, a telephone number, the alphabet, and in counting, the order of the elements is of paramount importance. Visual sequential memory is the ability to remember things seen in sequence, while auditory sequential memory is the ability to remember things heard in sequence.

Sensory memory is the shortest-term element of memory. It is the ability to retain impressions of sensory information after the original stimuli have ended. It acts as a kind of buffer for stimuli received through the five senses of sight, hearing, smell, taste and touch, which are retained accurately, but very briefly. For example, the ability to look at something and remember what it looked like with just a second of observation is an example of sensory memory. The sensory memory for visual stimuli is sometimes known as the iconic memory, the memory for aural stimuli is known as the echoic memory, and that for touch as the haptic memory.
Logical reasoning is the process of using a rational, systematic series of steps based on sound mathematical procedures and given statements to arrive at a conclusion. In logic, there are two broad methods of reaching a conclusion, deductive reasoning and inductive reasoning. Deduction begins with a broad truth (the major premise), such as the statement that all men are mortal. This is followed by the minor premise, a more specific statement, such as that Socrates is a man. A conclusion follows: Socrates is mortal. If the major premise is true and the minor premise is true, the conclusion cannot be false. In inductive reasoning, broad conclusions are drawn from specific observations; data leads to conclusions. If the data shows a tangible pattern, it will support a hypothesis. For example, having seen ten white swans, we could use inductive reasoning to conclude that all swans are white. This hypothesis is easier to disprove than to prove, and the premises are not necessarily true. Still, they are true given the existing evidence and given that one cannot find a situation in which it is not true. EOT uses colors placed in sequences to develop inductive reasoning skills. The student will, for example, have to add four colors to complete the sequence below: The importance of strong cognitive skills Many studies over many decades have shown that cognitive skills determine an individual’s learning ability, according to Oxfordlearning.com, the skills that “separate the good learners from the so-so learners.” When cognitive skills are strong, learning is fast and easy. When cognitive skills are weak, learning becomes a challenge. Auditory memory crucial for literacy Research has confirmed that auditory memory plays a crucial role in literacy: It is one area of auditory processing that directly impacts reading, spelling, writing, and math skills. Kurdek and team measured auditory memory in kindergarteners and found readiness in auditory memory predicted later reading and mathematics achievement in fourth grade. Children with poor auditory memory skills may struggle to recognize sounds and match them to letters – a common symptom of a reading disability or dyslexia. Research by Plaza et al. found that dyslexic children exhibited a significant deficit in tasks involving auditory memory skills (digit span, unfamiliar word repetition, sentence repetition) compared with their age-mates. Howes et al. compared 24 readers with auditory dyslexia and 21 with visual dyslexia to 90 control group participants and revealed auditory sequential memory impairments for both types of readers with dyslexia and multiple strengths for good readers. Visual memory critical for math Research has also confirmed that visual memory, often considered a subset of visual perception rather than a separate skill, plays a crucial role in literacy, especially maths. One hundred seventy-one children with a mean age of 10.08 years participated in a study by Marjean Kulp et al. The study, conducted at the Ohio State University College of Optometry, was designed to determine whether or not performance on visual perception tests could predict the children with poor current achievement in mathematics. Controls for age and verbal cognitive ability were included in all regression analyses because the failure to control for verbal cognitive ability/intelligence has been a criticism of some literature investigating the relation between visual perception and academic skills. 
Scholars have argued that a relation between visual perception – a nonverbal cognitive skill – and math achievement is merely due to the confounding effects of verbal cognitive ability/intelligence. Kulp et al. concluded: “Poor visual perceptual ability is significantly related to poor achievement in mathematics, even when controlling for verbal cognitive ability. Therefore, visual perceptual ability, and particularly visual memory, should be considered to be amongst the skills that are significantly related to mathematics achievement.” Another investigation of the relation between visual memory and academics was performed in 155 second- through fourth-grade children; the results were published in the journal Optometry and Vision Science. Visual memory ability was assessed with the Test of Visual Perceptual Skills visual memory subtest. The school administered the Otis-Lennon School Ability Test and Stanford Achievement Test. Age and verbal ability were controlled in all regression analyses. The researchers concluded that poor visual memory ability is significantly related to below-average reading decoding, maths, and overall academic achievement (as measured by the Stanford Achievement Test) in second- through fourth-grade children. Visual sequential memory linked to reading Guthrie and team investigated relations between visual sequential memory and reading in 81 typical and 43 disabled readers. The children had normal intelligence and a mean reading grade of 2.5. The mean chronological age of the typical readers was 8.5 years, and the mean of the reading disabled 10.3. Partial correlations between three tests of visual sequential memory and three tests of reading were computed. Significant, positive associations were identified between visual sequential memory and paragraph comprehension, oral reading, and word recognition. A study by Stanley et al. compared 33 dyslexic and 33 control eight- to 12-year-old children and found the dyslexic children inferior to the controls on tasks involving visual sequential memory and auditory sequential memory. Iconic memory related to reading ability Thirty-six 9‐year‐old children were given a test of image persistence in visual sensory (iconic) memory and the Neale Analysis of Reading Ability. In the iconic memory test, the subjects viewed a white disc in a tachistoscope and were required to state whether or not the disc disappeared for a short interval which ranged from 10 ms to 800 ms. The shortest disappearance perceived was taken as a measure of icon persistence. The reading test gave scores for fluency, accuracy, and comprehension. All three measures of reading performance were found to be significantly related to icon persistence. Short and long image persistence resulted in a reading age on the accuracy score that was, on average, 1.75 years below that for moderate persistence. Reasoning tied to academic achievement A study conducted in India by Bhat examined the contribution of six components of reasoning ability (inductive reasoning, deductive reasoning, linear reasoning, conditional reasoning, cause-and-effect reasoning, and analogical reasoning) to explain the variation in academic achievement of 598 class 10th students. The predictive power of various components of reasoning ability for academic achievement was 31.5%. 
Out of the six dimensions of reasoning ability, the maximum involvement was reflected by deductive reasoning (with a reliability coefficient of .49), followed by cause and effect reasoning (.26), inductive reasoning (.16), linear reasoning (.05), conditional reasoning (.03) and analogical reasoning (.02) on academic achievement. Each of our cognitive skills plays an essential part in processing new information. That means if even one of these skills is weak, no matter what kind of information is coming one’s way, grasping, retaining, or using that information is impacted. In fact, most learning challenges are caused by one or more weak cognitive skills. Edublox and cognitive skills Edublox started life in 1979 as a school readiness program with only three cognitive exercises and has since been tried and tested by more than 150,000 children and adults in approximately 40 countries through home-based kits, programs, and learning clinics. Although the developers caution that the Edublox system requires long-term use, Edublox programs have shown improvement in cognitive skills over a short period of time. A one-week Edublox program was presented in Singapore to 27 learners, ages 10 to 12; the control group comprised 25 students. The Center for Evaluation and Assessment in the Faculty of Education at UP analyzed the pre-and post-test results. The results of the study showed a significant improvement in focused attention. A 2013 study by Dr. Jaidan Mays, an M.Tech student at the University of Johannesburg, found a significant improvement in visual memory – from 6.2 to 7.5 years following an intensive one-week Edublox program of 22.5 hours. Visual memory ability was assessed with the Test of Visual Perceptual Skills visual memory subtest. Results of a post hoc research study at the University of Pretoria (UP) by Naseehat Dawood as part of her master’s degree in research psychology under the guidance of Professor David Maree, former Head of the Department of Psychology at UP, show that EOT improves auditory memory. Sixty-four Grade 2 students of an inner-city school in Pretoria were randomly divided into three groups: the first group completed 28 hours of EOT’s Development Tutor over three weeks; a second group was exposed to standard computer games, while a third group continued with schoolwork. Findings suggest that exposure to the Edublox program significantly improved the post-test scores in the processing speed domain. The same 64 students took the EOT assessment at the start of the study, after which their assessment scores were correlated with their academic grades by SPSS software using the Pearson Correlation test. As in the study by Bhat, contrary to the common practice of using achievement tests to measure academic achievement, real-life academic grades were used as a measure of academic achievement. EOT assessment and academic achievement The EOT assessment consists of five subtests: the Visual Sequential Memory test assesses visual sequential memory, the Auditory Memory test auditory memory, the Eye Span test visual sensory (iconic) memory, the Logical Thinking test logical reasoning, and the Reading test reading age. The Reading test was not applied in this study. Academic grades in South African schools are scaled from 1-7, 7 being “outstanding achievement” and 1 being “not achieved.”. SCALE: 7. Outstanding achievement – 80 – 100% 6. Meritorious achievement – 70 – 79% 5. Substantial achievement – 60 – 69% 4. Adequate achievement – 50 – 59% 3. Moderate achievement – 40 – 49% 2. 
Elementary achievement – 30–39%; 1. Not achieved – 0–29%.
The 64 students' Term 4 consolidated academic grades were used in this study; this is a score that combines a student's academic achievement across all four school terms. The academic grades of their four subjects (English language, second language, math, and life skills) were added together to derive a Total Academic Score. The students' scores on the Auditory Memory subtest correlated significantly with their total academic grades (0.01 level), English language grades (0.05 level), and math grades (0.01 level). The students' scores on the Visual Sequential Memory subtest correlated significantly with their math grades (0.01 level) and their scores on the Logical Thinking subtest with their English language grades (0.05 level). In contrast, the Eye Span subtest that assesses iconic memory did not correlate with any school subject.

Summary and conclusion
This study confirms the importance of strong cognitive skills for academic achievement; the cognitive skill with the strongest correlation was auditory memory, which correlated significantly with language and math, as well as total academic grades. The Eye Span subtest, which measures iconic memory, most likely did not correlate with any school subject because iconic memory is a cognitive skill required mainly for reading, and none of the school subjects rely exclusively on reading. English language grades, for example, are also dependent on spelling, writing, handwriting, and oral ability. The same type of reasoning probably applies to the Visual Sequential Memory subtest, which correlated with math grades but not with English language as a whole. The Logical Thinking subtest correlated with the students' English language scores but not their math scores. We predict that a correlation between this subtest and math scores will become more apparent in the higher school levels, as stronger problem-solving abilities become a growing requirement over and above basic mathematical skills and knowledge. More research, however, will be required to confirm this theory.

Edublox offers cognitive training and live online tutoring to students with dyslexia, dysgraphia, dyscalculia, and other learning disabilities. Our students are in the United States, Canada, Australia, and elsewhere. Book a free consultation to discuss your child's learning needs.

"A quasi-experimental control group evaluation of the Edublox implementation in Singapore in June 2014." Unpublished project. Center for Evaluation and Assessment, University of Pretoria. August 2017.
Bhat, MA. "The predictive power of reasoning ability on academic achievement." International Journal of Learning, Teaching and Educational Research. January 2016, 15(1).
Cowan, N. "The magical number 4 in short-term memory: A reconsideration of mental storage capacity." Behavioral and Brain Sciences. 2001, 24.
Guthrie JT, Goldberg HK. "Visual sequential memory in reading disability." Journal of Learning Disabilities. January 1972.
Howes NL, Bigler ED, Lawson JS, Burlingame GM. "Reading disability subtypes and the test of memory and learning." Archives of Clinical Neuropsychology. April 1999, 14(3): 317-339.
James W. Psychology: The Briefer Course. Harper, 1961.
Kulp MT, Edwards KE, Mitchell GL. "Is visual memory predictive of below-average academic achievement in second through fourth graders?" Optometry and Vision Science. July 2002, 79(7): 431-4.
Kulp MT et al. "Are visual perceptual skills related to mathematics ability in second through sixth grade children?" Focus on Learning Problems in Mathematics. 2004, 26(4): 44-51.
Kurdek LA, Sinclair RJ. "Predicting reading and mathematics achievement in fourth-grade children from kindergarten readiness scores." Journal of Educational Psychology. September 2001, 93(3): 451-455.
Mays JL. Effects of Edublox Training versus Edublox Training Combined with Cervical Spinal Manipulative Therapy on Visual Memory and Visual Sequential Memory. M.Tech. thesis, University of Johannesburg. 2013.
Plaza M, Cohen H, Chevrie-Muller C. "Oral language deficits in dyslexic children: weaknesses in working memory and verbal planning." Brain and Cognition. March 2002, 48(2-3): 505-512.
Riding RJ, Pugh JC. "Iconic memory and reading performance in nine-year-old children." British Journal of Educational Psychology. June 1977, 47(2).
Stanley G, Kaplan I, Poole C. "Cognitive and nonverbal perceptual processing in dyslexics." Journal of General Psychology. 1975, 93(1): 67-72.
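The statistical step described above is a straightforward bivariate analysis: each subtest score is correlated with a subject grade, or with the Total Academic Score, using Pearson's r. The study ran this in SPSS on the students' real records; the sketch below only illustrates the shape of that computation with made-up scores for a handful of hypothetical students, not the study's data or code.

```python
import numpy as np
from scipy import stats

# Illustrative only: invented scores, not the study's records.
# Each row is one student; grade columns are English, second language, math, life skills (1-7 scale).
auditory_memory = np.array([12, 18, 9, 15, 21, 14, 17, 11])
grades = np.array([
    [4, 4, 3, 5], [6, 5, 6, 6], [3, 3, 2, 4], [5, 4, 5, 5],
    [7, 6, 6, 7], [4, 5, 4, 5], [5, 5, 6, 6], [3, 4, 3, 4],
])

# Total Academic Score = sum of the four subject grades, as in the study.
total_academic = grades.sum(axis=1)

r_total, p_total = stats.pearsonr(auditory_memory, total_academic)
r_math, p_math = stats.pearsonr(auditory_memory, grades[:, 2])

print(f"Auditory memory vs. total academic score: r = {r_total:.2f}, p = {p_total:.3f}")
print(f"Auditory memory vs. math grade:           r = {r_math:.2f}, p = {p_math:.3f}")
```

With real data one would repeat this for every subtest-subject pair, and with many pairs it is prudent to adjust the significance threshold for multiple comparisons before interpreting individual p-values.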
What are executive functions?
The term executive function describes a set of skills that reside in the prefrontal cortex of the brain. These cognitive functions help us to plan and organise our responses, behaviour and emotions. They include many of the skills that underpin learning and enable children and adolescents to function with a reasonable degree of independence. For example, our executive function skills enable us to keep track of time, stay on task, make plans, be flexible when things change and control our impulses. The development of executive skills is crucial for successful learning and relationships, and these are foundational skills for later life and work. Executive function skills continue to develop until our mid-twenties. Childhood and adolescence present an opportunity to embed strong skills early on, but we can continue to work on these skills throughout our lives.

What do executive function challenges look like?
People who have executive function challenges often have trouble getting started on tasks, get distracted easily, struggle with organisation, planning and prioritisation, and have poor working memory and cognitive flexibility. At school or university this may affect their academic work and revision for exams, lead to homework not being done or handed in, or make transitions difficult. In the working world, it may present as putting tasks off until the last minute, poor time management, failing to plan how long something will take and therefore not finishing it, weak short-term working memory, or difficulty transitioning between tasks. These people are often considered chronic underachievers; they are at risk of academic failure, are likely to have a poor employment record, and may have emotional and behavioural difficulties as well. They are often labelled as lazy. The good news is that, due to the malleable nature of the brain's neural pathways, we now know that these executive function challenges are not fixed and that we can make changes to the environment to support children and young people to strengthen their executive function skills. Poor executive functioning can also be a hallmark of neurodiverse profiles such as ADHD, autism, dyslexia and dyspraxia.

Why is neuroplasticity so important to executive function development?
Encouragingly, our brains are not fully developed until at least our mid-twenties, giving plenty of scope to support and help young people. Neuroplasticity means that when skills and strategies are taught to overcome executive function challenges, especially to children and young people, the neural connections in the brain are rewired and strengthened.
In 2008 several zoological studies provided new insights into how species’ life-history traits (such as the timing of reproduction or the length of life of adult individuals) are derived in part as responses to environmental vagaries. The findings had implications for both short- and long-term evolutionary responses of animals to global climate change, harsh natural environments, and infectious disease. Anne Charmantier of the University of Oxford and colleagues reported on their examination of the behavioral adjustments of a wild-bird population of great tits (Parus major) that had been studied since 1961. The long-term data set included information on seasonal temperature changes, the timing of the emergence of a vital prey (larvae of the European winter moth, Operophtera brumata) for the birds’ young, and the reproductive success of the bird population. By 2008 the average date on which the female birds laid eggs had shifted to about two weeks earlier than in the 1970s, a gradual change that tracked an increase in the environmental temperatures that preceded egg laying over the same time period. The timing of peak abundance of winter-moth larvae had also shifted in response to environmental temperatures. In order for the birds to capitalize on the availability of this key prey for their young, the females had to adjust when they laid eggs each year, since the optimal time changed annually in response to early spring temperatures. On the basis of analyses of the annual timing of the birds’ egg laying and rearing of young in response to environmental temperature fluctuations, the investigators concluded that the population responded successfully to regional climate change by adaptive phenotypic plasticity of individual birds rather than by a genetically based response. Curtis A. Deutsch, Joshua J. Tewksbury, and Raymond B. Huey of the University of Washington at Seattle and colleagues constructed thermal performance curves for terrestrial insects from around the world through the use of a global data set that related population growth rates of insects to environmental temperatures. The investigators then used the performance curves to predict the direct impact that rising environmental temperatures might have on insect fitness at different latitudes. Even though greater increases in environmental temperatures were expected in temperate regions, the smaller warming in tropical regions was predicted to have greater impact on insects because tropical species lived at close to their optimal temperature and had limited capacity to adjust to change. Species living at temperate latitudes generally operated at conditions appreciably cooler than their optimal temperature, a situation in which an increase in temperatures might enhance fitness. One conclusion from the analyses was that the greatest risk of extinction from global warming would occur in species living in the world’s regions of greatest biological diversity, the tropics. Among living tetrapods—amphibians, reptiles, birds, and mammals—virtually all species live one year or more after they are hatched or born, and females typically reproduce several times in their lifetime. In a dry desert region of Madagascar, Kristopher B. Karsten of Oklahoma State University and colleagues discovered an unusual chameleon that lived most of its life in the egg stage and whose females reproduced only once in their lifetime. The investigators found that all individuals of the chameleon, Furcifer labordi, were the same age. 
The entire population hatched from eggs in November. They mated about two months later, and after the females laid their eggs, both sexes became senescent. The adults died within five months of hatching—the shortest postembryonic life span ever reported for a tetrapod. The entire species then persisted for at least six months each year solely in the egg stage. It was uncertain how such an unusual life-history pattern might have evolved, but presumably it was one strategy for a species that lived in an extremely harsh and unpredictable seasonal environment where high adult mortality led to the evolution of shorter life spans. The confirmation that some chameleons were naturally short-lived had important implications to conservation programs that held animals in captivity to form groups known as assurance colonies for later release into the wild. Menna E. Jones of the University of Tasmania and colleagues investigated changes in the life-history traits of populations of the Tasmanian devil (Sarcophilus harrisii), a carnivorous marsupial endemic to Tasmania. Tasmanian devil populations were being devastated by a contagious cancer called devil facial tumour disease (DFTD). The disease produced large tumours around the head and mouth that interfered with eating and invariably led to death within a few months. Researchers first noted DFTD among Tasmanian devils in 1996. By 2007 it was present in at least one-half of the populations of the species, and some infected populations had declined by about 90%. Susceptibility to DFTD was believed to be a consequence of low diversity in the genes that facilitated the animal's immune responses to tumours, and the spread of the infection was promoted by the physically aggressive biting behaviour among individuals during the mating season. The investigators examined demographic data of Tasmanian devil populations from five locations before and after the appearance of the disorder, and they determined that the proportion of animals that were more than three years old in a given population was greater before than after the onset of the disease. Also, in most populations before the onset of the disease, a majority of females produced several litters between ages two and four, and no females bred before then. After DFTD became prevalent, the number of females that bred early increased by 16 times on average. Despite an unprecedented shift by most females in the population to begin breeding at significantly earlier ages, the spectre of extinction of Tasmanian devils continued to be a major conservation concern. Plans to save the species included developing a vaccine against DFTD, keeping healthy Tasmanian devils in zoos and breeding programs under quarantine, and building fences to protect healthy populations in the wild from infected animals. Many animals communicate with others of their species for reproduction, and the challenges in such communication range from situations in which being too quiet is ineffective to situations in which being too loud can be dangerous. A study by Ryo Nakano of the University of Tokyo and Takuma Takanashi of the Forestry and Forest Products Research Institute, Tsukuba, Japan, and colleagues in Japan and Denmark reported on a moth that produced ultrasonic sounds during courtship. The male Asian corn borer moth, Ostrinia furnacalis, directed the low-intensity sounds toward a nearby female.
Predators or other males that might compete for the same mate could not detect the quiet sound. Yet the nearby female could hear the courtship sounds, which enhanced the male’s opportunity for mating. The investigators determined that the male produced the sound by rubbing specialized scales on the wings against the thorax. Further investigation revealed that production of low-intensity ultrasonic sounds during courtship was common among a variety of species in other families of moths. Jun-Xian Shen of the Chinese Academy of Sciences, Beijing, and colleagues discovered another type of ultrasonic communication—in an amphibian. During ovulation female Chinese torrent frogs, Odorrana tormota, produced ultrasonic sounds that signaled when they were ready to mate. After ovulation, the females did not produce the call. The males gave advertisement calls during the mating season, but the female calls were distinctive in having a higher frequency and shorter duration. The call of the female informed males that she was ready to mate and indicated her location in a densely forested habitat. Male torrent frogs had a hyperacute ability to detect the call amid high ambient noise levels created by stream waters and to determine the female’s location precisely. The production of high-frequency sounds by females and the males’ ability to pinpoint their source were most likely adaptations for communicating in the noisy habitat of torrential streams. One of the oddest vertebrates is the platypus (Ornithorhynchus anatinus), a type of mammal called a monotreme. Platypuses lay eggs like reptiles and birds but have fur and feed their young milk produced from lactate glands with no nipple. Other unusual features of the platypus include the presence of a bill with electrosensory pits, the absence of teeth in adults, and—in males—the production of venom, which they apply through spurs on the hind feet. Geneticist Wesley C. Warren of the Washington University School of Medicine, St. Louis, Mo., and an international consortium sequenced the entire genome of the species to assess the evolutionary relationships between platypuses, other mammals, birds, and reptiles. Comparative investigations of protein-coding and non-protein-coding genes and the reading of some 26.9 million DNA sequences revealed information on the genomic evolution of mammals. The findings showed that the venoms of reptiles and monotremes evolved independently as the result of convergent evolution and that the milk-producing genes were conserved from a mammalian ancestry. The study also confirmed that marsupials and placental mammals are more closely related to each other than either is to monotremes. In 2008 progress was made in creating genetically modified (GM) plants to produce pharmaceutical drugs. The production of pharmaceuticals derived from GM plants had proved to be efficient on a large scale, but little research had been done in using GM plants for vaccines against cancer and other chronic diseases. In one report Alison McCormick of Touro University California’s College of Pharmacy and colleagues described new plant-made vaccines that they had developed for treating non-Hodgkin lymphoma cancer. The researchers were able to use the GM plant technique to make vaccines tailored to individual patients, which was important because the molecular signature of the lymphoma tumour cells differed from patient to patient. 
The researchers created the vaccine by isolating the antibody to each patient’s tumour and inserting the gene for that antibody into a modified version of the tobacco mosaic virus, which was then used to infect a tobacco plant. The virus carried the gene into the plant’s cells, where the antibody was produced, and after a few days the antibody was extracted and purified. Only a few plants were needed to make enough vaccine for each patient. The results of a phase 1 clinical trial showed that 70% of the patients developed an immune response to the plant-made vaccine. In another study South Korean researchers showed that the tomato plant held promise as a suitable plant for producing a possible oral vaccine against Alzheimer disease. The researchers produced GM tomatoes engineered with the human gene for beta-amyloid, a peptide that was believed to be one of the major components of Alzheimer disease. The gene was introduced into the tomato plants by infecting them with a genetically engineered bacterium belonging to the genus Agrobacterium. When mice were fed soluble extracts from the plants, the beta-amyloid triggered an immune response. The researchers hoped that it would eventually be possible to reduce the accumulation of beta-amyloid in the human brain in this way and thereby inhibit the degeneration of neuron cells. Scientists discovered how a gene known as SUN controlled the shape of fruit. The fruit of the wild ancestral tomato plant was small and round, but cultivated varieties came to have a wide range of shapes and sizes. After investigating the molecular basis of the SUN gene’s effect on elongation, Esther van der Knaap and colleagues at Ohio State University and Michigan State University reported that a duplication of a DNA sequence in the SUN gene had increased the gene’s expression and had led to the elongated shape of the fruit. The gene-duplication event might have been caused by a DNA element called a retrotransposon, which inserted itself within the plant’s genome, or genetic code, and increased the expression of the gene. The authors said that their findings demonstrated that retrotransposons might be a major driving force in genome evolution, especially in plants. The discovery might also help unravel the mystery behind the huge differences in shape among fruits and vegetables and might provide new insights into the basic mechanisms of plant development. More evidence came to light concerning the effects of climate change on plants. Researchers from AgroParisTech in France surveyed 171 species of forest plants across six Western European mountain ranges by reviewing about 8,000 plant surveys that had been collected between 1905 and 2005. The researchers found that more than two-thirds of the species had climbed in elevation over those 100 years and that the average increase in their optimum elevation was 29 m (95 ft) per decade. The shift to higher elevation was greater for plant species whose habitat was restricted to mountains. Average temperatures in Western Europe rose by nearly 1 °C (1.8 °F) during the 20th century, and these results added to the growing body of evidence that increasing temperatures were leading to the migration of plants in search of cooler climates. The study also showed that quick-breeding grasses had moved up mountains more quickly than slower-growing trees. This disparity raised concerns that communities of plants would disintegrate and possibly affect the animals that relied on them for food and shelter. 
Flowers typically used scents to attract their pollinators, but a new study revealed that tobacco flowers used a mixture of both attractants and repellents to regulate their pollination and defend themselves. A team of botanists led by Ian Baldwin at the Max Planck Institute for Chemical Ecology in Jena, Ger., found that tobacco flowers produced nectar with both benzyl acetone, which had a sweet smell, and nicotine, which had a bitter taste and was poisonous. The study selectively blocked the production of each scent to see how they affected the plant’s pollination. The nicotine repelled predatory insects that tried to rob the nectar or eat the flowers. The nicotine also prevented pollinators from lingering too long at any one flower and thereby caused them to visit more flowers and increase the chances of cross-pollination. The proper dose of both attractant and repellent chemicals was needed to optimize pollination by enticing pollinators to the flower and then persuading them to leave shortly afterward. “This … shows just how sophisticated a plant can be in using chemistry to get what it wants,” commented Baldwin. A team led by Sarah Sallon of the Louis Borick Natural Medicine Research Center at Hadassah Hospital in Jerusalem managed to germinate a Judean date-palm seed that was thought to be at least 2,000 years old. It was the oldest seed to have been successfully germinated. The seed was found at Masada, the hill fortress overlooking the Dead Sea that was besieged by the Romans in ad 72–73. The scientists treated the seed with hormones, and after eight weeks it began to sprout. It grew over 26 months into a healthy sapling 1.5 m (4.9 ft) tall, which was comparable to modern date seedlings. Radiocarbon dating of fragments of the seed’s shell that clung to the plant’s roots when it was transferred to a larger pot pinpointed the age of the seed. “The exceptionally dry and hot climatic conditions at Masada may have prevented it from disintegrating and preserved its viability, but this still says a lot about the ability of seeds to survive,” said Sallon. The study of the viability of such ancient seeds was important for understanding conservation techniques for seed banks, and it might also help in modern date-palm cultivation and breeding. (See Environment: Sidebar.) Molecular Biology and Genetics The Genetics of Stress Response Physical traits often run in families. Tall parents tend to have tall children; short parents tend to have short children; blond-haired parents tend to have blond-haired children; and so forth. Emotional or behavioral traits also tend to run in families, although these traits can be more complex and difficult to quantify. Anxiety disorder (the tendency to experience excessive anxiety relative to a stimulus) is a behavioral trait that demonstrates 40–60% heritability. This level of heritability indicates that environmental factors, such as stressful conditions, and genetic factors, such as those that influence how stress is perceived and accommodated, are both very important in contributing to the etiology of the disorder. A study published in April 2008 by a team of researchers led by David Goldman of the U.S. National Institutes of Health was an important step toward dissecting the genetic factors that contribute to anxiety disorder. It provided insights into the basis not only of the disorder but also of the normal variations in responses to stress. The study consisted of several components. 
One component explored the functional significance of normal genetic variation in the gene NPY, which encodes a 36-amino-acid peptide called neuropeptide Y. The peptide is expressed at high levels in regions of the brain that are associated with arousal and emotional response to a stress-inducing challenge. Previous studies had demonstrated that neuropeptide Y is released in the brain in response to stress and that its release helps to control characteristic fight-or-flight hormonal and metabolic responses to stress, such as an increase in heart rate. The researchers hypothesized that natural genetic variation in the NPY gene might lead to variation in the expression of neuropeptide Y, which in turn might correlate with variation in stress response from individual to individual (a characteristic called trait anxiety). To test their hypothesis, the researchers identified seven naturally occurring variations in the human NPY gene sequence. They then took DNA samples from a large number of study volunteers and characterized the samples with regard to these variations. The resulting data enabled them to classify the NPY alleles into haplotypes (groups of alleles defined by the presence and absence of specific DNA-sequence markers). Since humans carry two copies of most genes—one maternally inherited and one paternally inherited—the volunteers in the study could be further categorized by the diplotype (set of two NPY haplotypes) each person happened to carry. The researchers then tested the possible impact of NPY diplotype on the expression of neuropeptide Y by measuring the level of neuropeptide-Y messenger RNA (mRNA) in lymphoblast cells from 47 volunteers whose NPY diplotype had been determined. The results demonstrated a threefold range in neuropeptide-Y mRNA levels and a clear correlation between NPY diplotype and the expression level of the NPY mRNA. A similar correlation between NPY diplotype and neuropeptide-Y mRNA levels was observed from studies of 28 postmortem brain samples and from an independent study of neuropeptide-Y levels in plasma samples derived from a separate study of 42 subjects. Next, the researchers sought to test whether NPY diplotypes associated with low, medium, or high neuropeptide-Y expression levels might also correlate with brain responses to emotion and stress. They applied a technique called functional magnetic resonance imaging (fMRI) to detect amygdala and hippocampal activation in 71 study volunteers who were subjected to transient stress by showing them images of threatening facial expressions. The fMRI provided real-time and noninvasive measurement of small changes in the blood flow or oxygenation levels of tissues. Since the amygdala governs arousal, emotional response, and autonomous responses to fear and the hippocampus functions in establishing memory and is influenced by stress, small changes in the blood flow or oxygenation levels of these regions of the brain served as quantifiable markers for the emotional recognition of and response to stress. The results were striking. Amygdala activation in stressed study volunteers with a diplotype associated with low NPY expression was significantly higher than in study volunteers with a high NPY-expression diplotype. Indeed, NPY diplotype accounted for 9% of the variance observed in amygdala activation among the volunteers. Studies of task-related hippocampal activation also demonstrated a significant correlation with NPY diplotype. 
To extend their work from imaging studies to trait anxiety, Goldman and colleagues used the Tridimensional Personality Questionnaire to characterize 137 study volunteers on various measures of harm avoidance. From these data the researchers found statistically significant, although modest, correlations between an individual’s NPY diplotype and both fear of uncertainty and anticipatory worry, but they found no correlation between NPY diplotype and either shyness with strangers or fatigability and asthenia (loss of strength). Considering the multitude of factors that influence emotional perception and response, it was remarkable that normal, naturally occurring sequence variations in one gene, NPY, could be demonstrated to have such an impact. Seasonal Susceptibility to Influenza Despite efforts to promote widespread immunization, every year in the United States and many other countries, 5–20% of the population becomes infected with influenza (flu) virus and experiences symptoms such as high fever, headache, fatigue, nasal discharge, sore throat, muscle aches, gastrointestinal upset, and general misery. In addition, many thousands of people die every year from influenza or its complications. Influenza is generally spread by aerosol transmission, particularly when an infected person coughs or sneezes in proximity to others. Influenza can also be transmitted when a person touches a surface contaminated with the virus from an infected person and then inadvertently touches the mucous membranes of the nose or mouth with the contaminated hand or finger. A notable characteristic of influenza infection in the Northern Hemisphere is that it is seasonal. Influenza peaks in the winter, and the months from November to March are typically considered to constitute the flu season. Although the seasonal epidemiology of influenza infection was long recognized, it was poorly understood. In 2007, however, experiments were reported that convincingly demonstrated that temperature and humidity affect flu transmission, and in 2008 a study emerged that provided clear evidence of a mechanism to explain this effect. This study, by Joshua Zimmerberg and colleagues from the U.S. National Institutes of Health, concerned the properties of substances, called phospholipids, that make up the influenza viral envelope. The researchers used a methodology called proton magic-angle spinning nuclear magnetic resonance to probe the ordered-versus-disordered arrangement of the phospholipids at different temperatures. At cool to cold temperatures (temperatures below 22 °C [72 °F]), the phospholipids formed an ordered gel phase, which the researchers believed would protect the virus from the elements and thereby extend its survival during transmission. At warmer temperatures, such as those common in the summer, the phospholipid envelope melted into a liquid phase, which the researchers believed would not protect the virus effectively against the environment. Thus, its survival and the range of its transmission would be limited. The study not only offered a logical explanation for the seasonal nature of the epidemiology of influenza but also presented new approaches to preventing influenza transmission. For example, compounds might be designed to disrupt the organization of the phospholipids in the viral envelope at cool temperatures. The results of the study also suggested that other viruses that use a phospholipid envelope to shield themselves from the environment during transmission might demonstrate similar properties.
We are going to be investigating the forces on this toy fighter jet, which is, strangely enough, propeller driven. We are going to determine the speed of the jet in two different ways. In one method we will look at the forces acting on the airplane, use Newton's Laws, and figure out the speed and acceleration that way. In the second method, we will measure the velocity directly by finding out how far the jet travels in a certain amount of time and dividing the distance by the time. And then we will compare the two methods. So one measurement that we are going to need for this is the length of the string that the airplane is hanging on. So let's do that first before we put the plane back in motion. I will just put a meter stick up beside the string. Start from the pivot point, and it almost comes down to the airplane. It takes another, we will estimate, probably a couple of centimeters to go to the plane. So that is 102 centimeters, or 1.02 meters. Now, let's take a look at the theory before we take the measurements. Let's take a look at the forces acting on the toy plane from the side. Here we have the weight of the airplane, which we will call mg, that is, the mass of the plane times the acceleration due to gravity, and the string pulls on it through a tension force. These are the only forces acting on the airplane. We want to combine these and do a net force analysis in order to determine the velocity of the airplane. And to get its velocity we need the acceleration of the airplane. First of all, we know the airplane is moving in a horizontal circle, and when objects move in circles, the acceleration is centripetal and directed toward the center of the circle. So the acceleration vector points toward the center. It makes sense, when doing a net force diagram, to make one of the axes point in the direction of acceleration. So I'll make the x axis point that way, and the y axis just has to be perpendicular to that; it can point up or down. So the positive axes are in these directions. Now, with those axes, let's look at the components of the tension force. I am going to define the angle that the string makes with the vertical as angle θ, and that also makes the angle down at the plane, between the tension force and the vertical, equal to θ, because they are alternate interior angles. That makes the horizontal component equal to Tsinθ and the vertical component Tcosθ. Now we are ready to write our net force equations. In the x direction we only have one force, the horizontal component of the tension force. That's Tsinθ. Let's look at the vertical direction. We have two forces in the net force in the vertical direction: the vertical tension component, that's Tcosθ, and the full weight force, which is in the other direction, so it's negative. Let's do some physics and some algebra to combine these two equations. First, we know that the object is not accelerating vertically; it always stays at the same height, in the same horizontal plane, so F net y is equal to zero. And we know that F net x is equal to ma, by Newton's second law. So let's write down two equations:
ma = Tsinθ
mg = Tcosθ
We want to solve these to get rid of the tension force, because we don't have a way to directly measure the tension force. Well, it's easy to do that if we divide one equation by the other:
ma/mg = Tsinθ/Tcosθ, so a/g = tanθ
So a is equal to g·tanθ. That's a centripetal acceleration, so it's possible to express a as v²/r. And finally, solving for v, we get
v = sqrt(g·r·tanθ)
So this is one method we will use to find the speed of the airplane.
We will need to know the radius of the path (r) and the angle of the path (θ). What I measured before was the length of the string; that's the hypotenuse of this triangle. In order to get the angle and the radius, and the radius is right here, we are going to measure this height. This is a right triangle right here, so if we know the height here and the length here, we can use trig and the Pythagorean theorem to calculate the angle of the path and the radius of the path, respectively. Then we will know all the numbers we need to calculate the speed of the airplane using this method. Now for the other method, let's look at the top view of the airplane. Its path looks like a circle, the radius of the path is r, and the speed of the airplane is v. This particular method is simple, as it involves figuring out how far the plane travels in a certain amount of time. We are going to use the formula v = change in distance / change in time. Since this is a circular path, we can use a special expression for the change in distance: we can just use the circumference of the path and the time it takes to go around once. From geometry, we know that the circumference is 2πr. So this results in

v = 2πr/T

where T is the period, the time for one complete circle. So this will give us a second measurement of the speed, which we can compare with the result from Newton's Laws. These should come out about the same. Here we are again with the airplane. We need to measure two things: the vertical height and the time it takes to go around. To get the vertical height, I'll take my meter stick, put it up to the side here, and bring it in as close to the airplane as I can. And you can see about how high the plane is on the meter stick. Zero is at the ceiling, so that tells you how far the plane hangs below its anchor point. To get the time measurement, you can simply use your RealPlayer window, because you have a time display in RealPlayer. What you will do is watch the airplane and count the number of rotations, I would say 10 rotations. When you begin counting at zero, note the time in RealPlayer, and note the time again at the 10th rotation. You can take the total time and divide it by ten to get the period of the airplane. So this gives you all the measurements you need to calculate the speed of the airplane using the two methods described. It's up to you now to make the calculations and compare the two results.
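To tie the whole procedure together, here is a minimal Python sketch that runs both methods side by side. The string length is the 1.02 meters measured earlier; the drop height and the time for ten rotations are placeholder numbers standing in for your own readings.

import math

g = 9.8          # acceleration due to gravity, in m/s^2
L = 1.02         # length of the string, in meters (measured earlier)
h = 0.80         # vertical drop from the pivot down to the plane, in meters (placeholder)
t_total = 18.0   # time for ten full rotations, in seconds (placeholder)
n = 10           # number of rotations counted

# Geometry: the string is the hypotenuse, the drop h is the vertical side of the right triangle
theta = math.acos(h / L)        # angle of the string from the vertical
r = math.sqrt(L**2 - h**2)      # radius of the horizontal circle

# Method 1: Newton's laws, v = sqrt(g * r * tan(theta))
v_forces = math.sqrt(g * r * math.tan(theta))

# Method 2: distance over time, v = 2*pi*r / T, with T the period of one rotation
T = t_total / n
v_timing = 2 * math.pi * r / T

print(f"theta = {math.degrees(theta):.1f} degrees, r = {r:.3f} m")
print(f"v from forces = {v_forces:.2f} m/s, v from timing = {v_timing:.2f} m/s")

If the two printed speeds disagree badly, the usual suspects are a miscounted number of rotations or a height measured to the wrong reference point.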
Mon, 14 August 2017 Spain and England colonized the Americas in very different ways. That led to different cultural values, which led to different constitutions. Mexico has had to update and rewrite the Constitution several times since the first one in 1824, because that one was a disaster. So let’s talk about how that constitution came to life. Napoleon invaded Spain in 1808, and sent the nation into a crisis. In September of that year King Ferdinand was captured, and he abdicated the throne. In response to this, several Spanish administrators declared themselves the new government, basically a government in resistance to the French. They formed a parliament that produced the Cadiz Constitution in 1812, which called for equality under the law, and a more democratic legal system. Now the rulers of Latin America had to decide how to respond to this. They had been loyal to the Spanish Crown, which had put them in the positions they were in. But now Spain was preoccupied with the invasion, and so the Latin American rulers were given a little more space to make their own rules. Independence and The Constitution But Spain wasn’t totally distracted. A wave of independence spread over Latin America. The first movement was in La Paz, Bolivia in 1809. Spain sent troops from Peru to crush that one. The Mexican response to the Spanish crisis was complicated by a movement in 1810 led by a priest called Miguel Hidalgo. Hidalgo and his men sacked the city of Guanajuato and then started killing every white person they could find. The line between class warfare and ethnic cleansing totally disappeared. Mexico’s elite at that time was largely white, and as they watched that little example of popular participation in local politics, they remembered the Cadiz Constitution, which called for even more popular participation. They could never embrace that kind of document. So Mexico’s elite stayed loyal to the Spanish Crown. In 1815 Napoleon’s empire collapsed and King Ferdinand took the throne again. But he now faced mutinies and was forced to recognize the Cadiz Constitution as well as the parliament that had written it. The parliament was now becoming more radical and was calling for the abolition of slavery. The Mexican elites watched this as well. They no longer had an ally in the Spanish Crown, so they decided independence was a better fate than adopting the Cadiz Constitution. Those elites wanted Mexico to become its own independent constitutional monarchy. The man who led their independence movement decided that he should be the emperor. And he wasted no time giving himself dictatorial powers. He didn’t last long, but the cycle of dictatorship, coup, dictatorship, coup haunted Mexico for about a hundred years. During those hundred years Mexico endured the disastrous misrule of Antonio Lopez de Santa Ana, the guy who attacked The Alamo and lost the Mexican-American War. The extreme political instability in Mexico during the 1800s was, obviously, disastrous. Mexico lost about half of its territory during this period, including basically all of the American southwest. Since the arrival of the Spanish, the Mexican elite had structured their society entirely around slave labor and monopolies, AKA extractive economic institutions. So after hundreds of years of those extractive policies, by the time the Industrial Revolution rolled around, Mexico was in no place to take advantage of it. In the United States in the early 1900s, people from most walks of life could get a patent to develop products. 
Once you had a patent, you could get a loan from a bank to start a business. By 1914 there were almost 28,000 banks in the U.S., and the competition was fierce. In Mexico at that same time, there were about 40 banks, and no competition among them, meaning there was no incentive for a bank to provide a better service than the bank down the street. Since there was no competition, the banks could charge huge interest rates, which basically meant that only the superrich could get loans, and then they could use that easy access to credit as a way of gaining even more control over the country. The American Constitution placed huge constraints on executive power in the United States, but in Mexico there were basically no restraints, and the only way for someone to get rid of a president was the same way he originally took power: By force. Presidents in Mexico violated property rights with total impunity, they expropriated tons of land, and they granted monopolies and political favors to their supporters. The reason the United States banking system worked better for Americans than the Mexican system was because of the political and economic institutions of both countries. The stable banking industry worked in conjunction with political institutions that were much more democratic. So American bankers and politicians could try to corrupt each other, and were often successful, but politicians could be kicked out of office during the next election. A nation with extremely unstable political institutions can’t hold people accountable in the same way. England’s Path to the Industrial Revolution We don’t have time for a full recap of the conditions that put England and Spain on different roads, so if you want that, you’ll have to get the book. England’s road to the Industrial Revolution was long and winding and difficult. The elites fought every attempt to limit their power and make the nation more democratic, but in the long run those elites failed just often enough. One important event was the signing of the Magna Carta. It was not “liberty and justice for all,” but it was a tiny step in that direction. The king was forced to sign it. Later the Pope annulled it for him, but the seed was already planted. In the late 1400s the Lancasters won the War of the Roses. Their king, Henry VII, disarmed the aristocracy, basically giving the crown, or the nation, the monopoly of violence. Then Henry VIII and Thomas Cromwell turned the government into a set of bureaucratic institutions rather than what it had been before, which was just the private household of the king. Acemoglu and Robinson’s thesis rests on the assumption that power needs to be centralized in a kind of Goldilocks balance, not too little, not too much. Too much centralization causes North Korea. Too little centralization causes Somalia. Without centralization, political institutions are not possible. Henry VIII fought to make himself more powerful, and the elites underneath him fought against him, and they ended up indirectly making the government more pluralistic, making a system of checks and balances (more or less). The institutions in England were still extractive at this point, but they were laying the foundations for England’s longterm success. As both sides fought each other, they were able to centralize the state just enough, but also limit centralization just enough so that absolutism didn’t creep in and destroy their longterm success, like it did in Spain. 
Later, King James I did everything he could to become an absolutist ruler, and his son Charles I carried on that fight; Charles ended up fighting a civil war over it, and he was defeated and executed. But a dictator replaced him. After this long series of conflicts came more conflicts that we don’t have time for. But since you’re a podcast listener you’ll be able to find podcasts that go into these topics in ways that I just can’t. There are probably at least 6 podcasts on the history of Great Britain. The path to longterm national success is not obvious, and it was even less obvious in England before the Industrial Revolution. The Industrial Revolution The Industrial Revolution began with transportation and textiles. As you might guess, it was not a straight/simple path forward. One family invested £6000 of their money to make a river navigable, and in exchange the government granted them the right to charge people for navigation on the river. But the government tried to backpedal, and so they had to go back and forth fighting it out. The issue was resolved in favor of the family, thereby setting a precedent and demonstrating to the people that their property rights would be respected. If we compare that with Venezuela today, we see a place where the government can walk into any private business and say, “This now belongs to me.” Nobody in Venezuela has any reason to open a tiny café or a corner store in their neighborhood and hire a few neighbors (in a country with at least 25% unemployment, by the way). People know that if their government sees them being successful, they could lose everything they worked for and have to start all over from zero. So it’s smarter to not do anything. England avoided becoming Venezuela in part because people believed their property would be secure. There are lots of other factors, but that one is key. At any point in England’s development, the wrong person could have come into power and could have held onto it for too long, but luck as well as virtuous cycles or positive feedback loops put England’s economy in the best position for longterm success. So the Glorious Revolution in the late 1600s increased pluralism and led to the creation of the Bank of England, which sparked a financial revolution. People could then take out loans and start businesses, which gave more power to the commoners, which in turn created even more political changes that kept the cycle improving decade after decade. The political and economic institutions became more favorable to innovators and entrepreneurs, and property rights got more secure. That played a role in the transportation revolution, which laid a foundation for the Industrial Revolution. England also made smart use of economic nationalism and protectionism. Just as companies are in constant competition with each other, so are nations. The government made it illegal for foreign ships to carry products to England or its colonies, and they made it illegal to transport English products on foreign ships. English trade had to be transported on English ships. This obviously encouraged English traders and manufacturers to continue innovating and looking for profitable activities. Property rights were improving, infrastructure was improving, more people had access to finance, and manufacturers and merchants were protected overseas. In 1760 the number of patents jumped way up as a result of people’s faith that they could benefit by going into business. But, as I’ve mentioned at least twice now, it wasn’t simply a complete and steady improvement.
People tried to set up monopolies and tried to change laws to make it illegal to compete with them, and the government tried to weasel out of agreements, but the general trend was positive. A strong economy is a changing one. An economy is a living organism, and the only constant for a living organism is change. Death is a part of all living things. Skin cells die and get replaced by new ones just like old industries die and get replaced by new ones. Anticapitalists like to use periodic market contractions as evidence that capitalism will soon fail, but that’s kind of like saying humanity will soon go extinct because so many of them die. Humans are not eternal, and neither are businesses or industries. It’s a process called creative destruction. It’s a scary process. It creates winners and losers. And ultimately that is why most countries are poor. The people who are scared of creative destruction have held too much power for too long. They are scared of change because they very well could lose in a competitive economy. As cotton started booming in England, the wool industry declined. New technologies were invented to speed up the production of cotton fabrics, and that meant people who wanted to join the cotton boom had to learn to use those new machines. People who adapted to the changes survived and prospered. People who could not adapt did not. The world economy exploded during this era. The leaders of extractive countries could get rich by exporting natural resources to the nations that were expanding. I’ve already talked about Mexico during the Porfiriato, which is the 30-year dictatorship of Porfirio Diaz. If you want more about that, you can listen to the episode called Revolution 1.1. Mexico underwent big changes during his rule. But these were what Acemoglu and Robinson call path-dependent changes. Since these resource-rich countries with extractive institutions were already on that path, the path of extraction, the changes that took place were simply an evolution of the processes that had already impoverished them. Globalization made the frontiers economically valuable. Large, open spaces that took forever to cross by horse were now seen as areas filled with valuable resources. The people who lived in those areas were not able to defend themselves, and so they were pushed out. As a tangent, that moment in history, colonization and the forceful dispossession of people from their lands, gives us an extremely powerful lesson for today. Cultures that are not strong will always be trampled by people from other cultures who are hungrier and better-organized. If we take nothing else away from the history of cultures interacting with each other, we need to take that lesson. It’s true in international relations as well as business. Stronger, hungrier, and more desperate companies can put others out of business, like Amazon did to bookstores. Remember, Amazon was not as big as Borders Books or Barnes & Noble. But it was hungrier and smarter. Apple did it to the music industry. When successful companies and successful nations grow complacent, when they get too comfortable, that’s when disruption happens. So the newly-discovered value of those wide open spaces led to more divergence between the U.S. and Mexico, because both countries reacted to those wide open spaces in different ways. The indigenous populations in America were pushed out of their land, and then the United States gave broad access to those frontier lands. 
This made those lands economically dynamic, in the words of Acemoglu and Robinson, as well as somewhat egalitarian. In Latin America the same dispossession happened, but those lands were not then made broadly accessible to the public. They were given to the politically powerful, which allowed the elite to concentrate their wealth and expand their power even further. Porfirio Diaz used the opening of frontier lands as a way to enrich himself and his allies. He sped up the cycle of extraction. And of course there were consequences for him, and you can’t just flat-out condemn every single thing that happened under his rule. But he continued Mexico’s path of extraction. Extractive institutions can cause economic growth, but only for a limited time, and only in limited quantities. Extractive economic policies don’t work in the long term. And so eventually Diaz was overthrown, but Mexico was sent into at least a decade of chaos afterwards, and probably closer to 15 or 20 years of chaos. This pattern of extraction, like I said, causes short-term growth, but it comes at a very high price to the country at large. There were civil wars, coups, revolutions, and economic stagnation all over Latin America through basically the entire 20th century as a result of the Spanish Crown’s original extractive policies. There was a revolution in Mexico in 1910, in Bolivia in 1952, Cuba in 1959, Nicaragua in 1979, and civil wars in Colombia, El Salvador, Guatemala, and Peru, and attempted agrarian reforms in Bolivia, Brazil, Chile, Colombia, Guatemala, Peru, and Venezuela. For many Latin American countries, democracy didn’t arrive until the 1990s, and even today they’re not very stable. In my own opinion many Latin American countries have made great strides toward developing more inclusive economic and political institutions, and even where they have failed to make improvements, the internet is a tidal wave rolling over Latin America and letting the people communicate with each other and conduct business even if their governments are doing everything possible to keep the old extractive models in place. At least in Mexico almost everyone I know who’s my age or younger has a Facebook account, which means they have regular access to the internet. The internet is probably the most powerful economic equalizer in world history. Unfortunately 97% of those people will use the internet exactly like Americans and Europeans use it, as a way to waste as much time as possible rather than learning something valuable. You can give people an equalizer, but you can’t force them to use that equalizer to get the equivalent of a university education every two years if they’d rather watch 30-second Facebook videos all day. The authors of Why Nations Fail illustrate the modern difference between the United States and Mexico by using the example of two of the world’s richest people: Bill Gates and Carlos Slim. They say Gates largely became successful through innovation, and they point out how the monopolistic tendencies of Microsoft were punished by the U.S. Government. In 1991 the Federal Trade Commission investigated the issue of whether Microsoft had become a monopoly. The U.S. Department of Justice filed a lawsuit against Microsoft in 1998 claiming the company had abused monopoly power, particularly by tying Internet Explorer to the Windows operating system. In 2001 the company reached a deal with the government. They didn’t face the penalties many people had wanted, but they didn’t get off scot-free either. 
Carlos Slim, on the other hand, got his money through intelligently manipulating the legal systems of Mexico. His initial success came through stock market deals and through buying and revamping failing businesses. Telmex, the telecom company, had been run the way that seems reasonable to socialists: as a state monopoly. Then in 1990 the government privatized it and sold it to Slim, turning a state monopoly into a private monopoly. If you’re a Mexican entrepreneur, you face huge obstacles, including expensive licenses, truly labyrinthine red tape, and a financial sector that colludes with your largest competitors. Slim is a smart man who simply uses the system to his advantage. But Mexico is becoming more competitive, and Carlos didn’t build his empire by being the strongest competitor. He did it by finding loopholes, and loopholes are not a longterm economic strategy. As Mexico becomes more competitive, it gets more and more important for individual citizens of Mexico to get ahead of the curve. It is the world’s 15th largest economy, and now in the internet age every business with an online presence has to compete globally. Mexico might not be the easiest place to start a business, but I see a lot of ways in which it is somewhat easier than where I come from. I don’t want to turn this into a tangent on doing business in Mexico, so I’ll keep this short, but the level of business sophistication in Mexico is extremely low, which makes it easier for Mexicans to outperform their competition simply by being dedicated to their customers and willing to go to the bookstore and buy a couple business books every month. I’m not saying it’s easier in Mexico, but your competitors are extremely unsophisticated and unwilling to invest profits into their business. That counts for something. Sun, 6 August 2017 Before we get into today’s episode, I want to take a second to plug a pretty cool thing I made that can really help out anybody who has learned a little bit of Spanish and wants to go much deeper. It is the Mexican Spanish Master course. It’s 90 minutes of video lessons about Mexican slang, culture, and profanity. You can download the videos, the audio files, as well as the transcripts, and listen to the course in your spare time. This course did not exist when I needed it to exist, but it does exist now, and you don’t have to spend hundreds of hours listening to people say these words but not understanding them, and then slowly putting together a vocab list of new words that your teachers never bothered to tell you about because they were teaching you a generic international Spanish. If this sounds interesting, check out digitalnomad.mx and scroll down right below the email signup form, and you can join The Mexican Spanish Master Course. That’s it. Let’s get into the show. Nogales VS Nogales My research for The Mexican Revolution took me on several detours. One of those detours was the Labyrinth of Solitude. Another detour was Why Nations Fail. If I could go back in time to when I was 18 or 19, when I was deciding to go to college and thinking about majoring in Global Studies, which is the ridiculous Marxist version of Poli Sci and International Relations, I would tell myself first of all not to major in Global Studies because it would be a colossal waste of time, and I would tell myself, “If you really want to understand global development, college will not explain it to you. You should start with two books. One of those books is Why Nations Fail.
The other is Guns, Germs, and Steel.” In college I had to read a ton of irrelevant nonsense: postmodern imbeciles like Horkheimer and Adorno, Foucault, and a bunch of other people whose appraisal of global development is so flawed that it’s honestly baffling to me that anybody takes them seriously in the 21st century. These two books basically took my Bachelor’s degree, threw it in the garbage, set it on fire, and then spit in my face. They showed me that my degree is EVEN LESS VALUABLE than a Gender Studies degree. I graduated with a piece of paper that’s worth less than Comparative Literature. But Daron Acemoglu, James Robinson, and Jared Diamond have at least helped me stop believing the total nonsense I believed in my 20s. I can’t turn back the clock, but I can hopefully serve as a warning to anybody who’s thinking about going down the same stupid road I went down. Don’t do it. Just read the two books I mentioned. Why Nations Fail and Guns, Germs, and Steel offer arguments that in some ways compete with each other and in some ways complement each other. Right away in the first chapter of Why Nations Fail, they smacked me so hard that my face still hurts. The simplest way to understand the basic argument of the book is to look at the differences between Nogales, Arizona, and Nogales, Sonora, Mexico. And then North Korea vs South Korea. The differences between those places are not explained by geography or culture. These are places separated only by a little fence, not by oceans and not by cultures. Just a fence. There are differences in culture, especially between North and South Korea, but they didn’t start that way. And those cultural differences didn’t cause South Korea to win and North Korea to fail. Acemoglu and Robinson argue that the real determining factors in a country’s prosperity are its economic and political institutions. To put it as plainly as I can, the countries with good institutions are prosperous while the countries with bad institutions fail. To be clear, Acemoglu and Robinson aren’t calling them “good” and “bad” institutions. I am. The language the authors use is inclusive institutions and extractive institutions. We’ll talk a bit more about those definitions later, but for now we’ll just say that inclusive institutions give people incentives to start businesses and to get involved in the democratic process, while extractive institutions either don’t incentivize people or they actively punish people for starting businesses. What I just said should be totally obvious. Of course there are competing theories, but none of them, especially the ones coming out of Global Studies departments, come even remotely close to reality. But there is a semi-competing idea put forward by Jared Diamond. If you’re not familiar with Jared Diamond and his book Guns, Germs, and Steel, he proposed a theory of global development that was based largely on geography. He spends 500 pages laying out his theories, and so I’m not going to even attempt to explain him adequately. But the basic argument is that Europe became a world power because they got a lucky roll of the dice geographically. His theory is very powerful and makes more sense the more you think about it. But geography is not the whole story. And this is where Acemoglu and Robinson come in. Jared Diamond’s theory does not explain, for example, North and South Korea. And it doesn’t explain the reversal of fortune on the American continent.
What do I mean by “reversal of fortune?” I mean, why is Mexico so poor today if the Aztecs were so much more economically powerful than North American tribes before colonization? If Diamond were correct and geography was the main determinant, then Mexico would still be the dominant power in North America. Geography is extremely important, but it’s only about half the story. Institutions are the other half. So how did those different institutions come about? Why does Nogales, Arizona have different institutions than Nogales, Sonora? To find the answer, we have to talk about how Spain and England colonized the continent. (They did so in some very different ways.) We also have to talk about WHY Spain and England were able to begin colonization in the first place. Then we have to look at how exactly they did it, because their styles were very different. Why did England become the primary superpower? The Black Death, also called the Plague, was a disease that ravaged Europe in the 1300s. It lasted about seven years and killed between 75 and 200 million people. At that time the estimated world population was 450 million. The Black Death killed potentially HALF the world’s population. Before the Black Death, English peasants had a bit more political power than the peasants of most other nations, especially the ones in Eastern Europe. After the Black Death, which killed roughly half of all the populations that it came into contact with, the English peasants were able to agitate for even more rights. In Eastern Europe, the Black Death only resulted in the government squeezing its people even harder. There were small differences in peasant rights before the Black Death, but the Plague was a critical juncture that each nation’s political institutions had to respond to. It was a turning point that made the relatively small differences between nations larger. The response of each nation to that critical juncture put each country on a somewhat different path than others. Some of the most important critical junctures in Europe were the fall of Rome, the Plague, the Atlantic slave trade, the colonization of the Americas, and the Industrial Revolution. Before Colonization of Americas Absolutism began to crumble in England, but increased in Spain as those two societies began structuring themselves differently in response to the decline of Rome, and then to the Plague. England and Spain were more similar before the fall of Rome, but they took separate paths as Rome fell. Same with the Plague. At each critical juncture, the societies drifted further and further apart. The nation of Spain was born with the 1469 marriage of Ferdinand and Isabella. With that marriage, the kingdoms of Aragon and Castile became one. The Reconquista ended in 1492, when Spain liberated itself from the Moors, the Arabs. And of course Christopher Columbus sailed to the Americas and began claiming territories for Ferdinand and Isabella, who funded his voyage. In subsequent years, through marriages, their dynasty acquired more land in the Americas as well as the Netherlands, Germany, and part of France. The Spanish monarchy was now in charge not just of the Iberian Peninsula, but of a multicontinental empire. The emperor, Charles, strengthened the absolutism that Isabella and Ferdinand started. The North and South American territories that Spain took were rich with gold and silver. The discovery of those precious metals strengthened the Spanish crown and led to even more absolutism.
As the crown became stronger and more absolutist, the laws of the empire became more and more extractive, as Acemoglu and Robinson call it. By the year 1600 Spain was in economic decline. Property rights were highly insecure. Jews and Arabs were forced out of Spain and were not allowed to take any gold or silver with them as they left. Spain defaulted on debts in 1557 and 1560, and 8 more times in the following 100 years. The banking families who had lent money to the Spanish crown were totally ruined by those defaults. Spain’s colonization style funneled money to the top while England’s colonization style spread wealth much more broadly among the citizens. There was no free trade in Spanish America, and trade was highly regulated. For example, merchants in Mexico could not trade with merchants in Colombia. If the crown could not get a piece of the action, nobody could. This policy did nothing to help Spain’s decline, and in fact it only sped it up. Spain’s version of Parliament primarily represented a few of the biggest cities. In England, Parliament represented people in urban AND rural areas. This meant the English government represented a broader range of people with different and competing interests. As that power was spread more and more broadly in England, the people could use their influence to push for less and less absolutism. This obviously led to a virtuous cycle in which English citizens had more incentive to start businesses and innovate and create wealth. Spanish citizens had basically no similar incentives. In the 1500s Spain was getting massive amounts of wealth from Latin America. Spain was much, much wealthier than England, but the crown was spending its wealth stupidly and setting up future generations for failure. Official positions could be bought and sold, or passed down through inheritance. By the way, that system is STILL in place in some sectors in Mexico. You can just buy your way into an important job even though you have no credentials, or your mommy or daddy can give you their job when they retire. By the end of the 1600s England was growing and industrializing while Spain declined. The citizens of England had more incentive to innovate and to generate wealth than almost anybody in the world. And the nation prospered as a result. But this wasn’t because the English government was just more benign, more fair. The English didn’t colonize the Americas differently than Spain just because they thought it would be a smarter longterm move. They weren’t playing 3-dimensional chess. In fact, they originally wanted to copy the seemingly-successful Spanish model of colonization. The authors write: “The Spanish strategy of colonization was highly effective. First perfected by Cortes in Mexico, it was based on the observation that the best way for the Spanish to subdue opposition was to capture the indigenous leader. This strategy enabled the Spanish to claim the accumulated wealth of the leader and coerce the indigenous peoples to give tribute and food. The next step was setting themselves up as the new elite of the indigenous society and taking control of the existing methods of taxation, tribute, and, particularly, forced labor.” Spain was richer than England at this time. England was a minor power, and was suffering the effects of the War of the Roses. As such, England was in no condition to begin colonization when Spain did. But roughly 100 years later, England had recovered a bit and they were building up their navy.
Spain tried to invade England, and famously the Spanish Armada was defeated. Spain’s navy was much more powerful, and they could easily have overthrown Queen Elizabeth and taken Britain as their own territory. But bad weather conspired against Spain, as did the death of one of Spain’s best naval commanders. So at the last minute Spain had to choose someone else to lead the attack, and the guy they picked was not a great tactician. The English defeated Spain’s armada, which opened up the seas, meaning England now had new trade routes and could really start colonizing. So by this time England was a latecomer in the colonization of the Americas. All the rich lands had been taken by Spain. They were left with the part nobody else wanted: North America. Unlike Mexico and South America, the indigenous population of North America was small and spread out. Spain took advantage of dense populations in their colonies. Indigenous slaves worked in the fields and mines, and a giant percentage of the wealth generated or extracted there went straight to the Spanish Crown. The settlers who founded Jamestown were heavily influenced by Spain’s method of conquest. They wanted to take the local ruler hostage and use him to force the locals into slavery in fields and mines. This didn’t work. The locals were not cooperative, and they didn’t live in huge cities like the Aztecs and Incas. And there was no gold or silver. So the English settlers were forced to work for their food. John Smith, yes that John Smith, was in charge of the settlers. He wrote to England asking for them to send more carpenters, agricultural workers, blacksmiths, and masons, rather than adventurers and dreamers. All the goldsmiths who had come were useless. He soon instituted a new rule, “He that will not work shall not eat.” That is perhaps the only thing that helped Jamestown survive the second brutal winter. Smith was working for the Virginia Company, which was losing money in Jamestown because of the lack of gold and free labor. So he was forced out of the colony and he went back to England. The guy who took his place tried to coerce the settlers into working. He told them that anybody who tried to leave the colony would be executed, anyone who stole food would be executed, and anyone trying to get back to England would be executed. But his strategy did not work. So the Virginia Company had to adapt. The Company decided to give the settlers incentives rather than coercion. They gave 50 acres of land to each male settler, and 50 more acres for each family member. Each adult male settler was given a say in the laws and institutions governing the colony. They saw that the only way to make a colony economically viable was to give the settlers incentives to work hard. Every time the English elites tried to set up a system that restricted economic and political rights, they failed. In Spain’s American colonies they were able to force the locals into slavery and ship all the wealth to Spain, leaving a few rich foreigners to govern the massive impoverished local population. The Spanish Crown won big in the short term but bankrupted an entire continent and screwed over future generations of Spanish citizens. Today Spain’s unemployment is around 18%. That’s almost as bad as Greece. For comparison, unemployment in the UK is around 4%. I said in Episode 1 of the Fall of Tenochtitlan that a huge portion of Spain’s wealth today comes directly from the colonial period, and that’s true. 
But Spain is also feeling the negative effects of absolutism from hundreds of years ago. So we’ve seen the very beginning of the processes that put Mexico on a different path from the United States. In the next episode we’re gonna watch how the Latin American independence movements impacted Latin America’s ability to join the Industrial Revolution. And we have to talk about how all of this influenced Mexico’s first constitution. Thank you for listening to The Mexico Podcast. And again, visit digitalnomad.mx for the Mexican Spanish Master Course. Or sign up for my email list to get the free version. It’s up to you. How deep do you want to go with Mexican Spanish? You can reach me at firstname.lastname@example.org with any questions or comments. Sun, 30 July 2017 This is part 2 in a 2-part series on Labyrinth of Solitude. In this episode I’m going to perform a quick medical diagnosis of one of the best books written about Mexico. And also one of the most self-indulgent and cringeworthy books ever written. First we’re gonna talk about teenagers, then we’re gonna talk about Coca-Cola, and then we’re gonna talk about the major, glaring flaw with this book, because huge parts of Octavio Paz’s masterpiece are completely unreadable, while other parts are completely perfect. This is not an attempt to summarize the book. If you’ve ever read it, you probably understand how difficult a task that would be. If you haven’t read it, it’s an impossible task. So rather than condensing it, I’ll instead point out some of the parts that were most interesting to me. The Pachuco and Other Extremes Paz writes that many of the thoughts that inspired him to write Labyrinth of Solitude came to him when he was in the United States. He wanted to understand American culture, but he kept seeing himself reflected in his questions about American customs. He writes about a chicano subculture called Pachucos. They got their style of dress from a character called Tin Tan, played by the actor German Valdes, in the early 1950s. The guy was somewhere between The Fonz from Happy Days, Charlie Chaplin, and Robert De Niro’s take on The Godfather, as opposed to Marlon Brando. Octavio Paz says the Pachuco style came as a response to being Mexican in racist postwar America. Pachucos were adolescents who didn’t want to go back to being Mexican, but also didn’t want to try to pass as white. And in my own estimation the Pachuco falls into the trap every adolescent falls into: Trying to prove his/her distinctness and individuality by totally conforming to the rigid rules of whatever subculture or counterculture they gravitate towards. Paz writes that the adolescent cannot forge himself, because when a person finally forges themselves, they are no longer an adolescent. According to Paz, the Pachuco is the product of two irreconcilable worlds: Mexico and the United States. In my opinion the adolescent mind is tortured by that supposed dichotomy and therefore lashes out. The adolescent wants to fit in somewhere, because an adolescent is still a child and still wants someone to protect him. By taking on the outward appearance of a particular subculture, the adolescent hopes that subculture will protect him from the hardships of the world. This is possibly why pop stars like Selena as well as academics find themselves struggling incessantly with biculturality. Selena was 23 when she died. That’s only slightly older than a college graduate.
And anyone who spends their entire life in a college will probably not mature very much beyond that point. So we find people like Gloria Anzaldua who are much older than people like Selena but who still write about how tortured they are by being between two cultures. The adolescent mind of a pop star in their early 20s and the adolescent mind of a career academic need to go through a long process before forging themselves into adulthood. I’m reading another book about Mexico…which, duh, obviously. But I came across something by another writer that completely validates Octavio Paz’s explanation of mexicanness. It’s about Coca-Cola. If you want to be politically fashionable in Western liberal democracies in 2017, you can never even imply that a gigantic multinational corporation could ever be right about anything in any way. Ever. Well, since I turned 30 I’ve stopped caring about the contemporary political orthodoxy. So screw it. Coca-Cola is right about something. They wanted to boost sales of Diet Coke in Mexico, so they did a study. When a huge corporation has billions of dollars on the line, their studies aren’t arbitrary and they don’t play BS word games. For some background here, in case you didn’t know, Coke is an absolute beast in Mexico. I’ve only met one person who didn’t like Coke here. No, I haven’t asked every single person I’ve ever met whether they like Coke. Anyway, Coke is huge in Mexico. There is not a single village that doesn’t have a store where you can buy Coke. Okay, maybe there’s one. But you can even buy Coca-Cola from Zapatistas. Subcomandante Marcos probably has some Coke in his fridge. Mexico is usually among the world’s top consumers of soft drinks, depending on the survey and the year. Diet Coke was about 30% of all Coca-Cola products sold in the United States, but it was only 2% in Mexico. So Coke wanted to know why. There are two important findings from their study. One, Mexican men think Diet Coke is for girls, and they don’t want to be seen in public drinking it. And that’s true. Diet Coke is for girls. Girls who like the taste of a dentist’s office. And guys who like the taste of a dentist’s office. Now here’s the part where I have to again recognize the total brilliance of Octavio Paz. I’m getting this Coke story from Andres Oppenheimer, but Coke is totally validating Paz from a hardcore capitalist perspective. Here it goes: The second finding is that Mexicans are quote unquote compensators. Compensators are a small category in the United States, but much bigger in Mexico. A compensator will overeat and then repent the next day, and try to undo the damage, but revert to the old behavior shortly thereafter. By the way, that’s what makes Mexican parties so great. In the U.S. most people will either drink Coke OR Diet Coke almost exclusively. In Mexico Coca-Cola found that people will drink tons of Coke one day and just generally go overboard in every way, and the next day they’ll try to make up for it by drinking Diet Coke. It reminds me of when I worked in a Mexican restaurant. In this example it was actually an American who would order the biggest, greasiest thing we had…and actually this happened all the time. I worked at three fast food joints and a couple restaurants, and it happened in all those places. People would get the unhealthiest thing on the menu and then “GIVE ME A DIET COKE.” I guess the only difference here is the on-the-spot repentance or compensation. Or maybe they drink it for the taste… I’m not sure which is worse.
Moving away from mass market sugar water… Paz writes that Mexicans like to work slowly and carefully, paying attention to all the small details, and that Mexicans have an innate good taste that is an ancient heritage. There are certainly a lot of great products made by serious artisans who are dedicated to their craft, but there’s an even greater amount of crap. That’s normal. That’s the same in any country. There’s a huge amount of slowly and lovingly crafted stuff in Mexico. The craft beer scene is still emerging and it still belongs to people who love beer. Mexico’s big beer conglomerates haven’t caught on to the profit they could make yet, and especially in Oaxaca where I live, there are only a few brands and a few micro or nano-breweries. They use Mexican ingredients, too. There’s beer infused with mezcal, jamaica, and tejate, and while I’m sure a dedicated beer connoisseur could find American or European companies making those flavors, there’s some really cool stuff happening with Mexican beer. Then there are the mezcal and tequila artisans. Some of them stick rigidly to tradition and some of them experiment. Both avenues are wonderful. Since mezcal is getting its extended 15 minutes of fame, you can find upscale mezcal bars as well as the seedier joints, and if you know what to look for you can get great stuff in both kinds of places. There’s not a huge variety of Mexican cheese, at least not that I’ve found, but it’s all great pretty much anywhere you find it. I’m not gonna bother getting into what is and is not artisanry, but by any definition there is great artisanry as well as complete crap. As I mentioned before, vast portions of Labyrinth of Solitude are completely unreadable, and I blame that on Octavio Paz’s career as a poet. He thinks so deeply about some things that his thoughts lose all meaning. And again it’s the echo chamber that academics fall into. And then not only is he writing things that mean nothing, but he puts them into overly poetic nonsense prose. Take this passage for instance: “Man is alone everywhere, but the solitude of the Mexican, under the great stone night of the high plateau that is still inhabited by insatiable gods, is very different from that of the North American, who wanders in an abstract world of machines, fellow citizens, and moral precepts. In the Valley of Mexico man feels himself suspended between heaven and earth, and he oscillates between contrary powers and forces, and petrified eyes and devouring mouths. Reality – that is the world that surrounds us – exists by itself here, has a life of its own and was not invented by man as it was in the United States. The Mexican feels himself to have been torn from the womb of this reality, which is both creative and destructive, both Mother and Tomb. He has forgotten the word that ties him to all those forces through which life manifests itself. Therefore he shouts or keeps silent, stabs or prays, or falls asleep for 100 years.” If that passage made any sense, or if Paz was actually trying to say something real, then there would be too much wrong with it to even know where to begin.
But ultimately they’re just pretty words that mean nothing, because the author is a poet who has spent too much time being terrified at his own solitude and now often forgets that he’s writing a thing that’s going to be read by other people who are also alone and therefore not inside of Octavio Paz’s mind, which would be the reason to publish something, so that you can explain your oh-so-poetic solitude to someone else who’s also alone in a cold/harsh/painful/oppressive world that only poetry can sweeten or illuminate. There simply aren’t enough drugs on Earth to make that paragraph comprehensible. It’s sort of like reading dense theological justifications of things like transubstantiation or how many angels can dance on the head of a pin. Also, most of his writing has some really haunting similarities to critical theory, which is the academic language of Marxism. And since Paz was a Marxist at this particular point in his life, it makes sense that his writing contains those scary, mechanical, dehumanizing thoughts and fixations that most Marxist writing has. But I do agree with his assertion that the differences between Mexico and the U.S. are not merely economic. In other words, if everyone in Mexico and the U.S. had the same income and the same access to the same products and services, the two countries would still be very different. The major flaw with Labyrinth of Solitude is Paz’s career as a poet. The word lacerate appears on almost every page. Mexican traditions are constantly compared to a firecracker exploding in the air and disappearing, or a bullet fired into the air. The only thing Paz likes more than the word lacerate are commas. In many, many sentences, nearly every single word will have a comma after it. Here’s an example: “Spanish Catholicism has always expressed the same will; [semicolon] hence, [comma] perhaps, [comma] its belligerent, [comma] authoritarian, [comma] inquisitorial tone.” Maybe that’s just a problem with the translation, but I doubt it. That’s what happens when writers try too hard to sound like what they imagine writers sound like, trying to impress other writers who also try too hard to sound like writers. And I think that’s why poetry never gets taken as seriously as poets want, because they make no effort to write something that non-poets can understand or would ever care about. Thank you for listening to my absurd opinions on one of Mexico’s greatest literary treasures. I promise I will be back in one week to continue defiling this sacred cultural artefact. Sun, 23 July 2017 This is part 1 in a 2-part series on The Labyrinth of Solitude. The differences between the U.S. and Mexico go back long before Europe discovered North America. In what is now Mexico, there were massive and complex civilizations. Farther north there were mostly nomadic tribes. The Aztecs and Maya were economically richer than, say, the Apache and the Cherokee. Spain and England were also different, though not as different as the Aztecs and the Cherokees. The south, Mexico, had different natural resources than the north did. I’ll talk more about the divergent paths that the U.S. and Mexico took in a future episode, but for this one we’re again talking about Labyrinth of Solitude. The author, Octavio Paz, won the Nobel Prize for Literature in 1990, and I’d be surprised if anybody who’s ever taken a Spanish class hasn’t at least heard his name. Paz says there’s one fundamental difference that helps to explain the modern differences between Mexico and the U.S. 
“In England the Reformation triumphed, whereas Spain was the champion of the Counter-Reformation.” Spain had been under Islamic oppression from roughly the year 700 until 1492, when Arab domination of the Iberian Peninsula ended. But after 700 years of something, a culture can’t really help but internalize some aspects of it. And so conversion by the sword as well as crusades and holy wars and inquisitions had become a fact of Spanish life and it became part of Spain’s brand of Catholicism, which Spain then exported to Mexico. Paz writes that conquest and evangelization are as fundamental to Spain and to Catholicism as they are to Islam. For them, conquest meant occupying foreign lands, subjugating the people, and forcing them to convert. The conversion then legitimized the conquest. English colonization was different in that evangelization was not quite as important. Mexico was conquered by people who were orthodox, inflexible, dogmatic, and authoritarian about their faith, and extremely violent. The United States was conquered by people who were also very religious, but who were largely dissidents and who felt that religion should be read and understood by everyone, not just by a priestly class. Broadly speaking, the American vision was one of Protestant Reformism, while the Mexican vision was one of Catholic Orthodoxy. Mexico’s Catholic orthodoxy was defensive rather than critical; it resisted modernity. It prevented examination and criticism. Paz writes that these two styles of religious thought, the rigidly dogmatic and the interpretive, are irreconcilable. And that irreconcilable difference played out in the structure of the religions. The hierarchy of the Catholic church is complex, and the mass itself focuses mostly on ritual and sacrament. In the Protestant tradition, scripture is freely discussed and examined and questioned, the hierarchy between the clergy and the believers is flatter, and the focus of the service is more on delivering an ethical message than on ritual. This difference comes from the Reformation, which was a criticism of European religion. The Reformation led to the Enlightenment. Spain closed itself off from the Reformation, and the Enlightenment never happened in Spain. That’s going a bit too far maybe, but when anyone thinks of the Enlightenment, no Spanish names come to mind, whereas several French and English names are immediately recognizable. John Locke, Voltaire, Rousseau, Montesquieu. Immanuel Kant was German. For Octavio Paz, Mexico’s vision of progress comes from looking to the past, whereas America’s vision of progress comes from looking toward the future. The founding of the United States was done with a promise of a better future. One major difference is that in Mexico there are still millions of indigenous people. In America there are few, and even then, most native people are corralled into reservations and forgotten. America was founded as a land without a past. In Mexico, the past is still at war with itself. Cortes and Moctezuma are still alive. Emiliano Zapata’s great desire was a return to the past, a return to pre-Hispanic communal ownership of land. Paz writes that clear-thinking Mexicans have been wondering about modernization since the 18th century. “In the 19th century it was believed that to adopt the new democratic and liberal principles was enough.
Today after almost two centuries of setbacks, we have realized that countries change very slowly, and that if such changes are to be fruitful they must be in harmony with the past and the traditions of each nation. And so Mexico has to find its own road to modernity. Our past must not be an obstacle, but a starting point.” I want to take a moment to point out that I’ve had this thought independently of Octavio Paz. Therefore I am extremely smart and impressive. But seriously, the major discovery that I’ve had while living in Mexico is that the singer Selena was wrong, and Gloria Anzaldua was wrong, and every other whiny post-modernist was wrong in the assumption that it’s oh-so hard being bicultural. In reality it’s a superpower. (And by the way, what the whiners fail to realize is that if they were monocultural, there would still be parts of their culture that alienated them, because no culture will ever fit anyone perfectly. Nobody in France is perfectly in tune with all aspects of French culture.) I say that being bicultural is a superpower because you get to see the good parts and the bad parts of both cultures, and you get to see them from both an insider perspective and an outsider perspective. You can see each culture more clearly, and then you can decide for yourself which of those good and bad parts you want to keep and which ones you want to get rid of. In Mexico, I am not normal. I am a foreigner. I haven’t been to the U.S. in about four years, and I’m sure when I go back I won’t be normal there either. I’ve discarded things I don’t like about the U.S. and I’ve discarded things I don’t like about Mexico, and I’ve combined the stuff I like about each country. And when nobody considers you normal, when nobody expects you to be normal, you realize that it doesn’t matter whether you’re normal or not. All that matters is that you live the way you think you should live and that you strive to improve constantly. And so my message about Mexico’s path forward is close to what Octavio Paz seems to be laying out. I don’t think Mexicans need to be like me, bicultural out of choice, but millions of Mexicans live in the U.S. anyway. And besides, Mexicanness is a combination of Spanish and indigenous culture, and there are dozens of indigenous cultures in Mexico. For people living in Mexico, the biggest cultural force besides Mexican culture is American culture. And nearly every family has relatives who’ve been to the U.S. or who are living there right now. All that’s required is to awaken this dormant superpower and use it. Just take inspiration from the good parts of Spanish culture, American culture, and Mexico’s indigenous cultures, and then get rid of the crappy parts. Not everyone is going to agree on what the good and bad parts are. That’s up to each person to decide for themselves. But in my own humble opinion, Mexico has been going about it blindly for 500 years. But Mexico isn’t alone in this; every culture moves unconsciously. Octavio Paz writes that Mexico needs to reconcile itself with its past in order to move forward. He may not be explicitly proposing this, but in my opinion the only real practicable way to carry that out is through education. Most people don’t want education. I do, which is why I do this podcast. You do, which is why you’re listening to this podcast. But most people don’t want education, because it’s just easier to not learn. And even when we do want to learn, most teaching methods are outdated and low quality. When you think of public schools in the U.S.
or Mexico, quality is probably not the first word that pops into your head. If you think of a business school, are they teaching you how to operate in 2017? Or 1988? Yeah, 1988. Paz then writes about how the nations that inspired Mexico’s 19th century liberals (meaning France, the US, and England) are no longer inspirational like they were centuries before. He wrote this particular essay in 1979, but I think his point is still valid today. The thinkers who inspired Mexico’s liberals were people writing about freedom, writing about escaping tyranny. And they were writing about the future. They were engaging in a transformation of their cultures. But then in the 20th century the United States went from inspiring freedom to being yet another colonizing empire. I’m oversimplifying it way too much and I have very little patience for the Noam Chomsky style of everything-bad-is-America’s-fault, but no one can deny that the U.S. has done things to make lots of people in lots of countries less free than they would have been had the U.S. not interfered. The country that inspired tons of independence movements later became a cynical geopolitical manipulator seeking nothing but power. However, I also think Paz is exaggerating a bit, or at least he’s too close chronologically to see what had just happened in 1979. In the 60s and 70s America and England went from producing inspiring intellectuals to inspiring cultural figures. Some of the greatest art in all of human history came out of the 60s and 70s. The Beatles, Led Zeppelin, Jimi Hendrix, and Bob Dylan are all such unbelievably great artists that eventually everyone on Earth has to recognize their greatness. Even those of us who wanted to rebel against our parents by pretending we didn’t like those bands…eventually had to admit that we were wrong. And that’s just music. We don’t need to get into this whole conversation, but I think I’ve made my point that the inspiration went from intellectual to artistic. Octavio Paz is searching for a source of inspiration for Mexicans on how to move forward as a country. But I think he’s missing the point of classical Liberalism, which is about the sovereignty of the individual. And I think the message of the Liberals is more important today than ever. The 20th century was the great science experiment of liberalism versus collectivism, and we’ve seen the horrors that collectivism always produces. And now in 2017 as collectivists are trying to take over Western civilization, yet again, we must draw inspiration from the classical Liberals and remind people that the only real minority is the individual and that freedom must be defended fiercely against any force that seeks to limit it. The people who inspired Mexico’s 19th century liberals are still relevant, and they can still serve as a source of inspiration for Mexicans today, and for people of any country. Paz points out that Mexico’s position is much better than that of many other countries. That was true in 1979 and it’s true today. Paz mentions Latin America’s military dictatorships, most of which were propped up by the U.S. The U.S. propped up those dictatorships in order to keep collectivism from spreading like cancer, but military dictatorships and communist purges are both terrible options. And life in some Asian and African countries post-independence was sometimes worse than it was during colonialism. Then at the end of his essay he gets into some things that aren’t really relevant to anybody. He was writing before the fall of the Soviet Union.
His assessment of history is great. His analysis of his own present is less impressive.

Mon, 3 July 2017
Cuauhtemoc was the last Aztec emperor. I’ve captured one sliver of his life in my series Fall of Tenochtitlan, but obviously he was around before and after the Spanish invaded and destroyed his city. By the way, I’m not making any moral judgements about the Spanish or the Aztecs when I say invaded and destroyed. Invasion and destruction are pretty common themes in history. Historians don’t know exactly when and where he was born, exactly who his parents were, and they don’t know where he was buried. One town in the state of Guerrero claims to have his bones, but others say it’s not him. He was born sometime around 1500 to a noble family. He was named emperor in 1521, and he was hanged by the Spanish in 1525. He had a wife and at least one child. If there were any official documents about him, they were lost or destroyed during the destruction of Tenochtitlan. After the emperor Cuitlahuac died of smallpox, Cuauhtemoc was named emperor and put in charge of the city’s defense. When he finally accepted that he couldn’t save the city, he and some advisers tried to flee and find a better place to continue the war. He was captured and brought to Cortes. He asked to be sacrificed, because that was the expectation of any captured soldier. Before the Spanish arrived, the typical battle strategy was to capture as many people as possible rather than killing them. Live prisoners could be sacrificed to the gods. After death, the soldier would ascend to the heavens and accompany the setting sun. The Spanish soldiers hadn’t been paid yet, and Cuauhtemoc said there was no more gold left. It’s likely that Cortes had kept most of it for himself, and when he did offer his men a bit of gold, the amount was so tiny that they all refused to take it. There had been a few mutinies and conspiracies before the battles of Tenochtitlan, and now that Cortes wasn’t paying up, his men were getting unruly again. But Cortes had to keep them active, and so he sent them to explore and colonize other parts of Mexico and Central America. In 1524 one of Cortes’ men, Cristobal de Olid, who had been sent to conquer Honduras, rebelled against Cortes. So Cortes went to put down the rebellion. He needed to take Cuauhtemoc with him because if he had left him behind, the emperor could have started up his own rebellion. Along the way many of them died of hunger, and others were bitten by venomous snakes. Eventually they got to a Mayan village in the state of Campeche. Today it’s an archeological site called El Tigre. They were received by the son of the chief. There were about 100 of them still alive. At some point during the stay in that village, Cuauhtemoc was executed. The motives aren’t completely clear. It’s possible that he was organizing a rebellion, but it’s also possible that Cortes just wanted to take advantage of an opportunity to isolate the emperor from his people and then get rid of him. What happened with his body after that is an open question. The tradition for rulers of Tenochtitlan was cremation. It’s possible that his ashes were placed in an urn that his captains and advisers decided to leave in an important building at El Tigre. Or maybe he was buried in a Mayan crypt, which has since been forgotten. But the sources I’m using are pretty certain about what did not happen to his remains.
An old church in the town of Ixcateopan in the state of Guerrero, which has since been converted to a museum, claims to be Cuauhtemoc’s final resting place. There is a skeleton under glass surrounded by paintings of the emperor. At one of the main entrances to the town there is a statue of him standing next to an eagle with a snake in its mouth perched on a cactus. The eagle on the cactus represents Tenochtitlan’s foundational myth. The people of the town don’t much care for the scientific studies showing that the bones are in fact not Cuauhtemoc’s. The town is, according to the town itself, the place where he was born and where he now rests. September 26, 1949 is an important date for the town. That was the day that archaeologist Eulalia Guzman publicly declared that she had found the grave of the last Aztec emperor. (Quick side note, the term emperor is not totally accurate. The word the Aztecs used was tlatoani, which was something more like Speaker. I’ll talk a bit more about that term in a future episode. But for now, emperor works.) Eulalia Guzman had heard rumors that he was buried there, and she said a local family had documents that pointed to the exact burial location. She organized an excavation at the church and found some bones as well as some objects that appeared to back up the claims of authenticity. There was a spearhead and a plate bearing the inscription “1525-1529. King Coatemo.” A scientific committee showed up in the same year, 1949, to analyze the findings. Then another study happened in 1950, and a third in 1970. The controversy kept going until a fourth study in 1976 looked at the bones, the spearhead and plate, the grave, and the documents describing where to find the grave. The definitive statement came out: There was no scientific basis to claim that the remains belonged to Cuauhtemoc. The documents were forgeries, the grave had been recently dug, and the oral histories claiming he was from the town were also false. Nonetheless, a tradition began in 1949 and has been going on ever since. People leave flowers and offerings at the grave, and dancers in costumes fill the streets. Some people go so far as to call the town the birthplace of Mexicanness. Some people claim Cuauhtemoc was born on February 23 in that town. The first dancers start arriving the night of February 21st. The following day, people head to the museum with their offerings. Then there’s more dancing. The celebration attracts locals, travelers, families, and even representatives of indigenous groups from all over Latin America. The party goes on all night and into the morning of the 23rd. It’s a small town, and it’s usually calm and quiet. But the festivities completely transform the atmosphere. It’s a pretty normal festival by Mexican standards, but there is one really unfortunate bit, which is that kids are taught a false version of history where Cuauhtemoc was actually born in their town on February 23 and then his remains were buried there as well. It’s just another reminder that we can’t try to force history to conform to our own personal fantasies. Sure, it would be cool if those bones actually belonged to the last Aztec emperor. But it’s just not true, at least according to the small amount of evidence I’ve found. But facts don’t move people. Only stories do. And the people of Ixcateopan have found a story they like better than the truth. You are free to use that information as you see fit.
Daniel Diaz. “El dia que asesinaron a Cuauhtemoc.” Relatos e Historias en Mexico #95
Rosalba Quintana Bustamante. “Aqui yacen los restos de Cuauhtemoc.” Relatos e Historias en Mexico #95

Wed, 28 June 2017
The previous episode ended with Pancho Villa breaking out of prison. This episode has another prison break. This is the third or fourth or fifth high-profile prison break we’ve seen in this series. That’s got to be some kind of podcasting record. Krauze writes that the country was better off with Madero. In the win column Krauze puts a return to business as usual, growing bank assets, growing external trade, creation of the Department of Labor, improved working conditions in textile factories, legalization of labor unions and the right to strike, changes to agrarian policies, creation of industrial and elementary schools, new highways, and numerous political reforms. The people did not support the anti-Madero rebellions. Yet despite all the good things going on, public opinion was being changed by rumors and distortions in the media. So we’ve got Zapata’s rebellion in the south, which General Felipe Angeles is able to contain but not totally eliminate. There’s Orozco’s rebellion in the north, which General Victoriano Huerta is in charge of combating. American Ambassador Henry Lane Wilson is up to no good, which we haven’t talked about yet. On top of that, the press continued slamming Madero. And now we’re about to have a rebellion in the capital. Generals Felix Diaz and Bernardo Reyes had been imprisoned by Madero’s army after leading their own revolts. General Mondragon took his cadets and demanded the release of the two generals. When the guy in charge of the prison resisted, he was shot, and the generals were freed. The next part of the plan was to attack the National Palace. They might have been successful if they hadn’t been spotted by one of the Palace generals, who was walking to his office in civilian clothes that morning. He saw cadets dragging a machine gun with them, and he was able to raise the alarm and get his men ready. General Reyes was shot and killed during the assault on the National Palace. By the end of the fighting there were about 400 dead and 1,000 wounded. Madero’s men defended the National Palace effectively and forced the two rebelling generals back. The assault started at about 7:30 in the morning. President Madero was three miles away, in Chapultepec Castle. He got word of the attack at 8:00. He fled, on horseback, and went to meet with some of his advisors. Among them was Victoriano Huerta, who swore loyalty to the President. Madero made him Commander of the Army of the Capital. Huerta’s new role would put him in charge of defending the government and the president. The President stepped out onto a balcony and addressed the public, with Huerta standing next to him. He then got back on his horse and rode to the National Palace. By this point the surviving generals had retreated to the city armory, the ciudadela, where they stocked up on ammunition. That evening the President left the city and went to Cuernavaca to keep fighting the Zapatistas. He was confident that the rebellion would be crushed like previous rebellions against him in the capital. While there, he asked his Army advisors what they would think if he put Felipe Angeles in charge of defending the capital instead of Huerta. They didn’t think it was a good idea, since Felipe had only recently been promoted and was not technically a general, because Congress had not yet made his generalship official.
The next day, February 11, Huerta began bombarding the rebels, who responded in kind. Both sides began tearing the city apart. American Ambassador Henry Lane Wilson started sending telegraphs to President William Howard Taft, saying the Mexican government had fallen. During the ten days of fighting that followed, Huerta conspired with Felix Diaz and Ambassador Wilson. They struck a deal. The deal was that Huerta would switch sides and become interim president, and then Diaz would become the next president. Huerta worked from then on as a double agent, conducting battles against Felix Diaz and meeting with him in secret to plan their counterrevolution.
Assassination of Madero
On February 17, 1913, President Madero was sitting in his office when the door opened. His brother Gustavo walked in. Behind him, held at gunpoint, was General Huerta. Gustavo said he had found out that Huerta had made a pact with Felix Diaz, the leader of the army rebellion. Before this incident, the President’s own mother had warned him about General Huerta. She wasn’t the only person to do so. Madero considered the situation. He gave Huerta the chance to defend himself against the accusation. Huerta swore loyalty, embraced the president, and said he would eliminate the counterrevolutionary forces within 24 hours. Huerta had said his piece, and now Madero had to decide what to do. Historian Enrique Krauze writes: “It was a key moment. And Madero made a suicidal decision. In spite of Huerta’s previous commitments to Porfirio Diaz and Bernardo Reyes, in spite of the disrespect and mockery Huerta had shown him in Morelos in 1911, despite the fact that his own mother had warned him against the “counterrevolutionary” Huerta, despite the arrogant threats of Huerta at Ciudad Juarez, despite rumors that Huerta had earlier met with Felix Diaz, despite – at that very moment – the confirmation of his arrangements with the rebels, Madero freed Huerta, personally returned his pistol and granted him the 24 hours he requested to demonstrate his loyalty. He then reprimanded his brother, Gustavo, ‘for being carried away by his impulses.’” At every single decision point Madero refused to listen to people’s distrust of Huerta. The question has to be: Why? It’s a question I haven’t been able to find an answer to. The next day, February 18, there was another attempt to take the National Palace. One of Huerta’s allies, General Blanquet, led the attack. After a shootout he entered the Palace and approached Madero. The President slapped him in the face and called him a traitor. Blanquet responded by saying, “Yes, I am a traitor.” He arrested the President. While that was going on, Huerta had invited Gustavo Madero to lunch in a downtown restaurant. He casually asked to see Gustavo’s gun. When Gustavo gave it to him, he pointed it at the man and told him he was under arrest. Huerta took him and the quartermaster general of the National Palace to the ciudadela. The Cuban Ambassador to Mexico at the time, Manuel Marquez Sterling, wrote a book called The Last Days of Madero. In it he describes what followed: “Jeers, insults, angry shouts mark their arrival. An individual named Cecilio Ocon is the judge who interrogates the defendants. Gustavo rejects all the accusations of his enemies and invokes his privileges as a legislator. But Ocon, after condemning him along with Basso to execution, slaps Gustavo brutally. ‘This is how we respect your privileges,’ he says. Felix Diaz intervenes and they lead the prisoners to another section of the ciudadela.
But the mob of soldiers, full of courage, follows them in a frenetic, screaming chorus. Some of them mock Gustavo, others swing their iron fists against him. Gustavo tries to strike out at the worst of them. And a deserter from the 29th battalion pierces Gustavo’s only good eye with his sword, blinding him at once. The mob breaks into savage laughter. The disgraceful spectacle has amused them. Gustavo, his face bathed with blood, weaves and staggers, groping his way; and the ferocious audience accompanies him with bursts of laughter. Ocon takes him to the room where he is going to be shot. Gustavo, concentrating all his energies, pulls away from the murderer who is trying to force him along. Ocon, rabid, tries to grab him by the lapel of his coat. But his adversary is stronger than he is. The pistol finally ends the fistfight. More than 20 barrels discharge against the dying martyr, who shudders out a final sigh on the floor. ‘He is not the last patriot,’ shouts Basso. ‘There are still many brave men behind us who will know how to punish these infamies.’ Ocon, with his clouded gaze and unsteady walk, points a finger and says, ‘Now, that one.’ The old sailor, ramrod straight, walks to the place of his execution. One of the executioners tries to put a blindfold on his eyes. For what? ‘I want to see the sky,’ he says, in a strong voice, and raising his face toward the infinite spaces, he adds, ‘I can’t find the Great Bear . . . Ah yes! There it is, glittering,’ and then saying his farewell: ‘I am 62 years old. Let it be remembered that I died like a man.’ He unbuttoned his overcoat to show his chest and he gave the order, ‘Fire!’ as if he wanted to overtake Gustavo on the threshold of another life, beyond the Great Bear.” Unaware of his brother’s death and wanting to prevent any further violence, Madero wrote his letter of resignation. Congress was called into session to appoint an interim president. Only one congressman, Belisario Dominguez, voted against Huerta. He was shot in the street as he left Congress. Once the interim president was chosen, his only act was to hand power over to Huerta. Madero’s murder was supposed to look like an accident. Victoriano Huerta’s office called a car rental service. In 1913 cars were still a rarity. The owner of the car rental knew Huerta was a drunk. He wouldn’t trust the new President with his expensive cars. So the owner sent his son, a boy of 13, to be the driver. Neither of them knew about the plot to kill Madero. The driver was supposed to take Madero and Suarez from the prison to the military HQ, the ciudadela. The accidental assassination would be led by supposed Madero supporters who were – according to Huerta’s explanation – firing upon a car that they didn’t know Madero was in. When the shooting started, the driver ran and hid around a corner. He saw Madero and Suarez dragged out of the car and executed. Then the ambush team sprayed the car with bullets. The boy called his father. The father called the newspapers. Richard Grabman sums up the results of foreign intervention in this part of the Revolution: “President Taft was outraged. Ambassador Wilson wrote a short article defending himself but left a disaster for the incoming Woodrow Wilson administration. Ironically, Huerta’s government would turn out to be much more radical than Madero’s, and the mild reformer’s murder led to the first 20th century cultural and social revolution.
With the United States about to enter its first war overseas, its next door neighbor was in the middle of a full scale war between several forces, none of which trusted their northern neighbor. Huitzilopochtli and Tezcatlipoca were in control.” Huitzilopochtli is the Aztec god of war. Tezcatlipoca is the god of trickery. Ambassador Wilson’s excuse for conspiring with Diaz and Huerta was that a coup was necessary to keep Mexico from exploding into anarchy. In reality, the coup was exactly what Mexico needed to spark the explosion. Wilson thought Madero would implement radical reforms that would cost U.S. business interests lots of money. But Madero himself was a landowner and the tiny political reforms he pushed were nothing compared to what the real radicals wanted. And the anarchy that resulted from the coup benefitted the radicals more than anyone.

Tue, 27 June 2017
Hey, remember how the last episode had a happy ending? Welcome to Episode 3. The Congress that was elected in the fraudulent elections of the year before, 1910, stayed in power as part of the negotiations between Diaz and Madero. They did everything they could to undermine the new President, blocking most of his initiatives. The press, which had fawned over Diaz during his dictatorship, now reveled in their new freedom of speech and slammed Madero. Madero won the election, but not much changed. Most of the people in government were holdovers from the Diaz regime, and they resented the new President. Then there were the young and ambitious government workers who were disappointed at the relative lack of change. Labor had been Madero’s biggest supporter in the election, but working conditions hadn’t improved with the new presidency. Other supporters wanted land reform. Chief among them was Emiliano Zapata. The Zapata family had been defending themselves from basically nonstop attempts to steal their land since the Spanish Conquest in the 1500s. If anyone was born to carry on a family tradition, it was Emiliano. He became an orphan at 16 but managed to support himself by taking odd jobs. He used mules to haul corn into town and to haul bricks and lime to construction workers. He farmed. Was always proud of earning his own living. Great on horseback. In September 1908 the people in his village named him president of their defense committee. He and his secretary, Franco, spent the next 8 days poring over the documents they were in charge of. In 1910, before Madero published the Plan of San Luis, Zapata had already launched a tiny revolution to get back land that had been stolen in 1607. He was successful. He went back to farming until he heard about the upcoming nationwide revolution. The passages in the Plan of San Luis about returning land stolen by plantations resonated with the people of Zapata’s village, and they sent a representative to Madero in Texas. It was time for Zapata to join the Revolution. His people gathered in the town plaza to begin their march. Zapata was in the middle of the plaza, on horseback. A shot rang out. Zapata felt his hat shift on his head. He took it off and saw a hole in it. The crowd saw a man in the town hall begin to run away. Zapata told his people not to move, and he rode toward the building. He went around the building but didn’t find the assassin. One of his biggest local enemies was a plantation owner from Spain. The man had sent Zapata a message that was probably meant to intimidate, but it provoked the opposite reaction.
The message stated that if Zapata were “so brave and so much a man, we have thousands of bullets and enough guns waiting to welcome you and your men as you deserve.” When he heard the message, Zapata ordered an attack on the Chinameca plantation. It was his first military action. After the fight he and his men loaded up on supplies and marched on. With each town they passed, their army grew bigger. They slowly pushed the government out of the state of Morelos. By May 1911 only two cities, Cuautla and Cuernavaca, had a strong government presence. The fighting at Cuautla went on for days, but by May 19 the Zapatistas had won. Later on in Porfirio Diaz’s life he would reflect on those early days of the Zapatista revolt and say, “I was calm until the south rose.” Foreign land ownership had exploded during the Porfiriato. Now that Porfirio Diaz was gone, Zapata wanted recognition from Madero’s government. The Zapatistas had dealt violently with plantation owners and land grabbers, so the elite of Morelos were now complaining to Madero. The president himself came from one of Mexico’s wealthiest families. Madero would face big problems no matter what he did or did not do about the land. With respect to that, his Plan of San Luis, which called for Revolution, explicitly stated that all agreements between the Diaz government and foreign governments and corporations would be respected. At this point in the narrative Madero seems to be doing everything in his power to destroy his own revolution and lose as many allies as possible. He probably would have done a fine job of it himself, but his enemies were more than happy to speed up his fall. Victoriano Huerta saw an opportunity to weaken Madero when the President met with Emiliano Zapata.
Zapata and Madero
Madero was surrounded by flatterers and yes men. Zapata saw this and was disappointed. His disappointment would only deepen later when Madero visited the state of Morelos. The leader of the Revolution was being equally generous with plantation owners and revolutionaries. On June 21, Zapata and Madero met at Madero’s home. There was tension in the air. Zapata tried to break the tension by pointing at a gold chain hanging from Madero’s neck. Zapata posed a hypothetical situation. If I took your watch by force, which I can do because I am armed, and our paths cross later on, and we’re both armed, would you have the right to ask me to give it back? Madero said of course, and I would demand compensation. Zapata spoke again, “That is exactly what has happened to us in the state of Morelos, where a few plantation owners by force have taken over village lands. My soldiers (the armed peasants and all the villages) insist that I tell you, with all due respect, that they want you to move immediately to restore their lands.” They met a few times over a month or so. By their third or fourth meeting the combined efforts of plantation owners, the press, the interim president, and General Huerta to turn Zapata against Madero were successful. When Madero finally came into power he met again with Zapata, who wanted the withdrawal of one federal general and a new law that would improve conditions for plantation workers. Madero was done with Zapata. He told him, “Surrender to good judgement and leave the country. Your rebellious attitude is doing serious harm to my government.” Madero would later regret those words, but the damage could no longer be undone.
Zapata’s final letter to Madero said, “You can begin counting the days, because in a month I will be in Mexico City with 20,000 men and I will have the pleasure of coming to Chapultepec and hanging you from one of the tallest trees in the forest.” The Plan of Ayala was signed on November 25, 1911. It was an attack on Madero and an attempt to explain the ideas behind the new revolution. Zapata accused Madero of continuing the dictatorship of Porfirio Diaz. Since Madero hadn’t done anything regarding land reform during his presidency, among other failures, Zapata said he had to be overthrown. Madero had begun his movement “with the support of god and the people” but had not finished it, and now was the head of a tyrannical government. The only solution was to take up arms. More and more people were becoming dissatisfied with Madero and his conciliations to the old regime.
The Orozco revolt
Pascual Orozco and Madero’s ex-VP candidate, Francisco Vazquez Gomez, joined forces and took up arms against the President. Orozco joined the revolt because his men told him they were going to revolt against Madero. They said they would follow Orozco if he joined their movement. If he didn’t, they would repudiate him. Orozco wasn’t sure he was ready to lead another rebellion, but he knew he probably wouldn’t get another chance, especially if his men abandoned him. It’s possible that Madero offered Orozco the governorship of Chihuahua in order to keep him on his side. Nonetheless, on March 2, 1912, Orozco joined his men and they renewed the revolt. Orozco already had Zapata’s endorsement. Back in November 1911 Zapata’s Plan of Ayala had called for Orozco to be the leader of the revolution against the President. Although Orozco and Zapata were basically on the same side of the revolution, they had very different supporters and enemies. In the south, the plantation owners and upper classes were against Zapata. In the north, those same types of people supported and even funded Orozco. Why? Orozco had made deals with them. One example and one bit of evidence is that on one occasion during a battle he had told his men not to touch land belonging to the most powerful family in Chihuahua, the Terrazas-Creel family. Orozco was mainly interested in acquiring power. He came from a relatively wealthy family and never really cared about the goals of the people fighting in the revolutionary armies. In some cases his men didn’t know about the deals he had made with plantation owners. Some who found out about it left. Others stayed, happy to use rich people’s money and support against them. If they could use the elite’s resources while making no concessions to them, why not? Things went mostly according to plan for the oligarchs who funded the revolt when the rebel army was winning, but the summer of 1912 brought defeats, and the rebels began to split off from one another. One group of former rebels pursued land reforms. They distributed plantations among the laborers. Six of those plantations belonged to the Terrazas family, who had been funding Orozco’s rebellion. Other former rebels turned to banditry and general Robin Hood-ing.
Pancho gets back in the game
Pancho Villa had resigned from service to Madero, but now that Orozco’s men were engaging in their various revolts, the upper classes of Chihuahua looked to Villa to stop them. There are many perspectives on Pancho Villa’s history and legend. Around the time of the Orozco rebellion he had settled down, gotten married, and gone into business.
Historian Friedrich Katz says that “the most articulate of Villa’s many wives” describes his story as a classic rags-to-riches tale. He was now a successful businessman who enjoyed the support of Governor Gonzalez and President Madero. His only political activity was to carry out missions for the president. According to his enemies and critics, his supposed settling down was a front so that he could continue his old banditry, but now with legal cover. He had gone from being a small town crook to a big city gangster, in their eyes. Now his powerful friends were asking for his support in putting down Orozco’s rebellion. Villa had grown to hate Orozco after the battle of Juarez and had always hated the Terrazas family, which was now funding Orozco. He wrote a letter to one of the leaders of the Orozco revolt, asking, “Will it be a consolation to those who became widows and orphans during the last revolution to have their ranks swelled by new widows and orphans? Is it a sign of patriotism if we kill each other every time an ambitious man wants to take power?” He wasn’t exactly thrilled to go back to battle, but he took up arms at the request of Madero and Governor Gonzalez of Chihuahua, both of whom he greatly respected. Despite their disagreements on different policies, Villa still admired them and remained loyal. He felt he couldn’t stay neutral during an armed conflict in Chihuahua. Plus, he had always hated the Terrazas family and he was convinced that Orozco had tricked him into rebelling against Madero at the Battle of Juarez. He visited cities and towns in Chihuahua, gathering supporters. They were well-organized and well-disciplined. Villa ordered all bars and liquor stores closed when his men came through. They were received well in all towns they visited. But Orozco had also been campaigning throughout the state and fearmongering about the evil Pancho Villa. Orozco controlled most of Chihuahua. In one battle his men loaded a train with dynamite and drove it near a building full of federal troops, killing hundreds. The federal general of that battle later committed suicide. After suffering a few losses against Orozco, Villa was now down to 60 men. Katz writes that Villa had an uncanny ability to do the unexpected. He could snatch victory from the jaws of defeat just as easily as he snatched defeat from the jaws of victory. Villa was at his least dangerous after a big victory and at his most dangerous when he seemed close to annihilation. In the town of Parral, the federal military commander had defected to Orozco’s side. Not all of his men agreed to follow him though, and when Pancho Villa arrived with his 60 men, the federal soldiers joined him. Villa captured the defector and sent him to Mexico City, where he was imprisoned. He appropriated all arms in the town and made the wealthiest families give him a total of 150,000 pesos. He gave them receipts and said the money would be paid back as soon as the federal government won. Anyone who refused was put in jail. Eventually everyone paid up. He wouldn’t be so…generous…at one bank. Since the Creel and Terrazas families funded Orozco’s revolt, Pancho entered Enrique Creel’s bank and took 50,000 pesos as “spoils of war,” and he threatened to put the bank manager and his son on the front lines when the revolutionaries began their attack. On April 2nd at 4 a.m., a cannon woke the town up. A battle had broken out on the outskirts of town. A few hours later some of the attacking soldiers began leading a team of mules up a hill.
They were carrying a cannon. An American was among those defending the city. He manned a machine gun and fired at the cannon. When he stopped shooting, the six mules carrying the cannon were dead and the ranking officer had been shot in the head. His men abandoned the mission and ran back down the hill. Madero had won Pancho Villa’s respect after the battle at Ciudad Juarez. So when Pascual Orozco offered lots of money if Villa supported him and Zapata instead of Madero, Villa turned him down. Something Villa didn’t like about Madero, though, was the President’s trust in Victoriano Huerta, a brutal man who had been one of Diaz’s favorites. Pancho Villa remained loyal to Madero and volunteered to take his men to fight against Orozco and Zapata. Madero accepted, but he ordered Villa to report to Victoriano Huerta, who would be his commanding officer. Villa and Huerta had a contentious relationship…and then it got worse. Huerta sent Villa and his men to fight on open terrain so that they would suffer more casualties, and he even bombarded them with his own artillery. Villa realized that he couldn’t keep taking orders from Huerta, so he announced that he would leave and take his men with him. Huerta took that as treason and ordered Villa to be executed, no trial, no formalities. When it came time to face the firing squad, Villa lost it. He fell to his knees, holding onto an officer’s boot and begging for his life. When he regained his composure he stood up and was taken to the wall. He waited for the gunshots. At the last second, a message arrived from Madero, saving Villa’s life, but sending him to prison. In prison he met with radicals and socialists and anarchists. It’s possible he learned to read in prison as well. He appealed to the president, asking for release, but was unsuccessful. Pancho breaks out of jail. Metal bars can’t hold the Centaur of the North. I’m gonna quote Earl Shorris here. “On Christmas Day 1912, Pancho Villa, dressed in a severe black suit of the kind worn by lawyers, finished sawing through the bars of the window of his prison cell, climbed out into the yard, where he was met by a young attorney and, partially covering his face with a handkerchief, walked out of the prison, all the while chatting animatedly with his companion.” They got in a car and drove to the next state over, to the city of Toluca. From there they got on a train and headed to the coast, to a city called Manzanillo. From Manzanillo they boarded a ship to Mazatlan. Police all over the country were looking for him by now. He was nearly caught on the ship. He stayed in his cabin and had to bribe one of the ship’s officials. The bribe got him a small boat so he could leave the ship before health authorities boarded for an inspection. From there he made his way to El Paso, and safety.

Mon, 26 June 2017
Francisco Madero is described by all my sources as a spiritualist rather than a revolutionary or a military leader or political theorist or philosopher. He came from a wealthy family. As a boy he was often sick. He studied in the US, lived in France for a few years, and traveled through Europe. It was there that he adopted the ideas of Spiritualism (with a capital S). Spiritualism was based on communicating with spirits of the dead. It was big. By 1854 there were more than 3 million Spiritualists worldwide. Madero writes that he didn’t just read Spiritualist books, he devoured them. He finished business school in Paris and then studied for a year at Berkeley, improving his English and learning agriculture.
The constant illnesses of his childhood had made him deliberately work on becoming physically strong. He wrote a pamphlet on water rights that Porfirio Diaz praised him for. He did charity work as well, giving out homeopathic…concoctions as well as money to sick people. At his hacienda he fed about 60 kids, and he paid his staff well. He married in 1903, and they gave out scholarships and created schools, hospitals, and community kitchens. He felt his mission as a Spiritualist was to be a medium for the spirits of the dead. Specifically a writing medium. He wanted the spirits to speak through him in his writings. Soon he was claiming that his brother Raul, who had died at age four in a fire, was visiting him daily. Francisco gave up smoking, became a vegetarian, and destroyed his wine cellar. Raul wrote, by way of Francisco, “You can have the only happiness there is in this world solely through practicing charity in the broadest sense of the word.” He later wrote, “Aspire to do good for your fellow citizens . . . working for a lofty ideal that will raise the moral level of society, that will succeed in liberating it from oppression, slavery, and fanaticism.” As government repression in Mexico increased in the first decade of the 1900s, Madero came to believe that “charity in the broadest sense” meant politics. He was now seeing his future a little more clearly. In the previous episode, Francisco Madero had called for a revolution. He said it would happen on November 20, 1910. Photo ops on November 20 made it look like spontaneous uprisings were happening all over Mexico, and there were, but they were mostly small, isolated groups. In fact, it began so gradually that by January 1911 the government thought the danger had passed. The Revolution was on though, especially in Chihuahua, a northern state. The armies of Pascual Orozco and Pancho Villa were winning battles against the federal army. Revolutionary fever slowly spread thanks to a weak federal army and widespread social discontent. Small groups of men on horseback rode into villages and towns. They went to the town square and read aloud Madero’s Plan of San Luis, inviting the men of the town to join them. They took the local government’s cash reserves, guns, and horses. They freed the prisoners. Then they went to the next town and did it again. And again. The Federal army was weak. It relied on conscription – the draft – and didn’t have a single full battalion or regiment. The leaders didn’t know the terrain. Corruption was everywhere. And they were not prepared to fight small, nimble forces. By the time they arrived at a place that was being attacked, the attackers had already disappeared and were in a different area. As the months wore on, the army had become concentrated in the areas with the most fighting, which left other places basically free of soldiers. In the states of Nayarit, Colima, and Michoacan, the revolutionaries took over the government without firing a single shot. By May 1911 there was fighting in 26 states and the Federal District – Mexico City. Earlier, in March, Madero attacked Casas Grandes with about 100 men. He left the battle with a wounded arm and had lost several soldiers. Historian Earl Shorris says the attack was a fiasco that showed Madero to be a poor military commander. Nonetheless, he was the last man to retreat and he earned a reputation for courage.
Battle of Juarez
If I went into much detail on the battles of the Revolution, this series would basically never end. So I’m really glossing over a lot.
This is episode 2 and I’m already leaving tons of things out. But the battle of Juarez deserves some attention. Madero was encouraged by news of more and more uprisings throughout Mexico. He decided to take Ciudad Juarez, partly to control traffic to and from the United States. The Revolutionary army marched on Juarez on April 7, 1911. It was headed up by two columns of 500 riders each. Leading the columns were Pancho Villa and Pascual Orozco. Behind the columns was a force of 1,500 riders led by Francisco Madero. Until this point the army had mostly used guerrilla tactics, attacking swiftly and disappearing into their surroundings. This march was much more conventional. They surrounded the city on three sides. The Federal Army had about 700-1,000 soldiers, and the city itself was cut off from communication with the outside world. Madero hesitated. If he attacked, there was a chance that stray bullets would fly into the neighboring American city of El Paso, possibly killing civilians and forcing the US to intervene. Plus, the Diaz government was now on a peace offensive, and Madero’s own family was asking him to reach some kind of compromise with the President/Dictator, whichever term you prefer. Madero wanted to avoid bloodshed, and he thought he actually could reach an agreement with Diaz. He came from the upper class, too, and shared their fear of anarchy and possible US intervention. He accepted a ceasefire that would allow Diaz to stay in power. That specific part of the ceasefire wasn’t made public, but rumors were circulating in the US media that a deal had been struck. The leaders of the revolutionary army were angry at Madero for the ceasefire. Both sides were now basically in limbo. The ceasefire had been going on for days now, and Madero’s men were getting less enthusiastic and optimistic. Food was running out and they weren’t receiving the pay they had been promised. Madero met with one of his military leaders, Pascual Orozco. He asked whether he should accept a proposal that would leave Diaz in power, or whether the president’s resignation should be a precondition for peace. Orozco said, “Don’t ask me these things, since I understand nothing about them. Tell me that the enemy is coming from somewhere, and I shall see what I can do; but these things I know nothing about. You know what you should do.” When Madero met with government representatives, he said he could no longer accept the terms. The peace negotiations ended, and so did the ceasefire. But Madero still hesitated to attack. Pascual Orozco and Pancho Villa had had enough. They decided to attack without telling Madero. So they ordered their men to start shooting, and the Federal army returned fire. Madero sent a message to the Federal commander, Juan Navarro, asking him to order his men to stop shooting. He agreed, and his men stopped. But the revolutionary army did not. They advanced on the city. Madero sent an emissary with a white flag and orders to stop. But they ignored him. The federal army was now returning fire again. Orozco eventually had to face Madero. So he simply told him the fighting was now impossible to stop. Not only was that probably true, but by now there were thousands of American spectators right across the river. Both the revolutionaries and the Federal army had to be careful not to accidentally send stray bullets across the border. One of those American onlookers, Timothy Turner, was a reporter. He had crossed into Juarez to watch.
He wrote about the battle: “We sat up there on the hill and saw the river oaks swarming with insurrectos moving into Juarez. They moved in no formation whatsoever, just an irregular stream of them, silhouettes of men and rifles. Thus they began to move in and to move out along that road throughout the battle. They would fight a while, and come back to rest, sleep, and eat, returning refreshed to the front. The European-trained soldiers raved at this, tried to turn them back, to make everybody fight at one time. But that was not the way of these chaps from Chihuahua. They knew their business and they knew it well. That way of fighting, I think, more than any other thing, took Juarez. For by it, the insurrectos were always fresh with high spirits, while the little brown federals with no sleep and little food or water, with their officers behind them ready with their pistols to kill quitters, soon lost their morale.” Later Turner actually joined the…insurrectos…and reported on their tactics to avoid machine gun fire from the Federal army: “I heard somebody calling me, and in the doorway was an insurrecto officer I knew, an erstwhile schoolteacher from the state capital, and I ran to where he was and then to the house. He was with some men who carried axes and crowbars in their hands, with their rifles swung onto their backs, and I saw what they were up to. They were cutting their way from one house to the other, chopping through the adobe walls dividing the structures. Thus one could walk a whole block without ever going outside a house. This made a fairly safe way of moving through the center of the town, except, of course, when one had to run across three intersections to the next block of buildings. Nobody was in any hurry.” Navarro surrendered after two days of fighting. His men were concentrated in a few buildings and were cut off from water. In previous battles Navarro had ordered his men to execute captured enemy soldiers with bayonets. The revolutionaries wanted to avenge the dead, so they asked Madero to execute Navarro. The request was denied. Now they demanded it, but Madero still refused. Orozco took out his pistol and pointed it at Madero’s chest. An officer then pointed his gun at Orozco. And now it’s a standoff. Madero walked right between the two men and out to the street. He got up on a railroad car and gave an impromptu speech that moved Pancho Villa to tears. Villa begged Madero for forgiveness. In other accounts he merely shook Madero’s hand. It’s not clear what Madero said, just that nobody killed him and he won people to his side. Then after winning the day with his speech, Madero angered his generals again by taking Navarro in his own car to the US border, and to safety. Before the battle, the revolutionary soldiers wanted Navarro dead. But it wasn’t just because of the brutal way he executed their comrades. When the revolutionaries had captured federal soldiers, they had made a point of sparing their lives. Plus, Madero’s Plan of San Luis called for federal generals who violated rules of war to be executed. So Madero was going against the desires of his men as well as his stated intentions in the document where he called for revolution. Before the meeting, Orozco told Pancho Villa to disarm Madero’s guard if Madero didn’t acquiesce to Orozco’s demands. Villa never did that, though. Instead he ran outside to get his 50 men. Later he said that he found out why Orozco wanted him to disarm the guard.
“Orozco, expecting a sum of money from Don Porfirio’s agent, promised to assassinate Senor Madero and wished to involve me. At the last moment, Orozco lacked the courage to go through with it, or to go all the way, and knowing my violent character, he planned for me to disarm the guard, so that I would appear to be the principal instigator of the shooting and the president would challenge me face to face and I would draw my gun and kill him, and everything would be done with Pascual Orozco uninvolved, and me, Pancho Villa, apparently the true and only assassin.” The allegations are unproven, but what we do know is that Orozco met with representatives of the Diaz government at least four times between the sacking of the city and the confrontation with Madero. And Madero himself wrote a letter speaking of outside influences on Orozco. Historian Friedrich Katz says money probably played a smaller part in Orozco’s alleged assassination scheme than power, since he was the most popular revolutionary figure at the time, behind Madero, and would have had a good shot at the presidency with Madero out of the way. If Orozco himself had murdered Madero, things might have turned out badly for him. But if a man like Pancho Villa killed Madero, Orozco’s hands would be clean and he would have a chance to avenge Madero’s murder. It’s possible that this whole complicated plot was the reason behind Orozco’s insistence that Madero court-martial and execute the enemy general. Then we come to Madero’s reasons for not court-martialing Navarro. There are a lot of theories on that as well, but one possible explanation is that he wanted to have the Federal army’s loyalty when he took office. As the revolutionaries racked up win after win throughout Mexico, President Diaz knew it was only a matter of weeks or months before his army was completely defeated. His representatives met with Madero, and they signed the Treaty of Juarez on May 21, 1911. Many revolutionary leaders strongly opposed the Treaty. They felt it was unnecessary. The revolutionaries already controlled most of the contested regions, and they understood what Diaz understood, that the federal army could not hold out for much longer. The revolutionaries could have a total victory on their terms without any need for negotiation. They felt the Treaty only weakened them while giving more power to people loyal to Diaz. Pancho Villa opposed the Treaty as well. He wrote about a confrontation that happened just before the signing: “I attended because he asked me to.” [He meaning Madero.] “But I already felt a deadly hatred for all those perfumed dandies. They had started in with speeches, and that bunch of politicians talked endlessly. Then Madero said to me, ‘And you, Pancho, what do you think? The war is over. Aren’t you happy? Give us a few words.’ I did not want to say anything, but Gustavo Madero who was sitting at my side nudged me, saying, ‘Go ahead, Chief. Say something.’ So I stood up and said to Francisco, ‘You, sir, have destroyed the Revolution.’ He demanded to know why, so I answered, ‘It’s simple: This bunch of dandies have made a fool of you and this will eventually cost us our necks, yours included.’ Madero kept on questioning me. ‘Fine, Pancho. But tell me, what do you think should be done?’ I answered, ‘Allow me to hang this roomful of politicians and then let the revolution continue.’” Villa himself is the only person who reported the exchange, so it’s not likely that it took place, but you get a glimpse into his mind there.
Not much later he resigned and went back to private life.
Porfirio Diaz Surrenders
After the Treaty of Juarez, Diaz surrendered. All sides would agree to stop fighting and Diaz would step down. He resigned on May 25. The interim president, Francisco Leon de la Barra, was in charge of organizing new elections. As part of the negotiations, 14 unpopular governors were replaced. Madero continued angering supporters by distancing himself from them. He broke away from the National Antireelectionist Party and his vice presidential candidate, forming a new party and picking a new VP. Elections were held on the 1st and 15th of October. Madero won with 99% of the vote. His Vice President, Jose Maria Pino Suarez, got 53%. The antireelectionists had achieved their main goal, but they weren’t the only group participating in the Revolution. The middle classes had goals, the unionists had goals, the anarcho-syndicalists had goals, the land reformers had goals…They were all attached to a vague idea of democracy, but they couldn’t agree on specifics. In any case the country finally had free elections. They didn’t remain too free for too long, but for now the system appeared to be working properly. You might think this is where the series ends. We’ve deposed a dictator, we’ve forced out 14 governors, and we have the new President we’ve been fighting for. We’ve blown up the Death Star. Princess Leia gives everybody a medal. The Revolution is over, right? Well… Madero’s gonna try to disarm the revolutionary army and depend entirely on the Federal army for his protection…the army he’s just been fighting against. How do you think that’s gonna go?

Mon, 26 June 2017
This episode is an intro, explaining the factors that led to the Revolution, and then ending just before the Revolution officially began. The best way to explain the structure of this series is to compare it to TV shows that have seasons and episodes. Like TV shows, the individual episodes in a season will come out regularly, but the seasons will be spaced out a little more. In between the seasons, we’ll have shorter one-off episodes, some of which will be related to the Revolution but not part of the greater narrative, like an episode about an individual person in the Revolution, and there will be episodes completely unconnected to the Revolution, such as the execution of the last Aztec emperor, Cuauhtemoc, or the Cathedral in Mexico City, or news and culture. I also want to feature more pieces by other people, like I did with the mezcal episodes. Okay, that’s it. Let’s get to work. The Mexican Revolution wasn’t one thing. It was a series of civil wars, betrayals, assassinations, and reforms that encompass 5-7 years in some senses, and about 20-30 years in a broader sense. Then there’s the romantic (and true) idea that the reverberations are still being felt today. There weren’t two opposing forces fighting for clear objectives. It was more like Game of Thrones: multiple factions of idealists, opportunists, and freedom fighters making temporary alliances and then turning on each other. And almost every major figure gets assassinated. Spoiler alert. It’s disorienting and convoluted, with several people taking the role of president, claiming the last guy was illegitimate for X reason, and then doing X. The wars weren’t fought all over the country. They were more localized. The states of Morelos and Chihuahua were the most violent. Mexico City and the center of the country saw frequent fighting. But most of the country didn’t see much violence.
The most obvious way to explain the conditions that led to the Revolution is to talk briefly about a man called Porfirio Diaz. He led a coup against a President who he said had served too many terms. He thought a leader should get one term and then step down. Diaz declared himself interim president. Elections were held. Diaz won. Then he served just one term in 1876 and stepped down when his term was up in 1880. Then he served another just-one-term in 1884. Then another in 1888, 1892, 1896, 1900, and 1904. Some sources call him a dictator, but it’s important to remember that in a few of those elections he did actually have to face an opponent. That opponent was an astrologer who lived in an expensive private mental institution. The bills were paid for by… Porfirio Diaz. So the man who said presidents should serve one term ended up ruling for about 30 years. What happened in those 30 years is described as both dictatorship and development. He stole vast amounts of land, violated property rights, granted monopolies to his supporters, and he made it so that the only way to remove him from power was the same way he had taken it: by force. During his rule electricity, railroads, trolleys, and the telephone all arrived in Mexico. Gross National Product greatly increased. Life was very good for the country’s elite, who were allowed and encouraged to take land and export natural resources to industrializing nations. Porfirio was stabilizing some aspects of the country, but the trade he made was modernization and increased wealth for the upper classes in exchange for returning Mexico to colonial status: The country almost literally belonged to foreign investors. The people who worked on haciendas and ranches lived basically as slaves, even if that word wasn’t used. Whipping was a common punishment. Workers were forced to buy from the company store on the ranch or farm. Prices were much higher in those stores than in the nearby towns. Workers who didn’t purchase from those stores were whipped or docked pay. High officials in Diaz’s government were mostly of European descent. The ideologies in vogue among them were French Positivism and social Darwinism. They called themselves cientificos – scientists. Legal, paper-based land ownership was a new requirement that the cientificos had imposed on the country, and on people who had inhabited the same land for over 800 years. The people who lived there had never needed a piece of paper saying they lived there, and even if they had had that custom, the Spanish conquistadors in the 1500s and 1600s had been more than thorough in their destruction of indigenous books and documents. In 1883 a law passed in Congress that allowed foreign companies to come in and take land they considered undeveloped. Now communities that had been living on the same land for generations were suddenly told by outsiders that their farms were actually haciendas belonging to some guy who had never even visited the place, and now the communities were basically forced into something so similar to slavery that we may as well just call it slavery. They now had to work the lands while paying rent on their own homes and fields. The harvest of course went to the new owner. A special division of police was given authority to deal with peasants trying to defend their land (or peasants who couldn’t pay the rent, or who protested). Meanwhile Porfirio Diaz’s regime tried to paint a positive picture of Mexico to the outside world. They claimed Mexico was now safe, tamed, open for business.
The Cientificos sold land to telegraph companies and railroad companies, which allowed the transport of natural resources to port cities. Mexico City was having a sewage problem at this time. The city is surrounded by mountains and there’s no natural drainage, and population growth was causing problems. When it rained, the sewers overflowed and streets flooded. In 1886 they began what Richard Grabman calls one of the greatest engineering projects of the 19th century, or any century. It took 14 years, but in the end they had built a 36-mile canal and six-mile tunnel that carried the sewage to the other side of the mountains and dumped it into the Lerma River. Mexico’s new industrialization came through use of death camps in the Yucatan peninsula and the valley of Oaxaca. Porfirio Diaz, who was from Oaxaca, has often been called Mexico’s first modern leader, which leads me to a point of speculation. If you’re listening to this, it’s very likely that you know of Dan Carlin’s podcast called Hardcore History. In the first episode of his series on the Mongols, he compares Genghis Khan to Hitler. He starts the episode by saying he has an idea for a book. It’s a book he wouldn’t touch with a 10 foot pole, but it’s a book that he’s certain will be written eventually, maybe in a couple hundred years, when the European Holocaust is no longer so close at hand. What will people say about Hitler in a few hundred years? Dan thinks it might be similar to what people now say about Genghis Khan. They’ll say Hitler was a force for modernization, development, industrialization, etc. I think the historians who talk about Porfirio Diaz as Mexico’s first modern leader are putting on the Genghis Khan Goggles, which are similar to beer goggles, but for historical events. Today some people wear Genghis Khan Goggles when they think about Hernan Cortes and Porfirio Diaz. Someday people might put on Genghis Khan Goggles when they think of Hitler. The comparison to Hitler works and is not just a meaningless invocation of Godwin’s Law because Diaz in some cases pioneered the techniques that would be used by the Nazis and by Stalin. Specifically, concentration camps. In Mexico, these concentration camps are referred to today by the lovely word hacienda. If you travel in Mexico you’ll probably see tempting offers to stay at a bed and breakfast that was an old hacienda, or eat at a restaurant that was a hacienda. The meal after my wedding was at a restaurant on a hacienda. The neighborhood I live in used to be a hacienda as well. The haciendas in the Yucatan and Oaxaca were ways to industrialize a place quickly while eliminating unwanted ethnic groups. One ethnic group that put up resistance was called the Yaquis. About 30,000 of them were deported to the Yucatan peninsula, which was thousands of miles away from their land and was a much different type of climate. The Yucatan is a humid jungle. The Yaquis came from the north of Mexico, the southern US, an arid desert climate. If you’ve never been to the Yucatan jungles, the heat and humidity there is unbearable in December, on vacation. Like the Jews in the Holocaust, the Yaquis were transported on overcrowded cattle cars. Many of them died along the way. Most of the people who arrived didn’t live long. They slept in overcrowded barracks, were underfed, and were literally worked to death. All of this was justified to Europe and the United States as the way of civilizing an inferior people. 
A reporter called John Kenneth Turner visited these death camps and his publications became very popular in the US. People were outraged, and American citizens began smuggling weapons into Mexico. Diaz became very disliked by middle class voters in the US. The US government and American business interests thought change was coming to Mexico, and they wanted to control the outcome. Historian Earl Shorris says there are several probable causes of the Mexican Revolution. The causes, or maybe precipitating factors, worked together. No single cause could have sparked the Revolution on its own, but several of them working all at once could. First, Porfirio Diaz was old. In his 80s. The average age of his Cabinet was 68. And Mexico was a young country. In 1910 a third of the population was under 10. More than half were under 20, and fewer than 10% were over 50. Dictator or no, he would soon die, as would many of his Cabinet members. Change was inevitable. Second, economics. The final two years of his rule saw contractions in the economy. These contractions hit the poor harder than anyone else, and made their lives even more difficult. Third, haciendas. The governance system in most of the country was basically feudalism, at least outside the major cities. There were no limits on how much land someone could own. Anything considered “unused” land could be settled. Any land owned by people who lacked the recently-imposed legal documentation could be taken. An enormous amount of land was stolen. Fourth, the decline of Positivism. Young Mexicans rebelled against the philosophy of the older generation. The Positivism of Pofrifio’s cientificos was losing its appeal. A new philosophy, sparked by Henri Bergson’s book The Creative Mind, connected with them deeply. Shorris says the impact of the new philosophy on the Revolutionaries is undeniable. Fifth, Diaz had been quoted as saying that this would truly be his final term and that he would welcome an opposition party in the next elections. He said Mexico should be prepared to change their government at every election and not have to face armed revolution. That interview was read by some of the most influential Revolutionaries, and nobody would forget what he had said. Sixth, freedom of the press. Diaz allowed the popular socialist polemicists, the Flores Magon brothers, to get out of jail and go to the U.S. and continue publishing articles against him. Many of their ideas made it into the 1917 Constitution, which has been called the first socialist constitution, coming even before the Soviet one. Seventh, the federal army proved that they were not invincible. An early battle before the Revolution resulted in the massacre of an entire village called Tomochi. But the people in that town of about 200 killed several soldiers in the fighting. One of the federal soldiers who survived wrote a book about it, and he said “every rebel was worth 10 federal soldiers.” The news of the battle spread quickly. The eighth cause or factor in the Revolution was strikes. A third of the land was owned by foreigners. Foreign investors owned about 90% of the value of industries in Mexico. The French owned the textile industry. The Americans owned mining. Various other countries owned the railroads. The British and Americans owned the oil. In all those industries the owners put their own countrymen in the best positions. In return for all this, foreign governments rewarded Diaz. 
He got rewards from Switzerland, Norway, Portugal, Spain, Venezuela, France, Japan, Italy, Belgium, Prussia, Hungary, Austria, Persia, Great Britain, the Netherlands, China, and Russia. There were at least 250 strikes or demonstrations during his dictatorship. The strikers mostly demanded better working conditions. In one of those strikes dozens were killed. Local police were helpless to stop the strikers, and the Mexican military was nowhere nearby. So the governor of Sonora actually requested US military, since the strike was at an American-owned copper mine. Eventually the Mexican military arrived and demanded the American soldiers leave. The presence of foreign soldiers protecting American interests against Mexican workers was….unpopular. That was 1906. Things did not improve after those strikes. The pressure kept building. In 1908 a man called Francisco Madero published a book called The Presidential Succession of 1910. He met with Diaz and suggested that he himself be nominated Vice President, rather than the man Diaz was considering. Diaz refused him. Madero later recalled that meeting, saying he was not impressed by the dictator. He must have felt what Christopher Hitchens famously described: The moment you begin interacting with statesmen and realize, to your horror, that they are even less intelligent than you are. After Diaz refused Madero, Madero continued his own presidential campaign, calling his party the Antireelectionists. He began touring Mexico and founding Antireelectionist clubs all over the north. And he was getting more popular all the time. He was gathering the support of cowboys, railway workers, miners, small town businessmen, cattle rustlers, and indigenous leaders. For the first time, Diaz faced serious competition. Another contender for the presidency was Bernardo Reyes. Reyes was part of the Diaz government, but he was setting up his own opposition party. Diaz sent him to Europe, ostensibly to study military recruitment systems, but the effect was exile for Reyes. Now without their candidate, his followers joined Madero’s Antireelectionists. In April 1910 the Antireelectionists held a convention. Madero was voted in as their candidate. Diaz had pro-Madero newspapers closed, his people attacked Antireelectionist rallies, and he jailed their leaders in several cities. Some were able to flee to safety in the US, but Madero was arrested and imprisoned in San Luis Potosi in June 1910. In the June 26th elections, Diaz’s people blocked suspected Antireelectionists from voting, so they called Diaz out for committing voter fraud and petitioned Congress to annul the vote. Congress basically ignored them. While in prison Madero was visited by prominent members of his anti-reelectionist campaign. He said that now was the time to take up arms. They made plans to buy weapons and recruit men willing to die for the cause. The call to arms would go out in October, after the country was done celebrating the 100th anniversary of the Mexican War of Independence. Madero came from a wealthy family, and his father bailed him out and used his influence with the governor to allow Madero to get around the city during the day. On October 7, 1910 he escaped his guards on horseback and fled to the US, helped by sympathetic railway workers. He went to San Antonio, TX, where his family owned a house. The railroads were perhaps Diaz’s biggest accomplishment, and they were the key to controlling Mexico. If you control the railroads you can send soldiers quickly to any part of the country. 
And the key to the railroads were the workers. Francisco Madero had the support of those workers. With their help he smuggled guns and propaganda into Mexico. From San Antonio, Texas, he wrote that the revolution would begin on November 20, 1910.
Rendering (computer graphics) Rendering is the process of generating an image from a 2D or 3D model (or models in what collectively could be called a scene file), by means of computer programs. Also, the results of such a model can be called a rendering. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene. Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image from a 3D representation stored in a scene file are outlined as the graphics pipeline along a rendering device, such as a GPU. A GPU is a purpose-built device able to assist a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation doesn't account for all lighting phenomena, but is a general lighting model for computer-generated imagery. 'Rendering' is also used to describe the process of calculating effects in a video editing program to produce final video output. Rendering is one of the major sub-topics of 3D computer graphics, and in practice is always connected to the others. In the graphics pipeline, it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject. Rendering has uses in architecture, video games, simulators, movie or TV visual effects, and design visualization, each employing a different balance of features and techniques. As a product, a wide variety of renderers are available. Some are integrated into larger modeling and animation packages, some are stand-alone, some are free open-source projects. On the inside, a renderer is a carefully engineered program, based on a selective mixture of disciplines related to: light physics, visual perception, mathematics and software development. In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games which rely on the use of graphics cards with 3D hardware accelerators. - 1 Usage - 2 Features - 3 Techniques - 4 Radiosity - 5 Sampling and filtering - 6 Optimization - 7 Academic core - 8 Chronology of important published ideas - 9 See also - 10 References - 11 Further reading - 12 External links When the pre-image (a wireframe sketch usually) is complete, rendering is used, which adds in bitmap textures or procedural textures, lights, bump mapping and relative position to other objects. The result is a completed image the consumer or intended viewer sees. For movie animations, several images (frames) must be rendered, and stitched together in a program capable of making an animation of this sort. Most 3D image editing programs can do this. A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. 
Some relate directly to particular algorithms and techniques, while others are produced together. - shading – how the color and brightness of a surface varies with lighting - texture-mapping – a method of applying detail to surfaces - bump-mapping – a method of simulating small-scale bumpiness on surfaces - fogging/participating medium – how light dims when passing through non-clear atmosphere or air - shadows – the effect of obstructing light - soft shadows – varying darkness caused by partially obscured light sources - reflection – mirror-like or highly glossy reflection - transparency (optics), transparency (graphic) or opacity – sharp transmission of light through solid objects - translucency – highly scattered transmission of light through solid objects - refraction – bending of light associated with transparency - diffraction – bending, spreading and interference of light passing by an object or aperture that disrupts the ray - indirect illumination – surfaces illuminated by light reflected off other surfaces, rather than directly from a light source (also known as global illumination) - caustics (a form of indirect illumination) – reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object - depth of field – objects appear blurry or out of focus when too far in front of or behind the object in focus - motion blur – objects appear blurry due to high-speed motion, or the motion of the camera - non-photorealistic rendering – rendering of scenes in an artistic style, intended to look like a painting or drawing Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image. Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted. Therefore, a few loose families of more-efficient light transport modelling techniques have emerged: - rasterization, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects; - ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts; - ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower. The fourth type of light transport technique, radiosity is not usually implemented as a rendering technique, but instead calculates the passage of light as it leaves the light source and illuminates surfaces. These surfaces are usually rendered to the display using one of the other three techniques. Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost. Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels. 
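To make the image-order versus object-order distinction concrete, here is a small illustrative Python sketch (our own, not from the article). It renders the same toy 2D "scene" of axis-aligned rectangles two ways: once by looping over pixels and asking every object whether it covers them (image order, the structure a ray caster uses), and once by looping over objects and touching only the pixels each one covers (object order, the structure a rasterizer uses). The scene, sizes and helper names are made up for the example.

```python
# Minimal sketch (not from the article) contrasting image-order and
# object-order rendering loops on a toy 2D "scene" of axis-aligned
# rectangles. All names and the scene itself are illustrative only.

WIDTH, HEIGHT = 16, 8
# Each "object" is (x0, y0, x1, y1, colour_character).
scene = [(2, 1, 9, 5, '#'), (6, 3, 14, 7, 'o')]

def image_order():
    """Iterate over pixels, asking every object whether it covers them."""
    image = [['.'] * WIDTH for _ in range(HEIGHT)]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            for (x0, y0, x1, y1, colour) in scene:   # like casting a ray per pixel
                if x0 <= x < x1 and y0 <= y < y1:
                    image[y][x] = colour             # last object wins (no depth test here)
    return image

def object_order():
    """Iterate over objects, touching only the pixels each one covers."""
    image = [['.'] * WIDTH for _ in range(HEIGHT)]
    for (x0, y0, x1, y1, colour) in scene:           # like rasterizing a primitive
        for y in range(y0, min(y1, HEIGHT)):
            for x in range(x0, min(x1, WIDTH)):
                image[y][x] = colour
    return image

if __name__ == "__main__":
    for row_a, row_b in zip(image_order(), object_order()):
        print(''.join(row_a), '   ', ''.join(row_b))
```

Both loops produce the same picture, but the object-order version never visits pixels that no primitive covers, which is the efficiency point made above.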
Scanline rendering and rasterisation A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In rendering of 3D models, triangles and polygons in space might be primitives. If a pixel-by-pixel (image order) approach to rendering is impractical or too slow for some task, then a primitive-by-primitive (object order) approach to rendering may prove useful. Here, one loops through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and is the rendering method used by all current graphics cards. Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may be empty of primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must pass through them. Second, rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the approach of choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce higher-quality images and is more versatile because it does not depend on as many assumptions about the image as rasterization. The older form of rasterization is characterized by rendering an entire face (primitive) as a single color. Alternatively, rasterization can be done in a more complicated manner by first rendering the vertices of a face and then rendering the pixels of that face as a blending of the vertex colors. This version of rasterization has overtaken the old method as it allows the graphics to flow without complicated textures (a rasterized image when used face by face tends to have a very block-like effect if not covered in complex textures; the faces are not smooth because there is no gradual color change from one primitive to the next). This newer method of rasterization utilizes the graphics card's more taxing shading functions and still achieves better performance because the simpler textures stored in memory use less space. Sometimes designers will use one rasterization method on some faces and the other method on others based on the angle at which that face meets other joined faces, thus increasing speed and not hurting the overall effect. |This section does not cite any references or sources. (May 2010)| In ray casting the geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes the value of that pixel. The color may be determined from a texture-map. A more sophisticated method is to modify the colour value by an illumination factor, but without calculating the relationship to a simulated light source. To reduce artifacts, a number of rays in slightly different directions may be averaged. Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the object to the point of view is made. 
Another calculation is made of the angle of incidence of light rays from the light source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated. Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two. Raycasting is primarily used for realtime simulations, such as those used in 3D computer games and cartoon animations, where detail is not important, or where it is more efficient to manually fake the details in order to obtain better performance in the computational stage. This is usually the case when a large number of frames need to be animated. The resulting surfaces have a characteristic 'flat' appearance when no additional tricks are used, as if objects in the scene were all painted with matte finish. Ray tracing aims to simulate the natural flow of light, interpreted as particles. Often, ray tracing methods are utilized to approximate the solution to the rendering equation by applying Monte Carlo methods to it. Some of the most used methods are path tracing, bidirectional path tracing, or Metropolis light transport, but also semi realistic methods are in use, like Whitted Style Ray Tracing, or hybrids. While most implementations let light propagate on straight lines, applications exist to simulate relativistic spacetime effects. In a final, production quality rendering of a ray traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather, through a number of sequential 'bounces', using the known laws of optics such as "angle of incidence equals angle of reflection" and more advanced laws that deal with refraction and surface roughness. Once the ray either encounters a light source, or more probably once a set limiting number of bounces has been evaluated, then the surface illumination at that final point is evaluated using techniques described above, and the changes along the way through the various bounces evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel. In distribution ray tracing, at each point of intersection, multiple rays may be spawned. In path tracing, however, only a single ray or none is fired at each intersection, utilizing the statistical nature of Monte Carlo experiments. As a brute-force method, ray tracing has been too slow to consider for real-time, and until recently too slow even to consider for short films of any degree of quality, although it has been used for special effects sequences, and in advertising, where a short portion of high quality (perhaps even photorealistic) footage is required. However, efforts at optimizing to reduce the number of calculations needed in portions of a work where detail is not high or does not depend on ray tracing features have led to a realistic possibility of wider use of ray tracing. There is now some hardware accelerated ray tracing equipment, at least in prototype phase, and some game demos which show use of real-time software or hardware ray tracing. Radiosity is a method which attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms. 
The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a large spectrum of directions and illuminates the area around it. The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply illuminating an entire scene very slightly with a factor known as ambiance. However, when advanced radiosity estimation is coupled with a high quality ray tracing algorithim, images may exhibit convincing realism, particularly for indoor scenes. In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or ray-tracing model. Due to the iterative/recursive nature of the technique, complex objects are particularly slow to emulate. Prior to the standardization of rapid radiosity calculation, some graphic artists used a technique referred to loosely as false radiosity by darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via self-illumination or diffuse mapping for scanline rendering. Even now, advanced radiosity calculations may be reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without examining the contribution that complex objects make to the radiosity—or complex objects may be replaced in the radiosity calculation with simpler objects of similar size and texture. Radiosity calculations are viewpoint independent which increases the computations involved, but makes them useful for all viewpoints. If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting, without seriously impacting the overall rendering time-per-frame. Because of this, radiosity is a prime component of leading real-time rendering methods, and has been used from beginning-to-end to create a large number of well-known recent feature-length animated 3D-cartoon films. Sampling and filtering One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem. Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite number of pixels. As a consequence of the Nyquist–Shannon sampling theorem (or Kotelnikov theorem), any spatial waveform that can be displayed must consist of at least two pixels, which is proportional to image resolution. In simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel. If a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good-looking images) must use some kind of low-pass filter on the image function to remove high frequencies, a process called antialiasing. 
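As a rough illustration of the antialiasing idea, the following Python sketch (ours, not from the article) samples a hard-edged continuous image function once per pixel and then again with several jittered sub-pixel samples averaged together, which acts as a simple box filter. The function, resolution and sample counts are arbitrary choices for the demonstration.

```python
# Illustrative sketch (not from the article): box-filter supersampling of a
# hard-edged image function. `coverage` stands in for "the scene"; sampling it
# once per pixel aliases the edge, while averaging several jittered sub-samples
# smooths it. All names and numbers are made up for the example.
import random

def coverage(x, y):
    """A continuous image function: 1.0 inside a diagonal half-plane, else 0.0."""
    return 1.0 if y > 0.7 * x else 0.0

def render(width, height, samples_per_pixel):
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            total = 0.0
            for _ in range(samples_per_pixel):
                # Jittered sample position inside the pixel footprint.
                sx = i + random.random()
                sy = j + random.random()
                total += coverage(sx, sy)
            row.append(total / samples_per_pixel)   # box filter = plain average
        image.append(row)
    return image

if __name__ == "__main__":
    aliased = render(12, 6, samples_per_pixel=1)    # hard 0/1 jaggies
    filtered = render(12, 6, samples_per_pixel=16)  # grey values along the edge
    for row in filtered:
        print(' '.join(f"{v:.2f}" for v in row))
```

With one sample per pixel every value is 0 or 1 and the edge shows stairstepping; with many averaged samples the edge pixels take intermediate values, which is exactly the low-pass filtering described above.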
Optimizations used by an artist when a scene is being developed
Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the portion of the work being developed at a given time, so in the initial stages of modeling, wireframe and ray casting may be used, even where the target output is ray tracing with radiosity. It is also common to render only parts of the scene at high detail, and to remove objects that are not important to what is currently being developed.
Common optimizations for real-time rendering
For real-time rendering, it is appropriate to simplify one or more common approximations and tune the renderer to the exact parameters of the scenery in question, to get the most 'bang for the buck'.
The implementation of a realistic renderer always has some basic element of physical simulation or emulation — some computation which resembles or abstracts a real physical process. The term "physically based" indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community. The basic concepts are moderately straightforward, but intractable to calculate; no single elegant algorithm or approach has emerged for more general-purpose renderers. In order to meet demands of robustness, accuracy and practicality, an implementation will be a complex combination of different techniques. Rendering research is concerned with both the adaptation of scientific models and their efficient application.
The rendering equation
This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation:
$$L_o(x, \omega) = L_e(x, \omega) + \int_{\Omega} f_r(x, \omega', \omega)\, L_i(x, \omega')\, (\omega' \cdot n)\, \mathrm{d}\omega'$$
Meaning: at a particular position x and direction ω, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light is the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and the cosine of the incoming angle. By connecting outward light to inward light via an interaction point, this equation stands for the whole 'light transport' — all the movement of light — in a scene.
The bidirectional reflectance distribution function
The bidirectional reflectance distribution function (BRDF) expresses a simple model of light interaction with a surface as follows:
$$f_r(x, \omega', \omega) = \frac{\mathrm{d}L_r(x, \omega)}{L_i(x, \omega')\,(\omega' \cdot n)\,\mathrm{d}\omega'}$$
Light interaction is often approximated by the even simpler models of diffuse reflection and specular reflection, although both can also be BRDFs.
Rendering is practically exclusively concerned with the particle aspect of light physics — known as geometric optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave-aspect phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both types of effect, if needed, are made by appearance-oriented adjustment of the reflection model.
Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is mainly because image displays and human perception have restricted ranges.
A renderer can simulate an almost infinite range of light brightness and color, but current displays — movie screen, computer monitor, etc. — cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large-range images to create realism. This can help solve the problem of fitting images into displays, and, furthermore, suggest what short-cuts could be used in the rendering simulation, since certain subtleties won't be noticeable. This related subject is tone mapping result. Rendering for movies often takes place on a network of tightly connected computers known as a render farm. The current[when?] state of the art in 3-D image description for movie creation is the mental ray scene description language designed at mental images and RenderMan Shading Language designed at Pixar. (compare with simpler 3D fileformats such as VRML or APIs such as OpenGL and DirectX tailored for 3D hardware accelerators). Other renderers (including proprietary ones) can and are sometimes used, but most other renderers tend to miss one or more of the often needed features like good texture filtering, texture caching, programmable shaders, highend geometry types like hair, subdivision or nurbs surfaces with tesselation on demand, geometry caching, raytracing with geometry caching, high quality shadow mapping, speed or patent-free implementations. Other highly sought features these days may include IPR and hardware rendering/shading. Chronology of important published ideas - 1968 Ray casting - 1970 Scanline rendering - 1971 Gouraud shading - 1973 Phong shading - 1973 Phong reflection - 1973 Diffuse reflection - 1973 Specular highlight - 1973 Specular reflection - 1974 Sprites - 1974 Scrolling - 1974 Texture mapping - 1974 Z-buffering - 1976 Environment mapping - 1977 Side-scrolling - 1977 Shadow volumes - 1978 Shadow buffer - 1978 Bump mapping - 1979 Tile map - 1980 BSP trees - 1980 Ray tracing - 1981 Parallax scrolling - 1981 Sprite zooming - 1981 Cook shader - 1983 MIP maps - 1984 Octree ray tracing - 1984 Alpha compositing - 1984 Distributed ray tracing - 1984 Radiosity - 1985 Row/column scrolling - 1985 Hemicube radiosity - 1986 Light source tracing - 1986 Rendering equation - 1987 Reyes rendering - 1988 Depth cue - 1988 Distance fog - 1988 Tiled rendering - 1991 Xiaolin Wu line anti-aliasing - 1991 Hierarchical radiosity - 1993 Texture filtering - 1993 Perspective correction - 1993 Transform, clipping, and lighting - 1993 Directional lighting - 1993 Trilinear interpolation - 1993 Z-culling - 1993 Oren–Nayar reflectance - 1993 Tone mapping - 1993 Subsurface scattering - 1994 Heightmap - 1995 Hidden surface determination - 1995 Photon mapping - 1996 Multisample anti-aliasing - 1997 Metropolis light transport - 1997 Instant Radiosity - 1998 Hidden surface removal - 2002 Precomputed Radiance Transfer - 2D computer graphics - 3D computer graphics - 3D rendering - Architectural rendering - Global illumination - Graphics pipeline - High dynamic range rendering - Image-based modeling and rendering - Non-photorealistic rendering - Painter's algorithm - Raster image processor - Ray tracing - Real-time computer graphics - Scanline rendering/Scanline algorithm - Software rendering - Sprite (computer graphics) - Unbiased rendering - Vector graphics - Virtual model - Virtual studio - Volume rendering - Z-buffer algorithms - "Relativistic Ray-Tracing: Simulating the Visual Appearance of Rapidly Moving Objects". CiteSeerX: 10 .1 .1 .56 .830. 
- A brief introduction to RenderMan - Appel, A. (1968). "Some techniques for shading machine renderings of solids". Proceedings of the Spring Joint Computer Conference 32. pp. 37–49. - Bouknight, W. J. (1970). "A procedure for generation of three-dimensional half-tone computer graphics presentations". Communications of the ACM 13 (9): 527–536. doi:10.1145/362736.362739. - Gouraud, H. (1971). "Continuous shading of curved surfaces". IEEE Transactions on Computers 20 (6): 623–629. - University of Utah School of Computing, http://www.cs.utah.edu/school/history/#phong-ref - Phong, B-T (1975). "Illumination for computer generated pictures". Communications of the ACM 18 (6): 311–316. doi:10.1145/360825.360839. - Bui Tuong Phong, Illumination for computer generated pictures, Communications of ACM 18 (1975), no. 6, 311–317. - Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces (PhD thesis). University of Utah. - Blinn, J.F.; Newell, M.E. (1976). "Texture and reflection in computer generated images". Communications of the ACM 19: 542–546. doi:10.1145/360349.360353. CiteSeerX: 10 .1 .1 .87 .8903. - Crow, F.C. (1977). "Shadow algorithms for computer graphics". Computer Graphics (Proceedings of SIGGRAPH 1977) 11 (2). pp. 242–248. - Williams, L. (1978). "Casting curved shadows on curved surfaces". Computer Graphics (Proceedings of SIGGRAPH 1978) 12 (3). pp. 270–274. CiteSeerX: 10 .1 .1 .134 .8225. - Blinn, J.F. (1978). Simulation of wrinkled surfaces. Computer Graphics (Proceedings of SIGGRAPH 1978) 12 (3). pp. 286–292. - Fuchs, H.; Kedem, Z.M.; Naylor, B.F. (1980). On visible surface generation by a priori tree structures. Computer Graphics (Proceedings of SIGGRAPH 1980) 14 (3). pp. 124–133. CiteSeerX: 10 .1 .1 .112 .4406. - Whitted, T. (1980). "An improved illumination model for shaded display". Communications of the ACM 23 (6): 343–349. doi:10.1145/358876.358882. CiteSeerX: 10 .1 .1 .114 .7629. - Cook, R.L.; Torrance, K.E. (1981). A reflectance model for computer graphics. Computer Graphics (Proceedings of SIGGRAPH 1981) 15 (3). pp. 307–316. CiteSeerX: 10 .1 .1 .88 .7796. - Williams, L. (1983). Pyramidal parametrics. Computer Graphics (Proceedings of SIGGRAPH 1983) 17 (3). pp. 1–11. CiteSeerX: 10 .1 .1 .163 .6298. - Glassner, A.S. (1984). "Space subdivision for fast ray tracing". IEEE Computer Graphics & Applications 4 (10): 15–22. doi:10.1109/mcg.1984.6429331. - Porter, T.; Duff, T. (1984). Compositing digital images. Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3). pp. 253–259. - Cook, R.L.; Porter, T.; Carpenter, L. (1984). Distributed ray tracing. Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3). pp. 137–145. - Goral, C.; Torrance, K.E.; Greenberg, D.P.; Battaile, B. (1984). Modeling the interaction of light between diffuse surfaces. Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3). pp. 213–222. CiteSeerX: 10 .1 .1 .112 .356. - Cohen, M.F.; Greenberg, D.P. (1985). The hemi-cube: a radiosity solution for complex environments. Computer Graphics (Proceedings of SIGGRAPH 1985) 19 (3). pp. 31–40. doi:10.1145/325165.325171. - Arvo, J. (1986). Backward ray tracing. SIGGRAPH 1986 Developments in Ray Tracing course notes. CiteSeerX: 10 .1 .1 .31 .581. - Kajiya, J. (1986). The rendering equation. Computer Graphics (Proceedings of SIGGRAPH 1986) 20 (4). pp. 143–150. CiteSeerX: 10 .1 .1 .63 .1402. - Cook, R.L.; Carpenter, L.; Catmull, E. (1987). The Reyes image rendering architecture. Computer Graphics (Proceedings of SIGGRAPH 1987) 21 (4). pp. 95–102. 
- Wu, Xiaolin (July 1991). "An efficient antialiasing technique". Computer Graphics 25 (4): 143–152. doi:10.1145/127719.122734. ISBN 0-89791-436-8. - Wu, Xiaolin (1991). "Fast Anti-Aliased Circle Generation". In James Arvo (Ed.). Graphics Gems II. San Francisco: Morgan Kaufmann. pp. 446–450. ISBN 0-12-064480-0. - Hanrahan, P.; Salzman, D.; Aupperle, L. (1991). A rapid hierarchical radiosity algorithm. Computer Graphics (Proceedings of SIGGRAPH 1991) 25 (4). pp. 197–206. CiteSeerX: 10 .1 .1 .93 .5694. - M. Oren and S.K. Nayar, "Generalization of Lambert's Reflectance Model". SIGGRAPH. pp.239-246, Jul, 1994 - Tumblin, J.; Rushmeier, H.E. (1993). "Tone reproduction for realistic computer generated images". IEEE Computer Graphics & Applications 13 (6): 42–48. doi:10.1109/38.252554. - Hanrahan, P.; Krueger, W. (1993). Reflection from layered surfaces due to subsurface scattering. Computer Graphics (Proceedings of SIGGRAPH 1993) 27. pp. 165–174. CiteSeerX: 10 .1 .1 .57 .9761. - Jensen, H.W.; Christensen, N.J. (1995). "Photon maps in bidirectional monte carlo ray tracing of complex objects". Computers & Graphics 19 (2): 215–224. doi:10.1016/0097-8493(94)00145-o. CiteSeerX: 10 .1 .1 .97 .2724. - Veach, E.; Guibas, L. (1997). Metropolis light transport. Computer Graphics (Proceedings of SIGGRAPH 1997) 16. pp. 65–76. CiteSeerX: 10 .1 .1 .88 .944. - Keller, A. (1997). Instant Radiosity. Computer Graphics (Proceedings of SIGGRAPH 1997) 24. pp. 49–56. CiteSeerX: 10 .1 .1 .15 .240. - Sloan, P.; Kautz, J.; Snyder, J. (2002). Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low Frequency Lighting Environments. Computer Graphics (Proceedings of SIGGRAPH 2002) 29. pp. 527–536. - Pharr, Matt; Humphreys, Greg (2004). Physically based rendering from theory to implementation. Amsterdam: Elsevier/Morgan Kaufmann. ISBN 0-12-553180-X. - Shirley, Peter; Morley, R. Keith (2003). Realistic ray tracing (2 ed.). Natick, Mass.: AK Peters. ISBN 1-56881-198-5. - Dutré, Philip; Bekaert, Philippe; Bala, Kavita (2003). Advanced global illumination ([Online-Ausg.] ed.). Natick, Mass.: A K Peters. ISBN 1-56881-177-2. - Akenine-Möller, Tomas; Haines, Eric (2004). Real-time rendering (2 ed.). Natick, Mass.: AK Peters. ISBN 1-56881-182-9. - Strothotte, Thomas; Schlechtweg, Stefan (2002). Non-photorealistic computer graphics modeling, rendering, and animation (2 ed.). San Francisco, CA: Morgan Kaufmann. ISBN 1-55860-787-0. - Gooch, Bruce; Gooch, Amy (2001). Non-photorealistic rendering. Natick, Mass.: A K Peters. ISBN 1-56881-133-0. - Jensen, Henrik Wann (2001). Realistic image synthesis using photon mapping ([Nachdr.] ed.). Natick, Mass.: AK Peters. ISBN 1-56881-147-0. - Blinn, Jim (1996). Jim Blinn's corner : a trip down the graphics pipeline. San Francisco, Calif.: Morgan Kaufmann Publishers. ISBN 1-55860-387-5. - Glassner, Andrew S. (2004). Principles of digital image synthesis (2 ed.). San Francisco, Calif.: Kaufmann. ISBN 1-55860-276-3. - Cohen, Michael F.; Wallace, John R. (1998). Radiosity and realistic image synthesis (3 ed.). Boston, Mass. [u.a.]: Academic Press Professional. ISBN 0-12-178270-0. - Foley, James D.; Van Dam; Feiner; Hughes (1990). Computer graphics : principles and practice (2 ed.). Reading, Mass.: Addison-Wesley. ISBN 0-201-12110-7. - Andrew S. Glassner, ed. (1989). An introduction to ray tracing (3 ed.). London [u.a.]: Acad. Press. ISBN 0-12-286160-4. - Ward, Gregory J. (July 1994). "The RADIANCE Lighting Simulation and Rendering System". SIGGRAPH 94: 459–72. 
|Look up renderer in Wiktionary, the free dictionary.| - GPU Rendering Magazine Online CGI magazine about advantages of GPU rendering - SIGGRAPH The ACMs special interest group in graphics — the largest academic and professional association and conference. - http://www.cs.brown.edu/~tor/ List of links to (recent) siggraph papers (and some others) on the web.
Presentation on theme: "SURFACE AREAS AND VOLUME" — Presentation transcript:
1 "SURFACE AREAS AND VOLUME" PRESENTATION ON "SURFACE AREAS AND VOLUME" (CLASS IX) CREATED BY: AMIT.N.YADAV, AMIT GARG (TGT MATHS) (KV JRC BAREILLY)
2 Cuboid and Cube: In our day to day life, we come across objects like a wooden box, a match box, a tea packet, a chalk box, a dice, a book etc. All these objects are made of six rectangular plane regions. These objects are in the shape of a cuboid. "A cuboid is a solid bounded by six rectangular plane regions."
4 Surface Area of a Cuboid: As we have seen, the surface of a cuboid consists of six rectangular faces. "The surface area of a cuboid equals the sum of the areas of its six rectangular faces." Consider a cuboid whose length is l cm, breadth is b cm and height is h cm.
5 Area of top face EFGH = (l * b) cm2; Area of bottom face ABCD = (l * b) cm2; Area of side face AEHD = (b * h) cm2; Area of side face BFGC = (b * h) cm2; Area of front face ABFE = (l * h) cm2; Area of back face DHGC = (l * h) cm2.
6 Total surface area of the cuboid = sum of the areas of all its six faces = lb + lb + bh + bh + lh + lh = 2lb + 2bh + 2lh = 2(lb + bh + lh) cm2.
7 Surface Area of a Cube: "A cuboid whose length, breadth and height are all equal is called a cube." Surface area of a cube = 2(a*a + a*a + a*a) = 2(a2 + a2 + a2) = 6a2.
8 Lateral Surface Area of a Cuboid: If, out of the six faces of a cuboid, we only find the sum of the areas of the four faces leaving out the bottom and top faces, this sum is called the lateral surface area of the cuboid. L.S.A. of the cuboid = Area of face AEHD + Area of face BFGC + Area of face ABFE + Area of face DHGC = b*h + b*h + l*h + l*h = 2bh + 2lh = 2(l + b) * h.
9 Lateral Surface Area of a Cube: L.S. area of a cube = 2(a*a + a*a) = 2(a2 + a2) = 4a2.
10 Examples… Example 1) Find the surface area of a match box whose length, breadth and height are 16 cm, 8 cm and 6 cm respectively. Solution) Since a match box is in the form of a cuboid, here l = 16 cm, b = 8 cm, h = 6 cm. Surface area of the match box = 2(lb + bh + lh) = 2(16*8 + 8*6 + 16*6) cm2 = 2(128 + 48 + 96) cm2 = 2(272) cm2 = 544 cm2.
11 Example 2) Find the surface area of a cube whose edge is 11 cm. Solution) Here a = 11 cm. Surface area of the given cube = 6a2 = 6(11)2 cm2 = 6 * 121 cm2 = 726 cm2.
12 Example 3) Three cubes each of side 5 cm are joined end to end. Find the surface area of the resulting cuboid. Solution) The dimensions of the cuboid so formed are l = 15 cm, b = 5 cm, h = 5 cm. Surface area of the cuboid = 2(lb + bh + lh) = 2(75 + 25 + 75) cm2 = 2(175) cm2 = 350 cm2.
13 Example 4) A swimming pool is 20 m in length, 15 m in breadth and 4 m in depth. Find the cost of cementing its floor and walls at the rate of Rs 20 per square metre. Solution) We have l = 20 m, b = 15 m, h = 4 m. Area of the four walls = 2(l + b) * h = 2(20 + 15) * 4 m2 = 280 m2. Area of the floor of the swimming pool = l * b = (20 * 15) m2 = 300 m2.
14 Continue… Total area to be cemented = (280 + 300) m2 = 580 m2. Cost of cementing 1 m2 = Rs 20. Cost of cementing the floor and the walls = Rs (20 * 580) = Rs 11,600.
15 Questions for practice… Question 1) The dimensions of a cuboid are in the ratio 1:2:3 and its total surface area is 88 m2. Find the dimensions. Question 2) The sum of the length, breadth and depth of a cuboid is 19 cm and the length of its diagonal is 11 cm. Find the surface area of the cuboid. Question 3) Find the lateral surface area and total surface area of a cube of edge 10 cm. Question 4) Each edge of a cube is increased by 50%. Find the percentage increase in the surface area of the cube.
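The cuboid and cube formulas on the slides above are easy to check numerically. Here is a small Python sketch (the function names are our own; the formulas are the ones stated on the slides) that reproduces the worked examples.

```python
# A small sketch of the surface-area formulas from the slides, used to check
# Example 1 (match box), Example 2 (cube of edge 11 cm) and the wall area from
# Example 4. Function names are our own; the formulas are from the slides.

def cuboid_surface_area(l, b, h):
    """Total surface area of a cuboid: 2(lb + bh + lh)."""
    return 2 * (l * b + b * h + l * h)

def cuboid_lateral_surface_area(l, b, h):
    """Lateral surface area of a cuboid: 2(l + b)h."""
    return 2 * (l + b) * h

def cube_surface_area(a):
    """Total surface area of a cube: 6a^2."""
    return 6 * a ** 2

print(cuboid_surface_area(16, 8, 6))           # 544 (cm^2), matches Example 1
print(cube_surface_area(11))                   # 726 (cm^2), matches Example 2
print(cuboid_lateral_surface_area(20, 15, 4))  # 280 (m^2), the pool walls in Example 4
```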
16 Surface Area of a Right Circular Cylinder: Consider a right circular cylinder of radius "r" and height "h". Area of the lateral surface of the cylinder = area of the rectangle obtained by unrolling it = l * b = 2πr * h = 2πrh square units (where l = 2πr and b = h).
17 Surface area of cylinder = area of rectangle = 2πrh. Another way of finding the surface area of a cylinder is with the help of paper: unroll the curved surface into a rectangle with L = 2πr and B = h, so the surface area of the cylinder = area of the rectangle = 2πrh.
18 Thus, for a cylinder of radius "r" and height "h", we have: L.S.A. = 2πrh square units; each base area = πr2; total surface area = (2πrh + 2πr2) = 2πr(h + r) square units.
19 Outer Curved Surface Area of a Cylinder. Activity: keep bangles of the same radius one over another; they will form a cylinder. The curved surface area is the area covered by the outer surface of the cylinder. Circumference of the circle = 2πr, so area covered by the cylinder = surface area of the cylinder = (2πr) * (h).
20 Total surface area of a solid cylinder = area of the curved surface + area of the two circular surfaces = (2πr) * (h) + 2πr2 = 2πr(h + r).
21 Examples… Example 1) The curved surface area of a right circular cylinder of height 14 cm is 88 cm2. Find the diameter of the base of the cylinder. Solution) Let r be the radius and h the height of the cylinder. 2πrh = 88 and h = 14, so 2 * 22/7 * r * 14 = 88, i.e. 88r = 88, so r = 1. Diameter of the base = 2r = 2 cm.
22 Example 2) A rectangular sheet of paper 44 cm * 18 cm is rolled along its length and a cylinder is formed. Find the radius of the cylinder. Solution) Let r be the radius of the base and h the height. Then h = 18 cm and 2πr = 44, so r = 7 cm. Hence, the radius of the cylinder is 7 cm.
23 Example 3) The ratio between the curved surface area and the total surface area of a right circular cylinder is 1:2. Find the ratio between the height and the radius of the cylinder. Solution) Let h be the height and r the radius of the cylinder. 2πrh / (2πrh + 2πr2) = 1/2, i.e. 2πrh / (2πr(h + r)) = 1/2, so h / (h + r) = 1/2, 2h = h + r, h = r, and h:r = 1:1.
24 Questions for practice… Question 1) The curved surface area of a right circular cylinder is 4.4 m2. If the radius of the base of the cylinder is 0.7 m, find its height. Question 2) In a hot water heating system, there is a cylindrical pipe of length 28 m and diameter 5 cm. Find the total radiating surface in the system. Question 3) A cylindrical pillar is 50 cm in diameter and 3.5 m in height. Find the cost of painting the curved surface of the pillar at the given rate per m2. Question 4) It is required to make a closed cylindrical tank of height 1 m and base diameter 140 cm from a metal sheet. How many square metres of the sheet are required for the same?
26 Curved surface area of the cone = area of the sector VAB = 1/2 * (arc length * radius) = 1/2 * 2πr * l = πrl. That is, C.S.A. = 1/2 * (circumference of base * slant height).
27 Curved surface area of a cone = 1/2 * l * 2πr = πrl. Total surface area of a cone = πrl + πr2 = πr(l + r).
28 Examples… Example 1) The diameter of a cone is 14 cm and its slant height is 9 cm.
Find the area of its curved surface. Solution) S = πrl. Here, r = 14/2 cm = 7 cm and l = 9 cm, so S = 22/7 * 7 * 9 cm2 = 198 cm2. Example 2) Find the total surface area of a cone, if its slant height is 9 m and the radius of its base is 12 m. Solution) S = πrl + πr2 = πr(l + r), so S = (22/7 * 12 * (12 + 9)) m2 = 792 m2.
29 Example 3) The radius of a cone is 3 cm and its vertical height is 4 cm. Find the area of the curved surface. Solution) We have r = 3 cm and h = 4 cm. l2 = r2 + h2 = 3*3 + 4*4 = 25, so l = 5 cm. Area of the curved surface = S = πrl = 22/7 * 3 * 5 = 47.14 cm2.
30 Questions for practice… Question 1) Find the curved surface area of a cone, if its slant height is 60 cm and the radius of its base is 21 cm. Question 2) The radius of a cone is 5 cm and its vertical height is 12 cm. Find the area of the curved surface. Question 3) The radius of a cone is 7 cm and the area of its curved surface is 176 cm2. Find the slant height.
31 Surface Area of a Sphere: Surface area of a sphere = 4πr2. Curved surface area of a hemisphere = 2πr2. Total surface area of a hemisphere = 2πr2 + πr2 = 3πr2.
32 Examples… Example 1) Find the surface area of a sphere of radius 7 cm. Solution) S = 4πr2. Here r = 7 cm, so S = 4 * 22/7 * 7 * 7 cm2 = 616 cm2. Example 2) Find the curved surface area and total surface area of a hemisphere of radius 21 cm. Solution) S = 2πr2 and S1 = 3πr2. Here r = 21 cm, so S = 2 * 22/7 * 21 * 21 cm2 = 2772 cm2 and S1 = 4158 cm2.
33 Questions for practice… Question 1) Find the surface area of a sphere of radius (i) 10.5 cm (ii) 5.6 cm (iii) 14 cm. Question 2) Find the surface area of a sphere of diameter (i) 14 cm (ii) 21 cm (iii) 3.5 cm.
34 Continue… Question 3) Find the total surface area of a hemisphere and of a solid hemisphere, each of radius 10 cm. Question 4) The surface area of a sphere is 5544 cm2; find its diameter.
35 Volume of a Cuboid: Volume of a cuboid = base area * height = length * breadth * height.
36 Examples… Example 1) The volume of a cuboid is 440 cm3 and the area of its base is 88 cm2. Find its height. Solution) Volume = 440 cm3 and area of the base = 88 cm2, so height = volume / area of the base = 440/88 cm = 5 cm.
37 Questions for practice… Question 1) A cuboidal water tank is 6 m long, 5 m wide and 4.5 m deep. How many litres of water can it hold? Question 2) A cuboidal vessel is 10 m long and 8 m wide. How high must it be made to hold 380 cubic metres of a liquid?
38 Volume of a Cube: Area of base (square) = a2, height of cube = a, so volume of cube = area of base x height = a2 x a = a3 (unit)3.
39 Examples… Example 1) How many 3-metre cubes can be cut from a cuboid measuring 18 m * 12 m * 9 m? Solution) Edge of each cube = 3 m, so volume of each cube = (edge)3 = 3*3*3 m3 = 27 m3. Volume of the cuboid = (18 * 12 * 9) m3 = 1944 m3. Number of cubes = volume of the cuboid / volume of the cube = 1944 / 27 = 72.
40 Volume of a Cylinder: Volume of a cylinder = area of base x vertical height = πr2 * h.
41 Examples… Example 1) Find the volume of a right circular cylinder, if the radius (r) of its base and the height (h) are 7 cm and 15 cm respectively. Solution) Volume of cylinder = πr2h. Here r = 7 cm and h = 15 cm, so volume of the cylinder = 22/7 * 7 * 7 * 15 cm3 = 2310 cm3.
42 Example 2) The area of the base of a right circular cylinder is 154 cm2 and its height is 15 cm.
Find the volume of the cylinder. Solution) Volume of a cylinder = area of the base * height. Here, area of the base = 154 cm2 and height = 15 cm, so volume = (154 * 15) cm3 = 2310 cm3.
43 Questions for practice… Question 1) Find the volume of a right circular cylinder, if the radius (r) of its base and the height (h) are 7 cm and 15 cm respectively. Question 2) The volume of a cylinder is 448π cm3. Find its lateral surface area and total surface area. Question 3) A well with 10 m inside diameter is dug 14 m deep. Earth taken out of it is spread all around to a width of 5 m to form an embankment. Find the height of the embankment.
45 Volume of a Cone: If a cylinder and a cone have the same vertical height and radius, then 3 * (volume of cone) = volume of cylinder, so 3V = πr2h and V = 1/3 πr2h.
46 If both the cylinder and the cone have the same height and radius, then the volume of the cylinder is three times the volume of the cone: if the cone's volume is V, the cylinder's volume is 3V.
47 Mr. Mohan has only a little jar of juice and he wants to distribute it to his three friends. This time he chooses cone-shaped glasses so that the quantity of juice seems appreciable.
48 Examples… Example 1) A conical tank is 3 m deep and its circular top has radius 1.75 m. Find the capacity of the tank in kilolitres. Solution) We have r = 1.75 m, h = 3 m. Capacity of the tank = 1/3 πr2h = 1/3 * 22/7 * 1.75 * 1.75 * 3 m3 = 9.625 m3 = 9.625 kilolitres (1 m3 = 1 kilolitre).
49 Example 2) The height and the slant height of a cone are 21 cm and 28 cm respectively. Find the volume of the cone. Solution) l2 = r2 + h2, so r2 = 282 - 212 = 343 and r = 7√7 cm. Volume of the cone = 1/3 πr2h = 1/3 * 22/7 * 343 * 21 cm3 = 7546 cm3.
50 Questions for practice… Question 1) Find the volume of a right circular cone 1.02 m high, if the radius of its base is 28 cm. Question 2) The area of the base of a right circular cone is 314 cm2 and its height is 15 cm. Find the volume of the cone. Question 3) A semi-circular sheet of metal of diameter 28 cm is bent into an open conical cup. Find the depth and capacity of the cup.
51 Comparison of the surface area and volume of different geometrical figures: cube — surface area 6a2, volume a3; cylinder — curved surface area 2πrh, volume πr2h; cone — curved surface area πrl, volume 1/3 πr2h; sphere — surface area 4πr2, volume 4/3 πr3.
52 Volume of a Sphere: V = 4/3 πr3. If a cone has vertical height and radius equal to the radius of the sphere, then volume of the sphere = 4 * (volume of the cone), so V = 4 * (1/3 πr2h) = 4 * (1/3 πr3) = 4/3 πr3.
53 If we make a cone having radius and height equal to the radius of the sphere, then a water-filled cone can fill the sphere in 4 pourings. V = 1/3 πr2h; if h = r then V = 1/3 πr3 and V1 = 4V = 4(1/3 πr3) = 4/3 πr3.
54 Examples… Example 1) Find the volume of a sphere of radius 7 cm. Solution) V = 4/3 πr3. Here r = 7 cm, so V = 4/3 * 22/7 * 7 * 7 * 7 cm3 = 4312/3 cm3 ≈ 1437.33 cm3. Example 2) A hemispherical bowl has a radius of 3.5 cm. What would be the volume of water it would contain? Solution) The volume of water the bowl can contain = 2/3 πr3 = 2/3 * 22/7 * 3.5 * 3.5 * 3.5 cm3 = 89.8 cm3.
55 Questions for practice… Question 1) A hemispherical bowl is made of steel 0.5 cm thick. The inside radius of the bowl is 4 cm. Find the volume of steel used in making the bowl. Question 2) The volumes of two spheres are in the ratio 64:27. Find the difference of their surface areas, if the sum of their radii is 7. Question 3) A solid sphere of radius 3 cm is melted and then cast into small spherical balls each of diameter 0.6 cm. Find the number of balls thus obtained.
56 Volume of a cylinder = πr2h. Volume of a right circular cone = 1/3 πr2h. Volume of a sphere = 4/3 πr3. Volume of a hemisphere = 2/3 πr3.
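The volume formulas summarised on the last slide can be checked the same way. Below is a companion Python sketch (function names are our own) that reproduces the worked volume examples, using π = 22/7 as the slides do.

```python
# A companion sketch for the volume formulas summarised on the last slide,
# checked against the worked examples above. Uses pi = 22/7 as the slides do;
# the function names are our own.

PI = 22 / 7

def cylinder_volume(r, h):
    return PI * r ** 2 * h           # V = pi r^2 h

def cone_volume(r, h):
    return PI * r ** 2 * h / 3       # V = 1/3 pi r^2 h

def sphere_volume(r):
    return 4 / 3 * PI * r ** 3       # V = 4/3 pi r^3

def hemisphere_volume(r):
    return 2 / 3 * PI * r ** 3       # V = 2/3 pi r^3

print(cylinder_volume(7, 15))            # 2310.0  (cm^3), slide 41
print(cone_volume(1.75, 3))              # 9.625   (m^3),  slide 48
print(round(sphere_volume(7), 2))        # 1437.33 (cm^3), slide 54
print(round(hemisphere_volume(3.5), 1))  # 89.8    (cm^3), slide 54
```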
AP Statistics Section 13.1 A. Which of two popular drugs, Lipitor or Pravachol, helps lower bad cholesterol more? 4000 people with heart disease were randomly assigned to two treatment groups: Lipitor or Pravachol. At the end of the study, researchers compared the mean "bad cholesterol" levels for each group. This is a question about comparing two means. The researchers also compared the proportion of subjects who died, had a heart attack or suffered other serious consequences in the first two years. This is a question about comparing two proportions. Two-sample problems can arise from a randomized comparative experiment that randomly divides the subjects into two groups and exposes each group to a different treatment. Unlike the matched pairs design studied earlier, there is no matching of the units in the two samples, and the samples can even be of different sizes. Two-sample problems also arise when comparing two different samples randomly selected from two populations.
Conditions for Comparing Two Means
SRS: The data are two independent SRSs, one from each of the two populations of interest. This allows us to generalize our findings. We measure the same variable for both groups.
Normality: Both populations are Normally distributed. In practice, it is enough that the distributions have similar shapes and that the data have no strong outliers. More on this at the end of the notes.
Independence: The samples are independent. That is, one sample has no influence on the other. Paired observations violate independence, for example. When sampling without replacement from two distinct populations, each population must be at least 10 times as large as the corresponding sample size.
We want to compare the two population means, either by giving a confidence interval for their difference μ1 − μ2 or by testing the hypothesis of no difference, H0: μ1 = μ2. To do inference about the difference between the means of the two populations, we start with the difference between the means of the two samples, x̄1 − x̄2.
The Two-Sample z Statistic
Here are the facts about the sampling distribution of the difference x̄1 − x̄2 between the two sample means of independent SRSs.
1. The mean of x̄1 − x̄2 equals μ1 − μ2 (i.e. the difference of sample means is an unbiased estimator of the difference of population means).
2. The variance of the difference is the sum of the variances of x̄1 and x̄2, which is $\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}$. Note: the variances add because the samples are independent. The standard deviations do not.
3. If the two population distributions are both Normal, then the distribution of x̄1 − x̄2 is also Normal.
Two-sample z statistic (for use when σ1 and σ2 are known): Suppose that x̄1 is the mean of an SRS of size n1 drawn from a Normally distributed population with mean μ1 and standard deviation σ1, and that x̄2 is the mean of an SRS of size n2 drawn from a Normally distributed population with mean μ2 and standard deviation σ2. Then the two-sample z statistic
$$z = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}}$$
has the standard Normal distribution. It is really very unlikely that both population standard deviations are known. Since this is rarely the case, let's consider the more useful t procedures.
The Two-Sample t Procedures
Because we don't know the population standard deviations, we estimate them by the standard deviations from our two samples. Recall that the resulting estimate of the spread of x̄1 − x̄2, $\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}$, is called the standard error. Note: The two-sample t statistic has approximately a t distribution.
It does not have exactly a t distribution even if the populations are both exactly Normal. Example 13.2-3: Does increasing the amount of calcium in our diet reduce blood pressure? Examination of a large sample of people revealed a relationship between calcium intake and blood pressure. The relationship was strongest for black men. Such observational studies do not establish causation. Researchers therefore designed a randomized comparative experiment. The subjects in part of the experiment were 21 healthy black men. A randomly chosen group of 10 of the men received a calcium supplement for 12 weeks. The control group of 11 men received a placebo pill that looked identical. The experiment was double-blind. The response variable is the decrease in systolic blood pressure for a subject after 12 weeks, in mm of Hg. An increase appears as a negative response. We know that sample size does influence the P-value of a test. A result that fails to be significant at a specified level in a small sample may be significant in a larger sample. Subsequent analysis of data from an experiment with more subjects resulted in a P-value of 0.008. Robustness AgainThe two-sample t procedures are more robust than the one-sample t methods, particularly when the distributions are _____________. When the sizes of the two samples are _______ and the two populations being compared have distributions with similar ______, probability values from the t table are quite accurate for a broad range of distributions, even when the sample sizes are as small as ____. In planning a two-sample study, choose _______ sample sizes if you can.The two-sample t procedures are most robust against non-Normality in this case and the conservative P-values are most accurate.
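For readers who want to carry out the two-sample t procedure by computer, here is a short Python sketch using scipy.stats.ttest_ind with equal_var=False, the unpooled version that matches the approach in these notes. The decreases in systolic blood pressure below are invented placeholder values, not the actual data from the calcium experiment in Example 13.2-3.

```python
from scipy import stats

# Hypothetical decreases in systolic blood pressure (mm Hg); negative = increase.
calcium = [5, -2, 12, 9, 0, 3, 7, -1, 4, 6]            # n = 10
placebo = [1, -3, 2, 0, -4, 3, -1, 2, -2, 1, 0]        # n = 11

t_stat, p_two_sided = stats.ttest_ind(calcium, placebo, equal_var=False)
# Halving gives the one-sided P-value; valid only when t_stat falls in the
# direction of the alternative (here: calcium group shows a larger decrease).
p_one_sided = p_two_sided / 2
print(round(t_stat, 3), round(p_one_sided, 4))
```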
Students explore vertical and phase shifts of sine and cosine functions. Students utilize the Transformation Graphing application to investigate the amplitude of trigonometric functions. The program graphs the parent function y = sin(x) and defines Y1 = a sin(x) on the screen. Students can enter different values for a and observe the effect on the function. They should find that the sine curve is vertically stretched by a factor of |a|. Next, students investigate the period of a trigonometric graph. Here, Y1 = sin(bx), and students substitute given values for b, observing the effects on the function. They will find that the value of b affects the horizontal stretch of this function and thus changes the period (to 2π/|b|). Students investigate a simple phase shift. The program graphs Y1 = sin(x + c) and students substitute given values of c to observe the shift. Students then investigate a vertical shift. Investigating as before, students will find that the equation Y1 = sin(x) + d has a vertical shift equal to the parameter d.
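The same exploration can be reproduced outside the calculator. The following Python/matplotlib sketch (an addition for illustration, not part of the TI activity) plots y = a·sin(bx + c) + d for a few parameter choices so that amplitude, period, phase shift and vertical shift can be compared against the parent curve y = sin(x).

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2 * np.pi, 2 * np.pi, 1000)

def transformed_sine(x, a=1, b=1, c=0, d=0):
    # y = a*sin(b*x + c) + d: |a| stretches vertically, b changes the period
    # to 2*pi/|b|, c shifts the phase, d shifts the graph vertically.
    return a * np.sin(b * x + c) + d

plt.plot(x, np.sin(x), label="y = sin(x)")
plt.plot(x, transformed_sine(x, a=2), label="y = 2 sin(x)")
plt.plot(x, transformed_sine(x, b=2), label="y = sin(2x)")
plt.plot(x, transformed_sine(x, c=np.pi / 2), label="y = sin(x + pi/2)")
plt.plot(x, transformed_sine(x, d=1), label="y = sin(x) + 1")
plt.legend()
plt.show()
```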
Trigonometry/Cosine and Sine
The cosine and sine functions relate the angles in right triangles to the ratios of lengths of the corresponding sides. For example, the cosine function cos(θ) relates the angle theta, θ, to the ratio of the side adjacent to that angle and the side opposite the right angle of the right triangle (i.e. cos(θ) is the ratio of the adjacent side to the hypotenuse of the right triangle). There are two usual approaches to introducing the cosine and sine functions.
- In one approach, the sine and cosine functions are defined in terms of right-angled triangles. This works fine for angles between 0° and 90°. Later on, the definition has to be extended to angles outside that range.
- An alternative approach introduces sine and cosine in terms of 'the unit circle'. This approach is a little more sophisticated but works for all angles.
The two approaches amount to exactly the same thing in the end. However, we prefer to deal with the full range of angles from the start, which is why in the previous exercise we had you plotting (cos(t), sin(t)) to get a 'unit circle'.
Unit Circle Definition
If a line of radius length 1 is drawn at an angle, θ, to the x axis (where the angle is measured anti-clockwise from the axis), then the x coordinate of its end point is given by cos(θ) and the y coordinate is given by sin(θ). 'cos' is of course just an abbreviation for 'cosine', and 'sin' is just an abbreviation for 'sine'. Rather confusingly, 'cos' can be pronounced either 'cos' or 'coz', always with 'o' as in 'bottle' rather than 'o' as in 'code', and 'sin' is often pronounced 'sine' rather than 'sin'. It's not very logical, it is just how it is.
Ratios of Sides Definition
The figure below shows what we are considering:
Here, we shall denote the angle by θ.
- We already know that the longest side is called the hypotenuse.
- The side next to the angle we have chosen is called the base of the triangle.
- The remaining side, which is opposite the angle, is called the perpendicular or latitude of the triangle.
The angle determines the ratios of the sides. Once the angle is selected we can make the whole triangle larger or smaller, but all lengths change in the same proportions. We can't change the length of one side without also changing the length of all sides in the same proportion, or else we have changed the angles. So, once we know the angle we know the ratios of the sides. The functions that give us those ratios are defined as:
cos(θ) = base / hypotenuse (adjacent over hypotenuse)
sin(θ) = perpendicular / hypotenuse (opposite over hypotenuse)
'Unit Hypotenuse' Definition
This definition of sine and cosine isn't usually given, but it is also valid. Draw a line of unit length, 1, from the origin to a point P angled θ anti-clockwise from the horizontal axis. Then draw from P a line parallel to the vertical axis and a line parallel to the horizontal axis. If the line of unit length is the hypotenuse of the resulting right triangle, then for the right triangle that has a width of cos(θ) and a height of sin(θ), the ratio definitions give:
cos(θ) = cos(θ) / 1 and sin(θ) = sin(θ) / 1.
Because any number divided by 1 is the same number, this agrees with the ratios-of-sides definition when the hypotenuse is 1. Use this third definition to convince yourself that the three different ways of defining sine and cosine amount to the same thing, at least for angles between 0° and 90°.
Did you do the exercise on Plotting (cos(t), sin(t)) on the previous page? It really is important to have had a go and seen how cosine and sine are related to the unit circle. If nothing else you MUST be able to use cos and sin on your calculator or you will not get very far with trigonometry. The unit circle definition of the trig functions shows that we can work with angles greater than 90°. 90° represents a quarter of a circle.
360° represents a complete circle. What happens, or what should happen, to cos(θ) and sin(θ) if we have angles greater than 360°? There is one more trigonometric function that we want to introduce on this page. It's the tangent function, or just tan. For the unit circle definition we define the tangent of theta as:
tan(θ) = sin(θ) / cos(θ)
For the ratios of sides definition we define the tangent of theta as:
tan(θ) = perpendicular / base (opposite over adjacent)
Using the definition of sine and cosine in terms of a triangle with unit hypotenuse, it is immediately clear that these are the same thing. If we didn't have the definition of sine and cosine in terms of the triangle with unit hypotenuse, we'd need to do slightly more work to show that the two definitions of tan were equivalent. We'd do something like this:
perpendicular / base = (perpendicular / hypotenuse) / (base / hypotenuse) = sin(θ) / cos(θ)
It is worth checking every step in this. When talking about the tangent function it is usually better to always just say 'tan' rather than 'tangent'.
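A small Python sketch (added here for illustration) checks the unit circle definition numerically and confirms that tan(θ) equals sin(θ)/cos(θ) for a few angles, including angles well beyond 360°.

```python
import math

for degrees in (30, 45, 135, 250, 400, 750):
    theta = math.radians(degrees)
    x, y = math.cos(theta), math.sin(theta)      # point on the unit circle
    assert abs(x**2 + y**2 - 1) < 1e-12          # the point is always on the circle
    assert abs(math.tan(theta) - y / x) < 1e-9   # tan = sin / cos (cos != 0 here)
    print(f"{degrees:3d} deg: cos={x:+.4f} sin={y:+.4f} tan={y / x:+.4f}")
```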
- Applies to: Excel for Microsoft 365, Excel for Microsoft 365 for Mac, Excel for the web, Excel 2019, Excel 2016, Excel 2019 for Mac, Excel 2013, Excel 2010, Excel 2007, Excel 2016 for Mac, Excel for Mac 2011, Excel Starter 2010.
This post will teach you all about using the RAND function in Excel. Firstly, we will delve into the meaning of this function. Secondly, we will take a look at how the formula is structured. Then, we will explore the vital conditions to consider when using the RAND function in Excel. Finally, there is an example for you to play around with and see how the RAND function works.
What is the RAND Function?
The RAND function is a mathematical function which simply returns an evenly distributed random real number greater than or equal to 0 and less than 1. A new random real number is returned each time the worksheet is calculated.
Note: From Excel 2010 onwards, Excel applies the Mersenne Twister algorithm (MT19937) to create random numbers.
Syntax of the RAND Function
Unlike most other functions, the RAND function syntax contains no arguments: =RAND()
Remarks on the RAND Function
- To produce a random real number between a and b, use: =RAND()*(b-a)+a
- If you are looking to deploy RAND to create a random number but prefer not to have the number refresh each time the cell is calculated, you can type =RAND() in the formula bar and then press F9 to change the formula into a static random number. The formula is evaluated once and only the value remains in the cell.
Example of the RAND Function
Copy the sample data included in the table underneath, and paste it into cell A1 of a completely new Excel worksheet. For the formulas to show results, select them, press F2, and then press Enter. Often, you may need to amend the column widths to gain a full view of the entire dataset.

| Formula | Description | Result |
|---|---|---|
| =RAND() | A random number greater than or equal to 0 and less than 1 | varies |
| =RAND()*100 | A random number greater than or equal to 0 and less than 100 | varies |
| =INT(RAND()*100) | A random whole number greater than or equal to 0 and less than 100 | varies |

Note: When a worksheet is recalculated by entering a formula or data in another cell, or by manually recalculating (press F9), a new random number is generated for every formula that uses the RAND function.
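For readers working outside Excel, here is a small Python sketch (an illustration added to this article, not Microsoft documentation) that mirrors the example formulas above; Python's random module also happens to use the Mersenne Twister generator.

```python
import random

def excel_rand():
    # Equivalent of =RAND(): uniform real number in [0, 1).
    return random.random()

def rand_between(a, b):
    # Equivalent of =RAND()*(b-a)+a: uniform real number in [a, b).
    return excel_rand() * (b - a) + a

print(excel_rand())               # like =RAND()
print(excel_rand() * 100)         # like =RAND()*100
print(int(excel_rand() * 100))    # like =INT(RAND()*100)
print(rand_between(5, 10))        # random real number between 5 and 10
```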
Students who wish to learn in depth about the 9th-grade expansion of powers of binomials and trinomials are suggested to make use of our page. Here we have covered all the topics related to the binomial and trinomial expansions with theorems, formulas, examples, and so on. Thus the students who want to become masters in maths can refer to this page and score well in the exams. Binomial and trinomial expansions are very important for mathematical study regarding probability theory and approximation techniques. Here the students of 9th grade can learn how to find a specific term or a specific power of x. Click on the below-attached links for the topics in which you are lagging and prepare as you wish.
Expansion of Powers of Binomials and Trinomials
The topics covered in this chapter are as follows:
- Expansion of (a ± b)^2
- Expansion of (a ± b ± c)^2
- Expansion of (x ± a)(x ± b)
- Express a^2 + b^2 + c^2 – ab – bc – ca as Sum of Squares
- Completing a Square
- Simplification of (a + b)(a – b)
- Application Problems on Expansion of Powers of Binomials and Trinomials
- Worksheet on Expansion of (a ± b)^2 and its Corollaries
- Worksheet on Expanding of (a ± b ± c)^2 and its Corollaries
- Worksheet on Expansion of (x ± a)(x ± b)
- Worksheet on Completing Square
- Worksheet on Simplification of (a + b)(a – b)
- Worksheet on Application Problems on Expansion of Powers of Binomials and Trinomials
- Expansion of (a ± b)^3
- Simplification of (a ± b)(a^2 ∓ ab + b^2)
- Simplification of (a + b + c)(a^2 + b^2 + c^2 – ab – bc – ca)
- Expansion of (x + a)(x + b)(x + c)
- Problems on Expanding of (a ± b)^3 and its Corollaries
An algebraic expression with two terms is known as a binomial expression. It contains two different terms, a and b, or x and y. Here you can learn the details of the binomial theorem such as its definition, properties, applications, etc. General terms used in the binomial expansion are the general term, the middle term, the independent term, the ratio of coefficients, and the numerically greatest term. The binomial theorem is the method of expanding an expression that is raised to any finite power:
(x + y)^n = Σ (from r = 0 to n) nCr · x^(n – r) · y^r
In maths, the trinomial expansion is the expansion of a power of the sum of three terms into monomials. Here you can learn its properties, formulas, and examples. The trinomial expansion is the special case m = 3 of the multinomial expansion. The coefficients of the terms can be written in the form of Pascal's pyramid.
Expansion of (a ± b)²
A binomial expression is an algebraic expression that has two terms, like a and b. An example of a binomial expression is a ± b. (a – b)² and (a + b)² are used to find the square of a binomial. Let us discuss the expansion and properties of (a ± b)² on our page.
Expansion of (a ± b ± c)²
A trinomial expression has three terms, such as a, b, and c. Examples of trinomial expansions are (a + b + c)² and (a – b – c)². The trinomial expansion is used to find the square of the trinomial. We will learn about the expansion of (a ± b ± c)² and the properties of trinomial expressions here.
Expansion of (x ± a)(x ± b)
In this article, we will learn how to expand (x + a)(x + b) and (x – a)(x – b) with some examples. The expansion involves the product of the variables, the sum of the constant terms, and the product of the constant terms. The expansion of this binomial product is nothing but a quadratic expression.
Express a² + b² + c² – ab – bc – ca as Sum of Squares
a² + b² + c² – ab – bc – ca is the sum of the squares of three terms minus the sum of the products of the terms taken two at a time.
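To see the binomial theorem formula in action, here is a short Python sketch (added for illustration, not part of the original study guide) that builds the coefficients nCr with math.comb and checks the expansion of (x + y)^n numerically.

```python
import math  # math.comb requires Python 3.8+

def binomial_coefficients(n):
    # The coefficients nCr of (x + y)^n for r = 0..n.
    return [math.comb(n, r) for r in range(n + 1)]

def expand_and_check(x, y, n):
    # Sum of nCr * x^(n-r) * y^r should equal (x + y)^n.
    total = sum(math.comb(n, r) * x**(n - r) * y**r for r in range(n + 1))
    assert total == (x + y)**n
    return total

print(binomial_coefficients(4))   # [1, 4, 6, 4, 1]
print(expand_and_check(2, 3, 4))  # 5^4 = 625
```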
We will see the formula for a² + b² + c² – ab – bc – ca with derivations and examples here. Learn the properties of a² + b² + c² – ab – bc – ca on this page. Just click on the provided links and practice the problems.
Simplification of (a + b)(a – b)
(a + b)(a – b) is the product of the sum and the difference of the two terms a and b. It is a product of binomial expressions. The expansion is obtained by multiplying each term of one factor by each term of the other and adding the results, which simplifies to a² – b². Students who are confused about the simplification of (a + b)(a – b) can clarify their doubts with the clear-cut explanations on our page.
Expansion of (a ± b)³
a and b are two terms, so a ± b is a binomial algebraic expression. a plus b, whole cubed, equals a cubed plus b cubed plus three times a squared times b plus three times a times b squared; that is, (a + b)³ = a³ + b³ + 3a²b + 3ab². The students can understand the concept of the expansion of (a ± b)³ by referring to our article.
Expansion of (x + a)(x + b)(x + c)
The expansion of (x + a)(x + b)(x + c) is the product of the variable terms and the constant terms. Expanding means multiplying out the factors and writing the result as a single expression. In simple terms, the result is a cubic in x built from the sum of the constants, the sum of their pairwise products, and their product: (x + a)(x + b)(x + c) = x³ + (a + b + c)x² + (ab + bc + ca)x + abc.
FAQs on Expansion of Powers of Binomials and Trinomials
1. How do you expand the power of a binomial?
To expand a power of a binomial we identify the positions of the terms of the binomial and relate them to the formulas. By using the formulas we can expand the power of a binomial expression.
2. How do you expand powers?
When raising a power to a power in an algebraic expression, we find the new exponent by multiplying the two exponents together. There are standard rules for multiplying powers with the same base.
3. What are binomial and trinomial expressions?
An expression with two terms, like a and b, is known as a binomial expression, and an expression with three terms, such as a, b, and c, is known as a trinomial expression.
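If the sympy library is available, the identities listed above can be verified symbolically. This is a small sketch added for illustration, not part of the original study guide.

```python
from sympy import symbols, expand

a, b, c, x = symbols("a b c x")

# Each pair below should expand to the same polynomial.
identities = [
    ((a + b)**2,              a**2 + 2*a*b + b**2),
    ((a + b)*(a - b),         a**2 - b**2),
    ((a + b)**3,              a**3 + 3*a**2*b + 3*a*b**2 + b**3),
    ((x + a)*(x + b)*(x + c), x**3 + (a + b + c)*x**2 + (a*b + b*c + c*a)*x + a*b*c),
]

for lhs, rhs in identities:
    assert expand(lhs - rhs) == 0   # difference expands to zero, so they are equal
    print(f"{lhs} = {expand(lhs)}")
```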
Physics > Kinematics of Uniform Circular Motion
Uniform circular motion is movement along a circular path at a constant speed.
- Centripetal force and centripetal acceleration are associated with uniform circular motion.
- In uniform circular motion, angular and linear quantities have a simple relationship. The arc length is proportional to the angle of rotation and the radius; moreover, v = rω.
- The acceleration responsible for uniform circular motion is called centripetal acceleration. It is expressed by the formula a_c = rω² = v²/r.
- Any net force that creates circular motion is called centripetal. Its direction is toward the center of curvature, and its magnitude equals m(v²/r) = mrω².
With uniform circular motion, angular and linear quantities are related by simple formulas. When objects rotate around some axis, each point of the object follows an arc of a circle. The angle of rotation measures the amount of rotation and is analogous to a linear distance. The angle of rotation Δθ can be defined as the ratio of the arc length to the radius of curvature:
Δθ = Δs / r
As the radius of the circle turns through the angle Δθ, the arc length Δs is traced out along the circle. From the relation Δs = rΔθ we see that the linear speed is
v = Δs/Δt = r(Δθ/Δt) = rω
If we consider motion in a circular orbit at constant speed, the angular velocity remains constant. The acceleration is written as
a_c = v²/r = rω²
This acceleration is called centripetal. Any force or combination of forces can produce centripetal (radial) acceleration: the tension of a rope, the Earth's gravity acting on the Moon, the friction between skates and ice, and so on. Any net force that produces uniform circular motion is called a centripetal force. Its direction is toward the center of curvature, as for the centripetal acceleration. Newton's second law says that net force equals mass times acceleration. For uniform circular motion the acceleration is centripetal, a = a_c. Therefore, the magnitude of the centripetal force is
F_c = m·a_c = m(v²/r) = mrω²
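A minimal Python sketch (added here; the numbers are illustrative, not taken from the original text) that applies the relations v = rω, a_c = v²/r and F_c = m·a_c.

```python
import math

def centripetal(mass_kg, radius_m, period_s):
    """Speed, centripetal acceleration and force for uniform circular motion."""
    omega = 2 * math.pi / period_s     # angular velocity (rad/s)
    v = radius_m * omega               # v = r * omega
    a_c = v**2 / radius_m              # a_c = v^2 / r = r * omega^2
    f_c = mass_kg * a_c                # F_c = m * a_c
    return v, a_c, f_c

# Illustrative example: a 0.5 kg ball on a 1.2 m string, one revolution per second.
v, a_c, f_c = centripetal(0.5, 1.2, 1.0)
print(f"v = {v:.2f} m/s, a_c = {a_c:.2f} m/s^2, F_c = {f_c:.2f} N")
```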
Students practice math skills, organization, and teamwork as they work as a class to plan and deliver a class party. You have probably had a birthday party and your class may have also celebrated events like Halloween, Thanksgiving, or Valentine's Day. Now it is time for you to design the party yourself. Choose an event, like the last day of school or the end of a big project and design a celebration for your classroom. While we often think parties are frivolous, celebrations are those moments of joy when we commemorate something we are proud of and love. Celebrations remind us of the unique things in our lives which we appreciate and value which inevitably sparks gratitude. Planning a party not only builds powerful skills like communication, organization, and teamwork, it provides a real world context for students to practice important math skills like counting and addition. Kick off this lesson by asking your students to remember and share celebrations they have experienced, such as birthday parties or family get togethers like Thanksgiving. If you need to kick start their ideas, ask them questions using the 5 senses. Some students will focus on their birthday party and gifts they have received, so be sure to ask students if they enjoy going to family celebrations or other student’s birthday parties too. Why is it enjoyable? What makes it fun to attend a celebration? Collect student ideas in a list on your white board or on strips of paper. You may also have individual students share their ideas using a cluster diagram. Try creating graphic organizers with your students in Wixie.Learn more Work as a class to sort the ideas into bigger categories. What makes some of the items similar? This is an exercise in listening and thinking so remind them there are no “right” answers. You might also prompt students to identify what must be at a party. Again there isn't a right answer. You want students to identify that there are many different ways to celebrate and express gratitude and joy. This will help let students know that while they can't get or do everything each student wants, they can still design and have a fun and enjoyable event. Let students know that they will be planning a party for the class. Give them a context, such as a holiday celebration or an end-of-the year class party or let them decide what they think the class should celebrate. Students might choose to celebrate a new student, or the fact that you have learned every letter of the alphabet, or that spring has arrived with robins and crocuses. Once you have chosen the reason for the party it is time to plan and design all of the details. Depending on the age of your students, you may want to discuss and plan as a class or establish small groups to organize details like decorations, food, and activities. Look back at the topical groupings your students developed as they identified what they already know about celebrations. Can you use their groupings to plan the party? The more we empower students to think, plan, and organize themselves, the more skills they learn. You can use this planning time to specifically address standards and learning goals. Party planning can involve math skills like counting, cardinality, one-to-one correspondence, addition, and measurement as students determine how many people are attending, how many cupcakes or strawberries are needed, how and how long the party will last. 
In Social Studies, you can work as a class to answer questions like: It is also helpful to have students communicate their ideas before you agree to put them into action. If students have worked in small teams, have each team verbally share their "proposal" with the rest of the class. If you have worked as a class, write a letter to the principal proposing your party or invite them to the class so students can share their ideas verbally. Create an invitation to remind classmates, and parents, of the event. If parents attending or contributing materials, work as a class to write a letter asking for "donations" like balloons, plates, and food. Put students in charge the day of the party as well. For example, students who love art will probably love putting up decorations. You could also put students who could use additional practice counting in charge of putting plates and cups out for each person attending. Talk to the class about the role, or roles, of the party host. While you might give students specific roles like greeter, be sure to focus on enjoying the party, not working. If you want to include this role, task each student with small roles, like greeting at least one non-family guest or cleaning up a single piece of trash. When the party is over, keep students talking. Have a class discussion to share all of the little things that made the celebration memorable. Use a tool like Wixie to retell and capture student's memories. Use this peformance-based assessment in Kindergarten to evaluate math skills like counting, and one-to-one correspondence. In first and second grade you can evaluate skip counting and basic computation. This project also lends itself to building 21st century skills like communication, creativity, and collaboration. Work together to create a checklist for the party. While the party itself is the celebration of the student's collective efforts, refer back to your checklist to help students see that the work they did to plan and execute their plan was a big part of what made it successful. Kate DePalma. Let's Celebrate!: Special Days Around the World. ISBN: 1782858342 Katy Halford. Celebrations Around the World. ISBN: 146548390X Write numbers from 0 to 20. Represent a number of objects with a written numeral 0-20 (with 0 representing a count of no objects). Understand the relationship between numbers and quantities; connect counting to cardinality. Represent addition and subtraction with objects, fingers, mental images, drawings1, sounds (e.g., claps), acting out situations, verbal explanations, expressions, or equations. Compose and decompose numbers from 11 to 19 into ten ones and some further ones... Add and subtract within 20, demonstrating fluency for addition and subtraction within 10.... Add within 100, including adding a two-digit number and a one-digit number...Understand that in adding two-digit numbers, one adds tens and tens, ones and ones; and sometimes it is necessary to compose a ten. Tell and write time in hours and half-hours using analog and digital clocks. Fluently add and subtract within 20 using mental strategies. Fluently add and subtract within 100 using strategies based on place value, properties of operations, and/or the relationship between addition and subtraction. Tell and write time from analog and digital clocks to the nearest five minutes, using a.m. and p.m. Solve word problems involving dollar bills, quarters, dimes, nickels, and pennies, using $ and ¢ symbols appropriately. 
Draw a picture graph and a bar graph (with single-unit scale) to represent a data set with up to four categories.
3. Knowledge Constructor
Students critically curate a variety of resources using digital tools to construct knowledge, produce creative artifacts and make meaningful learning experiences for themselves and others. Students:
a. plan and employ effective research strategies to locate information and other resources for their intellectual or creative pursuits.
b. evaluate the accuracy, perspective, credibility and relevance of information, media, data or other resources.
6. Creative Communicator
Students communicate clearly and express themselves creatively for a variety of purposes using the platforms, tools, styles, formats and digital media appropriate to their goals. Students:
a. choose the appropriate platforms and tools for meeting the desired objectives of their creation or communication.
b. create original works or responsibly repurpose or remix digital resources into new creations.
d. publish or present content that customizes the message and medium for their intended audiences.
Geothermal energy is thermal energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. The geothermal energy of the Earth's crust originates from the original formation of the planet (20%) and from radioactive decay of minerals (80%). The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective geothermal originates from the Greek roots γη (ge), meaning earth, and θερμος (thermos), meaning hot. Earth's internal heat is thermal energy generated from radioactive decay and continual heat loss from Earth's formation. Temperatures at the core–mantle boundary may reach over 4000 °C (7,200 °F). The high temperature and pressure in Earth's interior cause some rock to melt and solid mantle to behave plastically, resulting in portions of mantle convecting upward since it is lighter than the surrounding rock. Rock and water is heated in the crust, sometimes up to 370 °C (700 °F). From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times, but it is now better known for electricity generation. Worldwide, 11,400 megawatts (MW) of geothermal power is online in 24 countries in 2012. An additional 28 gigawatts of direct geothermal heating capacity is installed for district heating, space heating, spas, industrial processes, desalination and agricultural applications in 2010. Geothermal power is cost effective, reliable, sustainable, and environmentally friendly, but has historically been limited to areas near tectonic plate boundaries. Recent technological advances have dramatically expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells release greenhouse gases trapped deep within the earth, but these emissions are much lower per energy unit than those of fossil fuels. As a result, geothermal power has the potential to help mitigate global warming if widely deployed in place of fossil fuels. The Earth's geothermal resources are theoretically more than adequate to supply humanity's energy needs, but only a very small fraction may be profitably exploited. Drilling and exploration for deep resources is very expensive. Forecasts for the future of geothermal power depend on assumptions about technology, energy prices, subsidies, and interest rates. Pilot programs like EWEB's customer opt in Green Power Program show that customers would be willing to pay a little more for a renewable energy source like geothermal. But as a result of government assisted research and industry experience, the cost of generating geothermal power has decreased by 25% over the past two decades. In 2001, geothermal energy cost between two and ten US cents per kWh. Hot springs have been used for bathing at least since paleolithic times The oldest known spa is a stone pool on China's Lisan mountain built in the Qin Dynasty in the 3rd century BC, at the same site where the Huaqing Chi palace was later built. In the first century AD, Romans conquered Aquae Sulis, now Bath, Somerset, England, and used the hot springs there to feed public baths and underfloor heating. The admission fees for these baths probably represent the first commercial use of geothermal power. 
The world's oldest geothermal district heating system in Chaudes-Aigues, France, has been operating since the 14th century. The earliest industrial exploitation began in 1827 with the use of geyser steam to extract boric acid from volcanic mud in Larderello, Italy. In 1892, America's first district heating system in Boise, Idaho was powered directly by geothermal energy, and was copied in Klamath Falls, Oregon in 1900. A deep geothermal well was used to heat greenhouses in Boise in 1926, and geysers were used to heat greenhouses in Iceland and Tuscany at about the same time. Charlie Lieb developed the first downhole heat exchanger in 1930 to heat his house. Steam and hot water from geysers began heating homes in Iceland starting in 1943. In the 20th century, demand for electricity led to the consideration of geothermal power as a generating source. Prince Piero Ginori Conti tested the first geothermal power generator on 4 July 1904, at the same Larderello dry steam field where geothermal acid extraction began. It successfully lit four light bulbs. Later, in 1911, the world's first commercial geothermal power plant was built there. It was the world's only industrial producer of geothermal electricity until New Zealand built a plant in 1958. In 2012, it produced some 594 megawatts. Lord Kelvin invented the heat pump in 1852, and Heinrich Zoelly had patented the idea of using it to draw heat from the ground in 1912. But it was not until the late 1940s that the geothermal heat pump was successfully implemented. The earliest one was probably Robert C. Webber's home-made 2.2 kW direct-exchange system, but sources disagree as to the exact timeline of his invention. J. Donald Kroeker designed the first commercial geothermal heat pump to heat the Commonwealth Building (Portland, Oregon) and demonstrated it in 1946. Professor Carl Nielsen of Ohio State University built the first residential open loop version in his home in 1948. The technology became popular in Sweden as a result of the 1973 oil crisis, and has been growing slowly in worldwide acceptance since then. The 1979 development of polybutylene pipe greatly augmented the heat pump’s economic viability. In 1960, Pacific Gas and Electric began operation of the first successful geothermal electric power plant in the United States at The Geysers in California. The original turbine lasted for more than 30 years and produced 11 MW net power. The binary cycle power plant was first demonstrated in 1967 in the USSR and later introduced to the US in 1981. This technology allows the generation of electricity from much lower temperature resources than previously. In 2006, a binary cycle plant in Chena Hot Springs, Alaska, came on-line, producing electricity from a record low fluid temperature of 57 °C (135 °F). The International Geothermal Association (IGA) has reported that 10,715 megawatts (MW) of geothermal power in 24 countries is online, which was expected to generate 67,246 GWh of electricity in 2010. This represents a 20% increase in online capacity since 2005. IGA projects growth to 18,500 MW by 2015, due to the projects presently under consideration, often in areas previously assumed to have little exploitable resource. In 2010, the United States led the world in geothermal electricity production with 3,086 MW of installed capacity from 77 power plants. The largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California. 
The Philippines is the second highest producer, with 1,904 MW of capacity online. Geothermal power makes up approximately 27% of Philippine electricity generation.
Geothermal electric plants were traditionally built exclusively on the edges of tectonic plates where high temperature geothermal resources are available near the surface. The development of binary cycle power plants and improvements in drilling and extraction technology enable enhanced geothermal systems over a much greater geographical range. Demonstration projects are operational in Landau-Pfalz, Germany, and Soultz-sous-Forêts, France, while an earlier effort in Basel, Switzerland was shut down after it triggered earthquakes. Other demonstration projects are under construction in Australia, the United Kingdom, and the United States of America.
The thermal efficiency of geothermal electric plants is low, around 10–23%, because geothermal fluids do not reach the high temperatures of steam from boilers. The laws of thermodynamics limit the efficiency of heat engines in extracting useful energy. Exhaust heat is wasted, unless it can be used directly and locally, for example in greenhouses, timber mills, and district heating. System efficiency does not materially affect operational costs as it would for plants that use fuel, but it does affect return on the capital used to build the plant. In order to produce more energy than the pumps consume, electricity generation requires relatively hot fields and specialized heat cycles. Because geothermal power does not rely on variable sources of energy, unlike, for example, wind or solar, its capacity factor can be quite large – up to 96% has been demonstrated. The global average was 73% in 2005.
Geothermal energy comes in either vapor-dominated or liquid-dominated forms. Larderello and The Geysers are vapor-dominated. Vapor-dominated sites offer temperatures from 240–300 °C that produce superheated steam. Liquid-dominated reservoirs (LDRs) are more common, with temperatures greater than 200 °C (392 °F), and are found near young volcanoes surrounding the Pacific Ocean and in rift zones and hot spots. Flash plants are the most common way to generate electricity from these sources. Pumps are generally not required, powered instead when the water turns to steam. Most wells generate 2–10 MWe. Steam is separated from liquid via cyclone separators, while the liquid is returned to the reservoir for reheating/reuse. As of 2013, the largest liquid system is Cerro Prieto in Mexico, which generates 750 MWe from temperatures reaching 350 °C (662 °F). The Salton Sea field in Southern California offers the potential of generating 2000 MWe.
Lower temperature LDRs (120–200 °C) require pumping. They are common in extensional terrains, where heating takes place via deep circulation along faults, such as in the Western US and Turkey. Water passes through a heat exchanger in a Rankine cycle binary plant. The water vaporizes an organic working fluid that drives a turbine. These binary plants originated in the Soviet Union in the late 1960s and predominate in new US plants. Binary plants have no emissions.
Lower temperature sources produce the energy equivalent of 100M BBL per year. Sources with temperatures from 30–150 °C are used without conversion to electricity for district heating, greenhouses, fisheries, mineral recovery, industrial process heating and bathing in 75 countries.
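The capacity-factor figures quoted above translate into annual energy output in a straightforward way. The following Python sketch is an illustration added to the article, using the 73% global average capacity factor mentioned in the text and a hypothetical 100 MW plant.

```python
def annual_energy_gwh(capacity_mw, capacity_factor):
    # Energy (GWh/year) = capacity (MW) * hours per year * capacity factor / 1000.
    return capacity_mw * 8760 * capacity_factor / 1000

# A hypothetical 100 MW geothermal plant at the 73% average capacity factor.
print(round(annual_energy_gwh(100, 0.73)))   # roughly 639 GWh per year
```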
Heat pumps extract energy from shallow sources at 10-20 C in 43 countries for use in space heating and cooling. Home heating is the fastest-growing means of exploiting geothermal energy, with global annual growth rate of 30% in 2005 and 20% in 2012. Approximately 270 petajoules (PJ) of geothermal heating was used in 2004. More than half went for space heating, and another third for heated pools. The remainder supported industrial and agricultural applications. Global installed capacity was 28 GW, but capacity factors tend to be low (30% on average) since heat is mostly needed in winter. Some 88 PJ for space heating was extracted by an estimated 1.3 million geothermal heat pumps with a total capacity of 15 GW. Heat for these purposes may also be extracted from co-generation at a geothermal electrical plant. Heating is cost-effective at many more sites than electricity generation. At natural hot springs or geysers, water can be piped directly into radiators. In hot, dry ground, earth tubes or downhole heat exchangers can collect the heat. However, even in areas where the ground is colder than room temperature, heat can often be extracted with a geothermal heat pump more cost-effectively and cleanly than by conventional furnaces. These devices draw on much shallower and colder resources than traditional geothermal techniques. They frequently combine functions, including air conditioning, seasonal thermal energy storage, solar energy collection, and electric heating. Heat pumps can be used for space heating essentially anywhere. Iceland is the world leader in direct applications. Some 92.5% of its homes are heated with geothermal energy, saving Iceland over $100 million annually in avoided oil imports. Reykjavík, Iceland has the world's biggest district heating system. Once known as the most polluted city in the world, it is now one of the cleanest. Enhanced geothermal systems (EGS) actively inject water into wells to be heated and pumped back out. The water is injected under high pressure to expand existing rock fissures to enable the water to freely flow in and out. The technique was adapted from oil and gas extraction techniques. However, the geologic formations are deeper and no toxic chemicals are used, reducing the possibility of environmental damage. Drillers can employ directional drilling to expand the size of the reservoir. Geothermal power requires no fuel (except for pumps), and is therefore immune to fuel cost fluctuations. However, capital costs are significant. Drilling accounts for over half the costs, and exploration of deep resources entails significant risks. A typical well doublet (extraction and injection wells) in Nevada can support 4.5 megawatts (MW) and costs about $10 million to drill, with a 20% failure rate. In total, electrical plant construction and well drilling cost about €2–5 million per MW of electrical capacity, while the break–even price is 0.04–0.10 € per kW·h. Enhanced geothermal systems tend to be on the high side of these ranges, with capital costs above $4 million per MW and break–even above $0.054 per kW·h in 2007. Direct heating applications can use much shallower wells with lower temperatures, so smaller systems with lower costs and risks are feasible. Residential geothermal heat pumps with a capacity of 10 kilowatt (kW) are routinely installed for around $1–3,000 per kilowatt. 
District heating systems may benefit from economies of scale if demand is geographically dense, as in cities and greenhouses, but otherwise piping installation dominates capital costs. The capital cost of one such district heating system in Bavaria was estimated at somewhat over 1 million € per MW. Direct systems of any size are much simpler than electric generators and have lower maintenance costs per kW·h, but they must consume electricity to run pumps and compressors. Some governments subsidize geothermal projects.
Geothermal power is highly scalable: from a rural village to an entire city. Geothermal projects have several stages of development. Each phase has associated risks. At the early stages of reconnaissance and geophysical surveys, many projects are cancelled, making that phase unsuitable for traditional lending. Projects moving forward from the identification, exploration and exploratory drilling often trade equity for financing.
The Earth's internal thermal energy flows to the surface by conduction at a rate of 44.2 terawatts (TW), and is replenished by radioactive decay of minerals at a rate of 30 TW. These power rates are more than double humanity's current energy consumption from all primary sources, but most of this energy flow is not recoverable. In addition to the internal heat flows, the top layer of the surface to a depth of 10 meters (33 ft) is heated by solar energy during the summer, and releases that energy and cools during the winter.
Outside of the seasonal variations, the geothermal gradient of temperatures through the crust is 25–30 °C (77–86 °F) per kilometer of depth in most of the world. The conductive heat flux averages 0.1 MW/km². These values are much higher near tectonic plate boundaries where the crust is thinner. They may be further augmented by fluid circulation, either through magma conduits, hot springs, hydrothermal circulation or a combination of these.
A geothermal heat pump can extract enough heat from shallow ground anywhere in the world to provide home heating, but industrial applications need the higher temperatures of deep resources. The thermal efficiency and profitability of electricity generation is particularly sensitive to temperature. The more demanding applications receive the greatest benefit from a high natural heat flux, ideally from using a hot spring. The next best option is to drill a well into a hot aquifer. If no adequate aquifer is available, an artificial one may be built by injecting water to hydraulically fracture the bedrock. This last approach is called hot dry rock geothermal energy in Europe, or enhanced geothermal systems in North America. Much greater potential may be available from this approach than from conventional tapping of natural aquifers.
Estimates of the potential for electricity generation from geothermal energy vary sixfold, from 0.035 to 2 TW depending on the scale of investments. Upper estimates of geothermal resources assume enhanced geothermal wells as deep as 10 kilometres (6 mi), whereas existing geothermal wells are rarely more than 3 kilometres (2 mi) deep. Wells of this depth are now common in the petroleum industry. The deepest research well in the world, the Kola superdeep borehole, is 12 kilometres (7 mi) deep. This record has recently been imitated by commercial oil wells, such as Exxon's Z-12 well in the Chayvo field, Sakhalin.
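As a rough illustration of the geothermal gradient figure quoted above, the following Python sketch (an addition, not part of the article) estimates the temperature reached at a given drilling depth; the surface temperature and the exact gradient used here are assumptions, and real temperature profiles vary with local geology.

```python
def temperature_at_depth(depth_km, surface_temp_c=15.0, gradient_c_per_km=27.5):
    """Crude estimate using a constant geothermal gradient (the text cites
    25-30 C per km in most of the world); assumed surface temperature 15 C."""
    return surface_temp_c + gradient_c_per_km * depth_km

for depth in (1, 3, 5, 10):
    print(f"{depth:2d} km: ~{temperature_at_depth(depth):.0f} C")
```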
According to the Geothermal Energy Association (GEA) installed geothermal capacity in the United States grew by 5%, or 147.05 MW, since the last annual survey in March 2012. This increase came from seven geothermal projects that began production in 2012. GEA also revised its 2011 estimate of installed capacity upward by 128 MW, bringing current installed U.S. geothermal capacity to 3,386 MW.
Renewability and sustainability
Geothermal power is considered to be renewable because any projected heat extraction is small compared to the Earth's heat content. The Earth has an internal heat content of 10³¹ joules (3·10¹⁵ TW·hr). About 20% of this is residual heat from planetary accretion, and the remainder is attributed to higher radioactive decay rates that existed in the past. Natural heat flows are not in equilibrium, and the planet is slowly cooling down on geologic timescales. Human extraction taps a minute fraction of the natural outflow, often without accelerating it. Geothermal power is also considered to be sustainable thanks to its power to sustain the Earth's intricate ecosystems. By using geothermal sources of energy present generations of humans will not endanger the capability of future generations to use their own resources to the same amount that those energy sources are presently used. Further, due to its low emissions geothermal energy is considered to have excellent potential for mitigation of global warming.
Even though geothermal power is globally sustainable, extraction must still be monitored to avoid local depletion. Over the course of decades, individual wells draw down local temperatures and water levels until a new equilibrium is reached with natural flows. The three oldest sites, at Larderello, Wairakei, and the Geysers have experienced reduced output because of local depletion. Heat and water, in uncertain proportions, were extracted faster than they were replenished. If production is reduced and water is reinjected, these wells could theoretically recover their full potential. Such mitigation strategies have already been implemented at some sites. The long-term sustainability of geothermal energy has been demonstrated at the Lardarello field in Italy since 1913, at the Wairakei field in New Zealand since 1958, and at The Geysers field in California since 1960.
Falling electricity production may be boosted through drilling additional supply boreholes, as at Poihipi and Ohaaki. The Wairakei power station has been running much longer, with its first unit commissioned in November 1958, and it attained its peak generation of 173 MW in 1965, but already the supply of high-pressure steam was faltering; in 1982 it was derated to intermediate pressure and the station managed 157 MW. Around the start of the 21st century it was managing about 150 MW, then in 2005 two 8 MW isopentane systems were added, boosting the station's output by about 14 MW. Detailed data are unavailable, being lost due to re-organisations. One such re-organisation in 1996 caused the absence of early data for Poihipi (started 1996), and the gap in 1996/7 for Wairakei and Ohaaki; half-hourly data for Ohaaki's first few months of operation are also missing, as well as for most of Wairakei's history.
Fluids drawn from the deep earth carry a mixture of gases, notably carbon dioxide (CO2), hydrogen sulfide (H2S), methane (CH4) and ammonia (NH3). These pollutants contribute to global warming, acid rain, and noxious smells if released.
Existing geothermal electric plants emit an average of 122 kilograms (269 lb) of CO2 per megawatt-hour (MW·h) of electricity, a small fraction of the emission intensity of conventional fossil fuel plants. Plants that experience high levels of acids and volatile chemicals are usually equipped with emission-control systems to reduce the exhaust. In addition to dissolved gases, hot water from geothermal sources may hold in solution trace amounts of toxic elements such as mercury, arsenic, boron, and antimony. These chemicals precipitate as the water cools, and can cause environmental damage if released. The modern practice of injecting cooled geothermal fluids back into the Earth to stimulate production has the side benefit of reducing this environmental risk.
Direct geothermal heating systems contain pumps and compressors, which may consume energy from a polluting source. This parasitic load is normally a fraction of the heat output, so it is always less polluting than electric heating. However, if the electricity is produced by burning fossil fuels, then the net emissions of geothermal heating may be comparable to directly burning the fuel for heat. For example, a geothermal heat pump powered by electricity from a combined cycle natural gas plant would produce about as much pollution as a natural gas condensing furnace of the same size. Therefore the environmental value of direct geothermal heating applications is highly dependent on the emissions intensity of the neighboring electric grid.
Plant construction can adversely affect land stability. Subsidence has occurred in the Wairakei field in New Zealand. In Staufen im Breisgau, Germany, tectonic uplift occurred instead, due to a previously isolated anhydrite layer coming in contact with water and turning into gypsum, doubling its volume. Enhanced geothermal systems can trigger earthquakes as part of hydraulic fracturing. The project in Basel, Switzerland was suspended because more than 10,000 seismic events measuring up to 3.4 on the Richter Scale occurred over the first 6 days of water injection.
Geothermal has minimal land and freshwater requirements. Geothermal plants use 3.5 square kilometres (1.4 sq mi) per gigawatt of electrical production (not capacity) versus 32 square kilometres (12 sq mi) and 12 square kilometres (4.6 sq mi) for coal facilities and wind farms respectively. They use 20 litres (5.3 US gal) of freshwater per MW·h versus over 1,000 litres (260 US gal) per MW·h for nuclear, coal, or oil.
Some of the legal issues raised by geothermal energy resources include questions of ownership and allocation of the resource, the grant of exploration permits, exploitation rights, royalties, and the extent to which geothermal energy issues have been recognised in existing planning and environmental laws. Other questions concern overlap between geothermal and mineral or petroleum tenements. Broader issues concern the extent to which the legal framework for encouragement of renewable energy assists in encouraging geothermal industry innovation and development.
(2008), "Core–mantle boundary heat flow", Nature Geoscience 1: 25, Bibcode:2008NatGe...1...25L, doi:10.1038/ngeo.2007.44 - Nemzer, J, Geothermal heating and cooling - Geothermal capacity | About BP | BP Global, Bp.com, retrieved 2013-10-05 - Fridleifsson, Ingvar B.; Bertani, Ruggero; Huenges, Ernst; Lund, John W.; Ragnarsson, Arni; Rybach, Ladislaus (2008-02-11), O. Hohmeyer and T. Trittin, ed., The possible role and contribution of geothermal energy to the mitigation of climate change, IPCC Scoping Meeting on Renewable Energy Sources, Luebeck, Germany, pp. 59–80, retrieved 2009-04-06 - Glassley, William E. (2010). Geothermal Energy: Renewable Energy and the Environment, CRC Press, ISBN 9781420075700.[page needed] - Green Power. eweb.org - Cothran, Helen (2002), Energy Alternatives, Greenhaven Press, ISBN 0737709049[page needed] - Fridleifsson, Ingvar B (2001), "Geothermal energy for the benefit of the people", Renewable and Sustainable Energy Reviews 5 (3): 299, doi:10.1016/S1364-0321(01)00002-8 - Cataldi, Raffaele (August 1992), "Review of historiographic aspects of geothermal energy in the Mediterranean and Mesoamerican areas prior to the Modern Age", Geo-Heat Centre Quarterly Bulletin (Klamath Falls, Oregon: Oregon Institute of Technology) 18 (1): 13–16, retrieved 2009-11-01 - Lund, John W. (June 2007), "Characteristics, Development and utilization of geothermal resources", Geo-Heat Centre Quarterly Bulletin (Klamath Falls, Oregon: Oregon Institute of Technology) 28 (2): 1–9, retrieved 2009-04-16 - Dickson, Mary H.; Fanelli, Mario (February 2004), What is Geothermal Energy?, Pisa, Italy: Istituto di Geoscienze e Georisorse, retrieved 2010-01-17 - Bertani, Ruggero (September 2007), "World Geothermal Generation in 2007", Geo-Heat Centre Quarterly Bulletin (Klamath Falls, Oregon: Oregon Institute of Technology) 28 (3): 8–19, retrieved 2009-04-12 - Tiwari, G. N.; Ghosal, M. K. (2005), Renewable Energy Resources: Basic Principles and Applications, Alpha Science, ISBN 1-84265-125-0[page needed] - Moore, J. N.; Simmons, S. F. (2013), "More Power from Below", Science 340 (6135): 933, Bibcode:2013Sci...340..933M, doi:10.1126/science.1235640, PMID 23704561 - Zogg, M. (20–22 May 2008), ""History of Heat Pumps Swiss Contributions and International Milestones", 9th International IEA Heat Pump Conference, Zürich, Switzerland - Bloomquist, R. Gordon (December 1999), "Geothermal Heat Pumps, Four Plus Decades of Experience", Geo-Heat Centre Quarterly Bulletin (Klamath Falls, Oregon: Oregon Institute of Technology) 20 (4): 13–18, retrieved 2009-03-21 - Kroeker, J. Donald; Chewning, Ray C. (February 1948), "A Heat Pump in an Office Building", ASHVE Transactions 54: 221–238 - Gannon, Robert (February 1978), "Ground-Water Heat Pumps – Home Heating and Cooling from Your Own Well", Popular Science (Bonnier Corporation) 212 (2): 78–82, retrieved 2009-11-01 - Lund, J. (September 2004), "100 Years of Geothermal Power Production", Geo-Heat Centre Quarterly Bulletin (Klamath Falls, Oregon: Oregon Institute of Technology) 25 (3): 11–19, retrieved 2009-04-13 - McLarty, Lynn; Reed, Marshall J. (1992), "The U.S. Geothermal Industry: Three Decades of Growth", Energy Sources, Part A 14 (4): 443–455, doi:10.1080/00908319208908739 - Erkan, K.; Holdmann, G.; Benoit, W.; Blackwell, D. (2008), "Understanding the Chena Hot flopë Springs, Alaska, geothermal system using temperature and pressure data", Geothermics 37 (6): 565–585, doi:10.1016/j.geothermics.2008.09.001 - GEA 2010, p. 4 - GEA 2010, pp. 4–6 - Khan, M. 
Ali (2007), The Geysers Geothermal Field, an Injection Success Story, Annual Forum of the Groundwater Protection Council, archived from the original on 2011-07-26, retrieved 2010-01-25 - Holm, Alison (May 2010), Geothermal Energy:International Market Update, Geothermal Energy Association, p. 7, retrieved 2010-05-24 - Tester, Jefferson W.; et al. (2006), The Future of Geothermal Energy, Impact of Enhanced Geothermal Systems (Egs) on the United States in the 21st Century: An Assessment, Idaho Falls: Idaho National Laboratory, Massachusetts Institute of Technology, pp. 1–8 to 1–33 (Executive Summary), ISBN 0-615-13438-6, retrieved 2007-02-07 - Bertani, Ruggero (2009), "Geothermal Energy: An Overview on Resources and Potential", Proceedings of the International Conference on National Development of Geothermal Energy Use, Slovakia - Lund, John W. (2003), "The USA Geothermal Country Update", Geothermics 32 (4–6): 409–418, doi:10.1016/S0375-6505(03)00053-1 - Low-Temperature and Co-produced Geothermal Resources. U.S. Department of Energy. - Lund, John W.; Freeston, Derek H.; Boyd, Tonya L. (24–29 April 2005), "World-Wide Direct Uses of Geothermal Energy 2005", Proceedings World Geothermal Congress, Antalya, Turkey - Hanova, J; Dowlatabadi, H (9 November 2007), "Strategic GHG reduction through the use of ground source heat pump technology", Environmental Research Letters 2 (4): 044001, Bibcode:2007ERL.....2d4001H, doi:10.1088/1748-9326/2/4/044001 - Pahl, Greg (2007), The Citizen-Powered Energy Handbook: Community Solutions to a Global Crisis, Vermont: Chelsea Green Publishing - Geothermal Economics 101, Economics of a 35 MW Binary Cycle Geothermal Plant, New York: Glacier Partners, October 2009, retrieved 2009-10-17 - Sanyal, Subir K.; Morrow, James W.; Butler, Steven J.; Robertson-Tait, Ann (January 22–24, 2007), "Cost of Electricity from Enhanced Geothermal Systems", Proc. Thirty-Second Workshop on Geothermal Reservoir Engineering, Stanford, California - In the Netherlands the number of greenhouses heated by geothermal energy is increasing fast. Reif, Thomas (January 2008), "Profitability Analysis and Risk Management of Geothermal Projects", Geo-Heat Centre Quarterly Bulletin (Klamath Falls, Oregon: Oregon Institute of Technology) 28 (4): 1–4, retrieved 2009-10-16 - Lund, John W.; Boyd, Tonya (June 1999), "Small Geothermal Power Project Examples", Geo-Heat Centre Quarterly Bulletin (Klamath Falls, Oregon: Oregon Institute of Technology) 20 (2): 9–26, retrieved 2009-06-02 - Geothermal Energy Association. "Major Companies". Geothermal Energy Association. Retrieved 24 April 2014. - Deloitte, Department of Energy (February 15, 2008). "Geothermal Risk Mitigation Strategies Report". Office of Energy Efficiency and Renewable Energy Geothermal Program. - Pollack, H.N.; S. J. Hurter, and J. R. Johnson (1993), "Heat Flow from the Earth's Interior: Analysis of the Global Data Set", Rev. Geophys. 
30 (3): 267–280, Bibcode:1993RvGeo..31..267P, doi:10.1029/93RG01249 - Rybach, Ladislaus (September 2007), "Geothermal Sustainability", Geo-Heat Centre Quarterly Bulletin (Klamath Falls, Oregon: Oregon Institute of Technology) 28 (3): 2–7, retrieved 2009-05-09 - Cassino, Adam (2003), "Depth of the Deepest Drilling", The Physics Factbook (Glenn Elert), retrieved 2009-04-09 - Watkins, Eric (February 11, 2008), "ExxonMobil drills record extended-reach well at Sakhalin-1", Oil & Gas Journal, retrieved 2009-10-31 - GEA Update Release 2013, Geo-energy.org, 2013-02-26, retrieved 2013-10-09 - "Is Geothermal Energy Renewable and Sustainable", Energy Auditor: Your Headquarters For Smart Sustainable Living:, retrieved 9 August 2012 - Thain, Ian A. (September 1998), "A Brief History of the Wairakei Geothermal Power Project", Geo-Heat Centre Quarterly Bulletin (Klamath Falls, Oregon: Oregon Institute of Technology) 19 (3): 1–4, retrieved 2009-06-02 - Axelsson, Gudni; Stefánsson, Valgardur; Björnsson, Grímur; Liu, Jiurong (April 2005), "Sustainable Management of Geothermal Resources and Utilization for 100 – 300 Years", Proceedings World Geothermal Congress 2005 (International Geothermal Association), retrieved 2010-01-17 - Bertani, Ruggero; Thain, Ian (July 2002), "Geothermal Power Generating Plant CO2 Emission Survey", IGA News (International Geothermal Association) (49): 1–3, archived from the original on 2011-07-26, retrieved 2010-01-17 - Bargagli1, R.; Catenil, D.; Nellil, L.; Olmastronil, S.; Zagarese, B. (1997), "Environmental Impact of Trace Element Emissions from Geothermal Power Plants", Environmental Contamination Toxicology 33 (2): 172–181, doi:10.1007/s002449900239, PMID 9294245 - Staufen: Risse: Hoffnung in Staufen: Quellvorgänge lassen nach. badische-zeitung.de. Retrieved on 2013-04-24. - DLR Portal – TerraSAR-X image of the month: Ground uplift under Staufen's Old Town. Dlr.de (2009-10-21). Retrieved on 2013-04-24. - WECHSELWIRKUNG – Numerische Geotechnik. Wechselwirkung.eu. Retrieved on 2013-04-24. - Deichmann, N.; Mai; Bethmann; Ernst; Evans; Fäh; Giardini; Häring; Husen; et al. (2007), "Seismicity Induced by Water Injection for Geothermal Reservoir Stimulation 5 km Below the City of Basel, Switzerland", American Geophysical Union (American Geophysical Union) 53: 08, Bibcode:2007AGUFM.V53F..08D - GEA (May 2010), Geothermal Energy: International Market Update (PDF), Geothermal Energy Association, pp. 4–6 |Look up geothermal in Wiktionary, the free dictionary.| |Wikimedia Commons has media related to Geothermal energy.| - Alliant Geothermal Energy - Bassfeld Technology Transfer – Introduction to Geothermal Power Generation (3.6 MB PDF file) - The Geothermal Collection by the University of Hawaii at Manoa - Geothermal Resources Council - Energy Efficiency and Renewable Energy – Geothermal Technologies Program - Geothermal Energy Association - International Energy Agency Geothermal Energy Homepage - MIT-led panel backs geothermal energy source - MIT – The Future of Geothermal Energy (14 MB PDF file) - NREL - Interactive Data Map - Geothermal Prospector Tool - Geothermal Energy Factsheet by the University of Michigan's Center for Sustainable Systems - TMBA Animation: Geothermal Energy
Future of an expanding universe(Redirected from Dark Era) Observations suggest that the expansion of the universe will continue forever. If so, then a popular theory is that the universe will cool as it expands, eventually becoming too cold to sustain life. For this reason, this future scenario once popularly called "heat death" is now known as the Big Chill or Big Freeze. If dark energy—represented by the cosmological constant, a constant energy density filling space homogeneously, or scalar fields, such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space—accelerates the expansion of the universe, then the space between clusters of galaxies will grow at an increasing rate. Redshift will stretch ancient, incoming photons (even gamma rays) to undetectably long wavelengths and low energies. Stars are expected to form normally for 1012 to 1014 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. According to theories that predict proton decay, the stellar remnants left behind will disappear, leaving behind only black holes, which themselves eventually disappear as they emit Hawking radiation. Ultimately, if the universe reaches a state in which the temperature approaches a uniform value, no further work will be possible, resulting in a final heat death of the universe. Infinite expansion does not determine the overall spatial curvature of the universe. It can be open (with negative spatial curvature), flat, or closed (positive spatial curvature), although if it is closed, sufficient dark energy must be present to counteract the gravitational forces or else the universe will end in a Big Crunch. Observations of the cosmic background radiation by the Wilkinson Microwave Anisotropy Probe and the Planck mission suggest that the universe is spatially flat and has a significant amount of dark energy. In this case, the universe should continue to expand at an accelerating rate. The acceleration of the universe's expansion has also been confirmed by observations of distant supernovae. If, as in the concordance model of physical cosmology (Lambda-cold dark matter or ΛCDM), dark energy is in the form of a cosmological constant, the expansion will eventually become exponential, with the size of the universe doubling at a constant rate. If the theory of inflation is true, the universe went through an episode dominated by a different form of dark energy in the first moments of the Big Bang; but inflation ended, indicating an equation of state much more complicated than those assumed so far for present-day dark energy. It is possible that the dark energy equation of state could change again resulting in an event that would have consequences which are extremely difficult to parametrize or predict. In the 1970s, the future of an expanding universe was studied by the astrophysicist Jamal Islam and the physicist Freeman Dyson. Then, in their 1999 book The Five Ages of the Universe, the astrophysicists Fred Adams and Gregory Laughlin divided the past and future history of an expanding universe into five eras. The first, the Primordial Era, is the time in the past just after the Big Bang when stars had not yet formed. The second, the Stelliferous Era, includes the present day and all of the stars and galaxies now seen. It is the time during which stars form from collapsing clouds of gas. 
In the subsequent Degenerate Era, the stars will have burnt out, leaving all stellar-mass objects as stellar remnants—white dwarfs, neutron stars, and black holes. In the Black Hole Era, white dwarfs, neutron stars, and other smaller astronomical objects have been destroyed by proton decay, leaving only black holes. Finally, in the Dark Era, even black holes have disappeared, leaving only a dilute gas of photons and leptons. This future history and the timeline below assume the continued expansion of the universe. If space in the universe begins to contract, subsequent events in the timeline may not occur because the Big Crunch, the collapse of the universe into a hot, dense state similar to that after the Big Bang, will supervene. - From the present to about 1014 (100 trillion) years after the Big Bang The observable universe is currently 1.38×1010 (13.8 billion) years old. This time is in the Stelliferous Era. About 155 million years after the Big Bang, the first star formed. Since then, stars have formed by the collapse of small, dense core regions in large, cold molecular clouds of hydrogen gas. At first, this produces a protostar, which is hot and bright because of energy generated by gravitational contraction. After the protostar contracts for a while, its center will become hot enough to fuse hydrogen and its lifetime as a star will properly begin. Stars of very low mass will eventually exhaust all their fusible hydrogen and then become helium white dwarfs. Stars of low to medium mass, such as our own sun, will expel some of their mass as a planetary nebula and eventually become white dwarfs; more massive stars will explode in a core-collapse supernova, leaving behind neutron stars or black holes. In any case, although some of the star's matter may be returned to the interstellar medium, a degenerate remnant will be left behind whose mass is not returned to the interstellar medium. Therefore, the supply of gas available for star formation is steadily being exhausted. Milky Way Galaxy and the Andromeda Galaxy merge into oneEdit - 4–8 billion years from now (17.8–21.8 billion years after the Big Bang) The Andromeda Galaxy is currently approximately 2.5 million light years away from our galaxy, the Milky Way Galaxy, and they are moving towards each other at approximately 300 kilometers (186 miles) per second. Approximately five billion years from now, or 19 billion years after the Big Bang, the Milky Way and the Andromeda Galaxy will collide with one another and merge into one large galaxy based on current evidence. Up until 2012, there was no way to know whether the possible collision was definitely going to happen or not. In 2012, researchers came to the conclusion that the collision is definite after using the Hubble Space Telescope between 2002 and 2010 to track the motion of Andromeda. This results in the formation of Milkomeda (also known as Milkdromeda). Coalescence of Local Group and galaxies outside the Local Supercluster are no longer accessibleEdit - 1011 (100 billion) to 1012 (1 trillion) years The galaxies in the Local Group, the cluster of galaxies which includes the Milky Way and the Andromeda Galaxy, are gravitationally bound to each other. It is expected that between 1011 (100 billion) and 1012 (1 trillion) years from now, their orbits will decay and the entire Local Group will merge into one large galaxy. 
Assuming that dark energy continues to make the universe expand at an accelerating rate, in about 150 billion years all galaxies outside the Local Supercluster will pass behind the cosmological horizon. It will then be impossible for events in the Local Group to affect other galaxies. Similarly it will be impossible for events after 150 billion years, as seen by observers in distant galaxies, to affect events in the Local Group. However, an observer in the Local Supercluster will continue to see distant galaxies, but events they observe will become exponentially more red shifted as the galaxy approaches the horizon until time in the distant galaxy seems to stop. The observer in the Local Supercluster never observes events after 150 billion years in their local time, and eventually all light and background radiation lying outside the local supercluster will appear to blink out as light becomes so redshifted that its wavelength has become longer than the physical diameter of the horizon. Technically, it will take an infinitely long time for all causal interaction between our local supercluster and this light; however, due to the redshifting explained above, the light will not necessarily be observed for an infinite amount of time, and after 150 billion years, no new causal interaction will be observed. Therefore, after 150 billion years intergalactic transportation and communication beyond the Local Supercluster becomes causally impossible. Luminosities of galaxies begin to diminishEdit - 8×1011 (800 billion) years 8×1011 (800 billion) years from now, the luminosities of the different galaxies, approximately similar until then to the current ones thanks to the increasing luminosity of the remaining stars as they age, will start to decrease, as the less massive red dwarf stars begin to die as black dwarfs. Galaxies outside the Local Supercluster are no longer detectableEdit - 2×1012 (2 trillion) years 2×1012 (2 trillion) years from now, all galaxies outside the Local Supercluster will be red-shifted to such an extent that even gamma rays they emit will have wavelengths longer than the size of the observable universe of the time. Therefore, these galaxies will no longer be detectable in any way. - From 1014 (100 trillion) to 1040 (10 duodecillion) years By 1014 (100 trillion) years from now, star formation will end, leaving all stellar objects in the form of degenerate remnants. If protons do not decay, stellar-mass objects will disappear more slowly, making this era last longer. Star formation ceasesEdit - 1012–14 (1–100 trillion) years By 1014 (100 trillion) years from now, star formation will end. This period, known as the Degenerate Era, will last until the degenerate remnants finally decay. The least massive stars take the longest to exhaust their hydrogen fuel (see stellar evolution). Thus, the longest living stars in the universe are low-mass red dwarfs, with a mass of about 0.08 solar masses (M☉), which have a lifetime of order 1013 (10 trillion) years. Coincidentally, this is comparable to the length of time over which star formation takes place. Once star formation ends and the least massive red dwarfs exhaust their fuel, nuclear fusion will cease. The low-mass red dwarfs will cool and become black dwarfs. 
The only objects remaining with more than planetary mass will be brown dwarfs, with mass less than 0.08 M☉, and degenerate remnants; white dwarfs, produced by stars with initial masses between about 0.08 and 8 solar masses; and neutron stars and black holes, produced by stars with initial masses over 8 M☉. Most of the mass of this collection, approximately 90%, will be in the form of white dwarfs. In the absence of any energy source, all of these formerly luminous bodies will cool and become faint. The universe will become extremely dark after the last star burns out. Even so, there can still be occasional light in the universe. One of the ways the universe can be illuminated is if two carbon–oxygen white dwarfs with a combined mass of more than the Chandrasekhar limit of about 1.4 solar masses happen to merge. The resulting object will then undergo runaway thermonuclear fusion, producing a Type Ia supernova and dispelling the darkness of the Degenerate Era for a few weeks. If the combined mass is not above the Chandrasekhar limit but is larger than the minimum mass to fuse carbon (about 0.9 M☉), a carbon star could be produced, with a lifetime of around 106 (1 million) years. Also, if two helium white dwarfs with a combined mass of at least 0.3 M☉ collide, a helium star may be produced, with a lifetime of a few hundred million years. Finally brown dwarfs can form new stars colliding with each other to form a red dwarf star, that can survive for 1013 (10 trillion) years, or accreting gas at very slow rates from the remaining interstellar medium until they have enough mass to start hydrogen burning as red dwarfs too. This process, at least on white dwarfs, could induce Type Ia supernovae too. Planets fall or are flung from orbits by a close encounter with another starEdit - 1015 (1 quadrillion) years Over time, the orbits of planets will decay due to gravitational radiation, or planets will be ejected from their local systems by gravitational perturbations caused by encounters with another stellar remnant. Stellar remnants escape galaxies or fall into black holesEdit - 1019 to 1020 (10 to 100 quintillion) years Over time, objects in a galaxy exchange kinetic energy in a process called dynamical relaxation, making their velocity distribution approach the Maxwell–Boltzmann distribution. Dynamical relaxation can proceed either by close encounters of two stars or by less violent but more frequent distant encounters. In the case of a close encounter, two brown dwarfs or stellar remnants will pass close to each other. When this happens, the trajectories of the objects involved in the close encounter change slightly, in such a way that their kinetic energies are more nearly equal than before. After a large number of encounters, then, lighter objects tend to gain speed while the heavier objects lose it. Because of dynamical relaxation, some objects will gain enough energy to reach galactic escape velocity and depart the galaxy, leaving behind a smaller, denser galaxy. Since encounters are more frequent in the denser galaxy, the process then accelerates. The end result is that most objects (90% to 99%) are ejected from the galaxy, leaving a small fraction (maybe 1% to 10%) which fall into the central supermassive black hole. It has been suggested that the matter of the fallen remnants will form an accretion disk around it that will create a quasar, as long as enough matter is present there. 
Nucleons start to decayEdit - Chance: 1034 (10 decillion) – 1039 years (1 duodecillion) The subsequent evolution of the universe depends on the possibility and rate of proton decay. Experimental evidence shows that if the proton is unstable, it has a half-life of at least 1034 years. Some of the Grand Unified theories (GUTs) predict long-term proton instability between 1031 and 1036 years, with the upper bound on standard (non-supersymmetry) proton decay at 1.4×1036 years and an overall upper limit maximum for any proton decay (including supersymmetry models) at 6×1039 years. Recent research showing proton lifetime (if unstable) at or exceeding 1034–1035 year range rules out simpler GUTs and most non-supersymmetry models. Neutrons bound into nuclei are also expected to decay with a half-life comparable to that of protons. Planets (substellar objects) would decay in a simple cascade process from heavier elements to pure hydrogen while radiating energy. In the event that the proton does not decay at all, stellar objects would still disappear, but more slowly. See Future without proton decay below. Shorter or longer proton half-lives will accelerate or decelerate the process. This means that after 1037 years (the maximum proton half-life used by Adams & Laughlin (1997)), one-half of all baryonic matter will have been converted into gamma ray photons and leptons through proton decay. All nucleons decayEdit - 1040 (10 duodecillion) years Given our assumed half-life of the proton, nucleons (protons and bound neutrons) will have undergone roughly 1,000 half-lives by the time the universe is 1040 years old. To put this into perspective, there are an estimated 1080 protons currently in the universe. This means that the number of nucleons will be slashed in half 1,000 times by the time the universe is 1040 years old. Hence, there will be roughly 0.51,000 (approximately 10−301) as many nucleons remaining as there are today; that is, zero nucleons remaining in the universe at the end of the Degenerate Age. Effectively, all baryonic matter will have been changed into photons and leptons. Some models predict the formation of stable positronium atoms with a greater diameter than the observable universe's current diameter in 1085 years, and that these will in turn decay to gamma radiation in 10141 years. If protons decay on higher order nuclear processesEdit - Chance: 10100 to 10200 years In the event that the proton does not decay according to the theories described above, the Degenerate Era will last longer, and will overlap or surpass the Black Hole Era. However, degenerate stellar objects can still experience proton decay, for example via processes involving the Adler–Bell–Jackiw anomaly, virtual black holes, or higher-dimension supersymmetry possibly with a half-life of under 10200 years. Black Hole EraEdit - 1040 (10 duodecillion) years to approximately 10100 (1 googol) years, up to 10106 years for the largest supermassive black holes After 1040 years, black holes will dominate the universe. They will slowly evaporate via Hawking radiation. A black hole with a mass of around 1 M☉ will vanish in around 2×1066 years. As the lifetime of a black hole is proportional to the cube of its mass, more massive black holes take longer to decay. A supermassive black hole with a mass of 1011 (100 billion) M☉ will evaporate in around 2×10100 years. The monster black holes in the universe are predicted to continue to grow. 
Larger black holes of up to 1014 (100 trillion) M☉ may form during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 10106 years. Hawking radiation has a thermal spectrum. During most of a black hole's lifetime, the radiation has a low temperature and is mainly in the form of massless particles such as photons and hypothetical gravitons. As the black hole's mass decreases, its temperature increases, becoming comparable to the Sun's by the time the black hole mass has decreased to 1019 kilograms. The hole then provides a temporary source of light during the general darkness of the Black Hole Era. During the last stages of its evaporation, a black hole will emit not only massless particles, but also heavier particles, such as electrons, positrons, protons, and antiprotons. Dark Era and Photon AgeEdit - From 10100 years (10 duotrigintillion years or 1 googol years) After all the black holes have evaporated (and after all the ordinary matter made of protons has disintegrated, if protons are unstable), the universe will be nearly empty. Photons, neutrinos, electrons, and positrons will fly from place to place, hardly ever encountering each other. Gravitationally, the universe will be dominated by dark matter, electrons, and positrons (not protons). By this era, with only very diffuse matter remaining, activity in the universe will have tailed off dramatically (compared with previous eras), with very low energy levels and very large time scales. Electrons and positrons drifting through space will encounter one another and occasionally form positronium atoms. These structures are unstable, however, and their constituent particles must eventually annihilate. Other low-level annihilation events will also take place, albeit very slowly. The universe now reaches an extremely low-energy state. - Beyond 102500 years (10 duotrigintaoctingentillion years) Presumably, extreme low-energy states imply that localized quantum events become major macroscopic phenomena rather than negligible microscopic events because the smallest perturbations make the biggest difference in this era, so there is no telling what may happen to space or time. It is perceived that the laws of "macro-physics" will break down, and the laws of quantum physics will prevail. The universe could possibly avoid eternal heat death through random quantum tunnelling and quantum fluctuations, given the non-zero probability of producing a new Big Bang in roughly 10101056 years. The possibilities above are based on a simple form of dark energy. But the physics of dark energy are still a very active area of research, and the actual form of dark energy could be much more complex. For example, during inflation dark energy affected the universe very differently than it does today, so it is possible that dark energy could trigger another inflationary period in the future. Until dark energy is better understood its possible effects are extremely difficult to predict or parametrize. Future without proton decayEdit Possible ionization of matterEdit - >1023 years from now In an expanding universe with decreasing density and nonzero cosmological constant, matter density would reach zero, resulting in all matter including stellar objects and planets ionizing and dissipating at thermal equilibrium. 
Sphaleron transitions and possible baryon violationEdit - >10150 years from now Although protons are stable in standard model physics, a quantum anomaly may exist on the electroweak level, which can cause groups of baryons (protons and neutrons) to annihilate into antileptons via the sphaleron transition. Such baryon/lepton violations have a number of 3 and can only occur in multiples or groups of three baryons, which can restrict or prohibit such events. No experimental evidence of sphalerons has yet been observed at low energy levels, though they are believed to occur regularly at high energies and temperatures. Matter decays into ironEdit - 101500 years from now In 101500 years, cold fusion occurring via quantum tunnelling should make the light nuclei in ordinary matter fuse into iron-56 nuclei (see isotopes of iron). Fission and alpha particle emission should make heavy nuclei also decay to iron, leaving stellar-mass objects as cold spheres of iron, called iron stars. Black Hole EraEdit Collapse of iron star to black holeEdit - 101026 to 101076 years from now Quantum tunnelling should also turn large objects into black holes. Depending on the assumptions made, the time this takes to happen can be calculated as from 101026 years to 101076 years. Quantum tunnelling may also make iron stars collapse into neutron stars in around 101076 years. - Big Rip – A cosmological model based on an exponentially increasing rate of expansion - Big Crunch – Theoretical scenario for the ultimate fate of the universe - Big Bounce – A hypothetical cosmological model for the origin of the known universe - Big Bang – The prevailing cosmological model for the observable universe - Chronology of the universe – The history and future of the universe according to Big Bang cosmology - Cyclic model - Dyson's eternal intelligence – A means by which an immortal society of intelligent beings in an open universe may escape the prospect of heat death by extending subjective time to infinity - Entropy (arrow of time) - Final anthropic principle - Graphical timeline of the Stelliferous Era - Graphical timeline of the Big Bang - Graphical timeline from Big Bang to Heat Death. This timeline uses the double-logarithmic scale for comparison with the graphical timeline included in this article. - Graphical timeline of the universe. This timeline uses the more intuitive linear time, for comparison with this article. - Heat death of the universe – A possible end of the universe - Timeline of the Big Bang - Timeline of the far future – Timeline of the far future - The Last Question – A short story by Isaac Asimov which considers the inevitable oncome of heat death in the universe and how it may be reversed. - Ultimate fate of the universe - WMAP – Fate of the Universe, WMAP's Universe, NASA. Accessed online July 17, 2008. - Sean Carroll (2001). "The cosmological constant". Living Reviews in Relativity. 4 (1): 1. arXiv:astro-ph/0004075. Bibcode:2001LRR.....4....1C. doi:10.12942/lrr-2001-1. Archived from the original on 2006-10-13. Retrieved 2006-09-28. - Krauss, Lawrence M.; Starkman, Glenn D. (2000). "Life, the Universe, and Nothing: Life and Death in an Ever-expanding Universe". Astrophysical Journal. 531 (1): 22–30. arXiv:astro-ph/9902189. Bibcode:2000ApJ...531...22K. doi:10.1086/308434. - Adams, Fred C.; Laughlin, Gregory (1997). "A dying universe: the long-term fate and evolution of astrophysical objects". Reviews of Modern Physics. 69 (2): 337–372. arXiv:astro-ph/9701131. Bibcode:1997RvMP...69..337A. 
doi:10.1103/RevModPhys.69.337. - Adams & Laughlin (1997), §IIE. - Adams & Laughlin (1997), §IV. - Adams & Laughlin (1997), §VID - Chapter 7, Calibrating the Cosmos, Frank Levin, New York: Springer, 2006, ISBN 0-387-30778-8. - Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing, Sky Maps, and Basic Results, G. Hinshaw et al., The Astrophysical Journal Supplement Series (2008), submitted, arXiv:0803.0732, Bibcode: 2008arXiv0803.0732H. - Planck 2015 results. XIII. Cosmological parameters arXiv:1502.01589 - Possible Ultimate Fate of the Universe, Jamal N. Islam, Quarterly Journal of the Royal Astronomical Society 18 (March 1977), pp. 3–8, Bibcode: 1977QJRAS..18....3I - Dyson, Freeman J. (1979). "Time without end: Physics and biology in an open universe". Reviews of Modern Physics. 51 (3): 447–460. Bibcode:1979RvMP...51..447D. doi:10.1103/RevModPhys.51.447. - The Five Ages of the Universe, Fred Adams and Greg Laughlin, New York: The Free Press, 1999, ISBN 0-684-85422-8. - Adams & Laughlin (1997), §VA - Planck collaboration (2013). "Planck 2013 results. XVI. Cosmological parameters". Astronomy & Astrophysics. 571: A16. arXiv:1303.5076. Bibcode:2014A&A...571A..16P. doi:10.1051/0004-6361/201321591. - Laughlin, Gregory; Bodenheimer, Peter; Adams, Fred C. (1997). "The End of the Main Sequence". The Astrophysical Journal. 482 (1): 420–432. Bibcode:1997ApJ...482..420L. doi:10.1086/304125. - Heger, A.; Fryer, C. L.; Woosley, S. E.; Langer, N.; Hartmann, D. H. (2003). "How Massive Single Stars End Their Life". Astrophysical Journal. 591 (1): 288–300. arXiv:astro-ph/0212469. Bibcode:2003ApJ...591..288H. doi:10.1086/375341. - van der Marel, G.; et al. (2012). "The M31 Velocity Vector. III. Future Milky Way M31-M33 Orbital Evolution, Merging, and Fate of the Sun". The Astrophysical Journal. 753 (1): 9. arXiv:1205.6865. Bibcode:2012ApJ...753....9V. doi:10.1088/0004-637X/753/1/9. - Cowen, R. (31 May 2012). "Andromeda on collision course with the Milky Way". Nature. doi:10.1038/nature.2012.10765. - Adams, F. C.; Graves, G. J. M.; Laughlin, G. (December 2004). García-Segura, G.; Tenorio-Tagle, G.; Franco, J.; Yorke, H. W., eds. "Gravitational Collapse: From Massive Stars to Planets. / First Astrophysics meeting of the Observatorio Astronomico Nacional. / A meeting to celebrate Peter Bodenheimer for his outstanding contributions to Astrophysics: Red Dwarfs and the End of the Main Sequence". Revista Mexicana de Astronomía y Astrofísica (Serie de Conferencias). 22: 46–49. Bibcode:2004RMxAC..22...46A. See Fig. 3. - Adams & Laughlin (1997), § III–IV. - Adams & Laughlin (1997), §IIA and Figure 1. - Adams & Laughlin (1997), §IIIC. - The Future of the Universe, M. Richmond, lecture notes, "Physics 240", Rochester Institute of Technology. Accessed on line July 8, 2008. - Brown Dwarf Accretion: Nonconventional Star Formation over Very Long Timescales, Cirkovic, M. M., Serbian Astronomical Journal 171, (December 2005), pp. 11–17. Bibcode: 2005SerAJ.171...11C - Adams & Laughlin (1997), §IIIF, Table I. - p. 428, A deep focus on NGC 1883, A. L. Tadross, Bulletin of the Astronomical Society of India 33, #4 (December 2005), pp. 421–431, Bibcode: 2005BASI...33..421T. - Reading notes, Liliya L. R. Williams, Astrophysics II: Galactic and Extragalactic Astronomy, University of Minnesota, accessed July 20, 2008. - Deep Time, David J. Darling, New York: Delacorte Press, 1989, ISBN 978-0-38529-757-8. 
- G Senjanovic Proton decay and grand unification, Dec 2009 - "Upper Bound on the Proton Lifetime and the Minimal Non-SUSY Grand Unified Theory", Pavel Fileviez Perez, Max Planck Institute for Nuclear Physics, June 2006. doi:10.1063/1.2735205 - Pran Nath and Pavel Fileviez Perez, "Proton Stability in Grand Unified Theories, in Strings and in Branes", Appendix H; 23 April 2007. arXiv:hep-ph/0601023 https://arxiv.org/abs/hep-ph/0601023 - Adams & Laughlin (1997), §IV-H. - Solution, exercise 17, One Universe: At Home in the Cosmos, Neil de Grasse Tyson, Charles Tsun-Chu Liu, and Robert Irion, Washington, D.C.: Joseph Henry Press, 2000. ISBN 0-309-06488-0. - Particle emission rates from a black hole: Massless particles from an uncharged, nonrotating hole, Don N. Page, Physical Review D 13 (1976), pp. 198–206. doi:10.1103/PhysRevD.13.198. See in particular equation (27). - Frautschi, S., 1982. Entropy in an expanding universe. Science, 217(4560), pp.593-599. See page 596: table 1 and section "black hole decay" and previous sentence on that page Since we have assumed a maximum scale of gravitational binding – for instance, superclusters of galaxies – black hole formation eventually comes to an end in our model, with masses of up to 1014M☉ ... the timescale for black holes to radiate away all their energy ranges ... to 10106 years for black holes of up to 1014M☉. - Adams & Laughlin (1997), §VD. - Adams & Laughlin (1997), §VF3. - Caldwell, Robert R.; Kamionkowski, Marc; and Weinberg, Nevin N. (2003). "Phantom energy and cosmic doomsday". arXiv:astro-ph/0302506. Bibcode:2003PhRvL..91g1301C. doi:10.1103/PhysRevLett.91.071301. - Bohmadi-Lopez, Mariam; Gonzalez-Diaz, Pedro F.; and Martin-Moruno, Prado (2008). "Worse than a big rip?". arXiv:gr-qc/0612135. Bibcode:2008PhLB..659....1B. doi:10.1016/j.physletb.2007.10.079. - Adams & Laughlin (1997), §VE. - Carroll, Sean M. and Chen, Jennifer (2004). "Spontaneous Inflation and Origin of the Arrow of Time". arXiv:hep-th/0410270. Bibcode:2004hep.th...10270C. - Tegmark, Max (2003) "Parallel Universes". arXiv:astro-ph/0302131. Bibcode:2003SciAm.288e..40T. doi:10.1038/scientificamerican0503-40. - Werlang, T., Ribeiro, G. A. P. and Rigolin, Gustavo (2012) "Interplay between quantum phase transitions and the behavior of quantum correlations at finite temperatures". arXiv:1205.1046. Bibcode:2012IJMPB..2745032W. doi:10.1142/S021797921345032X. - Xing, Xiu-San (2007) "Spontaneous entropy decrease and its statistical formula". arXiv:0710.4624. Bibcode:2007arXiv0710.4624X. - Linde, Andrei (2007) "Sinks in the Landscape, Boltzmann Brains, and the Cosmological Constant Problem". arXiv:hep-th/0611043. Bibcode:2007JCAP...01..022L. doi:10.1088/1475-7516/2007/01/022. - John Baez, University of California-Riverside (Department of Mathematics), "The End of the Universe" 7 February 2016 http://math.ucr.edu/home/baez/end.html - G. 't Hooft, "Symmetry breaking through Bell-Jackiw anomalies". Phys. Rev. Lett. 37 (1976) 8
- The electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions. It consists of billions of transistors on the semiconductor material silicon. Transisters are assembled into logic gates (AND, OR, NOT) to make logical decisions based on the digital signals present on its inputs. - Memory (RAM) - Quick and temporary storage space for data. The CPU has exclusive access to RAM and processes data through memory rather than much slower storage devices. Once the process is over or the computer is shut down, the data from memory is wiped. CPUs also have a small cache which stores copies of frequently accessed RAM data for even faster retrieval. - Input/Output (IO) - Examples of input devices include the mouse and keybord. Output devices are monitors, printers, speakers, etc. A hard drive can be an input or output device depending on whether the CPU is reading or writing to the disk. There is also virtual memory which provides additional inactive memory from storage devices through a technique called paging. - The glue that connects all components together. It is connected to a power supply and contains expansion slots to allow for additional functionality. For example, a GPU can offload video processing from the CPU. The operating system (OS) The operating system determines how to allocate resources (CPU and memory) for all the other applications running on your computer. Once an application is opened, the program is loaded into memory and it is called a “process”. Each process has a separate memory address space. It cannot efficiently communicate with other processes or access shared data in other processes. Switching from one process to another requires some time (relatively) for saving and loading registers, memory maps, and other resources. Every process needs 3 items: - Register - a part of the CPU that holds an instruction, a storage address, or other kind of data needed by the process - Program counter - (aka instruction pointer) keeps track of where a computer is in its program sequence - Stack - a data structure that stores information in memory about the active subroutines of a computer program and is used as scratch space for the process. It includes the Heap which is the dynamically allocated memory portion of the stack. A process can have one thread or have many threads. A thread is the unit of execution within a process. Each thread will have its own stack but all the threads in the process share the same heap. This means that communication between threads is efficient. However, an error in one thread can affect the other threads in the same process. With a multithreaded process, the CPU can perform operations in parallel or concurrently. - Parallelism - Genuine simultaneous execution in a multi-core processor. - Concurrency - Interleaving of processes within one processor, giving the illusion of simultaneous execution. Because a transmitter can only relay on and off, computers use the base-2 (binary) numeral system using only two symbols: “0” and “1”. Bit vs Byte A bit is the basic unit of information representing either 0 or 1. A byte represents a string of 8 bits. A kilobyte is equal to 1,024 (210) bytes and a kilobit is equal to 1,024 (210) bits. A megabyte is equal to 1,024 (2100) bytes and a megabit is equal to 1,024 (2100) bits. 
A common misunderstanding relating to bits and bytes occurs when referring to data size and data speed. Speed is measured in bits and size is measured in bytes. This means that 4G service at 100 megabits per second (Mb/s) allows downloading at 12.5 megabytes per second (MB/s). Note Mb vs MB. Software to binary Since software is written in human readable code, there are a few different ways for a CPU to execute that code. - Compilation - Source code gets converted into binary or machine code by a compiler and saved as an executable. - Interpretted - The program executes at runtime by an interpretter that parses code line by line. It has the benefit of not having to be compiled, but is slower than running native machine code. - Just-in-time compilation - A combination of the two above approaches where the interpreter profiles the program and compiles the most frequently executed parts into native code at runtime. Qubits are 1s and 0s that rely on superposition to have states determined at the time of observation. This means that a qubyte (8 qubits) can have up to 256 different outcomes when executed at runtime. The Command line interface (CLI) is a means of interacting with a computer program where the user (or client) issues commands to the program in the form of successive lines of text (command lines). Most users rely upon graphical user interfaces and menu-driven interactions with a mouse. However, many software developers, system administrators and advanced users still rely heavily on command-line interfaces to perform tasks more efficiently, configure their machine, or access programs and program features that are not available through a graphical interface. A Unix shell is a command-line interpreter or shell that provides a command line user interface for Unix-like operating systems. Bash is a Unix shell and command language that can be used for executing commands from the CLI. Windows has their own their shell and command language. Install Git Bash on Windows to run these commands.
The title might sound daunting, but properties of limits (also called limit laws) are just shortcuts to finding limits of functions. How To Use Properties of Limits To find a limit using the properties of limits rule: - Figure out what kind of function you are dealing with in the list of “Function Types” below (for example, an exponential function or a logarithmic function), - Click on the function name to skip to the correct rule, - Substitute your specific function into the rule. Click a function name in the left column to skip to that rule. |1. Constant function||f(x) = C||y = 5| |2. Constant multiplied by another function||k * f(x)||5 * 10x2| |3. Sum of functions||f(x) + g(x) + …||10x2 + 5x| |4. Product of two or more functions||f(x) * g(x) * …||10x2 * 5x| |5.Quotient Law||f(x) / g(x)||5x / 10x2| |6. Power functions||f(x) = axp||10x2| |7. Exponential functions||f(x) = bx||10x| |8. Logarithmic functions||f(x) = logbx||log10x| The limit of a constant (k) multiplied by a function equals the constant multiplied by the limit of the function. - The limit of f(x) = 5 is 5 (from rule 1 above). - The limit of 10x2 at x = 2 can be found with direct substitution (where you just plug in the x-value): 10((22) = 40 - Multiply your answers from (1) and (2) together: 5 * 40 = 200 Tip: Plot a graph (using a graphing calculator) to check your answers. The limit of a sum equals the sum of the limits. In other words, figure out the limit for each piece, then add them together. For step by step examples, see: Sum rule for limits. The limit of a product (multiplication) is equal to the product of the limits. In other words, find the limits of the individual parts and then multiply those together. Example: Find the limit as x→2 for x2 · 5 · 10x - The limit of x2 as x→2 (using direct substitution) is x2 = 22 = 4 - The limit of the constant 5 (rule 1 above) is 5 - Limit of 10x (using direct substitution again) = 10(2) = 20 - Multiply (1), (2) and (3) together: 4 · 5 · 20 = 400 Extended Product Rule Any “extended” formulas in properties of limits are just extensions of other formulas. This one is just an extension of the product rule above: you can just keep on multiplying as many parts as you need (e.g. a * b * c * d * …). The limit of a quotient is equal to the quotient of the limits. In other words: - Find the limit for the numerator, - Find the limit for the denominator, - Divide the two (assuming that the denominator isn’t zero!). The rule for power functions states: The limit of the power of a function is the power of the limit of the function, where p is any real number. Example: Find the limit of the function f(x) = x2 as x→2. - Remove the power: f(x) = x - Find the limit of step 1 at the given x-value (x→2): the limit of f(x) = 2 at x = 2 is 2. You can use direct substitution or a graph like the one on the left. - Put the power back in: 22 = 4 A particular case involving a radical: Also, if f(x) = xn, then: This particular part of the properties of limits “rule” for power functions is really just a shortcut: The limit of x power is a power when x approaches a. Properties of Limits: References Gunnels, P. (undated). Limit Laws. Retrieved May 29, 2019 from: http://people.math.umass.edu/~gunnells/teaching/Sample_Lecture_Notes.pdf Stephanie Glen. "Properties of Limits (Limit Laws)" From CalculusHowTo.com: Calculus for the rest of us! https://www.calculushowto.com/limit-of-functions/properties-of-limits-laws/ Need help with a homework or test question? 
With Chegg Study, you can get step-by-step solutions to your questions from an expert in the field. Your first 30 minutes with a Chegg tutor is free!
To make data transmission more extensible and efficient than a simple peer-to-peer network, network designers use specialized network devices, such as hubs, bridges and switches, routers, and wireless access points, to send data between devices. Hubs, shown in Figure 1, extend the range of a network by receiving data on one port and then regenerating the data and sending it out to all other ports. A hub can also function as a repeater. A repeater extends the reach of a network because it rebuilds the signal, which overcomes the effects of data degradation over distance. The hub can also connect to another networking device, like a switch or router that connects to other sections of the network. Hubs are used less often today because of the effectiveness and low cost of switches. Hubs do not segment network traffic, so they decrease the amount of available bandwidth for all devices connected to them. In addition, because hubs cannot filter data, a lot of unnecessary network traffic constantly moves between all the devices connected to it. Bridges and Switches Files are broken up into small pieces of data, called packets, before they are transmitted over a network. This process allows for error checking and easier retransmission if the packet is lost or corrupted. Address information is added to the beginning and end of packets before they are transmitted. The packet, along with the address information, is called a frame. LANs are often divided into sections called segments, similar to the way a company is divided into departments, or a school is divided into classes. The boundaries of segments can be defined using a bridge. A bridge filters network traffic between LAN segments. Bridges keep a record of all the devices on each segment to which the bridge is connected. When the bridge receives a frame, the bridge examines the destination address to determine if the frame is to be sent to a different segment or dropped. The bridge also helps to improve the flow of data by keeping frames confined to only the segment to which the frame belongs. Switches, shown in Figure 2, are sometimes called multiport bridges. A typical bridge has two ports, linking two segments of the same network. A switch has several ports, depending on how many network segments are to be linked. A switch is a more sophisticated device than a bridge. In modern networks, switches have replaced hubs as the central point of connectivity. Like a hub, the speed of the switch determines the maximum speed of the network. However, switches filter and segment network traffic by sending data only to the device to which it is sent. This provides higher dedicated bandwidth to each device on the network. Switches maintain a switching table. The switching table contains a list of all MAC addresses on the network, and a list of which switch port can be used to reach a device with a given MAC address. The switching table records MAC addresses by inspecting the source MAC address of every incoming frame, as well as the port on which the frame arrives. The switch then creates a switching table that maps MAC addresses to outgoing ports. When a frame arrives that is destined for a particular MAC address, the switch uses the switching table to determine which port to use to reach the MAC address. The frame is forwarded from the port to the destination. By sending frames out of only one port to the destination, other ports are not affected. 
Power over Ethernet (PoE) A PoE switch transfers small amounts of DC current over Ethernet cable, along with data, to power PoE devices. Low voltage devices that support PoE, such as Wi-Fi access points, surveillance video devices, and NICs, can be powered from remote locations. Devices that support PoE can receive power over an Ethernet connection at distances up to 330 ft (100 m) away.
In this article, we’ll learn derivative formulas. We’ll discuss the limit definition of the derivative and introduce the most common derivative formulas. Finally, we’ll walk through examples of how to find the derivative of a function. Derivative formulas are equations that give quick solutions to common derivative problems. We refer to them as rules—like the power rule and the chain rule, to name a few. More on these later. These formulas come from the limit definition of the derivative, and they streamline the differentiation process. That’s why we can also call them differentiation formulas. What Is a Derivative? The derivative of a function at a point x is equal to the slope of the tangent line at x. This slope value represents the instantaneous rate of change at that point. Differentiation is the process of determining the derivative of a function. For example, in the graph below, the function f(x)=ln(x) is in blue. The red line is f(x)=x−1, which is the line tangent to f at x=1. A tangent line to a point on a function is a line that just barely touches the function at that point. The slope of this tangent line f(x)=x−1 is 1, which means that the derivative of f(x)=ln(x) is 1 at x=1. We formally define derivatives using limits: The above equation represents the limit of the average rate of change of f over the interval [x,x+Δx] as Δx approaches 0. We also know the average rate of change of a function as the slope of the secant line. In this notation, Δx represents a small change in x. If this limit exists, then L is the derivative. Elements of a Derivative The notation f’(a) represents the derivative of a function f at some point a. You might hear this notation read aloud as either “the derivative of f evaluated at a” or “f prime at a.” The expressions f’(x) and dxdy both represent the general derivative function of f. The latter notation is called Leibniz’s notation. By plugging any point a into the resulting function f’(x), we can determine the slope of the tangent line of f at any point on the curve. Key Derivative Formulas It’s essential to know how to use the limit definition to calculate a derivative. However, the limit definition can be clunky to use. Usually, we rely upon the standard derivative formulas below to differentiate, instead of using the formal limit definition: Here are the steps to finding derivatives using the limit definition: Substitute your function into the limit definition of a derivative formula: The hardest part of this step is the correct substitution of x+h in the first term. You need to substitute x with the expression (x+Δx) wherever x appears in f(x). Evaluate the resulting limit. Practice Example 1 For example, let’s find the derivative of the function f(x)=3x2. Step 1 - Substitute Let's sub in our function f(x)=3x2 into the limit definition of a derivative. Step 2 - Simplify We can do this by expanding the term 3(x+Δx)2 and then combining like terms. Then we can divide by Δx since Δx is present in all terms of the numerator and denominator. Step 3 - Evaluate Now examine the limit as Δx approaches 0. Polynomials are always continuous. To evaluate this limit, we can substitute Δx=0 directly into the function we’re left with. So, we’ve found that the derivative of f(x)=3x2 is f’(x)=6x. This is the general derivative formula for any point on the curve of f. To find the derivative at a single point, we can plug x=a into f’(x)=6x. For example, we can say that: f’(1)=6(1)=6, which represents the slope of the tangent line at x=1. 
f’(5)=6(5)=30, which represents the slope of the tangent line at x=5. f’(−60)=6(−60)=−360, which represents the slope of the tangent line at x=−60. f’(0)=6(0)=0, which represents the slope of the tangent line at x=0. Practice Example 2 We can also use the differentiation formulas to evaluate derivatives. For example, let’s find the derivative of f(x)=4x4+ex. We can use the sum rule, which states that the derivative of a sum of functions is equal to the sum of their derivatives. We’ll also need to use the power rule for the 4x4 term, where our exponent is n=4. Finally, we’ll use the exponential function rule for ex, which tells us that dxdex=ex. This gives us: Practice Example 3 For a slightly trickier example, let’s find the derivative of f(x)=sin(4x)cos(2x). First, we’ll need the product rule. This says that the derivative of a product of functions is the sum of the first function times the derivative of the second, and the second function times the derivative of the first. Along with the sine and cosine derivative rules, we’ll also need the chain rule, since we have a composition of functions using the sine and cosine functions. The chain rule states that the derivative of a composition of functions is equal to the derivative of the outside function, multiplied by the derivative of the inside function. This means that the derivative of sin(4x) is 4cos(4x), and the derivative of cos(2x) is −2sin(2x). Types of Derivatives First and second derivatives provide different information about the behavior of a function. We use the sign of the first derivative to determine if a function is increasing, decreasing, or constant on an interval I: If f’(x)>0 for each x on I, then f is increasing on I. If f’(x) < 0 for each x on I, then f is decreasing on I. If f’(x)=0 for each x on I, then f is constant on I. We find second derivatives by simply taking the derivative of the first derivative. Second derivatives inform us of the shape of a function. This characteristic is called concavity. We use the sign of the second derivative to determine intervals of concavity: If f’’(x)>0 for each x on I, then f is concave up on I.
Transformations on a Coordinate Plane. Determine the different transformations. 5th Grade Geometry 5th Grade Geometry Vocabulary An equation of a line and a point is given. Your job is to find a parallel line to the equation that goes through the given point. Gill's Angles: Angles, Triangles, Parallel Lines, Transversals Angles, Triangles, Parallel Lines, Transversals Geometry - Chapter 6 Test Review A review game for our Chapter 6 Test in Geometry. Pythagorean Theorem: Midpoint, Perimeter and Area Unit 5 Lesson 2: We will work on finding the midpoint, the Pythagorean theorem and finding the area and perimeter of triangles and rectangles. Mr. B's Polygon Review a review on triangle area Siapa Saya? (3D Shapes in Malay Language) Mengenalpasti bentuk berdasarkan ciri-cirinya. Tic Tac Toe Area and Perimeter Solve an area or perimeter problem to place an X or O on the board! If your answer is correct, click the empty square where you want to place your X or O. Beat the computer by getting 3 in a row!
"Sound," "Valid" and "True" In formal logic, "sound", "valid", and "true" are not synonymous The simplest version of an argument in formal logic is the syllogism: A set of at least two premises, and then a conclusion that follows from the premises according to a set of standard relationships (which are too complex to go into for this article). A classic example syllogism: Premise: All men are mortal. Premise: Socrates is a man. Conclusion: Therefore, Socrates is mortal. Syllogisms are useful because they reduce an argument to its simplest form, making it easier to examine for flaws. The premises and conclusion can be "true" or "false"; the chain of reasoning itself can be "valid" or "invalid"; and the argument as a whole is either "sound" or "unsound". refers to the factual accuracy of each individual premise and conclusion. It has nothing to do with whether the argument as a whole is correct or not ; a statement can be true even if it is used in an incoherent argument. Premise: All dogs are animals. Premise: All cats are animals. Conclusion: Therefore, all turtles are animals. Both of the premises are true, and the conclusion is true; all turtles are indeed animals. The truth of all three statements is not affected by the clearly invalid logic connecting them. refers to whether the chain of reasoning that connects the premises and conclusion is logical or not. An argument is valid if it is impossible for its conclusion to be false while the premises are true. It has nothing to do with whether the premises and conclusion are in fact true. All dogs are terriers. (This is also a false premise.) Although all three parts of this argument are obviously wrong, together they form a valid argument. IF all animals were dogs AND all dogs were terriers, then all animals would indeed be terriers. refers to the argument as a whole. An argument is sound if the logic is valid and all the premises are true. If the argument is sound, then the conclusion must be accepted as true, by the definition of "valid". An unsound argument is one that contains a Logical Fallacy , making it invalid, or contains a false premise. All terriers are dogs. All dogs are animals. Therefore, all terriers are animals. The argument is sound, because the premises and conclusion are true and the logic is valid. This is a valid and factually correct argument. Strength and Cogency: Inductive logic The above only refers to deductive logic. When it comes to induction, things get more complicated. Inductive logic is drawing likely conclusions from true premises. All inductive arguments are, by their nature, invalid; induction relies on probability as a central element rather than certainty. Validity requires that true premises NEVER lead to a false conclusion, but a probabilistic premise specifically breaks that rule. This doesn't make them any less useful, and in fact most of the logic people make use of on a day to day basis is inductive, based on previous experience and 'rules of thumb' rather than strict, unbreakable truths. As an example: 98.7% of Moroccans are Muslims. Therefore, Brahim is Muslim. This argument is invalid : even if both premises are true, the only thing we know for certain is that Brahim is Moroccan.note Brahim could still be one of the 1.1% of Moroccans who are Christian, or one of the 0.2% of Moroccans who are Jewish. Nevertheless, given no other data about Brahim than that he is Moroccan, it is highly likely that Brahim is in fact Muslim. 
So for inductive arguments, we don't worry about validity; instead we say the argument is strong if it is unlikely to have a false conclusion, provided the premises are true. And instead of saying the argument is sound, we say it is cogent if the logic is strong and the premises are all true. it is true that Brahim is Moroccan, and it is true that 98.7% of Moroccans are Muslim, then this argument cogent and we can safely assume Brahim is Muslim unless and until we find specific information to the contrary. Theoretically, the dividing line between strong and weak inductive arguments is at 50%: at anything above 50%, the argument is strong. This can be a bit counterintuitive: 50.25% of humans are male. Therefore, Pat is male. This is theoretically a "strong" argument, and if the premises are correct then it is cogent. But as it's only a tiny fraction past 50%, one wouldn't want to rely on that conclusion. In the same way that an invalid deduction might still have a true conclusion (as with the turtles above), a weak inductive argument could still have a true conclusion. This should be fairly intuitive: 2.22% of Presidents of the United States are mixed-race. Therefore, Barack Obama is mixed-race. This is a weak argument, and therefore not cogent, even though all its premises are true and has a true conclusion. On the other hand, a strong argument can easily be cogent while still producing a false conclusion: Barack Obama was the President of the United States. 97.7% of Presidents of the United States are white. Therefore, Barack Obama is white. That's why inductive reasoning is less reliable than deduction. Using a Syllogism to Test Premises Besides the use of syllogisms to come up with conclusions from known true premises, a syllogism can also be used to test the truth of premises. If a syllogism is valid, but comes to a conclusion that is known to be untrue, then one or both of the premises must be untrue. For example: No good thing has ever come from military research. The Internet is a good thing. Therefore, the Internet did not come from military research. A valid deductive argument. However, the conclusion is known to be untrue; it is well documented that the Internet originated with United States military research in communications systems. Therefore, one or both of the premises must be false. Important: this technique does not tell you which premise is false, or whether both of them are, merely that at least one must be. In this example, it may be that good things can indeed come from military research; or that the Internet is not a good thing; or both (good things have come from military research, but the Internet is not one of them). By combining chains of logic using sets of syllogisms known to be sound, valid, and true, one can also prove the falsity of a hypothesis or logical fallacy with certainty, and establish larger absolute sound, valid, true statements: All fish are aquatic creatures. All dolphins are aquatic creatures. All dolphins are mammals. Mammals are not fish. Therefore, dolphins are not fish. All dolphins are aquatic creatures. Dolphins are not fish. Therefore, not all aquatic creatures are fish. Thus we know for certain that the statement "All fish are aquatic creatures, but not all aquatic creatures are fish." is sound, valid, and true. Induction: A Part of Science Induction allows us to make probabilistic conclusions about the world. If that makes you think of science, that's because it's part of the scientific method. 
The scientific method begins with a series of observations to learn about the world, then uses inductive logic to reach a tentative conclusion about how that part of the world works. This tentative conclusion is your hypothesis. The next step is to turn around and use deductive logic to see what would undermine the hypothesis, then run an experiment to test whether that is the case. Ideally, you would test something that is necessary for your hypothesis, or something that should be true if your hypothesis is true. By testing this next idea, you gain a new data point, and you return to inductive reasoning to adjust your confidence in whether or not your hypothesis is true. Thus, science never gets the certainty of deductive logic, because it always has inductive logic mixed in. However, we can have overwhelming confidence in a conclusion for which we have built up enough evidence.

This is reflected in an argument by the philosopher David Hume, who argued that it is impossible to ever derive certain knowledge by observing the world. Hume pointed out that we observe the world and expect future events to play out similarly. You throw an apple in the air and it falls. You throw an orange in the air and it falls. You throw an elephant in the air and it falls. This idea, that things you throw into the air will fall, is a prediction based on an assumption: that the future will resemble the past. How do you know the future will resemble the past? Because in the past, the future has resembled the past. This is circular reasoning, which makes it spurious. Thus, inductive logic and science can never give you certainty. No observation can ever give you certainty. The strength of science is that it has a means of continually reinforcing the confidence in its conclusions, not that it can ever remove all doubt.

Remember how in science, you try to disprove your hypothesis? Karl Popper, the philosopher of science, noted that this is the key to science. Falsifiability is the core of a scientific statement. If you can't think of a way you might prove something wrong, it isn't science.
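The validity of a deductive argument like the Socrates syllogism above can even be machine-checked. Here is a minimal sketch in Lean 4, added purely for illustration; the identifiers Person, Man, Mortal, and socrates are placeholders, not part of the original article. Note that the proof goes through regardless of whether the premises are factually true, which is exactly the point about validity:

```lean
-- The classic syllogism, machine-checked: if every man is mortal
-- and Socrates is a man, then Socrates is mortal.
variable (Person : Type) (Man Mortal : Person → Prop) (socrates : Person)

example (h1 : ∀ p, Man p → Mortal p) (h2 : Man socrates) : Mortal socrates :=
  h1 socrates h2
```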
Click on the title to view the Teacher Prep document. * Lessons marked by an asterisk require extra notice to prepare. A single chemical may be able to take on many forms, rendering simple methods of identification such as sight ineffective. Chemists (your students) therefore use a multitude of tests to compare the properties of a sample to known values in order to identify an unknown material. This module helps students identify polymers in their surroundings, define relevant terminology, and discover the properties of some plastics and gels. A good grounding in the states of matter is recommended – see our lesson on the States of Matter if your students are not yet familiar. After an introduction to elements, compounds & mixtures, common methods & reasons for separating mixtures are discussed. Students then design and implement a multi-step purification process, the effectiveness of which is gauged by calculating the recovered fraction of components. The theory of acids and bases is relatively simple—with implications in a variety of fields including chemistry, ecology and biology—yet students of all levels consistently exhibit a poor understanding of its fundamental concepts. This module focuses on chemical strength vs. ion concentration, and formation of salts. The role of indicators is also covered briefly, and several demonstrations of acid/base reactions are presented. The lesson begins with a review of atoms, elements, and a discussion of organic versus inorganic compounds. Students learn that organic compounds, such as sugars, starches, and proteins, can be identified with the use of chemical indicators, which produce a characteristic color when a particular substance is present. Using these chemical indicators, students test a variety of food samples for the presence of proteins, and simple and complex carbohydrates. Fats (lipids) are identified by their ability to make paper translucent. This lesson is geared towards older (6th-8th grade) students. Students will learn about chromatography in general and use paper chromatography to explore the composition of various inks. We begin with a discussion about chromatography and its various forms and explain how this powerful tool can help distinguish between two or more compounds. For younger students, this module introduces the three commonly-observed states of matter (solid, liquid, gas), the most commonly-occurring one (plasma, which makes up the stars), and allows them to observe many of the transitions between the different states. For older students, the topic is connected to heat transfer, as they consider how the flow of energy between materials allows the transitions to occur. This lesson reviews the concept of fluids and bonding to provide students with an understanding of viscosity. A few brief historical tangents about New England are included, and Newtonian versus non-Newtonian fluids is optionally covered. Introduce the important mechanical concepts of stress and strain regarding material strength. Explain strength from a materials science/crystalline solid perspective, and describe material strengthening techniques. This module gives students a hands-on, team-oriented introduction to engineering within the context of space exploration. They learn about NASA’s Mars rovers as examples of the challenges engineers face in balancing competing goals, while creating a lander for a mock rover to be tested in an egg drop. 
This lesson is a basic introduction to engineering and design using the 8 steps of the Engineering and Design Process. This lesson focuses on the redesign step of the Engineering and Design Process. Students will begin with a flawed prototype made of Legos that must be redesigned and reconstructed based on certain constraints. The flawed prototype will be presented in SolidWorks, a 3D software program, to introduce students to the concept of design with computers. It is recommended that instructors begin with E03: Introduction to Engineering and Design if students are unfamiliar with the Engineering and Design Process. This module introduces the six basic simple machines: the inclined plane, the wedge, the screw, the lever, the wheel and axle and the pulley; the students are then challenged to design and build a Rube Goldberg device to ring a service bell in three steps. After the devices are built, the class will identify the simple machines used in their designs. Students will examine the causes of beach erosion and discuss how erosion affects a beach and its ‘stakeholders’. Students work in small groups to engineer solutions to beach erosion through brainstorming, planning, and designing prototypes for their model beaches. EXX-Earthquake Resistant Buildings New Lesson Under Construction. Coming Soon! In a series of five or six workstations, water’s properties are explored as they relate to its importance in environmental processes including: heat capacity, solvation and density. This lesson is an introduction to basic plate tectonics. It includes a review of the earth’s internal structure and the formation of continents, oceans, and mountain ranges as a result of plate movement. There will be a discussion of the mechanism of earthquake production as the sudden release of rock under stress. The types of faults will be defined and the correlation of tectonic plate boundaries with earthquake epicenters will be discussed. The students will hypothesize about how actual geologic formations were made and will test their hypotheses using sponge and clay models of faults. This lesson is geared for students in grades 4-6. Review & expand upon the basics of map literacy. In particular, to familiarize students with different projections so that they can cope with seeing “oddly” formatted maps (as in ES02-Earthquakes), and also recognize that the form of the map may distort or enhance its message. This module reviews & expands upon the basics of critical map literacy. The preceding module on projections and elements (ES03) introduces students to concepts regarding the critical reading of maps. This continues the theme with a discussion of how information is portrayed (symbology) in different types of maps and culminates with the examination of potentially misleading maps. This lesson reviews and expands on the basics of map literacy. In particular, it familiarizes students with topographic maps – a type of map that describes the physical features of an area of land. In the activity, all students will create a 3D model of a landform and then use it to create a 2D topographic map. Lengthy classes and older students (6th-8th) will also use a topographic map to create a 3D model. Discuss the possibility of finding life outside of the planet Earth: how scientists classify life, the probability of finding intelligent life in our galaxy, astronomical scale, where we could look for life in our solar system, what type of life we would look for, and the types of experiments we have performed previously. 
Celestial mechanics deals with the movement or motions of celestial objects (objects found in space). In this lesson, students learn about the moon’s orbit around earth, and how the moon progresses through its eight major phases and why Earthlings have only ever seen one side of the moon! ES08–Solar System (Lesson currently available, updates coming soon!) This lesson provides a detailed overview of what makes up our solar system, with an emphasis on scale and non-planetary objects. Students learn about the vastness of local space, and the myriad of “lesser” celestial bodies. The three rock types found on Earth (igneous, sedimentary and metamorphic) are discussed and their specific characteristics are identified. Students will examine and identify rock samples using a dichotomous key. Fossils are fundamental to discovering information about the Earth’s past inhabitants. This module briefly explores the various time periods known to man and provides students the opportunity to excavate fossils from rock and reconstruct and analyze a fossilized skeleton for clues to the type of creature that existed during the late Jurassic period. This module provides a brief introduction to the basic structure of a main-sequence star, some of the observations that allow astrophysicists to learn about stars, and the use of the Hertzsprung-Russell (HR) diagram, a powerful tool based on temperature and brightness data for thousands of stars. The H-R diagram is used in this lesson to determine the age of a star cluster. Stellar evolution may be introduced and discussed if time and student understanding allows. This lesson is geared towards older (6th – 8th grade) students. This module presents a game that explains how water cycles through different forms and storage types on Earth and in Earth’s atmosphere. Students act as water molecules and move around the room to the different places water is found on Earth. This lesson is geared towards younger (4th & 5th grade) students. ES13–Soil Nutrient Cycle This module provides a basic understanding of the key nutrients needed for plant growth. This lesson introduces students to the characteristics and formation of soil. In the hands-on portion of the lesson, students examine the color, texture, and field capacity of soil. The debrief includes a discussion of the importance of soil, and the significance of these properties to the ability of soil to support plant life. This lesson is appropriate for 4th-8th grade students. This lesson is an introduction to the concept that S- and P-waves travel at different speeds away from the epicenter of an earthquake, and explains how we can take advantage of this fact in order to locate the epicenter. After a brief review of basic earthquake plate tectonics, S- and P-waves will be defined and explained with a demonstration using multiple Slinky toys. Students will then be challenged to locate the epicenter of an earthquake by using data from the timing of S- and P-waves to triangulate on a map. This lesson is geared towards older (6th-8th grade) students. This lesson provides an introduction to weather and its key components that influence it. Key components include temperature, humidity, pressure, ocean currents and air currents. The four main types of precipitation are also included in the lesson. This lesson was designed to focus on weather concepts that are introduced in 4th and 5th grade, or for students who have not yet had an introduction to weather. 
In this lesson, students will learn about weather patterns, weather symbols, and how to interpret a weather map. They will then use the skills they have learned to highlight the weather on a national weather map and identify pressure systems and weather fronts. This lesson is geared towards older (6th-8th grade) students. Prerequisites: Students should have seen the module ES16 Weather, or have a strong background in weather basics, including air pressure and weather fronts. The introduction for this lesson should serve as a brief review of air masses, pressure systems, and weather fronts so that the weather mapping activity can be the main focus of this lesson. This is an introductory lesson detailing the components of blood and highlighting the process and importance of blood typing. The lesson starts with an introduction to the cells and fluids making up our blood, followed by a simulated blood typing activity where students work in groups to determine blood types of 4 individuals before they can donate blood to an injured friend, and wraps up with a microscopic examination of human blood smears. This lesson is geared towards older (6th-8th grade) students. New Lesson Under Construction. Coming Soon! LS03–Visual Illusions (Lesson currently available, updates coming soon!) In this module, students will learn about human vision and visual illusions. We begin with a discussion about the human eye, its physiology, function, and how we perceive visual illusions. LS04–C. Elegans * (Lesson currently available, updates coming soon!) This activity serves as introduction/unifying nexus of myriad scientific concepts including genetics (reproduction) and anatomy (differentiation). This module teaches the basics of the energy pyramid and food webs. Students learn about the different trophic levels of the energy pyramid and how to identify organisms in food webs at these trophic levels. They then construct a food web model for a simplified Yellowstone ecosystem. This lesson is geared towards younger (4th and 5th) grade students as an introduction to the topic. After reviewing lab safety, the instructor briefly introduces the dissection procedure and students work in pairs to explore the anatomy of a preserved sheep eye. We end the lesson with a review of mammalian eye anatomy and the basic mechanics of vision. The physical and behavioral adaptations that make owls excellent (nocturnal) predators are reviewed. Students then examine an owl pellet and identify the bones found within. By competing to construct a model, students learn about its components and their functions. The metaphor of the cell as a city is used to make the information more accessible. This lesson introduces and reviews a wide variety of ecological and population-related concepts including carrying capacity and natural fluctuations, namely that population levels are not static but vary over time. After reviewing lab safety, the instructor will provide students with an orientation of the heart’s surface features and identification of key structures and vessels. The basic pathways of blood flow will be outlined and the physiology of heart function will be introduced. Students will complete a dissection of a preserved sheep heart to identify key external and internal structures. Orders for hearts are needed at least 2 weeks before the lesson. Students learn about fingerprint analysis beginning with a discussion of what fingerprints are and how they are collected. 
They will have the opportunity to practice identifying different types and patterns of fingerprints. This lesson provides an exploration of electrophoresis using wet and dry activities. The wet activity is to run an actual gel electrophoresis, which is typically done to separate DNA in a laboratory setting. In this lesson the students will be running food dyes instead of DNA. The dry activities are designed (1) to convey the concept that the gel matrix acts as a molecular sieve, (2) to simulate the methodology used to generate a DNA profile (also known as a DNA fingerprint), and (3) to use a fictitious case study to analyze a DNA profile to solve a crime and/or design a drug therapy. This lesson is geared towards older (6th-8th grade) students. This module teaches the basics of mitosis using plant root tips. Students learn to identify cells in the different stages of mitosis, as well as how to use a compound light microscope and (for classes with ample time) prepare a wet-mount slide. This lesson is geared towards older (7th & 8th grade) or advanced students. It is recommended that LS09: Cell City and LS16: DNA is Everywhere be taught prior to this lesson unless students are familiar with the structure and function of cells and of DNA. This lesson introduces the study of epidemiology and focuses on the transmission of infectious disease. The importance of disease mapping and methods of preventing infection are emphasized. This lesson is geared towards older (6th-8th grade) students. This lesson begins with an introduction to the location and structure of DNA, provides an overview of DNA's role as the blueprint of life, and is followed by an exciting hands-on activity designed to extract DNA from strawberries (or other plant matter). Students learn about the relationship between nutrition and fresh/processed foods, then verify this information by measuring the concentration of vitamin C in different forms of orange juice. This stations-based lesson allows students to gain an understanding of the cardiovascular system and an appreciation for the importance of physical activity for heart health. This module allows students to become more aware of what they eat and why as we explore a variety of food additives prevalent in the modern diet of processed foods and how they are used. This lesson begins with a broad overview of the plant kingdom and classifies plants into groups according to the presence/absence of a vascular system, seeds, and flowers/fruits. Topics of discussion include photosynthesis, pollination, and plants' effects on the weather. The lesson ends with a student investigation and dissection of plant anatomy. Camouflage & mimicry are explored as examples of adaptations adopted by animals to increase their chances of survival. Students play a tabletop hunting game as desert island castaways to gain appreciation of the problems that camouflage adaptations pose for predators. This module is a hands-on simulation of DNA mutations and their effects on protein-encoding genes. Students are provided with a normal gene sequence, which they must first transcribe into mRNA and then translate into a protein. Students have opportunities to investigate the effects of three types of mutations (insertion, deletion and substitution) on their gene sequence or to evaluate how differing gene sequences contribute to diversity within a species. This is an advanced DNA module designed for 7th and 8th grade students. 
Students should have a firm understanding of cells and the structure and roles of DNA prior to this lesson. If students have not been taught these concepts, LS16 DNA is Everywhere and LS09 Cell City should be taught before teaching this lesson. The human brain is highly adaptable. This activity demonstrates how the brain learns to adapt to a new situation. Students divide into small groups and learn to toss beanbags at a target while wearing prism goggles. They then remove the goggles and “unlearn” the task. Students collect data from these experiments and interpret it in the context of connections between neurons (synapses) in the brain being made stronger and weaker during the learning process. This lesson is an introduction to the human nervous system (NS), and focuses on the human brain and its functional units, the neurons. The neuron is the basic working unit of the NS: it is a specialized cell designed to transmit information to other nerve cells. The activity in this lesson allows younger students to explore the structure and function of the brain and neurons through the construction of models. Older (6th-8th grade) students will construct models, as well as learn about nerve cell communication. This module explores the mechanisms by which biodiversity (genetic variation) is created within populations. It explains the concepts of mutation, gene flow, genetic drift, and natural selection, and how these mechanisms work in different ways to create genetic variation in a population. Natural selection is explored further with a few examples from different species and time-scales. In the activity, students will use beads and event cards to simulate how populations change over time, collect data, and plot a graph. This lesson is intended for older (6th-8th grade) students. Students should be familiar with DNA, genes, and heritable traits before this lesson is taught. New Lesson Under Construction. Coming Soon! LSXX-Carbon Transfer in Snails (Virtual Lesson) New Lesson Under Construction. Coming Soon! Students will get the chance to dissect a frog and observe frog anatomy. This lesson is a basic introduction to electricity and circuits for younger audiences or for audiences with no prior exposure to the topic. Students create a basic circuit, test the conductance of various materials and examine a battery made out of produce. This lesson is geared towards younger (4th – 5th grade) audiences. A better choice for older students would be our lesson on Circuits or Ohm’s Law. P04–Gravity (Lesson currently available, updates coming soon!) Students learn about the difference between mass and weight while exploring the phenomenon of gravity with demonstrations of inverse-square laws, mass-dependence & the curvature of space. Students are introduced to pendulums and their periodic motion. They design and execute an experiment to determine whether bob mass, chain length, or displacement angle affects the period of a pendulum. This lesson is appropriate for older (6th-8th grade) students. P06–Ballistics * (Under Construction. Updates coming soon!) This activity-based study of projectile motion teaches students fundamental scientific concepts while generating student interest in the sciences. The lesson begins with the definition of a projectile and gives various examples of projectiles. It then discusses specific properties of projectile motion. Students will learn the fundamentals of electrostatics and its role in everyday life. 
It is recommended that this module be used during the drier winter months since humidity will interfere with the build-up of sufficient charge. This lesson is a more advanced version of the P02: Electricity lesson. The basics of electricity are reviewed and circuits with lamps in series vs. parallel are explored in depth. P09–(Electro-)Magnetism (Lesson currently available, updates coming soon!) Introduces a variety of magnetic phenomena including but not limited to ferromagnetism. Students should have some familiarity with electricity/electrons. This workstation-based module introduces students to the idea that sound is a form of energy, transmitted as a longitudinal wave that can travel through solids, liquids, and gases. Simple “instruments” constructed at some stations prompt the students to consider how the vibrations are produced and how their frequency (pitch) and amplitude (loudness) can be changed. This module introduces students to the properties of light. At the end of this module, students should be able to identify transparent, translucent, and opaque objects, discuss absorption, transmission, reflection and refraction of light, and have a better understanding of light waves and the electromagnetic spectrum. This module presents the concept of energy as the ability to do work and familiarizes students with many of the various forms of energy – by direct observation whenever possible. It also introduces the First Law of Thermodynamics (i.e. “Energy can neither be created nor destroyed.”). Lecture demonstrations and a series of workstations allow students to observe a variety of conversions of one form of energy to another. This lesson is aimed at a 4th to 6th grade audience or at students who need an introduction to energy. P17–Mass, Weight & Density (Lesson currently available, updates coming soon!) The related and often confused concepts of mass, weight and density are discussed with several compelling demonstrations. Students construct boats, then calculate and test how much mass they can support before sinking. This lesson is a more advanced version of our lesson on Circuits. Students are assumed to have a thorough understanding of the basics of electricity. Ohm’s Law is discussed in depth and resistors are introduced as useful circuit elements. The use of multiple resistors in a circuit is explored; specifically the effect of using them in series vs. parallel. This lesson is aimed at older (6th-8th grade) students. For an introduction to the topic of circuits and electricity, see our lesson on Electricity. This lesson provides students with an introduction to the concept of friction and a chance to discover two types of friction. Students explore the differences in frictional forces for different materials through experimentation and compare their results to those of their classmates in order to draw conclusions about the nature of frictional forces. Advanced students or lengthy classes may present their results graphically. This lesson is geared towards older (6th-8th grade) students. Observations in science are crucial to the development and sharing of ideas and theories. This hands-on module introduces the scientific method and focuses on the importance of making detailed observations, writing valid hypotheses and developing and building models that support these hypotheses. Students’ introduction to “data science” continues with a discussion of measures of central tendency (averages) and exploratory data analysis (quartiles, histograms and box plots). 
Coverage of how scientists measure certainty (the t-test) is possible with a longer period for older students. This module introduces students to good practices for scientific experiments and presentations, by placing them in the role of “judges” of a (mock) science fair experiment writeup. Students work in groups to identify weaknesses and suggest improvements to the execution and presentation of the experiment. They may also present their findings to their classmates. SM04–Mock Science Fair This multi-session module serves as a gradual, low-level introduction to the scientific method through the use of a mock science fair project, “Under what conditions will a bean grow the most?” It is most appropriate for younger audiences that will later be conducting a full science fair. The ability to create and follow clear, ordered plans is useful in many aspects of life. Thinking in steps is necessary to assemble anything from furniture to lasagna. Students will try to replicate the creation of a classmate from written directions. SM07–Observation (Lesson currently available, updates coming soon!) Observations in science are crucial to the development and sharing of ideas and theories. This module focuses on the importance of making detailed observations with the oft-neglected senses of touch, smell, and hearing. Various forms of classification are used on a nearly daily basis, as a way to organize scientific observations, describe relationships between things we see, and communicate our understanding clearly. Students will be exposed to the logic and pitfalls involved in creating and using classification schemes. Students will learn the difference between estimating and measuring. The difference between precision and accuracy will be explained. Class measurements will be plotted to demonstrate the importance of taking multiple measurements. Students will be taught that precision is largely a function of the measurement tool, and accuracy is a function of the user. Advanced students will be introduced to the concept of significant figures. This lesson is aimed at 4th – 6th graders, or students who are not familiar with measurements and estimations. Students are introduced to the central limit theorem through a variety of activities (including graphing), in order to emphasize the importance of collecting multiple data points for an experiment. Real-world examples of the normal distribution are shown, as well as related mathematical phenomena. This lesson provides an introduction to technologies used to communicate information. Students will learn about the components of a communication system and some of the machines and devices used in such a system. The activities provide students with an opportunity to both encode and decode English alphabetic characters to the binary number base, which is the system that computers use to communicate information. This lesson is an introductory lesson for students who have no prior experience with the topic, though it is best suited to older (6th-8th grade) students.
By Ravi Desai - UCL

How chemical reactions on a lifeless planet floating around in the cold darkness of space can suddenly give rise to living organisms is one of the biggest questions in science. We don’t even know whether the molecular building blocks of life on Earth were created here or whether they were brought here by comets and meteorites. Using data from the NASA/ESA Cassini mission, we have now discovered molecules on Saturn’s largest moon Titan which we think drive the production of complex organic compounds. These are molecules that have never been seen in our solar system before. The discovery not only makes Titan a great contender for hosting some sort of primitive life, it also makes it the ideal place to study how life may have arisen from chemical reactions on our own planet. The molecular building blocks of life are organic compounds including amino acids that can be assembled into proteins, RNA and DNA in living cells. To date, scientists have found these compounds in meteorites, comets and interstellar dust. But the problem is that these materials formed millions of years ago, which means we have no way of knowing how they were created. Excitingly, it seems these compounds are being created on Titan today. Sunlight and energetic particles from Saturn’s magnetosphere drive reactions in the moon’s upper atmosphere, which is dominated by nitrogen, methane and hydrogen. These lead to larger organic compounds which drift downwards to form the moon’s characteristic “haze” and the extensive dunes – eventually reaching the surface. To make these surprising discoveries, published in the Astrophysical Journal Letters, the Cassini spacecraft dipped through Titan’s upper atmosphere. Using data beamed back to Earth, we identified the presence of negatively charged molecules called “carbon chain anions”. These appear to “seed” the larger organic compounds observed at the moon – such as polyaromatic hydrocarbons and cyanopolyynes – which could serve as key ingredients for early forms of life. Laboratory experiments have also shown that amino acids could exist there, but the instruments on Cassini are not equipped to detect them. Negatively charged molecules like these are rare in space environments as they want to react and combine with other molecules – meaning they can be quickly lost. When present, however, they appear to be a crucial “missing link” between simple molecules and complex organic compounds. So could life currently exist on Titan? It’s not impossible. Water plumes erupting from another of Saturn’s moons, Enceladus, provide a key source of oxygen, which rains down onto Titan’s upper atmosphere. Titan has even been judged the most likely place beyond the Earth to host life by the Planetary Habitability Index. But life there would likely be quite primitive due to the cold conditions. The presence of liquid methane and ethane seas also means potential organisms would have to function quite differently to those on Earth.

Tracing life on Earth

Remarkably, similar processes are observed in vast molecular clouds beyond our solar system, where stars are born. After the first stars in the universe entered their death throes and fused together heavier elements, rich organic chemistry took place. In these environments, negatively charged molecules have been shown to act as a catalyst for the formation of larger organics, which could then be transferred to solar systems and comets forming from the cloud. 
Complex interstellar chemistry has led to the theory that the building blocks of life could have been delivered to Earth from comets which once formed in these molecular clouds. ESA’s Rosetta mission detected the amino acid glycine when visiting Comet 67P/Churyumov-Gerasimenko. However, the new discovery makes it entirely possible that the process of creating life from simple molecules took place on Earth instead. Titan’s dense nitrogen and methane atmosphere is similar to the early Earth’s, some 2.5-4 billion years ago. At this time, before the build-up of oxygen occurred, large quantities of methane resulted in organic chemistry similar to that observed at Titan today. The moon is therefore a high-priority target in the search for the beginnings of life. By making long-term, detailed observations of Titan, we may one day be able to trace the journey from small to large chemical species in order to understand how complex organic molecules are produced. Perhaps we may even be able to catch the sudden change from complex organic molecules to living organisms. Follow-up observations of Titan’s atmosphere are already underway using powerful ground-based telescopes such as ALMA. Further missions to explore Titan are also in the works – it is crucial that these are equipped to detect the signatures of life. The fact that we now see the same chemistry occurring at Titan as in molecular clouds is fascinating, as it indicates the universal nature of these processes. The question now is, could this also be happening within other atmospheres rich in nitrogen and methane, such as at Pluto or Neptune’s moon Triton? What about the thousands of exoplanets discovered in recent years, circling nearby stars? The concept of a universal pathway towards the building blocks of life has implications for what we need to look for in the onward search for life in the universe. If we detect the molecules just seen on Titan in another environment, we would know that much larger organics and therefore amino acids are likely to exist there. Future missions, such as NASA’s James Webb Space Telescope and ESA’s exoplanet mission Plato, are set to further study these processes within our solar system and at planets orbiting nearby stars. The UK is even planning its own exoplanet mission, Twinkle, which will also search for signatures of organic molecules. Although we haven’t detected life itself, the presence of complex organic molecules at Titan, comets and within the interstellar medium means we are certainly coming close to finding its beginnings. And it’s all thanks to Cassini’s near 20-year exploratory journey. So spare a thought for this magnificent spacecraft as it ends its mission in September with a final death-plunge into Saturn’s atmosphere.

Source: The Conversation
Alkenes and Hydrogen Halides This page looks at the reaction of the carbon-carbon double bond in alkenes such as ethene with hydrogen halides such as hydrogen chloride and hydrogen bromide. Symmetrical alkenes (like ethene or but-2-ene) are dealt with first. These are alkenes where identical groups are attached to each end of the carbon-carbon double bond. The extra problems associated with unsymmetrical ones like propene are covered in a separate section afterwards. Addition to Symmetrical Alkenes All alkenes undergo addition reactions with the hydrogen halides. A hydrogen atom joins to one of the carbon atoms originally in the double bond, and a halogen atom to the other. For example, with ethene and hydrogen chloride, you get chloroethane: CH2=CH2 + HCl → CH3CH2Cl. With but-2-ene you get 2-chlorobutane: CH3CH=CHCH3 + HCl → CH3CHClCH2CH3. Note: Follow this link if you aren't happy about naming organic compounds. What happens if you add the hydrogen to the carbon atom at the right-hand end of the double bond, and the chlorine to the left-hand end? You would still have the same product. The chlorine would be on a carbon atom next to the end of the chain – you would simply have drawn the molecule flipped over in space. That would be different if the alkene was unsymmetrical – that's why we have to look at them separately. The alkenes react with gaseous hydrogen halides at room temperature. If the alkene is also a gas, you can simply mix the gases. If the alkene is a liquid, you can bubble the hydrogen halide through the liquid. Alkenes will also react with concentrated solutions of the gases in water. A solution of hydrogen chloride in water is, of course, hydrochloric acid. A solution of hydrogen bromide in water is hydrobromic acid – and so on. There are, however, problems with this. The water will also get involved in the reaction and you end up with a mixture of products. Warning! The mechanism for this reaction is almost invariably given for the reaction involving the alkene and the simple molecules H-Cl or H-Br or whatever. In the presence of water, these molecules will already have reacted with the water to produce hydroxonium ions, H3O+, and halide ions. The mechanism will therefore be different – involving an initial attack by a hydroxonium ion. Avoid this problem by using the pure gaseous hydrogen halide. Variation of rates when you change the halogen Reaction rates increase in the order HF – HCl – HBr – HI. Hydrogen fluoride reacts much more slowly than the other three, and is normally ignored in talking about these reactions. When the hydrogen halides react with alkenes, the hydrogen-halogen bond has to be broken. The bond strength falls as you go from HF to HI, and the hydrogen-fluorine bond is particularly strong. Because it is difficult to break the bond between the hydrogen and the fluorine, the addition of HF is bound to be slow. Variation of rates when you change the alkene This applies to unsymmetrical alkenes as well as to symmetrical ones. For simplicity the examples given below are all symmetrical ones – but they don't have to be. Reaction rates increase as the alkene gets more complicated – in the sense of the number of alkyl groups (such as methyl groups) attached to the carbon atoms at either end of the double bond. There are two ways of looking at the reasons for this – both of which need you to know about the mechanism for the reactions. 
Note: If you should know about the mechanism, but are a bit uncertain about it, then you should spend some time exploring the electrophilic addition mechanisms menu before you go on, and then come back to this page later. You should look at the addition of hydrogen halides to unsymmetrical alkenes as well as symmetrical ones. If you don't need to know about the mechanisms, skip over the next bit! Alkenes react because the electrons in the π bond attract things with any degree of positive charge. Anything which increases the electron density around the double bond will help this. Alkyl groups have a tendency to "push" electrons away from themselves towards the double bond. The more alkyl groups you have, the more negative the area around the double bonds becomes. The more negatively charged that region becomes, the more it will attract molecules like hydrogen chloride. Note: If you aren't sure about π bonds, you will find a simple mention of them in the introductory page on alkenes. You will find more about the electron pushing effect of alkyl groups on a page about carbocations in the mechanism section of this site. That is also important reading if you are to understand the next bit. The more important reason, though, lies in the stability of the intermediate ion formed during the reaction. The three examples given above produce these carbocations (carbonium ions) at the half-way stage of the reaction: The stability of the intermediate ions governs the activation energy for the reaction. As you go towards the more complicated alkenes, the activation energy for the reaction falls. That means that the reactions become faster. Didn't understand this? You should have followed the link to the page about carbocations mentioned above! Addition to Unsymmetrical Alkenes In terms of reaction conditions and the factors affecting the rates of the reaction, there is no difference whatsoever between these alkenes and the symmetrical ones described above. The problem comes with the orientation of the addition – in other words, which way around the hydrogen and the halogen add across the double bond. Orientation of Addition If HCl adds to an unsymmetrical alkene like propene, there are two possible ways it could add. However, in practice, there is only one major product. This is in line with Markovnikov's Rule which says: When a compound HX is added to an unsymmetrical alkene, the hydrogen becomes attached to the carbon with the most hydrogens attached to it already. In this case, the hydrogen becomes attached to the CH2 group, because the CH2 group has more hydrogens than the CH group. Notice that only the hydrogens directly attached to the carbon atoms at either end of the double bond count. The ones in the CH3 group are totally irrelevant. Warning! Markovnikov's Rule is a useful guide for you to work out which way round to add something across a double bond, but it isn't the reason why things add that way. As a general principle, don't quote Markovnikov's Rule in an exam unless you are specifically asked for it. You will find the proper reason for this in a page about the addition of hydrogen halides to unsymmetrical alkenes in the mechanism section of this site. A Special Problem With Hydrogen Bromide Unlike the other hydrogen halides, hydrogen bromide can add to a carbon-carbon double bond either way around – depending on the conditions of the reaction. If the hydrogen bromide and alkene are entirely pure In this case, the hydrogen bromide adds on according to Markovnikov's Rule. 
For example, with propene you would get 2-bromopropane. That is exactly the same as the way the other hydrogen halides add.

If the hydrogen bromide and alkene contain traces of organic peroxides

Oxygen from the air tends to react slowly with alkenes to produce some organic peroxides, and so you don't necessarily have to add them separately. This is therefore the reaction that you will tend to get unless you take care to exclude all air from the system. In this case, the addition is the other way around, and you get 1-bromopropane. This is sometimes described as an anti-Markovnikov addition or as the peroxide effect. Organic peroxides are excellent sources of free radicals. In the presence of these, the hydrogen bromide reacts with alkenes using a different (faster) mechanism. For various reasons, this doesn't happen with the other hydrogen halides. This reaction can also happen in this way in the presence of ultra-violet light of the right wavelength to break the hydrogen-bromine bond into hydrogen and bromine free radicals. Note: All this is explored in detail on the page about free radical addition of HBr to alkenes in the mechanism section of this site.
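For reference, the two outcomes described above for propene can be summarised as simple equations (added here for clarity; the original page presumably showed these as structure diagrams, which are not reproduced):

CH3CH=CH2 + HBr → CH3CHBrCH3 (2-bromopropane; Markovnikov addition with pure reagents)

CH3CH=CH2 + HBr → CH3CH2CH2Br (1-bromopropane; anti-Markovnikov addition in the presence of peroxides)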
4.1: Which One Doesn’t Belong: Distribution Shape

Which one doesn’t belong?

4.2: Matching Distributions

Take turns with your partner matching 2 different data displays that represent the distribution of the same set of data.
- For each set that you find, explain to your partner how you know it’s a match.
- For each set that your partner finds, listen carefully to their explanation. If you disagree, discuss your thinking and work to reach an agreement.
- When finished with all ten matches, describe the shape of each distribution.

4.3: Where Did The Distribution Come From?

Your teacher will assign you some of the matched distributions. Using the information provided in the data displays, make an educated guess about the survey question that produced this data. Be prepared to share your reasoning.

This distribution shows the length in inches of fish caught and released from a nearby lake. Describe the shape of the distribution. Make an educated guess about what could cause the distribution to have this shape.

We can describe the shape of distributions as symmetric, skewed, bell-shaped, bimodal, or uniform. Here is a dot plot, histogram, and box plot representing the distribution of the same data set. This data set has a symmetric distribution. In a symmetric distribution, the mean is equal to the median and there is a vertical line of symmetry in the center of the data display. The histogram and the box plot both group data together. Since histograms and box plots do not display each data value individually, they do not provide information about the shape of the distribution to the same level of detail that a dot plot does. This distribution, in particular, can also be called bell-shaped. A bell-shaped distribution has a dot plot that takes the form of a bell with most of the data clustered near the center and fewer points farther from the center. This makes the measure of center a very good description of the data as a whole. Bell-shaped distributions are always symmetric or close to it.

Here is a dot plot, histogram, and box plot representing a skewed distribution. In a skewed distribution, one side of the distribution has more values farther from the bulk of the data than the other side. This results in the mean and median not being equal. In this skewed distribution, the data is skewed to the right because most of the data is near the 8 to 10 interval, but there are many points to the right. The mean is greater than the median. The large data values to the right cause the mean to shift in that direction while the median remains with the bulk of the data, so the mean is greater than the median for distributions that are skewed to the right. In a data set that is skewed to the left, a similar effect happens but to the other side. Again, the dot plot provides a greater level of detail about the shape of the distribution than either the histogram or the box plot.

A uniform distribution has the data values evenly distributed throughout the range of the data. This causes the distribution to look like a rectangle. In a uniform distribution the mean is equal to the median since a uniform distribution is also a symmetric distribution. The box plot does not provide enough information to describe the shape of the distribution as uniform, though the even length of each quarter does suggest that the distribution may be approximately symmetric.

A bimodal distribution has two very common data values seen in a dot plot or histogram as distinct peaks. 
Sometimes, a bimodal distribution has most of the data clustered in the middle of the distribution. In these cases the center of the distribution does not describe the data very well. Bimodal distributions are not always symmetric. For example, the peaks may not be equally spaced from the middle of the distribution or other data values may disrupt the symmetry.

Bell-shaped distribution: A distribution whose dot plot or histogram takes the form of a bell with most of the data clustered near the center and fewer points farther from the center.

Bimodal distribution: A distribution with two very common data values seen in a dot plot or histogram as distinct peaks. In the dot plot shown, the two common data values are 2 and 7.

Skewed distribution: A distribution where one side of the distribution has more values farther from the bulk of the data than the other side, so that the mean is not equal to the median. In the dot plot shown, the data values on the left, such as 1, 2, and 3, are further from the bulk of the data than the data values on the right.

Symmetric distribution: A distribution with a vertical line of symmetry in the center of the graphical representation, so that the mean is equal to the median. In the dot plot shown, the distribution is symmetric about the data value 5.

Uniform distribution: A distribution which has the data values evenly distributed throughout the range of the data.
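As a quick numeric check of the claim above that a right-skewed data set has a mean greater than its median, here is a small sketch in Python; the data values are made up for illustration and are not from the lesson:

```python
import statistics

# Hypothetical right-skewed data: most values cluster around 8-10,
# with a few much larger values forming a long right tail.
data = [8, 8, 9, 9, 9, 10, 10, 10, 11, 12, 18, 25, 40]

print(statistics.mean(data))    # about 13.8, pulled toward the right tail
print(statistics.median(data))  # 10, stays with the bulk of the data
```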
Grade 6 Mathematics Curricula 206. Number –Write story problems to generate calculations involving the four operations. 207. Number –Generate number patterns and identify their rule using algebra. 208. Number –Compute with common and decimal fractions using the four operations. 209. Number –Use the calculator to estimate and check routinely and to perform calculations. 210. Number –Divide a fraction, mixed number or decimal fraction by a whole number. 211. Number –Divide a whole number by any fractional number. 212. Number –Divide a decimal fraction by a power of ten. 213. Number –Solve problems involving the division of fractional numbers. 214. Number –Perform any computation with whole or fractional numbers. 215. Number –Divide a decimal fraction by another decimal fraction to two or three decimal places of decimals. 216. Measurement–Draw and measure angles using the protractor. 217. Measurement–Use the compasses to draw circles. 218. Measurement–Interpret a simple scale drawing and calculate actual distances using the scale of a road map or floor plan. 219. Measurement–Identify the relationship between the parts of a circle. 220. Measurement–Investigate the concept of pi. 221. Geometry –Identify and draw the following polygons: Triangle, square, rectangle and irregular quadrilaterals. 222–Geometry–Identify and count the number of lines of symmetry in plane figures. 223–Geometry–Draw pictures of polygons to a reasonable degree of accuracy where the length of a side is given. 224. Geometry–Recognize faces, edges, vertices, of a solid and classify solids according to the number and shape of their faces. 225–Geometry–Represent and solve problems using geometrical models. 226–Geometry–Describe the physical world in terms of geometric concepts. 227. Statistics–Discuss the appropriate uses of various tables and graphs. 228. Statistics–Represent data using bar graphs, double bar graphs, pictographs, circle graphs and line graphs. 229. Statistics–Read information on a stem and leaf plot. 230. Statistics–Read information on a box and whisker plot. 231. Statistics–Plot information on a stem and leaf plot. 232. Statistics–Plot information on a box and whisker plot. 233. Statistics–Collect data using direct observation, experiments, interviews and questionnaires. 234. Statistics–Make inferences and draw conclusions based on experiments and collected data. 235. Number –Read and write Roman Numerals representing any number using the symbols I,V,X,C,M. 236. Number –Read and use numbers written, using the principle of place value, in the Hindu-Arabic system of numeration. 237. Number –Write numbers in exponent form. 238. Number –Express place values using exponent form. 239. Number –List all the prime factors of a given number. 240. Number –Write a composite number as a product of primes in exponent form. 241. Number –Identify the Greatest Common Factor of two numbers. 242. Number –Differentiate between the use of multiples and factors. 243. Number –Identify the reciprocal of a whole number or fractional number. 244. Number –Use ratio to compare quantities. 245. Number –Write a ratio to compare the numbers of items in two sets or two parts of a single set. 246. Number –Write a ratio using the formats 1:5, to 5, or 1/5. 247. Number –Write equivalent ratios for a given ratio. 248. Number –Solve problems which require the use of equivalent ratios. 249. Number –Apply the concept of ratio to percentage forms and use the symbol % correctly. 250. Number –Tell what percentage of a set or object is shown. 251. 
Number –Write a percentage as a fraction with denominator 100 or in its simplest form and/or as a decimal. 252. Number –Solve problems requiring the conversion of fractions to percentages and vice versa. 253. Number –Know that 100% is a whole. 254. Number –Add or subtract using percentage forms. 255. Number –Calculate the percentage a given number is of another given number which is a factor of ten. (Measurements and money may be used.) 256. Number –Calculate a given percentage of a number, amount of money, measure of mass, capacity, etc. 258. Measurement–Investigate and use the formula for the volume of a rectangular solid to solve problems. 259–Measurement–Apply measurement concepts to problem solving and real life situations. 260. Measurement–Use ratio to compare measurements. 261. Measurement–Use the idea of rates of various quantities. 262. Measurement–Calculate any one of the measures of distance, time and rate of travel (average speed), given the measures of the other two. 263. Measurement–Apply the principles of measurements to Road Safety. 264. Measurement–Identify surface area and angle measure in three-dimensional shapes. 265. Measurement–Use the idea of a ‘unit solid’. 266. Measurement–Build unit solids of volume 1dm3, 1m3, and 1cm3. 267. Measurement–Use the 24hr clock in problem situations. 268. Measurement–Interpret a simple scale drawing and calculate the actual distances using the scale on a road map or floor plan. 269. Measurement–Calculate the volume of a rectangular prism when given the number of unit solids in one layer and the number of layers. 270. Geometry –Demonstrate a knowledge and understanding of congruence in two and three dimensions. 271–Geometry –Identify, describe, compare and classify geometric shapes and figures. 272–Geometry –Explore the transformation of geometric figures. 273. Algebra–Substitute in algebraic expressions with up to two variables. 274–Algebra–Solve word problems using algebraic expressions and formulae. 275–Algebra–Substitute in simple inequalities to make statements true. 276–Algebra–Insert one of the symbols >, <, =, etc to make a true mathematical sentence. 277. Number –Identify members of a set, equivalent sets, finite and infinite sets. 278. Number –Associate the number of members in a set with the properties of that set. 279. Number –Use the symbols associated with set operations ‘intersection and union. 280. Number –Draw Venn diagrams to show set relationships including sets and subsets. 281. Measurement–Explore the tiling of a plane using different shapes. 282. Measurement–Differentiate between the size and use of the following units: square centimeters, square metre, hectare and square kilometer. 283. Measurement–Calculate the measurement of one side of a polygon given the perimeter and the lengths of the other side. 284. Measurement–Name and measure regions, compute the area of a region shaped as rectangles, right-triangles, or parallelograms individually; in combination or as the surfaces of three dimensional objects. 285. Measurement–Solve problems involving area measures. 286. Probability –Make inferences and draw conclusions based on experiments and collected data. 287–Probability –Formulate all possible outcomes of an experiment. 288–Probability –State the probability of a simple event. 289–Probability –State the range of probability values, perform and report on a variety of probability experiments. 290. Number –Write a ratio with denominator 100 which is equivalent to a given ratio. 291. 
Number –Write a given ratio with denominator 100 (or another multiple of ten) in percentage form. 292. Number –Write a percentage as a fraction with denominator 100 or in its simplest form and/or decimal. 293. Number –Use the following terms in problem situations: interest, rate of interest, simple interest. 294. Number –Use simple proportion of principal, rate and time to develop the simple interest formula. 295. Number –Investigate the services offered by financial institutions. 296. Number –Calculate cost, given number of objects and rate of charge; calculate rate of charge, given number of objects and total cost (include applications such as taxes). 297. Number –Calculate the entire amount when a percentage of the amount is known. 298. Number –Solve problems requiring the use of percentages. 299. Number –Compute the simple interest on a sum of money, with or without the formula.
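For reference (not part of the original list), the simple interest formula that items 294 and 299 build toward is conventionally written as I = P × r × t, where I is the interest, P the principal, r the rate of interest per period, and t the number of periods. For example, $200 invested at 5% per year for 3 years earns 200 × 0.05 × 3 = $30 of simple interest.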
One of the most important aspects of bioinformatics is identifying genes within a long DNA sequence. Until the development of bioinformatics, the only way to locate genes along the chromosome was to study their behavior in the organism (in vivo) or isolate the DNA and study it in a test tube (in vitro). Bioinformatics allows scientists to make educated guesses about where genes are located simply by analyzing sequence data using a computer (in silico). In principle, locating genes should be easy. DNA sequences that code for proteins begin with the three bases ATG that code for the amino acid methionine and they end with one or more stop codons; either TAA, TAG or TGA. Unfortunately, finding genes isn't always so easy. Let's consider a DNA sequence that contains a gene of interest. The DNA strand that codes for the protein is called the sense strand because its sequence reads the same as that of the messenger RNA. The other strand is called the antisense strand and serves as the template for RNA polymerase during transcription. A gene begins with a codon for the amino acid methionine and ends with one of three stop codons. The codons between the start and stop signals code for the various amino acids of the gene product but do not include any of the three stop codons. When examining an unknown DNA sequence, one indication that it may be part of a gene is the presence of an open reading frame or ORF. An ORF is any stretch of DNA that when transcribed into RNA has no stop codon. A computer program can be used to check an unknown DNA sequence for ORFs. The program transcribes each DNA strand into its complementary RNA sequence and then translates the RNA sequence into an amino acid sequence. Each DNA strand can be read in three different reading frames. This means that the computer must perform six different translations for any given double-stranded DNA sequence. The presence of an ORF doesn't guarantee that the DNA sequence is part of a gene. We expect that, just by chance, there will be some long stretches of DNA that do not contain stop codons yet are not parts of genes. Likewise, codons for methionine do not always mark the start of a gene sequence. Methionine codons are also found within genes. Nevertheless, searching for ORFs identifies regions of the DNA sequence that might be parts of genes. A single RNA or DNA strand has a phosphate group at one end and a sugar (ribose for RNA and deoxyribose for DNA) at the other end. The end of the strand with the phosphate group is called the 5' end and the opposite end with the sugar is called the 3' end. In the double helix, the two strands run in opposite directions. That is, one strand runs in the 5' to 3' direction while the complementary strand runs in the 3' to 5' direction. The enzymes and ribosomes that carry out protein synthesis only work in one direction. During transcription, the mRNA is made in the 5' to 3' direction. During translation, the mRNA is read in the 5' to 3' direction. This means that a computer program looking for ORFs also must read each DNA strand in the 5' to 3' direction. It is easier to locate genes in bacterial DNA than in eukaryotic DNA. In bacteria, the genes are arranged like beads on a string. Each gene consists of a single ORF. The situation in eukaryotic organisms is complicated by the split nature of the genes. Most eukaryotic genes take the form of alternating exons and introns. Each exon is an ORF that codes for amino acids. The intron sequences do not code for amino acids and contain internal stop codons. 
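As a rough illustration of the six-frame ORF scan described above, the following Python sketch looks for ATG-to-stop stretches on both strands in all three reading frames. The length cutoff, helper names and the toy sequence are illustrative only; as the passage notes, a real gene finder weighs far more evidence than the mere presence of an ORF.

```python
# Minimal sketch of a six-frame ORF scan: find ATG ... stop spans on both strands.

COMPLEMENT = str.maketrans("ACGT", "TGCA")
STOPS = {"TAA", "TAG", "TGA"}

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def orfs_in_frame(seq, frame):
    """Yield (start, end) of ATG-to-stop stretches in one reading frame (0, 1 or 2)."""
    start = None
    for i in range(frame, len(seq) - 2, 3):
        codon = seq[i:i + 3]
        if codon == "ATG" and start is None:
            start = i                      # first ATG opens a candidate ORF
        elif codon in STOPS and start is not None:
            yield start, i + 3             # stop codon closes it
            start = None

def six_frame_orfs(seq, min_len=300):
    """Scan both strands in all three frames, keeping ORFs above a length cutoff."""
    results = []
    for strand_name, strand in (("+", seq), ("-", reverse_complement(seq))):
        for frame in range(3):
            for start, end in orfs_in_frame(strand, frame):
                if end - start >= min_len:
                    results.append((strand_name, frame, start, end))
    return results

# Toy example (far too short for a realistic cutoff, so the cutoff is lowered):
print(six_frame_orfs("CCATGAAACCCGGGTTTTAACC", min_len=9))
```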
One of the surprises of the Human Genome Project was the relatively small number of genes found - about 25,000. One might ask, "How can something as complicated as a human have only 25 percent more genes than the tiny roundworm C. elegans?" Part of the answer seems to involve alternative splicing. Alternative splicing refers to the process by which a given gene is spliced into more than one type of mRNA molecule. ORFs are just one feature that a computer program looks for when locating potential genes. Genes are also characterized by specific control sequences that are recognized by enzymes involved with transcription and translation. When a computer program finds a DNA sequence that satisfies all of these gene features (an ORF plus the appropriate control sequences), it identifies the sequence as likely coming from a gene. However, only laboratory testing of the DNA sequence can prove that the gene is active in an organism.
Funded Interregional Project Networks “EXCELLENT”
IPN 16 – Almost the end: The biggest mass extinction of all times as recorded in the rocks of the Southern and Eastern Alps
- Naturmuseum Südtirol (Evelyn Kustatscher), Coordinator
- MUSE - Museo delle Scienze Trento (Massimo Bernardi)
- Karl Krainer (Universität Innsbruck)
The history of life has been marked by the appearance of new types of organisms at different times, such as the flowering plants or mankind. However, it has also been shaped by the disappearance of once flourishing groups, most famously the dinosaurs. When a huge number of species die in a short time, a “mass extinction” is said to occur. The most severe extinction event happened deep in Earth’s history, about 252 million years ago. During this event, up to 95% of all species became extinct, and entire groups of organisms were erased from history, for example the bizarre spiny arthropods called trilobites. Think about that for a second: 9 out of 10 of the species living in the oceans vanished in a single, catastrophic event! Despite the magnitude of this event, surprisingly little is known about its causes and its dynamics, especially for terrestrial species, those that lived on land. This is because very few rock successions recording this mass extinction are available worldwide and because their fossils are generally few and badly preserved. The Alpine region is exceptional in this respect. Here several localities expose thick sequences of sedimentary rocks, which were formed exactly when the extinction was taking place. At that time the Alpine region was located in a different position on the globe as a result of continental drift: it was much closer to the equator. The climate was much warmer and the environment was also different: sea and land met right in this region. During the mass extinction event the present-day southern part of the Alps (the Dolomites) was covered by a shallow sea, while the Lienz Dolomites and Gailtal Alps of Austria were dominated by terrestrial environments. In this project we propose multidisciplinary, cross-national research to study the effects of the extinction event in different, but very close, localities. Given that the two geographical areas were characterised about 252 million years ago by different environments (terrestrial and shallow marine) but a similar climate, their comparison will give us a better picture not only of the real events that brought the ecosystems on land to collapse during the extinction, but will also inform us about the pitfalls that might arise from studying just one environment at a time, as has generally been done for other sites in the world. As has been demonstrated, present-day biodiversity loss is taking us straight into a new mass extinction event. But not all is gone, yet. A deeper knowledge of past extinctions can help us tackle the risks we are facing and teach us how to recover from catastrophes, as life did even after the most profound extinction of all times. Behind our villages, the grandiose rock walls of the Alps protect the clues of a key event in Earth’s history. This project will illuminate these traces, shedding light on our deepest past.
The law of close corporations in the broader European regulatory competition: A View from the Euregio
- Free University of Bolzano-Bozen (Paolo Giudici), Coordinator
- University of Trento (Elisabetta Pederzini)
- Manfred Büchele (Universität Innsbruck)
After the jurisprudence of the European Court of Justice on freedom of establishment of companies in the European Union, regulatory competition for (re)incorporation among Member States and different company types has emerged as a major feature of the European company landscape. The regulatory competition paradigm seems to influence in particular the close corporation, which is also the most important company type in terms of diffusion in the local area of the European Region Tyrol-South Tyrol-Trentino, and this motivates the research project. Where do our European Region Tyrol-South Tyrol-Trentino and its respective national legislators stand in the regulatory competition (race to the top or to the bottom?) for the best place of incorporation? Where does Italian and Austrian law on close corporations stand in the competition with the supranational European corporate forms for small and medium-sized enterprises (SPE, SUP)? Considering the actual failure of most of the European attempts to create a supranational legal framework for private companies, would it not be preferable to measure the success of a harmonization effort starting from the “bottom”? In order to answer such questions it will be necessary to understand the structural configuration of close corporations, obviously not from a nominalistic point of view, but by composing the substantial elements that characterize this legal vehicle. In this view, the analysis shall concentrate on the following macro aspects of the diverse regulations and practices (in and outside our Euregio): (i) formation of a close corporation; (ii) articles of association and shareholder agreements; (iii) internal governance and management of the company; (iv) transfer of shares; (v) expulsion and withdrawal of shareholders. Equally, the typical patterns of conflict in close corporations have to be investigated (such as those between shareholders and managing directors, or between majority and minority shareholders). Finally, the investigation will concern the role of judicial and arbitral courts in shareholder disputes, weighing the pros and cons of equitable common-law and strict legalist civil-law remedies. The present research project is characterized by particular scientific originality, because of both the research topic and the comparative-functional-empirical research methodology. Of special value will be the highly innovative empirical approach, which aims, through the collection and analysis of a large number of statutory bylaws, to uncover the law in action. The reform proposals we intend to formulate at the end will thus have a stronger link to reality.
Improve the Science of Processes within the Cryosphere by Integrating Hydrological Modelling with Remote Sensing in a Multi-Level Data Fusion Approach - a Contribution to Cryosphere Monitoring in the EUREGIO Region
- EURAC (Claudia Notarnicola), Coordinator
- University of Trento (Lorenzo Bruzzone)
- Ulrich Strasser (Universität Innsbruck)
The cryosphere (here: snow, ice and glaciers) is the most important inter-seasonal water storage component in the Alps.
Climate variability and climate change directly affect cryospheric parameters and processes related to the energy and water cycle, such as snow water equivalent, glacier mass balance or runoff. Accurate monitoring as well as understanding of such processes is still a field of scientific challenge and of utmost importance for hydropower production, agriculture, winter tourism and flood protection. Apart from direct observations, hydrological models are the most common approach to studying cryospheric processes. However, particularly at larger scales (>10,000 km²), critical processes such as radiation, snow albedo and the energy balance remain underdetermined due to missing spatially explicit data. Satellite remote sensing is a promising technology for generating spatially explicit information on snow for larger areas, but operational products are mainly limited to the detection of snow cover. In view of this, the central idea of CRYOMON-SciPro is to exploit the complementary character of hydrological modelling and satellite remote sensing for monitoring key processes within the cryosphere by integrating both methods in an innovative approach (multi-level data fusion). The expected innovations of this project include:
- An improved representation and understanding of the spatial and temporal dimension of key processes within the cryosphere, with a focus on the energy and water cycle at larger scales (>10,000 km²)
- An innovative approach to integrating satellite remote sensing and hydrological modelling
- The first-time application of the latest ESA Sentinel-1 (radar) and Sentinel-2 (optical) satellites for studying the cryosphere
- The integration of data from new and innovative field measurement techniques (permanent terrestrial laser scanning, field spectrometry)
CRYOMON-SciPro makes use of the EUREGIO region as a field laboratory for cryosphere research, with well-instrumented test sites, high data availability, good contacts with authorities and climatological conditions representative of different Alpine zones. The results of the project will thus have a scientific value that extends well beyond the EUREGIO region. CRYOMON-SciPro will form the nucleus of an Interregional Project Network (IPN) on cryosphere science with the complementary expertise of three key research institutions within the EUREGIO region:
- Hydroclimatological modelling and analysis of the cryosphere (University of Innsbruck)
- Applied remote sensing of the cryosphere (EURAC Bolzano)
- Data-driven modelling and machine learning approaches (University of Trento)
CRYOMON-SciPro is designed as a three-year project, with three PhD students and young postdoctoral researchers as funded staff. A scheduled exchange programme supports knowledge transfer and the education of the young researchers.
KAOS: Knowledge-Aware Operational Support
- Free University of Bolzano-Bozen (Diego Calvanese), Coordinator
- Fondazione Bruno Kessler (Chiara Ghidini)
- Barbara Weber (Universität Innsbruck)
Business Process Management (BPM) is a collection of techniques, languages, and methodologies that are meant to improve corporate performance by managing and optimizing the business processes of a company. Business processes in turn consist of a combination of tasks and operations that are coordinated towards the achievement of the strategic objectives of an organization, and the creation of value for its stakeholders.
While BPM is a mature field as far as the modeling and enactment of business processes are concerned, it is still lacking in the proper support and analysis of active process executions. Enhancing BPM with these capabilities would make it possible to give feedback to the involved agents about issues and deviations, as well as to provide them with advice and predictions on the possible future continuations of the running processes. This is a key aspect, especially in those application domains where there is no guarantee that the process will be executed as expected, and where unforeseen situations may arise. This is the case, e.g., in healthcare, complex engineering processes, and inter-organizational processes. For this reason, business process operational decision support (OS) has recently been put forward as a framework that produces meaningful feedback, based on facts and reality, for domain experts, assisting them in the execution of business processes inside a given organizational context. OS techniques range from compliance checking between the observed and the expected behavior, to prediction of indicators related to the future continuation of the process, and recommendations on what to do next. So far, the large majority of OS techniques have focused on very specific problems, without taking into account three fundamental factors:
- The complexity and specificity of the organizational domain in which business processes are immersed.
- The interplay among the business process executions, the manipulated data, the agents, and the organizational structure.
- The fact that the organizational domain continuously evolves, i.e., is subject to “concept drift”, in turn calling for flexibility in process-aware information systems but also in the corresponding OS techniques.
The main goal of the KAOS project is to overcome such issues by empowering OS with domain knowledge. In particular, KAOS will develop a foundational framework of concepts covering organizations, processes, participants, and information as relevant for knowledge-empowered OS. It will then exploit this framework as the basis for the development of a new generation of OS techniques that are truly flexible and able to support domain experts and business analysts in the effective execution of business processes.
- Fondazione Bruno Kessler (Fabio Remondino), Coordinator
- EURAC (Benni Thiebes)
- Martin Rutzinger (Austrian Academy of Sciences (ÖAW), Institute for Interdisciplinary Mountain Research Innsbruck)
Natural hazards like earthquakes, volcanic eruptions, landslides, droughts, floods, cyclones and fires threaten people and property. These events can happen at any moment and need to be studied and monitored. The project will focus on mass movements and particularly on landslides. The number of landslide events, as well as the direct and indirect damage caused by slope failures, has increased considerably in recent years, calling for the development of more adequate methods for landslide monitoring. Therefore the overall aim of the LEMONADE project is to evaluate the capabilities, potential and limitations of new remote and proximal sensing methods for monitoring ground deformations. The project will consider and merge different platforms (satellite, UAV, land-based), sensors (imaging, ranging, radar, etc.), techniques (photogrammetry, scanning, etc.) and algorithms to deliver an innovative fusion methodology applicable also to other application fields.
Sensor fusion and data integration techniques will be used to improve the results of single methods in order to assess the capabilities of a combination of monitoring approaches. The project methodologies will be validated in three test sites with the collaboration of the regional authorities. The outcomes of the project will aid the development of novel approaches for landslide monitoring, a data fusion methodology - reusable also in other fields of application - as well as an open-access best-practice handbook for end-users.
VITISANA: Dissecting the genetic basis of negative quality traits in new disease resistant grapevines
- Edmund Mach Foundation (Riccardo Velasco), Coordinator
- Laimburg Research Center (Jennifer Berger)
- Hermann Stuppner (Universität Innsbruck)
Grapevine represents great value for the EU, where over 50% of worldwide grapevine production is concentrated (FAO, 2010). The extraordinary quality of the fruit is unfortunately accompanied by high susceptibility to pests and pathogens, meaning that large volumes of chemicals must be used to control crop losses. It has been estimated that the EU uses 68,000 tons/year of fungicides to control grape diseases, equalling 65% of all fungicides used on crops (Eurostat report, 2007). Social and ethical concerns demand much more attention to a sustainable balance between high quality and low input, two goals that are not always compatible and are often inversely related. EU citizens are increasingly sensitive to sustainable agriculture, which pays attention not only to production but also to the human impact on the environment and on quality of life. This has led to an insistence on stricter management of orchards, which will only be possible through the exploitation of information gained through the study of plant genomes. It is known that wild species related to the crops may supply a large set of natural disease resistances, developed during their natural coexistence with the pathogens without human intervention. This allowed them to develop natural tools to cope with pests and pathogens, traits which may be transferred to cultivated species through natural breeding. These approaches will benefit from the enormous amount of information coming from genomics research over the last ten years. Knowing the DNA content of the grapevine genome, coupled with efforts toward linking traits and genetic information, traditional breeding may become far more successful than in the last century, when breeding activity aimed at obtaining new resistant varieties failed because of the many negative quality traits inherited from the wild species that remained in the new hybrids. As it has been demonstrated that genetic markers may succeed in the selection of highly resistant varieties, knowledge of gene function and of DNA markers linked to quality traits is the new target on the way toward a deeper knowledge of the metabolic pathways and the high-quality traits of grapevine. In spite of several decades of breeding activities, mainly in central Europe, only a few new varieties have been registered in National Catalogues. This is mainly because undesirable traits still compromise the quality of the grape and of derived products, especially wine. Our goal is to dissect some of the traits most strongly rejected by consumers (off-flavours, methanol, diglucosides) in order to decouple them from the resistance traits carried over from the wild types into breeding products.
The main result of this project will then be to understand the genetics underlying negative quality traits in disease resistant grapevines, in order to pave the way to the breeding of new varieties with outstanding taste and sustainable cultivation.
Coffee has significant impacts on the economy, has possible health benefits, is featured at many social functions, has important environmental implications depending on how it is grown, and has been at the forefront of fair trade programs. Coffee ranks as one of the world's major commodity crops and is the major export product of some countries. In fact, coffee ranks second only to petroleum in terms of legally-traded products worldwide. Because most of the coffee producing and exporting nations are poorer countries, and coffee importing nations are the wealthier countries, coffee represents a product with the potential to alleviate the income disparity between these nations. Of course, while providing jobs for people in less developed nations, much of the wealth still ends up in the hands of middlemen and not the local farmers. When the coffee plant is grown in a traditional manner, under the shade of a forest canopy and without pesticides, there is little environmental harm. However, the development of coffee varieties that require a lot of sunlight and pesticide use has led to river pollution, deforestation, and soil erosion. While such coffee is more economical to produce and has greater yields, concern for long-term environmental sustainability has led to calls for consumers to support the use of the more traditional methods. Coffea (the coffee plant) is a genus of ten species of flowering plants in the family Rubiaceae. They are shrubs or small trees, native to subtropical Africa and southern Asia. Seeds of this plant are the source of coffee. The seeds, called "coffee beans" in the trade, are widely cultivated in tropical countries in plantations for both local consumption and export to temperate countries. When grown in the tropics, coffee is a vigorous bush or small tree easily grown to a height of 3–3.5 m (10–12 feet). It is capable of withstanding severe pruning. It cannot be grown where there is a winter frost. Bushes grow best at high elevations. To produce a maximum yield of coffee berries (800-1400 kg per hectare), the plants need substantial amounts of water and fertilizer. There are several species of Coffea that may be grown for the beans, but Coffea arabica is considered to have the best quality. The other species (especially Coffea canephora (robusta)) are grown on land unsuitable for Coffea arabica. The tree produces red or purple fruits (drupes, coffee berries, or "coffee cherries"), which contain two seeds (the "coffee beans"). In about 5-10 percent of any crop of coffee cherries, the cherry will contain only a single bean, rather than the two usually found. This is called a "peaberry" and has a distinctly different flavor profile from the normal crop, with a higher concentration of flavors, especially acidity, due to the smaller bean size. As such, it is usually removed from the yield and either sold separately (such as in New Guinea Peaberry), or discarded. The coffee tree will grow fruits after 3–5 years, for about 50–60 years (although up to 100 years is possible). The blossom of the coffee tree is similar to jasmine in color and smell. The fruit takes about nine months to ripen. Worldwide, an estimated 15 billion coffee trees are growing on 100,000 km² of land. Coffee is used as a food plant by the larvae of some Lepidoptera species, including Dalcera abrasa, the turnip moth, and some members of the genus Endoclita, including E. damor and E. malabaricus.
Coffee bean types
The two main species of the coffee plant used to produce the beverage are Coffea arabica and Coffea canephora (robusta). Coffea arabica is thought to be indigenous to Ethiopia and was first cultivated on the Arabian Peninsula. While more susceptible to disease, it is considered by most to taste better than Coffea canephora (robusta). Robusta, which contains about twice as much caffeine, can be cultivated in environments where arabica will not thrive. This has led to its use as an inexpensive substitute for arabica in many commercial coffee blends. Compared to arabica, robusta tends to be more bitter, with a telltale "burnt rubber" aroma and flavor. Good quality robustas are used as ingredients in some espresso blends to provide a better "crema" (foamy head), and to lower the ingredient cost. In Italy, many espresso blends are based on dark-roasted robusta. Arabica coffees were traditionally named for the port they were exported from, the two oldest being Mocha, from Yemen, and Java, from Indonesia. The modern coffee trade is much more specific about origin, labeling coffees by country, region, and sometimes even the producing estate. Coffee aficionados may even distinguish auctioned coffees by lot number. The largest coffee exporting nation remains Brazil, but in recent years the green coffee market has been flooded by large quantities of robusta beans from Vietnam. Many experts believe this giant influx of cheap green coffee led to the prolonged pricing crisis from 2001 to the present. In 1997 the "c" price of coffee in New York broke U.S. $3.00/pound, but by late 2001 it had fallen to U.S. $0.43/pound. Robusta coffees (traded in London at much lower prices than New York's arabica) are preferred by large industrial clients (multinational roasters, instant coffee producers, etc.) because of their lower cost. Coffee beans from two different places, or coffee varietals, usually have distinctive characteristics, such as flavor (flavor criteria include terms such as "citrus-like" or "earthy"), caffeine content, body or mouthfeel, and acidity. These depend on the local environment where the coffee plants are grown, their method of processing, and the genetic subspecies or varietal.
Economics of coffee
Coffee is second only to petroleum in importance in commodity trade. It is the primary export of many low-income countries in Latin America, Africa, and Asia, providing 25 million people with their income. On a global scale, some 500 million people depend directly or indirectly on coffee for their incomes.
The top ten coffee producers for 2005 were:
[Table: country | production in millions of metric tons | percent of world production]
The top ten coffee importers for 2004/2005 are:
[Table: country | percent of world imports]
The top ten countries for per capita coffee consumption:
[Table: country | cups per capita]
With over 400 billion cups consumed every year, coffee is the world's most popular beverage. Worldwide, 25 million small producers rely on coffee for a living. For instance, in Brazil alone, where almost a third of all the world's coffee is produced, over 5 million people are employed in the cultivation and harvesting of over 3 billion coffee plants. Coffee is a much more labor-intensive crop than commodities such as soy, sugar cane, wheat, or cattle, as it is not amenable to automation and requires constant attention. Coffee is also bought and sold as a commodity on the New York Coffee, Sugar, and Cocoa Exchange.
This is where coffee futures contracts are traded, which are a financial asset involving a standardized contract for the future sale or purchase of a unit of coffee at an agreed price. According to the Composite Index of the London-based coffee export country group International Coffee Organization, the monthly coffee price averages in international trade had been well above 100 U.S. cents/pound during in the 1970s/1980s, but then declined during the late 1990s reaching a minimum in September 2001 of just 41.17 U.S. cents per pound, and remained low until 2004. The reasons for this decline included the expansion of Brazilian coffee plantations and Vietnam's entry into the market in 1994, when the United States trade embargo against Vietnam was lifted. The market awarded the more efficient Vietnamese coffee suppliers with trade and resulted in less efficient coffee bean farmers in many countries such as Brazil, Nicaragua, and Ethiopia not being able to live off of their products; many were forced to quit the coffee bean production and move into slums in the cities (Mai 2006). Ironically, the decline in the ingredient cost of green coffee, while not the only cost component of the final cup being served, was paralleled by the rise in popularity of Starbucks and thousands of other specialty cafés, which sold their beverages at unprecedented high prices. According to the Specialty Coffee Association of America, in 2004 16 percent of adults in the United States drank specialty coffee daily; the number of retail specialty coffee locations, including cafés, kiosks, coffee carts, and retail roasters, amounted to 17,400 and total sales were $8.96 billion in 2003. In 2005, however, the coffee prices rose, with the above-mentioned ICO Composite Index monthly averages between 78.79 (September) and 101.44 (March) U.S. cents per pound. This rise was likely caused by an increase in consumption in Russia and China, as well as a harvest that was about 10 to 20 percent lower than that in the record years before. This allowed many coffee bean farmers to be able to live off their products, but not all of the extra-surplus trickled down to them, because rising petroleum prices made the transportation, roasting, and packaging of the coffee beans more expensive (Mai 2006). A number of classifications are used to label coffee produced under certain environmental or labor standards. For instance, bird-friendly or shade-grown coffee is produced in regions where natural shade (canopy trees) is used to shelter coffee plants during parts of the growing season. Organic coffee is produced under strict certification guidelines, and is grown without the use of potentially harmful artificial pesticides or fertilizers. Fair trade coffee is produced by small coffee producers; guaranteeing for these producers a minimum price. TransFair USA is the primary organization overseeing Fair Trade coffee practices in the United States, while the Fairtrade Foundation does so in the United Kingdom. Etymology and history The word coffee entered English in 1598 via Italian caffè, via Turkish kahve, from Arabic qahwa. Its ultimate origin is uncertain, there being several legendary accounts of the origin of the drink. One possible origin is the Kaffa region in Ethiopia, where the plant originated (its native name there being bunna). Coffee has been around since at least 800 B.C.E., originating in Africa and popularized throughout the Muslim world from 1000 C.E. Coffee beans were first exported from Ethiopia to Yemen. 
One legendary account is that of the Yemenite Sufi mystic named Shaikh ash-Shadhili. When traveling in Ethiopia, he observed goats of unusual vitality and, upon trying the berries that the goats had been eating, experienced the same effect. A similar myth ascribes the discovery to an Ethiopian goatherd named Kaldi. Qahwa originally referred to a type of wine, and need not be the name of the Kaffa region. Consumption of coffee was outlawed in Mecca in 1511 and in Cairo in 1532, but in the face of its immense popularity, the decree was later rescinded. In 1554, the first coffeehouse in Istanbul opened. Largely through the efforts of the British and Dutch East India companies, coffee became available in Europe no later than the sixteenth century, according to Leonhard Rauwolf's 1583 account. The first coffeehouse in England was set up in Oxford by a man named Jacob or Jacobs, a Turkish Jew, in 1650. The first coffeehouse in London was opened two years later in St. Michael's Alley in Cornhill. The proprietor was Pasqua Rosée, the Ragusan (Italian city) servant of a trader in Turkish goods named Daniel Edwards, who imported the coffee and assisted Rosée in setting up the establishment. The coffeehouse spread rapidly in Europe and America after that, with the first coffeehouses opening in Boston in 1670, and in Paris in 1671. By 1675, there were more than 3,000 coffeehouses in England. Women were not allowed in coffeehouses, and in London, the anonymous 1674 "Women's Petition Against Coffee" complained: - "…the Excessive Use of that Newfangled, Abominable, Heathenish Liquor called COFFEE […] has […] Eunucht our Husbands, and Crippled our more kind Gallants, that they are become as Impotent, as Age." Legend has it that the first coffeehouse opened in Vienna in 1683 after the Battle of Vienna, taking its supplies from the spoils left behind by the defeated Turks. The officer who received the coffee beans, Polish military officer Franciszek Jerzy Kulczycki, opened the first coffee house in Vienna and helped popularize the custom of adding sugar and milk to the coffee. Another more credible story is that the first coffeehouses were opened in Krakow in the sixteenth or seventeenth century because of closer trade ties with the East, most notably the Turks. The first coffee plantation in the New World was established in Brazil in 1727, and this country, like most others cultivating coffee as a commercial commodity, relied heavily on slave labor from Africa for its viability until abolition in 1888. In 1763, Pope Clemente VII was asked to forbid coffee as the “devil’s beverage.” The Pontiff decided to try it first and declared, “This beverage is so delicious that it would be a sin to let only misbelievers drink it! Let’s defeat Satan by blessing this beverage, which contains nothing objectionable to a Christian.” With this endorsement, the coffee trade was assured success. Coffee also got another huge endorsement from the American Revolution following the Boston Tea Party. Patriots began drinking coffee instead of tea as a symbol of their struggle for freedom. Today, coffee is consumed more than any beverage in the United States except water. One can find “coffee breaks” in the work place, “coffee hour” following religious services, and coffee houses for socialization and entertainment. One interesting and notable exception to the American love for coffee is that the Church of Jesus Christ of Latter Day Saints (the Mormons) prohibits tea and coffee from consumption by their members. 
For many decades in the nineteenth and early twentieth centuries, Brazil was the biggest producer and virtual monopolist in the trade, until a policy of maintaining high prices opened opportunities to other growers, like Colombia, Guatemala, and Indonesia. Health and pharmacology of coffee Coffee is consumed in large part not simply because of taste, but because of the effect it has on those who drink it. Coffee as a stimulant Coffee contains caffeine, which acts as a stimulant. For this reason, it is often consumed in the morning, and during working hours. Students preparing for examinations with late night "cram sessions" use coffee to maintain their concentration. Many office workers take a "coffee break" when their energy is diminished. Recent research has uncovered additional stimulating effects of coffee that are not related to its caffeine content. Coffee contains an as yet unknown chemical agent that elicits the production of cortisone and adrenaline, two stimulating hormones. For occasions when one wants to enjoy the flavor of coffee with less stimulation, decaffeinated coffee (also called “decaf”) is available. This is coffee from which most of the caffeine has been removed. This may be done by the Swiss water process (which involves the soaking of raw beans to absorb the caffeine), or by the use of a chemical solvent, such as trichloroethylene ("tri"), or the more popular methylene chloride. Another solvent used is ethyl acetate; the resultant decaffeinated coffee is marketed as "natural decaf" due to ethyl acetate being naturally present in fruit. Extraction with supercritical carbon dioxide has also been employed. Decaffeinated coffee usually loses some flavor over normal coffees and tends to be more bitter. There are also tisanes that resemble coffee in taste but contain no caffeine (see below). There have been cases all over the world of people who take far too much coffee in their drink (anywhere between 10-50 tablespoon's worth), and have experienced side effects similar to that of the illegal drug cocaine. There are many claims to the health benefits of drinking coffee. Some of the major health benefit claims include: - A moderate amount (two cups) of coffee can assist with short-term memory and can thus increase the probability to help a person be more alert for better learning. - In the workplace, a moderate amount of coffee can reduce fatigue and thus reduce the probability of accidents. (see: http://www.positivelycoffee.org/topic_workplace_references.aspx) - Coffee contains antioxidants that have been found to help reduce risks for heart disease with only two to four cups per day consumption. - Some studies have indicated that coffee may help in the prevention of liver disease. (See http://www.positivelycoffee.org/topic_liver_enzymes.aspx) - Studies indicate that type 2 diabetes is lower among those with moderate coffee consumption, and that coffee consumption may reduce the risk of gallstones, the development of colon cancer, and the risk of Parkinson disease. (see: http://www.health.harvard.edu/press_releases/coffee_health_risk.htm Coffee increases the effectiveness of pain killers—especially migraine medications—and can rid some people of asthma. For this reason, some aspirin producers also include a small dose of caffeine in the pill. Some of the beneficial effects of coffee consumption may be restricted to one sex, for instance it has been shown to reduce the occurrence of gallstones and gallbladder disease in men. 
Coffee intake may reduce one's risk of diabetes mellitus type 2 by up to half. While this was originally noticed in patients who consumed high amounts (seven cups a day), the relationship was later shown to be linear (Salazar-Martinez 2004). Coffee can also reduce the incidence of cirrhosis of the liver and prevent colon and bladder cancers. Coffee can reduce the risk of hepatocellular carcinoma, a variety of liver cancer (Inoue 2005). Also, coffee reduces the incidence of heart disease, though whether this is simply because it rids the blood of excess fat or because of its stimulant effect is unknown. At the annual meeting of the American Chemical Society in Washington, D.C., on August 28, 2005, chemist Joe Vinson of the University of Scranton presented his analysis showing that for Americans, who as a whole do not consume large quantities of fresh fruits and vegetables, coffee represents by far the largest source of valuable antioxidants in the diet. Coffee contains the anticancer compound methylpyridinium. This compound is not present in significant amounts in other food materials. Methylpyridinium is not present in raw coffee beans but is formed during the roasting process from trigonellin, which is common in raw coffee beans. It is present in both caffeinated and decaffeinated coffee, and even in instant coffee. Coffee is also a powerful stimulant for peristalsis and is sometimes considered to prevent constipation; it is also a diuretic. However, coffee can also cause loose bowel movements. Many people drink coffee for its ability to increase short-term recall and increase IQ. It also changes the metabolism of a person so that their body burns a higher proportion of lipids to carbohydrates, which can help athletes avoid muscle fatigue. Some of these health effects are realized by as little as four cups a day (24 U.S. fluid ounces, 700 mL), but others occur at five or more cups a day (32 U.S. fl. oz or 0.95 L or more). Some controversy over these effects exists, since by its nature, coffee consumption is associated with other behavioral variables. Therefore it has been variously suggested that the cognitive effects of caffeine are limited to those who have not developed a tolerance, or to those who have developed a tolerance and are caffeine-deprived. Practitioners in alternative medicine often recommend coffee enemas for "cleansing of the colon" due to its stimulus of peristalsis, although mainstream medicine has not proved any benefits of the practice. Many notable effects of coffee are related to its caffeine content. Many coffee drinkers are familiar with "coffee jitters," a nervous condition that occurs when one has had too much caffeine. Coffee can also increase blood pressure among those with high blood pressure, but follow-up studies showed that coffee still decreased the risk of dying from heart disease in the aggregate. Coffee can also cause insomnia in some, while paradoxically it helps a few sleep more soundly. It can also cause anxiety and irritability, in some with excessive coffee consumption, and some as a withdrawal symptom. There are also gender-specific effects of coffee. In some PMS (pre-menstral syndrome) sufferers, it increases the symptoms. It can also reduce fertility in women, and may increase the risk of osteoporosis in postmenopausal women. There may be risks to a fetus if a pregnant woman drinks substantial amounts of coffee (such as eight or more cups a day; that is, 48 U.S. fluid ounces or 1.4 L or more). 
A February 2003 Danish study of 18,478 women linked heavy coffee consumption during pregnancy to significantly increased risk of stillbirths (but no significantly increased risk of infant death in the first year). "The results seem to indicate a threshold effect around four to seven cups per day," the study reported. Those who drank eight or more cups a day (48 U.S. fl oz or 1.4 L) were at 220 percent increased risk compared with nondrinkers. This study has not yet been repeated, but has caused some doctors to caution against excessive coffee consumption during pregnancy. Decaffeinated coffee is occasionally regarded as a potential health risk to pregnant women, due to the high incidence of chemical solvents used to extract the caffeine. These concerns may have little or no basis, however, as the solvents in question evaporate at 80–90° C, and coffee beans are decaffeinated before roasting, which occurs at approximately 200° C. As such, these chemicals, namely trichloroethane and methylene chloride, are present in trace amounts at most, and neither pose a significant threat to unborn children. Women still worried about chemical solvents in decaffeinated coffee should opt for beans that use the Swiss water process, where no chemicals other than water are used, although higher amounts of caffeine remain. The American Journal of Clinical Nutrition published a study in 2004 that tried to discover why the beneficial and detrimental effects of coffee conflict. The study concluded that consumption of coffee is associated with significant elevations in biochemical markers of inflammation. This is a detrimental effect of coffee on the cardiovascular system, which may explain why coffee has so far only been shown to help the heart at levels of four cups (20 fluid ounces or 600 mL) or fewer per day. Coffee in large amounts has been found to be associated with increased heart rate, increased blood pressure, and occasional irregular heart beat. Much processing and human labor is required before coffee berries and its seed can be processed into roasted coffee with which most Western consumers are familiar. Coffee berries must be picked, defruited, dried, sorted, and sometimes aged. All coffee is roasted before being consumed. Roasting has a great degree of influence on the taste of the final product. Once the raw ("green") coffee beans arrive in their destination country, they are roasted. This darkens their color and alters the internal chemistry of the beans and therefore their flavor and aroma. Blending can occur before or after roasting and is often performed to ensure a consistent flavor. Once the beans are roasted, they become much more perishable. Problems of maintaining quality during bean production Achieving consistently high quality milled beans is not easy. Problems include: - Pests on the bushes (e.g., in Hawaii, scale insects and coconut mealy bugs) - Poor pruning regimes (e.g., too many verticals that allow the bush to attempt too much and so produce inferior cherries) - Poor fertilizer regimes (e.g., too little iron or insufficient nutriment for what are demanding plants) - Bad picking (e.g., picking all the berries on a branch rather than those that are bright red, or picking the berries very late) - Bad fermentation that produces unpleasant taints in the flavor - Dilution of superior tasting beans with cheaper beans When conditions permit, coffee bushes fruit aggressively, and the berries will develop at the expense of the rest of the bush. 
The consequent sugar consumption can produce die-back (death of leaves and branches). Die-back can be severe and can damage not just the current year's production but also the next year's production, which is borne on growth during the current year. Commercial operators come under a variety of pressures to cut costs and maximize yield. Arguably, better flavors will be produced when the coffee is grown in organic conditions. Some people who grow organically do so primarily to obtain the premium prices organic beans command, an alternative strategy to increase profits. The processing of coffee typically refers to the agricultural and industrial processes needed to deliver whole roasted coffee beans to the consumer. In order to turn this into a beverage, some preparation is typically necessary. The particular steps needed vary with the type of coffee desired, and with the raw material being worked with (e.g., pre-ground vs. whole bean). Typically, coffee must be ground to varying coarseness depending on the brewing method. Once brewed, it may be presented in a variety of ways: on its own, with or without sugar, with or without milk or cream, hot or cold, and so on. A number of products are sold for the convenience of consumers who do not want to prepare their own coffee. Instant coffee has been dried into soluble powder or granules, which can be quickly dissolved in hot water for consumption. Canned coffee is a beverage that has been popular in Asian countries for many years, particularly in Japan and South Korea. Vending machines typically sell a number of varieties of canned coffee, available both hot and cold. To match with the often-busy life of Korean city dwellers, companies mostly have canned coffee with a wide variety of tastes. Japanese convenience stores and groceries also have a wide availability of plastic-bottled coffee drinks, which typically are lightly sweetened and pre-blended with milk. In the United States, Starbucks is a retail outlet that sells a number of prepared cold coffee drinks in both bottles and cans. Lastly, liquid coffee concentrate is sometimes used in large institutional situations where coffee needs to be produced for thousands of people at the same time. It is described as having a flavor about as good as low-grade robusta coffee, and costs about 10 cents a cup to produce. The machines used to process it can handle up to 500 cups an hour, or 1,000 if the water is preheated. Social aspects of coffee The United States is the largest market for coffee, followed by Germany. The Nordic countries consume the most coffee per capita, with Finland, Norway, and Denmark trading the top spot depending on the year. However, consumption has also vastly increased in the United Kingdom in recent years. Coffee is so popular in the Americas, the Middle East, and Europe that many restaurants specialize in coffee; these are called "coffeehouses" or "cafés." Most cafés also serve tea, sandwiches, pastries, and other light refreshments (some of which may be dunked into the drink. Some shops are miniature cafés that specialize in coffee-to-go for hurried travelers, who may visit these on their way to work. Some provide other services, such as wireless internet access, for their customers. In some countries, notably in northern Europe, coffee parties are a popular form of entertaining. Besides coffee, the host or hostess at the coffee party also serves cake and pastries, hopefully homemade. 
Because of the stimulant properties of coffee and because coffee does not adversely impact higher mental functions, coffee is strongly associated with white-collar jobs and office workers. Social habits involving coffee in offices include the morning chat over coffee and the coffee break. Contemporary advertising tends to equate the term "coffee break" with rest and relaxation, despite coffee's stimulant role.
References
- Chambers, R. 1869. Chambers' Book of Days for January 27. Retrieved June 2, 2006.
- Inoue, M., et al. 2005. "Influence of coffee drinking on subsequent risk of hepatocellular carcinoma: A prospective study in Japan." Journal of the National Cancer Institute 97(4): 293-300.
- Joffe-Walt, B., and O. Burkeman. 2005. "Coffee trail - from Ethiopian village of Choche to London coffee shop." The Guardian, September 16, 2005.
- Koppelstaeter, F., et al. 2005. "Influence of Caffeine Excess on Activation Patterns in Verbal Working Memory." Conference paper presented at the Radiological Society of North America, November 30, 2005.
- Lunde, P., and J. Mandaville. 1973. "Wine of Arabia." Saudi Aramco World 24(5), September/October 1973.
- Mai, M. 2006. "Boom für die Bohnen." Jungle World 1, January 4, 2006. ISSN 1613-0766.
- Pendergrast, M. 1999. Uncommon Grounds: The History of Coffee and How It Transformed Our World. Basic Books. ISBN 0465054676.
- Salazar-Martinez, E., W. C. Willet, A. Ascherio, J. E. Manson, M. F. Leitzmann, M. J. Stampfer, and F. B. Hu. 2004. "Coffee consumption and risk for type 2 diabetes mellitus." Annals of Internal Medicine 140: 1-8.
- Singleton, A. 2006. "Coffee that really helps development." The New Ideas in International Development, March 17, 2006.
- Wisborg, K., et al. 2003. "Maternal consumption of coffee during pregnancy and stillbirth and infant death in first year of life: Prospective study." British Medical Journal 326: 420.
Hardware and Software Solutions for UART over Ethernet communication
Definition and characteristics of UART
A universal asynchronous receiver/transmitter (UART) should not be confused with communication protocols like I2C or SPI. As a rule, it is a self-contained IC or a physical circuit in a microcontroller whose main task is transmitting and receiving serial data. In a computer, the UART is the programmable microchip that controls the interface to attached serial devices, allowing the computer to communicate and exchange data with them by providing the RS-232C Data Terminal Equipment (DTE) interface. As part of that interface the UART also performs other functions:
- It converts the bytes received in parallel from the computer into the single serial bit stream required for outbound transmission.
- On inbound transmission, it converts the serial bit stream received from a device into bytes that the computer understands.
- It adds a parity bit to outgoing transmissions and, if parity checking has been selected, checks and discards the parity bit on incoming transmissions.
- It adds start and stop delimiters to outbound transmissions and strips them from inbound ones.
- It handles the special interrupts generated by the keyboard and mouse and their dedicated ports.
- It can be used to handle additional interrupts and device management tasks related to coordinating the speed of the computer and its associated devices.
One of the primary advantages of UART is that it uses only two wires for data transfer between devices. UART communication is quite simple: UART1, after converting parallel data from a controlling device (e.g. a CPU) into serial format, transfers it to UART2, which in turn transforms the serial data back into parallel form for the receiving device. Data therefore streams from the Tx pin of UART1 to the Rx pin of UART2. UARTs interact with each other directly, requiring only two wires for data transmission between them. UARTs transfer data asynchronously, so there is no clock signal to synchronize either the output of bits from the transmitting UART or the sampling of bits by the receiving UART. As a substitute for a clock signal, the transmitting UART adds start and stop bits to the transferred data packet to mark its beginning and end, letting the receiving UART know when to start reading the bits. Once the receiving UART detects a start bit, it begins reading incoming bits at a certain frequency, referred to as the baud rate. The baud rate measures data transmission speed in bits per second (bps). Both UARTs must operate at nearly the same baud rate; the difference between the baud rates of the receiving and transmitting UARTs should not exceed about 10 percent before the timing of bits gets too far off.
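To make the framing just described concrete, here is a small Python sketch that lays out one possible frame (one start bit, eight data bits sent LSB first, an optional even parity bit and one stop bit) and computes the bit time for a given baud rate. It is purely illustrative: real UARTs implement this timing in hardware, and the exact frame format is configurable.

```python
# Toy illustration of UART framing: start bit, 8 data bits (LSB first),
# optional even parity, stop bit. A sketch of the frame layout, not a driver.

def uart_frame(byte, parity=True):
    bits = [0]                                   # start bit (line pulled low)
    data = [(byte >> i) & 1 for i in range(8)]   # data bits, LSB transmitted first
    bits += data
    if parity:
        bits.append(sum(data) % 2)               # even parity bit
    bits.append(1)                               # stop bit (line returns high)
    return bits

def bit_time_us(baud_rate):
    """Duration of one bit slot in microseconds at a given baud rate."""
    return 1_000_000 / baud_rate

print(uart_frame(0x41))        # frame for the byte 'A'
print(bit_time_us(115200))     # ~8.68 microseconds per bit at 115200 bps
```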
So, is it possible to share UART data over Ethernet, and if so, how is it best done? When using the UART protocol locally, you can easily capture all passing data streams and save them to a file. However, to work with a remote UART device located elsewhere on the Ethernet network, you have no choice but to use additional hardware or software.
Hardware UART to Ethernet Converter
A UART-to-Ethernet module, mostly used for transparent data transmission, is a simple solution for communicating between UART and Ethernet. It is a serial TTL to Ethernet module that can be configured via a web page. One such appliance for transparent conversion between TCP or UDP socket data and serial UART data is the USR-TCP232-T. Capabilities and characteristics of the Ethernet serial module:
- 10/100M auto-detect interface;
- Automatic MDI/MDI-X support, so either a straight-through or a crossover cable can be used;
- Different work modes available: UDP Client, UDP Server, TCP Server, TCP Client;
- Work mode settings can be adjusted via a COM port or over the network;
- Supports 3.3V TTL levels (module products);
- Support for virtual COM ports;
- Ensures a reliable connection thanks to its heartbeat packet mechanism, which eliminates dead ("feigned death") connections;
- No packet broadcasting in UDP mode and better anti-interference ability when passing through gateways, switches and routers;
- Works in a LAN or over the Internet (external network), and so on.
To sum up, the aforementioned converter is a good solution for local networks and nearby devices. But what should you do to access UART over Ethernet from far away, say from another part of the world? How can you use UART over the network?
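Before turning to dedicated software products, it may help to see what a software answer to this question involves in its simplest form. The sketch below assumes the third-party pyserial package and uses placeholder values for the device path, host and port; it forwards bytes between a local serial port and a single TCP client, and is a minimal illustration rather than a production-grade solution.

```python
# Minimal sketch of a software UART-over-Ethernet bridge: bytes read from a
# local serial port are forwarded to a TCP peer and vice versa.
# Requires the third-party pyserial package (pip install pyserial);
# the device path, host and port below are placeholders.

import socket
import serial

SERIAL_DEV = "/dev/ttyUSB0"   # placeholder serial device
BAUD = 115200
HOST, PORT = "0.0.0.0", 9000  # placeholder listening address

def bridge():
    uart = serial.Serial(SERIAL_DEV, BAUD, timeout=0.05)
    with socket.create_server((HOST, PORT)) as srv:
        conn, peer = srv.accept()
        conn.settimeout(0.05)
        print("client connected from", peer)
        while True:
            data = uart.read(256)          # serial -> network
            if data:
                conn.sendall(data)
            try:
                chunk = conn.recv(256)     # network -> serial
            except socket.timeout:
                continue
            if not chunk:                  # peer closed the connection
                break
            uart.write(chunk)

if __name__ == "__main__":
    bridge()
```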
Although it directly contradicted the classical equipartition theorem of energy, the explanation of black body radiation was one of the first discoveries of modern quantum mechanics. The equipartition theorem states that within thermal equilibrium, where each part of the system is at the same temperature, each degree of freedom has (1/2)kBT of thermal energy associated with it, kB representing the Boltzmann constant; this means, for example, that the average kinetic energy in the translational movement of an object should equal the kinetic energy of its rotational motion. By this point, it was known how heat caused the atoms in solids to vibrate and that atoms were patterns of electrical charges, but it was unknown how these solids radiated the energy that they in turn created. Hertz and other scientists experimented with electromagnetic waves and found that Maxwell's earlier conjecture that electromagnetic disturbances should propagate through space at the speed of light had been correct. This led to the explanation of light itself as an electromagnetic wave. From this observation, it was assumed that as a body was heated, the atoms would vibrate and create charge oscillations, which would then radiate the light and the additional heat that could be observed. From this, the idea of a "black body" formed: an object that would absorb all radiation that came in contact with it, but which was also the perfect emitter. The ideal black body was a heated oven with a small hole, which would release the radiation from inside. Based on the equipartition theorem, such an oven at thermal equilibrium would have an infinite amount of energy, and the radiation escaping through the hole would contain all frequencies at once. However, when the experiment was actually performed, this is not what occurred. As the oven heated, different frequencies of radiation were detected from the hole, one at a time, starting with infrared radiation, followed by red, then yellow light, and so on. This showed that high-frequency oscillators are not excited at low temperatures, and that equipartition was not accurate. This discovery led to Stefan's Law, which says that the total energy radiated per unit area of a black body per unit time (the power) is proportional to the absolute temperature to the fourth power. It also led to Wien's Displacement Law, which states that the wavelength distributions of thermal radiation of a black body at all temperatures have essentially the same shape, except that the curves are displaced from one another. Later on, Planck characterized the light coming from a black body and derived an equation to predict the radiation at each temperature. For each given temperature, the peak of the predicted curve shifts position, solidifying the idea that different temperatures excite different parts of the light spectrum. This was all under the assumption that radiation was released in quanta, now known as photons. All of these laws help modern physicists interpret radiation and make accurate estimates of the temperature of planets based on the radiation that comes from them. Einstein used the same quantization of electromagnetic radiation to explain the photoelectric effect, which disproved the idea that more intense light would increase the kinetic energy of the electrons emitted from an object. The photoelectric effect was originally observed by Heinrich Hertz, but was later explained by Albert Einstein.
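As a quick numerical illustration of the three black-body laws mentioned above (not part of the original essay), the following Python sketch evaluates Planck's law, Wien's displacement law, and Stefan's law using standard constant values; the example temperature of 5800 K (roughly the Sun's surface) is an assumption chosen only for the demonstration.

    import math

    h  = 6.626e-34   # Planck constant, J*s
    c  = 2.998e8     # speed of light, m/s
    kB = 1.381e-23   # Boltzmann constant, J/K

    def planck_spectral_radiance(wavelength_m, T):
        """Planck's law: spectral radiance of a black body at temperature T."""
        a = 2.0 * h * c**2 / wavelength_m**5
        b = math.expm1(h * c / (wavelength_m * kB * T))
        return a / b

    def wien_peak_wavelength(T):
        """Wien's displacement law: wavelength of peak emission (b = 2.898e-3 m*K)."""
        return 2.898e-3 / T

    def stefan_boltzmann_power(T):
        """Stefan's law: radiated power per unit area, proportional to T**4."""
        sigma = 5.670e-8             # W m^-2 K^-4
        return sigma * T**4

    if __name__ == "__main__":
        T = 5800.0
        lam = wien_peak_wavelength(T)
        print(f"peak wavelength at {T} K: {lam*1e9:.0f} nm")
        print(f"radiance at that peak:    {planck_spectral_radiance(lam, T):.3e}")
        print(f"power per unit area:      {stefan_boltzmann_power(T):.3e} W/m^2")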
Einstein determined that light was made up of packets of energy known as photons, which have no mass but have momentum and an energy given by the equation E=hf, h representing Planck's constant and f representing the frequency of the light used. The photoelectric effect shows that if light of high enough energy is shone on a metal, electrons will be released from the metal. Because of the energy equation, light of certain low frequencies will not cause the emission of electrons, no matter how intense, while light of certain high frequencies will always eject electrons, even at a very low intensity. The amount of energy needed to release electrons from a metal plate depends on the type of metal and changes from case to case, as every type of metal has a certain work function, the amount of energy needed to remove an electron from its surface. If the photons that hit the metal plate have at least as much energy as the work function of the metal, the energy from a photon can transfer to an electron, which allows it to escape from the surface of the metal. Of course, the energy of the photon depends on the frequency of the light. Einstein postulated that the kinetic energy of the electron once it has been freed from the surface can be written as E=hf-W, W being the work function of the material. Prior to Einstein's work on the photoelectric effect, Hertz discovered, mostly by accident, that ultraviolet light would knock electrons off of metal surfaces. However, according to the classical wave theory of light, the intensity of light determined its amplitude, so more intense light should have given the emitted electrons higher kinetic energy. Experiments showed that this was not the case: frequency affected the kinetic energy, while intensity determined the number of electrons that were released. By explaining the photoelectric effect, scientists found that light behaves as a particle, but it also acts as a wave. This helps support particle-wave duality. In order to explain the behavior of light, you must consider its particle-like qualities as well as its wave-like qualities. This means that light exhibits particle-wave duality, as it can act as a wave and as a particle. In fact, everything exhibits this kind of behavior, but it is most prominent in very small objects, such as electrons. Particle-wave duality is attributed to Louis de Broglie in about 1923. He argued that since light could display wave-like and particle-like properties, matter could as well. After centuries of thinking of electrons as solid things with definite positions, de Broglie proposed that they had wave-like properties, and later experiments much like Young's double-slit experiment confirmed this by showing the interference patterns that arose. This idea helped scientists realize that the wavelength of an object is inversely proportional to its momentum. Around the same time that de Broglie was explaining particle-wave duality, Arthur Compton described the Compton effect, or Compton scattering. This was another discovery which showed that light cannot be looked at solely as a wave, further supporting de Broglie's particle-wave duality. Compton scattering is a phenomenon that takes place when a high-energy photon collides with an electron, reducing the frequency of the photon and therefore its energy. Compton derived the formula describing this occurrence as λ' - λ = (h/(m_e c))(1 - cos θ) = λ_C(1 - cos θ), where λ' is the resulting wavelength of the photon, λ
is the initial wavelength of the photon, θ is the scattering angle between the photon and the electron, and λ_C is the Compton wavelength of the electron, which is about 2.43 × 10^-12 meters. Compton arrived at this by considering the conservation of momentum and energy. Although they have no mass, photons have momentum, which is defined by p = E/c = hf/c = h/λ. In order to conserve momentum, or to collide at all, light must be thought of as a particle in this case, instead of a wave. Quantum mechanics is not the only facet of modern physics, and it shares equal importance with relativity. Relativity is defined as the dependence of various physical phenomena on the relative motion of the observer and the observed objects, especially in relation to light, space, time, and gravity. Relativity in modern physics is largely attributed to the work of Albert Einstein, while classical relativity can mainly be attributed to Galileo Galilei. The quintessential example of Galilean relativity is that of the person on a ship. Once the ship has reached a constant velocity and continues in a constant direction, if the person is in the hull of the ship and is not looking outside to see any motion, the person cannot feel the ship moving. Galileo's relativity hypothesis states that any two observers moving at constant speed and direction with respect to one another will obtain the same results for all mechanical experiments. This idea led to the realization that velocity does not exist without a reference point. This idea of a frame of reference became very important to Einstein's own theories of relativity. Einstein had two theories of relativity, special and general. He published special relativity in 1905 and general relativity in 1916. His theory of special relativity was deceptively simple, as it mostly took Galilean relativity and reapplied it to include Maxwell's magnetic and electric fields. Special relativity states that the laws of physics are the same in all inertial frames. An inertial frame is a frame in which Newton's law of inertia applies and holds true, so that objects at rest stay at rest unless an outside force is applied, and objects in motion stay in motion unless acted upon by an outside force. The theory of relativity deals with objects that are approaching the speed of light, as it turns out that Newton's laws begin to falter when the velocity gets too high. Special relativity only deals with the motion of objects within inertial frames, and it is quite comparable to Galilean relativity, with the addition of a few new discoveries, such as magnetic and electric fields and the constancy of the speed of light. The theory of general relativity is much more difficult to understand than special relativity because it involves objects traveling close to the speed of light within non-inertial frames, frames that do not meet the requirements given by Newton's law of inertia. General relativity coincides with special relativity when gravity can be neglected. It involves the curvature of space and time, and the idea that time is not the absolute quantity we have always assumed it to be. General relativity is a theory that describes the behavior of space and time, as well as gravity. In general relativity, space-time becomes curved in the presence of matter, which means that particles moving with no external forces acting upon them can spiral and travel in a curve, which conflicts with Newton's laws.
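To make the photon relations discussed above concrete, here is a small Python sketch (not part of the original essay) that evaluates the photon energy E = hf, the photon momentum p = h/λ, the photoelectron energy E = hf - W, and the Compton shift; the example X-ray wavelength of about 0.071 nm and the example work function are assumptions chosen only for illustration.

    import math

    h   = 6.626e-34    # Planck constant, J*s
    c   = 2.998e8      # speed of light, m/s
    m_e = 9.109e-31    # electron rest mass, kg

    def photon_energy(wavelength_m):
        return h * c / wavelength_m          # E = h f = h c / lambda

    def photon_momentum(wavelength_m):
        return h / wavelength_m              # p = E / c = h / lambda

    def photoelectron_max_ke(frequency_hz, work_function_j):
        return h * frequency_hz - work_function_j   # E = h f - W

    def compton_shift(theta_rad):
        """Increase in wavelength after scattering through angle theta."""
        lambda_c = h / (m_e * c)             # Compton wavelength, ~2.43e-12 m
        return lambda_c * (1.0 - math.cos(theta_rad))

    if __name__ == "__main__":
        lam = 7.1e-11                        # example X-ray wavelength, ~0.071 nm
        print(f"photon energy:   {photon_energy(lam):.3e} J")
        print(f"photon momentum: {photon_momentum(lam):.3e} kg m/s")
        shift = compton_shift(math.radians(90))
        print(f"Compton shift at 90 deg: {shift:.3e} m")
        # Example work function of ~3.7e-19 J (assumed value) with violet light:
        print(f"photoelectron KE: {photoelectron_max_ke(7.5e14, 3.7e-19):.3e} J")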
In classical physics, gravity is described as a force: as an apple falls from a tree, gravity attracts it toward the center of the Earth. This also explains the orbits of the planets. In general relativity, however, a massive object such as the Sun curves space-time and forces planets to revolve around it in the same way a bead would spiral down a funnel. This idea of general relativity and the curvature of space-time led scientists to realize what black holes are and how they can be possible. It also explains the bending of light around massive objects. Black holes have massive centers and are hugely dense. Each particle they contain also lives in space-time, however, and so the center continues to collapse and become denser and denser from the motion of these particles. Black holes are so dense that they bend space-time to such an enormous degree that there is no escape route from them. General relativity also implies that the universe must be either contracting or expanding. If all the stars in the universe were at rest compared to one another, gravity would begin to pull them together; general relativity would show that space as a whole would begin to shrink, and the distances between the stars would do the same. The universe could also technically be expanding, but it could never be static. In 1929, Hubble discovered that all of the distant galaxies seemed to be moving away from us, which supports the explanation that the universe is expanding. The basis of general relativity is the dynamic behavior of space and time, and the fact that these are not the static measurements they had always been assumed to be. However, a key issue is that there has been little success in combining quantum mechanics with Einstein's general relativity; the most successful union of quantum mechanics and relativity so far is quantum electrodynamics, which incorporates special relativity. Quantum electrodynamics, or QED, is a quantum theory that involves the interaction of charged particles with the electromagnetic field. The scientific community broadly agrees upon QED, and it successfully unites quantum mechanics with special relativity. QED mathematically explains the relationships between light and matter, as well as between charged particles. In the 1920s, Paul Dirac laid the foundations of QED by discovering the equation for the spin of the electron, incorporating both quantum mechanics and the theory of special relativity. QED was further developed into the state it is in today during the 1940s by Richard Feynman. QED rests on the assumption that charged particles interact by absorbing and emitting photons, which transmit electromagnetic forces. These exchanged photons are virtual: they cannot be seen or detected in any way, because their independent existence would violate the conservation of energy and momentum. QED relies heavily on the Hamiltonian formalism and the use of differential equations and matrices. Feynman created the Feynman diagram used to depict QED processes, using a wavy line for a photon, a straight line for an electron, and a junction of two straight lines and one wavy line to represent the absorption or emission of a photon. QED helps define the probability of finding an electron at a certain position at a certain time, given its whereabouts at other positions and times. Since the possibilities of where and when the electron can emit or absorb a photon are infinite, this is a very difficult procedure. Compton scattering is very relevant to QED because it involves the scattering of photons off electrons.
Modern physics is a simple term used to cover a huge array of discoveries made over the past two hundred years. While its two main facets are quantum mechanics and relativity, an amazing number of subtopics and experiments have brought about rapid change, giving the world new technologies and new capabilities. Thanks to scientists like Einstein, Hawking, Feynman, and many others, we have found, and will continue to find, amazing things about our universe.
Percentages are an integral part of everyday life whether you know it or not. Taking part in a survey, going to the bank, measuring ingredients for a recipe or calculating store discounts all require you to work out percentages in some way or another. Calculating a percentage is actually very straightforward and requires only some very basic math. Percentages can also be written and calculated as fractions; visit the Netcomuk.co.uk website for how to work out percentages as fractions. Percentages can also exceed 100%. This usually happens when you are measuring how much something has increased; for example, a movie can make a profit of 800% of its production cost. Familiarize yourself with how percentages work. Percentages are proportional representations of whole objects, numbers, prices or people. They are denoted by a “%” symbol and are usually measured between 0% and 100%, where 0% represents nothing/nobody and 100% represents everything/everybody. Find out the information you need to calculate a percentage value. You will need two pieces of information: the percentage you are trying to find and the subject total that the percentage represents. For example, you might need to find out 90% of 100 grams, 10% of $15.00 or 60% of 250 people. You need both pieces of information because percentages are proportional and will give different results depending on what you are finding the percentage of; 48% of 100 people is different from 48% of 200 people. Work out the percentage’s value once you have these pieces of information. This is done using a very straightforward process. First, divide the subject total by 100. This gives you 1% of your subject. For example, if you are working out 10% of 4,500, divide 4,500 by 100 and you will have worked out 1% of 4,500—in this case, 45. Then multiply your 1% answer by the percentage you wish to find. Using the same example, 1% of 4,500 is 45; to find 10% of 4,500, simply multiply the 45 by 10 to get 450. This is 10% of 4,500. Similarly, you can multiply the 1% answer by any number to find any percentage value. For example, to find 80% of 4,500, multiply the 45 by 80 to get 3,600. Learn how to work out the percentage of something if you already have the value. For example, if you want to find out what percentage 2,250 is of 4,500, you can reverse the previous process. This time, multiply the value by 100 (2,250 x 100 = 225,000). Then divide this by the subject total, i.e. 225,000 ÷ 4,500 = 50. This is your percentage answer: 2,250 is 50% of 4,500. Practice these two methods by working out different percentages and values with different subject totals. Vary between working out the percentages for people, weights, times, heights or money.
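The two methods described above translate directly into a couple of lines of code. Here is a minimal Python sketch (not part of the original article) using the same worked numbers; the function names are illustrative only.

    def percentage_of(total, percent):
        """Return `percent` percent of `total` (e.g. 10% of 4,500)."""
        one_percent = total / 100        # step 1: divide the total by 100
        return one_percent * percent     # step 2: multiply by the desired percentage

    def what_percent(part, total):
        """Return what percentage `part` is of `total` (e.g. 2,250 of 4,500)."""
        return part * 100 / total        # the reverse of the process above

    if __name__ == "__main__":
        print(percentage_of(4500, 10))   # 450.0
        print(percentage_of(4500, 80))   # 3600.0
        print(what_percent(2250, 4500))  # 50.0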
A pair of researchers from Shantou University in China explored designing and manufacturing a CubeSat with 3D printing, which we have seen in the past. CubeSats, which are basically miniaturized satellites, offer plenty of advantages in space exploration, such as low cost, a short research cycle, and more lightweight construction, but conventional methods of manufacturing often negate these. Using 3D printing to make CubeSats can help achieve accurate details as well. “With the successful development of integrated technologies, many spacecraft subsystems have been continuously miniaturized, and CubeSats have gradually become the main executors of space science exploration missions,” they wrote. The main task driving the research paper is a LEO, or Low Earth Orbit, CubeSat mission, which would need to withstand a maximum acceleration of 5 g during launch. “…the internal operating temperature range of the CubeSat is from 0 to 40 °C, external temperature from -80 to 100 °C,” the researchers explained. During the design process, the duo took into account environmental factors, the impact load received during the launch process, and the surrounding environment once the CubeSat reached orbit. Once they determined the specific design parameters, ANSYS software was used to simulate, analyze, and verify the design’s feasibility. PLA was used to make the mini satellite, which is, as the name suggests, shaped like a cube. Each cube cell, called a unit, weighs approximately 1 kg and has sides measuring 10 cm in length. “The framework structure for a single CubeSat provides enough internal workspace for the hardware required to run the CubeSat. Although there are various CubeSat structure designs, several consistent design guidelines can be found by comparing these CubeSats,” the researchers wrote about the structure of their CubeSat. These guidelines include:
- a cube with a side length of 100 mm;
- 8.5 × 113.5 mm square columns (rails) placed at the four parallel corners;
- construction usually in aluminum, for low cost, light weight, and easy machining.
The CubeSat needs to be big enough to contain its power subsystem (secondary batteries and solar panels), in addition to the vitally important thermal subsystem, the communication system providing signal connections to ground stations back on Earth, and the ADCS (attitude determination and control) and CDH (command and data handling) subsystems. It also contains onboard antennae, radios, data circuit boards, a three-axis stability system, and autonomous navigation software. “The adoption of this technology changes the concept of primary and secondary structure in the traditional design process, because the whole structure can be produced at the same time, which not only reduces the number of parts, reduces the need for screws and adhesion, but also improves the stability of the overall structure,” the pair wrote about using 3D printing to construct their CubeSat. The mission overview for this 3D printed CubeSat explains that the device needs to complete performance tests on its camera payload for reliability evaluation, and test the effectiveness of any 3D printed structures “in an orbital environment.” The von Mises stress diagram of the CubeSat structure. In order to ensure that it is ready to operate in LEO, the CubeSat’s structure was analyzed using ANSYS’ finite element analysis (FEA) software, and the researchers also performed a random vibration analysis so that they could be certain it would hold up under the launch’s impact load. “The CubeSat structure is validated by the numerical experiment.
During launch process, CubeSat will be fixed inside the P-Pod, and the corresponding structural constraints should be added to the numerical model. In addition, the maximum acceleration impact during the launch process should also be considered. Static Structural module of ANSYS is used for calculation and analysis, the results show that the maximum stress of CubeSat Structure is 8.06 MPa, lower than the PLA yield strength of 40 MPa,” the researchers explained. Operating in LEO, the 3D printed CubeSat will go through a 100°C temperature change, and the structure needs to be able to resist this, so the researchers also conducted a thermal shock test, which showed an acceptable thermal strain. The thermal strain diagram of the CubeSat structure. The team also conducted random vibration simulation experiments so they could confirm that the structure of the 3D printed CubeSat conforms to launch conditions. They simulated typical launch vibration characteristics, using NASA GEV qualification and acceptance criteria as a reference. “The specific contents of the experiment include “Harmonic Response” and “Random Vibration”. Two identical harmonic response tests were performed before and after the random vibration test to assess the degree of structural degradation that may result from the launch load,” the researchers explained. “This experiment helps us to evaluate the natural frequency of the structure, and the peak value indicates that the tested point (bottom panel) has reached the resonant frequency.” Pre/post random vibration test comparison between the curves of the harmonic response. As seen in the above figure, both the trend and the peak points of the two curves are close to each other, which shows that there was no structural degradation after the vibration test, and that the structure itself conforms to launch stiffness specifications. “As the primary performer of today’s space exploration missions, the CubeSat design considers orbit, payload, thermal balance, subsystem layout, and mission requirements. In this research, a CubeSat design for performing LEO tasks was proposed, including power budget, mass distribution, and ground testing, and the CubeSat structure for manufacturing was combined with 3D printing technology,” the researchers concluded. “The results show that the CubeSat can withstand the launch loads without structural damage and can meet the launch stiffness specification.”
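As a back-of-the-envelope check of the margins reported above (a peak stress of 8.06 MPa against a 40 MPa PLA yield strength), the following short Python sketch computes the resulting safety factor; the 1.5× design margin used in the example is an assumed value for illustration, not a figure from the paper.

    def safety_factor(yield_strength_mpa, max_stress_mpa):
        """Ratio of material yield strength to the peak simulated stress."""
        return yield_strength_mpa / max_stress_mpa

    def passes_launch_check(yield_strength_mpa, max_stress_mpa, required_factor=1.5):
        """required_factor is an assumed design margin, not a value from the paper."""
        return safety_factor(yield_strength_mpa, max_stress_mpa) >= required_factor

    if __name__ == "__main__":
        sf = safety_factor(40.0, 8.06)
        print(f"safety factor: {sf:.2f}")                      # roughly 4.96
        print("meets assumed 1.5x margin:", passes_launch_check(40.0, 8.06))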
Open any introductory biology textbook and one of the first things you’ll learn is that our DNA spells out the instructions for making proteins, tiny machines that do much of the work in our body’s cells. Results from a study published on Jan. 2 in Science defy textbook science, showing for the first time that the building blocks of a protein, called amino acids, can be assembled without blueprints – DNA and an intermediate template called messenger RNA (mRNA). A team of researchers has observed a case in which another protein specifies which amino acids are added. Janet Iwasa, Ph.D., University of Utah Caught in the act: Rqc2 protein adds amino acids to a new protein A new finding goes against dogma, showing for the first time that the building blocks of a protein, called amino acids, can be assembled by another protein, and without genetic instructions. The Rqc2 protein (yellow) binds tRNAs (dark blue, teal) which add amino acids (bright spot in middle) to a partially made protein (green). The complex binds the ribosome (white). “This surprising discovery reflects how incomplete our understanding of biology is,” says first author Peter Shen, Ph.D., a postdoctoral fellow in biochemistry at the University of Utah. “Nature is capable of more than we realize.” To put the new finding into perspective, it might help to think of the cell as a well-run factory. Ribosomes are machines on a protein assembly line, linking together amino acids in an order specified by the genetic code. When something goes wrong, the ribosome can stall, and a quality control crew is summoned to the site. To clean up the mess, the ribosome is disassembled, the blueprint is discarded, and the partly made protein is recycled. Yet this study reveals a surprising role for one member of the quality control team, a protein conserved from yeast to man named Rqc2. Before the incomplete protein is recycled, Rqc2 prompts the ribosomes to add just two amino acids (of 20 total) – alanine and threonine - over and over, and in any order. Think of an auto assembly line that keeps going despite having lost its instructions. It picks up what it can and slaps it on: horn-wheel-wheel-horn-wheel-wheel-wheel-wheel-horn. “In this case, we have a protein, Rqc2, playing a role similar to that of mRNA,” says Adam Frost, M.D., Ph.D., assistant professor at University of California, San Francisco (UCSF) and adjunct professor of biochemistry at the University of Utah. He shares senior authorship with Jonathan Weissman, Ph.D., a Howard Hughes Medical Institute investigator at UCSF, and Onn Brandman, Ph.D., at Stanford University. “I love this story because it blurs the lines of what we thought proteins could do.” Like a half-made car with extra horns and wheels tacked to one end, a truncated protein with an apparently random sequence of alanines and threonines looks strange, and probably doesn’t work normally. But the nonsensical sequence likely serves specific purposes. The code could signal that the partial protein must be destroyed, or it could be part of a test to see whether the ribosome is working properly. Evidence suggests that either or both of these processes could be faulty in neurodegenerative diseases such as Alzheimer’s, Amyotrophic lateral sclerosis (ALS), or Huntington’s. “There are many interesting implications of this work and none of them would have been possible if we didn’t follow our curiosity,” says Brandman. “The primary driver of discovery has been exploring what you see, and that’s what we did. 
There will never be a substitute for that.” The scientists first considered the unusual phenomenon when they saw evidence of it with their own eyes. They fine-tuned a technique called cryo-electron microscopy to flash freeze, and then visualize, the quality control machinery in action. “We caught Rqc2 in the act,” says Frost. “But the idea was so far-fetched. The onus was on us to prove it.” It took extensive biochemical analysis to validate their hypothesis. New RNA sequencing techniques showed that the Rqc2/ribosome complex had the potential to add amino acids to stalled proteins because it also bound tRNAs, the structures that bring amino acids to the protein assembly line. The specific tRNAs they saw carry only the amino acids alanine and threonine. The clincher came when they determined that the stalled proteins had extensive chains of alanines and threonines added to them. “Our job now is to determine when and where this process happens, and what happens when it fails,” says Frost. Shen, Frost, Brandman, and Weissman conducted the work in collaboration with colleagues at the University of Utah (Krishna Parsawar, James Cox), the University of California at San Francisco (Xueming Li, Yifan Cheng, Matthew Larson), Stanford University (Joseph Park), and the University of Texas at Austin (Yidan Qin, Alan Lambowitz). The research was supported by grants from the Searle Scholars program, the National Institutes of Health, the Howard Hughes Medical Institute, Stanford University, and the University of Utah. Reference: Peter S. Shen, Joseph Park, Yidan Qin, Xueming Li, Krishna Parsawar, Matthew H. Larson, James Cox, Yifan Cheng, Alan M. Lambowitz, Jonathan S. Weissman, Onn Brandman, Adam Frost. "Rqc2p and 60S ribosomal subunits mediate mRNA-independent elongation of nascent chains." Science, Jan. 2, 2015.
Behaviorism is a systematic approach to understanding the behavior of humans and animals. It assumes that behavior is either a reflex evoked by the pairing of certain antecedent stimuli in the environment, or a consequence of that individual's history, including especially reinforcement and punishment contingencies, together with the individual's current motivational state and controlling stimuli. Although behaviorists generally accept the important role of heredity in determining behavior, they focus primarily on environmental events. Behaviorism emerged in the early 1900s as a reaction to depth psychology and other traditional forms of psychology, which often had difficulty making predictions that could be tested experimentally, but it derived from earlier research in the late nineteenth century, such as Edward Thorndike's pioneering work on the law of effect, a procedure that involved the use of consequences to strengthen or weaken behavior. With a 1924 publication, John B. Watson devised methodological behaviorism, which rejected introspective methods and sought to understand behavior by measuring only observable behaviors and events. It was not until the 1930s that B. F. Skinner suggested that covert behavior—including cognition and emotions—is subject to the same controlling variables as observable behavior, which became the basis for his philosophy called radical behaviorism. While Watson and Ivan Pavlov investigated how (conditioned) neutral stimuli elicit reflexes in respondent conditioning, Skinner assessed the reinforcement histories of the discriminative (antecedent) stimuli in whose presence behavior is emitted; the technique became known as operant conditioning. The application of radical behaviorism—known as applied behavior analysis—is used in a variety of contexts, ranging, for example, from applied animal behavior and organizational behavior management to the treatment of mental disorders, such as autism and substance abuse. In addition, while behaviorism and the cognitive schools of psychological thought do not agree theoretically, they have complemented each other in the cognitive-behavior therapies, which have demonstrated utility in treating certain pathologies, including simple phobias, PTSD, and mood disorders. Main article: Radical behaviorism B. F. Skinner proposed radical behaviorism as the conceptual underpinning of the experimental analysis of behavior. This viewpoint differs from other approaches to behavioral research in various ways but, most notably here, it contrasts with methodological behaviorism in accepting feelings, states of mind and introspection as behaviors also subject to scientific investigation. Like methodological behaviorism, it rejects the reflex as a model of all behavior, and it defends the science of behavior as complementary to but independent of physiology. Radical behaviorism overlaps considerably with other western philosophical positions, such as American pragmatism. Although John B. Watson mainly emphasized his position of methodological behaviorism throughout his career, Watson and Rosalie Rayner conducted the renowned Little Albert experiment (1920), a study in which Ivan Pavlov's theory of respondent conditioning was first applied to eliciting a fearful reflex of crying in a human infant, and this became the launching point for understanding covert behavior (or private events) in radical behaviorism.
However, Skinner felt that aversive stimuli should only be experimented on with animals and spoke out against Watson for testing something so controversial on a human. In 1959, Skinner observed the emotions of two pigeons, noting that they appeared angry because their feathers were ruffled. The pigeons had been placed together in an operant chamber, where they were aggressive as a consequence of previous reinforcement in the environment. Through stimulus control and subsequent discrimination training, whenever Skinner turned off the green light, the pigeons came to notice that the food reinforcer was discontinued following each peck, and they responded without aggression. Skinner concluded that humans also learn aggression and possess such emotions (as well as other private events) no differently than do nonhuman animals. Since experimental behavioural psychology is related to behavioral neuroscience, the first research in the area can be dated to the beginning of the 19th century. Later, this essentially philosophical position gained strength from the success of Skinner's early experimental work with rats and pigeons, summarized in his books The Behavior of Organisms and Schedules of Reinforcement. Of particular importance was his concept of the operant response, of which the canonical example was the rat's lever-press. In contrast with the idea of a physiological or reflex response, an operant is a class of structurally distinct but functionally equivalent responses. For example, while a rat might press a lever with its left paw or its right paw or its tail, all of these responses operate on the world in the same way and have a common consequence. Operants are often thought of as species of responses, where the individuals differ but the class coheres in its function: shared consequences for operants, and reproductive success for species. This is a clear distinction between Skinner's theory and S–R theory. Skinner's empirical work expanded on earlier research on trial-and-error learning by researchers such as Thorndike and Guthrie with both conceptual reformulations—Thorndike's notion of a stimulus-response "association" or "connection" was abandoned—and methodological ones—the use of the "free operant", so called because the animal was now permitted to respond at its own rate rather than in a series of trials determined by the experimenter. With this method, Skinner carried out substantial experimental work on the effects of different schedules and rates of reinforcement on the rates of operant responses made by rats and pigeons. He achieved remarkable success in training animals to perform unexpected responses, to emit large numbers of responses, and to demonstrate many empirical regularities at the purely behavioral level. This lent some credibility to his conceptual analysis. It is largely his conceptual analysis that made his work much more rigorous than that of his peers, a point which can be seen clearly in his seminal work Are Theories of Learning Necessary?, in which he criticizes what he viewed to be theoretical weaknesses then common in the study of psychology. An important descendant of the experimental analysis of behavior is the Society for Quantitative Analysis of Behavior.
As Skinner turned from experimental work to concentrate on the philosophical underpinnings of a science of behavior, his attention turned to human language with his 1957 book Verbal Behavior and other language-related publications; Verbal Behavior laid out a vocabulary and theory for functional analysis of verbal behavior, and was strongly criticized in a review by Noam Chomsky. Skinner did not respond in detail but claimed that Chomsky failed to understand his ideas, and the disagreements between the two and the theories involved have been further discussed. Innateness theory, which has been heavily critiqued, is opposed to behaviorist theory which claims that language is a set of habits that can be acquired by means of conditioning. According to some, the behaviorist account is a process which would be too slow to explain a phenomenon as complicated as language learning. What was important for a behaviorist's analysis of human behavior was not language acquisition so much as the interaction between language and overt behavior. In an essay republished in his 1969 book Contingencies of Reinforcement, Skinner took the view that humans could construct linguistic stimuli that would then acquire control over their behavior in the same way that external stimuli could. The possibility of such "instructional control" over behavior meant that contingencies of reinforcement would not always produce the same effects on human behavior as they reliably do in other animals. The focus of a radical behaviorist analysis of human behavior therefore shifted to an attempt to understand the interaction between instructional control and contingency control, and also to understand the behavioral processes that determine what instructions are constructed and what control they acquire over behavior. Recently, a new line of behavioral research on language was started under the name of relational frame theory. See also: Philosophy of education § Realism Behaviourism focuses on one particular view of learning: a change in external behaviour achieved through using reinforcement and repetition (Rote learning) to shape behavior of learners. Skinner found that behaviors could be shaped when the use of reinforcement was implemented. Desired behavior is rewarded, while the undesired behavior is not rewarded.[better source needed] Incorporating behaviorism into the classroom allowed educators to assist their students in excelling both academically and personally. In the field of language learning, this type of teaching was called the audio-lingual method, characterised by the whole class using choral chanting of key phrases, dialogues and immediate correction. Within the behaviourist view of learning, the "teacher" is the dominant person in the classroom and takes complete control, evaluation of learning comes from the teacher who decides what is right or wrong. The learner does not have any opportunity for evaluation or reflection within the learning process, they are simply told what is right or wrong. The conceptualization of learning using this approach could be considered "superficial," as the focus is on external changes in behaviour, i.e., not interested in the internal processes of learning leading to behaviour change and has no place for the emotions involved in the process. Operant conditioning was developed by B.F. Skinner in 1937 and deals with the management of environmental contingencies to change behavior. 
In other words, behavior is controlled by historical consequential contingencies, particularly reinforcement—a stimulus that increases the probability of performing behaviors—and punishment—a stimulus that decreases such probability. The core tools of consequences are either positive (stimuli presented following a response) or negative (stimuli withdrawn following a response). The following descriptions explain the four common types of consequences in operant conditioning:
- Positive reinforcement: a stimulus is presented following a response, and the behavior becomes more likely.
- Negative reinforcement: a stimulus is withdrawn following a response, and the behavior becomes more likely.
- Positive punishment: a stimulus is presented following a response, and the behavior becomes less likely.
- Negative punishment: a stimulus is withdrawn following a response, and the behavior becomes less likely.
A classical experimental setup in operant conditioning is the Skinner box (also called the "puzzle box" or operant conditioning chamber), used to test the effects of operant conditioning principles on rats, cats and other species. From the Skinner box studies, he discovered that rats learned very effectively if they were rewarded frequently with food. Skinner also found that he could shape the rats' behavior through the use of rewards, which could, in turn, be applied to human learning as well. Skinner's model was based on the premise that reinforcement is used for the desired actions or responses, while punishment is used to stop the undesired ones. This work showed that humans or animals will repeat any action that leads to a positive outcome and avoid any action that leads to a negative outcome. The experiment with the pigeons showed that a positive outcome leads to learned behavior, since the pigeon learned to peck the disc in return for the reward of food. These historical consequential contingencies subsequently lead to (antecedent) stimulus control, but in contrast to respondent conditioning, where antecedent stimuli elicit reflexive behavior, operant behavior is only emitted and its occurrence is therefore not forced. It includes the following controlling stimuli:
- a discriminative stimulus, in whose presence a response has previously been reinforced and is therefore more likely to be emitted;
- an S-delta, in whose presence a response has not been reinforced and is therefore less likely to be emitted.
Main article: Classical conditioning Although operant conditioning plays the largest role in discussions of behavioral mechanisms, respondent conditioning (also called Pavlovian or classical conditioning) is also an important behavior-analytic process that need not refer to mental or other internal processes. Pavlov's experiments with dogs provide the most familiar example of the classical conditioning procedure. In the beginning, the dog was provided meat (the unconditioned stimulus, UCS, which naturally elicits an uncontrolled response) to eat, resulting in increased salivation (the unconditioned response, UCR, a response naturally caused by the UCS). Afterward, a bell ring was presented together with food to the dog. Although the bell ring was a neutral stimulus (NS, meaning that the stimulus by itself did not have any effect), after a number of pairings the dog would start to salivate upon hearing only the bell. Eventually, the neutral stimulus (bell ring) became conditioned: salivation was elicited as a conditioned response (the same response as the unconditioned response) by the bell ring, which had become the conditioned stimulus through pairing with the meat. Although Pavlov proposed some tentative physiological processes that might be involved in classical conditioning, these have not been confirmed. The idea of classical conditioning helped behaviorist John Watson discover the key mechanism behind how humans acquire the behaviors that they do, which was to find a natural reflex that produces the response being considered.
Watson's "Behaviourist Manifesto" has three aspects that deserve special recognition: one is that psychology should be purely objective, with any interpretation of conscious experience being removed, thus leading to psychology as the "science of behaviour"; the second one is that the goals of psychology should be to predict and control behaviour (as opposed to describe and explain conscious mental states); the third one is that there is no notable distinction between human and non-human behaviour. Following Darwin's theory of evolution, this would simply mean that human behaviour is just a more complex version in respect to behaviour displayed by other species. Main article: Logical behaviorism Behaviorism is a psychological movement that can be contrasted with philosophy of mind. The basic premise of behaviorism is that the study of behavior should be a natural science, such as chemistry or physics. Initially behaviorism rejected any reference to hypothetical inner states of organisms as causes for their behavior, but B.F. Skinner's radical behaviorism reintroduced reference to inner states and also advocated for the study of thoughts and feelings as behaviors subject to the same mechanisms as external behavior. Behaviorism takes a functional view of behavior. According to Edmund Fantino and colleagues: "Behavior analysis has much to offer the study of phenomena normally dominated by cognitive and social psychologists. We hope that successful application of behavioral theory and methodology will not only shed light on central problems in judgment and choice but will also generate greater appreciation of the behavioral approach." Behaviorist sentiments are not uncommon within philosophy of language and analytic philosophy. It is sometimes argued that Ludwig Wittgenstein defended a logical behaviorist position (e.g., the beetle in a box argument). In logical positivism (as held, e.g., by Rudolf Carnap and Carl Hempel), the meaning of psychological statements are their verification conditions, which consist of performed overt behavior. W. V. O. Quine made use of a type of behaviorism, influenced by some of Skinner's ideas, in his own work on language. Quine's work in semantics differed substantially from the empiricist semantics of Carnap which he attempted to create an alternative to, couching his semantic theory in references to physical objects rather than sensations. Gilbert Ryle defended a distinct strain of philosophical behaviorism, sketched in his book The Concept of Mind. Ryle's central claim was that instances of dualism frequently represented "category mistakes", and hence that they were really misunderstandings of the use of ordinary language. Daniel Dennett likewise acknowledges himself to be a type of behaviorist, though he offers extensive criticism of radical behaviorism and refutes Skinner's rejection of the value of intentional idioms and the possibility of free will. This is Dennett's main point in "Skinner Skinned." Dennett argues that there is a crucial difference between explaining and explaining away… If our explanation of apparently rational behavior turns out to be extremely simple, we may want to say that the behavior was not really rational after all. But if the explanation is very complex and intricate, we may want to say not that the behavior is not rational, but that we now have a better understanding of what rationality consists in. 
(Compare: if we find out how a computer program solves problems in linear algebra, we don't say it's not really solving them, we just say we know how it does it. On the other hand, in cases like Weizenbaum's ELIZA program, the explanation of how the computer carries on a conversation is so simple that the right thing to say seems to be that the machine isn't really carrying on a conversation, it's just a trick.)— Curtis Brown, "Behaviorism: Skinner and Dennett", Philosophy of Mind Skinner's view of behavior is most often characterized as a "molecular" view of behavior; that is, behavior can be decomposed into atomistic parts or molecules. This view is inconsistent with Skinner's complete description of behavior as delineated in other works, including his 1981 article "Selection by Consequences". Skinner proposed that a complete account of behavior requires understanding of selection history at three levels: biology (the natural selection or phylogeny of the animal); behavior (the reinforcement history or ontogeny of the behavioral repertoire of the animal); and for some species, culture (the cultural practices of the social group to which the animal belongs). This whole organism then interacts with its environment. Molecular behaviorists use notions from melioration theory, negative power function discounting or additive versions of negative power function discounting. Molar behaviorists, such as Howard Rachlin, Richard Herrnstein, and William Baum, argue that behavior cannot be understood by focusing on events in the moment. That is, they argue that behavior is best understood as the ultimate product of an organism's history and that molecular behaviorists are committing a fallacy by inventing fictitious proximal causes for behavior. Molar behaviorists argue that standard molecular constructs, such as "associative strength", are better replaced by molar variables such as rate of reinforcement. Thus, a molar behaviorist would describe "loving someone" as a pattern of loving behavior over time; there is no isolated, proximal cause of loving behavior, only a history of behaviors (of which the current behavior might be an example) that can be summarized as "love". Skinner's radical behaviorism has been highly successful experimentally, revealing new phenomena with new methods, but Skinner's dismissal of theory limited its development. Theoretical behaviorism recognized that a historical system, an organism, has a state as well as sensitivity to stimuli and the ability to emit responses. Indeed, Skinner himself acknowledged the possibility of what he called "latent" responses in humans, even though he neglected to extend this idea to rats and pigeons. Latent responses constitute a repertoire, from which operant reinforcement can select. Theoretical behaviorism links between the brain and the behavior that provides a real understanding of the behavior. Rather than a mental presumption of how brain-behavior relates. Cultural analysis has always been at the philosophical core of radical behaviorism from the early days (as seen in Skinner's Walden Two, Science & Human Behavior, Beyond Freedom & Dignity, and About Behaviorism). During the 1980s, behavior analysts, most notably Sigrid Glenn, had a productive interchange with cultural anthropologist Marvin Harris (the most notable proponent of "cultural materialism") regarding interdisciplinary work. Very recently, behavior analysts have produced a set of basic exploratory experiments in an effort toward this end. 
Behaviorism is also frequently used in game development, although this application is controversial. With the fast growth of big behavioral data and applications, behavior analysis is ubiquitous. Understanding behavior from the informatics and computing perspective becomes increasingly critical for in-depth understanding of what, why and how behaviors are formed, interact, evolve, change and affect business and decision. Behavior informatics and behavior computing deeply explore behavior intelligence and behavior insights from the informatics and computing perspectives. In the second half of the 20th century, behaviorism was largely eclipsed as a result of the cognitive revolution. This shift was due to radical behaviorism being highly criticized for not examining mental processes, and this led to the development of the cognitive therapy movement. In the mid-20th century, three main influences arose that would inspire and shape cognitive psychology as a formal school of thought: In the early years of cognitive psychology, behaviorist critics held that the empiricism it pursued was incompatible with the concept of internal mental states. Cognitive neuroscience, however, continues to gather evidence of direct correlations between physiological brain activity and putative mental states, endorsing the basis for cognitive psychology. Main article: Behavior therapy Behavior therapy is a term referring to different types of therapies that treat mental health disorders. It identifies and helps change people's unhealthy behaviors or destructive behaviors through learning theory and conditioning. Ivan Pavlov's classical conditioning, as well as counterconditioning are the basis for much of clinical behavior therapy, but also includes other techniques, including operant conditioning—or contingency management, and modeling (sometimes called observational learning). A frequently noted behavior therapy is systematic desensitization (graduated exposure therapy), which was first demonstrated by Joseph Wolpe and Arnold Lazarus. Main article: Applied behavior analysis Applied behavior analysis (ABA)—also called behavioral engineering—is a scientific discipline that applies the principles of behavior analysis to change behavior. ABA derived from much earlier research in the Journal of the Experimental Analysis of Behavior, which was founded by B.F. Skinner and his colleagues at Harvard University. Nearly a decade after the study "The psychiatric nurse as a behavioral engineer" (1959) was published in that journal, which demonstrated how effective the token economy was in reinforcing more adaptive behavior for hospitalized patients with schizophrenia and intellectual disability, it led to researchers at the University of Kansas to start the Journal of Applied Behavior Analysis in 1968. Although ABA and behavior modification are similar behavior-change technologies in that the learning environment is modified through respondent and operant conditioning, behavior modification did not initially address the causes of the behavior (particularly, the environmental stimuli that occurred in the past), or investigate solutions that would otherwise prevent the behavior from reoccurring. As the evolution of ABA began to unfold in the mid-1980s, functional behavior assessments (FBAs) were developed to clarify the function of that behavior, so that it is accurately determined which differential reinforcement contingencies will be most effective and less likely for aversive consequences to be administered. 
In addition, methodological behaviorism was the theory underpinning behavior modification since private events were not conceptualized during the 1970s and early 1980s, which contrasted from the radical behaviorism of behavior analysis. ABA—the term that replaced behavior modification—has emerged into a thriving field. The independent development of behaviour analysis outside the United States also continues to develop. In the US, the American Psychological Association (APA) features a subdivision for Behavior Analysis, titled APA Division 25: Behavior Analysis, which has been in existence since 1964, and the interests among behavior analysts today are wide-ranging, as indicated in a review of the 30 Special Interest Groups (SIGs) within the Association for Behavior Analysis International (ABAI). Such interests include everything from animal behavior and environmental conservation, to classroom instruction (such as direct instruction and precision teaching), verbal behavior, developmental disabilities and autism, clinical psychology (i.e., forensic behavior analysis), behavioral medicine (i.e., behavioral gerontology, AIDS prevention, and fitness training), and consumer behavior analysis. The field of applied animal behavior—a sub-discipline of ABA that involves training animals—is regulated by the Animal Behavior Society, and those who practice this technique are called applied animal behaviorists. Research on applied animal behavior has been frequently conducted in the Applied Animal Behaviour Science journal since its founding in 1974. ABA has also been particularly well-established in the area of developmental disabilities since the 1960s, but it was not until the late 1980s that individuals diagnosed with autism spectrum disorders were beginning to grow so rapidly and groundbreaking research was being published that parent advocacy groups started demanding for services throughout the 1990s, which encouraged the formation of the Behavior Analyst Certification Board, a credentialing program that certifies professionally trained behavior analysts on the national level to deliver such services. Nevertheless, the certification is applicable to all human services related to the rather broad field of behavior analysis (other than the treatment for autism), and the ABAI currently has 14 accredited MA and Ph.D. programs for comprehensive study in that field. Early behavioral interventions (EBIs) based on ABA are empirically validated for teaching children with autism and has been proven as such for over the past five decades. Since the late 1990s and throughout the twenty-first century, early ABA interventions have also been identified as the treatment of choice by the US Surgeon General, American Academy of Pediatrics, and US National Research Council. Discrete trial training—also called early intensive behavioral intervention—is the traditional EBI technique implemented for thirty to forty hours per week that instructs a child to sit in a chair, imitate fine and gross motor behaviors, as well as learn eye contact and speech, which are taught through shaping, modeling, and prompting, with such prompting being phased out as the child begins mastering each skill. 
When the child becomes more verbal as a result of discrete trials, the table-based instruction is discontinued and another EBI procedure, known as incidental teaching, is introduced in the natural environment. The child is encouraged to ask for desired items kept out of direct reach and to choose the play activities that will motivate them to engage with their facilitators, before being taught how to interact with other children of their own age. A related approach, pivotal response treatment (PRT), refers to EBI procedures that entail exclusively naturalistic teaching for twenty-five hours per week (without initially using discrete trials). Current research shows that there is a wide array of learning styles and that it is children with receptive language delays who initially require discrete trials to acquire speech. Organizational behavior management, which applies contingency management procedures to model and reinforce appropriate work behavior for employees in organizations, has developed a particularly strong following within ABA, as evidenced by the formation of the OBM Network and the Journal of Organizational Behavior Management, which has been rated by ISI as the third-highest-impact journal in applied psychology. Modern-day clinical behavior analysis has also witnessed a massive resurgence in research, with the development of relational frame theory (RFT), which is described as an extension of verbal behavior and a "post-Skinnerian account of language and cognition." RFT also forms the empirical basis for acceptance and commitment therapy, a therapeutic approach to counseling often used to manage such conditions as anxiety and obesity that consists of acceptance and commitment, value-based living, cognitive defusion, counterconditioning (mindfulness), and contingency management (positive reinforcement). Another evidence-based counseling technique derived from RFT is the functional analytic psychotherapy known as behavioral activation, which relies on the ACL model—awareness, courage, and love—to reinforce more positive moods in those struggling with depression. Incentive-based contingency management (CM) is the standard of care for adults with substance-use disorders; it has also been shown to be highly effective for other addictions (e.g., obesity and gambling). Although it does not directly address the underlying causes of behavior, incentive-based CM is highly behavior analytic, as it targets the function of the client's motivational behavior by relying on a preference assessment, an assessment procedure that allows the individual to select the preferred reinforcer (in this case, the monetary value of the voucher, or the use of other incentives, such as prizes). Another evidence-based CM intervention for substance abuse is the community reinforcement approach and family training, which uses FBAs and counterconditioning techniques—such as behavioral skills training and relapse prevention—to model and reinforce healthier lifestyle choices that promote self-management of abstinence from drugs, alcohol, or cigarette smoking during high-risk exposure, such as when engaging with family members, friends, and co-workers.
While schoolwide positive behavior support consists of conducting assessments and a task-analysis plan to differentially reinforce curricular supports that replace students' disruptive behavior in the classroom, pediatric feeding therapy incorporates a liquid chaser and chin feeder to shape proper eating behavior in children with feeding disorders. Habit reversal training, an approach firmly grounded in counterconditioning, which uses contingency management procedures to reinforce alternative behavior, is currently the only empirically validated approach for managing tic disorders. Some studies on exposure (desensitization) therapies—an array of interventions based on the respondent conditioning procedure known as habituation, which typically incorporate counterconditioning procedures such as meditation and breathing exercises—have been published in behavior-analytic journals since the 1990s, although most other research on them is conducted within a cognitive-behavior therapy framework. From a behavior-analytic research standpoint, FBAs are implemented to outline precisely how to employ the flooding form of desensitization (also called direct exposure therapy) for those who are unsuccessful in overcoming their specific phobia through systematic desensitization (also known as graduated exposure therapy). These studies also reveal that systematic desensitization is more effective for children if used in conjunction with shaping, which is further termed contact desensitization, but this comparison has yet to be substantiated with adults. Other widely read behavior-analytic journals include Behavior Modification, The Behavior Analyst, Journal of Positive Behavior Interventions, Journal of Contextual Behavioral Science, The Analysis of Verbal Behavior, Behavior and Philosophy, Behavior and Social Issues, and The Psychological Record. Main article: Cognitive-behavior therapy Cognitive-behavior therapy (CBT) is a behavior therapy discipline that often overlaps considerably with the clinical behavior analysis subfield of ABA, but differs in that it initially incorporates cognitive restructuring and emotional regulation to alter a person's cognition and emotions. A widely noted counseling intervention known as dialectical behavior therapy (DBT) includes the use of a chain analysis, as well as cognitive restructuring, emotional regulation, distress tolerance, counterconditioning (mindfulness), and contingency management (positive reinforcement). DBT is quite similar to acceptance and commitment therapy, but contrasts in that it derives from a CBT framework. Although DBT is most widely researched for and empirically validated to reduce the risk of suicide in psychiatric patients with borderline personality disorder, it can often be applied effectively to other mental health conditions, such as substance abuse, as well as mood and eating disorders. Most research on exposure therapies (also called desensitization)—ranging from eye movement desensitization and reprocessing therapy to exposure and response prevention—is conducted through a CBT framework in non-behavior-analytic journals, and these enhanced exposure therapies are well established in the research literature for treating phobic, post-traumatic stress, and other anxiety disorders (such as obsessive-compulsive disorder, or OCD). Cognitive-based behavioral activation (BA)—the psychotherapeutic approach used for depression—has been shown to be highly effective and is widely used in clinical practice.
Some large randomized controlled trials have indicated that cognitive-based BA is as beneficial as antidepressant medication but more efficacious than traditional cognitive therapy. Other commonly used clinical treatments derived from behavioral learning principles and often implemented through a CBT model include the community reinforcement approach and family training, and habit reversal training, for substance abuse and tics, respectively.
An electric field (sometimes abbreviated as E-field) surrounds an electric charge and exerts force on other charges in the field, attracting or repelling them. Electric fields are created by electric charges, or by time-varying magnetic fields. Electric fields and magnetic fields are both manifestations of the electromagnetic force, one of the four fundamental forces (or interactions) of nature. Electric fields are important in many areas of physics, and are exploited practically in electrical technology. On an atomic scale, the electric field is responsible for the attractive force between the atomic nucleus and electrons that holds atoms together, and for the forces between atoms that cause chemical bonding. The electric field is defined mathematically as a vector field that associates to each point in space the (electrostatic or Coulomb) force per unit of charge exerted on an infinitesimal positive test charge at rest at that point. The SI unit for electric field strength is the volt per meter (V/m), exactly equivalent to the newton per coulomb (N/C).

From Coulomb's law, a particle with electric charge q1 at position x1 exerts a force on a particle with charge q0 at position x0 of

\[ \mathbf{F} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_0}{|\mathbf{x}_1 - \mathbf{x}_0|^2}\,\hat{\mathbf{r}}_{1,0} \]

where r̂1,0 is the unit vector in the direction from point x1 to point x0, and ε0 is the electric constant (also known as "the absolute permittivity of free space") in C² m⁻² N⁻¹. When the charges q0 and q1 have the same sign this force is positive, directed away from the other charge, indicating the particles repel each other. When the charges have unlike signs the force is negative, indicating the particles attract. To make it easy to calculate the Coulomb force on any charge at position x0, this expression can be divided by q0, leaving an expression that only depends on the other charge (the source charge):

\[ \mathbf{E}_1(\mathbf{x}_0) = \frac{\mathbf{F}}{q_0} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1}{|\mathbf{x}_1 - \mathbf{x}_0|^2}\,\hat{\mathbf{r}}_{1,0} \]

This is the electric field at point x0 due to the point charge q1; it is a vector equal to the Coulomb force per unit charge that a positive point charge would experience at the position x0. Since this formula gives the electric field magnitude and direction at any point x0 in space (except at the location of the charge itself, x1, where it becomes infinite), it defines a vector field. From the above formula it can be seen that the electric field due to a point charge is everywhere directed away from the charge if it is positive, and toward the charge if it is negative, and its magnitude decreases with the inverse square of the distance from the charge.

If there are multiple charges, the resultant Coulomb force on a charge can be found by summing the vectors of the forces due to each charge. This shows the electric field obeys the superposition principle: the total electric field at a point due to a collection of charges is just equal to the vector sum of the electric fields at that point due to the individual charges,

\[ \mathbf{E}(\mathbf{x}) = \sum_{k=1}^{N} \mathbf{E}_k(\mathbf{x}) = \frac{1}{4\pi\varepsilon_0}\sum_{k=1}^{N}\frac{q_k}{|\mathbf{x}_k - \mathbf{x}|^2}\,\hat{\mathbf{r}}_k \]

where r̂k is the unit vector in the direction from point xk to point x. This is the definition of the electric field due to the point source charges. It diverges and becomes infinite at the locations of the charges themselves, and so is not defined there.
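The superposition sum above translates directly into a short numerical routine. The following Python sketch is an illustration, not from the source; the charge values and positions in the example are arbitrary.

```python
import numpy as np

EPSILON_0 = 8.8541878128e-12          # vacuum permittivity, C^2 N^-1 m^-2
K = 1.0 / (4.0 * np.pi * EPSILON_0)   # Coulomb constant, N m^2 C^-2

def electric_field(point, charges, positions):
    """Vector sum of point-charge fields E_k = K q_k r_hat / r^2 at `point`.

    charges:   iterable of charge values in coulombs
    positions: iterable of 3-vectors (meters) giving each charge's location
    """
    point = np.asarray(point, dtype=float)
    E = np.zeros(3)
    for q, pos in zip(charges, np.asarray(positions, dtype=float)):
        r = point - pos                  # vector from the charge to the field point
        r_mag = np.linalg.norm(r)
        if r_mag == 0.0:
            raise ValueError("field is undefined at the location of a charge")
        E += K * q * r / r_mag**3        # q r_hat / r^2 == q r / r^3
    return E

# Example: a +1 nC and a -1 nC charge 2 cm apart; field at the midpoint.
charges = [1e-9, -1e-9]
positions = [[-0.01, 0.0, 0.0], [0.01, 0.0, 0.0]]
print(electric_field([0.0, 0.0, 0.0], charges, positions))  # points from + toward -
```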
The Coulomb force on a charge of magnitude q at any point in space is equal to the product of the charge and the electric field at that point:

\[ \mathbf{F} = q\,\mathbf{E} \]

The electric field due to a continuous distribution of charge ρ(x) in space (where ρ is the charge density in coulombs per cubic meter) can be calculated by considering the charge ρ(x′)dV in each small volume of space dV at point x′ as a point charge, calculating its electric field dE(x) at point x,

\[ d\mathbf{E}(\mathbf{x}) = \frac{1}{4\pi\varepsilon_0}\,\frac{\rho(\mathbf{x}')\,dV}{|\mathbf{x}' - \mathbf{x}|^2}\,\hat{\mathbf{r}}' \]

where r̂′ is the unit vector pointing from x′ to x, and then adding up the contributions from all the increments of volume by integrating over the volume of the charge distribution:

\[ \mathbf{E}(\mathbf{x}) = \frac{1}{4\pi\varepsilon_0}\int_V \frac{\rho(\mathbf{x}')}{|\mathbf{x}' - \mathbf{x}|^2}\,\hat{\mathbf{r}}'\,dV \]

Causes and description

Electric fields are caused by electric charges, described by Gauss's law, or by varying magnetic fields, described by Faraday's law of induction. Together, these laws are enough to define the behavior of the electric field as a function of charge and magnetic field. However, since the magnetic field is described as a function of the electric field, the equations of both fields are coupled and together form Maxwell's equations, which describe both fields as a function of charges and currents. In the special case of a steady state (stationary charges and currents), the Maxwell-Faraday inductive effect disappears. The resulting two equations (Gauss's law ∇·E = ρ/ε0 and Faraday's law with no induction term, ∇×E = 0), taken together, are equivalent to Coulomb's law, written as

\[ \mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\int \rho(\mathbf{r}')\,\frac{\mathbf{r} - \mathbf{r}'}{|\mathbf{r} - \mathbf{r}'|^3}\,d^3 r' \]

for a charge density ρ(r) (r is position in space). Notice that ε0, the vacuum electric permittivity, must be substituted with ε, the permittivity of the medium, when charges are in non-empty media.

Continuous vs. discrete charge representation

The equations of electromagnetism are best described in a continuous description. However, charges are sometimes best described as discrete points; for example, some models may describe electrons as point sources where charge density is infinite on an infinitesimal section of space. A charge q located at r0 can be described mathematically as a charge density ρ(r) = qδ(r − r0), where the Dirac delta function (in three dimensions) is used. Conversely, a charge distribution can be approximated by many small point charges.

Electric fields satisfy the superposition principle, because Maxwell's equations are linear. As a result, if E1 and E2 are the electric fields resulting from distributions of charges ρ1 and ρ2, a distribution of charges ρ1 + ρ2 will create an electric field E1 + E2; for instance, Coulomb's law is linear in charge density as well. This principle is useful to calculate the field created by multiple point charges. If charges q1, q2, ..., qn are stationary in space at points r1, r2, ..., rn, in the absence of currents, the superposition principle proves that the resulting field is the sum of the fields generated by each particle as described by Coulomb's law:

\[ \mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\sum_{i=1}^{n} q_i\,\frac{\mathbf{r} - \mathbf{r}_i}{|\mathbf{r} - \mathbf{r}_i|^3} \]

If a system is static, such that magnetic fields are not time-varying, then by Faraday's law the electric field is curl-free. In this case, one can define an electric potential, that is, a function φ such that E = −∇φ. This is analogous to the gravitational potential.

Parallels between electrostatic and gravitational fields

Coulomb's law, which describes the interaction of electric charges,

\[ \mathbf{F} = q\,\mathbf{E} = \frac{qQ}{4\pi\varepsilon_0}\,\frac{\hat{\mathbf{r}}}{|\mathbf{r}|^2} \]

is similar to Newton's law of universal gravitation,

\[ \mathbf{F} = m\,\mathbf{g} = -\,\frac{G m M}{|\mathbf{r}|^2}\,\hat{\mathbf{r}} \]

This suggests similarities between the electric field E and the gravitational field g, or their associated potentials. Mass is sometimes called "gravitational charge". A uniform field is one in which the electric field is constant at every point.
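As a rough numerical sketch (not from the source), the volume integral above can be approximated by chopping a charge distribution into many small pieces and treating each as a point charge, echoing the remark that a distribution can be approximated by many small point charges. Here this is done for a thin, uniformly charged rod; the geometry and charge values are arbitrary.

```python
import numpy as np

EPSILON_0 = 8.8541878128e-12
K = 1.0 / (4.0 * np.pi * EPSILON_0)

def field_of_charged_rod(field_point, total_charge, length, n_segments=10_000):
    """Approximate E at `field_point` due to a thin rod of `total_charge` (C)
    lying on the x-axis from -length/2 to +length/2, by summing the point-charge
    contributions of n_segments equal pieces (a Riemann sum of the integral)."""
    field_point = np.asarray(field_point, dtype=float)
    dq = total_charge / n_segments
    xs = np.linspace(-length / 2, length / 2, n_segments)
    E = np.zeros(3)
    for x in xs:
        r = field_point - np.array([x, 0.0, 0.0])   # from the segment to the field point
        E += K * dq * r / np.linalg.norm(r) ** 3
    return E

# Field 5 cm above the centre of a 20 cm rod carrying 10 nC.
E = field_of_charged_rod([0.0, 0.05, 0.0], 10e-9, 0.20)
print(E)  # by symmetry the x-component is numerically close to zero
```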
It can be approximated by placing two conducting plates parallel to each other and maintaining a voltage (potential difference) between them; it is only an approximation because of boundary effects (near the edge of the plates, the electric field is distorted because the plates do not continue). Assuming infinite planes, the magnitude of the electric field E is

\[ E = -\,\frac{\Delta V}{d} \]

where ΔV is the potential difference between the plates and d is the distance separating the plates. The negative sign arises as positive charges repel, so a positive charge will experience a force away from the positively charged plate, in the opposite direction to that in which the voltage increases. In micro- and nano-applications, for instance in relation to semiconductors, a typical magnitude of an electric field is on the order of 10⁶ V⋅m⁻¹, achieved by applying a voltage of the order of 1 volt between conductors spaced 1 µm apart.

Electrodynamic fields are electric fields which do change with time, for instance when charges are in motion. The electric field cannot be described independently of the magnetic field in that case. If A is the magnetic vector potential, defined so that B = ∇×A, one can still define an electric potential φ such that

\[ \mathbf{E} = -\nabla\varphi - \frac{\partial\mathbf{A}}{\partial t} \]

One can recover Faraday's law of induction by taking the curl of that equation,

\[ \nabla\times\mathbf{E} = -\frac{\partial(\nabla\times\mathbf{A})}{\partial t} = -\frac{\partial\mathbf{B}}{\partial t} \]

which justifies, a posteriori, the previous form for E.

Energy in the electric field

The total energy per unit volume stored by the electromagnetic field is

\[ u_{EM} = \frac{\varepsilon_0}{2}\,|\mathbf{E}|^2 + \frac{1}{2\mu_0}\,|\mathbf{B}|^2 \]

As the E and B fields are coupled, it would be misleading to split this expression into "electric" and "magnetic" contributions. However, in the steady-state case, the fields are no longer coupled (see Maxwell's equations). It makes sense in that case to compute the electrostatic energy per unit volume:

\[ u_{ES} = \frac{\varepsilon_0}{2}\,|\mathbf{E}|^2 \]

The total energy U stored in the electric field in a given volume V is therefore

\[ U = \frac{\varepsilon_0}{2}\int_V |\mathbf{E}|^2\,dV \]

Definitive equation of vector fields

In the presence of matter, it is helpful to extend the notion of the electric field into three vector fields:

\[ \mathbf{D} = \varepsilon_0\,\mathbf{E} + \mathbf{P} \]

where P is the electric polarization – the volume density of electric dipole moments – and D is the electric displacement field. Since E and P are defined separately, this equation can be used to define D. The physical interpretation of D is not as clear as E (effectively the field applied to the material) or P (the induced field due to the dipoles in the material), but it still serves as a convenient mathematical simplification, since Maxwell's equations can be simplified in terms of free charges and currents. For linear, homogeneous, isotropic materials, E and D are proportional and constant throughout the region, with no position dependence:

\[ \mathbf{D} = \varepsilon\,\mathbf{E} \]

For inhomogeneous materials, there is a position dependence throughout the material:

\[ \mathbf{D}(\mathbf{r}) = \varepsilon(\mathbf{r})\,\mathbf{E}(\mathbf{r}) \]

For non-linear media, E and D are not proportional. Materials can have varying extents of linearity, homogeneity and isotropy.
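As a worked check of the parallel-plate figures quoted above (the 1 V across 1 µm example is from the text; the energy-density value is computed here, not taken from the source):

\[
|E| = \frac{\Delta V}{d} = \frac{1\,\text{V}}{1\times 10^{-6}\,\text{m}} = 10^{6}\ \text{V m}^{-1},
\qquad
u_{ES} = \frac{\varepsilon_0}{2}\,|E|^2 = \frac{8.85\times 10^{-12}}{2}\,(10^{6})^{2} \approx 4.4\ \text{J m}^{-3}.
\]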
A tropical rainforest is an ecosystem type that occurs roughly within the latitudes 28 degrees north or south of the equator (in the equatorial zone between the Tropic of Cancer and Tropic of Capricorn). This ecosystem experiences high average temperatures and a significant amount of rainfall. Rainforests can be found in Asia, Australia, Africa, South America, Central America, Mexico and on many of the Pacific, Caribbean, and Indian Ocean islands. Within the World Wildlife Fund's biome classification, tropical rainforests are considered a type of tropical wet forest (or tropical moist broadleaf forest) and may also be referred to as lowland equatorial evergreen rainforest.3

Tropical rainforests can be characterized in two words: hot and wet. Mean monthly temperatures exceed 18 °C (64 °F) during all months of the year.4 Average annual rainfall is no less than 168 cm (66 in) and can exceed 1,000 cm (390 in), although it typically lies between 175 cm (69 in) and 200 cm (79 in).5 This high level of precipitation often results in poor soils due to leaching of soluble nutrients. Tropical rainforests exhibit high levels of biodiversity. Around 40% to 75% of all biotic species are indigenous to the rainforests.6 Rainforests are home to half of all the living animal and plant species on the planet.7 Two-thirds of all flowering plants can be found in rainforests.5 A single hectare of rainforest may contain 42,000 different species of insect, up to 807 trees of 313 species and 1,500 species of higher plants.5 Tropical rainforests have been called the "world's largest pharmacy", because over one quarter of natural medicines have been discovered within them.89 It is likely that many millions of species of plants, insects and microorganisms remain undiscovered in tropical rainforests. Tropical rainforests are among the most threatened ecosystems globally due to large-scale fragmentation as a result of human activity. Habitat fragmentation caused by geological processes such as volcanism and climate change occurred in the past and has been identified as an important driver of speciation.10 However, fast human-driven habitat destruction is suspected to be one of the major causes of species extinction. Tropical rain forests have been subjected to heavy logging and agricultural clearance throughout the 20th century, and the area covered by rainforests around the world is rapidly shrinking.1112 Tropical rainforests have existed on Earth for hundreds of millions of years. Most tropical rainforests today are on fragments of the Mesozoic era supercontinent of Gondwana.13 The separation of the landmass resulted in a great loss of amphibian diversity, while at the same time the drier climate spurred the diversification of reptiles.10 The division left tropical rainforests located in five major regions of the world: tropical America, Africa, Southeast Asia, Madagascar, and New Guinea, with smaller outliers in Australia.13 However, the specifics of the origin of rainforests remain uncertain due to an incomplete fossil record.

Types of tropical rainforest

Several types of forest comprise the general tropical rainforest biome:
- Lowland equatorial evergreen rain forests are forests which receive high rainfall (more than 2000 mm, or 80 inches, annually) throughout the year.
These forests occur in a belt around the equator, with the largest areas in the Amazon Basin of South America, the Congo Basin of Central Africa, Indonesia, and New Guinea. - Moist deciduous and semi-evergreen seasonal forests, receive high overall rainfall with a warm summer wet season and a cooler winter dry season. Some trees in these forests drop some or all of their leaves during the winter dry season. These forests are found in parts of South America, in Central America and around the Caribbean, in coastal West Africa, parts of the Indian subcontinent, and across much of Indochina. - Montane rain forests, some of which are known as cloud forests, are found in cooler-climate mountain areas. Depending on latitude, the lower limit of montane rainforests on large mountains is generally between 1500 and 2500 m while the upper limit is usually from 2400 to 3300 m.14 - Flooded forests, seven types of flooded forest are recognized for Tambopata Reserve in Amazonian Peru:15 - Permanently waterlogged swamp forest—Former oxbow lakes still flooded but covered in forest. - Seasonally waterlogged swamp forest—Oxbow lakes in the process of filling in. - Lower floodplain forest—Lowest floodplain locations with a recognizable forest. - Middle floodplain forest—Tall forest, flooded occasionally. - Upper floodplain forest—Tall forest, rarely flooded. - Old floodplain forest—Subjected to flooding within the last two hundred years. - Previous floodplain—Now terra firme, but historically ancient floodplain of Tambopata River. Rainforests are divided into different strata, or layers, with vegetation organized into a vertical pattern from the top of the soil to the canopy16 Each layer is a unique biotic community containing different plants and animals adapted for life in that particular strata. Only the emergent layer is unique to tropical rainforests, while the others are also found in temperate rainforests. The forest floor, the bottom-most layer, receives only 2% of the sunlight. Only plants adapted to low light can grow in this region. Away from riverbanks, swamps and clearings, where dense undergrowth is found, the forest floor is relatively clear of vegetation because of the low sunlight penetration. This more open quality permits the easy movement of larger animals such as: ungulates like the okapi (Okapia johnstoni), tapir (Tapirus sp.), Sumatran rhinoceros (Dicerorhinus sumatrensis), and apes like the western lowland gorilla (Gorilla gorilla), as well as many species of reptiles, amphibians, and insects. The forest floor also contains decaying plant and animal matter, which disappears quickly, because the warm, humid conditions promote rapid decay. Many forms of fungi growing here help decay the animal and plant waste. The understory layer lies between the canopy and the forest floor. The understory is home to a number of birds, small mammals, insects, reptiles, and predators. Examples include leopard (Panthera pardus), poison dart frogs (Dendrobates sp.), ring-tailed coati (Nasua nasua), boa constrictor (Boa constrictor), and many species of Coleoptera.5 The vegetation at this layer generally consists of shade-tolerant shrubs, herbs, small trees, and large woody vines which climb into the trees to capture sunlight. Only about 5% of sunlight breaches the canopy to arrive at the understory causing true understory plants to seldom grow to 3 m (10 feet). As an adaptation to these low light levels, understory plants have often evolved much larger leaves. 
Many seedlings that will grow to the canopy level are in the understory. The canopy is the primary layer of the forest forming a roof over the two remaining layers. It contains the majority of the largest trees, typically 30–45 m in height. Tall, broad-leaved evergreen trees are the dominant plants. The densest areas of biodiversity are found in the forest canopy, as it often supports a rich flora of epiphytes, including orchids, bromeliads, mosses and lichens. These epiphytic plants attach to trunks and branches and obtain water and minerals from rain and debris that collects on the supporting plants. The fauna is similar to that found in the emergent layer, but more diverse. It is suggested that the total arthropod species richness of the tropical canopy might be as high as 20 million.17 Other species habituating this layer include many avian species such as the yellow-casqued wattled hornbill (Ceratogymna elata), collared sunbird (Anthreptes collaris), African gray parrot (Psitacus erithacus), keel-billed toucan (Ramphastos sulfuratus), scarlet macaw (Ara macao) as well as other animals like the spider monkey (Ateles sp.), African giant swallowtail (Papilio antimachus), three-toed sloth (Bradypus tridactylus), kinkajou (Potos flavus), and tamandua (Tamandua tetradactyla).5 The emergent layer contains a small number of very large trees, called emergents, which grow above the general canopy, reaching heights of 45–55 m, although on occasion a few species will grow to 70–80 m tall.1618 Some examples of emergents include: Balizia elegans, Dipteryx panamensis, Hieronyma alchorneoides, Hymenolobium mesoamericanum, Lecythis ampla and Terminalia oblonga.19 These trees need to be able to withstand the hot temperatures and strong winds that occur above the canopy in some areas. Several unique faunal species inhabit this layer such as the crowned eagle (Stephanoaetus coronatus), the king colobus (Colobus polykomos), and the large flying fox (Pteropus vampyrus).5 However, stratification is not always clear. Rainforests are dynamic and many changes affect the structure of the forest. Emergent or canopy trees collapse, for example, causing gaps to form. Openings in the forest canopy are widely recognized as important for the establishment and growth of rainforest trees. It’s estimated that perhaps 75% of the tree species at La Selva Biological Station, Costa Rica are dependent on canopy opening for seed germination or for growth beyond sapling size, for example.20 Most tropical rainforests are located around and near the equator, therefore having what is called an equatorial climate characterized by three major climatic parameters: temperature, rainfall, and dry season intensity21 Other parameters that affect tropical rainforests are carbon dioxide concentrations, solar radiation, and nitrogen availability. In general, climatic patterns consist of warm temperatures and high annual rainfall. However, the abundance of rainfall changes throughout the year creating distinct wet and dry seasons. Rainforests are classified by the amount of rainfall received each year, which has allowed ecologists to define differences in these forests that look so similar in structure. According to Holdridge’s classification of tropical ecosystems, true tropical rainforests have an annual rainfall greater than 800 cm and annual temperature greater than 24 degrees Celsius. However, most lowland tropical rainforests can be classified as tropical moist or wet forests, which differ in regards to rainfall. 
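As a small illustration of the classification thresholds quoted above, the rule of thumb can be written as a toy function. This sketch is not from the source and uses only the two numbers given in the text; the full Holdridge life-zone system also uses biotemperature and the potential evapotranspiration ratio, which are ignored here.

```python
def classify_lowland_tropical_forest(mean_annual_temp_c, annual_rainfall_cm):
    """Toy classifier using only the thresholds quoted in the text
    (temperature > 24 degrees C, rainfall > 800 cm for a 'true' tropical rainforest).
    The actual Holdridge system is considerably more detailed."""
    if mean_annual_temp_c <= 24:
        return "outside the quoted tropical rainforest temperature range"
    if annual_rainfall_cm > 800:
        return "tropical rainforest"
    # Most lowland sites fall short of the rainfall threshold and are better
    # described as tropical moist or tropical wet forest, distinguished by rainfall.
    return "tropical moist/wet forest"

print(classify_lowland_tropical_forest(26, 180))  # tropical moist/wet forest
print(classify_lowland_tropical_forest(26, 900))  # tropical rainforest
```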
Tropical rainforest ecology- dynamics, composition, and function- are sensitive to changes in climate especially changes in rainfall.21 The climate of these forests is controlled by a band of clouds called the Intertropical Convergence Zone located near the equator and created by the convergence of the trade winds from the northern and southern hemispheres. The position of the band varies seasonally, moving north in the northern summer and south in the northern winter, and ultimately controlling the wet and dry seasons in the tropics.22 These regions have experienced strong warming at a mean rate of 0.26 degrees Celsius per decade which coincides with a global rise in temperature resulting from the anthropogenic inputs of greenhouse gases into the atmosphere. Studies have also found that precipitation has declined and tropical Asia has experienced an increase in dry season intensity whereas Amazonia has no significant pattern change in precipitation or dry season.21 Additionally, El Niño-Southern Oscillation events drive the interannual climatic variability in temperature and precipitation and result in drought and increased intensity of the dry season. As anthropogenic warming increases the intensity and frequency of ENSO will increase, rendering tropical rainforest regions susceptible to stress and increased mortality of trees and plants .21 Soil types are highly variable in the tropics and are the result of a combination of several variables such as climate, vegetation, topographic position, parent material, and soil age23 Most tropical soils are characterized by significant leaching and poor nutrients; however there are some areas that contain fertile soils. Soils throughout the tropical rainforests fall into two classifications which include the ultisols and oxisols. Ultisols are known as well weathered, acidic red clay soils, deficient in major nutrients such as calcium and potassium. Similarly, oxisols are acidic, old, typically reddish, highly weathered and leached, however are well drained compared to ultisols. The clay content of ultisols is high, making it difficult for water to penetrate and flow through. The reddish color of both soils is the result of heavy heat and moisture forming oxides of iron and aluminum, which are insoluble in water and not taken up readily by plants. Soil chemical and physical characteristics are strongly related to above ground productivity and forest structure and dynamics. The physical properties of soil control the tree turnover rates whereas chemical properties such as available nitrogen and phosphorus control forest growth rates.24 The soils of the eastern and central Amazon as well as the Southeast Asian Rainforest are old and mineral poor whereas the soils of the western Amazon (Ecuador and Peru) and volcanic areas of Costa Rica are young and mineral rich. Primary productivity or wood production is highest in western Amazon and lowest in eastern Amazon which contains heavily weathered soils classified as oxisols.23 Additionally, Amazonian soils greatly weathered, making them devoid of minerals like phosphorus, potassium, calcium, and magnesium, which come from rock sources. However, not all tropical rainforests occur on nutrient poor soils, but on nutrient rich floodplains and volcanic soils located in the Andean foothills, and volcanic areas of Southeast Asia, Africa, and Central America.25 Oxisols, infertile, deeply weathered and severely leached, have developed on the ancient Gondwanan shields. 
Rapid bacterial decay prevents the accumulation of humus. The concentration of iron and aluminum oxides by the laterization process gives the oxisols a bright red color and sometimes produces minable deposits (e.g., bauxite). On younger substrates, especially of volcanic origin, tropical soils may be quite fertile. This high rate of decomposition is the result of phosphorus levels in the soils, precipitation, high temperatures and the extensive microorganism communities.26 In addition to the bacteria and other microorganisms, there are an abundance of other decomposers such as fungi and termites that aid in the process as well. Nutrient recycling is important because below ground resource availability controls the above ground biomass and community structure of tropical rainforests. These soils are typically phosphorus limited, which inhibits net primary productivity or the uptake of carbon.23 The soil contains tiny microbial organisms such as bacteria, which break down leaf litter and other organic matter into inorganic forms of carbon usable by plants through a process called decomposition. During the decomposition process the microbial community is respiring, taking up oxygen and releasing carbon dioxide. The decomposition rate can be evaluated by measuring the uptake of oxygen.26 High temperatures and precipitation increase decomposition rate, which allows plant litter to rapidly decay in tropical regions, releasing nutrients that are immediately taken up by plants through surface or ground waters. The seasonal patterns in respiration are controlled by leaf litter fall and precipitation, the driving force moving the decomposable carbon from the litter to the soil. Respiration rates are highest early in the wet season because the recent dry season results in a large percentage of leaf litter and thus a higher percentage of organic matter being leached into the soil.26 A common feature of many tropical rainforests is the distinct buttress roots of trees. Instead of penetrating to deeper soil layers, buttress roots create a widespread root network at the surface for more efficient uptake of nutrients in a very nutrient poor and competitive environment. Most of the nutrients within the soil of a tropical rainforest occur near the surface because of the rapid turnover time and decomposition of organisms and leaves.27 Because of this, the buttress roots occur at the surface so the trees can maximize uptake and actively compete with the rapid uptake of other trees. These roots also aid in water uptake and storage, increase surface area for gas exchange, and collect leaf litter for added nutrition.27 Additionally, these roots reduce soil erosion and maximize nutrient acquisition during heavy rains by diverting nutrient rich water flowing down the trunk into several smaller flows while also acting as a barrier to ground flow. Also, the large surface areas these roots create provide support and stability to rainforests trees, which commonly grow to significant heights. This added stability allows these trees to withstand the impacts of severe storms, thus reducing the occurrence of fallen trees.27 Succession is an ecological process that changes the biotic community structure over time towards a more stable, diverse community structure after an initial disturbance to the community. The initial disturbance is often a natural phenomenon or human caused event. 
Natural disturbances include hurricanes, volcanic eruptions, river movements, or an event as small as a fallen tree that creates gaps in the forest. In tropical rainforests, these same natural disturbances have been well documented in the fossil record and are credited with encouraging speciation and endemism.10

Biodiversity and speciation

Tropical rainforests exhibit a vast diversity in plant and animal species. The origin of this remarkable speciation has been a question for scientists and ecologists for years. A number of theories have been developed for why and how the tropics can be so diverse. Interspecific competition results from a high density of species with similar niches in the tropics and the limited resources available. Species which "lose" the competition may either become extinct or find a new niche. Direct competition will often lead to one species dominating another by some advantage, ultimately driving it to extinction. Niche partitioning is the other option for a species: the separation and rationing of necessary resources by utilizing different habitats, food sources, cover, or general behavioral differences. Two species with similar food items but different feeding times are an example of niche partitioning.10 The Theory of Pleistocene Refugia was developed by Jürgen Haffer in 1969 with his article Speciation of Amazonian Forest Birds. Haffer proposed that speciation was the product of rainforest patches being separated by stretches of non-forest vegetation during the last glacial period. He called these rainforest patches refuges, and within these patches allopatric speciation occurred. With the end of the glacial period and the increase in atmospheric humidity, rainforest began to expand and the refuges reconnected.28 This theory has been the subject of debate, and scientists remain skeptical of whether it is correct. Genetic evidence suggests speciation had occurred in certain taxa 1–2 million years ago, preceding the Pleistocene.29 Tropical rainforests have harboured human life for many millennia, with many Indian tribes in South and Central America, who belong to the Indigenous peoples of the Americas, the Congo Pygmies in Central Africa, and several tribes in South-East Asia, like the Dayak people and the Penan people in Borneo.30 Food resources within the forest are extremely dispersed due to the high biological diversity, and what food does exist is largely restricted to the canopy and requires considerable energy to obtain. Some groups of hunter-gatherers have exploited rainforest on a seasonal basis but dwelt primarily in adjacent savanna and open forest environments where food is much more abundant.
Other peoples described as rainforest dwellers are hunter-gatherers who subsist in large part by trading high value forest products such as hides, feathers, and honey with agricultural people living outside the forest.31 A variety of indigenous people live within the rainforest as hunter-gatherers, or subsist as part-time small scale farmers supplemented in large part by trading high-value forest products such as hides, feathers, and honey with agricultural people living outside the forest.3031 Peoples have inhabited the rainforests for tens of thousands of years and have remained so elusive that only recently have some tribes been discovered.30 These indigenous peoples are greatly threatened by loggers in search for old-growth tropical hardwoods like Ipe, Cumaru and Wenge, and by farmers who are looking to expand their land, for cattle(meat), and soybeans, which are used to feed cattle in Europe and China.30323334 On 18 January 2007, FUNAI reported also that it had confirmed the presence of 67 different uncontacted tribes in Brazil, up from 40 in 2005. With this addition, Brazil has now overtaken the island of New Guinea as the country having the largest number of uncontacted tribes.35 The province of Irian Jaya or West Papua in the island of New Guinea is home to an estimated 44 uncontacted tribal groups.36 The pygmy peoples are hunter-gatherer groups living in equatorial rainforests characterized by their short height (below one and a half meters, or 59 inches, on average). Amongst this group are the Efe, Aka, Twa, Baka, and Mbuti people of Central Africa. However, the term pygmy is considered pejorative so many tribes prefer not to be labeled as such.37 Some notable indigenous peoples of the Americas, or Amerindians, include the Huaorani, Ya̧nomamö, and Kayapo people of the Amazon. The traditional agricultural system practiced by tribes in the Amazon is based on swidden cultivation (also known as slash-and-burn or shifting cultivation) and is considered a relatively benign disturbance.3839 In fact, when looking at the level of individual swidden plots a number of traditional farming practices are considered beneficial. For example, the use of shade trees and fallowing all help preserve soil organic matter, which is a critical factor in the maintenance of soil fertility in the deeply weathered and leached soils common in the Amazon.40 There is a diversity of forest people in Asia, including the Lumad peoples of the Philippines and the Penan and Dayak people of Borneo. The Dayaks are a particularly interesting group as they are noted for their traditional headhunting culture. Fresh human heads were required to perform certain rituals such as the Iban “kenyalang” and the Kenyah “mamat”.41 Pygmies who live in Southeast Asia are, amongst others, referred to as “Negrito”. Cultivated foods and spices Yam, coffee, chocolate, banana, mango, papaya, macadamia, avocado, and sugarcane all originally came from tropical rainforest and are still mostly grown on plantations in regions that were formerly primary forest. In the mid-1980s and 90s, 40 million tons of bananas were consumed worldwide each year, along with 13 million tons of mango. Central American coffee exports were worth US$3 billion in 1970. Much of the genetic variation used in evading the damage caused by new pests is still derived from resistant wild stock. Tropical forests have supplied 250 cultivated kinds of fruit, compared to only 20 for temperate forests. 
Forests in New Guinea alone contain 251 tree species with edible fruits, of which only 43 had been established as cultivated crops by 1985.42 In addition to extractive human uses rain forests also have non-extractive uses that are frequently summarized as ecosystem services. Rain forests play an important role in maintaining biological diversity, sequestering & storing carbon, global climate regulation, disease control, and pollination.43 Despite the negative effects of tourism in the tropical rainforests, there are also several important positive effects. - In recent years ecotourism in the tropics has increased. While rainforests are becoming increasingly rare, people are travelling to nations that still have this diverse habitat. Locals are benefiting from the additional income brought in by visitors, as well areas deemed interesting for visitors are often conserved. Ecotourism can be an incentive for conservation, especially when it triggers positive economic change.44 Ecotourism can include a variety of activities including animal viewing, scenic jungle tours and even viewing cultural sights and native villages. If these practices are performed appropriately this can be beneficial for both locals and the present flora and fauna. - An increase in tourism has increased economic support, allowing more revenue to go into the protection of the habitat. Tourism can contribute directly to the conservation of sensitive areas and habitat. Revenue from park-entrance fees and similar sources can be utilised specifically to pay for the protection and management of environmentally sensitive areas. Revenue from taxation and tourism provides an additional incentive for governments to contribute revenue to the protection of the forest. - Tourism also has the potential to increase public appreciation of the environment and to spread awareness of environmental problems when it brings people into closer contact with the environment. Such increased awareness can induce more environmentally conscious behavior. Tourism has had a positive effect on wildlife preservation and protection efforts, notably in Africa but also in South America, Asia, Australia, and the South Pacific.45 Mining and drilling Deposits of precious metals (gold, silver, coltan) and fossil fuels (oil and natural gas) occur underneath rainforests globally. These resources are important to developing nations and their extraction is often given priority to encourage economic growth. Mining and Drilling can require large amounts of land development, directly causing deforestation. In Ghana, a West African nation, deforestation from decades of mining activity left about 12% of the country's original rainforest intact.46 Conversion to agricultural land With the invention of agriculture, humans were able to clear sections of rainforest to produce crops, converting it to open farmland. Such people, however, obtain their food primarily from farm plots cleared from the forest3147 and hunt and forage within the forest to supplement this. The issue arising is between the independent farmer providing for his family and the needs and wants of the globe as a whole. This issue has seen little improvement because no plan has been established for all parties to be aided.48 Agriculture on formerly forested land is not without difficulties. Rainforest soils are often thin and leached of many minerals, and the heavy rainfall can quickly leach nutrients from area cleared for cultivation. 
People such as the Yanomamo of the Amazon, utilize slash-and-burn agriculture to overcome these limitations and enable them to push deep into what were previously rainforest environments. However, these are not rainforest dwellers, rather they are dwellers in cleared farmland3147 that make forays into the rainforest. Up to 90% of the typical Yanamomo diet comes from farmed plants.47 Some action has been taken by suggesting fallow periods of the land allowing secondary forest to grow and replenish the soil.49 Beneficial practices like soil restoration and conservation can benefit the small farmer and allow better production on smaller parcels of land. The tropics take a major role in reducing atmospheric carbon dioxide. The tropics (most notably the Amazon rainforest) are called carbon sinks.citation needed As major carbon reducers and carbon and soil methane storages, their destruction contributes to increasing global energy trapping, atmospheric gases.citation needed Climate change has been significantly contributed to by the destruction of the rainforests. A simulation was performed in which all rainforest in Africa were removed. The simulation showed an increase in atmospheric temperature by 2.5 to 5 degrees Celsius.50 Efforts to protect and conserve tropical rainforest habitats are diverse and widespread. Tropical rainforest conservation ranges from strict preservation of habitat to finding sustainable management techniques for people living in tropical rainforests. International policy has also introduced a market incentive program called Reducing Emissions from Deforestation and Forest Degradation (REDD) for companies and governments to outset their carbon emissions through financial investments into rainforest conservation.51 - Why the Amazon Rainforest is So Rich in Species. Earthobservatory.nasa.gov (5 December 2005). Retrieved on 28 March 2013. - Why The Amazon Rainforest Is So Rich In Species. ScienceDaily.com (5 December 2005). Retrieved on 28 March 2013. - Olson, David M.; Dinerstein, Eric; Wikramanayake, Eric D.; Burgess, Neil D.; Powell, George V. N.; Underwood, Emma C.; d'Amico, Jennifer A.; Itoua, Illanga et al. (2001). "Terrestrial Ecoregions of the World: A New Map of Life on Earth". BioScience 51 (11): 933–938. doi:10.1641/0006-3568(2001)051[0933:TEOTWA]2.0.CO;2.dead link - Woodward, Susan. Tropical broadleaf Evergreen Forest: The rainforest. Retrieved on 14 March 2009. - Newman, Arnold (2002). Tropical Rainforest: Our Most Valuable and Endangered Habitat With a Blueprint for Its Survival Into the Third Millennium (2 ed.). Checkmark. ISBN 0816039739. - "Rainforests.net – Variables and Math". Retrieved 4 January 2009. - The Regents of the University of Michigan. The Tropical Rain Forest. Retrieved on 14 March 2008. - Rainforests. Animalcorner.co.uk (1 January 2004). Retrieved on 28 March 2013. - The bite that heals - Sahney, S., Benton, M.J. & Falcon-Lang, H.J. (2010). "Rainforest collapse triggered Pennsylvanian tetrapod diversification in Euramerica". Geology 38 (12): 1079–1082. doi:10.1130/G31182.1. - Brazil: Deforestation rises sharply as farmers push into Amazon, The Guardian, 1 September 2008 - China is black hole of Asia's deforestation, Asia News, 24 March 2008 - Corlett, R. and Primack, R. (2006). "Tropical Rainforests and the Need for Cross-continental Comparisons". Trends in Ecology & Evolution 21 (2): 104–110. doi:10.1016/j.tree.2005.12.002. - Bruijnzeel, L. A. and Veneklaas, E. J. (1998). 
"Climatic Conditions and Tropical Montane Forest Productivity: The Fog Has Not Lifted Yet". Ecology 79 (1): 3. doi:10.1890/0012-9658(1998)079[0003:CCATMF]2.0.CO;2. - Phillips, O.; Gentry, A.H.; Reynel, C.; Wilkin, P.; Galvez-Durand b, C. (1994). "Quantitative Ethnobotany and Amazonian Conservation". Conservation Biology 8 (1): 225–48. doi:10.1046/j.1523-1739.1994.08010225.x. - Bourgeron, Patrick S. (1983). "Spatial Aspects of Vegetation Structure". In Frank B. Golley. Tropical Rain Forest Ecosystems. Structure and Function. Ecosystems of the World (14A ed.). Elsevier Scientific. pp. 29–47. ISBN 0-444-41986-1. - Erwin, T.L. (1982). "Tropical forests: Their richness in Coleoptera and other arthropod species". The Coleopterists Bulletin 36: 74–75. JSTOR 4007977. - "Sabah". Eastern Native Tree Society. Retrieved 14 November 2007. - King, David A. and Clark, Deborah A. (2011). "Allometry of Emergent Tree Species from Saplings to Above-canopy Adults in a Costa Rican Rain Forest". Journal of Tropical Ecology 27 (6): 573–79. doi:10.1017/S0266467411000319. - Denslow, J S (1987). "Tropical Rainforest Gaps and Tree Species Diversity". Annual Review of Ecology and Systematics 18: 431. doi:10.1146/annurev.es.18.110187.002243. - Malhi, Yadvinder and Wright, James (2004). "Spatial patterns and recent trends in the climate of tropical rainforest regions". The Royal Society Biological Sciences 359 (1443): 311–329. doi:10.1098/rstb.2003.1433. - NWS JetStream – Inter-Tropical Convergence Zone. Srh.noaa.gov (5 January 2010). Retrieved on 28 March 2013. - Aragao, L. E. O. C. (2009). "Above- and below-ground net primary productivity across ten Amazonian forests on contrasting soils". Biogeosciences 6 (12): 2759–2778. doi:10.5194/bg-6-2759-2009. - Moreira, A.; Fageria, N. K.; Garcia y Garcia, A. (2011). "Soil Fertility, Mineral Nitrogen, and Microbial Biomass in Upland Soils of the Central Amazon under Different Plant Covers". Communications in Soil Science and Plant Analysis 42 (6): 694–705. doi:10.1080/00103624.2011.550376. - Environmental news and information. mongabay.com. Retrieved on 28 March 2013. - Cleveland, Cory C. and Townsend, Alan R. (2006). "Nutrient additions to a tropical rain forest drive substantial soil carbon dioxide losses to the atmosphere". PNAS 103 (27): 10316–10321. doi:10.1073/pnas.0600989103. PMC 1502455. PMID 16793925. - Tang, Yong; Yang, Xiaofei; Cao, Min; Baskin, Carol C.; Baskin, Jerry M. (2010). "Buttress Trees Elevate Soil Heterogeneity and Regulate Seedling Diversity in a Tropical Rainforest". Plant and Soil 338: 301–309. doi:10.1007/s11104-010-0546-4. - Haffer, J. (1969). "Speciation in Amazonian Forest Birds". Science 165 (131): 131. doi:10.1126/science.165.3889.131. - Moritz, C.; Patton, J. L.; Schneider, C. J.; Smith, T. B. (2000). "DIVERSIFICATION OF RAINFOREST FAUNAS: An Integrated Molecular Approach". Annu. Rev. Ecol. Syst. 31: 533. doi:10.1146/annurev.ecolsys.31.1.533. - Barton, Huw; Denham, Tim; Neumann, Katharina; Arroyo-Kalin, Manuel (6 Feb 2012). "Long-term perspectives on human occupation of tropical rainforests: An introductory overview". Quaternary International 249. pp. 1-3 (theme issue pp. 1-162). doi:10.1016/j.quaint.2011.07.044. ISSN 1040-6182. Retrieved 23 Nov 2013. - Bailey, R.C., Head, G., Jenike, M., Owen, B., Rechtman, R., Zechenter, E. (1989). "Hunting and gathering in tropical rainforest: is it possible". American Anthropologist 91 (1): 59–82. doi:10.1525/aa.1989.91.1.02a00040. 
- Guardian: 'They're killing us': world's most endangered tribe cries for help (Sunday 22 April 2012). - Brazil's Indigenous Awa Tribe At Risk (06 June 2012). - ONTOLOGY OF THE SELF AND MATERIAL CULTURE: ARROW-MAKINGAMONG THE AWA HUNTER-GATHERERS (BRAZIL) - Brazil sees traces of more isolated Amazon tribes. Reuters.com (17 January 2007). Retrieved on 28 March 2013. - BBC: First contact with isolated tribes? survivalinternational.org (25 January 2007) - Forest peoples in the central African rain forest: focus on the pygmies. fao.org - Dufour, D. R. (1990). "Use of tropical rainforest by native Amazonians". BioScience 40 (9): 652–659. doi:10.2307/1311432. JSTOR 1311432. - Herrera, Rafael; Jordan, Carl F.; Medina, Ernesto and Klinge, Hans (1981). How Human Activities Disturb the Nutrient Cycles of a Tropical Rainforest in Amazonia 10 (2/3, MAB: A Special Issue). pp. 109–114. JSTOR 4312652. - Ewel, J J (1986). "Designing Agricultural Ecosystems for the Humid Tropics". Annual Review of Ecology and Systematics 17: 245. doi:10.1146/annurev.es.17.110186.001333. JSTOR 2096996. - Jessup, T. C. and Vayda, A. P. (1988). "Dayaks and forests of interior Borneo". Expedition 30 (1): 5–17. - Myers, N. (1985). The primary source, W. W. Norton and Co., New York, pp. 189–193, ISBN 0-393-30262-8 - Foley, Jonathan A.; Asner, Gregory P.; Costa, Marcos Heil; Coe, Michael T.; Defries, Ruth; Gibbs, Holly K.; Howard, Erica A.; Olson, Sarah et al. (2007). "Amazonia revealed: forest degradation and loss of ecosystem goods and services in the Amazon Basin". Frontiers in Ecology 5 (1): 25–32. doi:10.1890/1540-9295(2007)5[25:ARFDAL]2.0.CO;2. - Stronza, A. and Gordillo, J. (2008). "Community views of ecotourism: Redefining benefits". Annals of Tourism Research 35 (2): 448. doi:10.1016/j.annals.2008.01.002. - Fotiou, S. (October 2001). Environmental Impacts of Tourism. Retrieved 30 November 2007, from Uneptie.org - Ismi, A. (1 October 2003), Canadian mining companies set to destroy Ghana’s forest reserves, Canadian Centre for Policy Alternatives Monitor, Ontario, Canada. - Walker, Philip L.; Sugiyama, Larry and Chacon, Richard (1998) "Diet, Dental Health, and Cultural Change among Recently Contacted South American Indian Hunter-Horticulturalists", Ch. 17 in Human Dental Development, Morphology, and Pathology. University of Oregon Anthropological Papers, No. 54 - Tomich, P. T., Noordwijk, V. M., Vosti, A. S., Witcover, J (1998). "Agricultural development with rainforest conservation: methods for seeking best bet alternatives to slash-and-burn, with applications to Brazil and Indonesia". Agricultural Economics 19: 159–174. doi:10.1016/S0169-5150(98)00032-2. - De Jong, Wil; Freitas, Luis; Baluarte, Juan; Van De Kop, Petra; Salazar, Angel; Inga, Erminio; Melendez, Walter; Germaná, Camila (2001). "Secondary forest dynamics in the Amazon floodplain in Peru". Forest Ecology and Management 150: 135–146. doi:10.1016/S0378-1127(00)00687-3. - Semazzi, F. H., Song, Y (2001). "A GCM study of climate change induced by deforestation in Africa". Climate Research 17: 169–182. doi:10.3354/cr017169. - Varghese, Paul (August 2009). "An Overview of REDD, REDD Plus and REDD Readiness". Retrieved 23 November 2009. |Wikimedia Commons has media related to Tropical rainforests.| - Rainforest Action Network - Rain Forest Info from Blue Planet Biomes - Passport to Knowledge Rainforests
Visit a creative classroom or watch a video about how a creative classroom works. One of the easiest ways to promote creativity in your classroom is to design an actual designated space for exploration and creative thinking. Use these to “fill in the blank” for basic certificates, like a Certificate of Appreciation. In Chapter 1, The key elements of creativity, the definition of creativity and the three different types of creativity in education are outlined and discussed. ArtClass Curator's 13 Art and Math Projects. Come up with creative content for blogs and blog stories with the help of these creative writing ideas. If students do not already know about counter-examples, you could stop to highlight the contribution and do a side lesson on examples (generating test cases, remaining skeptical in the face of confirming examples, extreme and degenerate cases). For example, a teacher may design a task by checking Bloom's Taxonomy to make sure that different levels of skill sets are required for different students. Carolyn Fox, an educator, discusses how digital technology and creativity in the classroom prepare kids for a bright future. For example, you find yourself walking on the streets of Paris, taking in the French scene, when suddenly, a young chap hands. An idea is to paint the main teaching wall a brighter or deeper tone than the other classroom walls. Examples of Modified Assignments for Students with Special Needs: here are some examples of modifications. Engaging students can assist in the struggle against loss of motivation, dislike of subjects and disruption of classroom management (Handley, 2010). Examples of the use of constructivism in your classroom. Become a Member of the YIMS Teacher’s Lounge and access fun, engaging yoga games to rock your classes. Creative Drama for the Classroom: Life & Three Entrances Activity, posted on May 26, 2013 by Anna Smith. This activity is based on the Three Entrances exercise in Uta Hagen’s book Respect for Acting. (For example, the number 5 means counting the correct number of objects to make the number 5.) This forces students to support their own theories, in essence taking responsibility for their words and respecting those of others. A Passionate, Unapologetic Plea for Creative Writing in Schools. Solid scientific research indicates that encouraging creativity. Getting involved with the students in the community is the best way to give a push to their creativity. On the clothespin, she glued a laminated cutout or punchout relating to the classroom theme on either side of the clothespin. But in all honesty, most of the time, one student would make the choice to walk away and find a different workspace. Behavior modification is a method of eliciting better classroom performance from reluctant students. It is the reshaping of ideas. If you have classroom pics. Consider some of the aspects that combine to make up who we are as individuals: our age, gender, ethnicity, race, intellectual ability, socio-economic level, language, culture, education, religion, birthplace, where we grew up, learning styles, and multiple intelligence preferences.
In The Creative Curriculum® for Preschool classroom, children have daily opportunities to learn to recognize, name, and write the letters of the alphabet and to associate them with sounds. Think of it as a mini-reference handbook to some of the techniques that can be used to come up with many different ideas. This isn’t an arts-based interview question! This is a question that aims to get a handle on how you’ve creatively solved a problem at work, or brought about positive change or innovation. Creative pupils are curious, question and challenge, and don’t necessarily follow the rules. The use of creative drama in the classroom is a student-focused process where experiential learning can be fostered and developed within any given curriculum. Creative professional and lecturer Lawrence Lartey finds his choice of dress is often a sticking point with colleagues. I will respect others and myself. Give students 30 seconds to look each other over really well, paying attention to all details about their partner. End the school year right with some fun physical activity. If you don’t already know, authentic materials are what native speakers use on a daily basis. Ask the average Jane or Joe on the street how creativity works and you'll probably hear a few well-worn understandings about the genesis of new ideas. The following are recommended: 5 ways to use audio in Glogster. However, I thought this seemed like a fascinating idea and I was eager to find a way to incorporate this teaching method into my classroom in a way that was appropriate for my students. The second aspect of classroom instruction that interested us was the degree to which students and teachers collaborate in the classroom. Brainstorming in the classroom motivates students to freely express their ideas and thoughts on a subject. Teaching the cell can be fun and engaging for students. She used colored yarn and tied one end to the paper clip, and one end to a clothespin. Routines can be useful. Creativity Exercises - Some creative games for the classroom. “Creativity is just connecting things.” It is the very definition of "thinking outside the box." In their review of the literature on young children’s object play, Barton and Wolery found that adult modeling and prompting helped increase children’s level of pretend play. We also use strategies like think-pair-share to engage students. The goal here is awareness. JumpStart has an extensive collection of educational classroom activities for kids in preschool, kindergarten, 1st grade, 2nd grade, 3rd grade, 4th grade and 5th grade. This is an updated version of an article that was created earlier. Along with the examples and discussion about gamification, use this step-by-step guide to smoothly implement game-based learning in the classroom. Each step in the process is discussed in detail and fully illustrated with fun, creative teaching strategies. The ability to work effectively as part of a team or independently is a key skill for adult life. Handles disagreements with peers appropriately. Here are just a few examples of how UDL can work in a classroom. The better the reward, the higher the risks, and vice versa. Love this list of what an effective teaching classroom should look like.
Scientifically speaking, creativity is part of our consciousness and we can be creative – if we know – ’what goes on in our mind during the process of creation’. Routines can be useful. 20 Clever Ways to Teach Creativity in the Classroom. Each step in the process is discussed in detail and fully illustrated with fun, creative teaching strategies. Many countries include it as a core aim for their students in national curricula, even countries such as Singapore that come top of world rankings. This makes lessons more interesting and entertaining for students. Using play as a tool to teach in the early childhood classroom will bring a holistic approach to the content and will help develop every part of each child. The Four C Model of creativity (Kaufman and Beghetto, 2009) attempts to bridge this gap by differentiating between four types of creativity; Big C creativity refers to eminent, field-changing creative work. Goal and Objectives: Using Elana Bell’s piece Searching for the Lost Jews of Alexandria as inspiration, students creatively explore identity, engage in learning about the Israel/Palestine conflict and write their own creative piece. Bodrova and Leong (2001) developed the Tools of the Mind (Tools) curriculum to improve all of the three core mental executive functions involved in creative problem solving: cognitive flexibility, working memory, and inhibitory control. An example of cooperative learning is when small groups build a block building together. • The Learning Environment: the structure of the classroom that makes it possible for teachers to. In one of the most popular Ted Talks of all time, "How Great Leaders Inspire Action," Simon Sinek constantly repeats the key takeaway – "people don't buy what you do, they buy why you do it." Please adapt these questions to use in your own lessons. Observe a working model of creativity. Define creative. During the anticipatory set, just as with the 3rd grade class, the students clapped on every beat, stomped on beat one, and vocalized the fruit names as rhythms. Creativity is a much broader construct. 30 Excellent Examples Of Creativity In Simplicity, posted in Inspiration, January 16th, 2015, by Anders Ross. You must have heard the expression “creativity in simplicity,” which states the power of simple things. Introduction of classroom creativity: classroom creativity refers to the quality of a child's or student's own thinking in any piece of work. Children are often viewed as naturally strong in imagination. It is a valuable skill to practice because when you have many different ideas, you have more options and are therefore more likely to find more viable solutions to your problem. They are exploring their ability to create and communicate using a variety of media (crayons, felt-tip markers, paints and other art materials, blocks, dramatic play materials, miniature life figures) and through creative movement, singing, and dancing. In #5 above I say that I kill creativity if I show examples before students have developed their own concepts of what might go into a significant creative effort. His work shares a number of important themes with John Dewey (on experience, creativity, education and art), Donald Schön (on reflective practice) and Howard Gardner (around multiple intelligences). Secrets of the Creative Brain. Creativity is a big deal in the 21st century classroom.
These formative evaluations provide information that can be used to modify/improve course content, adjust teaching methods, and, ultimately improve student learning. The following is an example of how my students, after being exposed to the author ware HotPotatoes, go beyond the minimum knowledge of using the software. Creativity is often associated with optimism. During a given day, many parents focus on teaching their tots important social skills and academic skills,. example, if exploring poetry, the corners might be Metaphor, Reading, Vocabulary, and Personal Expression. Below, 10 ways to teach creativity in the classroom: 1. Do you have a boatload of bulletin boards to fill in your classroom? I know I did. Our super easy storytelling formula-- combined with creative writing prompts and story prompts, free writing worksheets, writing games and more-- make it easy to write and tell fun stories instantly. Inside a Montessori Classroom Montessori classrooms are peaceful, happy places designed to meet the developmental needs of each child in every stage of life. At the opposite end of the spectrum are Creating Behaviors, which require students to plan artwork, problem solve, and express meaning. But being artistic is only a small part of creativity. Find ways to teach by example. For example, Grand Valley State University math professor Robert Talbert provides screencasts on class topics on his YouTube channel, while Vanderbilt computer science professor Doug Fisher provides his students video lectures prior to class (see examples here and here. The idea is to use the audience to help refine the prototype or draft. classroom, I nurture creative properties such as openness, flexibility, risk-taking, and the. Along with the examples and discussion about gamification, use this step-by-step guide to smoothly implement game-based learning in the classroom. Flexibility: ability to look at something from a different angle or point of view, shifting to an opposing viewpoint, angle, direction, chronology, modality, putting yourself "in someone else's shoes. They contain many places for children to learn and play, in many different ways: by themselves, in pairs, in small groups, in large groups, inside, outside, at tables, on the floor. To include more creativity in your classroom try the following tips. Creative Thinking Strategies This section of Creativiteach is devoted to strategies intended to enhance creativity, primarily divergent-thinking strategies. You can repeat this with other items in the classroom, incorporating these and those. The point of creative studies, says Roger L. doc Example of Preschool Observation: documentation and analysis Ana, aged 3-1/2 , was observed in the Cabrillo preschool classroom. For example, you find yourself walking on the streets of Paris, taking in the French scene, when suddenly, a young chap hands. A sensitive approach to your work with students can save you from many problems. Defining Creativity for the Classroom A lesson plan used to develop a class definition of creativity. Free Choice & Creative Writing. variety of disciplines have historically examined classroom dynamics, it is only relatively recently that physics, as a discipline, has begun to explore the effects of classroom structure on cognitive development. Creativity is an elusive concept that has intrigued researchers for years. 30 Things You Can Do To Promote Creativity. Get everything you need to establish a nurturing classroom community with The First Six Weeks. 
Teacher Created Resources is the leading publisher of educational materials, classroom decorations & teacher supplies for preschool, elementary & middle schools. classroom, I nurture creative properties such as openness, flexibility, risk-taking, and the. 1 Demonstrate beginning skill in the use of basic tools and art-making processes, such as printing, crayon rubbings, collage, and stencils. A simple definition: Learning is a change in behavior resulting from experience; in evolutionary terms, learning is an adaptive change in behavior that results from experience. INTRODUCTION CREATIVITY AND GESTALT Gestalt Review, 13(2):135-148, 2009 have been qualitatively different from that of reading their words now, but each author/presenter approached the task of describing their workshops to you, the greater Gestalt community, in a spirit of sharing. These benefits range from cognitive aspects of language learning to more co-operative group dynamics. Whether you view humor in the classroom as a well-earned option or a utilitarian strategy to infuse young brains with learning (or somewhere in between), seeing students smile or laugh with a spark of comprehension can make a teacher’s day. For example, when a rule is not being followed, refer back to the rule pizza to remind students of the rules they established. The students also teach me technology that I may not know about during this time also. Increasing Inclusivity in the Classroom. Role and Importance of Creativity in Classroom. Guided by. This will also encourage them to explore their creativity and the different mediums to present the material. This is probably the most straightforward way of illustrating the basic use of demonstratives, one which can most easily be done in the classroom. Through play a child's creativity, physical and cognitive abilities are refined and strengthened. Project Examples. Another approach that involves students in developing classroom expectations is to have students write or draw expectations for the classroom. Provide enough space for a safe block corner and enough cars and blocks for creating highways and traffic jams. With the need to use various learning platforms including television, film, radio, newspaper, the internet and. treats other students with fairness and understanding. This article discusses the environment of the classroom and suggests that one which is creative and stimulating is of most benefit to both teachers and students. You have what it takes to think about education differently right inside of you. Therefore, teachers have the responsibility of ensuring the development and promotion of creativity in students. The Creative Classroom Project was a collaboration between Project Zero and Disney Worldwide Outreach to produce materials that help teachers explore and understand: The role of creativity and innovation in teaching and learning. MISCONCEPTION: The process of science is purely analytic and does not involve creativity. Examples of common centers are dramatic play, art, science, math, building (blocks and Legos), and music. When used calmly, consistently, and respectfully, Responsive Classroom time-out can be a valuable strategy for helping students develop self-control while keeping the classroom calm, safe, and orderly. *This page contains the complete lesson plans for a thirteen week course in creative writing which I taught for Lane Community College for 22 years, most recently spring quarter, 2002. Conditioning and Learning I. Classroom Collaboration/Team Building: Classroom. 
Creative professional and lecturer, Lawrence Lartey, finds his choice of dress is often a sticking point with colleagues. Often, as teachers, we say this about ourselves. Overall, the setting should include classroom applications of constructivism within a few key concepts. Google Sheets in the Classroom Scenario 1: Digital Portfolios with Google Forms. Hit enter to search or ESC to close. Creative collaboration. Example of Creative Drama Creative drama begins with a warm-up exercise such as relaxation, improvisation, or theater game for the whole group. e the washing of the hand) and the care of the environment (i. Using play as a tool to teach in the early childhood classroom will bring a wholistic approach to the content and will help develop every part of each child. Offering innovative ways to teach problem solving skills along with content, this book is designed for teachers with little or no background in CPS. creative thinking has a definite role in the classroom, a fact that has long been recognized by academics such as the late Ellis Paul Torrance, who dedicated a lifetime of work to advancing creativity in education. Using Art and Creativity to Engage an Autistic Child in the Classroom How low-budget art projects can enhance the lives of children with autism and other learning differences Historical figures who may have been on the autism spectrum. zParticipants will explore ways they can implement the program to meet the needs of the children in their care. An example of cooperative learning is when small groups build a block building together. 14 Creative Ways to Engage Students Fostering creativity can range from simple team-building exercises to complex, open-ended problems that may require a semester to solve. Love this list of what an effective teaching classroom should look like. The purpose of thinking hats is to focus your way of thinking, so putting an actual hat on your head may help you or your students, or it could just be a new form of distraction. The world is unpredictable. and creativity. Plan an activity for the students in each corner and have them share their findings with the class. All it takes to make creativity a part of your life is the willingness to make it a habit. If you have classroom pics. They are curious about the world around them and about learning. Focused Listing Focused Listing is a quick and simple student writing activity. for example, teachers can utilise members of the school community, have lessons outside of the classroom or have students dress up as certain characters. This technique can be very effective for. Learning ways to think and resolve issues and complex problems will help students with different facets of life. Presenting the music curriculum, for example, the outdoor classroom not only allows children to experience the range of sounds, tempos and dynamics of musical instruments, but also challenges groups to compose musical. This Creative Commons license lets others remix, tweak, and build upon our work non-commercially, as long as they credit us and indicate if changes were made. Combine iPads with communication apps to allow students a variety of ways to convey their ideas with a tap of the screen. variety of disciplines have historically examined classroom dynamics, it is only relatively recently that physics, as a discipline, has begun to explore the effects of classroom structure on cognitive development. Creativity is found in the obvious—art and music, but can also be found in science and play. 
A creative method for giving students feedback on their written assignments, suggested by Linton Hutchinson. The following is an example of how my students, after being exposed to the author ware HotPotatoes, go beyond the minimum knowledge of using the software. Add Comment. Sarah Diaz believes this wholeheartedly. If ideas are butterflies, notebooks are nets. Observe a working model of creativity. But being artistic is only a small part of creativity. I particularly like this whole-class tracker because you can edit it and use it to track mastery on standards, tests and quizzes, behavior expectations, and classroom goals. Preventing Conflict. These routines help you maintain order and also help the kids stay calm. Melinda Kolk is the Editor of Creative Educator and the author of Teaching with Clay Animation. What were students’ attitudes toward a classroom environment that promoted creative thinking and problem solving? S. Whether you view humor in the classroom as a well-earned option or a utilitarian strategy to infuse young brains with learning (or somewhere in between), seeing students smile or laugh with a spark of comprehension can make a teacher’s day. In order to avoid a competitive and extrinsically rewarding classroom, the teacher needs to provide a friendly and comfortable environment that students can feel comfortable enough to voice their opinions and explore new ideas. The Math area of the Montessori classroom encompasses the use of concrete materials for the recognition of numbers and the recognition of quantity as well. Across the U. Creative work in the language classroom can lead to genuine communication and co-operation. Even outside of the classroom this technique is well as a creative learning technique. Play contributes to developing the whole child. Examples of creative themes to use in your classroom. If the ability to be creative is indeed vital for students' future success,. An instructor that presents innovative and challenging prompts will encourage students to work creatively through a problem to a solution. According to state standards, student learners will be able to use ___________ to track academic growth. In their book How to Develop Student Creativity, authors Robert Sternberg and Wendy Williams state, "The most powerful way to develop creativity in your students is to be a role model. Teacher drags these docs to class folders. Nevertheless, creative and critical thinking skills should not be taught separately as an isolated entity, but embedded in the subject matter and "woven into the curriculum" (Mirman and. Fluency, flexibility, originality, and elaboration can be thought of as the cornerstones of creative thinking. Resist Running Like Clockwork. Hope these ideas are worth a try in your classroom. There are many fun and educational ways to integrate math and art for elementary school students. Your only teaching aid is an empty glass. Learners use the language to do the creative task, so they use it as a tool, in its original function. In beginning to redefine the term “creativity” for myself, the examples in Uncommon Genius, by Denise. Teachers can increase their effectiveness by considering the affective domain in planning courses, delivering lectures and activities, and assessing student learning. for example, teachers can utilise members of the school community, have lessons outside of the classroom or have students dress up as certain characters. Creativity is a skill to be learned, practiced, and developed, just like any other. 
Here are a couple of examples of visuals you can make and use for whole class data tracking, and for individual student tracking as well. However, I thought this seemed like a fascinating idea and I was eager to find a way to incorporate this teaching method into my classroom in a way that was appropriate for my students. The Creative Curriculum Framework • How Children Develop and Learn: what children are like in terms of their social/emotional, physical, cognitive, and language development, and the characteristics and experiences that make each child unique. The Teacher's Corner has organized a great collection of bulletin board resources. Benefits of Creative Play in Early Education 20 JUL 2017 Creative play encompasses a range of different activities that just about all children love participating in, from drawing and painting to building with Lego and dressing up. In this article, we explore how to integrate divergent thinking into our everyday classroom practices. Defining Creativity for the Classroom A lesson plan used to develop a class definition of creativity. Creative pupils are curious, question and challenge, and don't necessarily follow the rules. Information and Innovation. Then point to a chair across the room and say 'That chair'. Adobe Spark is a free creativity multi-tool for the classroom. Creativity in the English language classroom Edited by Alan Maley and Nik Peachey The focus of this book is on practical activities which can help to nurture, develop and motivate our students. Although this video may not be appropriate to show to younger children; high school students would get the message and would be able to engage with the political, environmental and legal. Because we think of art, music, dance, and drama as examples of creative ideas, we may have forgotten that creative thought is found in all aspects of a growing child's life and can be learned from daily. Learning to be creative is akin to learning a sport. This implies that the learners must be creative in their production of ideas, and critically support them with logical explanation, details and examples. In fact, some 21st century educational psychologists have modified Bloom's taxonomy to show creating as the most developed intellectual skill. It will help those kids understand the black/white concepts and help break up misconceptions. The layout is there to offer guidance to you, the teacher. Classroom Challenges Overview. Many of the examples EducationDive shares illustrate unique models of how a teacher can invert their class. Technology has become second nature. Instead of focusing on individual assignments or group work, teachers should create partner activities. ESL Games: 176 English Language Games for Children aged 6 to 12. The Creative Curriculum Framework • How Children Develop and Learn: what children are like in terms of their social/emotional, physical, cognitive, and language development, and the characteristics and experiences that make each child unique. In this recollection he recounts the value of having a dog in his own classroom and pays tribute to dogs as real "teachers' pets. Here's the phases as we describe them in our upcoming book. Imagination and creativity in using community resources can help students connect school science and mathematics with applications in the community, as well as helping students better learn basic concepts. Jump to main navigation. 
While the creative child may share ideas which seem really "out there" to others, there is an upside to recognizing the creative thinkers in our lives. In most college courses, instructors teach science primarily through lectures and textbooks that are dominated by facts and algorithmic processing rather than by concepts, principles, and evidence-based ways of thinking. The Creative Educator has lesson plans for Language Arts, Math, Science, and Social Studies to help you get started integrating Wixie into your curriculum. Students need hands-on experience tackling tough problems in creative ways. This second page of the planning form looks more like a lesson plan form you might see in any classroom. The giant nurtures a learning culture: The more the corporation grows, the more it needs to reinvent its culture. " Using dogs as a creative teaching. Because we think of art, music, dance, and drama as examples of creative ideas, we may have forgotten that creative thought is found in all aspects of a growing child's life and can be learned from daily. This technique can be very effective for. Publishers of Classroom Publishing:A Practical Guide for Teachers, a resource guide for educators using publishing as a teaching tool December 01, 2009 - Channel Partner. 4/3/12 Ana-Preschool-example. Then the entire class is transformed with the excitement of a new direction. Creativity and Innovation Critical Thinking and Problem Solving Communication and Collaboration. Barry Ziff "One of the beauties of teaching is that there is no limit to one's growth as a teacher, just as there is no knowing beforehand how much your students can learn. Flexibility: ability to look at something from a different angle or point of view, shifting to an opposing viewpoint, angle, direction, chronology, modality, putting yourself "in someone else's shoes. A language objective specifically outlines the language that ELLs will need in order to meet the content objective. My second presumption is that mathematical knowledge and skill gained as children grow older allows them to think creatively and critically. But in all honesty, most of the time, one student would make the choice to walk away and find a different workspace. ) side by side. 0 tools, resources, and examples of. We might think of competition in the classroom as we do a timed or public performance -- it raises the level of threat in a situation. This post has been updated as of December 2017. Critical Thinking in the Elementary Classroom: Problems and Solutions • 1 Critical thinking has been an important issue in education for many years. Creativity and Innovation Critical Thinking and Problem Solving Communication and Collaboration. Shake things up with this list of engaging creative movement activities for the classroom. Top selling book on ESL games worldwide. Along with the examples and discussion about gamification, use this step-by-step guide to smoothly implement game-based learning in the classroom. We make our slide shows engaging and ready to use right away in your classroom. Increasing Inclusivity in the Classroom. Wordgames: Activities for Creative Thinking and Writing 5. For example, in the United States, 89 percent of educators and 87 percent of parents agreed that teachers can do more to teach creativity. 
Learning Technologies and Creativity in the Classroom Posted on 05/10/2015 31/07/2017 by admin If newer technologies can foster creativity in students then educators must seriously think about how they can incorporate them into their classroom teaching strategies. Resist Running Like Clockwork. Classroom Applications of Constructivism. The Creative Thinking Course (CCT 602) was my first step in beginning to “unlearn” fixed patterns of thinking and defining things and begin to entertain many perspectives on one issue or question. Applying learning theory in the classroom. Nicole is an educator who advocates for the inclusion of students with disabilities in the general education classroom. The Queensland Academy for Creative Industries in Australia was set up as a government initiative to encourage creative and artistic skills, and creativity is. For example, a videotape of students speaking French in the classroom can be used to evoke a critical evaluation of each other's conversational skills at various points during the school year. For example, the Iroquois tribe in The Rough-Face Girl (Martin, 1992) historically lived in longhouses, but the illustrator depicts these Native Americans as living in teepees. Start student notebooks for ideas in the wild. Explain that by using simple classroom instruments, we can create the sound of actions. Orff activities, and every student thrived during the lesson. The Educational Outreach Office at WPAFB is committed to motivating students to explore the world of science and technology, and to increasing. In many cases, valuable creative ideas occur within the constraints of solving a particular problem. Some examples of brainstorming games include: telling children to describe what they would do if they were in a different time/place, asking them to tell a story using only gestures and taking turns building a story - 1 sentence at a time, per student. Use of Technology In The Classroom. 2 CREATIVITY IN THE PRIMARY CLASSROOM help all children realise their potential' (DfE 2011: 1). Creative pupils are curious, question and challenge, and don’t necessarily follow the rules. In fact, all of the skills that students have picked up in the classroom can be seen when you give them a problem and time to solve it. Conduct activities that help children develop creativity. There is a common misconception that the word "creative" has to do mostly with the arts. Classroom Authors, dba is a designated agent of R. Teachers use technology to plan lessons, teach lessons and keep track of student progress. Fluency is all about generating a lot of different ideas. , schools feature the handprint outlines of young children, decorated with eyes, mouths, and some coloring to make strings of hand turkeys in November. What is creativity? You might think that this process sounds more analytical than creative, but experts who study creativity have found that logical thinking is always a part of the creative process in any field, from art to science to business (Tardif & Sternberg, 1988). Tests of Applied Creativity, Logic, and Reasoning A lesson plan for grades 5 or 6 that can be used in a variety of subjects and contexts. Over the course of nineteen. This acting exercise gets students focused on doing rather than thinking. Click on a classroom below and you will be taken to pictures of that particular room! Enjoy exploring - I did! Updated July 23, 2004. Fair Use Guidelines. ESL Games: 176 English Language Games for Children aged 6 to 12. 
Submit your art lesson plan or activity today. Along the way, she dispels some commonly held myths about what creativity is or is not, suggests some concrete prompts that can be used quickly in any classroom, and bemoans the fact that she. The justification for using games in the classroom has been well demonstrated as benefiting students in a variety of ways. She has been helping educators implement project-based learning and creative technologies like clay animation into classroom teaching and learning for the past 15 years. Read on and learn more about what is involved in art integration and examples of how it's possible to integrate art concepts into lesson plans. Computers can be used for many things in the mathematics classroom. By: Jamshed N. Create assignments that celebrate multiculturalism. Trest talked with students about the categories and invited them to give personal examples of each. " Using dogs as a creative teaching. If you think AI and chalkboards don’t go hand-in-hand, we’ll prove you wrong with five examples of classroom-based Artificial Intelligence. Let's say that the brainstorm topic is "weather", the students would state whatever comes to mind, which would most likely include words like rain, hot, cold, temperature, seasons, mild, cloudy, stormy, etc. Fluency, flexibility, originality, and elaboration can be thought of as the cornerstones of creative thinking. Demonstrate how to use simple classroom instruments to musically express a thought or image. If you are looking for a way to reinvigorate your classroom, Arts Integration is a wonderful way to engage your students and spark your own creativity! Additional Sources: Math Activities Home. when brainstorming ways to mop up an oil spill, one idea might be to use hair clippings. It can help students to develop divergent thinking skills, inventive creativity, and cognitive thinking skills, and it can stimulate the development of oral and written communication skills. example, is very easy with an overhead projector: the teacher simply needs to place a piece of paper over what he/she wants to hide. Barry Ziff "One of the beauties of teaching is that there is no limit to one's growth as a teacher, just as there is no knowing beforehand how much your students can learn. For media such as offline materials, video, audio, and images, consider: 1. An instructor that presents innovative and challenging prompts will encourage students to work creatively through a problem to a solution.
Students learn geometry at almost every grade level. Elementary students learn the basics of geometry: shapes and counting the number of sides. Middle school students begin learning how to find the area of circles and squares and the volume of simple solids. High school students jump into geometry with Euclidean/Plane Geometry and Symmetry & Tessellations. College students can take their knowledge from high school geometry to the next level and learn about Spherical Geometry, Hyperbolic Geometry, as well as Riemannian Geometry and Fourth Dimensional Geometry. No matter the grade level your student is in, we have expert tutors that will help students understand and conceptualize what they need to know in their geometry class.

What is it? Euclidean/Plane Geometry is the study of flat space. Between every pair of points there is a unique line segment which is the shortest curve between those two points. These line segments can be extended to lines. Lines are infinitely long in both directions, and for every pair of points on a line, the segment of the line between them is the shortest curve that can be drawn between them. All of these ideas can be described by drawing on a flat piece of paper. From the laws of Euclidean Geometry, we get the famous Pythagorean Theorem.

Non-Euclidean Geometry is any geometry that is different from Euclidean geometry. It is a consistent system of definitions, assumptions, and proofs that describe such objects as points, lines and planes. The two most common non-Euclidean geometries are spherical geometry and hyperbolic geometry. The essential difference between Euclidean geometry and these two non-Euclidean geometries is the nature of parallel lines: in Euclidean geometry, given a point and a line, there is exactly one line through the point that is in the same plane as the given line and never intersects it. In spherical geometry there are no such lines. In hyperbolic geometry there are at least two distinct lines that pass through the point and are parallel to (in the same plane as and do not intersect) the given line.

Riemannian Geometry is the study of curved surfaces and higher dimensional spaces. For example, you might have a cylinder or a sphere, and your goal is to find the shortest curve between any pair of points on such a curved surface, also known as a minimal geodesic. (A short illustrative sketch of one such geodesic computation appears at the end of this overview.) Or you may look at the universe as a three-dimensional space and attempt to find the distance between or around several planets.

Students can succeed in any geometry class. From elementary school to college, math can be a difficult subject for many students. We make it easier and more understandable for them by providing expert tutors in every mathematics class, including geometry. We will be happy to provide you with all the information you need to choose the tutor that is best suited for the geometry class you or your student is taking. You will review their educational background and experience to know that the geometry tutors we offer are experts in their field.

College coursework is challenging. Don’t struggle alone or waste your time reviewing with classmates who don’t know any more than you do. Our experienced tutors understand college course content and precisely how college professors evaluate student progress. Our tutors will help you focus on weak areas, channel your studying energy, and help you prioritize how to spend your studying time.

Our Tutoring Service: Every Advanced Learners tutor is a highly qualified, college-degreed, experienced, and fully approved educator.
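As promised in the overview above, here is a small illustrative sketch (not part of any curriculum; the function names and the Earth-radius value are assumptions) comparing the straight-line Pythagorean distance in the flat plane with the great-circle distance, the minimal geodesic between two points on a sphere, computed with the haversine formula.

```python
import math

def euclidean_distance(p, q):
    """Pythagorean theorem in the flat plane: d^2 = dx^2 + dy^2."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Length of the minimal geodesic between two points on a sphere
    (haversine formula); radius defaults to an approximate Earth radius in km."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

# A 3-4-5 right triangle in the plane ...
print(euclidean_distance((0, 0), (3, 4)))            # 5.0
# ... and the geodesic from London to New York on a spherical Earth (about 5,570 km).
print(round(great_circle_distance(51.5, -0.13, 40.7, -74.0)))
```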
You can feel secure knowing that each tutor has been thoroughly pre-screened and approved. We have stringent requirements for all of our tutors. We require a national background check, a personal interview, and both personal and professional references of each applicant. We select only the very best tutors for our clients to choose from. Your personalized list of matched tutors will include professionals specifically suited to your child’s current academic needs. The backgrounds of our tutors are varied and their experience diverse, but the common factor is the passion for learning and education that they all share. As our client, you have the opportunity to review and speak with as many tutors as you wish until you find the right match for your student.
A programming language or computer language is a standardized communication technique for expressing instructions to a computer. It is a set of syntactic and semantic rules used to define computer programs. A language enables a programmer to precisely specify what data a computer will act upon, how these data will be stored/transmitted, and precisely what actions to take under various circumstances. ==Features of a programming language== Each programming language can be thought of as a set of formal specifications concerning syntax, vocabulary, and meaning. These specifications usually include: * Data and Data Structures * Instruction and Control Flow * Reference Mechanisms and Re-use * Design Philosophy Most languages that are widely used, or have been used for a considerable period of time, have standardization bodies that meet regularly to create and publish formal definitions of the language, and discuss extending or supplementing the already extant definitions. Most languages also provide ways to assemble complex data structures from built-in types and to associate names with these new combined types (using arrays, lists, stacks, files). Object oriented languages allow the programmer to define data-types called "Objects" which have their own intrinsic functions and variables (called methods and attributes respectively). A program containing objects allows the objects to operate as independent but interacting sub-programs: this interaction can be designed at coding time to model or simulate real-life interacting objects. This is a very useful, and intuitive, functionality. Languages such as Python and Ruby have developed as OO (Object oriented) languages. They are comparatively easy to learn and to use, and are gaining popularity in professional programming circles, as well as being accessible to non-professionals. It is commonly thought that object-orientation makes languages more intuitive, increasing the public availability and power of customised computer applications. ===Instruction and control flow=== Once data has been specified, the machine must be instructed how to perform operations on the data. Elementary statements may be specified using keywords or may be indicated using some well-defined grammatical structure. Each language takes units of these well-behaved statements and combines them using some ordering system. Depending on the language, differing methods of grouping these elementary statements exist. This allows one to write programs that are able to cover a variety of input, instead of being limited to a small number of cases. Furthermore, beyond the data manipulation instructions, other typical instructions in a language are those used for control flow (branches, definitions by cases, loops, backtracking, functional composition). For the above-mentioned purposes, each language has been developed using a special design or philosophy. Some aspect or another is particularly stressed by the way the language uses data structures, or by which its special notation encourages certain ways of solving problems or expressing their structure. Since programming languages are artificial languages, they require a high degree of discipline to accurately specify which operations are desired. Programming languages are not error tolerant; however, the burden of recognising and using the special vocabulary is reduced by help messages generated by the programming language implementation. 
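As a concrete illustration of the object-oriented features described above, here is a minimal sketch in Python (the class, attribute and method names are invented for illustration): an object bundles its own data (attributes) with the functions that operate on that data (methods).

```python
# A minimal, illustrative example of object orientation: the class defines a
# data type whose instances carry their own attributes and methods.

class BankAccount:
    """Each account object keeps its own state and exposes operations on it."""

    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner          # attribute
        self.balance = balance      # attribute

    def deposit(self, amount: float) -> None:   # method
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

    def __repr__(self) -> str:
        return f"BankAccount(owner={self.owner!r}, balance={self.balance:.2f})"

account = BankAccount("Ada")
account.deposit(100.0)
print(account)   # BankAccount(owner='Ada', balance=100.00)
```

Each instance behaves as an independent but interacting sub-program: several accounts can coexist, each holding its own balance and responding to the same methods.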
There are a few languages which offer a high degree of freedom in allowing self-modification, in which a program rewrites parts of itself to handle new cases. Typically, only machine language, Prolog, PostScript, and the members of the Lisp family (Common Lisp, Scheme) provide this capability. Some languages, such as MUMPS, also allow code to be built and executed while the program runs; recompiling parts of a program on the fly is called dynamic recompilation, and emulators and other virtual machines exploit this technique for greater performance.

There are a variety of ways to classify programming languages. The distinctions are not clear-cut; a particular language standard may be implemented in multiple classifications. For example, a language may have both compiled and interpreted implementations. In addition, most compiled languages contain some run-time interpreted features. The most notable example is the familiar I/O format string, which is written in a specialized little language and which is used to describe how to convert program data to or from an external representation. This string is typically interpreted at run time by a specialized format-language interpreter program included in the run-time support libraries. Many programmers have found the flexibility of this arrangement to be very valuable.
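The format-string idea above can be made concrete with a small sketch (illustrative only): the string handed to Python's struct module is itself a tiny specialized language, interpreted at run time, describing how to convert program data to and from an external byte representation.

```python
import struct

# The first argument is a little "format language" interpreted at run time:
# "<" little-endian, "i" a 4-byte signed integer, "f" a 4-byte float, "8s" an 8-byte string.
record_format = "<if8s"

packed = struct.pack(record_format, 42, 3.14, b"example ")
print(packed)                            # raw bytes suitable for a file or a network socket

number, value, label = struct.unpack(record_format, packed)
print(number, round(value, 2), label)    # 42 3.14 b'example '
```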
==History of programming languages==
The development of programming languages, unsurprisingly, follows closely the development of the physical and electronic processes used in today's computers. Charles Babbage is often credited with designing the first computer-like machines, which had several programs written for them (in the equivalent of assembly language) by Ada Lovelace.

In the 1940s the first recognisably modern, electrically powered computers were created. Some military calculation needs were a driving force in early computer development, such as encryption, decryption, trajectory calculation and the massive number crunching needed in the development of atomic bombs. At that time, computers were extremely large, slow and expensive; advances in electronic technology in the post-war years led to the construction of more practical electronic computers. At that time only Konrad Zuse imagined the use of a programming language (developed eventually as [[Plankalkül]]) like those of today for solving problems.

Subsequent breakthroughs in electronic technology (transistors, integrated circuits, and chips) drove the development of increasingly reliable and more usable computers. This was paralleled by the development of a variety of standardised computer languages to run on them. The improved availability and ease of use of computers led to a much wider circle of people who can deal with computers. The subsequent explosive development has resulted in the Internet, the ubiquity of personal computers, and increased use of computer programming, through more accessible languages such as Python, Visual Basic, etc.

==Classifications of programming languages==
* Array programming language
* Concatenative programming language
* Concurrent programming language
* Declarative programming language
* Domain-specific programming language
* Dynamic programming language
* Educational programming language
* Esoteric programming language
* Functional programming language
* General-purpose programming language
* Logic programming language
* Object-oriented programming language
* Procedural programming language
* Scripting programming language

The rigorous definition of the meaning of programming languages is the subject of Formal semantics.

*List of programming languages
**Alphabetical list of programming languages
**Categorical list of programming languages
**Chronological list of programming languages
**Generational list of programming languages
**List of esoteric programming languages
*Hello world program, examples of a simple program in many different programming languages
*Software engineering and List of software engineering topics

== External links ==
*Syntax Patterns for Various Languages
*Wikisource Source Code Examples
*99 Bottles of Beer - One application written in 621 different programming languages.
*Open Directory - Computer Programming Languages
Grade Level: 6 (5-7)
Time Required: 15 minutes
Lesson Dependency: None
Subject Areas: Physical Science

Summary: Students learn about the underlying engineering principles in the inner workings of a simple household object – the faucet. Students use the basic concepts of simple machines, force and fluid flow to describe the path of water through a simple faucet. Lastly, they translate this knowledge into thinking about how different designs of faucets also use these same concepts.

Engineers use the principles of mechanical systems and fluid systems to design many everyday objects, such as the faucet. Engineers use mathematical equations to figure out the associated pressure, force and flow of a fluid in such objects. Another good example of mechanical and fluid systems working together is a dam and the gates of the dam that hold the water back. As the gates open, the water starts to drain out from the associated reservoir. To prevent disaster, engineers must account for the force on each gate, which depends on both the water pressure and the area of the gate. Engineers must know how everything relates in order for all systems to work together.

After this lesson, students should be able to:
- List two engineering concepts used in designing a faucet: simple machines and fluid flow.
- Describe at least one simple machine used in a basic faucet.
- Describe the flow of water in a faucet.

Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards. All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org). In the ASN, standards are hierarchically structured: first by source, e.g., by state; within source by type, e.g., science or mathematics; within type by subtype, then by grade, etc.
- Fluently divide multi-digit numbers using the standard algorithm.
- Fluently add, subtract, multiply, and divide multi-digit decimals using the standard algorithm for each operation.
- Write, read, and evaluate expressions in which letters stand for numbers.

Worksheets and Attachments: Visit [ ] to print or download.

More Curriculum Like This: Students learn about the fundamental concepts important to fluid power, which includes both pneumatic (gas) and hydraulic (liquid) systems. Students explore building a pyramid, learning about the simple machine called an inclined plane. They also learn about another simple machine, the screw, and how it is used as a lifting or fastening device.
Learn the basics of the analysis of forces engineers perform at the truss joints to calculate the strength of a truss bridge known as the “method of joints.” Find the tensions and compressions to solve systems of linear equations where the size depends on the number of elements and nodes in the trus... Students should have some basic knowledge of simple machines and be able to understand the concept of pressure. There is so much cool engineering in our houses. Today, we are going to learn about a common object that everyone comes in contact with often. We are going to learn about faucets. Think about all the different kinds of faucets you have seen. There are faucets that you screw a knob to turn on; there are faucets that you pull a lever to turn on; there are faucets that you push a button to turn on and there are some faucets that you do not even have to touch to turn on. At some point, an engineer had to design all of these faucets. Several important engineering concepts are used in each and every faucet. The first one is simple machines. Most faucets use a common simple machine such as a screw or a lever in order to produce the force necessary to stop water from continuously flowing. Another engineering concept that a faucet uses is fluid flow. Faucets must hold back water flow when turned off, and regulate water flow when turned on. For our lesson today, we will focus on a common faucet with just one knob, similar to an outdoor faucet to which you might attach a garden hose. We will start with the water and work our way from the pipes in your house, through the faucet, and out into your sink. Water in the pipes in your house is held at a pressure higher then the pressure of the air around you. This pressure difference is what causes the water to come up from the ground-level pipes coming into your house, and out through the faucet. If, for some reason, the pressure of the air around you rose to that of the water pressure in the pipe, the water would no longer flow out of the pipe. So, due to the pressure in the pipe, the water is ready for motion. All a faucet has to do is hold the water back until we want to use it. If you were a water molecule flowing through this faucet, the first thing you would come in contact with is a small circular opening: about a quarter of an inch in diameter. On the other side of this opening would be a small rubber stopper. If the faucet were off, that stopper would be pressed up against the opening from the other side. This is what holds the water back. The rubber stopper is held in place by a simple machine, the screw. Who knows to which simple machine the screw is related? The wedge. A screw is a wedge wrapped around a cylinder. Now let's add some fluid flow. As the faucet is turned on, the screw makes the rubber stopper back away from the opening, creating a small crack through which you — the water molecule — and a few thousand of your friends can pass. The pressure inside the pipe drops as you go through the opening. Remember, without this pressure drop, you would not be able to go through the opening. When the faucet is turned off, the screw works in the other direction and closes the opening, stopping the water flow — thus prohibiting you from passing through the opening. That is basically how the water gets from the pipes in the ground, through the faucet in your house, and out into your sink. Remember, engineers design simple faucets with two important engineering concepts in mind – simple machines and fluid (water) flow. 
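Here is a small, hedged sketch of the two ideas working together (all numbers, units and function names are invented for illustration, not taken from the lesson): the screw sets how far the stopper backs away from the opening, the opening's resistance falls as it opens, and the flow follows the pressure-difference-divided-by-resistance relationship developed in the background section below.

```python
# Toy model of the faucet described above. All values are illustrative.

def opening_resistance(turns_open: float, max_turns: float = 4.0) -> float:
    """Resistance to flow: effectively infinite when closed, falling as the screw opens."""
    if turns_open <= 0:
        return float("inf")
    fraction_open = min(turns_open / max_turns, 1.0)
    return 1.0 / fraction_open          # arbitrary units

def flow_rate(pressure_difference: float, resistance: float) -> float:
    """Rate of flow = (inside pressure - outside pressure) / resistance."""
    return 0.0 if resistance == float("inf") else pressure_difference / resistance

for turns in (0, 1, 2, 4):
    r = opening_resistance(turns)
    print(f"{turns} turns open -> flow {flow_rate(30.0, r):.1f} (arbitrary units)")
```

Notice that if the inside and outside pressures were equal (a pressure difference of zero), the flow would be zero no matter how far the faucet was opened, just as the introduction describes.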
Following the lesson, conduct the associated activity Too Much Pressure! Modeling Force-Pressure-Area Relationships to help illustrate the relationship between force, pressure and area, by creating a simple system that holds back water from pipes of varying diameters.

Lesson Background and Concepts for Teachers

The Role of Pressure
Water in the pipes in our house is held at a pressure higher than the pressure of the air around us. This pressure difference is what causes water to come up through ground-level pipes, into your house, and eventually spill out of the faucet. Usually, that pressure is created due to the hydrostatic pressure gradient. The best way to think of hydrostatic pressure is to think about swimming pools. When you dive deep under water in a swimming pool, your ears usually hurt — this is because of the increase in pressure. The deeper you go, the more pressure there is and the more your ears hurt. This holds true so long as the water is not moving much — which is where the "static" part comes into play. In fact, the relationship between pressure and depth is accurately modeled by the following equation:

P = ρgh

where P is the pressure, ρ is the density of the liquid (water), g is gravity, and h is the height of water above the point in question.

Sample Calculation: Water has a density of 999 kilograms per cubic meter; gravity on Earth is about 9.81 meters per second squared. If a cylinder of water is filled five meters high, the pressure at the bottom can be calculated by: 999 kg/m3 × 9.81 m/s2 × 5 m = 49.0 kPa. So the pressure is 49 kilopascals; since one kilopascal equals 1,000 newtons per square meter, this is the same as 49,000 newtons per square meter. Therefore, it is important to know that pressure can be measured in kilopascals or in newtons per square meter (pascals).

One can find the pressure of a fluid at any given point, knowing no more than the density of the fluid, gravity, and the height up to the top of the water. So, because the density of water and gravity are essentially constant, pressure really just depends on height. Therefore, as illustrated in Figure 1, points A and B are at the same height, h, and so the pressure starts out equal at both points. However, the pressure drops as the water goes through the narrow opening (in the vessel on the right of Figure 1). Without this pressure drop, the water would not go through the opening. And because the hole is rather small, it offers resistance to the flow. The rate that water goes through the faucet, then, is equal to the difference in pressure divided by the resistance of the opening, or:

P_inside − P_outside = (Rate of Flow) × (Resistance)

You can see, in the above equation, that if the pressure outside equals the pressure inside, then the left side of the equation must equal zero. Thus, either the rate of flow or the resistance must be equal to zero. We can conclude that the rate of flow must equal zero, since all pipes have at least some small amount of resistance. Additionally, if the pressure difference becomes very large, and the resistance is relatively small, the rate of flow will become rather large. Finally, if the pressure difference stays the same, but the resistance drops, the rate of flow will increase. This resistance drop is exactly what happens when the faucet is opened, and the rubber stopper is moved away from the opening.
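A brief sketch (illustrative only; the constant and function names are not from the lesson) that reproduces the hydrostatic-pressure sample calculation above:

```python
WATER_DENSITY = 999.0   # kg/m^3, as in the sample calculation above
GRAVITY = 9.81          # m/s^2

def hydrostatic_pressure(height_m: float) -> float:
    """P = rho * g * h, returned in pascals (newtons per square meter)."""
    return WATER_DENSITY * GRAVITY * height_m

pressure_pa = hydrostatic_pressure(5.0)
print(f"{pressure_pa:.0f} Pa = {pressure_pa / 1000:.1f} kPa")   # 49001 Pa = 49.0 kPa
```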
How Much Force?

So how much force does it take to hold back all that water? Luckily, it is easy enough to calculate: the force needed is equal to the pressure of the water multiplied by the area of the opening. This is expressed by the common equation F = P × A, where F is the force of the water on the stopper, P is the pressure of the water, and A is the area of the opening. Because the stopper is not moving, the force of the water on the stopper must equal the force of the stopper on the water.

Sample Calculation: Let's say that the water in the pipe (shown in Figure 2) is at about 30 psi (pounds per square inch; this will become important later). Secondly, the area of the opening is 0.049 square inches (obtained by squaring the 0.125-inch radius of the quarter-inch opening and multiplying by pi). So the force of the water on the rubber stopper is equal to 30 psi multiplied by 0.049 square inches, or: (30 psi) × (0.049 in^2) = 1.47 lb. We know that the answer should be some unit of force. Pounds per square inch multiplied by square inches is indeed equal to pounds, so the units work out.

How is a Faucet a Machine?

Now let's move on to the mechanical system in the common faucet. The rubber stopper is moved into place by a screw (see Figure 2). This screw serves to reduce the force needed to hold back the water. A screw converts a torque (a twisting force) into a linear force. It also provides a mechanical advantage that converts a small input force into a potentially large output force. To illustrate the concept of mechanical advantage, we can look at an example of a simple wedge lifting a box in Figure 3.

Sample Calculation: Let's say that the box in Figure 3 weighs 100 pounds, and that we need 50 pounds of force to lift one corner of the box: about 50 pounds of the weight rests on the wedge, and the other 50 pounds rests on the floor at the opposite corner of the box. This approximation works just fine while the box is not tilted much. If we use a wedge such as the one in Figure 3, we can use even less force to lift the box. Therefore, if our wedge is five inches long and one inch tall, we can approximate the amount of force needed. The first step is to calculate the slope of the hypotenuse of the triangle that forms the wedge. Slope is the height divided by the length: Height ÷ Length = Slope, so 1 in ÷ 5 in = 1/5. So the slope of the wedge is one fifth. This is a unitless quantity because inches divided by inches has no units. Next, the force needed can be calculated using the following equation: Force to lift the box × Slope = Force to push the wedge, so 50 lb × 1/5 = 10 lb. Again, the same operation performed on the numbers is also performed on the units: pounds multiplied by a unitless quantity is again pounds. The equation yields a necessary applied force of ten pounds. So, it would only take ten pounds applied to the wedge to lift the box. This estimation ignores friction between the wedge and the box, as well as between the wedge and the floor.

However, this reduction of force comes at a cost. In order for the corner of the box to be lifted one inch, the wedge must slide five inches horizontally. More specifically, the distance that you need to push the wedge is equal to the distance that you want to lift the box divided by the slope of the wedge: 1 in ÷ 1/5 = 5 in. In the case above, work is the force necessary to lift the box multiplied by the distance over which that force is applied. The important concept here is that regardless of how the box is lifted, it will take an equal amount of work to lift the box one inch. In the example above, we pushed with a force of ten pounds, for a distance of five inches. The work needed is then equal to 50 inch-pounds.
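If students want to verify the two sample calculations, here is a minimal Python sketch under the same assumptions (a quarter-inch opening at 30 psi, and a 1-inch by 5-inch wedge under a 50-pound load); the function names are illustrative only.

```python
# A minimal sketch of the two sample calculations above: the force of the water
# on the stopper (F = P x A) and the force needed to push the wedge.
# The names and numbers simply mirror the examples in the text.
import math

def stopper_force(pressure_psi: float, diameter_in: float) -> float:
    """Force (lb) of the water on a circular stopper of the given diameter."""
    area_in2 = math.pi * (diameter_in / 2) ** 2   # area of the opening, square inches
    return pressure_psi * area_in2

def wedge_push_force(load_lb: float, height_in: float, length_in: float) -> float:
    """Force (lb) needed to push a wedge of the given slope under a load."""
    slope = height_in / length_in                 # unitless
    return load_lb * slope

print(round(stopper_force(30.0, 0.25), 2))    # about 1.47 lb on the stopper
print(wedge_push_force(50.0, 1.0, 5.0))       # 10.0 lb to push the wedge
# Work check: 10 lb pushed 5 inches equals 50 lb lifted 1 inch (50 inch-pounds).
print(10.0 * 5.0, 50.0 * 1.0)                 # 50.0 50.0
```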
If we chose not to use the wedge, we would need to apply fifty pounds instead of ten. However, we would only need to apply that force for one inch.

- Too Much Pressure! Modeling Force-Pressure-Area Relationships - Students experiment with the relationship between force, pressure and area, by creating a simple system that holds back water from pipes of varying diameters.

Today, we talked about how a faucet works. We learned that engineers design different types of faucets. There are many engineering concepts that go into the design of a faucet. Who can name one? The first one is simple machines. The faucet we talked about today used the simple machine, a screw, to help stop water from always flowing. Another engineering concept that a faucet is designed around is fluid flow. Engineers need to think about how water flows in order to regulate turning the water on and off. How does the water flow through our faucet? Well, the first thing the water comes in contact with is a small circular opening. On the other side of this opening is a small rubber stopper. When the faucet is off, the stopper is pressed up against the opening from the other side. This is what holds the water back. The rubber stopper is held in place by a screw. When we turn the screw, it releases the rubber stopper and water flows. Can you think of other ways to stop water from flowing fast or slow through a pipe? That is what engineers work on when designing new faucets.

Force: Something that acts from the outside to push or pull an object.
Pressure: The quantity of a force distributed over an area; measured as force per unit area, such as psi.
psi: Pounds per square inch; a unit of pressure commonly used in the U.S. system.
Resistance: The opposition of a body or object to something passing through it, such as a pipe (object) with water passing through it.
Simple Machine: A category of devices including the wedge, lever and screw that have the ability to provide mechanical advantage and transmit a force.

Brainstorming: At the very start of the lesson, give pairs or groups of students a few minutes to come up with a solution to a simple challenge. Remind students that in brainstorming, no idea or suggestion is "silly." All ideas should be respectfully heard. Encourage wild ideas and discourage criticism of ideas. Ask the students:
- How could they lift a 100 lb crate a few inches off the ground?
Do students come up with ways that incorporate simple machines, such as a wedge or a screw? If so, point out which of the student ideas use the concepts and principles of simple machines.

Group Discussion: Display an image of the inner workings of a faucet (or use the attached overhead of Figure 2). Point to the screw mechanism of the faucet and ask the students what kind of mechanism it is. (Answer: a simple machine) Point to the rubber stopper and ask the students what function this part performs. (Answer: It holds back water flow.) Ask the students what would happen if the rubber stopper is moved to various heights. (Answer: The water flow is faster or slower.) Ask the students what must be done to move the stopper to various locations. (Answer: You have to turn the screw.)

Lesson Summary Assessment

Engineer it Better!: Engineers have used a screw to control water in a simple faucet. Have the students think about other simple machines that they may be able to use to hold back fluid flow for a new design of a faucet. Have them draw a picture of their new faucet design.
They should label the simple machines they used as well as other parts to help explain their design.

Lesson Extension Activities

The lesson can then be scaled in explanation depending on the knowledge the students already have about simple machines. If students know what a simple machine is, then the teacher can go into more depth about the math used to calculate force reduction and work done. Or perhaps just a lesson on what simple machines are, what they can do, and what they cannot do is appropriate. Ask the students, "What happens when you dive down deep under water?" The mathematical answer is: the pressure increases linearly according to the equation P = ρgh, where ρ is equal to the density of the fluid, g is gravity, and h is the height of water above the person. Of course, you will not get this exact response, but most girls and boys are familiar with how pressure increases as a swimmer goes deeper. This leads to the question, "What is water pressure?" Finally, "How much do you have to push to hold water pressure back?"

Math Extension 1: (upper-level students) Using the equation for the pressure in a water column, have students calculate how much pressure there is at the deepest point in the ocean, the Mariana Trench. First, find the depth of the trench and use the hydrostatic pressure gradient equation to calculate the pressure. Research the answer on http://www.marianatrench.com/

Math Extension 2: (upper-level students) If students need more of a math challenge, they can calculate the necessary force and displacement needed to lift a 2-pound book one inch using different wedges. Then, the work for each wedge can be calculated. (Answer: The work should be the same.)

U.S. Environmental Protection Agency, Ground Water and Drinking Water, Drinking Water Academy, Satellite Training, February 21, 2006. water.epa.gov/drink/index.cfm Accessed January 2, 2007.

Copyright © 2006 by Regents of the University of Colorado.
Contributors: Chris Sheridan, Tod Sullivan, Jackie Sullivan, Malinda Schaefer Zarske, Janet Yowell
Supporting Program: Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder

The contents of this digital library curriculum were developed under a grant from the Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education and National Science Foundation GK-12 grant no. 0338326. However, these contents do not necessarily represent the policies of the Department of Education or National Science Foundation, and you should not assume endorsement by the federal government.

Last modified: January 19, 2021
This work is licensed under a Creative Commons License.

Simple Guitar Physics

Construction of the Guitar

In order to achieve the specific sounds required for music, guitars have various components that enable them to produce these specialized sounds. The narrow end of the guitar is called the headstock, and is attached to the neck of the guitar. On the headstock there are machine heads, also known as tuning keys, around which the strings are wound. At the point where the headstock meets the neck of the guitar, there is a small piece of material (plastic, bone, etc.) called the nut, in which small grooves are carved in order to guide the strings up to the machine heads. The neck of the guitar runs all the way down the guitar until it meets the body of the guitar at the upper bout, and it contains the fret board of the guitar, with the frets embedded in it at points along the length of the neck that divide it mathematically. The body of the guitar is a resonating chamber which projects the vibrations of the body through the hole cut on the top of it, called the sound hole. The strings of the guitar run from the machine heads, over the nut, down the neck, across the body and the sound hole, and are anchored at a piece of hardware attached to the body of the guitar, called the bridge. It is these components of the guitar that allow it to produce the specific sounds required to create music.

In order to understand music and how guitars produce it, it is first required to understand the physics of sound. Sound is created when a wave motion is set up in the air by the vibration of material bodies. What this means is that when material bodies vibrate, they create vibrational energy that travels in pressure waves through a medium. All instruments create vibrations in order to produce the sound waves that make music, which is essentially organized sound. Guitars are a type of musical instrument called a string instrument, meaning that they create their sound through the vibrations of a string. On the guitar, the string that vibrates to produce the sound is fixed at both ends, is elastic, and therefore can vibrate. When the guitar string is either strummed or plucked, the string of the guitar begins to vibrate, and since these vibrations are waves, they begin to travel in both directions along the string and are reflected back at each fixed end. These waves will not cancel each other out as they reflect back upon themselves, but instead form a standing wave, which is a situation where crests and troughs remain at fixed positions in the medium while the wave as a whole increases and decreases together.

The guitar strings act in such a way that they satisfy the relationship between wavelength and frequency, represented by the equation v = fλ. This equation can be rearranged to f = v/λ, meaning that the frequency of a wave (f) is dependent on both the speed of the wave (v) and the wavelength (λ). As well, the speed of the wave traveling on the guitar string depends on the tension of the string (T) and the linear mass density of the string (µ); in fact, “the root frequency for a string is proportional to the square root of the tension, inversely proportional to its length, and inversely proportional to the square root of its linear mass density”.
What this means is that waves will travel faster when the tension of the string is higher, which in turn means that the frequency will be higher as the tension is increased (f = v/λ, and v is increasing). This also means that waves will travel slower on a more massive string, since if the mass is increased, v will decrease. This relationship between the speed, tension, and mass density can be arranged into a new equation: v = √(T/µ).

When a standing wave vibrates, a combination of reflection and interference occurs in such a way that the reflected waves interfere constructively with the incident waves, because the waves have changed phase when they reflected from one of the fixed ends. When this is happening, the medium appears to vibrate in segments, and it is not apparent that the whole wave is traveling. Since a guitar string has two fixed ends, it will act like a standing wave, and therefore, when agitated by being plucked or strummed, the longest wavelength that the string can produce is twice the length of the string.

Since all the strings are the same length, all six strings on the guitar use the same range of wavelengths. However, in order to produce the different sound waves required to create music, different amounts of air must be displaced at different frequencies, meaning the guitar strings must be able to vibrate at different frequencies. In order to create different frequencies on the guitar, one of the factors of the equation f = v/λ must be changed, so either the speed or the wavelength must be changed. Since the strings on the guitar are attached to the nut and bridge, and when played open have a fixed wavelength, the only other factor that can be changed to produce a different frequency is the speed of the wave, v. Since the speed of the wave is affected by the tension on the string and the mass density (v = √(T/µ)), either the tension of the string or the mass density must be changed in order to create a different frequency.

However, if the frequency of the vibration of the guitar string were only changed by varying the tension, then the high strings (needing a higher frequency) would have to be wound tight since the tension required would be fairly high, while the lower strings (needing a lower frequency) would require much less tension, and subsequently be very loose. Since it would be very difficult to play a guitar where the high strings are tight and the low strings loose, guitars are constructed in such a way that the tension of the strings is roughly equal. Since the only other factor that can be changed while playing all the strings open is the mass density, guitars are strung so that the mass density, rather than the tension, varies from string to string. As a result, guitar strings are made so that the higher the frequency required from the open string, the less mass density the string will have, since the less mass a string has, the less tension is needed to achieve the same frequency. Subsequently, the lower the required frequency of a string, the higher its mass density, since the more mass a string contains, the lower the frequency it produces at the same tension. Since in standard tuning the strings on the guitar are a perfect fourth apart in pitch (frequency), except between G and B (a major third), the amount by which the mass density must be increased so that the tension remains constant can be calculated.
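To make these relationships concrete, here is a minimal Python sketch of f = v/λ with v = √(T/µ) for a string fixed at both ends. The scale length, tension, and linear density values are illustrative placeholders, not measurements of real guitar strings.

```python
# A minimal sketch of the string-frequency relationships discussed above.
# All numeric values below are illustrative, not measured guitar data.
import math

def wave_speed(tension_n: float, mu_kg_per_m: float) -> float:
    """Speed of a transverse wave on a string: v = sqrt(T / mu)."""
    return math.sqrt(tension_n / mu_kg_per_m)

def fundamental_frequency(length_m: float, tension_n: float, mu_kg_per_m: float) -> float:
    """f1 = v / lambda, with lambda = 2L for a string fixed at both ends."""
    return wave_speed(tension_n, mu_kg_per_m) / (2 * length_m)

L, T, mu = 0.65, 70.0, 0.005          # scale length (m), tension (N), linear density (kg/m)
f1 = fundamental_frequency(L, T, mu)
print(f"fundamental: {f1:.1f} Hz")

# To raise the open-string pitch by a perfect fourth (frequency ratio 4/3) while
# keeping the same tension and length, the linear mass density must shrink by a
# factor of (3/4)^2, since f is proportional to 1/sqrt(mu).
mu_fourth_up = mu * (3 / 4) ** 2
print(f"density ratio: {mu_fourth_up / mu:.4f}")                     # 0.5625
print(f"new fundamental: {fundamental_frequency(L, T, mu_fourth_up):.1f} Hz")
```

The last line prints a frequency exactly 4/3 times the first, which is the mass-density change the paragraph above says can be calculated.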
Frets and Intonation

However, music is complex, and many frequencies are required in order to create the correct sound waves that will produce the music. This poses a problem: although the six strings of the guitar are set up in a playing-friendly manner, at this point each individual string can only produce one fundamental frequency, since no part of the equation f = v/λ is being changed when an open string is played. Six open-string frequencies are not nearly enough variation to produce complex music. Therefore, one part of the equation f = v/λ must be changed while playing a guitar in order to produce a different frequency. However, the speed of the wave cannot be changed, since its two factors (v = √(T/µ)), the tension of the string and the mass density, are not changed significantly enough while playing to affect the speed of the wave enough to change the frequency. As a result, on the neck of the guitar there are little strips of metal called frets, whose function is to decrease the length of the string, which will cause a higher frequency. When a string is pressed down near a fret, the resonant length of the string is decreased, as it no longer stretches from the bridge to the nut but from the bridge to the fret where the string is being held down. This decreases the wavelength (λ) by decreasing the length of the medium (string), which consequently increases the frequency of the string. Thus, on every string, the guitar player has the option of decreasing the length of the string in about 24 different ways, which will produce 24 different frequencies on each string. Since a guitar has six strings, and each string can have up to 24 frets, the number of notes available from which to choose is greatly increased. As multiple strings can be played together, the guitarist now has many frequencies from which to choose in order to create music on the instrument.

Frets on the fingerboard serve to fix the positions of notes and scales, which gives them equal temperament. Consequently, the ratio of the widths of two consecutive frets is the twelfth root of two, whose numeric value is about 1.059. The twelfth fret divides the string in two exact halves and the 24th fret (if present) divides the string in half yet again. Every twelve frets represents one octave. The position of the bridge saddles, upon which the strings rest, determines the distance to the nut (at the top of the fingerboard). This distance defines the positions of the harmonic nodes for the strings over the fretboard, and is the basis of intonation. Intonation refers to the property that the actual frequency of each string at each fret matches what those frequencies should be according to music theory. Because of the physical limitations of fretted instruments, intonation is at best approximate; thus, the guitar's intonation is said to be tempered. The twelfth, or octave, fret resides directly under the first harmonic node (half-length of the string), and in the tempered fretboard, the ratio of distances between consecutive frets is approximately 1.06, as derived above.
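A minimal sketch of this equal-tempered fret layout follows; the 0.65 m scale length is an illustrative placeholder rather than a specification for any particular guitar.

```python
# A minimal sketch of equal-tempered fret spacing: each fret's distance from the
# bridge is the previous fret's distance divided by the twelfth root of two.
# The 0.65 m scale length is illustrative only.

SCALE_LENGTH_M = 0.65
SEMITONE = 2 ** (1 / 12)          # about 1.0595

def distance_from_bridge(fret: int) -> float:
    """Distance from the bridge to the given fret (fret 0 is the nut)."""
    return SCALE_LENGTH_M / (SEMITONE ** fret)

for fret in (0, 1, 2, 12, 24):
    print(f"fret {fret:>2}: {distance_from_bridge(fret) * 1000:6.1f} mm from the bridge")

# The twelfth fret sits at half the scale length and the 24th at a quarter.
print(round(distance_from_bridge(12) / SCALE_LENGTH_M, 6))   # 0.5
print(round(distance_from_bridge(24) / SCALE_LENGTH_M, 6))   # 0.25
```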
However, if a guitar string had only one single frequency at which it vibrated, the guitar would sound quite boring, and there would not be much difference between the guitar and other stringed instruments. Guitars sound different from other stringed instruments because of the different overtones, or harmonics, dominant on a guitar. When a guitar string is either strummed or plucked, the string begins to vibrate, and these vibrations are in the form of waves. The waves created by the vibrations of the string travel in both directions along the string, and continue forward until they are reflected off the fixed ends. When the waves are reflected, they change direction, and travel back the other way through the medium (the string). When the waves are traveling back through the string, they cause interference with the other waves traveling the string that were also caused by the vibration. The standing wave pattern is formed when there is perfectly timed interference of two waves passing through the same medium, creating a situation where the crests and troughs remain at fixed positions. On a guitar string, the waves that are reflected and are traveling in the opposite direction of the other waves on the string create a standing wave.

Because of the interfering vibrations on a guitar string, standing wave patterns are created, meaning that there are some points along the string that appear to be standing still; these points of no displacement are referred to as nodes. As well, there are other points along the medium that undergo vibrations between a large positive and large negative displacement; these are the points that undergo the maximum displacement during each vibrational cycle of the standing wave and are called antinodes. On the guitar string, a number of different patterns of standing waves may be produced, and each pattern will have a different number of nodes and antinodes. Standing wave patterns can only be produced within the string of the guitar when it is vibrated at certain frequencies; however, there are several frequencies with which the string can be vibrated to produce the different patterns of standing waves, each with a different number of nodes and antinodes. Every different frequency is associated with a different standing wave pattern, and these are referred to as harmonics.

The simplest pattern of standing wave that can be produced is one in which the two nodes are at the fixed ends, which gives the longest wavelength, and it is called the first harmonic, or fundamental harmonic. Since on a guitar string the waves keep on being reflected off the fixed ends and causing interference with each other, there are many different frequencies, but with any medium fixed at both ends, only waves of certain sizes can stand. This means that on a guitar string, only certain frequencies can stand, so we say that such a medium is tuned. On the guitar string, the second pattern of the standing wave, or second harmonic, has half the wavelength and twice the frequency of the first harmonic. The second harmonic is also referred to as the first overtone, and it is these multiple overtones that we hear from the guitar string that make the guitar sound different from other instruments. Similarly, the third harmonic, or the third pattern possibility for the standing wave on a guitar, has one third the wavelength and three times the frequency when compared to the first harmonic and is called the second overtone. The rest of the harmonics follow the same pattern: the nth harmonic has 1/n the wavelength and n times the frequency of the first. It is the fundamental frequency (first harmonic) that determines the note that we hear, and the higher harmonics determine the timbre.
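As a quick illustration of that 1/n-wavelength, n-times-frequency pattern, here is a minimal Python sketch; the 110 Hz fundamental and 0.65 m string length are illustrative values only.

```python
# A minimal sketch of the harmonic series on a string fixed at both ends:
# the nth harmonic has 1/n the wavelength and n times the frequency of the
# fundamental. The 110 Hz fundamental and 0.65 m length are illustrative.

FUNDAMENTAL_HZ = 110.0
STRING_LENGTH_M = 0.65

for n in range(1, 6):
    wavelength = 2 * STRING_LENGTH_M / n   # lambda_n = 2L / n
    frequency = n * FUNDAMENTAL_HZ         # f_n = n * f_1
    print(f"harmonic {n}: wavelength {wavelength:.3f} m, frequency {frequency:.0f} Hz")
```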
This means that the simplest standing wave pattern on the guitar string, containing only two nodes and one antinode, determines what musical note we hear, while the more complex standing wave patterns, the other harmonics, determine how that note sounds.

Sound is created when material vibrations cause changes in air pressure and create pressure waves. However, guitar strings are not large enough to move enough air to create a sound loud enough to be easily heard by the human ear. Therefore, the body of an acoustic guitar is used to amplify the sounds the strings produce, and the body of the guitar is made up of different components that allow it to do so. The body of the guitar is basically a large hollow space that is specially constructed to amplify the sound of the strings. The top plate of the body, the piece of wood located on the front of the body of the guitar, is constructed so that it can vibrate up and down relatively easily, and is usually made of light, springy wood, about 2.5 mm thick. Inside the actual body of the guitar there is a series of braces that strengthen the plate and keep the plate flat, despite the movement of the strings that will tend to make the bridge move, since it is attached to the top plate. On the opposite side of the guitar, there is the back plate, which does not play as big a role in amplifying the sound, since it is held against the player's body and cannot vibrate much. The sides of the guitar also do not vibrate much in the direction perpendicular to their surface, so they also don’t radiate much sound.

When the strings are plucked or strummed, they begin to vibrate, and these vibrations, in the form of waves, are transmitted to the bridge of the guitar. Since the bridge is attached to the top plate of the guitar, the top plate also begins to vibrate as a result of the vibrations of the string, via the bridge. If the string is vibrating at a high frequency, and subsequently the bridge is vibrating at a high frequency, most of the sound is radiated by the vibrations of the top plate. Since the top plate has a much larger surface area than the string, when the top plate vibrates as a result of the vibrations of the string, the volume of air the top plate is displacing is much larger than that of the string. Therefore, the pressure waves being produced by the top plate will be bigger, and the sound will be louder. For lower frequencies, the string's vibrations are transmitted via the bridge to the top plate, then to the back plate, and then reflected out through the sound hole, which further increases the volume of the pressure waves being produced. In fact, it is not the vibrations of the guitar string themselves that we hear when listening to a guitar, but rather the amplification of those vibrations through the body of the guitar.
ANSWERS TO END-OF-CHAPTER QUESTIONS

2-1 Explain this statement: "If resources were unlimited and freely available, there would be no subject called economics."

If resources were unlimited and freely available, making choices would not be necessary. Every person could have as much as they wanted of any good or service. Economics, the science of choice, would be unnecessary.

2-2 Comment on the following statement from a newspaper article: "Our junior high school serves a splendid hot meal for $1 without costing the taxpayers anything, thanks in part to a government subsidy."

Obviously the writer is confused. Government subsidies come from government revenues and taxpayers are the source of tax revenues. It may be true that local property taxes that fund the junior high school are not being used for the lunches, but the federal government’s funds do come from taxpayers across the country, including those in the town with the junior high. This example helps support the saying, "There ain’t no such thing as a free lunch!"

2-3 Critically analyze: "Wants aren’t insatiable. I can prove it. I get all the coffee I want to drink every morning at breakfast." Explain: "Goods and services are scarce because resources are scarce." Analyze: "It is the nature of all economic problems that absolute solutions are denied us."

It may be that you get all the coffee you want on a particular morning, but will that satisfy your wants forever? Not if you want coffee in the future. Therefore, even your desire for coffee is insatiable over time. Goods and services are the product of resources. If resources were abundant without limit, then we would not have a scarcity of the products they produce. Economic problems are problems of relative scarcity—wants exceed resources in the relative sense. We cannot absolutely solve all of our economic problems; that is, satisfy all of everyone’s wants and needs. If all our wants were completely fulfilled, nothing would have a price—why pay for anything if you’ve got everything already? And if there were no unfulfilled wants there would be no economic resources—why pay for an input when you’ve got all the outputs you could ever need? The fact that totally free goods and services do not exist provides support for the notion that total fulfillment of our wants is impossible.

2-4 What are economic resources? What are the major functions of the entrepreneur?

Economic resources are of four main types: labor, land (natural resources), real capital (machines, factories, buildings, etc.) and entrepreneurs. Economic resources are also called factors of production or inputs in the productive process. As these names imply, economic resources are required to produce the outputs desired by society. Since certain outputs are desired, they command a price and so, therefore, do economic resources. This can lead to some things being economic resources in some circumstances but not in others. Water in the middle of a lake, for example, is not an economic resource: Anyone can have it free. But the same water piped to a factory site is no longer free: Its movement must be paid for by taxes or by a specific charge. It is now an economic resource because the factory owner would not pay for its delivery unless the water was to be used in the factory’s production. These four types of resources are highlighted in the circular flow diagram where the type of income accruing to each type of resource is shown.
Entrepreneurs are risk-takers: They coordinate the activities of the other three inputs for profit—or loss, which is why they are called risk-takers. Entrepreneurs sometimes manage companies that they own, but a manager who is not an owner is not necessarily an entrepreneur but may be performing some of the entrepreneurial functions for the company. Entrepreneurs are also innovators, or perhaps inventors, and profits help to motivate such activities.

2-5 (Key Question) Why is the problem of unemployment a part of the subject matter of economics? Distinguish between allocative efficiency and productive efficiency. Give an illustration of achieving productive, but not allocative, efficiency.

Economics deals with the "limited resources—unlimited wants" problem. Unemployment represents valuable resources that could have been used to produce more goods and services—to meet more wants and ease the economizing problem. Allocative efficiency means that resources are being used to produce the goods and services most wanted by society. The economy is then located at the optimal point on its production possibilities curve where marginal benefit equals marginal cost for each good. Productive efficiency means the least costly production techniques are being used to produce wanted goods and services. Example: manual typewriters produced using the least-cost techniques but for which there is no demand.

2-6 (Key Question) Here is a production possibilities table for war goods and civilian goods [table omitted]:
a. Show these data graphically. Upon what specific assumptions is this production possibilities curve based?
b. If the economy is at point C, what is the cost of one more automobile? One more rocket? Explain how this curve reflects increasing opportunity costs.
c. What must the economy do to operate at some point on the production possibilities curve?

(b) 4.5 rockets; .33 automobiles, as determined from the table. Increasing opportunity costs are reflected in the concave-from-the-origin shape of the curve. This means the economy must give up larger and larger amounts of rockets to get constant added amounts of automobiles—and vice versa.
(c) It must obtain full employment and productive efficiency.

2-7 What is the opportunity cost of attending college? In 2000, nearly 80% of college-educated Americans held jobs, whereas only about 40% of those who did not finish high school held jobs. How might this difference relate to opportunity costs?

The opportunity cost of attending college (and of doing anything else) consists of the income forgone while attending college (and of doing anything else such as enjoying leisure) and the value of the goods that the student or the student’s parents sacrifice in order to pay tuition and buy books and other items necessary for college but not necessary otherwise. Those who are college-educated have the potential of earning more income than those who did not finish high school. The opportunity cost (sacrifice of goods and services) of not working is much greater for those with the higher earning potential.

2-8 Suppose you arrive at a store expecting to pay $100 for an item, but learn that a store two miles away is charging $50 for it. Would you drive there and buy it? How does your decision benefit you? What is the opportunity cost of your decision? Now suppose you arrive at a store expecting to pay $6000 for an item, but learn that it costs $5950 at the other store. Do you make the same decision as before? Perhaps surprisingly, you should! Explain why.
Driving to the other store to save $50 does involve some cost in terms of time and inconvenience. However, for most of us the time it takes to drive two miles would be worth $50. For example, if it takes about ten minutes of extra time and a negligible amount of gasoline (unless your time is worth $300 an hour, or $50 per ten-minute period), it would benefit you to drive to the other store. While in the second case $50 may seem like less compared to the $6000 total price, for you the $50 is still a $50 savings, exactly the same as in the first case. Therefore, you should apply the same reasoning. Is the $50 benefit from driving the extra two miles worth the cost? The conclusion should be the same in both cases.

2-9 (Key Question) Specify and explain the shapes of the marginal-benefit and marginal-cost curves and use these curves to determine the optimal allocation of resources to a particular product. If current output is such that marginal cost exceeds marginal benefit, should more or less resources be allocated to this product? Explain.

The marginal benefit curve is downward sloping; MB falls as more of a product is consumed because additional units of a good yield less satisfaction than previous units. The marginal cost curve is upward sloping; MC increases as more of a product is produced since additional units require the use of increasingly unsuitable resources. The optimal amount of a particular product occurs where MB equals MC. If MC exceeds MB, fewer resources should be allocated to this use. The resources are more valuable in some alternative use (as reflected in the higher MC) than in this use (as reflected in the lower MB).

2-10 (Key Question) Label point G inside the production possibilities curve you have drawn for question 6. What does it indicate? Label point H outside the curve. What does this point indicate? What must occur before the economy can attain the level of production indicated by point H?

G indicates unemployment, productive inefficiency, or both. H is at present unattainable. Economic growth—through more inputs, better inputs, improved technology—must be achieved to attain H.

2-11 (Key Question) Referring again to question 6, suppose improvement occurs in the technology of producing rockets but not in the production of automobiles. Draw the new production possibilities curve. Now assume that a technological advance occurs in producing automobiles but not in producing rockets. Draw the new production possibilities curve. Now draw a production possibilities curve that reflects technological improvement in the production of both products.

See the graph for question 2-6. PPC1 shows improved rocket technology. PPC2 shows improved auto technology. PPC3 shows improved technology in producing both products.

2-12 Explain how, if at all, each of the following affects the location of the production possibilities curve.
a. Standardized examination scores of high school and college students decline.
b. The unemployment rate falls from 9 to 6 percent of the labor force.
c. Defense spending is reduced to allow government to spend more on health care.
d. A new technique improves the efficiency of extracting copper from ore.

(a) Assuming scores indicate lower skills, then productivity should fall and this would move the curve inward.
(b) Should not affect location of curve. Production moves from inside the curve toward the frontier.
(c) Should not affect location of curve. Resources are allocated away from one type of government spending toward another (health care).
(d) The curve should shift outward as more production is possible with existing resources.

2-13 Explain: "Affluence tomorrow requires sacrifice today."

This quote refers to the fact that economic growth and a rising standard of living in the future require investment today. Society can choose to consume all of its income today, or it can set aside some of it for investment purposes. Productive resources that go for investment goods today, e.g., new factories, machines, equipment, are obviously not being used for producing consumer goods. Therefore, consumption is being sacrificed today so that investment goods can be produced with some of today’s resources.

2-14 Suppose that, based on a nation’s production possibilities curve, an economy must sacrifice 10,000 pizzas domestically to get the one additional industrial robot it desires, but can get that robot from another country in exchange for 9,000 pizzas. Relate this information to the following statement: "Through international specialization and trade, a nation can reduce its opportunity cost of obtaining goods and thus ‘get outside its production possibilities curve.’"

The message of the production possibilities curve is that an individual nation is limited to the combinations of output indicated by its production possibilities curve. International specialization means directing domestic resources to output which a nation is highly efficient at producing. International trade involves the exchange of these goods for goods produced abroad. Specialization and trade have the same effect as having more and better resources or discovering improved production techniques. The output gains from greater international specialization and trade are the equivalent of economic growth.

2-15 Contrast how a market system and a command economy try to cope with economic scarcity.

A market system allows for the private ownership of resources and coordinates economic activity through market prices. Participants act in their own self-interest and seek to maximize satisfaction or profit through their own decisions regarding consumption or production. Goods and services are produced and resources are supplied by whoever is willing to do so. The result is competition and widely dispersed economic power. The command economy is characterized by public ownership of nearly all property resources, and economic decisions are made through central planning. The planning board, appointed by the government, determines production goals for each enterprise. The division of output between capital and consumer goods is centrally decided based on the board’s long-term priorities.

2-16 Distinguish between the resource market and product market in the circular flow model. In what way are businesses and households both sellers and buyers in this model? What are the flows in the circular flow model?

The resource markets are where the owners of the resources (the households) sell their resources to the buyers of the resources (businesses). In the product markets, businesses sell the goods and services they have produced to the buyers of the goods and services, the households. Households (individuals) either own all economic resources directly or own them indirectly through their ownership of business corporations. These households are willing to sell their resources to businesses because attractive prices draw them into specific resource markets. Businesses buy resources because they are necessary for producing goods and services.
The interaction of the buyers and sellers establishes the price of each resource. In the product market, businesses are the sellers and households are the buyers; their roles in the market have been reversed. Each group of economic units both buys and sells. One flow is the flow of real goods and services (including resource services) and the other flow is the flow of money (money income, consumption expenditures, revenue, production costs).

2-17 (Last Word) Which two of the six reasons listed in the Last Word do you think are the most important in explaining the rise in participation of women in the workplace? Explain your reasoning.

A poll taken in a class of 60 college freshmen gave the first three reasons (women's rising wage rates, expanded job accessibility, and changing preferences and attitudes) nearly all the votes. Each of these explanations received about one third of the votes. Surprisingly, not a single student voted for "declining birth rates" as a reason for the rise in the number of women in the workforce. The consensus of the class was that the last three explanations (declining birth rates, rising divorce rates, and stagnating male earnings) were effects, rather than causes, of more women joining the workforce. Because wage rates are higher, the opportunity cost of raising children has risen. Women have chosen to bear fewer children, because they are now relatively more expensive. Similarly, women who have a higher earning capacity find the opportunity cost of getting a divorce reduced. Finally, male earnings may have stagnated partially because of the entrance of large numbers of well-educated women into the workforce, increasing the competition for the available jobs.