In countries that have a carbon tax, businesses must pay a levy based on the amount of carbon emissions their operations produce. A carbon tax is designed to reduce carbon dioxide (CO2) emissions.
There are two types of carbon taxes: a tax on the quantity of greenhouse gases emitted, and a tax on carbon-intensive goods and services, such as gasoline.
In the United States, several carbon tax proposals have been introduced in Congress, but none have yet been implemented.
How Does a Carbon Tax Work?
When a government implements a carbon tax, it sets a price per ton of greenhouse gas emissions, and a company is taxed that amount for every ton it emits or is responsible for.
In some cases, the price per ton increases the more an entity emits, thereby incentivizing companies to reduce and prevent emissions.
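To make the mechanics concrete, here is a minimal sketch in Python of how a flat and an escalating per-ton tax might be calculated. The prices, tiers, and emission figures are invented for illustration and are not drawn from any real carbon tax program.

```python
# Hypothetical carbon tax calculations (illustrative numbers only).

def flat_tax(tons_co2, price_per_ton):
    """Tax owed when every ton is charged at the same rate."""
    return tons_co2 * price_per_ton

def tiered_tax(tons_co2, tiers):
    """Tax owed when the per-ton price rises with cumulative emissions.

    `tiers` is a list of (tons_in_tier, price_per_ton) pairs; the last
    tier uses float("inf") to cover all remaining emissions.
    """
    owed, remaining = 0.0, tons_co2
    for tier_size, price in tiers:
        taxed = min(remaining, tier_size)
        owed += taxed * price
        remaining -= taxed
        if remaining <= 0:
            break
    return owed

emissions = 120_000  # tons of CO2 emitted in a year (hypothetical)
print(flat_tax(emissions, 50))                      # flat $50 per ton
print(tiered_tax(emissions, [(50_000, 40),          # first 50k tons at $40
                             (50_000, 60),          # next 50k tons at $60
                             (float("inf"), 80)]))  # everything above at $80
```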
What Type of Carbon Is Taxed?
Although it is called a carbon tax, the price is usually set per ton of CO2 emitted. That's because every fossil fuel contains a particular amount of carbon, and when the fuel is burned, each carbon atom combines with two oxygen atoms to become CO2 gas, which goes into the atmosphere.
So the amount of emissions associated with the fuel can be taxed at the point of extraction, refinement, import, or use.
As many ESG investors know, burning coal emits the most CO2 per unit of energy, followed by diesel, gasoline, propane, and natural gas. Therefore, coal is taxed at a higher rate than other fossil fuels. Once CO2 is emitted into the atmosphere, it remains there for a hundred years or more, creating a greenhouse effect that heats up the planet and leads to climate change.
Only products associated with the burning of fossil fuels get taxed. So products such as plastic that contain petroleum but don’t directly result in CO2 emissions don’t get taxed.
What Is the Economic Impact of Carbon Taxation?
Since a carbon tax increases costs across the entire supply chain, everyone from extractors to consumers is incentivized to reduce fossil fuel consumption, at least in theory.
Those being taxed can raise the prices of their goods and services, but only as much as the market is willing to pay while allowing them to remain competitive.
What Is the Social Impact of Carbon Emissions?
The theory behind carbon pricing is that each ton of CO2 should carry a price equal to the social cost of carbon. The social cost of carbon is an estimate, in today's dollars, of the damage over time caused by each ton of CO2 emitted now.
In addition to causing global warming, emissions and pollution typically lead to negative effects on human health and natural ecosystems. Thus investing in companies with lower carbon emissions can be considered a type of socially responsible investing.
Over time, the social cost of carbon increases, because each ton of emissions is more damaging as climate change worsens. Therefore, the price of carbon and the tax would increase over time.
Those producing emissions know that the tax will increase over time, so investments in decarbonization are worth it to them today. For instance, a company can invest in solar energy and wind power; while that might carry a high upfront cost, over time it could pay off by helping the company avoid a rising carbon tax.
Examples of Carbon Taxes
Understanding carbon taxes is an important facet of sustainable investing. Carbon taxes have been put into place in many countries around the world so far, and their popularity is rising. As of 2021, 35 countries had a form of carbon tax or energy tax.
• Finland was the first country to implement a carbon tax in 1990, soon followed by Norway and Sweden in 1991. In 2021 Finland’s price per ton was $73.02. Norway is known to have one of the strictest carbon taxes.
• The Canadian province of British Columbia implemented a carbon tax in 2008. In 2019, South Africa became the first African country to implement a carbon tax.
• Although there is not yet a federal carbon tax in the U.S., there are more than 50 regional ones. For instance, the city of Boulder, CO, implemented a carbon tax in 2006 after it passed a local vote. However, the average price per ton across these programs is very low: generally about $2.
Support for a U.S. federal carbon tax has been growing over time, but one of the things holding it back is debate about how the revenue from the tax would be used. A few ideas include paying consumers back through a carbon dividend, using the money to fund infrastructure upgrades or low-emissions technologies, or reducing other taxes.
The Importance of Carbon Tax
Reducing global emissions is essential to stop the buildup of CO2 in the Earth’s atmosphere. The more CO2 gets emitted, the more the planet warms and the worse climate change becomes — including the frequency of climate-related disasters.
Global temperatures have already increased 1°C over pre-industrial levels, and if emissions are not reduced, temperatures are projected to rise 4°C by the end of this century. The more temperatures rise, the more the effects become irreversible and catastrophic.
A carbon tax is a powerful tool to discourage the use of fossil fuels and incentivize a shift to low- and zero-emission energy sources. This is why many people invest in green stocks.
Pros and Cons of Carbon Tax
There are several pros and cons to a carbon tax.
Some of the pros of a carbon tax include:
• A carbon tax is a way to regulate emissions without directly mandating production and consumption limits.
• Carbon taxes incentivize companies and individuals to reduce and avoid emissions.
• Carbon taxes are easy to administer.
• A carbon tax may help reduce the buildup of greenhouse gases, which in turn may help reduce pollution, improve air and water quality, and more.
• The revenue raised through a tax can be used to fund decarbonization efforts, environmental restoration, and other projects.
• Other programs, such as incentives for using renewable energy, haven't been as successful in reducing fossil-fuel use.
A few cons of a carbon tax include:
• It can be challenging to figure out how the revenues should be spent.
• A carbon tax can be applied at any point in a supply chain, and it can be hard to decide which point is best.
• It’s hard to predict how much emissions will be reduced as a result of the carbon tax.
• If a carbon tax increases energy costs, this can have a big impact on lower-income households, which tend to spend a higher percentage of their income on energy than higher-income households.
• If one country implements a carbon tax and others don't, that puts local industries at a competitive disadvantage. If they have to raise prices, customers may start buying from the countries without the tax, resulting in the same or more emissions. For this reason, carbon tax plans often build in ways to prevent emissions leakage and competitiveness problems, such as rebates, exemptions for particular industries, and taxation based on past emissions.
• Companies can purchase carbon offsets or carbon credits to lower the amount they pay in taxes. They can also use those offsets to claim that they are carbon neutral or carbon negative. This isn’t exactly true, since they are still emitting carbon. The ability to purchase offsets reduces their incentive to decarbonize.
Who Regulates Carbon Taxes?
Carbon tax programs are regulated by federal, state, or local governments. Regulation involves setting the price per ton of carbon, deciding which entities get taxed, collecting the tax, and deciding how the revenues are spent.
There is an ongoing discussion about the international coordination of carbon pricing. If a minimum price per ton is set, this would eliminate issues around competition and guarantee a certain amount of effort towards emission reduction. Canada has already implemented national price coordination. The minimum price per ton in Canadian provinces and territories is CAD $50.
Which Countries Have the Highest Carbon Tax?
Below are a few of the countries that have the highest carbon tax rates. The rates are in USD price per ton:
• Uruguay: $137
• Sweden: $129.89
• Switzerland: $129.86
• Liechtenstein: $129.86
• Norway: $87.61
A carbon tax can be a powerful tool for reining in carbon emissions, and potentially helping reduce the amount of greenhouse gases in the atmosphere. Essentially, these taxes penalize companies by making them pay a fee for CO2 emissions related to their products or operations.
While the U.S. doesn’t have a federally mandated carbon tax, there are state and local levies. Given concerns about climate change, it’s likely that more countries will continue to adopt and adjust carbon taxes.
If you’re interested in investing in sustainably focused businesses, you can explore your options using SoFi Invest, and start trading from your Active Invest account with as little as $5.
NCERT Solutions for Class 12 Maths Chapter 10 Exercise 10.2 (Ex. 10.2) Vector Algebra in Hindi and English Medium, updated for the new academic session 2022-2023 for CBSE and UP Board students. Students of other state boards can also benefit from these solutions in Hindi Medium or English Medium. Videos covering all questions in Hindi and English are also available free of charge.
Class 12 Maths Exercise 10.2 Solution in Hindi and English
Chapter: 10 | Exercise: 10.2
Contents: NCERT Solutions in Hindi and English Medium
NCERT Solutions for Class 12 Maths Chapter 10 Exercise 10.2
Class 12 Maths Exercise 10.2 in Hindi and English Medium are given below in PDF file format. All the solutions are prepared for academic session 2022-2023 for CBSE and UP Board. Videos related to exercise 10.2 of 12th Maths in Hindi and English Medium are given separately.
Class 12 Maths Chapter 10 Exercise 10.2 Solutions in Videos
Basic Concepts of Line
Let L be any straight line in a plane or in three-dimensional space. The line can be traversed in two directions; when one of these directions is fixed, L is called a directed line. If we restrict the directed line L to the line segment AB, the segment acquires a magnitude (its length) along the chosen direction, and we obtain a directed line segment. Therefore, a directed line segment has both magnitude and direction.
Important Terms related to Vector
A quantity that has both magnitude and direction is called a vector. The point A from where the vector starts is called its initial point, and the point B where it ends is called its terminal point. The distance between the initial and terminal points of the vector is called the magnitude (or length) of the vector, denoted |AB| or |a|, or simply a. The arrow indicates the direction of the vector.
What is a Position Vector?
Recall from Class XI the right-handed three-dimensional rectangular coordinate system. Consider a point P in space with coordinates (x, y, z) with respect to the origin O(0, 0, 0). The vector OP, having initial point O and terminal point P, is called the position vector of the point P with respect to O.
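In symbols, for the point P(x, y, z),

\[ \vec{OP} = \vec{r} = x\hat{i} + y\hat{j} + z\hat{k}, \qquad |\vec{OP}| = \sqrt{x^{2} + y^{2} + z^{2}}. \]

For example, the point P(1, 2, 3) has position vector \(\hat{i} + 2\hat{j} + 3\hat{k}\), with magnitude \(\sqrt{1 + 4 + 9} = \sqrt{14}\).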
What are the main topics covered in Class 12 Maths Exercise 10.2?
Addition of vectors, components of a vector, the section formula, and multiplication of a vector by a scalar are the main topics discussed in Class 12 Maths Chapter 10 Exercise 10.2.
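For quick reference, writing \(\vec{a} = a_1\hat{i} + a_2\hat{j} + a_3\hat{k}\) and \(\vec{b} = b_1\hat{i} + b_2\hat{j} + b_3\hat{k}\) for the position vectors of points A and B, the standard results used throughout this exercise are

\[ \vec{a} + \vec{b} = (a_1 + b_1)\hat{i} + (a_2 + b_2)\hat{j} + (a_3 + b_3)\hat{k}, \qquad \lambda\vec{a} = \lambda a_1\hat{i} + \lambda a_2\hat{j} + \lambda a_3\hat{k}, \]

and the section formula for the point R dividing AB in the ratio m : n,

\[ \vec{r} = \frac{m\vec{b} + n\vec{a}}{m + n} \ (\text{internal division}), \qquad \vec{r} = \frac{m\vec{b} - n\vec{a}}{m - n} \ (\text{external division}). \]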
How many questions are there in Class 12 Maths Exercise 10.2?
There are a total of 19 questions in Exercise 10.2 of Class 12 Mathematics, of which two are MCQs.
Which are the most important questions for board exams?
Questions 14 and 17 are frequently asked in CBSE board exams.
A hydrogen vehicle is a vehicle that uses hydrogen as its onboard fuel for motive power. Hydrogen vehicles include hydrogen fueled space rockets, as well as automobiles and other transportation vehicles. The power plants of such vehicles convert the chemical energy of hydrogen to mechanical energy either by burning hydrogen in an internal combustion engine, or by reacting hydrogen with oxygen in a fuel cell to run electric motors. Widespread use of hydrogen for fueling transportation is a key element of a proposed hydrogen economy.
Hydrogen fuel does not occur naturally on Earth and thus is not an energy source; rather it is an energy carrier. As of 2014, 95% of hydrogen is made from methane. It can be produced using renewable sources, but that is an expensive process. Integrated wind-to-hydrogen (power to gas) plants, using electrolysis of water, are exploring technologies to deliver costs low enough, and quantities great enough, to compete with traditional energy sources.
Many companies are working to develop technologies that might efficiently exploit the potential of hydrogen energy for use in motor vehicles. As of November 2013 there are demonstration fleets of hydrogen fuel cell vehicles undergoing field testing, including the Chevrolet Equinox Fuel Cell, Honda FCX Clarity, Hyundai ix35 FCEV and Mercedes-Benz B-Class F-Cell. The drawbacks of hydrogen use are the high carbon emissions intensity of hydrogen produced from natural gas, the capital cost burden, low energy content per unit volume, the lower performance of fuel cell vehicles compared with gasoline vehicles, the energy required to produce and compress hydrogen, and the large investment in infrastructure that would be required to fuel vehicles.
Buses, trains, PHB bicycles, canal boats, cargo bikes, golf carts, motorcycles, wheelchairs, ships, airplanes, submarines, and rockets can already run on hydrogen, in various forms. NASA used hydrogen to launch Space Shuttles into space. A working toy model car runs on solar power, using a regenerative fuel cell to store energy in the form of hydrogen and oxygen gas. It can then convert the fuel back into water to release the solar energy. Since the advent of hydraulic fracturing, the key concern of environmentalists regarding hydrogen fuel cell vehicles is consumer and public policy confusion that could result in the adoption of natural gas-powered hydrogen vehicles with heavy hidden emissions, to the detriment of environmentally friendly transportation.
The current land speed record for a hydrogen-powered vehicle is 286.476 miles per hour (461.038 km/h) set by Ohio State University's Buckeye Bullet 2, which achieved a "flying-mile" speed of 280.007 miles per hour (450.628 km/h) at the Bonneville Salt Flats in August 2008. For production-style vehicles, the current record for a hydrogen-powered vehicle is 207.297 miles per hour (333.612 km/h) set by a prototype Ford Fusion Hydrogen 999 Fuel Cell Race Car at Bonneville Salt Flats in Wendover, Utah, in August 2007. It was accompanied by a large compressed oxygen tank to increase power.
Toyota launched its first production fuel cell vehicle, the Toyota Mirai, in Japan at the end of 2014 and began sales in California, mainly the Los Angeles area, in 2015. The car has a range of 312 mi (502 km) and takes about five minutes to refill its hydrogen tank. The initial sale price in Japan was about 7 million yen ($69,000). Former European Parliament President Pat Cox estimates that Toyota will initially lose about $100,000 on each Mirai sold.
Many automobile companies have been researching the feasibility of commercially producing hydrogen cars, and some have introduced demonstration models in limited numbers (see list of fuel cell vehicles). Since 1980, car companies have made numerous predictions about the commercialization of FC vehicles. At the 2012 World Hydrogen Energy Conference, Daimler AG, Honda, Hyundai and Toyota all confirmed plans to produce hydrogen fuel cell vehicles for sale by 2015. Charles Freese, GM's executive director of global powertrain engineering, stated that the company believes that both fuel-cell vehicles and battery electric vehicles are needed for reduction of greenhouse gases and reliance on oil. The use of hydrogen as fuel in an automobile is problematic because of hydrogen's low density.
In 2012, Lux Research, Inc. issued a report that stated: "The dream of a hydrogen economy ... is no nearer." It concluded that "Capital cost, not hydrogen supply, will limit adoption to a mere 5.9 GW" by 2030, providing "a nearly insurmountable barrier to adoption, except in niche applications". Lux's analysis concluded that by 2030, the PEM stationary market will reach $1 billion, while the vehicle market, including automobiles and forklifts, will reach a total of $2 billion.
In hydrogen buses, hydrogen was first stored in roof-mounted tanks, although models now incorporate onboard tanks; some double-deck models use between-floor tanks.
In March 2015, China South Rail Corporation (CSR) demonstrated the world's first hydrogen fuel cell-powered tramcar at an assembly facility in Qingdao. The chief engineer of the CSR subsidiary CSR Sifang Co Ltd., Liang Jianying, said that the company is studying how to reduce the running costs of the tram. A total of 83 miles of tracks for the new vehicle have been built in seven Chinese cities. China plans to spend 200 billion yuan ($32 billion) over the next five years to increase tram tracks to more than 1,200 miles.
Pearl Hydrogen Power Sources of Shanghai, China, unveiled a hydrogen bicycle at the 9th China International Exhibition on Gas Technology, Equipment and Applications in 2007.
Motorcycles and scooters
ENV develops electric motorcycles powered by a hydrogen fuel cell, including the Crosscage and Biplane. Other manufacturers, such as Vectrix, are working on hydrogen scooters. Hydrogen fuel cell-electric hybrid scooters are also being made, such as the Suzuki Burgman fuel cell scooter and the FHybrid. The Burgman received "whole vehicle type" approval in the EU. The Taiwanese company APFCT conducted a live street test with 80 fuel cell scooters for Taiwan's Bureau of Energy.
Quads and tractors
Airplanes
Companies such as Boeing, Lange Aviation, and the German Aerospace Center pursue hydrogen as fuel for manned and unmanned airplanes. In February 2008 Boeing tested a manned flight of a small aircraft powered by a hydrogen fuel cell. Unmanned hydrogen planes have also been tested. For large passenger airplanes, however, The Times reported that "Boeing said that hydrogen fuel cells were unlikely to power the engines of large passenger jet airplanes but could be used as backup or auxiliary power units onboard."
In Britain, the Reaction Engines A2 has been proposed to use the thermodynamic properties of liquid hydrogen to achieve very high speed, long distance (antipodal) flight by burning it in a precooled jet engine.
A HICE forklift or HICE lift truck is a hydrogen fueled, internal combustion engine-powered industrial forklift truck used for lifting and transporting materials. The first production HICE forklift truck based on the Linde X39 Diesel was presented at an exposition in Hannover on May 27, 2008. It used a 2.0 litre, 43 kW (58 hp) diesel internal combustion engine converted to use hydrogen as a fuel with the use of a compressor and direct injection.
A fuel cell forklift (also called a fuel cell lift truck) is a fuel cell-powered industrial forklift truck. In 2013 there were over 4,000 fuel cell forklifts used in material handling in the US; only 500 of these received funding from the DOE in 2012. The global market is 1 million forklifts per year. As of 2013, fuel cell fleets are being operated by several companies, including Sysco Foods, FedEx Freight, GENCO (at Wegmans, Coca-Cola, Kimberly Clark, and Whole Foods), and H-E-B Grocers. A total of 30 fuel cell forklifts were demonstrated in Europe under HyLIFT, and the demonstration was extended to 200 units under HyLIFT-EUROPE, with other projects in France and Austria. Pike Research stated in 2011 that fuel-cell-powered forklifts will be the largest driver of hydrogen fuel demand by 2020.
Most companies in Europe and the US do not use petroleum powered forklifts, as these vehicles work indoors where emissions must be controlled and instead use electric forklifts. Fuel-cell-powered forklifts can provide benefits over battery powered forklifts as they can work for a full 8-hour shift on a single tank of hydrogen and can be refueled in 3 minutes. Fuel cell-powered forklifts can be used in refrigerated warehouses, as their performance is not degraded by lower temperatures. The FC units are often designed as drop-in replacements.
Many large rockets use liquid hydrogen as fuel, with liquid oxygen as an oxidizer. An advantage of hydrogen rocket fuel is the high effective exhaust velocity compared to kerosene/LOX or UDMH/NTO engines. According to the Tsiolkovsky rocket equation, a rocket with higher exhaust velocity needs less propellant mass to achieve a given change of speed. Before combustion, the hydrogen runs through cooling pipes around the exhaust nozzle to protect the nozzle from damage by the hot exhaust gases. Hydrogen also has the highest energy content per unit weight of any chemical energy storage. In combination with an oxidizer such as liquid oxygen, liquid hydrogen yields the highest specific impulse, or efficiency in relation to the amount of propellant consumed, of any known rocket propellant.
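The Tsiolkovsky rocket equation referred to above can be written as

\[ \Delta v = v_e \ln\!\left(\frac{m_0}{m_f}\right), \]

where \(v_e\) is the effective exhaust velocity, \(m_0\) the initial (fully fueled) mass, and \(m_f\) the final mass after the propellant is burned; the higher \(v_e\) is, the smaller the mass ratio \(m_0/m_f\) needed for a given change of speed.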
A disadvantage of LH2/LOX engines is the low density and low temperature of liquid hydrogen, which means larger, insulated, and thus heavier fuel tanks are needed. This increases the rocket's structural mass, which reduces its delta-v significantly. Another disadvantage is the poor storability of LH2/LOX-powered rockets: because of constant hydrogen boil-off, the rocket can only be fueled shortly before launch, which makes cryogenic engines unsuitable for ICBMs and other rocket applications that need short launch preparations.
Overall, the delta-v of a hydrogen stage is typically not much different from that of a dense-fuelled stage; however, the weight of a hydrogen stage is much less, which makes it particularly effective for upper stages, since they are carried by the lower stages. For first stages, dense-fuelled rockets may show a small advantage in studies, due to the smaller vehicle size and lower air drag.
Liquid hydrogen and oxygen were also used in the Space Shuttle to run the fuel cells that power the electrical systems. The byproduct of the fuel cell is water, which is used for drinking and other applications that require water in space.
Internal combustion vehicle
Hydrogen internal combustion engine cars are different from hydrogen fuel cell cars. The hydrogen internal combustion car is a slightly modified version of the traditional gasoline internal combustion engine car. These hydrogen engines burn fuel in the same manner that gasoline engines do; the main difference is the exhaust product. Gasoline combustion results in carbon dioxide and water vapour, while the only exhaust product of hydrogen combustion is water vapour.
In 1807 Francois Isaac de Rivaz designed the first hydrogen-fueled internal combustion engine. In 1970 Paul Dieges patented a modification to internal combustion engines that allowed a gasoline-powered engine to run on hydrogen (US Patent 3,844,262).
Mazda has developed Wankel engines that burn hydrogen. The advantage of using an ICE (internal combustion engine) such as a Wankel or piston engine is that the cost of retooling for production is much lower. Existing ICE technology can still be applied where fuel cells are not yet a viable solution, for example in cold-weather applications.
Fuel cell cost
Hydrogen fuel cells are relatively expensive to produce, as their designs require rare substances such as platinum as a catalyst. The U.S. Department of Energy (DOE) estimated in 2002 that the cost of a fuel cell for an automobile (assuming high-volume manufacturing) was approximately $275/kW, which translated into each vehicle costing an estimated $100,000. However, by 2010, DOE estimated the cost had fallen 80% and that automobile fuel cells might be manufactured for $51/kW, assuming high-volume manufacturing cost savings.
The projected cost, assuming a manufacturing volume of 500,000 units/year, using 2012 technology, was estimated by the DOE to be $47/kW for an 80 kW PEM fuel cell. Assuming a manufacturing volume of 10,000 units/year, however, the cost was projected to be $84/kW using 2012 technology. The Department of Energy wrote: "Hydrogen fuel cells for cars have never been manufactured at large scale, in part because of the prohibitive price tag. But the DOE estimates that the cost of producing fuel cells is falling fast".
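As a rough illustration only, multiplying the DOE per-kilowatt figures quoted above by the 80 kW stack size used in the 2012 projection gives the implied stack cost; the per-stack totals below are derived here and are not DOE figures (the 2002 estimate was reported as roughly $100,000 per vehicle, so the vehicle-level figure evidently covered more than the bare stack).

```python
# Implied cost of an 80 kW automotive fuel cell stack at the quoted $/kW estimates.
STACK_KW = 80  # stack size cited in the DOE's 2012 projection; assumed for the other rows

estimates = [
    (2002, 275, "early estimate, high-volume assumption"),
    (2010, 51, "DOE estimate, high-volume assumption"),
    (2012, 47, "DOE projection at 500,000 units/year"),
    (2012, 84, "DOE projection at 10,000 units/year"),
]
for year, usd_per_kw, note in estimates:
    print(f"{year}: ${usd_per_kw}/kW x {STACK_KW} kW = ~${usd_per_kw * STACK_KW:,} ({note})")
```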
In 2014, Toyota said it would sell its Toyota Mirai in Japan for less than $70,000 by April 2015 and that it has brought the cost of the fuel cell system down to 5 percent of the fuel cell prototypes of the last decade. Former European Parliament President Pat Cox estimates that Toyota will initially lose about $100,000 on each Mirai sold.
The problems in early fuel cell designs at low temperatures concerning range and cold start capabilities have been addressed so that they "cannot be seen as show-stoppers anymore". Users in 2014 said that their fuel cell vehicles perform flawlessly in temperatures below zero, even with the heaters blasting, without significantly reducing range.
Hydrogen
Hydrogen does not come as a pre-existing source of energy like fossil fuels, but is first produced and then stored as a carrier, much like a battery. A suggested benefit of large-scale deployment of hydrogen vehicles is that it could lead to decreased emissions of greenhouse gases and ozone precursors. However, as of 2014, 95% of hydrogen is made from methane. It can be produced using renewable sources, but that is an expensive process. Integrated wind-to-hydrogen (power to gas) plants, using electrolysis of water, are exploring technologies to deliver costs low enough, and quantities great enough, to compete with traditional energy sources.
According to Ford Motor Company, "when FCVs are run on hydrogen reformed from natural gas using this process, they do not provide significant environmental benefits on a well-to-wheels basis (due to GHG emissions from the natural gas reformation process)." While methods of hydrogen production that do not use fossil fuel would be more sustainable, currently renewable energy represents only a small percentage of energy generated, and power produced from renewable sources can be used in electric vehicles and for non-vehicle applications.
The challenges facing the use of hydrogen in vehicles include production, storage, transport and distribution. The well-to-wheel efficiency for hydrogen is less than 25%. A study sponsored by the U.S. Department of Energy said in 2004 that the well-to-wheel efficiency of gasoline or diesel powered vehicles is even less.
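The well-to-wheel figure is, roughly, the product of the step efficiencies along the fuel pathway. The sketch below uses illustrative step values, assumed here rather than taken from the cited study, chosen only so that the product lands in the range discussed in this article (below 25% here, 30 to 40% cited later).

```python
# Illustrative well-to-wheel chain for a fuel cell vehicle running on
# hydrogen made from electricity. Step efficiencies are assumptions.
steps = [
    ("electrolysis",              0.65),
    ("compression and transport", 0.85),
    ("fuel cell conversion",      0.55),
    ("electric drivetrain",       0.90),
]

overall = 1.0
for name, eff in steps:
    overall *= eff
    print(f"{name:26s} {eff:.0%}  (cumulative {overall:.0%})")
# The product is roughly 27%, consistent with the figures quoted in this article.
```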
The molecular hydrogen needed as an on-board fuel for hydrogen vehicles can be obtained through many thermochemical methods utilizing natural gas, coal (by a process known as coal gasification), liquefied petroleum gas, biomass (biomass gasification), by a process called thermolysis, or as a microbial waste product called biohydrogen or Biological hydrogen production. 95% of hydrogen is produced using natural gas, and 85% of hydrogen produced is used to remove sulfur from gasoline. Hydrogen can also be produced from water by electrolysis or by chemical reduction using chemical hydrides or aluminum. Current technologies for manufacturing hydrogen use energy in various forms, totaling between 25 and 50 percent of the higher heating value of the hydrogen fuel, used to produce, compress or liquefy, and transmit the hydrogen by pipeline or truck.
Environmental consequences of the production of hydrogen from fossil energy resources include the emission of greenhouse gases, a consequence that would also result from the on-board reforming of methanol into hydrogen. Analyses comparing the environmental consequences of hydrogen production and use in fuel-cell vehicles to the refining of petroleum and combustion in conventional automobile engines do not agree on whether a net reduction of ozone and greenhouse gases would result. Hydrogen production using renewable energy resources would not create such emissions or, in the case of biomass, would create near-zero net emissions assuming new biomass is grown in place of that converted to hydrogen. However, the same land could be used to produce biodiesel, usable with (at most) minor alterations to existing, well-developed and relatively efficient diesel engines. In either case, the scale of renewable energy production today is small and would need to be greatly expanded to be used in producing hydrogen for a significant part of transportation needs. As of December 2008, less than 3 percent of U.S. electricity was produced from renewable sources, not including dams. In a few countries, renewable sources are being used more widely to produce energy and hydrogen. For example, Iceland is using geothermal power to produce hydrogen, and Denmark is using wind.
Hydrogen has a very low volumetric energy density at ambient conditions, equal to about one-third that of methane. Even when the fuel is stored as liquid hydrogen in a cryogenic tank or in a compressed hydrogen storage tank, the volumetric energy density (megajoules per liter) is small relative to that of gasoline. Hydrogen has a three times higher specific energy by mass compared to gasoline (143 MJ/kg versus 46.9 MJ/kg). Some research has been done into using special crystalline materials to store hydrogen at greater densities and at lower pressures. A recent study by Dutch researcher Robin Gremaud has shown that metal hydride hydrogen tanks are actually 40 to 60-percent lighter than an equivalent energy battery pack on an electric vehicle permitting greater range for H2 cars. In 2011, scientists at Los Alamos National Laboratory and University of Alabama, working with the U.S. Department of Energy, found a new single-stage method for recharging ammonia borane, a hydrogen storage compound.
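Using the per-kilogram figures quoted above, together with assumed storage densities (about 0.040 kg/L for 700-bar compressed hydrogen and 0.745 kg/L for liquid gasoline, neither figure taken from this article), the mass and volume comparisons work out roughly as follows.

```python
# Specific energy (per kg) from the figures above, plus an illustrative
# volumetric comparison using assumed storage densities.
H2_MJ_PER_KG, GASOLINE_MJ_PER_KG = 143.0, 46.9
print(f"per kg: hydrogen/gasoline = {H2_MJ_PER_KG / GASOLINE_MJ_PER_KG:.1f}x")

H2_KG_PER_L, GASOLINE_KG_PER_L = 0.040, 0.745  # assumed densities
h2_mj_per_l = H2_MJ_PER_KG * H2_KG_PER_L                    # ~5.7 MJ/L
gasoline_mj_per_l = GASOLINE_MJ_PER_KG * GASOLINE_KG_PER_L  # ~35 MJ/L
print(f"per litre: hydrogen/gasoline = {h2_mj_per_l / gasoline_mj_per_l:.2f}x")
```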
Hydrogen storage is a key area for the advancement of hydrogen and fuel cell power. An article discussing the issue of storage states, “Alternatives to large storage tanks may be found in hydrides, materials that can absorb, store, and release large quantities of hydrogen gas. More work and development needs to be performed with hydrides before they are of practical use”. Some other options available for hydrogen fuel cells storage include: High pressure tanks and cryogenic tanks. Both of which strive to improve volumetric capacity, conformability, and cost of storage. The DOE’s efforts on this matter have focused on on-board vehicular hydrogen storage systems that will allow for a driving range of 300+ miles while meeting all requirements in order to stay competitive with current means of transportation.
The hydrogen infrastructure consists mainly of industrial hydrogen pipeline transport and hydrogen-equipped filling stations like those found on a hydrogen highway. Hydrogen stations which are not situated near a hydrogen pipeline can obtain supply via hydrogen tanks, compressed hydrogen tube trailers, liquid hydrogen tank trucks or dedicated onsite production.
Hydrogen use would require the alteration of industry and transport on a scale never seen before in history. For example, according to GM, 70% of the U.S. population lives near a hydrogen-generating facility but has little access to hydrogen, despite its wide availability for commercial use. The distribution of hydrogen fuel for vehicles throughout the U.S. would require new hydrogen stations that would cost, by some estimates, approximately $20 billion, and 4.6 billion in the EU. Other estimates place the cost as high as half a trillion dollars in the United States alone.
The California Hydrogen Highway is an initiative to build a series of hydrogen refueling stations along California state highways. As of 2013, 10 publicly accessible hydrogen filling stations were in operation in the U.S., eight of which were in Southern California, one in the San Francisco bay area, and one in South Carolina.
Codes and standards
Hydrogen codes and standards, as well as codes and technical standards for hydrogen safety and the storage of hydrogen, have been identified as an institutional barrier to deploying hydrogen technologies and developing a hydrogen economy. To enable the commercialization of hydrogen in consumer products, new codes and standards must be developed and adopted by federal, state and local governments.
Official support
In 2003, George W. Bush announced an initiative to promote hydrogen-powered vehicles. In 2009, President Obama and Energy Secretary Steven Chu cut funding for fuel cell technology, believing the technology was still decades away. Under heavy criticism, the funding was partially restored. In 2014 the Obama administration announced that it wanted to speed up production and development of hydrogen-powered vehicles. The press release states that, "by partnering with a private sector, the Obama administration thinks that it can create success stories and help speed up the process". The Department of Energy is spreading a $7.2 million investment across the states of Georgia, Kansas, Pennsylvania, and Tennessee to support projects that fuel vehicles and support power systems. Companies like The Center for Transportation and The Environment, FedEx Express, Air Products and Chemicals, and Sprint are invested in the development of these fuel cells. Fuel cells could also be used in handling equipment such as forklifts, as well as in telecommunications infrastructure.
Senator Byron L. Dorgan stated in 2013: "The Energy and Water Appropriations bill makes investments in our nation's efforts to develop safe, homegrown energy sources that will reduce our reliance on foreign oil. And, because ongoing research and development is necessary to develop game-changing technologies, this bill also restores funding for Hydrogen energy research". Much work has been done on developing these fuel cell cars. The U.S. Department of Energy supports next-generation fuel cell systems, which are among the nation's leading innovative clean energy technologies. In June 2013 the DOE gave $9 million in grants to speed up the technology and another $4.5 million for advanced fuel cell membranes. Minnesota-based 3M will receive $3 million and the Colorado School of Mines will receive $1.5 million. Minnesota is focusing on innovative membranes with improved durability and performance, while Colorado is focusing on fuel cell membranes, making them simpler and more affordable. Last year $54 million was given by the government to the SECA Program as "congress recognized and embraced the role hydrogen fuel cells and their fuels play in the portfolio of energy technologies for the 21st centuries". The Energy and Security program was passed to boost hydrogen environmental cleanup programs and fossil fuel programs. The overall goal of these efforts is to improve efficiency and lower the costs of fuel cells.
Criticism
Critics claim the time frame for overcoming the technical and economic challenges to implementing wide-scale use of hydrogen cars is likely to last for at least several decades, and hydrogen vehicles may never become broadly available. They claim that the focus on the use of the hydrogen car is a dangerous detour from more readily available solutions to reducing the use of fossil fuels in vehicles. In May 2008, Wired News reported that "experts say it will be 40 years or more before hydrogen has any meaningful impact on gasoline consumption or global warming, and we can't afford to wait that long. In the meantime, fuel cells are diverting resources from more immediate solutions."
K. G. Duleep commented that "a strong case exists for continuing fuel-efficiency improvements from conventional technology at relatively low cost." Critiques of hydrogen vehicles are presented in the 2006 documentary, Who Killed the Electric Car?. According to former U.S. Department of Energy official Joseph Romm, "A hydrogen car is one of the least efficient, most expensive ways to reduce greenhouse gases." Asked when hydrogen cars will be broadly available, Romm replied: "Not in our lifetime, and very possibly never." The Los Angeles Times wrote, in February 2009, "Hydrogen fuel-cell technology won't work in cars. ... Any way you look at it, hydrogen is a lousy way to move cars."
The Economist magazine, in September 2008, quoted Robert Zubrin, the author of Energy Victory, as saying: "Hydrogen is 'just about the worst possible vehicle fuel'". The magazine noted the withdrawal of California from earlier goals: "In March the California Air Resources Board, an agency of California's state government and a bellwether for state governments across America, changed its requirement for the number of zero-emission vehicles (ZEVs) to be built and sold in California between 2012 and 2014. The revised mandate allows manufacturers to comply with the rules by building more battery-electric cars instead of fuel-cell vehicles." The magazine also noted that most hydrogen is produced through steam reformation, which creates at least as much emission of carbon per mile as some of today's gasoline cars. On the other hand, if the hydrogen could be produced using renewable energy, "it would surely be easier simply to use this energy to charge the batteries of all-electric or plug-in hybrid vehicles."
The Washington Post asked in November 2009, "But why would you want to store energy in the form of hydrogen and then use that hydrogen to produce electricity for a motor, when electrical energy is already waiting to be sucked out of sockets all over America and stored in auto batteries?" A December 2009 study at UC Davis, published in the Journal of Power Sources, found that, over their lifetimes, hydrogen vehicles will emit more carbon than gasoline vehicles. This agrees with a 2014 analysis. The Motley Fool stated in 2013 that "there are still cost-prohibitive obstacles [for hydrogen cars] relating to transportation, storage, and, most importantly, production."
Volkswagen's Rudolf Krebs said in 2013 that "no matter how excellent you make the cars themselves, the laws of physics hinder their overall efficiency. The most efficient way to convert energy to mobility is electricity." He elaborated: "Hydrogen mobility only makes sense if you use green energy", but ... you need to convert it first into hydrogen "with low efficiencies" where "you lose about 40 percent of the initial energy". You then must compress the hydrogen and store it under high pressure in tanks, which uses more energy. "And then you have to convert the hydrogen back to electricity in a fuel cell with another efficiency loss". Krebs continued: "in the end, from your original 100 percent of electric energy, you end up with 30 to 40 percent." The Business Insider commented:
Pure hydrogen can be industrially derived, but it takes energy. If that energy does not come from renewable sources, then fuel-cell cars are not as clean as they seem. ... Another challenge is the lack of infrastructure. Gas stations need to invest in the ability to refuel hydrogen tanks before FCEVs become practical, and it's unlikely many will do that while there are so few customers on the road today. ... Compounding the lack of infrastructure is the high cost of the technology. Fuel cells are "still very, very expensive".
In 2014, Joseph Romm devoted three articles to updating his critiques of hydrogen vehicles. He states that FCVs still have not overcome the following issues: high cost of the vehicles, high fueling cost, and a lack of fuel-delivery infrastructure. "It would take several miracles to overcome all of those problems simultaneously in the coming decades." Most importantly, he says, "FCVs aren't green" because of escaping methane during natural gas extraction and when hydrogen is produced, as 95% of it is, using the steam reforming process. He concludes that renewable energy cannot economically be used to make hydrogen for an FCV fleet "either now or in the future." GreenTech Media's analyst reached similar conclusions in 2014. In 2015, Clean Technica listed some of the disadvantages of hydrogen fuel cell vehicles. So did Car Throttle. Another Clean Technica writer concluded, "while hydrogen may have a part to play in the world of energy storage (especially seasonal storage), it looks like a dead end when it comes to mainstream vehicles."
Comparison with other types of alternative fuel vehicle
Plug-in hybrid electric vehicles, or PHEVs, are hybrid vehicles that can be plugged into the electric grid and contain an electric motor and also an internal combustion engine. The PHEV concept augments standard hybrid electric vehicles with the ability to recharge their batteries from an external source, enabling increased use of the vehicle's electric motors while reducing their reliance on internal combustion engines. The infrastructure required to charge PHEVs is already in place, and transmission of power from grid to car is about 93% efficient. This, however, is not the only energy loss in transferring power from grid to wheels. AC/DC conversion must take place from the grid's AC supply to the PHEV's DC. This is roughly 98% efficient. The battery then must be charged. As of 2007, lithium iron phosphate batteries were between 80% and 90% efficient in charging/discharging. The battery needs to be cooled; the GM Volt's battery has four coolers and two radiators. As of 2009, "the total well-to-wheels efficiency with which a hydrogen fuel cell vehicle might utilize renewable electricity is roughly 20% (although that number could rise to 25% or a little higher with the kind of multiple technology breakthroughs required to enable a hydrogen economy). The well-to-wheels efficiency of charging an onboard battery and then discharging it to run an electric motor in a PHEV or EV, however, is 80% (and could be higher in the future)—four times more efficient than current hydrogen fuel cell vehicle pathways." A 2006 article in Scientific American argued that PHEVs, rather than hydrogen vehicles, would become standard in the automobile industry. A December 2009 study at UC Davis found that, over their lifetimes, PHEVs will emit less carbon than current vehicles, while hydrogen cars will emit more carbon than gasoline vehicles.
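Multiplying out the grid-to-battery figures given in this paragraph approximately reproduces the 80% grid-to-wheels number quoted for PHEVs and EVs (motor and drivetrain losses are ignored in this rough check).

```python
# Grid-to-battery efficiency chain for a PHEV/EV, using the figures in this section.
GRID_TRANSMISSION = 0.93          # grid to car
AC_DC_CONVERSION = 0.98           # charger conversion
BATTERY_ROUNDTRIP = (0.80, 0.90)  # lithium iron phosphate charge/discharge range

low = GRID_TRANSMISSION * AC_DC_CONVERSION * BATTERY_ROUNDTRIP[0]
high = GRID_TRANSMISSION * AC_DC_CONVERSION * BATTERY_ROUNDTRIP[1]
print(f"grid to battery output: {low:.0%} to {high:.0%}")
# Brackets the ~80% figure quoted above, versus roughly 20-25% for the
# hydrogen fuel cell pathway described earlier.
```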
ICE-based CNG, HCNG or LNG vehicles (Natural gas vehicles or NGVs) use methane (Natural gas or Biogas) directly as a fuel source. Natural gas has a higher energy density than hydrogen gas. NGVs using biogas are nearly carbon neutral. Unlike hydrogen vehicles, CNG vehicles have been available for many years, and there is sufficient infrastructure to provide both commercial and home refueling stations. Worldwide, there were 14.8 million natural gas vehicles by the end of 2011.
A 2008 Technology Review article stated, "Electric cars—and plug-in hybrid cars—have an enormous advantage over hydrogen fuel-cell vehicles in utilizing low-carbon electricity. That is because of the inherent inefficiency of the entire hydrogen fueling process, from generating the hydrogen with that electricity to transporting this diffuse gas long distances, getting the hydrogen in the car, and then running it through a fuel cell—all for the purpose of converting the hydrogen back into electricity to drive the same exact electric motor you'll find in an electric car." Thermodynamically, each additional step in the conversion process decreases the overall efficiency of the process.
A 2013 comparison of hydrogen and battery electric vehicles agreed with the 25% figure from Ulf Bossel in 2006 and stated that the cost of an electric vehicle battery "is rapidly coming down, and the gap will widen further"; that there is little existing infrastructure to transport, store and deliver hydrogen to vehicles, and building it would cost billions of dollars, whereas every household power socket is an "electric vehicle refueling" station; and that the "cost of electricity (depending on the source) is at least 75% cheaper than hydrogen." In 2013 the National Academy of Sciences and the DOE stated that even under optimistic conditions, by 2030 the price of the battery is not expected to go below $17,000 ($200–$250/kWh) for 300 miles of range. In 2013 Matthew Mench, of the University of Tennessee, stated, "If we are sitting around waiting for a battery breakthrough that will give us four times the range than we have now, we are going to be waiting for a long time". Navigant Research (formerly Pike Research), on the other hand, forecasts that "lithium-ion costs, which are tipping the scales at about $500 per kilowatt hour now, could fall to $300 by 2015 and to $180 by 2020." In 2013 Takeshi Uchiyamada, a designer of the Toyota Prius, stated: "Because of its shortcomings – driving range, cost and recharging time – the electric vehicle is not a viable replacement for most conventional cars".
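Reading the NAS/DOE figures together, a $17,000 pack at $200 to $250 per kWh implies roughly a 68 to 85 kWh battery for 300 miles of range; the pack sizes below are derived here for illustration and are not stated in the cited reports.

```python
# Pack size implied by the 2030 battery cost figures quoted above.
PACK_COST_USD = 17_000
for usd_per_kwh in (200, 250):
    kwh = PACK_COST_USD / usd_per_kwh
    print(f"${usd_per_kwh}/kWh -> ~{kwh:.0f} kWh pack for 300 miles of range")
```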
Many electric car designs offer limited driving range, causing range anxiety. For example, the 2013 Nissan Leaf has a range of 75 mi (121 km), the 2014 Mercedes-Benz B-Class Electric Drive has an estimated range of 115 mi (185 km) and the Tesla Model S has a range of up to 265 mi (426 km). However, most US commutes are 30–40 miles (48–64 km) per day round trip, and in Europe most commutes are around 20 kilometres (12 mi) round trip.
In 2013, The New York Times stated that there are only 10 publicly accessible hydrogen filling stations in the U.S., eight of which are in Southern California, and that BEVs' cost-per-mile expense in 2013 is one-third as much as hydrogen cars, when comparing electricity from the grid and hydrogen at a filling station. The Times commented: "By the time Toyota sells its first fuel-cell sedan, there will be about a half-million plug-in vehicles on the road in the United States – and tens of thousands of E.V. charging stations." In 2013 John Swanton of the California Air Resources Board, who sees them as complementary technologies, stated that EVs have the jump on fuel-cell autos, which "are like electric vehicles were 10 years ago. EVs are for real consumers, no strings attached. With EVs you have a lot of infrastructure in place." The Business Insider commented in 2013 that if the energy to produce hydrogen "does not come from renewable sources, then fuel-cell cars are not as clean as they seem. ... Gas stations need to invest in the ability to refuel hydrogen tanks before FCEVs become practical, and it's unlikely many will do that while there are so few customers on the road today. ... Compounding the lack of infrastructure is the high cost of the technology." Fuel cells are "still very, very expensive", even compared to battery-powered EVs.
- "Toyota Unveils 2015 Fuel Cell Sedan, Will Retail in Japan For Around ¥7 Million". transportevolved.com. 2014-06-25. Retrieved 2014-06-26.
- A portfolio of power-trains for Europe: a fact-based analysis
- Romm, Joseph. "Tesla Trumps Toyota: Why Hydrogen Cars Can't Compete With Pure Electric Cars", ThinkProgress, August 5, 2014.
- "Wind-to-Hydrogen Project". Hydrogen and Fuel Cells Research. Golden, CO: National Renewable Energy Laboratory, U.S. Department of Energy. September 2009. Retrieved 7 January 2010.. See also Energy Department Launches Public-Private Partnership to Deploy Hydrogen Infrastructure, US Dept. of Energy, accessed November 15, 2014
- Berman, Bradley (2013-11-22). "Fuel Cells at Center Stage". The New York Times. Retrieved 2013-11-26.
- Davies, Alex (2013-11-22). "Honda Is Working On Hydrogen Technology That Will Generate Power Inside Your Car". The Business Insider. Retrieved 2013-11-26.
- Cox, Julian. "Time To Come Clean About Hydrogen Fuel Cell Vehicles", CleanTechnica.com, June 4, 2014
- Thames & Kosmos kit, Other educational materials, and many more demonstration car kits.
- "New Hydrogen-Powered Land Speed Record from Ford". Motorsportsjournal.com. Retrieved 2010-12-12.
- Voelcker, John. "Decades Of Promises: 'Dude, Where's My Hydrogen Fuel-Cell Car?'", Yahoo.com, March 31, 2015
- "Toyota to Offer $69,000 Car After Musk Pans ‘Fool Cells’". 2014-06-25. Retrieved 2014-06-27.
- Ayre, James. "Toyota To Lose $100,000 On Every Hydrogen FCV Sold?", CleanTechnica.com, November 19, 2014; and Blanco, Sebastian. "Bibendum 2014: Former EU President says Toyota could lose 100,000 euros per hydrogen FCV sedan", GreenAutoblog.com, November 12, 2014
- Whoriskey, Peter. "The Hydrogen Car Gets Its Fuel Back", Washington Post, October 17, 2009
- "Hyundai Debuts All New", Hyundai Australia, July 2011
- Bloomberg News (24 August 2009). "Hydrogen-powered vehicles on horizon". Washington Times. Retrieved 5 September 2009.
- "Hydrogen fuel cells to hit showrooms by 2013", Collision Repair Magazine, 7 June 2012
- Alan Ohnsman. "GM to Maintain Hydrogen Push as Plug-In Volt Readied for Sale". BusinessWeek, March 17, 2010
- Lanz, Walter (December 2001). "Hydrogen Properties" (PDF). U.S. Department of Energy. College of the Desert. Energy Density. Retrieved 2015-10-05.
On this basis, hydrogen’s energy density is poor (since it has such low density) although its energy to weight ratio is the best of all fuels (because it is so light).
- Zubrin, Robert (2007). Energy Victory: Winning the War on Terror by Breaking Free of Oil. Amherst, New York: Prometheus Books. p. 121. ISBN 978-1-59102-591-7.
In order for hydrogen to be used as fuel in a car, it has to be stored in the car. As at the station, this could be done either in the form of super-cold liquid hydrogen or as highly compressed gas. In either case, we come up against serious problems caused by the low density of hydrogen.
- Olah, George A.; Goeppert, Alain; Prakash, G. K. Surya (2006). Beyond Oil and Gas: The Methanol Economy. Weinheim, Germany: Wiley-VCH. p. 155. ISBN 3-527-31275-7.
Despite the advantages of hydrogen ICE, the problem of on-board hydrogen storage, which presently limits the driving range, also remains.
- Brian Warshay, Brian. "The Great Compression: the Future of the Hydrogen Economy", Lux Research, Inc. January 2013
- "China Presents the World's First Hydrogen-Fueled Tram".
- "China's Hydrogen-Powered Future Starts in Trams, Not Cars".
- "Hydrogen scooter by vectrix". Jalopnik.com. 2007-07-13. Retrieved 2010-12-12.
- "Suzuki Burgman fuel-cell scooter". Hydrogencarsnow.com. 2009-10-27. Retrieved 2010-12-12.
- "Fhybrid fuel cell-electric hybrid scooter". Io.tudelft.nl. Retrieved 2010-12-12.
- "SUZUKI - BURGMAN Fuel-Cell Scooter". Retrieved 30 May 2015.
- APFCT won Taiwan BOE project contract for 80 FC scooters fleet demonstration
- "Autostudi S.r.l. H-Due". Ecofriend.org. 2008-04-15. Retrieved 2010-12-12.
- New Holland Wins Gold for Energy Independent Farm Concept or Hydrogen-powered tractor in an Energy Independent Farm
- "Ion tiger hydrogen UAV". Sciencedaily.com. 2009-10-15. Retrieved 2010-12-12.
- David Robertson (3 April 2008). "Boeing tests first hydrogen powered plane". London: The Times.
- "Boeing's 'Phantom Eye' Ford Fusion powered stratocraft". The Register. 2010-07-13.
- "Hydrogen engines get a lift". Accessmylibrary.com. 2008-10-01. Retrieved 2010-12-12.
- Press release: "Fuel Cell Forklifts Gain Ground", fuelcells.org, July 9, 2013
- Fuel cell technologies program overview
- Economic Impact of Fuel Cell Deployment in Forklifts and for Backup Power under the American Recovery and Reinvestment Act
- "Global and Chinese Forklift Industry Report, 2014-2016", Research and Markets, November 6, 2014
- "Fact Sheet: Materials Handling and Fuel Cells"
- "HyLIFT - Clean Efficient Power for Materials Handling". Retrieved 30 May 2015.
- "First Hydrogen Station for Fuel Cell Forklift Trucks in France, for IKEA". Retrieved 30 May 2015.
- "Technologie HyPulsion : des piles pour véhicules de manutention - Horizon Hydrogène Énergie". Retrieved 30 May 2015.
- "HyGear Delivers Hydrogen System for Fuel Cell Based Forklift Trucks". Retrieved 30 May 2015.
- "Hydrogen Fueling Stations Could Reach 5,200 by 2020". Environmental Leader: Environmental & Energy Management News, 20 July 2011, accessed 2 August 2011
- Full Fuel-Cycle Comparison of Forklift Propulsion Systems
- "Fuel cell technology". Retrieved 30 May 2015.
- "Creating Innovative Graphite Solutions for Over 125 Years". GrafTech International. Retrieved 30 May 2015.
- "Rocket propulsion". Braeunig.us. Retrieved 2010-12-12.
- College of the Desert, “Module 1, Hydrogen Properties”, Revision 0, December 2001 Hydrogen Properties. Retrieved 2015-10-05.
- Sutton, George P. and Oscar Biblarz. Rocket Propulsion Elements, Seventh edition, John Wiley & Sons (2001), p. 257, ISBN 0-471-32642-9
- "Fuel cell use in the Space Shuttle". NASA. Retrieved 2012-02-17.
- "H2Mobility - Hydrogen Vehicles - netinform". Retrieved 30 May 2015.
- Eberle, Ulrich; Mueller, Bernd; von Helmolt, Rittmar (2012-07-15). "Fuel cell electric vehicles and hydrogen infrastructure: status 2012". Royal Society of Chemistry. Retrieved 2013-01-08.
- "Toyota’s Fuel Cell Car for 2015 Gets A Whole Lot More Expensive". 2011-11-08. Retrieved 2014-06-27.
- "Accomplishments and Progress". Fuel Cell Technology Program, U.S. Dept. of Energy, June 24, 2011
- "DOE Fuel Cell Technologies Program Record", U.S. Dept. of Energy, September 14, 2012
- "Toyota's Approach to Fuel Cell Vehicles". Toyota. 2014-06-25. p. 33. Retrieved 2014-06-27.
- Telias, Gabriela et al. RD&D cooperation for the development of fuel cell hybrid and electric vehicles, NREL.gov, November 2010, accessed September 1, 2014
- LeSage, Jon. Toyota says freezing temps pose zero problems for fuel cell vehicles, Autoblog.com, February 6, 2014
- "EERE Service life 5000 hours" (PDF). Retrieved 2010-12-12.
- "Fuel Cell School Buses: Report to Congress" (PDF). Retrieved 2010-12-12.
- Schultz, M.G., Thomas Diehl, Guy P. Brasseur, and Werner Zittel. "Air Pollution and Climate-Forcing Impacts of a Global Hydrogen Economy", Science, October 24, 2003 302: 624-627
- "Wind-to-Hydrogen Project". Hydrogen and Fuel Cells Research. Golden, CO: National Renewable Energy Laboratory, U.S. Department of Energy. September 2009. Retrieved 7 January 2010.
- "Hydrogen Fuel Cell Vehicles (FCVs)", Ford Motor Company, accessed November 15, 2014
- F. Kreith, "Fallacies of a Hydrogen Economy: A Critical Analysis of Hydrogen Production and Utilization" in Journal of Energy Resources Technology (2004), 126: 249–257.
- "From TechnologyReview.com "Hell and Hydrogen", March 2007". Technologyreview.com. Retrieved 2011-01-31.
- Bossel, Ulf. "Does a Hydrogen Economy Make Sense?" Proceedings of the IEEE, Vol. 94, No. 10, October 2006
- Heetebrij, Jan. "A vision on a sustainable electric society supported by Electric Vehicles", Olino Renewable Energy, June 5, 2009
- Romm, Joseph. "Climate and hydrogen car advocate gets almost everything wrong about plug-in cars", The Energy Collective, October 6, 2009
- "Comparing Apples to Apples: Well-to-Wheel Analysis of Current ICE and Fuel Cell Vehicle Technologies" (PDF). Argonne National Laboratory. 10 March 2004. Retrieved 4 July 2014.
- Suplee, Curt. "Don't bet on a hydrogen car anytime soon". Washington Post, November 17, 2009
- L. Soler, J. Macanás, M. Muñoz, J. Casado. Journal of Power Sources 169 (2007) 144-149
- F. Kreith (2004). "Fallacies of a Hydrogen Economy: A Critical Analysis of Hydrogen Production and Utilization". Journal of Energy Resources Technology 126: 249–257.
- "US Energy Information Administration, "World Primary Energy Production by Source, 1970–2004"". Eia.doe.gov. Retrieved 2010-12-12.
- Galbraith, Kate and Matthew L. Wald. "Energy Goals a Moving Target for States", The New York Times, December 4, 2008
- Iceland's hydrogen buses zip toward oil-free economy. Retrieved 17-July-2007.
- First Danish Hydrogen Energy Plant Is Operational. Retrieved 17-July-2007.
- "Light Weight Hydrogen 'Tank' Could Fuel Hydrogen Economy". Sciencedaily.com. 2008-11-05. Retrieved 2010-12-12.
- ”Hydrazine fuels hydrogen power hopes.” ChemistryWorld.com, March 2011.
- The Drive Toward Hydrogen Vehicles Just Got Shorter. ChemNews.com, March 2011.
- "DOE Hydrogen and Fuel Cells Program: Background". Retrieved 30 May 2015.
- "Bookmarkable URL intermediate page". Retrieved 30 May 2015.
- Henry, Jim (October 29, 2007). "GM's Fuel-Cell Hedge". BusinessWeek. Retrieved 9 May 2008.
- Gardner, Michael (November 22, 2004). "Is 'hydrogen highway' the answer?". San Diego Union-Tribune. Retrieved 9 May 2008.
- Stanley, Dean. "Shell Takes Flexible Approach to Fueling the Future". hydrogenforecast.com. Archived from the original on January 21, 2008. Retrieved 9 May 2008.
- Romm, Joseph (2004). The Hype about Hydrogen, Fact and Fiction in the Race to Save the Climate. New York: Island Press. ISBN 1-55963-703-X. (ISBN 1-55963-703-X), Chapter 5
- "DOE codes and standards". Hydrogen.energy.gov. Retrieved 2011-01-31.
- Ken Silverstein. "Obama Administration Wants to Speed Up Hydrogen-Powered Vehicles". Retrieved 30 May 2015.
- "Government Funding for Hydrogen Fuel Cells Program Reinstated". Gas 2. Retrieved 30 May 2015.
- Jon LeSage. "DOE funds more hydrogen fuel cell research with $4.5m investment". Autoblog. Retrieved 30 May 2015.
- Meyers, Jeremy P. "Getting Back Into Gear: Fuel Cell Development After the Hype". The Electrochemical Society Interface, Winter 2008, pp. 36–39, accessed August 7, 2011
- White, Charlie. "Hydrogen fuel cell vehicles are a fraud" Dvice TV, July 31, 2008
- Squatriglia, Chuck. "Hydrogen Cars Won't Make a Difference for 40 Years", Wired, May 12, 2008
- Boyd, Robert S. (May 15, 2007). "Hydrogen cars may be a long time coming". McClatchy Newspapers. Retrieved 9 May 2008.
- Neil, Dan (February 13, 2009). "Honda FCX Clarity: Beauty for beauty's sake". Los Angeles Times. Retrieved 11 March 2009.
- Wrigglesworth, Phil. "The car of the perpetual future"' September 4, 2008, retrieved on September 15, 2008
- "Hydrogen Cars' Lifecycle Emits More Carbon Than Gas Cars, Study Says", Digital Trends, January 1, 2010
- Chatsko, Maxx. "1 Giant Obstacle Keeping Hydrogen Fuel Out of Your Gas Tank", The Motley Fool, November 23, 2013
- Blanco, Sebastian. "VW's Krebs talks hydrogen, says 'most efficient way to convert energy to mobility is electricity'", AutoblogGreen, November 20, 2013
- Davies, Alex. "Honda Is Working On Hydrogen Technology That Will Generate Power Inside Your Car", The Business Insider, November 22, 2013
- Romm, Joseph. "Tesla Trumps Toyota Part II: The Big Problem With Hydrogen Fuel Cell Vehicles", CleanProgress.com, August 13, 2014 and "Tesla Trumps Toyota 3: Why Electric Vehicles Are Beating Hydrogen Cars Today", CleanProgress.com, August 25, 2014
- Romm, Joseph. "Tesla Trumps Toyota: Why Hydrogen Cars Can’t Compete With Pure Electric Cars", CleanProgress.com, August 5, 2014
- Hunt, Tam. "Should California Reconsider Its Policy Support for Fuel-Cell Vehicles?", GreenTech Media, July 10, 2014
- Brown, Nicholas. "Hydrogen Cars Lost Much of Their Support, But Why?", Clean Technica, June 26, 2015
- "Engineering Explained: 5 Reasons Why Hydrogen Cars Are Stupid", Car Throttle, October 8, 2015
- Meyers, Glenn. "Hydrogen Economy: Boom or Bust?", Clean Technica, March 19, 2015
- "US government news release". Pnl.gov. 2006-12-11. Retrieved 2011-01-31.
- "Domestic Energy use in the UK". Powerwatch. Retrieved 2011-01-31.
- "CR4 - Blog Entry: Transformer Efficiency Standards Proposed". Google.co.uk. 6 November 2006. Retrieved 19 September 2009.
- "Plug-in Hybrid Electric Vehicle 2007 Conference - Home" (PDF). Retrieved 30 May 2015.
- Stewart, Ben (4 April 2008). "Chevy Volt Plug-in Car Batteries Ready for 2010 - GM Technical Center". Popular Mechanics. Retrieved 19 September 2009.
- Romm, Joseph and Prof. Andrew A. Frank. "Hybrid Vehicles Gain Traction", Scientific American (April 2006)
- "Plug-in Hybrid Advocacy Group". Pluginpartners.org. Retrieved 2011-01-31.
- "Car Fueled With Biogas From Cow Manure: WWU Students Convert Methane Into Natural Gas". Retrieved 30 May 2015.
- "Worldwide NGV Statistics". NGV Journal. Retrieved 2012-04-24.
- "The Last Car You Would Ever Buy – Literally: Why we shouldn't get excited by the latest hydrogen cars", Technology Review, June 18, 2008
- "Efficiency of Hydrogen PEFC, Diesel-SOFC-Hybrid and Battery Electric Vehicles" (PDF). 15 July 2003. Retrieved January 7, 2009.
- "Information from". cta.ornl.gov. Retrieved 2011-01-31.
- Dansie, Mark. "Hydrogen vs Electric", revolution-green.com, July 4, 2013
- "Transitions to Alternative Vehicles and Fuels - The National Academies Press". Retrieved 30 May 2015.
- Katie Spence (16 November 2013). "Toyota’s Hydrogen vs. Tesla’s Batteries: Which Car Will Win?". Retrieved 30 May 2015.
- Los Angeles Times (18 November 2013). "L.A. Auto Show: Will fuel cells make battery electric cars obsolete?". latimes.com. Retrieved 30 May 2015.
- King, Danny. “Li-ion battery prices still headed way, way down, to $180/kWh by 2020”, ‘’AutoblogGreen’’, November 8, 2013
- "Insight: Electric cars head toward another dead end". Reuters. Retrieved 30 May 2015.
- "2013 Nissan Leaf Gets Fuel Economy, Range Improvement, Says EPA", Edmunds.com, May 3, 2013
- Jablansky, Jeffrey. "First ride: 2014 Mercedes-Benz B-Class Electric Drive will beat Nissan Leaf in range, boasts Tesla-built battery", New York Daily News, October 29, 2013
- Welsh, Jonathan. "Is Tesla Model S the Cure for 'Range Anxiety?'", Wall Street Journal, November 24, 2013
- "2009 National Household Travel Survey", US Department of Transportation, 12 August 2014,
- EEA Survey. European Environment Agency
- Scauzillo, Steve. "L.A. Auto Show drives new green-car market", The Trentonian, November 23, 2013
|Wikimedia Commons has media related to Hydrogen vehicles.|
- California Fuel Cell Partnership homepage
- Fuel Cell Today - Market-based intelligence on the fuel cell industry
- Clean Energy Partnership
- C-Net – Hydrogen: More Polluting than Petroleum? Cnet news 2007
- U.S. Dept. of Energy hydrogen pages
- Toronto Star article on hydrogen trains dated October 21, 2007
- NOVA – Video on Fuel Cell Cars (aired on PBS, July 26, 2005)
- Sandia Corporation – Hydrogen internal combustion engine description
- Inside world's first hydrogen-powered production car BBC News, 14 September 2010 |
Sharecropping is a form of agriculture in which a landowner allows a tenant to use the land in return for a share of the crops produced on their portion of land. Sharecropping has a long history and there are a wide range of different situations and types of agreements that have used a form of the system. Some are governed by tradition, and others by law. Legal contract systems such as the Italian mezzadria, the French métayage, the Spanish mediero, the Slavic połowcy,издoльщина or the Islamic system of muqasat, occur widely.
Sharecropping has benefits and costs for both the owners and the tenant. The arrangement encourages the cropper to remain on the land through the harvest, easing the rush for labor at harvest time. At the same time, since the cropper pays in shares of his harvest, owners and croppers share the risks of harvests being large or small and of prices being high or low. Because tenants benefit from larger harvests, they have an incentive to work harder and invest in better methods than in a slave plantation system. However, by dividing the working force into many individual workers, large farms no longer benefit from economies of scale. On the whole, sharecropping was not as economically efficient as the gang agriculture of slave plantations.
In the U.S., "tenant" farmers own their own mules and equipment, and "sharecroppers" do not, and thus sharecroppers are poorer and of lower status. Sharecropping occurred extensively in Scotland, Ireland and colonial Africa, and came into wide use in the Southern United States during the Reconstruction era (1865–1877). The South had been devastated by war – planters had ample land but little money for wages or taxes. At the same time, most of the former slaves had labor but no money and no land – they rejected the kind of gang labor that typified slavery. A solution was the sharecropping system focused on cotton, which was the only crop that could generate cash for the croppers, landowners, merchants and the tax collector. Poor white farmers, who previously had done little cotton farming, needed cash as well and became sharecroppers.
Jeffery Paige made a distinction between the centralized sharecropping found on cotton plantations and the decentralized sharecropping practiced with other crops. The former is characterized by political conservatism and long-lasting tenure. Tenants are tied to the landlord through the plantation store, and their work is heavily supervised, as it was on slave plantations. This form of tenure tends to be replaced by wage labor as markets penetrate. Decentralized sharecropping involves virtually no role for the landlord: plots are scattered, peasants manage their own labor, and the landowners take little direct part in producing the crops. Leases are very short, which leads to peasant radicalism. This form of tenure becomes more common as markets penetrate.
Use of the sharecropper system has also been identified in England (as the practice of "farming to halves"). It is still used in many rural poor areas of the world today, notably in Pakistan and India.
Although there is a perception that sharecropping was exploitative, "evidence from around the world suggests that sharecropping is often a way for differently endowed enterprises to pool resources to mutual benefit, overcoming credit restraints and helping to manage risk." According to Dr. Hunter, "a few acres to the cottage would make the labourers too independent."
It can have more than a passing similarity to serfdom or indenture, particularly where associated with large debts at a plantation store that effectively ties down the workers and their family to the land. It has therefore been seen as an issue of land reform in contexts such as the Mexican Revolution. However, Nyambara states that Eurocentric historiographical devices such as 'feudalism' or 'slavery' often qualified by weak prefixes like 'semi-' or 'quasi-' are not helpful in understanding the antecedents and functions of sharecropping in Africa.
Sharecropping agreements can, however, be made fairly, as a form of tenant farming or sharefarming that has a variable rental payment, paid in arrears. There are three different types of contracts.
- Workers can rent plots of land from the owner for a certain sum and keep the whole crop.
- Workers work on the land and earn a fixed wage from the land owner but keep some of the crop.
- No money changes hands but the worker and land owner each keep a share of the crop.
It has been pointed out that sharecropping was economically inefficient in a free market. However, many outside factors make it efficient. One factor is slave emancipation: sharecropping provided the freed slaves of the US, Brazil and the late Roman Empire with land access. It is efficient also as a way of escaping inflation, hence its rise in 16th-century France and Italy.
It also gave sharecroppers a vested interest in the land, incentivizing hard work and care. However, American plantation owners were wary of this interest, as they felt it would lead to African Americans demanding rights of partnership. Many black laborers rejected the unilateral authority that landowners hoped to achieve, further complicating relations between landowners and sharecroppers.
Landlords opt for sharecropping to avoid the administrative costs and shirking that occurs on plantations and haciendas. It is preferred to cash tenancy because cash tenants take all the risks, and any harvest failure will hurt them and not the landlord. Therefore, they tend to demand lower rents than sharecroppers.
The practice was harmful to tenants with many cases of high interest rates, unpredictable harvests, and unscrupulous landlords and merchants often keeping tenant farm families severely indebted. The debt was often compounded year on year leaving the cropper vulnerable to intimidation and shortchanging. Nevertheless, it appeared to be inevitable, with no serious alternative unless the croppers left agriculture.
A new system of credit, the crop lien, became closely associated with sharecropping. Under this system, a planter or merchant extended a line of credit to the sharecropper while taking the year's crop as collateral. The sharecropper could then draw food and supplies all year long. When the crop was harvested, the planter or merchants who held the lien sold the harvest for the sharecropper and settled the debt.
In settler colonies of colonial Africa, sharecropping was a feature of the agricultural life. White farmers, who owned most of the land, were frequently unable to work the whole of their farm for lack of capital. They, therefore, allowed African farmers to work the excess on a sharecropping basis. In South Africa the 1913 Natives' Land Act outlawed the ownership of land by Africans in areas designated for white ownership and effectively reduced the status of most sharecroppers to tenant farmers and then to farm laborers. In the 1960s, generous subsidies to white farmers meant that most farmers could afford to work their entire farms, and sharecropping faded out.
Sharecropping became widespread in the South as a response to economic upheaval caused by the end of slavery during and after Reconstruction. Sharecropping was a way for very poor farmers, both white and black, to earn a living from land owned by someone else. The landowner provided land, housing, tools and seed, and perhaps a mule, and a local merchant provided food and supplies on credit. At harvest time, the sharecropper received a share of the crop (from one-third to one-half, with the landowner taking the rest). The cropper used his share to pay off his debt to the merchant.
The system started with blacks when large plantations were subdivided. By the 1880s, white farmers also became sharecroppers. The system was distinct from that of the tenant farmer, who rented the land, provided his own tools and mule, and received half the crop. Landowners provided more supervision to sharecroppers, and less or none to tenant farmers. Sharecropping in the United States probably originated in the Natchez District, roughly centered in Adams County, Mississippi with its county seat, Natchez.
Sharecroppers worked a section of the plantation independently, usually growing cotton, tobacco, rice, sugar, and other cash crops, and receiving half of the parcel's output. Sharecroppers also often received their farming tools and all other goods from the landowner they were contracted with. Landowners dictated decisions relating to the crop mix, and sharecroppers were often in agreements to sell their portion of the crop back to the landowner, thus being subjected to manipulated prices. In addition to this, landowners, threatening to not renew the lease at the end of the growing season, were able to apply pressure to their tenants. Sharecropping often proved economically problematic, as the landowners held significant economic control.
Although the sharecropping system was primarily a post-Civil War development, it did exist in antebellum Mississippi, especially in the northeastern part of the state, an area with few slaves or plantations, and most likely existed in Tennessee. Sharecropping, along with tenant farming, was a dominant form in the cotton South from the 1870s to the 1950s, among both blacks and whites.
Following the Civil War of the United States, the South lay in ruins. Plantations and other lands throughout the South were seized by the federal government, and thousands of former slaves, known as freedmen, found themselves free, yet without means to support their families. The situation was made more complex due to General William T. Sherman's Special Field Orders No. 15, which in January 1865, announced he would temporarily grant newly freed families 40 acres of land on the islands and coastal regions of Georgia. This policy was also referred to as Forty Acres and a Mule. Many believed that this policy would be extended to all former slaves and their families as repayment for their treatment at the end of the war.
An alternative path was selected and enforced. In the summer of 1865, President Andrew Johnson, as one of the first acts of Reconstruction, instead ordered all land under federal control be returned to its previous owners. This meant that plantation and land owners in the South regained their land but lacked a labor force. The solution was sharecropping, which enabled the government to match labor with demand and begin the process of economically rebuilding the nation via labor contracts.
In Reconstruction-era United States, sharecropping was one of few options for penniless freedmen to support themselves and their families. Other solutions included the crop-lien system (where the farmer was extended credit for seed and other supplies by the merchant), a rent labor system (where the former slave rents his land but keeps his entire crop), and the wage system (worker earns a fixed wage, but keeps none of their crop). Sharecropping was by far the most economically efficient, as it provided incentives for workers to produce a bigger harvest. It was a stage beyond simple hired labor because the sharecropper had an annual contract. During Reconstruction, the federal Freedmen's Bureau ordered the arrangements and wrote and enforced the contracts.
After the Civil War, plantation owners had to borrow money to farm, at around 15 percent interest. The indebtedness of cotton planters increased through the early 1940s, and the average plantation fell into bankruptcy about every 20 years. It is against this backdrop that the wealthiest owners maintained their concentrated ownership of the land.
Croppers were assigned a plot of land to work, and in exchange owed the owner a share of the crop at the end of the season, usually one half. The owner provided the tools and farm animals. Farmers who owned their own mule and plow were at a higher stage, and were called tenant farmers: They paid the landowner less, usually only a third of each crop. In both cases, the farmer kept the produce of gardens.
The sharecropper purchased seed, tools, and fertilizer, as well as food and clothing, on credit from a local merchant, or sometimes from a plantation store. At harvest time, the cropper would harvest the whole crop and sell it to the merchant who had extended credit. Purchases and the landowner's share were deducted and the cropper kept the difference—or added to his debt.
Though the arrangement protected sharecroppers from the negative effects of a bad crop, many sharecroppers (both black and white) remained quite poor. Arrangements typically left a third of the crop to the sharecropper.
By the early 1930s, there were 5.5 million white tenants, sharecroppers, and mixed cropping/laborers in the United States; and 3 million blacks. In Tennessee, whites made up two thirds or more of the sharecroppers. In Mississippi, by 1900, 36% of all white farmers were tenants or sharecroppers, while 85% of black farmers were. In Georgia, fewer than 16,000 farms were operated by black owners in 1910, while, at the same time, African Americans managed 106,738 farms as tenants.
Sharecropping continued to be a significant institution in Tennessee agriculture for more than 60 years after the Civil War, peaking in importance in the early 1930s, when sharecroppers operated approximately one-third of all farm units in the state.
The situation of landless farmers who challenged the system in the rural South as late as 1941 has been described thus: "he is at once a target subject of ridicule and vitriolic denunciation; he may even be waylaid by hooded or unhooded leaders of the community, some of whom may be public officials. If a white man persists in 'causing trouble', the night riders may pay him a visit, or the officials may haul him into court; if he is a Negro, a mob may hunt him down."
Sharecroppers formed unions in the 1930s, beginning in Tallapoosa County, Alabama in 1931, and Arkansas in 1934. Membership in the Southern Tenant Farmers Union included both blacks and poor whites. As leadership strengthened, meetings became more successful, and protest became more vigorous, landlords responded with a wave of terror.
Sharecroppers' strikes in Arkansas and the Missouri Bootheel, including the 1939 Missouri Sharecroppers' Strike, were documented in the film Oh Freedom After While. The plight of a sharecropper was addressed in the song Sharecropper's Blues, recorded by Charlie Barnet and His Orchestra with vocals by Kay Starr (Decca 24264) in 1944. It was rerecorded and released by Capitol with Starr backed by a studio orchestra (Capitol Americana 40051). Decca then reissued the Barnet/Starr recording.
In the 1930s and 1940s, increasing mechanization virtually brought the institution of sharecropping to an end in the United States. The sharecropping system in the U.S. had increased during the Great Depression with the creation of tenant farmers following the failure of many small farms throughout the Dust Bowl. Traditional sharecropping declined after mechanization of farm work became economical in the mid-20th century. As a result, many sharecroppers were forced off the farms and migrated to cities to work in factories, or became migrant workers in the Western United States during World War II.
Typically, a sharecropping agreement would specify which party was expected to cover certain expenses, like seed, fertilizer, weed control, irrigation district assessments, and fuel. Sometimes the sharecropper covered those costs, but they expected a larger share of the crop in return. The agreement would also indicate whether the sharecropper would use his own equipment to raise the crops, or use the landlord's equipment. The agreement would also indicate whether the landlord would pick up his or her share of the crop in the field or whether the sharecropper would deliver it (and where it would be delivered.)
For example, a landowner may have a sharecropper farming an irrigated hayfield. The sharecropper uses his own equipment, and covers all the costs of fuel and fertilizer. The landowner pays the irrigation district assessments and does the irrigating himself. The sharecropper cuts and bales the hay, and delivers one-third of the baled hay to the landlord's feedlot. The sharecropper might also leave the landlord's share of the baled hay in the field, where the landlord would fetch it when he wanted hay.
Another arrangement could have the sharecropper delivering the landlord's share of the product to market, in which case the landlord would get his share in the form of the sale proceeds. In that case, the agreement should indicate the timing of the delivery to market, which can have a significant effect on the ultimate price of some crops. The market timing decision should probably be decided shortly before harvest, so that the landlord has more complete information about the area's harvest, to determine whether the crop will earn more money immediately after harvest, or whether it should be stored until the price rises. Market timing can entail storage costs and losses to spoilage for some crops as well.
Cooperative farming exists in many forms throughout the United States, Canada, and the rest of the world. Various arrangements can be made through collective bargaining or purchasing to get the best deals on seeds, supplies, and equipment. For example, members of a farmers' cooperative who cannot afford heavy equipment of their own can lease them for nominal fees from the cooperative. Farmers' cooperatives can also allow groups of small farmers and dairymen to manage pricing and prevent undercutting by competitors.
The theory of share tenancy was long dominated by Alfred Marshall's famous footnote in Book VI, Chapter X.14 of Principles, where he illustrated the inefficiency of agricultural share-contracting. Steven N. S. Cheung (1969) challenged this view, showing that with sufficient competition and in the absence of transaction costs, share tenancy will be equivalent to competitive labor markets and therefore efficient.
He also showed that in the presence of transaction costs, share-contracting may be preferred to either wage contracts or rent contracts, due to the mitigation of labor shirking and the provision of risk sharing. Joseph Stiglitz (1974, 1988) suggested that if share tenancy is only a labor contract, then it is only pairwise-efficient and that land-to-the-tiller reform would improve social efficiency by removing the necessity for labor contracts in the first place.
Reid (1973), Murrel (1983), Roumasset (1995) and Allen and Lueck (2004) provided transaction cost theories of share-contracting, wherein tenancy is more of a partnership than a labor contract and both landlord and tenant provide multiple inputs. It has also been argued that the sharecropping institution can be explained by factors such as informational asymmetry (Hallagan, 1978; Allen, 1982; Muthoo, 1998), moral hazard (Reid, 1976; Eswaran and Kotwal, 1985; Ghatak and Pandey, 2000), intertemporal discounting (Roy and Serfes, 2001), price fluctuations (Sen, 2011) or limited liability (Shetty, 1988; Basu, 1992; Sengupta, 1997; Ray and Singh, 2001).
- Larry J. Griffin; Don Harrison Doyle (1995). The South As an American Problem. U. of Georgia Press. p. 168. ISBN 9780820317526.
- Eva O'Donovan, Becoming Free in the Cotton South (2007); Gavin Wright, Old South, New South: Revolutions in the Southern Economy Since the Civil War (1986); Roger L. Ransom and Richard Sutch, One Kind of Freedom: The Economic Consequences of Emancipation (2nd ed. 2008)
- Jeffery Paige, Agrarian Revolution, page 373
- Griffiths, Liz Farming to Halves: A New Perspective on an Absurd and Miserable System in Rural History Today, Issue 6:2004 p.5, accessed at British Agricultural History Society, 16 February 2013.
- Heath, John & Binswanger, Hans P. (October 1998). "Chapter 3: Policy-Induced Effects of Natural Resource Degradation: The Case of Colombia" (PDF). In Lutz, Ernest (ed.). Agriculture and the Environment: Perspectives on Sustainable Rural Development. Washington, DC: The World Bank. p. 32. ISBN 0-8213-4249-5. Retrieved 2011-04-01.
- George Roberts: " The Social History of the People of the Southern Counties of England in past centuries." Lond., 1856, pp. 181-186.
- Pius S. Nyambara (2003). "Rural Landlords, Rural Tenants, and the Sharecropping Complex in Gokwe, Northwestern Zimbabwe, 1980s–2002" (PDF). Archived from the original (PDF) on 2006-03-26. Retrieved 2006-05-18., Centre for Applied Social Sciences, University of Zimbabwe and Land Tenure Center, University of Wisconsin–Madison, March 2003 (200Kb PDF)
- Arthur F. Raper and Ira De A. Reid, Sharecroppers All (1941); Gavin Wright, Old South, New South: Revolutions in the Southern Economy since the Civil War (1986).
- Sharecropping and Sharecroppers, T J Byres
- Royce, Edward (1993). Royce, Edward (ed.). The Origins of Southern Sharecropping. Temple University Press. pp. 181–222. ISBN 9781566390699. JSTOR j.ctt14bt3nz.9.
- Bruce, John W.- Country Profiles of Land Tenure: Africa, 1996 (Lesotho, p. 221) Research Paper No. 130, December 1998, Land Tenure Center, University of Wisconsin-Madison accessed at UMN.edu Archived 2001-11-25 at the Wayback Machine June 19, 2006
- "Sharecropping | Slavery By Another Name Bento | PBS". Sharecropping | Slavery By Another Name Bento | PBS.
- Rufus B. Spain (1967). At Ease in Zion: Social History of Southern Baptists, 1865-1900. p. 130. ISBN 9780817350383.
- Johnny E. Williams (2008). African American Religion and the Civil Rights Movement in Arkansas. Univ. Press of Mississippi. p. 73. ISBN 9781604731866.
- South African History Online, 19 June 1913 – The native land act was passed Archived 14 October 2010 at the Wayback Machine
- Leonard, R. and Longbottom, J., Land Tenure Lexicon: A glossary of terms from English and French speaking West Africa International Institute for Environment and Development (IIED), London, 2000
- Sharon Monteith, ed. (2013). The Cambridge Companion to the Literature of the American South. Cambridge U.P. p. 94. ISBN 9781107036789.
- Joseph D. Reid, "Sharecropping as an understandable market response: The postbellum South." Journal of Economic History (1973) 33#1 pp: 106-130. in JSTOR
- Ronald L. F. Davis "The U. S. Army and the Origins of Sharecropping in the Natchez District-A Case Study" The Journal of Negro History, Vol. 62, No.1 (January 1977), pp. 60–80 in JSTOR
- Woodman, Harold D. (1995). New South – New Law: The legal foundations of credit and labor relations in the Postbellum agricultural South. Louisiana State University Press. ISBN 0-8071-1941-5.
- F. N. Boney (2004-02-06). "Poor Whites". The New Georgia Encyclopedia. Retrieved 2006-05-18.
- Mandle, Jay R. Not Slave, Not Free: The African American Economic Experience Since the Civil War. Duke University Press, 1992, 22.
- Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. 2 edition. Cambridge England ; New York: Cambridge University Press, 2001, 149.
- Charles Bolton, "Farmers Without Land: The Plight of White Tenant Farmers and Sharecroppers," Mississippi History Now, March 2004.
- Robert Tracy McKenzie, "Sharecropping," Tennessee Encyclopedia of History and Culture.
- Gregorie, Anne King (1954). History of Sumter County, South Carolina, p. 274. Library Board of Sumter County.
- Sharecroppers All. Arthur F. Raper and Ira De A. Reid. Chapel Hill 1941. The University of North Carolina Press. pp. 35–36
- The Rockabilly Legends; They Called It Rockabilly Long Before they Called It Rock and Roll by Jerry Naylor and Steve Halliday DVD
- The Devil's Music: A History of the Blues By Giles Oakley Edition: 2. Da Capo Press, 1997, p. 184. ISBN 0-306-80743-2, ISBN 978-0-306-80743-5
- Geisen, James C. (January 26, 2007). "Sharecropping". New Georgia Encyclopedia. Retrieved April 23, 2019.
- Sharecroppers All. Arthur F. Raper and Ira De A. Reid. Chapel Hill 1941. The University of North Carolina Press.
- The Devil's Music: A History of the Blues By Giles Oakley Edition: 2. Da Capo Press, 1997, p. 185. ISBN 0-306-80743-2, ISBN 978-0-306-80743-5
- California Newsreel – Oh Freedom After While
- Charlie Barnet - Sharecropper's Blues. YouTube. 26 August 2011.
- Billboard Oct 25, 1947 Advance Record Releases Hot Jazz p 137
- Billboard - Dec 20, 1947 - p. 98
- Gordon Marshall, "Sharecropping," Encyclopedia.com, 1998.
- Alfred Marshall (1920). Principles of Economics (8th ed.). London: Macmillan and Co., Ltd.
- Cheung, Steven N S (1969). "Transaction Costs, Risk Aversion, and the Choice of Contractual Arrangements". Journal of Law & Economics. 12 (1): 23–42. doi:10.1086/466658. Retrieved 2009-06-14.
- Formalized in Roumasset, James (1979). "Sharecropping, Production Externalities and the Theory of Contracts". American Journal of Agricultural Economics. 61 (4): 640–647. doi:10.2307/1239911. JSTOR 1239911.
- Stiglitz, Joseph (1974). "Incentives and Risk Sharing in Sharecropping" (PDF). The Review of Economic Studies. 41 (2): 219–255 j. doi:10.2307/2296714. JSTOR 2296714.
- Stiglitz, Joseph (1988). "Principal And Agent". Princeton, Woodrow Wilson School – Discussion Paper (12). Retrieved 2009-06-14.
- Reid, Jr., Joseph D. (March 1973). "Sharecropping As An Understandable Market Response: The Post-Bellum South". The Journal of Economic History. 33 (1): 106–130. doi:10.1017/S0022050700076476. JSTOR 2117145.
- Murrell, Peter (Spring 1983). "The Economics of Sharing: A Transactions Cost Analysis of Contractual Choice in Farming". The Bell Journal of Economics. 14 (1): 283–293. doi:10.2307/3003555. JSTOR 3003555.
- Roumasset, James (March 1995). "The nature of the agricultural firm". Journal of Economic Behavior & Organization. 26 (2): 161–177. doi:10.1016/0167-2681(94)00007-2.
- Allen, Douglas W.; Dean Lueck (2004). The Nature of the Farm: Contracts, Risk, and Organization in Agriculture. MIT Press. p. 258. ISBN 9780262511858.
- Hallagan, William (1978). "Self-selection by contractual choice and the theory of sharecropping". Bell Journal of Economics. 9 (2): 344–354. doi:10.2307/3003586. JSTOR 3003586.
- Allen, Franklin (1982). "On share contracts and screening". Bell Journal of Economics. 13 (2): 541–547. doi:10.2307/3003473. JSTOR 3003473.
- Muthoo, Abhinay (1998). "Renegotiation-proof tenurial contracts as screening mechanisms". Journal of Development Economics. 56: 1–26. doi:10.1016/S0304-3878(98)00050-9.
- Reid, Jr., Joseph D. (1976). "Sharecropping and agricultural uncertainty". Economic Development and Cultural Change. 24 (3): 549–576. doi:10.1086/450897. JSTOR 1153005.
- Eswaran, Mukesh; Ashok Kotwal (1985). "A theory of contractual structure in agriculture". American Economic Review. 75 (3): 352–367. JSTOR 1814805.
- Ghatak, Maitreesh; Priyanka Pandey (2000). "Contract choice in agriculture with joint moral hazard in effort and risk". Journal of Development Economics. 63 (2): 303–326. doi:10.1016/S0304-3878(00)00116-4.
- Roy, Jaideep; Konstantinos Serfes (2001). "Intertemporal discounting and tenurial contracts". Journal of Development Economics. 64 (2): 417–436. doi:10.1016/S0304-3878(00)00144-9.
- Sen, Debapriya (2011). "A theory of sharecropping: the role of price behavior and imperfect competition". Journal of Economic Behavior & Organization. 80 (1): 181–199. doi:10.1016/j.jebo.2011.03.006.
- Shetty, Sudhir (1988). "Limited liability, wealth differences, and the tenancy ladder in agrarian economies". Journal of Development Economics. 29: 1–22. doi:10.1016/0304-3878(88)90068-5.
- Basu, Kaushik (1992). "Limited liability and the existence of share tenancy". Journal of Development Economics. 38: 203–220. doi:10.1016/0304-3878(92)90026-6.
- Sengupta, Kunal (1997). "Limited liability, moral hazard and share tenancy". Journal of Development Economics. 52 (2): 393–407. doi:10.1016/S0304-3878(96)00444-0.
- Ray, Tridip; Nirvikar Singh (2001). "Limited liability, contractual choice and the tenancy ladder". Journal of Development Economics. 66: 289–303. doi:10.1016/S0304-3878(01)00163-8.
- Adams, Jane, and D. Gorton, "This Land Ain’t My Land: The Eviction of Sharecroppers by the Farm Security Administration," Agricultural History, 83 (Spring 2009), 323–51.
- Agee, James, and Walker Evans. 1941. Let Us Now Praise Famous Men: Three Tenant Families. Boston: Houghton Mifflin.
- Allen, D. W. and D. Lueck. "Contract Choice in Modern Agriculture: Cash Rent versus Cropshare," Journal of Law and Economics, (1992) v. 35, pp. 397–426.
- Barbagallo, Tricia (June 1, 2005). "Black Beach: The Mucklands of Canastota, New York" (PDF). Archived from the original (PDF) on November 13, 2013. Retrieved 2008-06-04.
- Davis, Ronald L. F. Good and Faithful Labor: From Slavery to Sharecropping in the Natchez District, 1860-1890 Westport, Connecticut, Greenwood Press, 1982
- Ferleger, Louis. "Sharecropping Contracts in the Late-Nineteenth-Century South," Agricultural History Vol. 67, No. 3 (Summer, 1993), pp. 31–46 in JSTOR
- Garrett, Martin A., and Zhenhui Xu. "The Efficiency of Sharecropping: Evidence from the Postbellum South," Southern Economic Journal, Vol. 69, 2003
- Grubbs, Donald H. Cry from the Cotton: The Southern Tenant Farmer's Union and the New Deal (1971)
- Hurt, R. Douglas Hurt. African American Life in the Rural South, 1900–1950 (2003)
- Liebowitz, Jonathan J. "Tenants, Sharecroppers, and the French Agricultural Depression of the Late Nineteenth Century," Journal of Interdisciplinary History, Vol. 19, No. 3 (Winter, 1989), pp. 429–445 in JSTOR
- Reid, Jr., Joseph D. "Sharecropping in History and Theory," Agricultural History Vol. 49, No. 2 (April 1975), pp. 426–440 in JSTOR
- Roll, Jarod. "Out Yonder on the Road": Working Class Self-Representation and the 1939 Roadside Demonstration in Southeast Missouri", Southern Spaces, March 16, 2010. Southernspaces.org
- Shaban, R. A. "Testing Between Competing Models of Sharecropping," Journal of Political Economy, (1987) 95(5), pp. 893–920.
- Singh, N. "Theories of Sharecropping," in P. Bardhan. ed., The Economic Theory of Agrarian Institutions, (1989) pp. 33–72.
- Southworth, Caleb. "Aid to Sharecroppers: How Agrarian Class Structure and Tenant-Farmer Politics Influenced Federal Relief in the South, 1933–1935," Social Science History, Vol. 26, No. 1 (Spring, 2002), pp. 33–70
- Stiglitz, J. "Incentives and Risk Sharing in Share Cropping," Review of Economic Studies, (1974) v.41 219–255.
- Turner, Howard A. (1937). "Farm Tenancy Distribution and Trends in the United States". Law and Contemporary Problems. 4 (4): 424–433. doi:10.2307/1189524. JSTOR 1189524.
- Virts, Nancy. "The Efficiency of Southern Tenant Plantations, 1900–1945," Journal of Economic History, Vol. 51, No. 2 (June 1991), pp. 385–395 in JSTOR
- Wayne, Michael. The Reshaping of Plantation Society: The Natchez District, 1860–1880. Baton Rouge, Louisiana: Louisiana State University Press, 1983.
1. Define the following terms in your own words: (a) hypothesis-testing procedure, (b) .05 significance level, and (c) two-tailed test.
2. List the five steps of hypothesis testing, and explain the procedure and logic of each.
3. Based on the information given for each of the following studies, decide whether to reject the null hypothesis. For each, give (a) the Z-score cutoff (or cutoffs) on the comparison distribution at which the null hypothesis should be rejected, (b) the Z score on the comparison distribution for the sample score, and (c) your conclusion. Assume that all populations are normally distributed.
| Study | μ | σ | Sample Score | p | Tails of Test |
|-------|---|---|--------------|---|---------------|
| A | 5 | 1 | 7 | .05 | 1 (high predicted) |
| C | 5 | 1 | 7 | .01 | 1 (high predicted) |
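The following short Python sketch (not part of the exercise set; the use of the standard-library NormalDist class is an assumption) shows how the sample Z score and the one-tailed cutoffs for studies A and C can be checked:

```python
from statistics import NormalDist

mu, sigma, sample = 5, 1, 7
z_sample = (sample - mu) / sigma            # Z for the sample score: (7 - 5) / 1 = 2.0
cutoff_a = NormalDist().inv_cdf(1 - 0.05)   # one-tailed .05 cutoff, about 1.64
cutoff_c = NormalDist().inv_cdf(1 - 0.01)   # one-tailed .01 cutoff, about 2.33
print(z_sample, round(cutoff_a, 2), round(cutoff_c, 2))
```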
4. Evolutionary theories often emphasize that humans have adapted to their physical environment. One such theory hypothesizes that people should spontaneously follow a 24-hour cycle of sleeping and waking—even if they are not exposed to the usual pattern of sunlight. To test this notion, eight paid volunteers were placed (individually) in a room in which there was no light from the outside and no clocks or other indications of time. They could turn the lights on and off as they wished. After a month in the room, each individual tended to develop a steady cycle. Their cycles at the end of the study were as follows: 25, 27, 25, 23, 24, 25, 26, and 25. Using the .05 level of significance, what should we conclude about the theory that 24 hours is the natural cycle? (That is, does the average cycle length under these conditions differ significantly from 24 hours?) (a) Use the steps of hypothesis testing. (b) Sketch the distributions involved, (c) Explain your answer to someone who has never taken a course in statistics.
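For readers who want to check their work numerically, here is a hedged sketch of a one-sample t computation for this exercise (the code is illustrative and not from the textbook; a t statistic is used because the population variance is unknown):

```python
from statistics import mean, stdev
import math

cycles = [25, 27, 25, 23, 24, 25, 26, 25]   # hours, cycle length for each volunteer
mu0 = 24                                    # cycle length under the null hypothesis
t = (mean(cycles) - mu0) / (stdev(cycles) / math.sqrt(len(cycles)))
print(f"mean = {mean(cycles):.2f} hours, t(7) = {t:.2f}")
# Compare t with the two-tailed .05 cutoff for df = 7 (about plus or minus 2.365).
```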
6. Do students at various universities differ in how sociable they are? Twenty-five students were randomly selected from each of three universities in a region and were asked to report on the amount of time they spent socializing each day with other students. The result for University X was a mean of 5 hours and an estimated population variance of 2 hours; for University Y, M = 4, S² = 1.5; and for University Z, M = 6, S² = 2.5. What should you conclude? Use the .05 level. (a) Use the steps of hypothesis testing, (b) figure the effect size for the study; and (c) explain your answers to parts (a) and (b) to someone who has never had a course in statistics.
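Similarly, a rough sketch (again illustrative, not from the textbook) of the one-way analysis of variance computed directly from the summary statistics given above:

```python
ns    = [25, 25, 25]        # students sampled at each university
means = [5.0, 4.0, 6.0]     # mean daily hours socializing: X, Y, Z
s2s   = [2.0, 1.5, 2.5]     # estimated population variances

grand = sum(means) / len(means)
ms_between = ns[0] * sum((m - grand) ** 2 for m in means) / (len(means) - 1)
ms_within = sum(s2s) / len(s2s)
f_ratio = ms_between / ms_within
print(f"F(2, 72) = {f_ratio:.2f}")   # compare with the .05 cutoff, roughly 3.1
```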
a. Make up a scatter diagram with 10 dots for each of the following situations:
b. perfect positive linear correlation,
c. large but not perfect positive linear correlation,
d. small positive linear correlation,
e. large but not perfect negative linear correlation,
f. no correlation,
g. clear curvilinear correlation.
7. Four research participants take a test of manual dexterity (high scores mean better dexterity) and an anxiety test (high scores mean more anxiety). The scores are as follows.
Which one of the vector triangles correctly shows the magnitude and direction of the vector z as the sum of the vectors x and y? Which of the following is a vector quantity: (a) acceleration, (b) mass, (c) momentum, (d) velocity?

This web page is designed to provide some additional practice with the use of scaled vector diagrams for the addition of two or more vectors.

The parallelogram law of vectors states that if two vectors acting simultaneously at a point are represented, both in magnitude and direction, by the adjacent sides of a parallelogram drawn from that point, then their resultant is represented, both in magnitude and direction, by the diagonal of the parallelogram drawn from the same point. Choose your answers to the questions and click "next" to see the next set of questions; you can skip questions if you would like and come back to them later.

What is vector addition? The process of adding two or more vectors is called vector addition, and the resultant vector is the vector that results from adding two or more vectors together. When several vectors act on a body simultaneously, vector addition is applied to find their resultant effect on the object. Why is vector addition important in physics? Physics comprises plenty of vector quantities, and vector addition is closely related to the addition of forces; interactive HTML5 applets for adding and subtracting vectors help to convey the geometrical meaning of the operation.

Practice questions on vector addition:
- Three forces act on a point: 3 N at 0°, 4 N at 90° and 5 N at 217°. Find the resultant force (a calculation sketch follows this list).
- Given the position vector of a particle, r(t) = (t + 1)i + (t² + 1)j + 2t k, find ...
- Which of the following is represented by a vector?
- Which one of the following is not a vector quantity: (i) velocity, (ii) speed, (iii) displacement, (iv) distance, (v) force, (vi) acceleration?

This is an example of an inclined plane problem, something common in ... Free SAT physics subject questions on vectors, similar to the questions in the SAT test, are presented with detailed solutions and explanations, and the answers are at the bottom of the page. Your time will be best spent if you read each practice problem carefully, attempt to solve the problem with a scaled vector diagram, and then check your answer. Don't forget to answer the question. And then the students learned that there really was no such thing as a bad vector, and everyone lived happily ever after.
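Here is the calculation sketch referred to in the first practice question above. It adds the three forces by components; the Python code is an illustration added here, not part of the original page:

```python
import math

forces = [(3.0, 0.0), (4.0, 90.0), (5.0, 217.0)]   # (magnitude in N, direction in degrees)

fx = sum(f * math.cos(math.radians(a)) for f, a in forces)
fy = sum(f * math.sin(math.radians(a)) for f, a in forces)

magnitude = math.hypot(fx, fy)
direction = math.degrees(math.atan2(fy, fx)) % 360   # counterclockwise from the +x axis
print(f"Resultant: {magnitude:.2f} N at {direction:.0f} degrees")   # about 1.40 N near 135 degrees
```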
FIGURE III.1: A plane triangle with angles A, B, C and opposite sides a, b, c.
It is assumed that the reader is familiar with the sine and cosine formulas for the solution of the triangle:

a/sin A = b/sin B = c/sin C

and

a² = b² + c² − 2bc cos A,
and understands that the art of solving a triangle involves recognition as to which formula is appropriate under which circumstances. Two quick examples - each with a warning - will suffice. Example: A plane triangle has sides a = 7 inches, b = 4 inches and angle B = 28°. Find the angle A.
See figure III.2. We use the sine formula to obtain

sin A = (7 sin 28°)/4 = 0.821575

A = 55° 14'.6
The pitfall is that there are two values of A between 0° and 180° that satisfy sin A = 0.821575, namely 55° 14'.6 and 124° 45'.4. Figure III.3 shows that, given the original data, either of these is a valid solution.
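A short numerical sketch (not part of the original text) makes the same point: the asin function returns only one of the two possible angles, and the supplementary solution has to be supplied explicitly:

```python
import math

sin_a = 7 * math.sin(math.radians(28)) / 4    # 0.821575
a1 = math.degrees(math.asin(sin_a))           # about 55.2 degrees
a2 = 180 - a1                                 # about 124.8 degrees, equally valid here
print(round(a1, 1), round(a2, 1))
```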
The lesson to be learned from this is that all inverse trigonometric functions (sin⁻¹, cos⁻¹, tan⁻¹) have two solutions between 0° and 360°. The function sin⁻¹ is particularly troublesome since, for positive arguments, it has two solutions between 0° and 180°. The reader must always be on guard for "quadrant problems" (i.e. determining which quadrant the desired solution belongs to) and is warned that, unless particular care is taken in programming calculators or computers, quadrant problems are among the most frequent problems in trigonometry, and especially in spherical astronomy.

Example: Find x in the triangle illustrated in figure III.4.
Application of the cosine rule results in

25 = x² + 64 − 16x cos 32°

Solution of the quadratic equation yields

x = 4.133 or 9.435

This illustrates that the problem of "two solutions" is not confined to angles alone. Figure III.4 is drawn to scale for one of the solutions; the reader should draw the second solution to see how it is that two solutions are possible. The reader is now invited to try the following "guaranteed all different" problems by hand calculator. Some may have two real solutions. Some may have none. The reader should draw the triangles accurately, especially those that have two solutions or no solutions. It is important to develop a clear geometric understanding of trigonometric problems, and not merely to rely on the automatic calculations of a machine. Developing these critical...
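Returning to the cosine-rule example, a brief sketch (again, an addition rather than part of the original text) confirms the two values of x via the quadratic formula:

```python
import math

# 25 = x^2 + 64 - 16 x cos 32 degrees, rewritten as x^2 - (16 cos 32 degrees) x + 39 = 0
p = 16 * math.cos(math.radians(32))
disc = math.sqrt(p * p - 4 * 39)
print(f"x = {(p - disc) / 2:.3f} or {(p + disc) / 2:.3f}")   # about 4.133 and 9.435
```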
Open Source Your Knowledge, Become a Contributor
Technology knowledge has to be shared and made accessible for free. Join the movement.
What is Recursion?
In a very basic sense, you can think of recursion as a term used to describe algorithms that solve for a value in a sequence that depends on other values in the same sequence.
One of the most common tutorials for learning recursive algorithms is to calculate the factorial of some integer n, denoted n! The formula can be expressed as follows:
N! = N * (N-1) * (N-2) * ... * 2 * 1
- 5! = 5 * 4 * 3 * 2 * 1 = 120
- 4! = 4 * 3 * 2 * 1 = 24
- 3! = 3 * 2 * 1 = 6
Below is a recursive function that calculates N! Given some input value n, the function will return a call to itself until the condition in the if-statement is false.
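The playground's runnable code block is not reproduced in this text, so the following Python version is a reconstruction based on the description (with the if-statement on line 2 and the recursive return on line 3, as referenced below):

```python
def factorial(n):
    if n > 1:                        # line 2: recursive case
        return n * factorial(n - 1)  # line 3: the function calls itself
    return 1                         # base case: 1! is 1
```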
If you have not worked with this type of algorithm before, then it can be a little counter-intuitive for most top-down programming mindsets. So lets break it down a little further.
In the example above, we want to calculate 6! so we call factorial(6).
The function tries to evaluate. On line 2, since n = 6 and 6 > 1, we return 6 * factorial(5) on line 3. However, we do not return/exit the function at this point, because we still need to evaluate factorial(5).
So the cycle repeats for n = 5, then n = 4 ... n = 2 and finally n = 1.
return 6 * factorial(5)
return 5 * factorial(4)
return 4 * factorial(3)
return 3 * factorial(2)
return 2 * factorial(1)
At this point we have hit the maximum recursion depth of the algorithm. factorial(1) evaluates to 1 and factorial(2) returns 2 * 1. Then factorial(3) returns 3 * 2, factorial(4) returns 4 * 6, and so on until our initial function call factorial(6) returns 720.
The order of execution is extremely important when designing a recursion algorithm. Oftentimes the value you are interested in finding is at the bottom depth of your function calls, with half of your calculations remaining. Just because something is returned from a function in a recursion algorithm does not mean you will see that result like you would in a more linear top-down program.
In the above code, the function will call itself until the parameter n is less than or equal to one. Also, every time it is called recursively, the input parameter n is decremented by 1
return n * factorial(n-1).
This prevents the function from calling itself endlessly over and over again... Or does it? If your program allows for user input, then you have to consider the case when a user enters a negative number. Would the above algorithm be able to handle such a case?
What if the user enters a non-numeric value?
For more complex examples, make sure the value you are interested in calculating is well-defined and will not lead to an infinite recursion depth. Sometimes, depending on how your code is being implemented, you will also have to account for errors from invalid input arguments.
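As one illustration (my own sketch, not the page's code), the function could validate its argument before recursing, rejecting negative numbers and non-numeric values:

```python
def safe_factorial(n):
    if not isinstance(n, int) or n < 0:
        raise ValueError("factorial is only defined for non-negative integers")
    return 1 if n <= 1 else n * safe_factorial(n - 1)
```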
There is no generic recursion algorithm, no catch-all code that can be implemented for any problem. Always try to consider the constraints of the problem you are trying to solve. Sometimes a little creativity is necessary too.
Limiting the Depth of your Calculations
Depending on the problem you are trying to solve, it may be a good idea to limit the depth of your recursive function. For more advanced algorithms such as a Monte Carlo Tree Search (MCTS) used in chess programs and many others, it is unreasonable to try to calculate every possible move. However, if you only want to look 3 or so steps ahead then you need to keep track of the "depth" of your algorithm.
Below is a snippet of an almost identical factorial function to the one above. However, this one keeps track of how many times the factorial function has called itself. If it calls itself 7 times or more, then it will print an error message and return 0.
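That snippet is likewise not reproduced here; the following is a sketch, under the assumption that a depth counter is passed along with each call, of what such a depth-limited factorial could look like:

```python
def factorial(n, depth=1):
    if depth >= 7:                              # stop once the call depth reaches 7
        print("Error: maximum recursion depth reached")
        return 0
    if n > 1:
        return n * factorial(n - 1, depth + 1)  # pass the incremented depth along
    return 1
```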
You can see the rainbow in the sky when the sun shines, but you cannot see anything in the dark. The credit goes to your eyes, which enable you to see the colourful rainbow. Did you ever wonder why you cannot see things in the dark even with your eyes open? Have you ever thought about the science behind the law of light? It is because your eyes can see an object only when that object emits light or reflects light.
Furthermore, you often turn to a mirror or any other shiny object to get a glimpse of your appearance. Whenever light falls on a mirror, it changes the direction of light. However, what we can see depends upon the direction of the light it reflects.
The bouncing back of light rays from the surface of an object is called reflection. To get a clear picture of the laws of reflection you need to understand different terms of lights, rays and angles.
The light which falls on an object is called incident ray.
The ray of light which gets reflected from the surface is called reflected ray.
Angle of Incidence in Law of Light
The angle of incidence is the angle established between the incident ray and normal at the point of incidence.
Angle of Reflection in Law of Light
The angle derived between the reflected ray and normal is called the angle of reflection.
The normal to the reflecting surface at a point is the line that makes an angle of 90° with the surface of the mirror at the point where the incident ray strikes it.
When a ray of light gets reflected from the smooth, shiny surface, it obeys specific laws. These are laws of reflection.
The first law of reflection states that the reflection angle is always equivalent to the angle of incidence. If the incident ray falls on the plane mirror along the normal, i.e. 90°, the reflected ray will travel along the same path
The second law states that the incident ray, the reflected ray and the normal at the point of incidence all lie in the same plane.
Under the law of light, there are two types of reflection depending upon its surface – regular and irregular or diffused reflection.
Laws of Light on a Plane Surface
Regular reflection occurs when light reflects in a definite direction from a smooth or plane surface. Let us understand this with an example. Suppose an object is placed before a mirror, which is a smooth surface: parallel incident rays remain parallel after reflection. The left side of the object appears on the right of the image, and the right side appears on the left. This effect is known as lateral inversion.
Laws of Light on a Rough Surface
Irregular or diffused reflection is a condition when light reflects in an irregular pattern from a rough surface. Let us understand this with an example. Suppose light falls upon a wall. The reflection of parallel rays of light will not be parallel. The reflecting light spreads in different directions. Here also the law of reflection of light is working.
You can see beautiful patterns in a kaleidoscope because of multiple reflections from the mirror placed inside it.
The sunlight is a white light which also consists of seven colours of the rainbow which we can see after the rainfall.
When you stand before two inclined mirrors, you can see multiple images of yourself.
The moon receives the light from the sun, which makes it illuminate. Thus, we can see the moon at night.
Periscope is an excellent example of reflection from the two mirrors that enables you to see far-away objects. They are used by soldiers in bunkers at the border. Furthermore, submarines and tanks also have an inbuilt periscope.
Q1. How Many Types of Reflection are There?
A1. There are two types of reflection depending upon the nature of the reflecting surface. The reflection from a smooth surface differs from the reflection of a rough surface.
When a ray of light falls on a smooth and shiny reflecting surface, it gets reflected in a particular direction. Reflection of light from a smooth surface, such as mirror, a stainless steel plate, etc. is called regular reflection.
When a ray of light falls on an uneven surface, they get reflected in diverse directions. As a result, the reflected ray falls over a larger area, and the image formed is not sharp and clear. Such reflection is irregular or diffused reflection. Reflection of light from a wall, a paper, and many other everyday objects is irregular or diffused reflection.
Q2. Explain how Multiple Images are Formed with an Example. Justify your Answer with the Factors.
A2. If you want to see multiple images of yourself, then you will need to place two or more mirrors at different angles. Multiple reflection is the phenomenon in which you can see many images because the image formed by one mirror acts as an object for the second mirror. The number of visible images depends upon the angle between the two mirrors. You can calculate the number of images formed using the following formula.
Number of images = (360°/ angle of placement) – 1
Suppose you place a candle between two parallel mirrors that are 40 cm apart; the number of images of the candle formed between them is infinite. The angle between parallel mirrors is taken to be 0°, which results in an infinite number of images.
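The formula is easy to experiment with in code. The following short Python sketch (added here for illustration; it is not part of the original article) evaluates it for a few angles, treating parallel mirrors as the 0° case:

```python
def number_of_images(angle_deg):
    if angle_deg == 0:                  # parallel mirrors: infinitely many images
        return float("inf")
    return 360 / angle_deg - 1          # number of images = (360 / angle) - 1

for angle in (90, 60, 45, 0):
    print(angle, "->", number_of_images(angle))   # 3, 5, 7, inf
```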
This past summer, NASA launched its first satellite devoted to measuring atmospheric carbon dioxide, a heat-trapping gas that is driving global warming.
Today (Dec. 18), scientists with the space agency unveiled the first carbon maps obtained by the spacecraft, named the Orbiting Carbon Observatory-2, or OCO-2.
OCO-2 only started collecting its first scientifically useful information at the end of September, but the initial results "are quite amazing," said Annmarie Eldering, OCO-2 deputy project scientist, based at NASA's Jet Propulsion Laboratory in Pasadena, California. [ In Photos: World's Most Polluted Places ]
In a news conference at the annual meeting of the American Geophysical Union in San Francisco, Eldering and her colleagues showed a map of the globe that uses about 600,000 data points taken by OCO-2 from Oct. 1 through Nov. 17. It shows hotspots of carbon dioxide over northern Australia, southern Africa and eastern Brazil.
These carbon spikes could be explained by agricultural fires and land clearing — practices that are widespread during spring in the Southern Hemisphere, OCO-2 scientists said.
NASA scientists aren't just interested in learning more about the understudied effects of biomass burning. As OCO-2 collects more data, the scientists are hoping to compile the most complete picture to date of how carbon dioxide is distributed — geographically and seasonally. They'll also look at the places where that carbon dioxide is removed.
"We feel certain that once we have a larger data set with this kind of density and precision, it will really be valuable to the scientific community and to understand the carbon dioxide fluxes," Eldering said.
OCO-2 launched on July 2 from Vandenberg Air Force Base in California, carried aloft by a United Launch Alliance Delta 2 rocket. About a month later, the spacecraft reached its final, near-polar orbit 438 miles (705 kilometers) above Earth. The $465-million mission was more than a decade in the making. The original OCO spacecraft crashed into the Pacific Ocean in February 2009, after a failure with its rocket.
What sets OCO-2 apart from past spacecraft, such as Japan's Greenhouse Gases Observing Satellite (GOSAT), is the amount of data it can collect.
The satellite has a grating spectrometer to measure carbon dioxide levels with a precision of about 1 part per million, or ppm. (Today's carbon concentration, 400 ppm, is the highest in at least 800,000 years. This number means there are 400 molecules of carbon dioxide for every million air molecules. Before the Industrial Revolution, the carbon concentration is thought to have been about 280 ppm.)
OCO-2 takes about a million measurements each day, generating tens of thousands of useful data points. (Some data have to be thrown out because of cloud cover and uneven elevations.) The satellite can cover the entire globe in 16 days. While this is not the right scale to link an individual source (such as a specific event at a power plant or factory) to a spike in carbon emissions in a given area, the mission scientists say they are more focused on understanding the carbon cycle on a regional, monthly scale.
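As a rough illustration of the parts-per-million figures quoted above, the arithmetic below simply restates the values in the text; it is not an official mission calculation.

```python
# Quick arithmetic on the ppm figures quoted in the text (illustrative only).
co2_ppm_today = 400            # current concentration
co2_ppm_preindustrial = 280    # pre-Industrial Revolution estimate
precision_ppm = 1              # quoted OCO-2 precision

# 400 ppm means 400 CO2 molecules per 1,000,000 air molecules, i.e. 0.04% of air.
print(f"CO2 fraction of air: {co2_ppm_today / 1_000_000:.4%}")
# A 1 ppm precision against a 400 ppm background is 0.25% relative precision.
print(f"Relative precision: {precision_ppm / co2_ppm_today:.2%}")
# Concentrations have risen roughly 43% since pre-industrial times.
print(f"Rise since pre-industrial: {co2_ppm_today / co2_ppm_preindustrial - 1:.0%}")
```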
Water on Mars
Water on Mars exists today almost entirely as ice, though it also exists in small quantities as vapor in the atmosphere and occasionally as low-volume liquid brines in shallow Martian soil. The only place where water ice is visible at the surface is at the north polar ice cap. Abundant water ice is also present beneath the permanent carbon dioxide ice cap at the Martian south pole and in the shallow subsurface at more temperate latitudes. More than five million cubic kilometers of ice have been identified at or near the surface of modern Mars, enough to cover the whole planet to a depth of 35 meters (115 ft). Even more ice is likely to be locked away in the deep subsurface.
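As a quick sanity check of the "cover the whole planet to a depth of 35 meters" figure, the sketch below divides the quoted ice volume by Mars's surface area; the surface-area value (~1.44×10⁸ km²) is an assumed reference figure, not stated in the text.

```python
# Spread the identified ice volume evenly over Mars's surface (rough check).
ice_volume_km3 = 5.0e6             # "more than five million cubic kilometers"
mars_surface_area_km2 = 1.44e8     # assumed reference value for Mars's surface area

layer_depth_m = ice_volume_km3 / mars_surface_area_km2 * 1000   # km -> m
print(f"Global equivalent layer: ~{layer_depth_m:.0f} m")        # ~35 m, matching the text
```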
Some liquid water may occur transiently on the Martian surface today, but only under certain conditions. No large standing bodies of liquid water exist, because the atmospheric pressure at the surface averages just 600 pascals (0.087 psi)—about 0.6% of Earth's mean sea level pressure—and because the global average temperature is far too low (210 K (−63 °C; −82 °F)), leading to either rapid evaporation (sublimation) or rapid freezing. Before about 3.8 billion years ago, Mars may have had a denser atmosphere and higher surface temperatures, allowing vast amounts of liquid water on the surface, possibly including a large ocean that may have covered one-third of the planet. Water has also apparently flowed across the surface for short periods at various intervals more recently in Mars' history. On December 9, 2013, NASA reported that, based on evidence from the Curiosity rover studying Aeolis Palus, Gale Crater contained an ancient freshwater lake that could have been a hospitable environment for microbial life.
Many lines of evidence indicate that water is abundant on Mars and has played a significant role in the planet's geologic history. The present-day inventory of water on Mars can be estimated from spacecraft imagery, remote sensing techniques (spectroscopic measurements, radar, etc.), and surface investigations from landers and rovers. Geologic evidence of past water includes enormous outflow channels carved by floods, ancient river valley networks, deltas, and lakebeds; and the detection of rocks and minerals on the surface that could only have formed in liquid water. Numerous geomorphic features suggest the presence of ground ice (permafrost) and the movement of ice in glaciers, both in the recent past and present. Gullies and slope lineae along cliffs and crater walls suggest that flowing water continues to shape the surface of Mars, although to a far lesser degree than in the ancient past.
Although the surface of Mars was periodically wet and could have been hospitable to microbial life billions of years ago, the current environment at the surface is dry and subfreezing, probably presenting an insurmountable obstacle for living organisms. In addition, Mars lacks a thick atmosphere, ozone layer, and magnetic field, allowing solar and cosmic radiation to strike the surface unimpeded. The damaging effects of ionizing radiation on cellular structure are another of the prime limiting factors on the survival of life on the surface. Therefore, the best potential locations for discovering life on Mars may be in subsurface environments.
Understanding water on Mars is vital to assess the planet's potential for harboring life and for providing usable resources for future human exploration. For this reason, 'Follow the Water' was the science theme of NASA's Mars Exploration Program (MEP) in the first decade of the 21st century. Discoveries by the 2001 Mars Odyssey, Mars Exploration Rovers (MERs), Mars Reconnaissance Orbiter (MRO), and Mars Phoenix Lander have been instrumental in answering key questions about water's abundance and distribution on Mars. ESA's Mars Express orbiter has also provided essential data in this quest. Mars Odyssey, Mars Express, the MER Opportunity rover, MRO, and the Mars Science Laboratory Curiosity rover are still sending back data from Mars, and discoveries continue to be made.
- 1 Historical background
- 2 Evidence from rocks and minerals
- 3 Geomorphic evidence
- 4 Present water ice
- 5 Development of Mars' water inventory
- 6 Ice ages
- 7 Evidence for recent flows
- 8 Habitability assessment
- 9 Findings by probes
Historical background
The notion of water on Mars preceded the space age by hundreds of years. Early telescopic observers correctly assumed that the white polar caps and clouds were indications of water's presence. For many years, the dark regions visible on the surface were interpreted as oceans. These observations, coupled with the fact that Mars has a 24-hour day, led astronomer William Herschel to declare in 1784 that Mars probably offered its inhabitants "a situation in many respects similar to ours."
By the start of the 20th century, most astronomers recognized that Mars was far colder and drier than Earth. The presence of oceans was no longer accepted, so the paradigm changed to an image of Mars as a "dying" planet with only a meager amount of water. The dark areas, which could be seen to change seasonally, were now thought to be tracts of vegetation. The man most responsible for popularizing this view of Mars was Percival Lowell (1855–1916), who imagined a race of Martians constructing a network of canals to bring water from the poles to the inhabitants at the equator. Although generating tremendous public enthusiasm, Lowell's ideas were rejected by most astronomers. The consensus of the scientific establishment at the time is probably best summarized by English astronomer Edward Walter Maunder (1851–1928) who compared the climate of Mars to conditions atop a twenty-thousand-foot peak on an arctic island where only lichen might be expected to survive.
In the meantime, many astronomers were refining the tool of planetary spectroscopy in hope of determining the composition of the Martian atmosphere. Between 1925 and 1943, Walter Adams and Theodore Dunham at the Mount Wilson Observatory tried to identify oxygen and water vapor in the Martian atmosphere, with generally negative results. The only component of the Martian atmosphere known for certain was carbon dioxide (CO2) identified spectroscopically by Gerard Kuiper in 1947. Water vapor was not unequivocally detected on Mars until 1963.
The composition of the polar caps, assumed to be water ice since the time of Cassini (1666), was questioned by a few scientists in the late 1800s who favored CO2 ice, because of the planet's overall low temperature and apparent lack of appreciable water. This hypothesis was confirmed theoretically by Robert Leighton and Bruce Murray in 1966. Today we know that the winter caps at both poles are primarily composed of CO2 ice, but that a permanent (or perennial) cap of water ice remains during the summer at the northern pole. At the southern pole, a small cap of CO2 ice remains during summer, but this cap too is underlain by water ice.
The final piece of the Martian climate puzzle was provided by Mariner 4 in 1965. Grainy television pictures from the spacecraft showed a surface dominated by impact craters, which implied that the surface was very old and had not experienced the level of erosion and tectonic activity seen on Earth. Little erosion meant that liquid water had probably not played a large role in the planet's geomorphology for billions of years. Furthermore, the variations in the radio signal from the spacecraft as it passed behind the planet allowed scientists to calculate the density of the atmosphere. The results showed an atmospheric pressure less than 1% of Earth’s at sea level, effectively precluding the existence of liquid water, which would rapidly boil or freeze at such low pressures. Thus, a vision of Mars was born of a world much like the Moon, but with just a wisp of an atmosphere to blow the dust around. This view of Mars would last nearly another decade until Mariner 9 showed a much more dynamic Mars with hints that the planet’s past environment was more clement than the present one.
On January 24, 2014, NASA reported that current studies on Mars by the Curiosity and Opportunity rovers will now be searching for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable.
For many years it was thought that the observed remains of floods were caused by the release of water from a global water table, but research published in 2015 reveals regional deposits of sediment and ice emplaced 450 million years earlier to be the source. "Deposition of sediment from rivers and glacial melt filled giant canyons beneath primordial ocean contained within the planet's northern lowlands. It was the water preserved in these canyon sediments that was later released as great floods, the effects of which can be seen today."
Evidence from rocks and minerals
Today, it is widely accepted that Mars had abundant water very early in its history, but all large areas of liquid water have since disappeared. A fraction of this water is retained on modern Mars, both as ice and as water locked into the structure of abundant water-rich materials, including clay minerals (phyllosilicates) and sulfates. Studies of hydrogen isotopic ratios indicate that asteroids and comets from beyond 2.5 astronomical units (AU) were the source of Mars' water, which currently totals 6% to 27% of the volume of Earth's present ocean.
Water in weathering products (aqueous minerals)
The primary rock type on the surface of Mars is basalt, a fine-grained igneous rock made up mostly of the mafic silicate minerals olivine, pyroxene, and plagioclase feldspar. When exposed to water and atmospheric gases, these minerals chemically weather into new (secondary) minerals, some of which may incorporate water into their crystalline structures, either as H2O or as hydroxyl (OH). Examples of hydrated (or hydroxylated) minerals include the iron hydroxide goethite (a common component of terrestrial soils); the evaporite minerals gypsum and kieserite; opaline silica; and phyllosilicates (also called clay minerals), such as kaolinite and montmorillonite. All of these minerals have been detected on Mars.
One direct effect of chemical weathering is to consume water and other reactive chemical species, taking them from mobile reservoirs like the atmosphere and hydrosphere and sequestering them in rocks and minerals. The amount of water in the Martian crust stored in hydrated minerals is currently unknown, but may be quite large. For example, mineralogical models of the rock outcroppings examined by instruments on the Opportunity rover at Meridiani Planum suggest that the sulfate deposits there could contain up to 22% water by weight.
On Earth, all chemical weathering reactions involve water to some degree. Thus, many secondary minerals do not actually incorporate water, but still require water to form. Some examples of anhydrous secondary minerals include many carbonates, some sulfates (e.g., anhydrite), and metallic oxides such as the iron oxide mineral hematite. On Mars, a few of these weathering products may theoretically form without water or with scant amounts present as ice or in thin molecular-scale films (monolayers). The extent to which such exotic weathering processes operate on Mars is still uncertain. Minerals that incorporate water or form in the presence of water are generally termed "aqueous minerals."
Aqueous minerals are sensitive indicators of the type of environment that existed when the minerals formed. The ease with which aqueous reactions occur (see Gibbs free energy) depends on the pressure, the temperature, and the concentrations of the gaseous and soluble species involved. Two important properties are pH and oxidation-reduction potential (Eh). For example, the sulfate mineral jarosite forms only in low-pH (highly acidic) water, while phyllosilicates usually form in water of neutral to high pH (alkaline). Eh is a measure of the oxidation state of an aqueous system. Together, Eh and pH indicate the types of minerals that are thermodynamically most likely to form from a given set of aqueous components. Thus, past environmental conditions on Mars, including those conducive to life, can be inferred from the types of minerals present in the rocks.
Aqueous minerals can also form in the subsurface by hydrothermal fluids migrating through pores and fissures. The heat source driving a hydrothermal system may be nearby magma bodies or residual heat from large impacts. One important type of hydrothermal alteration in the Earth's oceanic crust is serpentinization, which occurs when seawater migrates through ultramafic and basaltic rocks. The water-rock reactions oxidize ferrous iron in olivine and pyroxene to produce ferric iron (as the mineral magnetite), yielding molecular hydrogen (H2) as a byproduct. The process creates a highly alkaline and reducing (low Eh) environment favoring the formation of certain phyllosilicates (serpentine minerals) and various carbonate minerals, which together form a rock called serpentinite. The hydrogen gas produced can be an important energy source for chemosynthetic organisms, or it can react with CO2 to produce methane gas, a process that has been considered as a non-biological source for the trace amounts of methane reported in the Martian atmosphere. Serpentine minerals can also store a lot of water (as hydroxyl) in their crystal structure. A recent study has argued that hypothetical serpentinites in the ancient highland crust of Mars could hold as much as a 500 metres (1,600 ft)-thick global equivalent layer (GEL) of water. Although some serpentine minerals have been detected on Mars, no widespread outcroppings are evident from remote sensing data. This fact does not preclude the presence of large amounts of serpentinite hidden at depth in the Martian crust.
The rates at which primary minerals convert to secondary aqueous minerals vary. Primary silicate minerals crystallize from magma under pressures and temperatures vastly higher than conditions at the surface of a planet. When exposed to a surface environment these minerals are out of equilibrium and will tend to interact with available chemical components to form more stable mineral phases. In general, the silicate minerals that crystallize at the highest temperatures (solidify first in a cooling magma) weather the most rapidly. On the Earth and Mars, the most common mineral to meet this criterion is olivine, which readily weathers to clay minerals in the presence of water.
Over 60 meteorites have been found that came from Mars. Some of them contain evidence that they were exposed to water while on Mars. Some Martian meteorites, called basaltic shergottites, appear (from the presence of hydrated carbonates and sulfates) to have been exposed to liquid water prior to ejection into space. It has been shown that another class of meteorites, the nakhlites, were suffused with liquid water around 620 million years ago and that they were ejected from Mars around 10.75 million years ago by an asteroid impact. They fell to Earth within the last 10,000 years.
In 1996, a group of scientists reported the possible presence of microfossils in Allan Hills 84001, a meteorite from Mars. Many studies disputed the validity of the fossils, and it was found that most of the organic matter in the meteorite was of terrestrial origin.
Geomorphic evidence
Lakes and river valleys
The 1971 Mariner 9 spacecraft caused a revolution in our ideas about water on Mars. Huge river valleys were found in many areas. Images showed that floods of water broke through dams, carved deep valleys, eroded grooves into bedrock, and traveled thousands of kilometers. Areas of branched streams in the southern hemisphere suggested that rain once fell. The number of recognised valleys has increased over time. Research published in June 2010 mapped 40,000 river valleys on Mars, roughly quadrupling the number of river valleys that had previously been identified. Martian water-worn features can be classified into two distinct classes: 1) dendritic (branched), terrestrial-scale, widely distributed, Noachian-age valley networks and 2) exceptionally large, long, single-thread, isolated, Hesperian-age outflow channels. Recent work suggests that there may also be a class of currently enigmatic, smaller, younger (Hesperian to Amazonian) channels in the midlatitudes, perhaps associated with the occasional local melting of ice deposits.
Some parts of Mars show inverted relief. This occurs when sediments are deposited on the floor of a stream and then become resistant to erosion, perhaps by cementation. Later the area may be buried. Eventually, erosion removes the covering layer and the former streams become visible since they are resistant to erosion. Mars Global Surveyor found several examples of this process. Many inverted streams have been discovered in various regions of Mars, especially in the Medusae Fossae Formation, Miyamoto Crater, Saheki Crater, and the Juventae Plateau.
A variety of lake basins have been discovered on Mars. Some are comparable in size to the largest lakes on Earth, such as the Caspian Sea, Black Sea, and Lake Baikal. Lakes that were fed by valley networks are found in the southern highlands. There are places that are closed depressions with river valleys leading into them. These areas are thought to have once contained lakes; one is in Terra Sirenum that had its overflow move through Ma'adim Vallis into Gusev Crater, explored by the Mars Exploration Rover Spirit. Another is near Parana Valles and Loire Vallis. Some lakes are thought to have formed by precipitation, while others were formed from groundwater. Lakes are estimated to have existed in the Argyre basin, the Hellas basin, and maybe in Valles Marineris. It is likely that at times in the Noachian, very many craters hosted lakes. These lakes are consistent with a cold, dry (by Earth standards) hydrological environment somewhat like that of the Great Basin of the western USA during the Last Glacial Maximum.
Research from 2010 suggests that Mars also had lakes along parts of the equator. Although earlier research had shown that Mars had a warm and wet early history that has long since dried up, these lakes existed in the Hesperian Epoch, a much later period. Using detailed images from NASA's Mars Reconnaissance Orbiter, the researchers speculate that there may have been increased volcanic activity, meteorite impacts, or shifts in Mars' orbit during this period that warmed Mars' atmosphere enough to melt the abundant ice present in the ground. Volcanoes would have released gases that thickened the atmosphere for a temporary period, trapping more sunlight and making it warm enough for liquid water to exist. In this study, channels were discovered that connected lake basins near Ares Vallis. When one lake filled up, its waters overflowed the banks and carved the channels to a lower area where another lake would form. These dry lakes would be targets to look for evidence (biosignatures) of past life.
On September 27, 2012, NASA scientists announced that the Curiosity rover found direct evidence for an ancient streambed in Gale Crater, suggesting an ancient "vigorous flow" of water on Mars. In particular, analysis of the now dry streambed indicated that the water ran at 3.3 km/h (0.92 m/s), possibly at hip-depth. Proof of running water came in the form of rounded pebbles and gravel fragments that could have only been weathered by strong liquid currents. Their shape and orientation suggests long-distance transport from above the rim of the crater, where a channel named Peace Vallis feeds into the alluvial fan.
Researchers have found a number of examples of deltas that formed in Martian lakes. Finding deltas is a major sign that Mars once had a lot of liquid water. Deltas usually require deep water over a long period of time to form. Also, the water level needs to be stable to keep sediment from washing away. Deltas have been found over a wide geographical range, though there is some indication that deltas may be concentrated around the edges of the putative former northern ocean of Mars.
By 1979 it was thought that outflow channels formed in single, catastrophic ruptures of subsurface water reservoirs, possibly sealed by ice, discharging colossal quantities of water across an otherwise arid Mars surface. In addition, evidence in favor of heavy or even catastrophic flooding is found in the giant ripples in the Athabasca Vallis. Many outflow channels begin at Chaos or Chasma features, providing evidence for the rupture that could have breached a subsurface ice seal.
The branching valley networks of Mars are not consistent with formation by sudden catastrophic release of groundwater, both in terms of their dendritic shapes that do not come from a single outflow point, and in terms of the discharges that apparently flowed along them. Instead, some authors have argued that they were formed by slow seepage of groundwater from the subsurface essentially as springs. In support of this interpretation, the upstream ends of many valleys in such networks begin with box canyon or "amphitheater" heads, which on Earth are typically associated with groundwater seepage. There is also little evidence of finer scale channels or valleys at the tips of the channels, which some authors have interpreted as showing the flow appeared suddenly from the subsurface with appreciable discharge, rather than accumulating gradually across the surface. Others have disputed the strong link between amphitheater heads of valleys and formation by groundwater for terrestrial examples, and have argued that the lack of fine scale heads to valley networks is due to their removal by weathering or impact gardening. Most authors accept that most valley networks are at least partly influenced and shaped by groundwater seep processes.
Groundwater also plays a vital role in controlling broad-scale sedimentation patterns and processes on Mars. According to this hypothesis, groundwater with dissolved minerals came to the surface, in and around craters, and helped to form layers by adding minerals (especially sulfate) and cementing sediments. In other words, some layers may have formed when groundwater rose up, deposited minerals, and cemented existing, loose, aeolian sediments. The hardened layers are consequently more protected from erosion. This process may occur instead of layers forming under lakes. A study published in 2011, using data from the Mars Reconnaissance Orbiter, shows that the same kinds of sediments exist in a large area that includes Arabia Terra. It has been argued that the areas known from satellite remote sensing to be rich in sedimentary rocks are also the areas most likely to experience groundwater upwelling on a regional scale.
Mars ocean hypothesis
The Mars ocean hypothesis proposes that the Vastitas Borealis basin was the site of an ocean of liquid water at least once, and presents evidence that nearly a third of the surface of Mars was covered by a liquid ocean early in the planet's geologic history. This ocean, dubbed Oceanus Borealis, would have filled the Vastitas Borealis basin in the northern hemisphere, a region that lies 4–5 kilometres (2.5–3.1 mi) below the mean planetary elevation. Two major putative shorelines have been suggested: a higher one, dating to a time period of approximately 3.8 billion years ago and concurrent with the formation of the valley networks in the Highlands, and a lower one, perhaps correlated with the younger outflow channels. The higher one, the 'Arabia shoreline', can be traced all around Mars except through the Tharsis volcanic region. The lower, the 'Deuteronilus', follows the Vastitas Borealis formation.
A study in June 2010 concluded that the more ancient ocean would have covered 36% of Mars. Data from the Mars Orbiter Laser Altimeter (MOLA), which measures the altitude of all terrain on Mars, was used in 1999 to determine that the watershed for such an ocean would have covered about 75% of the planet. Early Mars would have required a warmer climate and denser atmosphere to allow liquid water to exist at the surface. In addition, the large number of valley networks strongly supports the possibility of a hydrological cycle on the planet in the past.
The existence of a primordial Martian ocean remains controversial among scientists, and the interpretation of some features as 'ancient shorelines' has been challenged. One problem with the conjectured 2-billion-year-old (2 Ga) shoreline is that it is not flat—i.e., it does not follow a line of constant gravitational potential. This could be due to a change in the distribution of Mars' mass, perhaps due to volcanic eruption or meteor impact; the Elysium volcanic province or the massive Utopia basin that is buried beneath the northern plains have been put forward as the most likely causes.
In March 2015, scientists stated that evidence exists for an ancient Martian ocean, likely in the planet's northern hemisphere and about the size of Earth's Arctic Ocean, or approximately 19% of the Martian surface. This finding was derived from the ratio of deuterium to ordinary hydrogen in water in the modern Martian atmosphere, compared with the ratio found in Earth's oceans. About eight times as much deuterium (relative to hydrogen) was found at Mars as exists on Earth, suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the presence of an ocean. Other scientists caution that this new study has not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water.
New evidence for a northern ocean was published in May 2016. A large team of scientists described how some of the surface in the Ismenius Lacus quadrangle was altered by two tsunamis, caused by asteroids striking the ocean. Both impacts were thought to have been strong enough to create craters 30 km in diameter. The first tsunami picked up and carried boulders the size of cars or small houses, and the backwash from the wave formed channels by rearranging the boulders. The second tsunami came in when the ocean was 300 m lower; it carried a great deal of ice, which was dropped in valleys. Calculations show that the average height of the waves would have been 50 m, with heights varying from 10 m to 120 m. Numerical simulations show that in this particular part of the ocean, two impact craters 30 km in diameter would form every 30 million years. The implication is that a great northern ocean may have existed for millions of years. One argument against an ocean has been the lack of shoreline features; these features may have been washed away by the tsunami events. The parts of Mars studied in this research are Chryse Planitia and northwestern Arabia Terra, and the tsunamis affected some surfaces in the Ismenius Lacus and Mare Acidalium quadrangles.
Present water ice
A significant amount of surface hydrogen has been observed globally by the Mars Odyssey Neutron Spectrometer and Gamma Ray Spectrometer. This hydrogen is thought to be incorporated into the molecular structure of ice, and through stoichiometric calculations the observed fluxes have been converted into concentrations of water ice in the upper meter of the Martian surface. This process has revealed that ice is both widespread and abundant on the modern surface. Below 60 degrees of latitude, ice is concentrated in several regional patches, particularly around the Elysium volcanoes, Terra Sabaea, and northwest of Terra Sirenum, and exists in concentrations of up to 18% ice in the subsurface. Above 60 degrees latitude, ice is highly abundant: poleward of 70 degrees latitude, ice concentrations exceed 25% almost everywhere and approach 100% at the poles. More recently, the SHARAD and MARSIS radar sounding instruments have begun to confirm whether individual surface features are ice-rich. Due to the known instability of ice at current Martian surface conditions, it is thought that almost all of this ice must be covered by a veneer of rocky or dusty material.
The Mars Odyssey neutron spectrometer observations indicate that if all the ice in the top meter of the Martian surface were spread evenly, it would give a Water Equivalent Global layer (WEG) of at least ≈14 centimetres (5.5 in)—in other words, the globally averaged Martian surface is approximately 14% water. The water ice currently locked in both Martian poles corresponds to a WEG of 30 metres (98 ft), and geomorphic evidence favors significantly larger quantities of surface water over geologic history, with WEG as deep as 500 metres (1,600 ft). It is believed that part of this past water has been lost to the deep subsurface, and part to space, although the detailed mass balance of these processes remains poorly understood. The current atmospheric reservoir of water is important as a conduit allowing gradual migration of ice from one part of the surface to another on both seasonal and longer timescales. It is insignificant in volume, with a WEG of no more than 10 micrometres (0.00039 in).
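The WEG figures above can be related with simple arithmetic; the sketch below only restates the values quoted in the text.

```python
# Relating the water-equivalent-global-layer (WEG) values quoted above.
weg_top_meter_cm = 14       # WEG held in the top meter of the surface
soil_depth_cm = 100
# 14 cm of water spread through 1 m of regolith is ~14% water by volume.
print(f"Ice fraction of the top meter: {weg_top_meter_cm / soil_depth_cm:.0%}")

polar_caps_weg_m = 30       # WEG locked in the polar caps
atmosphere_weg_m = 10e-6    # WEG of the atmospheric reservoir (no more than 10 micrometres)
print(f"Polar caps hold ~{polar_caps_weg_m / atmosphere_weg_m:,.0f} times the atmospheric reservoir")
```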
On July 28, 2005, the European Space Agency announced the existence of a crater partially filled with frozen water; some then interpreted the discovery as an "ice lake". Images of the crater, taken by the High Resolution Stereo Camera on board the European Space Agency's Mars Express orbiter, clearly show a broad sheet of ice in the bottom of an unnamed crater located on Vastitas Borealis, a broad plain that covers much of Mars' far northern latitudes, at approximately 70.5° North and 103° East. The crater is 35 kilometres (22 mi) wide and about 2 kilometres (1.2 mi) deep. The height difference between the crater floor and the surface of the water ice is about 200 metres (660 ft). ESA scientists have attributed most of this height difference to sand dunes beneath the water ice, which are partially visible. While scientists do not refer to the patch as a "lake", the water ice patch is remarkable for its size and for being present throughout the year. Deposits of water ice and layers of frost have been found in many different locations on the planet.
As more and more of the surface of Mars has been imaged by the modern generation of orbiters, it has become gradually more apparent that there are probably many more patches of ice scattered across the Martian surface. Many of these putative patches of ice are concentrated in the Martian midlatitudes (≈30–60° N/S of the equator). For example, many scientists believe that the widespread features in those latitude bands variously described as "latitude dependent mantle" or "pasted-on terrain" consist of dust- or debris-covered ice patches, which are slowly degrading. A cover of debris is required both to explain the dull surfaces seen in the images that do not reflect like ice, and also to allow the patches to exist for an extended period of time without subliming away completely. These patches have been suggested as possible water sources for some of the enigmatic channelized flow features like gullies also seen in those latitudes.
Equatorial frozen sea
Surface features consistent with existing pack ice have been discovered in the southern Elysium Planitia. What appear to be plates, ranging in size from 30 metres (98 ft) to 30 kilometres (19 mi), are found in channels leading to a flooded area of approximately the same depth and width as the North Sea. The plates show signs of break-up and rotation that clearly distinguish them from lava plates elsewhere on the surface of Mars. The source of the flood is thought to be the nearby geological fault Cerberus Fossae, which spewed water as well as lava some 2 to 10 million years ago. It has been suggested that the water exited the Cerberus Fossae, then pooled and froze in the low, level plains, and that such frozen lakes may still exist. Not all scientists agree with these conclusions.
Polar ice caps
Both the northern polar cap (Planum Boreum) and the southern polar cap (Planum Australe) are thought to grow in thickness during the winter and partially sublime during the summer. In 2004, the MARSIS radar sounder on the Mars Express satellite targeted the southern polar cap and confirmed that ice there extends to a depth of 3.7 kilometres (2.3 mi) below the surface. In the same year, the OMEGA instrument on the same orbiter revealed that the cap is divided into three distinct parts, with varying contents of frozen water depending on latitude. The first part is the bright part of the polar cap seen in images, centered on the pole, which is a mixture of 85% CO2 ice and 15% water ice. The second part comprises steep slopes known as scarps, made almost entirely of water ice, that ring and fall away from the polar cap to the surrounding plains. The third part encompasses the vast permafrost fields that stretch for tens of kilometres away from the scarps, and is not obviously part of the cap until the surface composition is analysed. NASA scientists calculate that the volume of water ice in the south polar ice cap, if melted, would be sufficient to cover the entire planetary surface to a depth of 11 meters (36 ft). Observations over both poles and more widely over the planet suggest that melting all the surface ice would produce a water equivalent global layer 35 meters deep.
In July 2008, NASA announced that the Phoenix lander had confirmed the presence of water ice at its landing site near the northern polar ice cap (at 68.2° latitude). This was the first direct observation of ice from the surface. Two years later, the shallow radar on board the Mars Reconnaissance Orbiter took measurements of the north polar ice cap and determined that the total volume of water ice in the cap is 821,000 cubic kilometres (197,000 cu mi). That is equal to 30% of the Earth's Greenland ice sheet, or enough to cover the surface of Mars to a depth of 5.6 metres (18 ft). Both polar caps reveal abundant fine internal layering when examined in HiRISE and Mars Global Surveyor imagery. Many researchers have used this layering to try to understand the structure, history, and flow properties of the caps, although its interpretation is not straightforward.
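A rough consistency check of the north-polar-cap numbers quoted above is sketched below; the Greenland ice-sheet volume (~2.85 million km³) and the Mars surface area (~1.44×10⁸ km²) are assumed reference values, not taken from this article.

```python
# Rough check of the north polar cap figures (821,000 km^3 of ice).
cap_volume_km3 = 821_000
greenland_volume_km3 = 2.85e6      # assumed volume of Greenland's ice sheet
mars_surface_area_km2 = 1.44e8     # assumed surface area of Mars

print(f"Fraction of Greenland ice sheet: {cap_volume_km3 / greenland_volume_km3:.0%}")   # ~29%
print(f"Even layer over Mars: ~{cap_volume_km3 / mars_surface_area_km2 * 1000:.1f} m")   # ~5.7 m
```

Both results come out close to the quoted 30% and 5.6 m figures; the small differences simply reflect the assumed reference values.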
Lake Vostok in Antarctica may have implications for liquid water still existing on Mars, because if water existed before the polar ice caps on Mars, it is possible that there is still liquid water below the ice caps.
For many years, various scientists have suggested that some Martian surfaces look like periglacial regions on Earth. By analogy with these terrestrial features, it has been argued for many years that these are regions of permafrost, which would suggest that frozen water lies right beneath the surface. A common feature in the higher latitudes, patterned ground, can occur in a number of shapes, including stripes and polygons. On Earth, these shapes are caused by the freezing and thawing of soil. There are other types of evidence for large amounts of frozen water under the surface of Mars, such as terrain softening, which rounds sharp topographical features. Theoretical calculations and analysis have tended to bear out the possibility that these features are formed by the effects of ground ice. Evidence from Mars Odyssey's Gamma Ray Spectrometer and direct measurements with the Phoenix lander have corroborated that many of these features are intimately associated with the presence of ground ice.
Some areas of Mars are covered with cones that resemble those on Earth where lava has flowed on top of frozen ground. The heat of the lava melts the ice, then changes it into steam. The powerful force of the steam works its way through the lava and produces such rootless cones. These features can be found for example in Athabasca Valles, associated with lava flowing along this outflow channel. Larger cones may be made when the steam passes through thicker layers of lava.
Scalloped topography
Certain regions of Mars display scalloped-shaped depressions. The depressions are suspected to be the remains of a degrading ice-rich mantle deposit. Scallops are caused by ice sublimating from frozen soil. A study published in Icarus found that the landforms of scalloped topography can be made by the subsurface loss of water ice by sublimation under current Martian climate conditions. The model predicts similar shapes when the ground contains large amounts of pure ice, up to many tens of meters in depth. This mantle material was probably deposited from the atmosphere as ice formed on dust when the climate was different due to changes in the tilt of the Mars pole (see "Ice ages", below). The scallops are typically tens of meters deep and from a few hundred to a few thousand meters across. They can be almost circular or elongated. Some appear to have coalesced, forming large, heavily pitted terrain. The process of forming the terrain may begin with sublimation from a crack. There are often polygonal cracks where scallops form, and the presence of scalloped topography seems to be an indication of frozen ground.
These scalloped features are superficially similar to Swiss cheese features, found around the south polar cap. Swiss cheese features are thought to be due to cavities forming in a surface layer of solid carbon dioxide, rather than water ice—although the floors of these holes are probably H2O-rich.
Many large areas of Mars either appear to host glaciers, or carry evidence that they used to be present. Much of the areas in high latitudes, especially the Ismenius Lacus quadrangle, are suspected to still contain enormous amounts of water ice. Recent evidence has led many planetary scientists to believe that water ice still exists as glaciers across much of the Martian mid- and high latitudes, protected from sublimation by thin coverings of insulating rock and/or dust. In January 2009, scientists released the results of a radar study of the glacier-like features called lobate debris aprons in an area called Deuteronilus Mensae, which found widespread evidence of ice lying beneath a few meters of rock debris. Glaciers are associated with fretted terrain, and many volcanoes. Researchers have described glacial deposits on Hecates Tholus, Arsia Mons, Pavonis Mons, and Olympus Mons. Glaciers have also been reported in a number of larger Martian craters in the midlatitudes and above.
Glacier-like features on Mars are known variously as viscous flow features, Martian flow features, lobate debris aprons, or lineated valley fill, depending on the form of the feature, its location, the landforms it is associated with, and the author describing it. Many, but not all, small glaciers seem to be associated with gullies on the walls of craters and with mantling material. The lineated deposits known as lineated valley fill are probably rock-covered glaciers found on the floors of most channels within the fretted terrain around Arabia Terra in the northern hemisphere. Their surfaces have ridged and grooved materials that deflect around obstacles. Lineated floor deposits may be related to lobate debris aprons, which have been shown by orbiting radar to contain large amounts of ice. For many years, researchers interpreted features called 'lobate debris aprons' as glacial flows, and it was thought that ice existed under a layer of insulating rocks. With new instrument readings, it has been confirmed that lobate debris aprons contain almost pure ice covered with a layer of rocks.
Moving ice carries rock material, then drops it as the ice disappears. This typically happens at the snout or edges of the glacier. On Earth, such features would be called moraines, but on Mars they are typically known as moraine-like ridges, concentric ridges, or arcuate ridges. Because ice tends to sublime rather than melt on Mars, and because Mars's low temperatures tend to make glaciers "cold based" (frozen down to their beds and unable to slide), the remains of these glaciers and the ridges they leave do not appear exactly the same as those of normal glaciers on Earth. In particular, Martian moraines tend to be deposited without being deflected by the underlying topography, which is thought to reflect the fact that the ice in Martian glaciers is normally frozen down and cannot slide. Ridges of debris on the surface of the glaciers indicate the direction of ice movement. The surfaces of some glaciers have rough textures due to sublimation of buried ice: the ice evaporates without melting and leaves behind an empty space, and the overlying material then collapses into the void. Sometimes chunks of ice fall from the glacier and get buried in the land surface. When they melt, a more or less round hole remains. Many of these "kettle holes" have been identified on Mars.
Despite strong evidence for glacial flow on Mars, there is little convincing evidence for landforms carved by glacial erosion, e.g., U-shaped valleys, crag and tail hills, arêtes, drumlins. Such features are abundant in glaciated regions on Earth, so their absence on Mars has proven puzzling. The lack of these landforms is thought to be related to the cold-based nature of the ice in most recent glaciers on Mars. Because the solar insolation reaching the planet, the temperature and density of the atmosphere, and the geothermal heat flux are all lower on Mars than they are on Earth, modelling suggests the temperature of the interface between a glacier and its bed stays below freezing and the ice is literally frozen down to the ground. This prevents it from sliding across the bed, which is thought to inhibit the ice's ability to erode the surface.
Development of Mars' water inventory
The variation in Mars's surface water content is strongly coupled to the evolution of its atmosphere and may have been marked by several key stages.
Early Noachian era (4.6 Ga to 4.1 Ga)
Atmospheric loss to space from heavy meteoritic bombardment and hydrodynamic escape. Ejection by meteorites may have removed ~60% of the early atmosphere. Significant quantities of phyllosilicates may have formed during this period, requiring a sufficiently dense atmosphere to sustain surface water, as the spectrally dominant phyllosilicate group, smectite, suggests moderate water-to-rock ratios. However, pH–pCO2 equilibria between smectite and carbonate show that the precipitation of smectite would constrain pCO2 to a value of no more than 1×10⁻² atm (1.0 kPa). As a result, the dominant component of a dense atmosphere on early Mars becomes uncertain if the clays formed in contact with the Martian atmosphere, particularly given the lack of evidence for carbonate deposits. An additional complication is that the ~25% lower brightness of the young Sun would have required an ancient atmosphere with a significant greenhouse effect to raise surface temperatures enough to sustain liquid water. Higher CO2 content alone would have been insufficient, as CO2 precipitates at partial pressures exceeding 1.5 atm (1,500 hPa), reducing its effectiveness as a greenhouse gas.
Middle to late Noachian era (4.1 Ga to 3.8 Ga)
Potential formation of a secondary atmosphere by outgassing dominated by the Tharsis volcanoes, including significant quantities of H2O, CO2, and SO2. Martian valley networks date to this period, indicating globally widespread and temporally sustained surface water as opposed to catastrophic floods. The end of this period coincides with the termination of the internal magnetic field and a spike in meteoritic bombardment. The cessation of the internal magnetic field and subsequent weakening of any local magnetic fields allowed unimpeded atmospheric stripping by the solar wind. For example, when compared with their terrestrial counterparts, 38Ar/36Ar, 15N/14N, and 13C/12C ratios of the Martian atmosphere are consistent with ~60% loss of Ar, N2, and CO2 by solar wind stripping of an upper atmosphere enriched in the lighter isotopes via Rayleigh fractionation. Supplementing the solar wind activity, impacts would have ejected atmospheric components in bulk without isotopic fractionation. Nevertheless, cometary impacts in particular may have contributed volatiles to the planet.
Hesperian era to the present (~3.8 Ga to the present)
Atmospheric enhancement by sporadic outgassing events was countered by solar wind stripping of the atmosphere, albeit less intense than that from the young Sun. Catastrophic floods date to this period, favoring sudden subterranean release of volatiles, as opposed to sustained surface flows. While the earlier portion of this era may have been marked by aqueous acidic environments and Tharsis-centric groundwater discharge dating to the late Noachian, much of the surface alteration during the latter portion is marked by oxidative processes, including the formation of Fe3+ oxides that impart a reddish hue to the Martian surface. Such oxidation of primary mineral phases can be achieved by low-pH (and possibly high-temperature) processes related to the formation of palagonitic tephra, by the action of H2O2 that forms photochemically in the Martian atmosphere, and by the action of water, none of which require free O2. The action of H2O2 may have dominated temporally, given the drastic reduction in aqueous and igneous activity in this recent era, making the observed Fe3+ oxides volumetrically small, though pervasive and spectrally dominant. Nevertheless, aquifers may have driven sustained but highly localized surface water in recent geologic history, as evident in the geomorphology of craters such as Mojave. Furthermore, the Lafayette Martian meteorite shows evidence of aqueous alteration as recently as 650 Ma.
Ice ages
Mars has experienced large-scale changes in the amount and distribution of ice on its surface in its relatively recent geological past, and as on Earth, these are known as ice ages. Ice ages on Mars are very different from the ones that the Earth experiences. During a Martian ice age, the poles get warmer, and water ice leaves the ice caps and is redeposited in the mid latitudes. The moisture from the ice caps travels to lower latitudes in the form of deposits of frost or snow mixed with dust. The atmosphere of Mars contains a great deal of fine dust; water vapor condenses on these particles, which then fall to the ground due to the additional weight of the water coating. When ice at the top of the mantling layer returns to the atmosphere, it leaves behind dust that insulates the remaining ice. The total volume of water removed is a few percent of the ice caps, or enough to cover the entire surface of the planet in about one meter of water. Much of this moisture from the ice caps forms a thick, smooth mantle that is a mixture of ice and dust. This ice-rich mantle, a few meters thick, smooths the land at lower latitudes, but in places it displays a bumpy texture. Multiple stages of glaciation probably occurred. Because there are few craters on the current mantle, it is thought to be relatively young, and it is thought to have been laid down during a relatively recent ice age.
Ice ages are driven by changes in Mars's orbit and tilt, which can be compared to terrestrial Milankovitch cycles. Orbital calculations show that Mars wobbles on its axis far more than Earth does. The Earth is stabilized by its proportionally large moon, so it only wobbles a few degrees. Mars may change its tilt—also known as its obliquity—by many tens of degrees. When this obliquity is high, its poles get much more direct sunlight and heat; this causes the ice caps to warm and become smaller as ice sublimes. Adding to the variability of the climate, the eccentricity of the orbit of Mars changes twice as much as Earth's eccentricity. As the polar ice sublimes, it is redeposited closer to the equator, which receives somewhat less solar insolation at these high obliquities. Computer simulations have shown that a 45° tilt of the Martian axis would result in ice accumulation in areas that display glacial landforms. A 2008 study provided evidence for multiple glacial phases during Late Amazonian glaciation at the dichotomy boundary on Mars.
Evidence for recent flows
Pure liquid water cannot exist in a stable form on the surface of Mars with its present low atmospheric pressure and low temperature, except at the lowest elevations for a few hours. So a geological mystery commenced in 2006, when observations from NASA's Mars Global Surveyor revealed gully deposits that were not there ten years prior, possibly caused by flowing liquid brine during the warmest months on Mars. The images showed gullies in two craters in Terra Sirenum and the Centauri Montes that appear to record the presence of flowing liquid water on Mars at some point between 1999 and 2001.
There is disagreement in the scientific community as to whether or not gullies are formed by liquid water. It is also possible that the flows that carve gullies are dry, or perhaps lubricated by carbon dioxide. Even if gullies are carved by flowing water at the surface, the exact source of the water and the mechanisms behind its motion are not well understood.
In August 2011, NASA announced the discovery by Nepalese American undergraduate student Lujendra Ojha of current seasonal changes on steep slopes below rocky outcrops near crater rims in the southern hemisphere. These dark streaks, now called recurrent slope lineae, were seen to grow downslope during the warmest part of the Martian summer, then gradually fade through the rest of the year, recurring cyclically between years. The researchers suggested these marks were consistent with salty water (brines) flowing downslope and then evaporating, possibly leaving some sort of residue. The CRISM spectroscopic instrument has since made direct observations of hydrous salts appearing at the same time that these recurrent slope lineae form, confirming in 2015 that these lineae are produced by the flow of liquid brines through shallow soils. The lineae contain hydrated chlorate and perchlorate salts (ClO4−), which contain water molecules. The lineae flow downhill in Martian summer, when the temperature is above −23 °C (−9 °F; 250 K). However, the source of the water remains unknown.
Habitability assessment
Life as we understand it requires liquid water, but that is not the only requirement: life also needs an energy source and the materials necessary for cellular growth, all under appropriate environmental conditions. The confirmation that liquid water once flowed on Mars, the existence of nutrients, and the previous discovery of a past magnetic field that protected the planet from cosmic and solar radiation together strongly suggest that Mars could have had the environmental factors to support life. To be clear, the finding of past habitability is not evidence that Martian life has ever actually existed.
A magnetic field protects the atmosphere from erosion by the solar wind, which helps maintain the dense atmosphere necessary for liquid water to exist on the surface of Mars. The two current ecological approaches for predicting the potential habitability of the Martian surface use 19 or 20 environmental factors, with emphasis on water availability, temperature, presence of nutrients, an energy source, and protection from solar ultraviolet and galactic cosmic radiation. In particular, the damaging effect of ionizing radiation on cellular structure is one of the prime limiting factors on the survival of life in potential astrobiological habitats. Even at a depth of 2 meters beneath the surface, any microbes would likely be dormant, cryopreserved by the current freezing conditions, and so metabolically inactive and unable to repair cellular degradation as it occurs.
Therefore, the best potential locations for discovering life on Mars may be in subsurface environments that have not yet been studied. Extensive volcanism in the past may have created subsurface cracks and caves within different strata, and liquid water could have been stored in these subterranean spaces, forming large aquifers with deposits of saline liquid water, minerals, organic molecules, and geothermal heat, potentially providing a current habitable environment away from the harsh surface conditions.
Findings by probes
The images acquired by the Mariner 9 Mars orbiter, launched in 1971, revealed the first direct evidence of past water in the form of dry river beds, canyons (including the Valles Marineris, a system of canyons about 4,020 kilometres (2,500 mi) long), evidence of water erosion and deposition, weather fronts, fogs, and more. The findings from the Mariner 9 mission underpinned the later Viking program. The enormous Valles Marineris canyon system is named after Mariner 9 in honor of its achievements.
By discovering many geological forms that are typically formed from large amounts of water, the two Viking orbiters and the two landers caused a revolution in our knowledge about water on Mars. Huge outflow channels were found in many areas. They showed that floods of water broke through dams, carved deep valleys, eroded grooves into bedrock, and traveled thousands of kilometers. Large areas in the southern hemisphere contained branched valley networks, suggesting that rain once fell. Many craters look as if the impactor fell into mud. When they were formed, ice in the soil may have melted, turned the ground into mud, then the mud flowed across the surface. Regions, called "Chaotic Terrain," seemed to have quickly lost great volumes of water that caused large channels to form downstream. Estimates for some channel flows run to ten thousand times the flow of the Mississippi River. Underground volcanism may have melted frozen ice; the water then flowed away and the ground collapsed to leave chaotic terrain. Also, general chemical analysis by the two Viking landers suggested the surface has been either exposed to or submerged in water in the past.
Mars Global Surveyor
The Mars Global Surveyor's Thermal Emission Spectrometer (TES) is an instrument able to determine the mineral composition of the surface of Mars. Mineral composition gives information on the presence or absence of water in ancient times. TES identified a large (30,000 square kilometres (12,000 sq mi)) area in the Nili Fossae formation that contains the mineral olivine. It is thought that the ancient asteroid impact that created the Isidis basin resulted in faults that exposed the olivine. Because olivine weathers quickly in the presence of water, its discovery is strong evidence that parts of Mars have been extremely dry for a long time. Olivine was also discovered in many other small outcrops within 60 degrees north and south of the equator. The probe also imaged several channels that suggest past sustained liquid flows; two of them are found in Nanedi Valles and Nirgal Vallis.
The Pathfinder lander recorded the diurnal temperature cycle. It was coldest just before sunrise, about −78 °C (−108 °F; 195 K), and warmest just after Mars noon, about −8 °C (18 °F; 265 K). At this location, the temperature never reached the freezing point of water (0 °C (32 °F; 273 K)), so the surface was too cold for pure liquid water to exist.
The atmospheric pressure measured by Pathfinder on Mars is very low – about 0.6% of Earth's – and would not permit pure liquid water to exist on the surface.
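A rough back-of-the-envelope check (our own arithmetic, using Earth's standard sea-level pressure of about 1013 hPa and the well-known triple point of water) shows why:

$$0.006 \times 1013\ \text{hPa} \approx 6.1\ \text{hPa} \approx p_{\mathrm{triple}}(\mathrm{H_2O}) \approx 6.12\ \text{hPa}.$$

Near or below the triple-point pressure, water passes directly between ice and vapor, so there is no temperature at which a stable liquid phase can persist at the surface.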
Other observations were consistent with water being present in the past. Some of the rocks at the Mars Pathfinder site leaned against each other in a manner geologists term imbricated. It is suspected that strong flood waters in the past pushed the rocks around until they faced away from the flow. Some pebbles were rounded, perhaps from being tumbled in a stream. Parts of the ground are crusty, maybe due to cementing by a fluid containing minerals. There was evidence of clouds and maybe fog.
The 2001 Mars Odyssey found much evidence for water on Mars in the form of images, and with its spectrometer it showed that much of the ground is loaded with water ice. Mars has enough ice just beneath the surface to fill Lake Michigan twice. In both hemispheres, from 55° latitude to the poles, Mars has a high density of ice just under the surface; one kilogram of soil contains about 500 grams (18 oz) of water ice. But close to the equator, the soil contains only 2% to 10% water. Scientists think that much of this water is also locked up in the chemical structure of minerals such as clays and sulfates. Although the upper surface contains only a few percent of chemically bound water, ice lies just a few meters deeper, as has been shown in Arabia Terra, the Amazonis quadrangle, and the Elysium quadrangle, which contain large amounts of water ice. Analysis of the data suggests that the southern hemisphere may have a layered structure, suggestive of stratified deposits beneath a now-vanished large body of water.
The instruments aboard Mars Odyssey can only study the top meter of soil, while the radar aboard the Mars Reconnaissance Orbiter can probe a few kilometers deep. In 2002, available data were used to calculate that if the detected water were spread evenly over the planet, it would correspond to a global layer of water (GLW) 0.5–1.5 kilometres (0.31–0.93 mi) deep.
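The arithmetic behind such a global-equivalent-layer figure is simple; the sketch below illustrates it with a purely hypothetical water volume (the variable names and the volume are our illustrative assumptions, not mission-derived values):

```python
# Minimal sketch: converting an assumed water-ice volume into a
# global layer of water (GLW) depth for Mars.
import math

MARS_RADIUS_KM = 3389.5                             # mean radius of Mars
surface_area_km2 = 4 * math.pi * MARS_RADIUS_KM**2  # ~1.44e8 km^2

assumed_water_volume_km3 = 1.0e8                    # hypothetical inventory, km^3

# Spreading that volume evenly over the whole surface gives the layer depth.
glw_km = assumed_water_volume_km3 / surface_area_km2
print(f"Global layer of water: {glw_km:.2f} km ({glw_km * 1000:.0f} m)")
```

With these assumed numbers the layer comes out to roughly 0.7 km, within the 0.5–1.5 km range quoted above, but the input volume here is only a placeholder.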
Thousands of images returned from the Odyssey orbiter also support the idea that Mars once had great amounts of water flowing across its surface. Some images show patterns of branching valleys; others show layers that may have formed under lakes; even river and lake deltas have been identified. For many years researchers thought that glaciers existed under a layer of insulating rocks. Lineated valley fill is one example of these rock-covered glaciers. They are found on the floors of some channels. Their surfaces have ridged and grooved materials that deflect around obstacles. Lineated floor deposits may be related to lobate debris aprons, which have been shown by orbiting radar to contain large amounts of ice.
The Phoenix lander also confirmed the existence of large amounts of water ice in the northern region of Mars. This finding was predicted by previous orbital data and theory, and was measured from orbit by the Mars Odyssey instruments. On June 19, 2008, NASA announced that dice-sized clumps of bright material in the "Dodo-Goldilocks" trench, dug by the robotic arm, had vaporized over the course of four days, strongly indicating that the bright clumps were composed of water ice that sublimes following exposure. Even though CO2 (dry ice) also sublimes under the conditions present, it would do so at a rate much faster than observed. On July 31, 2008, NASA announced that Phoenix further confirmed the presence of water ice at its landing site. During the initial heating cycle of a sample, the mass spectrometer detected water vapor when the sample temperature reached 0 °C (32 °F; 273 K). Liquid water cannot exist on the surface of Mars with its present low atmospheric pressure and temperature, except at the lowest elevations for short periods.
Perchlorate (ClO4), a strong oxidizer, was confirmed to be present in the soil. When mixed with water, the chemical can lower the freezing point of water, much as salt spread on roads melts ice.
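For dilute, ideal solutions the size of this effect can be sketched with the standard freezing-point-depression relation (an idealisation; concentrated Martian perchlorate brines depart strongly from it, and the numbers below are illustrative only):

$$\Delta T_f = i\,K_f\,m,$$

where $i$ is the van 't Hoff factor (number of ions per dissolved formula unit), $K_f \approx 1.86\ \mathrm{K\,kg\,mol^{-1}}$ is the cryoscopic constant of water, and $m$ is the molality. A 1 mol/kg solution of a salt dissociating into two ions would give $\Delta T_f \approx 2 \times 1.86 \times 1 \approx 3.7\ \mathrm{K}$; laboratory studies of concentrated magnesium and calcium perchlorate brines report eutectic freezing points some 70 K below that of pure water.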
When Phoenix landed, the retrorockets splashed soil and melted ice onto the vehicle. Photographs showed the landing had left blobs of material stuck to the landing struts. The blobs expanded at a rate consistent with deliquescence, darkened before disappearing (consistent with liquefaction followed by dripping), and appeared to merge. These observations, combined with thermodynamic evidence, indicated that the blobs were likely liquid brine droplets. Other researchers suggested the blobs could be "clumps of frost." In 2015 it was confirmed that perchlorate plays a role in forming recurring slope lineae on steep slopes.
For about as far as the camera can see, the landing site is flat but shaped into polygons 2–3 metres (6.6–9.8 ft) in diameter, bounded by troughs 20–50 centimetres (7.9–19.7 in) deep. These shapes are caused by ice in the soil expanding and contracting with major temperature changes. The microscope showed that the soil on top of the polygons is composed of rounded and flat particles, probably a type of clay. Ice is present a few inches below the surface in the middle of the polygons, and along their edges the ice is at least 8 inches (200 mm) deep.
Snow was observed to fall from cirrus clouds. The clouds formed at a level in the atmosphere where the temperature was around −65 °C (−85 °F; 208 K), so they would have to be composed of water ice rather than carbon dioxide ice (CO2, or dry ice), which forms only at much lower temperatures, below about −120 °C (−184 °F; 153 K). As a result of mission observations, it is now suspected that water ice (snow) would have accumulated later in the year at this location. The highest temperature measured during the mission, which took place during the Martian summer, was −19.6 °C (−3.3 °F; 253.6 K), while the coldest was −97.7 °C (−143.9 °F; 175.5 K). So, in this region the temperature remained far below the freezing point (0 °C (32 °F; 273 K)) of water.
Mars Exploration Rovers
The Mars Exploration Rovers, Spirit and Opportunity, found a great deal of evidence for past water on Mars. The Spirit rover landed in what was thought to be a large lake bed. The lake bed had been covered over with lava flows, so evidence of past water was initially hard to detect. On March 5, 2004, NASA announced that Spirit had found hints of water history on Mars in a rock dubbed "Humphrey".
As Spirit traveled in reverse in December 2007, pulling a seized wheel behind it, the wheel scraped off the upper layer of soil, uncovering a patch of white ground rich in silica. Scientists think it must have been produced in one of two ways: either hot-spring deposits formed when water dissolved silica at one location and carried it to another (i.e. a geyser), or acidic steam rising through cracks in rocks stripped them of their mineral components, leaving silica behind. The Spirit rover also found evidence for water in the Columbia Hills of Gusev crater. In the Clovis group of rocks, the Mössbauer spectrometer (MB) detected goethite, which forms only in the presence of water, as well as iron in the oxidized form Fe3+ and carbonate-rich rocks, indicating that regions of the planet once harbored water.
The Opportunity rover was directed to a site that had displayed large amounts of hematite from orbit. Hematite often forms in the presence of water. The rover indeed found layered rocks and marble- or blueberry-like hematite concretions. Elsewhere on its traverse, Opportunity investigated aeolian dune stratigraphy in Burns Cliff in Endurance Crater. Its operators concluded that the preservation and cementation of these outcrops had been controlled by the flow of shallow groundwater. Over its years of continuous operation, Opportunity continued to send back evidence that this area of Mars was soaked in liquid water in the past.
The MER rovers found evidence for ancient wet environments that were very acidic. In fact, what Opportunity mostly found evidence for was sulphuric acid, a chemical harsh to life. But on May 17, 2013, NASA announced that Opportunity had found clay deposits of the kind that typically form in wet environments of near-neutral acidity. This find provides additional evidence for an ancient wet environment possibly favorable for life.
Mars Reconnaissance Orbiter
The Mars Reconnaissance Orbiter's HiRISE instrument has taken many images that strongly suggest that Mars has had a rich history of water-related processes. A major discovery was finding evidence of ancient hot springs; if they ever hosted microbial life, they may contain biosignatures. Research published in January 2010 described strong evidence for sustained precipitation in the area around Valles Marineris. The types of minerals there are associated with water, and the high density of small branching channels indicates a great deal of precipitation.
Rocks on Mars have been found to frequently occur as layers, called strata, in many different places. Layers can form in various ways, including through volcanism, wind, or water. Light-toned rocks on Mars have been associated with hydrated minerals like sulfates and clays.
The ice mantle under the shallow subsurface is thought to result from frequent, major climate changes. Changes in Mars' orbit and tilt cause significant changes in the distribution of water ice from polar regions down to latitudes equivalent to Texas. During certain climate periods water vapor leaves polar ice and enters the atmosphere. The water returns to the ground at lower latitudes as deposits of frost or snow mixed generously with dust. The atmosphere of Mars contains a great deal of fine dust particles. Water vapor condenses on the particles, then they fall down to the ground due to the additional weight of the water coating. When ice at the top of the mantling layer goes back into the atmosphere, it leaves behind dust, which insulates the remaining ice.
In 2008, research with the Shallow Radar on the Mars Reconnaissance Orbiter provided strong evidence that the lobate debris aprons (LDAs) in Hellas Planitia and in the mid-northern latitudes are glaciers covered with a thin layer of rocks. The radar also detected a strong reflection from the top and base of LDAs, meaning that pure water ice makes up the bulk of the formations. The discovery of water ice in LDAs demonstrates that water ice is found at even lower latitudes.
Research published in September 2009 demonstrated that some new craters on Mars show exposed, pure water ice. After a time, the ice disappears, sublimating into the atmosphere. The ice is only a few feet deep. It was confirmed with the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) on board the Mars Reconnaissance Orbiter.
Very early in its ongoing mission, NASA's Curiosity rover discovered unambiguous fluvial sediments on Mars. The properties of the pebbles in these outcrops suggested former vigorous flow on a streambed, with flow between ankle- and waist-deep. These rocks were found at the foot of an alluvial fan system descending from the crater wall, which had previously been identified from orbit.
In October 2012, the first X-ray diffraction analysis of Martian soil was performed by Curiosity. The results revealed the presence of several minerals, including feldspar, pyroxenes, and olivine, and suggested that the Martian soil in the sample was similar to the weathered basaltic soils of Hawaiian volcanoes. The sample analyzed was composed of dust distributed by global dust storms and local fine sand. So far, the materials Curiosity has analyzed are consistent with the initial ideas of deposits in Gale Crater recording a transition through time from a wet to a dry environment.
In December 2012, NASA reported that Curiosity had performed its first extensive soil analysis, revealing the presence of water molecules, sulfur, and chlorine in the Martian soil. In March 2013, NASA reported evidence of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks such as "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 cm (2.0 ft), along the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain.
On September 26, 2013, NASA scientists reported the Mars Curiosity rover detected abundant chemically-bound water (1.5 to 3 weight percent) in soil samples at the Rocknest region of Aeolis Palus in Gale Crater. In addition, NASA reported the rover found two principal soil types: a fine-grained mafic type and a locally derived, coarse-grained felsic type. The mafic type, similar to other martian soils and martian dust, was associated with hydration of the amorphous phases of the soil. Also, perchlorates, the presence of which may make detection of life-related organic molecules difficult, were found at the Curiosity rover landing site (and earlier at the more polar site of the Phoenix lander) suggesting a "global distribution of these salts". NASA also reported that Jake M rock, a rock encountered by Curiosity on the way to Glenelg, was a mugearite and very similar to terrestrial mugearite rocks.
On December 9, 2013, NASA reported that the planet Mars once had a large freshwater lake (which could have been a hospitable environment for microbial life), based on evidence from the Curiosity rover studying the plain Aeolis Palus near Mount Sharp in Gale Crater.
On December 16, 2014, NASA reported detecting an unusual increase, then decrease, in the amount of methane in the atmosphere of Mars; in addition, organic chemicals were detected in powder drilled from a rock by the Curiosity rover. Also, based on deuterium-to-hydrogen ratio studies, much of the water at Gale Crater was found to have been lost during ancient times, before the lakebed in the crater formed; afterwards, large amounts of water continued to be lost.
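The reasoning behind such deuterium-to-hydrogen (D/H) estimates can be sketched with a simple Rayleigh-fractionation relation (an idealised model; the fractionation factor and numbers below are illustrative assumptions, not mission results):

$$\frac{(\mathrm{D/H})_{\text{now}}}{(\mathrm{D/H})_{0}} = f^{\,\alpha - 1},$$

where $f$ is the fraction of the original water reservoir that remains and $\alpha < 1$ is the effective fractionation factor for escape to space (light hydrogen escapes more readily than deuterium, enriching the remaining water in D). For example, assuming $\alpha \approx 0.1$, a measured enrichment of $(\mathrm{D/H})_{\text{now}}/(\mathrm{D/H})_{0} \approx 3$ would imply $f = 3^{-1/0.9} \approx 0.3$, i.e. roughly two-thirds of the original water lost.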
On April 13, 2015, Nature Geoscience published an analysis of humidity and ground-temperature data collected by Curiosity, showing evidence that films of liquid brine form in the upper 5 cm of Mars's subsurface at night. However, the water activity and temperature remain below the requirements for reproduction and metabolism of known terrestrial microorganisms.
- Atmosphere of Mars#Water
- Climate of Mars
- Colonization of Mars
- Evolution of water on Mars and Earth
- Extraterrestrial life
- Extraterrestrial liquid water
- Glaciers on Mars
- Groundwater on Mars
- Jezero crater
- Lakes on Mars
- Life on Mars
- List of quadrangles on Mars
- List of rocks on Mars
- Lobate debris apron
- Mars Express § Scientific discoveries and important events
- Mars Global Surveyor § Discovery of water ice on Mars
- Mars § Hydrology
- Martian canal
- Scalloped topography
- Scientific information from the Mars Exploration Rover mission
- Uzboi-Landon-Morava (ULM)
- Water vapor § Extraterrestrial water vapor
- Jakosky, B.M.; Haberle, R.M. (1992). "The Seasonal Behavior of Water on Mars". In Kieffer, H.H.; et al. Mars. Tucson, AZ: University of Arizona Press. pp. 969–1016.
- Martín-Torres, F. Javier; Zorzano, María-Paz; Valentín-Serrano, Patricia; Harri, Ari-Matti; Genzer, Maria (April 13, 2015). "Transient liquid water and water activity at Gale crater on Mars". Nature Geoscience. 8: 357–361. doi:10.1038/ngeo2412. Retrieved April 14, 2015.
- Ojha, L.; Wilhelm, M. B.; Murchie, S. L.; McEwen, A. S.; Wray, J. J.; Hanley, J.; Massé, M.; Chojnacki, M. (2015). "Spectral evidence for hydrated salts in recurring slope lineae on Mars". Nature Geoscience. 8: 829–832. doi:10.1038/ngeo2546.
- Carr, M.H. (1996). Water on Mars. New York: Oxford University Press. p. 197.
- Bibring, J.-P.; Langevin, Yves; Poulet, François; Gendrin, Aline; Gondet, Brigitte; Berthé, Michel; Soufflot, Alain; Drossart, Pierre; Combes, Michel; Bellucci, Giancarlo; Moroz, Vassili; Mangold, Nicolas; Schmitt, Bernard; Omega Team, the; Erard, S.; Forni, O.; Manaud, N.; Poulleau, G.; Encrenaz, T.; Fouchet, T.; Melchiorri, R.; Altieri, F.; Formisano, V.; Bonello, G.; Fonti, S.; Capaccioni, F.; Cerroni, P.; Coradini, A.; Kottsov, V.; et al. (2004). "Perennial Water Ice Identified in the South Polar Cap of Mars". Nature. 428 (6983): 627–630. Bibcode:2004Natur.428..627B. PMID 15024393. doi:10.1038/nature02461.
- "Water at Martian south pole". European Space Agency (ESA). March 17, 2004.
- "Mars Odyssey: Newsroom". Mars.jpl.nasa.gov. May 28, 2002.
- Feldman, W.C.; et al. (2004). "Global Distribution of Near-Surface Hydrogen on Mars". J. Geophysical Research. 109. Bibcode:2004JGRE..10909006F. doi:10.1029/2003JE002160.
- Christensen, P. R. (2006). "Water at the Poles and in Permafrost Regions of Mars". GeoScienceWorld Elements. 3 (2): 151–155.
- Carr, 2006, p. 173.
- Hecht, M.H. (2002). "Metastability of Liquid Water on Mars". Icarus. 156 (2): 373–386. Bibcode:2002Icar..156..373H. doi:10.1006/icar.2001.6794.
- Webster, Guy; Brown, Dwayne (December 10, 2013). "NASA Mars Spacecraft Reveals a More Dynamic Red Planet". NASA.
- "Liquid Water From Ice and Salt on Mars". Geophysical Research Letters. NASA Astrobiology. July 3, 2014. Retrieved August 13, 2014.
- Pollack, J.B. (1979). "Climatic Change on the Terrestrial Planets". Icarus. 37 (3): 479–553. Bibcode:1979Icar...37..479P. doi:10.1016/0019-1035(79)90012-5.
- Pollack, J.B.; Kasting, J.F.; Richardson, S.M.; Poliakoff, K. (1987). "The Case for a Wet, Warm Climate on Early Mars". Icarus. 71 (2): 203–224. Bibcode:1987Icar...71..203P. doi:10.1016/0019-1035(87)90147-3.
- "releases/2015/03/150305140447". sciencedaily.com. Retrieved May 25, 2015.
- Villanueva, G.; Mumma, M.; Novak, R.; Käufl, H.; Hartogh, P.; Encrenaz, T.; Tokunaga, A.; Khayat, A.; Smith, M. (2015). "Strong water isotopic anomalies in the martian atmosphere: Probing current and ancient reservoirs". Science. 348: 218–221. doi:10.1126/science.aaa3630.
- Baker, V.R.; Strom, R.G.; Gulick, V.C.; Kargel, J.S.; Komatsu, G.; Kale, V.S. (1991). "Ancient oceans, ice sheets and the hydrological cycle on Mars". Nature. 352 (6348): 589–594. Bibcode:1991Natur.352..589B. doi:10.1038/352589a0.
- Parker, T.J.; Saunders, R.S.; Schneeberger, D.M. (1989). "Transitional Morphology in West Deuteronilus Mensae, Mars: Implications for Modification of the Lowland/Upland Boundary". Icarus. 82: 111–145. Bibcode:1989Icar...82..111P. doi:10.1016/0019-1035(89)90027-4.
- Dohm, J.M.; Baker, Victor R.; Boynton, William V.; Fairén, Alberto G.; Ferris, Justin C.; Finch, Michael; Furfaro, Roberto; Hare, Trent M.; Janes, Daniel M.; Kargel, Jeffrey S.; Karunatillake, Suniti; Keller, John; Kerry, Kris; Kim, Kyeong J.; Komatsu, Goro; Mahaney, William C.; Schulze-Makuch, Dirk; Marinangeli, Lucia; Ori, Gian G.; Ruiz, Javier; Wheelock, Shawn J. (2009). "GRS Evidence and the Possibility of Paleooceans on Mars". Planetary and Space Science. 57 (5–6): 664–684. Bibcode:2009P&SS...57..664D. doi:10.1016/j.pss.2008.10.008.
- "PSRD: Ancient Floodwaters and Seas on Mars". Psrd.hawaii.edu. July 16, 2003.
- "Gamma-Ray Evidence Suggests Ancient Mars Had Oceans". SpaceRef. November 17, 2008.
- Clifford, S.M.; Parker, T.J. (2001). "The Evolution of the Martian Hydrosphere: Implications for the Fate of a Primordial Ocean and the Current State of the Northern Plains". Icarus. 154: 40–79. Bibcode:2001Icar..154...40C. doi:10.1006/icar.2001.6671.
- Di Achille, Gaetano; Hynek, Brian M. (2010). "Ancient ocean on Mars supported by global distribution of deltas and valleys". Nature Geoscience. 3 (7): 459–463. Bibcode:2010NatGe...3..459D. doi:10.1038/ngeo891.
- "Ancient ocean may have covered third of Mars". Sciencedaily.com. June 14, 2010.
- Carr, 2006, pp 144–147.
- Fassett, C. I.; Dickson, James L.; Head, James W.; Levy, Joseph S.; Marchant, David R. (2010). "Supraglacial and Proglacial Valleys on Amazonian Mars". Icarus. 208 (1): 86–100. Bibcode:2010Icar..208...86F. doi:10.1016/j.icarus.2010.02.021.
- "Flashback: Water on Mars Announced 10 Years Ago". SPACE.com. June 22, 2000.
- Chang, Kenneth (December 9, 2013). "On Mars, an Ancient Lake and Perhaps Life". New York Times.
- Various (December 9, 2013). "Science – Special Collection – Curiosity Rover on Mars". Science.
- Parker, T.; Clifford, S. M.; Banerdt, W. B. (2000). "Argyre Planitia and the Mars Global Hydrologic Cycle" (PDF). Lunar and Planetary Science. XXXI: 2033. Bibcode:2000LPI....31.2033P.
- Heisinger, H.; Head, J. (2002). "Topography and morphology of the Argyre basin, Mars: implications for its geologic and hydrologic history". Planet. Space Sci. 50 (10–11): 939–981. Bibcode:2002P&SS...50..939H. doi:10.1016/S0032-0633(02)00054-5.
- Soderblom, L.A. (1992). "The Composition and Mineralogy of the Martian Surface from Spectroscopic Observations: 0.3–50 micrometres". In Kieffer, H.H.; et al. Mars. Tucson, AZ: University of Arizona Press. pp. 557–593. ISBN 0-8165-1257-4.
- Glotch, T.; Christensen, P. (2005). "Geologic and mineralogical mapping of Aram Chaos: Evidence for water-rich history". J. Geophys. Res. 110: E09006. Bibcode:2005JGRE..110.9006G. doi:10.1029/2004JE002389.
- Holt, J. W.; Safaeinili, A.; Plaut, J. J.; Young, D. A.; Head, J. W.; Phillips, R. J.; Campbell, B. A.; Carter, L. M.; Gim, Y.; Seu, R.; Team, Sharad (2008). "Radar Sounding Evidence for Ice within Lobate Debris Aprons near Hellas Basin, Mid-Southern Latitudes of Mars" (PDF). Lunar and Planetary Science. XXXIX: 2441. Bibcode:2008LPI....39.2441H.
- Amos, Jonathan (June 10, 2013). "Old Opportunity Mars rover makes rock discovery". NASA. BBC News.
- "Mars Rover Opportunity Examines Clay Clues in Rock". Jet Propulsion Laboratory, NASA. May 17, 2013.
- "Regional, Not Global, Processes Led to Huge Martian Floods". Planetary Science Institute. SpaceRef. 11 September 2015. Retrieved 2015-09-12.
- Harrison, K; Grimm, R. (2005). "Groundwater-controlled valley networks and the decline of surface runoff on early Mars". Journal of Geophysical Research. 110: E12S16. Bibcode:2005JGRE..11012S16H. doi:10.1029/2005JE002455.
- Howard, A.; Moore, Jeffrey M.; Irwin, Rossman P. (2005). "An intense terminal epoch of widespread fluvial activity on early Mars: 1. Valley network incision and associated deposits". Journal of Geophysical Research. 110: E12S14. Bibcode:2005JGRE..11012S14H. doi:10.1029/2005JE002459.
- Salese, F., G. Di Achille, A. Neesemann, G. G. Ori, and E. Hauber (2016), Hydrological and sedimentary analyses of well-preserved paleofluvial-paleolacustrine systems at Moa Valles, Mars, J. Geophys. Res. Planets, 121, 194–232, doi:10.1002/2015JE004891.
- Irwin, Rossman P.; Howard, Alan D.; Craddock, Robert A.; Moore, Jeffrey M. (2005). "An intense terminal epoch of widespread fluvial activity on early Mars: 2. Increased runoff and paleolake development". Journal of Geophysical Research. 110: E12S15. Bibcode:2005JGRE..11012S15I. doi:10.1029/2005JE002460.
- Fassett, C.; Head, III (2008). "Valley network-fed, open-basin lakes on Mars: Distribution and implications for Noachian surface and subsurface hydrology". Icarus. 198: 37–56. Bibcode:2008Icar..198...37F. doi:10.1016/j.icarus.2008.06.016.
- Moore, J.; Wilhelms, D. (2001). "Hellas as a possible site of ancient ice-covered lakes on Mars" (PDF). Icarus. 154 (2): 258–276. Bibcode:2001Icar..154..258M. doi:10.1006/icar.2001.6736.
- Weitz, C.; Parker, T. (2000). "New evidence that the Valles Marineris interior deposits formed in standing bodies of water" (PDF). Lunar and Planetary Science. XXXI: 1693. Bibcode:2000LPI....31.1693W.
- "New Signs That Ancient Mars Was Wet". Space.com. October 28, 2008.
- Squyres, S.W.; et al. (1992). "Ice in the Martian Regolith". In Kieffer, H.H. Mars. Tucson, AZ: University of Arizona Press. pp. 523–554. ISBN 0-8165-1257-4.
- Head, J.; Marchant, D. (2006). Modifications of the walls of a Noachian crater in Northern Arabia Terra (24 E, 39 N) during northern mid-latitude Amazonian glacial epochs on Mars: Nature and evolution of Lobate Debris Aprons and their relationships to lineated valley fill and glacial systems (abstract). Lunar. Planet. Sci. 37. p. 1128.
- Head, J.; et al. (2006). "Modification if the dichotomy boundary on Mars by Amazonian mid-latitude regional glaciation". Geophys. Res. Lett.: 33.
- Head, J.; Marchant, D. (2006). "Evidence for global-scale northern mid-latitude glaciation in the Amazonian period of Mars: Debris-covered glacial and valley glacial deposits in the 30 – 50 N latitude band (abstract)". Lunar. Planet. Sci. 37: 1127.
- Lewis, Richard (April 23, 2008). "Glaciers Reveal Martian Climate Has Been Recently Active". Brown University.
- Plaut, Jeffrey J.; Safaeinili, Ali; Holt, John W.; Phillips, Roger J.; Head, James W.; Seu, Roberto; Putzig, Nathaniel E.; Frigeri, Alessandro (2009). "Radar Evidence for Ice in Lobate Debris Aprons in the Mid-Northern Latitudes of Mars" (PDF). Geophysical Research Letters. 36 (2). Bibcode:2009GeoRL..3602203P. doi:10.1029/2008GL036379.
- Wall, Mike (March 25, 2011). "Q & A with Mars Life-Seeker Chris Carr". Space.com.
- Dartnell, L.R.; Desorgher; Ward; Coates (January 30, 2007). "Modelling the surface and subsurface Martian radiation environment: Implications for astrobiology". Geophysical Research Letters. 34 (2). Bibcode:2007GeoRL..34.2207D. doi:10.1029/2006GL027494.
The damaging effect of ionising radiation on cellular structure is one of the prime limiting factors on the survival of life in potential astrobiological habitats.
- Dartnell, L. R.; Desorgher, L.; Ward, J. M.; Coates, A. J. (2007). "Martian sub-surface ionising radiation: biosignatures and geology". Biogeosciences. 4: 545–558. Bibcode:2007BGeo....4..545D. doi:10.5194/bg-4-545-2007. Retrieved June 1, 2013.
This ionising radiation field is deleterious to the survival of dormant cells or spores and the persistence of molecular biomarkers in the subsurface, and so its characterisation. [..] Even at a depth of 2 meters beneath the surface, any microbes would likely be dormant, cryopreserved by the current freezing conditions, and so metabolically inactive and unable to repair cellular degradation as it occurs.
- de Morais, A. (2012). "A Possible Biochemical Model for Mars" (PDF). 43rd Lunar and Planetary Science Conference (2012). Retrieved June 5, 2013.
The extensive volcanism at that time much possibly created subsurface cracks and caves within different strata, and the liquid water could have been stored in these subterraneous places, forming large aquifers with deposits of saline liquid water, minerals organic molecules, and geothermal heat – ingredients for life as we know on Earth.
- Didymus, JohnThomas (January 21, 2013). "Scientists find evidence Mars subsurface could hold life". Digital Journal – Science.
There can be no life on the surface of Mars, because it is bathed in radiation and it's completely frozen. Life in the subsurface would be protected from that. - Prof. Parnell.
- Steigerwald, Bill (January 15, 2009). "Martian Methane Reveals the Red Planet is not a Dead Planet". NASA's Goddard Space Flight Center. NASA.
If microscopic Martian life is producing the methane, it likely resides far below the surface, where it's still warm enough for liquid water to exist
- NASA Mars Exploration Program Overview. http://www.nasa.gov/mission_pages/mars/overview/index.html.
- Hartmann, 2003, p. 11.
- Sheehan, 1996, p. 35.
- Kieffer, H.H.; Jakosky, B.M; Snyder, C. (1992). "The Planet Mars: From Antiquity to the Present". In Kieffer, H.H.; et al. Mars. Tucson, AZ: University of Arizona Press. pp. 1–33.
- Hartmann, 2003, p. 20.
- Sheehan, 1996, p. 150.
- Spinrad, H.; Münch, G.; Kaplan, L. D. (1963). "Letter to the Editor: the Detection of Water Vapor on Mars". Astrophysical Journal. 137: 1319. Bibcode:1963ApJ...137.1319S. doi:10.1086/147613.
- Leighton, R.B.; Murray, B.C. (1966). "Behavior of Carbon Dioxide and Other Volatiles on Mars". Science. 153 (3732): 136–144. PMID 17831495. doi:10.1126/science.153.3732.136.
- Leighton, R.B.; Murray, B.C.; Sharp, R.P.; Allen, J.D.; Sloan, R.K. (1965). "Mariner IV Photography of Mars: Initial Results". Science. 149 (3684): 627–630. PMID 17747569. doi:10.1126/science.149.3684.627.
- Kliore, A.; et al. (1965). "Occultation Experiment: Results of the First Direct Measurement of Mars's Atmosphere and Ionosphere". Science. 149 (3689): 1243–1248. PMID 17747455. doi:10.1126/science.149.3689.1243.
- Grotzinger, John P. (January 24, 2014). "Introduction to Special Issue – Habitability, Taphonomy, and the Search for Organic Carbon on Mars". Science. 343 (6169): 386–387. Bibcode:2014Sci...343..386G. PMID 24458635. doi:10.1126/science.1249944.
- Various (January 24, 2014). "Special Issue – Table of Contents – Exploring Martian Habitability". Science. 343 (6169): 345–452.
- Various (January 24, 2014). "Special Collection – Curiosity – Exploring Martian Habitability". Science.
- Grotzinger, J.P.; et al. (January 24, 2014). "A Habitable Fluvio-Lacustrine Environment at Yellowknife Bay, Gale Crater, Mars". Science. 343 (6169): 1242777. PMID 24324272. doi:10.1126/science.1242777.
- Rodriguez, J. Alexis P.; Kargel, Jeffrey S.; Baker, Victor R.; Gulick, Virginia C.; et al. (8 September 2015). "Martian outflow channels: How did their source aquifers form, and why did they drain so rapidly?". Nature - Scientific Reports. 5: 13404. doi:10.1038/srep13404. Retrieved 2015-09-12.
- Staff (July 2, 2012). "Ancient Mars Water Existed Deep Underground". Space.com.
- Craddock, R.; Howard, A. (2002). "The case for rainfall on a warm, wet early Mars". J. Geophys. Res. 107: E11. Bibcode:2002JGRE..107.5111C. doi:10.1029/2001je001505.
- Head, J.; et al. (2006). "Extensive valley glacier deposits in the northern mid-latitudes of Mars: Evidence for the late Amazonian obliquity-driven climate change". Earth Planet. Sci. Lett. 241: 663–671. Bibcode:2006E&PSL.241..663H. doi:10.1016/j.epsl.2005.11.016.
- Madeleine, J.; et al. (2007). Mars: A proposed climatic scenario for northern mid-latitude glaciation. Lunar Planet. Sci. (Abstract). 38. p. 1778.
- Madeleine, J.; et al. (2009). "Amazonian northern mid-latitude glaciation on Mars: A proposed climate scenario". Icarus. 203: 300–405. Bibcode:2009Icar..203..390M. doi:10.1016/j.icarus.2009.04.037.
- Mischna, M.; et al. (2003). "On the orbital forcing of Martian water and CO2 cycles: A general circulation model study with simplified volatile schemes.". J. Geophys. Res. 108 (E6): 5062. Bibcode:2003JGRE..108.5062M. doi:10.1029/2003je002051.
- Staff (October 28, 2008). "NASA Mars Reconnaissance Orbiter Reveals Details of a Wetter Mars". SpaceRef. NASA.
- Lunine, Jonathan I.; Chambers, John; et al. (September 2003). "The Origin of Water on Mars". Icarus. 165 (1): 1–8. Bibcode:2003Icar..165....1L. doi:10.1016/S0019-1035(03)00172-6. Retrieved June 10, 2013.
- Soderblom, L.A.; Bell, J.F. (2008). "Exploration of the Martian Surface: 1992–2007". In Bell, J.F. The Martian Surface: Composition, Mineralogy, and Physical Properties. Cambridge University Press. pp. 3–19.
- Ming, D.W.; Morris, R.V.; Clark, R.C. (2008). "Aqueous Alteration on Mars". In Bell, J.F. The Martian Surface: Composition, Mineralogy, and Physical Properties. Cambridge University Press. pp. 519–540.
- Lewis, J.S. (1997). Physics and Chemistry of the Solar System (revised ed.). San Diego, CA: Academic Press. ISBN 0-12-446742-3.
- Lasue, J.; et al. (2013). "Quantitative Assessments of the Martian Hydrosphere". Space Sci. Rev. 174: 155–212. doi:10.1007/s11214-012-9946-5.
- Clark, B.C.; et al. (2005). "Chemistry and Mineralogy of Outcrops at Meridiani Planum". Earth Planet. Sci. Lett. 240: 73–94. Bibcode:2005E&PSL.240...73C. doi:10.1016/j.epsl.2005.09.040.
- Bloom, A.L. (1978). Geomorphology: A Systematic Analysis of Late Cenozoic Landforms. Englewood Cliffs, N.J: Prentice-Hall. p. 114.
- Boynton, W.V.; et al. (2009). "Evidence for Calcium Carbonate at the Mars Phoenix Landing Site". Science. 325 (5936): 61–4. PMID 19574384. doi:10.1126/science.1172768.
- Gooding, J.L.; Arvidson, R.E.; Zolotov, M. YU. (1992). "Physical and Chemical Weathering". In Kieffer, H.H.; et al. Mars. Tucson, AZ: University of Arizona Press. pp. 626–651. ISBN 0-8165-1257-4.
- Melosh, H.J. (2011). Planetary Surface Processes. Cambridge University Press. p. 296. ISBN 978-0-521-51418-7.
- Abramov, O.; Kring, D.A. (2005). "Impact-Induced Hydrothermal Activity on Early Mars". J. Geophys. Res. 110: E12S09. Bibcode:2005JGRE..11012S09A. doi:10.1029/2005JE002453.
- Schrenk, M.O.; Brazelton, W.J.; Lang, S.Q. (2013). "Serpentinization, Carbon, and Deep Life". Reviews in Mineralogy & Geochemistry. 75: 575–606. doi:10.2138/rmg.2013.75.18.
- Baucom, Martin (March–April 2006). "Life on Mars?". American Scientist.
- Chassefière, E; Langlais, B; Quesnel, Y; Leblanc, F. (2013), "The Fate of Early Mars' Lost Water: The Role of Serpentinization." (PDF), EPSC Abstracts, 8, p. EPSC2013-188
- Ehlmann, B. L.; Mustard, J.F.; Murchie, S.L. (2010). "Geologic Setting of Serpentine Deposits on Mars". Geophys. Res. Lett. 37: L06201. Bibcode:2010GeoRL..37.6201E. doi:10.1029/2010GL042596.
- Bloom, A.L. (1978). Geomorphology: A Systematic Analysis of Late Cenozoic Landforms. Englewood Cliffs, N.J.: Prentice-Hall.., p. 120
- Ody, A.; et al. (2013). "Global Investigation of Olivine on Mars: Insights into Crust and Mantle Compositions". J. Geophys. Res. 118: 234–262. Bibcode:2013JGRE..118..234O. doi:10.1029/2012JE004149.
- Swindle, T. D.; Treiman, A. H.; Lindstrom, D. J.; Burkland, M. K.; Cohen, B. A.; Grier, J. A.; Li, B.; Olson, E. K. (2000). "Noble Gases in Iddingsite from the Lafayette meteorite: Evidence for Liquid water on Mars in the last few hundred million years". Meteoritics and Planetary Science. 35 (1): 107–115. Bibcode:2000M&PS...35..107S. doi:10.1111/j.1945-5100.2000.tb01978.x.
- Gulick, V.; Baker, V. (1989). "Fluvial valleys and martian palaeoclimates". Nature. 341 (6242): 514–516. Bibcode:1989Natur.341..514G. doi:10.1038/341514a0.
- Head, J.; Kreslavsky, M. A.; Ivanov, M. A.; Hiesinger, H.; Fuller, E. R.; Pratt, S. (2001). "Water in Middle Mars History: New Insights From MOLA Data". American Geophysical Union. Bibcode:2001AGUSM...P31A02H.
- Head, J.; et al. (2001). "Exploration for standing Bodies of Water on Mars: When Were They There, Where did They go, and What are the Implications for Astrobiology?". American Geophysical Union. 21: 03. Bibcode:2001AGUFM.P21C..03H.
- David, Leonard (January 20, 2005). "Mars Rover's Meteorite Discovery Triggers Questions". Space.com. Retrieved February 10, 2013.
- Meyer, C. (2012) The Martian Meteorite Compendium; National Aeronautics and Space Administration. http://curator.jsc.nasa.gov/antmet/mmc/.
- "Shergotty Meteorite – JPL, NASA". NASA. Retrieved December 19, 2010.
- Hamiliton, W.; Christensen, Philip R.; McSween, Harry Y. (1997). "Determination of Martian meteorite lithologies and mineralogies using vibrational spectroscopy". Journal of Geophysical Research. 102: 25593–25603. Bibcode:1997JGR...10225593H. doi:10.1029/97JE01874.
- Treiman, A. (2005). "The nakhlite meteorites: Augite-rich igneous rocks from Mars" (PDF). Chemie der Erde – Geochemistry. 65 (3): 203–270. Bibcode:2005ChEG...65..203T. doi:10.1016/j.chemer.2005.01.004. Retrieved September 8, 2006.
- McKay, D.; Gibson Jr., EK; Thomas-Keprta, KL; Vali, H; Romanek, CS; Clemett, SJ; Chillier, XD; Maechling, CR; Zare, RN (1996). "Search for Past Life on Mars: Possible Relic Biogenic Activity in Martian Meteorite AL84001". Science. 273 (5277): 924–930. Bibcode:1996Sci...273..924M. PMID 8688069. doi:10.1126/science.273.5277.924.
- Gibbs, W.; Powell, C. (August 19, 1996). "Bugs in the Data?". Scientific American.
- "Controversy Continues: Mars Meteorite Clings to Life – Or Does It?". SPACE.com. March 20, 2002.
- Bada, J.; Glavin, DP; McDonald, GD; Becker, L (1998). "A Search for Endogenous Amino Acids in Martian Meteorite AL84001". Science. 279 (5349): 362–365. Bibcode:1998Sci...279..362B. PMID 9430583. doi:10.1126/science.279.5349.362.
- Raeburn, P. (1998). "Uncovering the Secrets of the Red Planet Mars". National Geographic. Washington D.C.
- Moore, P.; et al. (1990). The Atlas of the Solar System. New York: Mitchell Beazley Publishers.
- Kieffer, Hugh H., ed. (1994). Mars (2nd ed.). Tucson: University of Arizona Press. ISBN 0-8165-1257-4.
- Berman, Daniel C.; Crown, David A.; Bleamaster, Leslie F. (2009). "Degradation of mid-latitude craters on Mars". Icarus. 200: 77–95. Bibcode:2009Icar..200...77B. doi:10.1016/j.icarus.2008.10.026.
- Fassett, Caleb I.; Head, James W. (2008). "The timing of martian valley network activity: Constraints from buffered crater counting". Icarus. 195: 61–89. Bibcode:2008Icar..195...61F. doi:10.1016/j.icarus.2007.12.009.
- Malin, Michael C. (2010). "An overview of the 1985–2006 Mars Orbiter Camera science investigation". The Mars Journal. 5: 1–60. Bibcode:2010IJMSE...5....1M. doi:10.1555/mars.2010.0001.
- "Sinuous Ridges Near Aeolis Mensae". Hiroc.lpl.arizona.edu. January 31, 2007.
- Zimbelman, J.; Griffin, L. (2010). "HiRISE images of yardangs and sinuous ridges in the lower member of the Medusae Fossae Formation, Mars". Icarus. 205: 198–210. Bibcode:2010Icar..205..198Z. doi:10.1016/j.icarus.2009.04.003.
- Newsom, H.; Lanza, Nina L.; Ollila, Ann M.; Wiseman, Sandra M.; Roush, Ted L.; Marzo, Giuseppe A.; Tornabene, Livio L.; Okubo, Chris H.; Osterloo, Mikki M.; Hamilton, Victoria E.; Crumpler, Larry S. (2010). "Inverted channel deposits on the floor of Miyamoto crater, Mars". Icarus. 205: 64–72. Bibcode:2010Icar..205...64N. doi:10.1016/j.icarus.2009.03.030.
- Morgan, A.M.; Howard, A.D.; Hobley, D.E.J.; Moore, J.M.; Dietrich, W.E.; Williams, R.M.E.; Burr, D.M.; Grant, J.A.; Wilson, S.A.; Matsubara, Y. (2014). "Sedimentology and climatic environment of alluvial fans in the martian Saheki crater and a comparison with terrestrial fans in the Atacama Desert". Icarus. 229: 131–156. Bibcode:2014Icar..229..131M. doi:10.1016/j.icarus.2013.11.007.
- Weitz, C.; Milliken, R.E.; Grant, J.A.; McEwen, A.S.; Williams, R.M.E.; Bishop, J.L.; Thomson, B.J. (2010). "Mars Reconnaissance Orbiter observations of light-toned layered deposits and associated fluvial landforms on the plateaus adjacent to Valles Marineris". Icarus. 205: 73–102. Bibcode:2010Icar..205...73W. doi:10.1016/j.icarus.2009.04.017.
- "Atmospheric mass loss by stellar wind from planets around main sequence M stars". Icarus. 210 (2): 539–1000. December 2010. Bibcode:2010Icar..210..539Z. doi:10.1016/j.icarus.2010.07.013. Retrieved December 19, 2010.
- Cabrol, N.; Grin, E., eds. (2010). Lakes on Mars. New York: Elsevier.
- Goldspiel, J.; Squires, S. (2000). "Groundwater sapping and valley formation on Mars". Icarus. 148: 176–192. Bibcode:2000Icar..148..176G. doi:10.1006/icar.2000.6465.
- Carr, Michael H. The Surface of Mars. Cambridge Planetary Science Series (No. 6). ISBN 978-0-511-26688-1.
- McCauley, J. 1978. Geologic map of the Coprates quadrangle of Mars. U.S. Geol. Misc. Inv. Map I-897
- Nedell, S.; Squyres, Steven W.; Andersen, David W. (1987). "Origin and evolution of the layered deposits in the Valles Marineris, Mars". Icarus. 70 (3): 409–441. Bibcode:1987Icar...70..409N. doi:10.1016/0019-1035(87)90086-8.
- Matsubara, Yo, Alan D. Howard, and Sarah A. Drummond. "Hydrology of early Mars: Lake basins." Journal of Geophysical Research: Planets (1991–2012) 116.E4 (2011).
- "Spectacular Mars images reveal evidence of ancient lakes". Sciencedaily.com. January 4, 2010.
- Gupta, Sanjeev; Warner, Nicholas; Kim, Jung-Rack; Lin, Shih-Yuan; Muller, Jan-Peter (2010). "Hesperian equatorial thermokarst lakes in Ares Vallis as evidence for transient warm conditions on Mars". Geology. 38: 71–74. doi:10.1130/G30579.1.
- Brown, Dwayne; Cole, Steve; Webster, Guy; Agle, D.C. (September 27, 2012). "NASA Rover Finds Old Streambed On Martian Surface". NASA.
- NASA (September 27, 2012). "NASA's Curiosity Rover Finds Old Streambed on Mars – video (51:40)". NASAtelevision.
- Chang, Alicia (September 27, 2012). "Mars rover Curiosity finds signs of ancient stream". Associated Press.
- "NASA Rover Finds Conditions Once Suited for Ancient Life on Mars". NASA. March 12, 2013.
- Di Achille, Gaetano, and Brian M. Hynek. "Ancient ocean on Mars supported by global distribution of deltas and valleys." Nature Geoscience 3.7 (2010) 459-463.
- Carr, M.H. (1979). "Formation of Martian flood features by relaease of water from confined aquifers" (PDF). J. Geophys. Res. 84: 2995–3007. Bibcode:1979JGR....84.2995C. doi:10.1029/JB084iB06p02995.
- Baker, V.; Milton, D. (1974). "Erosion by Catastrophic Floods on Mars and Earth". Icarus. 23: 27–41. Bibcode:1974Icar...23...27B. doi:10.1016/0019-1035(74)90101-8.
- "Mars Global Surveyor MOC2-862 Release". Msss.com. Retrieved January 16, 2012.
- Andrews-Hanna, Jeffrey C.; Phillips, Roger J.; Zuber, Maria T. (2007). "Meridiani Planum and the global hydrology of Mars". Nature. 446 (7132): 163–6. Bibcode:2007Natur.446..163A. PMID 17344848. doi:10.1038/nature05594.
- Irwin; Rossman, P.; Craddock, Robert A.; Howard, Alan D. (2005). "Interior channels in Martian valley networks: Discharge and runoff production". Geology. 33 (6): 489–492. doi:10.1130/g21333.1.
- Jakosky, Bruce M. (1999). "Water, Climate, and Life". Science. 283 (5402): 648–649. PMID 9988657. doi:10.1126/science.283.5402.648.
- Lamb, Michael P., et al. "Can springs cut canyons into rock?." Journal of Geophysical Research: Planets (1991–2012) 111.E7 (2006).
- Grotzinger, J.P.; Arvidson, R.E.; Bell III, J.F.; Calvin, W.; Clark, B.C.; Fike, D.A.; Golombek, M.; Greeley, R.; Haldemann, A.; Herkenhoff, K.E.; Jolliff, B.L.; Knoll, A.H.; Malin, M.; McLennan, S.M.; Parker, T.; Soderblom, L.; Sohl-Dickstein, J.N.; Squyres, S.W.; Tosca, N.J.; Watters, W.A. (November 25, 2005). "Stratigraphy and sedimentology of a dry to wet eolian depositional system, Burns formation, Meridiani Planum". Earth and Planetary Science Letters. 240 (1): 11–72. Bibcode:2005E&PSL.240...11G. ISSN 0012-821X. doi:10.1016/j.epsl.2005.09.039.
- Michalski, Joseph R.; Niles, Paul B.; Cuadros, Javier; Parnell, John; Rogers, A. Deanne; Wright, Shawn P. (January 20, 2013). "Groundwater activity on Mars and implications for a deep biosphere". Nature Geoscience. 6 (2): 133–138. Bibcode:2013NatGe...6..133M. doi:10.1038/ngeo1706. Retrieved June 17, 2013.
Here we present a conceptual model of subsurface habitability of Mars and evaluate evidence for groundwater upwelling in deep basins.
- Zuber, Maria T. (2007). "Planetary Science: Mars at the tipping point". Nature. 447 (7146): 785–786. Bibcode:2007Natur.447..785Z. PMID 17568733. doi:10.1038/447785a.
- Andrews‐Hanna, J. C.; Zuber, M. T.; Arvidson, R. E.; Wiseman, S. M. (2010). "Early Mars hydrology: Meridiani playa deposits and the sedimentary record of Arabia Terra". J. Geophys. Res. 115: E06002. Bibcode:2010JGRE..115.6002A. doi:10.1029/2009JE003485.
- McLennan, S. M.; et al. (2005). "Provenance and diagenesis of the evaporitebearing Burns formation, Meridiani Planum, Mars". Earth Planet. Sci. Lett. 240: 95–121. Bibcode:2005E&PSL.240...95M. doi:10.1016/j.epsl.2005.09.041.
- Squyres, S. W.; Knoll, A. H. (2005). "Sedimentary rocks at Meridiani Planum: Origin, diagenesis, and implications for life on Mars". Earth Planet. Sci. Lett. 240: 1–10. Bibcode:2005E&PSL.240....1S. doi:10.1016/j.epsl.2005.09.038..
- Squyres, S. W.; et al. (2006). "Two years at Meridiani Planum: Results from the Opportunity rover". Science. 313: 1403–1407. doi:10.1126/science..
- Wiseman, M.; Andrews-Hanna, J. C.; Arvidson, R. E.; Mustard, J. F.; Zabrusky, K. J. (2011). Distribution of Hydrated Sulfates Across Arabia Terra Using CRISM Data: Implications for Martian Hydrology. 42nd Lunar and Planetary Science Conference.
- Andrews‐Hanna, Jeffrey C.; Lewis, Kevin W. (2011). "Early Mars hydrology: 2. Hydrological evolution in the Noachian and Hesperian epochs". Journal of Geophysical Research: Planets (1991–2012). 116: E2. Bibcode:2011JGRE..116.2007A. doi:10.1029/2010je003709.
- Clifford, S. M.; Parker, T. J. (2001). "The Evolution of the Martian Hydrosphere: Implications for the Fate of a Primordial Ocean and the Current State of the Northern Plains". Icarus. 154: 40–79. Bibcode:2001Icar..154...40C. doi:10.1006/icar.2001.6671.
- Smith, D.; et al. (1999). "The Gravity Field of Mars: Results from Mars Global Surveyor" (PDF). Science. 286 (5437): 94–97. Bibcode:1999Sci...286...94S. PMID 10506567. doi:10.1126/science.286.5437.94.
- Read, Peter L.; Lewis, S. R. (2004). The Martian Climate Revisited: Atmosphere and Environment of a Desert Planet (Paperback). Chichester, UK: Praxis. ISBN 978-3-540-40743-0. Retrieved December 19, 2010.
- "Martian North Once Covered by Ocean". Astrobio.net. Retrieved December 19, 2010.
- "New Map Bolsters Case for Ancient Ocean on Mars". SPACE.com. November 23, 2009.
- Carr, M.; Head, J. (2003). "Oceans on Mars: An assessment of the observational evidence and possible fate". Journal of Geophysical Research. 108: 5042. Bibcode:2003JGRE..108.5042C. doi:10.1029/2002JE001963.
- "Mars Ocean Hypothesis Hits the Shore". NASA Astrobiology. NASA. January 26, 2001.
- Perron; Taylor, J.; et al. (2007). "Evidence for an ancient Martian ocean in the topography of deformed shorelines". Nature. 447 (7146): 840–843. doi:10.1038/nature05873.
- Kaufman, Marc (March 5, 2015). "Mars Had an Ocean, Scientists Say, Pointing to New Data". The New York Times. Retrieved March 5, 2015.
- Rodriguez, J., et al. 2016. Tsunami waves extensively resurfaced the shorelines of an early Martian ocean. Scientific Reports: 6, 25106.
- Cornell University. "Ancient tsunami evidence on Mars reveals life potential." ScienceDaily. ScienceDaily, 19 May 2016. <www.sciencedaily.com/releases/2016/05/160519101756.htm>.
- Boynton, W. V.; et al. (2007). "Concentration of H, Si, Cl, K, Fe, and Th in the low and mid latitude regions of Mars". Journal of Geophysical Research Planets, in press. 112 (E12). Bibcode:2007JGRE..11212S99B. doi:10.1029/2007JE002887.
- Feldman, W. C.; Prettyman, T. H.; Maurice, S.; Plaut, J. J.; Bish, D. L.; Vaniman, D. T.; Tokar, R. L. (2004). "Global distribution of near-surface hydrogen on Mars". Journal of Geophysical Research. 109: E9. Bibcode:2004JGRE..109.9006F. doi:10.1029/2003JE002160. E09006.
- Feldman, W. C.; et al. (2004). "Global distribution of near-surface hydrogen on Mars". Journal of Geophysical Research. 109 (E9). Bibcode:2004JGRE..109.9006F. doi:10.1029/2003JE002160.
- "Water ice in crater at Martian north pole" (Press release). ESA. July 27, 2005.
- "Ice lake found on the Red Planet". BBC. July 29, 2005.
- Murray, John B.; et al. (2005). "Evidence from the Mars Express High Resolution Stereo Camera for a frozen sea close to Mars' equator". Nature. 434 (7031): 352–356. Bibcode:2005Natur.434..352M. PMID 15772653. doi:10.1038/nature03379.
Here we present High Resolution Stereo Camera images from the European Space Agency Mars Express spacecraft that indicate that such lakes may still exist.
- Orosei, R.; Cartacci, M.; Cicchetti, A.; Federico, C.; Flamini, E.; Frigeri, A.; Holt, J. W.; Marinangeli, L.; Noschese, R.; Pettinelli, E.; Phillips, R. J.; Picardi, G.; Plaut, J. J.; Safaeinili, A.; Seu, R. (2008). "Radar subsurface sounding over the putative frozen sea in Cerberus Palus, Mars" (PDF). Lunar and Planetary Science. XXXIX: 1. Bibcode:2007AGUFM.P14B..05O. ISBN 978-1-4244-4604-9. doi:10.1109/ICGPR.2010.5550143.
- Barlow, Nadine G. Mars: an introduction to its interior, surface and atmosphere. Cambridge University Press. ISBN 978-0-521-85226-5.
- "Mars' South Pole Ice Deep and Wide". NASA News & Media Resources. NASA. March 15, 2007. External link in
- Kostama, V.-P.; Kreslavsky, M. A.; Head, J. W. (June 3, 2006). "Recent high-latitude icy mantle in the northern plains of Mars: Characteristics and ages of emplacement". Geophysical Research Letters. 33 (11): L11201. Bibcode:2006GeoRL..3311201K. doi:10.1029/2006GL025946.
- Plaut, J. J.; et al. (March 15, 2007). "Subsurface Radar Sounding of the South Polar Layered Deposits of Mars". Science. 316 (5821): 92–95. PMID 17363628. doi:10.1126/science.1139672.
- Johnson, John (August 1, 2008). "There's water on Mars, NASA confirms". Los Angeles Times.
- "Radar Map of Buried Mars Layers Matches Climate Cycles". OnOrbit. Retrieved December 19, 2010.
- Fishbaugh, KE; Byrne, Shane; Herkenhoff, Kenneth E.; Kirk, Randolph L.; Fortezzo, Corey; Russell, Patrick S.; McEwen, Alfred (2010). "Evaluating the meaning of "layer" in the Martian north polar layered deposits and the impact on the climate connection" (PDF). Icarus. 205 (1): 269–282. Bibcode:2010Icar..205..269F. doi:10.1016/j.icarus.2009.04.011.
- Duxbury, N. S.; Zotikov, I. A.; Nealson, K. H.; Romanovsky, V. E.; Carsey, F. D. (2001). "A numerical model for an alternative origin of Lake Vostok and its exobiological implications for Mars" (PDF). Journal of Geophysical Research. 106: 1453. Bibcode:2001JGR...106.1453D. doi:10.1029/2000JE001254.
- Kieffer, Hugh H. (1992). Mars. University of Arizona Press. ISBN 978-0-8165-1257-7. Retrieved March 7, 2011.
- "Polygonal Patterned Ground: Surface Similarities Between Mars and Earth". SpaceRef. September 28, 2002.
- Squyres, S. (1989). "Urey Prize Lecture: Water on Mars". Icarus. 79 (2): 229–288. Bibcode:1989Icar...79..229S. doi:10.1016/0019-1035(89)90078-X.
- "NASA – Turbulent Lava Flow in Mars' Athabasca Valles". Nasa.gov. January 11, 2010.
- Dundas, C., S. Bryrne, A. McEwen. 2015. Modeling the development of martian sublimation thermokarst landforms. Icarus: 262, 154-169.
- Head, James W.; Mustard, John F.; Kreslavsky, Mikhail A.; Milliken, Ralph E.; Marchant, David R. (2003). "Recent ice ages on Mars". Nature. 426 (6968): 797–802. Bibcode:2003Natur.426..797H. PMID 14685228. doi:10.1038/nature02114.
- "HiRISE Dissected Mantled Terrain (PSP_002917_2175)". Arizona University. Retrieved December 19, 2010.
- Lefort, A.; Russell, P.S.; Thomas, N. (2010). "Scalloped terrains in the Peneus and Amphitrites Paterae region of Mars as observed by HiRISE". Icarus. 205: 259–268. Bibcode:2010Icar..205..259L. doi:10.1016/j.icarus.2009.06.005.
- Byrne, S.; Ingersoll, A. P. (2002). "A Sublimation Model for the Formation of the Martian Polar Swiss-cheese Features". American Astronomical Society. American Astronomical Society. 34: 837. Bibcode:2002DPS....34.0301B.
- Strom, R.G.; Croft, Steven K.; Barlow, Nadine G. (1992). The Martian Impact Cratering Record, Mars. University of Arizona Press. ISBN 0-8165-1257-4.
- "ESA – Mars Express – Breathtaking views of Deuteronilus Mensae on Mars". Esa.int. March 14, 2005.
- Hauber, E.; et al. (2005). "Discovery of a flank caldera and very young glacial activity at Hecates Tholus, Mars". Nature. 434 (7031): 356–61. Bibcode:2005Natur.434..356H. PMID 15772654. doi:10.1038/nature03423.
- Shean, David E.; Head, James W.; Fastook, James L.; Marchant, David R. (2007). "Recent glaciation at high elevations on Arsia Mons, Mars: Implications for the formation and evolution of large tropical mountain glaciers" (PDF). Journal of Geophysical Research. 112 (E3): E03004. Bibcode:2007JGRE..11203004S. doi:10.1029/2006JE002761.
- Shean, D.; et al. (2005). "Origin and evolution of a cold-based mountain glacier on Mars: The Pavonis Mons fan-shaped deposit". Journal of Geophysical Research. 110 (E5): E05001. Bibcode:2005JGRE..11005001S. doi:10.1029/2004JE002360.
- Basilevsky, A.; et al. (2006). "Geological recent tectonic, volcanic and fluvial activity on the eastern flank of the Olympus Mons volcano, Mars". Geophysical Research Letters. 33. L13201. Bibcode:2006GeoRL..3313201B. doi:10.1029/2006GL026396.
- Milliken, R.; et al. (2003). "Viscous flow features on the surface of Mars: Observations from high-resolution Mars Orbiter Camera (MOC) images". Journal of Geophysical Research. 108 (E6): 5057. Bibcode:2003JGRE..108.5057M. doi:10.1029/2002je002005.
- Arfstrom, J.; Hartmann, W. (2005). "Martian flow features, moraine-like ridges, and gullies: Terrestrial analogs and interrelationships". Icarus. 174 (2): 321–35. Bibcode:2005Icar..174..321A. doi:10.1016/j.icarus.2004.05.026.
- Head, J. W.; Neukum, G.; Jaumann, R.; Hiesinger, H.; Hauber, E.; Carr, M.; Masson, P.; Foing, B.; Hoffmann, H.; Kreslavsky, M.; Werner, S.; Milkovich, S.; van Gasselt, S.; HRSC Co-Investigator Team (2005). "Tropical to mid-latitude snow and ice accumulation, flow and glaciation on Mars". Nature. 434 (7031): 346–350. Bibcode:2005Natur.434..346H. PMID 15772652. doi:10.1038/nature03359.
- Staff (October 17, 2005). "Mars' climate in flux: Mid-latitude glaciers". Marstoday. Brown University.
- Berman, D.; et al. (2005). "The role of arcuate ridges and gullies in the degradation of craters in the Newton Basin region of Mars". Icarus. 178 (2): 465–86. Bibcode:2005Icar..178..465B. doi:10.1016/j.icarus.2005.05.011.
- "Fretted Terrain Valley Traverse". Hirise.lpl.arizona.edu. Retrieved January 16, 2012.
- "Jumbled Flow Patterns". Arizona University. Retrieved January 16, 2012.
- Jakosky, B. M.; Phillips, R. J. (2001). Nature. 412: 237–244. doi:10.1038/35084184.
In linguistics, morphology is the study of words, how they are formed, and their relationship to other words in the same language. It analyzes the structure of words and parts of words, such as stems, root words, prefixes, and suffixes. Morphology also looks at parts of speech, intonation and stress, and the ways context can change a word's pronunciation and meaning. Morphology differs from morphological typology, which is the classification of languages based on their use of words, and lexicology, which is the study of words and how they make up a language's vocabulary.
While words, along with clitics, are generally accepted as being the smallest units of syntax, in most languages, if not all, many words can be related to other words by rules that collectively describe the grammar for that language. For example, English speakers recognize that the words dog and dogs are closely related, differentiated only by the plurality morpheme "-s", only found bound to noun phrases. Speakers of English, a fusional language, recognize these relations from their innate knowledge of English's rules of word formation. They infer intuitively that dog is to dogs as cat is to cats; and, in similar fashion, dog is to dog catcher as dish is to dishwasher. By contrast, Classical Chinese has very little morphology, using almost exclusively unbound morphemes ("free" morphemes) and depending on word order to convey meaning. (Most words in modern Standard Chinese ["Mandarin"], however, are compounds and most roots are bound.) These are understood as grammars that represent the morphology of the language. The rules understood by a speaker reflect specific patterns or regularities in the way words are formed from smaller units in the language they are using, and how those smaller units interact in speech. In this way, morphology is the branch of linguistics that studies patterns of word formation within and across languages and attempts to formulate rules that model the knowledge of the speakers of those languages.
Phonological and orthographic modifications between a base word and its derived forms can affect literacy skills. Studies have indicated that the presence of such modification makes morphologically complex words harder to understand, and that its absence makes them easier to understand; morphologically complex words are easier to comprehend when they include a recognizable base word.
Polysynthetic languages, such as Chukchi, have words composed of many morphemes. The Chukchi word "təmeyŋəlevtpəγtərkən", for example, meaning "I have a fierce headache", is composed of eight morphemes t-ə-meyŋ-ə-levt-pəγt-ə-rkən that may be glossed. The morphology of such languages allows for each consonant and vowel to be understood as morphemes, while the grammar of the language indicates the usage and understanding of each morpheme.
The discipline that deals specifically with the sound changes occurring within morphemes is morphophonology.
The history of morphological analysis dates back to the ancient Indian linguist Pāṇini, who formulated the 3,959 rules of Sanskrit morphology in the text Aṣṭādhyāyī by using a constituency grammar. The Greco-Roman grammatical tradition also engaged in morphological analysis. Studies in Arabic morphology, such as the Marāḥ al-arwāḥ of Aḥmad b. ʻAlī b. Masʻūd, date back to at least 1200 CE.
Lexemes and word forms
The term "word" has no well-defined meaning. Instead, two related terms are used in morphology: lexeme and word-form. Generally, a lexeme is a set of inflected word-forms that is often represented with the citation form in small capitals. For instance, the lexeme eat contains the word-forms eat, eats, eaten, and ate. Eat and eats are thus considered different word-forms belonging to the same lexeme eat. Eat and Eater, on the other hand, are different lexemes, as they refer to two different concepts.
Prosodic word vs. morphological word
Here are examples from other languages of the failure of a single phonological word to coincide with a single morphological word form. In Latin, one way to express the concept of 'NOUN-PHRASE1 and NOUN-PHRASE2' (as in "apples and oranges") is to suffix '-que' to the second noun phrase: "apples oranges-and", as it were. An extreme level of this theoretical quandary posed by some phonological words is provided by the Kwak'wala language.[b] In Kwak'wala, as in a great many other languages, meaning relations between nouns, including possession and "semantic case", are formulated by affixes instead of by independent "words". The three-word English phrase, "with his club", where 'with' identifies its dependent noun phrase as an instrument and 'his' denotes a possession relation, would consist of two words or even just one word in many languages. Unlike most languages, Kwak'wala semantic affixes phonologically attach not to the lexeme they pertain to semantically, but to the preceding lexeme. Consider the following example (in Kwak'wala, sentences begin with what corresponds to an English verb):[c]
kwixʔid-i-da bəgwanəma-χ-a q'asa-s-is t'alwagwayu
Morpheme by morpheme translation:
- kwixʔid-i-da = clubbed-PIVOT-DETERMINER
- bəgwanəma-χ-a = man-ACCUSATIVE-DETERMINER
- q'asa-s-is = otter-INSTRUMENTAL-3SG-POSSESSIVE
- t'alwagwayu = club
- "the man clubbed the otter with his club."
- accusative case marks an entity that something is done to.
- determiners are words such as "the", "this", "that".
- the concept of "pivot" is a theoretical construct that is not relevant to this discussion.)
That is, to the speaker of Kwak'wala, the sentence does not contain the "words" 'him-the-otter' or 'with-his-club'. Instead, the markers -i-da (PIVOT-'the'), referring to "man", attach not to the noun bəgwanəma ("man") but to the verb; the markers -χ-a (ACCUSATIVE-'the'), referring to otter, attach to bəgwanəma instead of to q'asa ('otter'), etc. In other words, a speaker of Kwak'wala does not perceive the sentence to consist of these phonological words:
kwixʔid i-da-bəgwanəma χ-a-q'asa s-is-t'alwagwayu
clubbed PIVOT-the-man hit-the-otter with-his-club
A central publication on this topic is the recent volume edited by Dixon and Aikhenvald (2007), examining the mismatch between prosodic-phonological and grammatical definitions of "word" in various Amazonian, Australian Aboriginal, Caucasian, Eskimo, Indo-European, Native North American, West African, and sign languages. Apparently, a wide variety of languages make use of the hybrid linguistic unit clitic, possessing the grammatical features of independent words but the prosodic-phonological lack of freedom of bound morphemes. The intermediate status of clitics poses a considerable challenge to linguistic theory.
Inflection vs. word formation
Given the notion of a lexeme, it is possible to distinguish two kinds of morphological rules. Some morphological rules relate to different forms of the same lexeme, while other rules relate to different lexemes. Rules of the first kind are inflectional rules, while those of the second kind are rules of word formation. The generation of the English plural dogs from dog is an inflectional rule, while compound phrases and words like dog catcher or dishwasher are examples of word formation. Informally, word formation rules form "new" words (more accurately, new lexemes), while inflection rules yield variant forms of the "same" word (lexeme).
The distinction between inflection and word formation is not at all clear cut. There are many examples where linguists fail to agree whether a given rule is inflection or word formation. The next section will attempt to clarify this distinction.
Word formation is a process in which one combines two complete words, whereas inflection combines a suffix with a word to adjust its form to its grammatical role in the sentence. For example, in the present indefinite, we use 'go' with the subjects I/we/you/they and with plural nouns, whereas for third-person singular pronouns (he/she/it) and singular nouns we use 'goes'. This '-es' is an inflectional marker, used to make the verb agree with its subject. A further difference is that in word formation, the resultant word may differ from its source word's grammatical category, whereas in the process of inflection the word never changes its grammatical category.
Types of word formation
There is a further distinction between two primary kinds of morphological word formation: derivation and compounding. Compounding is a process of word formation that involves combining complete word forms into a single compound form. Dog catcher, therefore, is a compound, as both dog and catcher are complete word forms in their own right but are subsequently treated as parts of one form. Derivation involves affixing bound (i.e. non-independent) forms to existing lexemes, whereby the addition of the affix derives a new lexeme. The word independent, for example, is derived from the word dependent by using the prefix in-, while dependent itself is derived from the verb depend. There is also word formation in the processes of clipping, in which a portion of a word is removed to create a new one; blending, in which two parts of different words are blended into one; acronyms, in which each letter of the new word represents a specific word, e.g. NATO for North Atlantic Treaty Organization; borrowing, in which words from one language are taken and used in another; and finally coinage, in which a new word is created to represent a new object or concept.
Paradigms and morphosyntax
A linguistic paradigm is the complete set of related word forms associated with a given lexeme. The familiar examples of paradigms are the conjugations of verbs and the declensions of nouns. A paradigm is commonly presented by arranging the word forms of a lexeme into tables, classifying them according to shared inflectional categories such as tense, aspect, mood, number, gender or case. For example, the personal pronouns in English can be organized into tables, using the categories of person (first, second, third); number (singular vs. plural); gender (masculine, feminine, neuter); and case (nominative, oblique, genitive).
The inflectional categories used to group word forms into paradigms cannot be chosen arbitrarily; they must be categories that are relevant to stating the syntactic rules of the language. Person and number are categories that can be used to define paradigms in English, because English has grammatical agreement rules that require the verb in a sentence to appear in an inflectional form that matches the person and number of the subject. Therefore, the syntactic rules of English care about the difference between dog and dogs, because the choice between these two forms determines which form of the verb is used. However, no syntactic rule refers to the difference between dog and dog catcher, or between dependent and independent. The first two are nouns and the second two are adjectives.
An important difference between inflection and word formation is that inflected word forms of lexemes are organized into paradigms that are defined by the requirements of syntactic rules, and there are no corresponding syntactic rules for word formation. The relationship between syntax and morphology is called "morphosyntax" and concerns itself with inflection and paradigms, not with word formation or compounding.
Above, morphological rules are described as analogies between word forms: dog is to dogs as cat is to cats and as dish is to dishes. In this case, the analogy applies both to the form of the words and to their meaning: in each pair, the first word means "one of X", while the second "two or more of X", and the difference is always the plural form -s (or -es) affixed to the second word, signaling the key distinction between singular and plural entities.
One of the largest sources of complexity in morphology is that this one-to-one correspondence between meaning and form does not hold in every case in the language. In English, there are word form pairs like ox/oxen, goose/geese, and sheep/sheep, where the difference between the singular and the plural is signaled in a way that departs from the regular pattern, or is not signaled at all. Even cases regarded as regular, such as -s, are not so simple; the -s in dogs is not pronounced the same way as the -s in cats; and, in plurals such as dishes, a vowel is added before the -s. These cases, where the same distinction is effected by alternative forms of a "word", constitute allomorphy.
Phonological rules constrain which sounds can appear next to each other in a language, and morphological rules, when applied blindly, would often violate phonological rules, by resulting in sound sequences that are prohibited in the language in question. For example, to form the plural of dish by simply appending an -s to the end of the word would result in the form *[dɪʃs], which is not permitted by the phonotactics of English. In order to "rescue" the word, a vowel sound is inserted between the root and the plural marker, and [dɪʃɪz] results. Similar rules apply to the pronunciation of the -s in dogs and cats: it depends on the quality (voiced vs. unvoiced) of the final preceding phoneme.
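The conditioning just described can be approximated by a small rule. The Python sketch below uses spelling as a rough stand-in for the final phoneme, so it is only an illustration of the idea of allomorph selection, not a real phonological analysis.

```python
def regular_plural(noun: str) -> str:
    """Very rough approximation of English regular-plural allomorphy,
    using spelling as a stand-in for the final phoneme."""
    sibilant_endings = ("s", "sh", "ch", "x", "z")   # dish, church, box...
    if noun.endswith(sibilant_endings):
        return noun + "es"        # a vowel is inserted: dish -> dishes, /-ɪz/
    voiceless_endings = ("p", "t", "k", "f")         # cat, book...
    if noun.endswith(voiceless_endings):
        return noun + "s"         # pronounced /-s/, as in cats
    return noun + "s"             # otherwise pronounced /-z/, as in dogs

print(regular_plural("dog"), regular_plural("cat"), regular_plural("dish"))
# dogs cats dishes
```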
Lexical morphology is the branch of morphology that deals with the lexicon, which, morphologically conceived, is the collection of lexemes in a language. As such, it concerns itself primarily with word formation: derivation and compounding.
There are three principal approaches to morphology and each tries to capture the distinctions above in different ways:
- Morpheme-based morphology, which makes use of an item-and-arrangement approach.
- Lexeme-based morphology, which normally makes use of an item-and-process approach.
- Word-based morphology, which normally makes use of a word-and-paradigm approach.
While the associations indicated between the concepts in each item in that list are very strong, they are not absolute.
In morpheme-based morphology, word forms are analyzed as arrangements of morphemes. A morpheme is defined as the minimal meaningful unit of a language. In a word such as independently, the morphemes are said to be in-, de-, pend, -ent, and -ly; pend is the (bound) root and the other morphemes are, in this case, derivational affixes.[d] In words such as dogs, dog is the root and the -s is an inflectional morpheme. In its simplest and most naïve form, this way of analyzing word forms, called "item-and-arrangement", treats words as if they were made of morphemes put after each other ("concatenated") like beads on a string. More recent and sophisticated approaches, such as distributed morphology, seek to maintain the idea of the morpheme while accommodating non-concatenated, analogical, and other processes that have proven problematic for item-and-arrangement theories and similar approaches.
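In its simplest "beads on a string" reading, item-and-arrangement analysis is just concatenation of listed morphemes. Here is a minimal Python sketch using the segmentation of independently given above; the tiny morpheme inventory exists only for this example.

```python
# Item-and-arrangement in its naive form: a word-form is a concatenation of morphemes.
MORPHEMES = {"in-", "de-", "pend", "-ent", "-ly", "dog", "-s"}

def arrange(*morphemes: str) -> str:
    """Concatenate morphemes like beads on a string, dropping the hyphen markers."""
    assert all(m in MORPHEMES for m in morphemes), "unknown morpheme"
    return "".join(m.strip("-") for m in morphemes)

print(arrange("in-", "de-", "pend", "-ent", "-ly"))  # independently
print(arrange("dog", "-s"))                          # dogs
```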
Morpheme-based morphology presumes three basic axioms:
- Baudoin’s "single morpheme" hypothesis: Roots and affixes have the same status as morphemes.
- Bloomfield’s "sign base" morpheme hypothesis: As morphemes, they are dualistic signs, since they have both (phonological) form and meaning.
- Bloomfield's "lexical morpheme" hypothesis: morphemes, affixes and roots alike are stored in the lexicon.
Morpheme-based morphology comes in two flavours, one Bloomfieldian and one Hockettian. For Bloomfield, the morpheme was the minimal form with meaning, but did not have meaning itself. For Hockett, morphemes are "meaning elements", not "form elements". For him, there is a morpheme plural using allomorphs such as -s, -en and -ren. Within much morpheme-based morphological theory, the two views are mixed in unsystematic ways, so a writer may refer to "the morpheme plural" and "the morpheme -s" in the same sentence.
Lexeme-based morphology usually takes what is called an item-and-process approach. Instead of analyzing a word form as a set of morphemes arranged in sequence, a word form is said to be the result of applying rules that alter a word-form or stem in order to produce a new one. An inflectional rule takes a stem, changes it as is required by the rule, and outputs a word form; a derivational rule takes a stem, changes it as per its own requirements, and outputs a derived stem; a compounding rule takes word forms, and similarly outputs a compound stem.
Word-based morphology is (usually) a word-and-paradigm approach. The theory takes paradigms as a central notion. Instead of stating rules to combine morphemes into word forms or to generate word forms from stems, word-based morphology states generalizations that hold between the forms of inflectional paradigms. The major point behind this approach is that many such generalizations are hard to state with either of the other approaches. Word-and-paradigm approaches are also well-suited to capturing purely morphological phenomena, such as morphomes. Examples to show the effectiveness of word-based approaches are usually drawn from fusional languages, where a given "piece" of a word, which a morpheme-based theory would call an inflectional morpheme, corresponds to a combination of grammatical categories, for example, "third-person plural". Morpheme-based theories usually have no problems with this situation since one says that a given morpheme has two categories. Item-and-process theories, on the other hand, often break down in cases like these because they all too often assume that there will be two separate rules here, one for third person, and the other for plural, but the distinction between them turns out to be artificial. The approaches treat these as whole words that are related to each other by analogical rules. Words can be categorized based on the pattern they fit into. This applies both to existing words and to new ones. Application of a pattern different from the one that has been used historically can give rise to a new word, such as older replacing elder (where older follows the normal pattern of adjectival superlatives) and cows replacing kine (where cows fits the regular pattern of plural formation).
In the 19th century, philologists devised a now classic classification of languages according to their morphology. Some languages are isolating, and have little to no morphology; others are agglutinative, and their words tend to have many easily separable morphemes; still others are inflectional or fusional, because their inflectional morphemes are "fused" together, so that one bound morpheme conveys multiple pieces of information. A standard example of an isolating language is Chinese. An agglutinative language is Turkish. Latin and Greek are prototypical inflectional or fusional languages.
It is clear that this classification is not at all clearcut, and many languages (Latin and Greek among them) do not neatly fit any one of these types, and some fit in more than one way. A continuum of morphological complexity may therefore be adopted instead.
The three models of morphology stem from attempts to analyze languages that more or less match different categories in this typology. The item-and-arrangement approach fits very naturally with agglutinative languages. The item-and-process and word-and-paradigm approaches usually address fusional languages.
As there is very little fusion involved in word formation, classical typology mostly applies to inflectional morphology. Depending on the preferred way of expressing non-inflectional notions, languages may be classified as synthetic (using word formation) or analytic (using syntactic phrases).
Pingelapese is a Micronesian language spoken on the Pingelap atoll and on two of the eastern Caroline Islands, called the high island of Pohnpei. As in other languages, words in Pingelapese can take different forms to add to or even change their meanings. Verbal suffixes are morphemes added at the end of a word to change its form. Prefixes are those that are added at the front. For example, the Pingelapese suffix –kin means 'with' or 'at.' It is added at the end of a verb.
ius = to use --> ius-kin = to use with
mwahu = to be good --> mwahu-kin = to be good at
sa- is an example of a verbal prefix. It is added to the beginning of a word and means ‘not.’
pwung = to be correct --> sa-pwung = to be incorrect
There are also directional suffixes that when added to the root word give the listener a better idea of where the subject is headed. The verb alu means to walk. A directional suffix can be used to give more detail.
-da = ‘up’ --> aluh-da = to walk up
-di = ‘down’ --> aluh-di = to walk down
-eng = ‘away from speaker and listener’ --> aluh-eng = to walk away
Directional suffixes are not limited to motion verbs. When added to non-motion verbs, their meanings become figurative. The following table gives some examples of directional suffixes and their possible meanings.
| Directional suffix | Motion verb | Non-motion verb |
|---|---|---|
| -da | up | Onset of a state |
| -di | down | Action has been completed |
| -la | away from | Change has caused the start of a new state |
| -doa | towards | Action continued to a certain point in time |
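The suffixation shown in the walking examples and the table above is mechanically regular, so it can be sketched as plain string concatenation. The Python sketch below follows the stem spelling aluh- from the examples in the text and says nothing about Pingelapese phonology.

```python
# Directional suffixes and their motion-verb senses, as listed in the text.
DIRECTIONAL_SUFFIXES = {"-da": "up", "-di": "down", "-eng": "away"}

def add_directional(stem: str, suffix: str) -> str:
    """Attach a directional suffix to a verb stem, e.g. aluh + -da -> aluh-da."""
    assert suffix in DIRECTIONAL_SUFFIXES, "unknown suffix"
    return f"{stem}-{suffix.lstrip('-')}"

for suffix, sense in DIRECTIONAL_SUFFIXES.items():
    print(f"{add_directional('aluh', suffix)} = to walk {sense}")
# aluh-da = to walk up; aluh-di = to walk down; aluh-eng = to walk away
```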
Morphological analysis is used in various fields. For example, morphological features can be used to assess the quality of articles in the English, Polish, Russian, and other language versions of Wikipedia.
- Für die lere von der wortform wäle ich das wort «morphologie», nach dem vorgange der naturwißenschaften [...] (Standard High German: „Für die Lehre von der Wortform wähle ich das Wort ‚Morphologie‘, nach dem Vorgange der Naturwissenschaften [...]"; "For the science of word-formation, I choose the term 'morphology', following the precedent of the natural sciences [...].")
- Formerly known as Kwakiutl, Kwak'wala belongs to the Northern branch of the Wakashan language family. "Kwakiutl" is still used to refer to the tribe itself, along with other terms.
- Example taken from Foley (1998) using a modified transcription. This phenomenon of Kwak'wala was reported by Jacobsen as cited in van Valin & LaPolla (1997).
- The existence of words like appendix and pending in English does not mean that the English word depend is analyzed into a derivational prefix de- and a root pend. While all those were indeed once related to each other by morphological rules, that was only the case in Latin, not in English. English borrowed such words from French and Latin but not the morphological rules that allowed Latin speakers to combine de- and the verb pendere 'to hang' into the derivative dependere.
- Aronoff, Mark (1993). Morphology by Itself. Cambridge, MA: MIT Press. ISBN 9780262510721.
- Aronoff, Mark (2009). "Morphology: an interview with Mark Aronoff" (PDF). ReVEL. 7 (12). ISSN 1678-8931. Archived from the original (PDF) on 2011-07-06.
- Åkesson, Joyce (2001). Arabic morphology and phonology: based on the Marāḥ al-arwāḥ by Aḥmad b. ʻAlī b. Masʻūd. Leiden, The Netherlands: Brill. ISBN 9789004120280.
- Bauer, Laurie (2003). Introducing linguistic morphology (2nd ed.). Washington, DC: Georgetown University Press. ISBN 0-87840-343-4.
- Bauer, Laurie (2004). A glossary of morphology. Washington, DC: Georgetown University Press.
- Bloomfield, Leonard (1933). Language. New York: Henry Holt. OCLC 760588323.
- Bubenik, Vit (1999). An introduction to the study of morphology. LINCOM coursebooks in linguistics, 07. Muenchen: LINCOM Europa. ISBN 3-89586-570-2.
- Dixon, R. M. W.; Aikhenvald, Alexandra Y., eds. (2007). Word: A cross-linguistic typology. Cambridge: Cambridge University Press.
- Foley, William A (1998). Symmetrical Voice Systems and Precategoriality in Philippine Languages (Speech). Voice and Grammatical Functions in Austronesian. University of Sydney.
- Hockett, Charles F. (1947). "Problems of morphemic analysis". Language. 24: 414–441.
- Fabrega, Antonio; Scalise, Sergio (2012). Morphology: from Data to Theory. Edinburgh: Edinburgh University Press.
- Katamba, Francis (1993). Morphology. New York: St. Martin's Press. ISBN 0-312-10356-5.
- Korsakov, Andrey Konstantinovich (1969). "The use of tenses in English". In Korsakov, Andrey Konstantinovich (ed.). Structure of Modern English pt. 1.
- Kishorjit, N; Vidya Raj, RK; Nirmal, Y; Sivaji, B. (December 2012). Manipuri Morpheme Identification (PDF) (Speech). Proceedings of the 3rd Workshop on South and Southeast Asian Natural Language Processing (SANLP). Mumbai: COLING.
- Matthews, Peter (1991). Morphology (2nd ed.). Cambridge University Press. ISBN 0-521-42256-6.
- Mel'čuk, Igor A (1993). Cours de morphologie générale (in French). Montreal: Presses de l'Université de Montréal.
- Mel'čuk, Igor A (2006). Aspects of the theory of morphology. Berlin: Mouton.
- Scalise, Sergio (1983). Generative Morphology. Dordrecht: Foris.
- Singh, Rajendra; Starosta, Stanley, eds. (2003). Explorations in Seamless Morphology. SAGE. ISBN 0-7619-9594-3.
- Spencer, Andrew (1991). Morphological theory: an introduction to word structure in generative grammar. Blackwell textbooks in linguistics. Oxford: Blackwell. ISBN 0-631-16144-9.
- Spencer, Andrew; Zwicky, Arnold M., eds. (1998). The handbook of morphology. Blackwell handbooks in linguistics. Oxford: Blackwell. ISBN 0-631-18544-5.
- Stump, Gregory T. (2001). Inflectional morphology: a theory of paradigm structure. Cambridge studies in linguistics. Cambridge University Press. ISBN 0-521-78047-0.
- van Valin, Robert D.; LaPolla, Randy (1997). Syntax : Structure, Meaning And Function. Cambridge University Press.
- Jones, Daniel (2003), Peter Roach; James Hartmann; Jane Setter (eds.), English Pronouncing Dictionary, Cambridge: Cambridge University Press, ISBN 3-12-539683-2
- Anderson, Stephen R. (n.d.). "Morphology". Encyclopedia of Cognitive Science. Macmillan Reference, Ltd., Yale University. Retrieved 30 July 2016.
- Aronoff, Mark; Fudeman, Kirsten (n.d.). "Morphology and Morphological Analysis" (PDF). What is Morphology?. Blackwell Publishing. Retrieved 30 July 2016.
- Brown, Dunstan (December 2012). "Morphological Typology" (PDF). In Jae Jung Song (ed.). The Oxford Handbook of Linguistic Typology. pp. 487–503. doi:10.1093/oxfordhb/9780199281251.013.0023. Retrieved 30 July 2016.
- Sankin, A.A. (1979) . "I. Introduction" (PDF). In Ginzburg, R.S.; Khidekel, S.S.; Knyazeva, G. Y.; Sankin, A.A. (eds.). A Course in Modern English Lexicology (Revised and Enlarged, Second ed.). Moscow: VYSŠAJA ŠKOLA. p. 7. Retrieved 30 July 2016.
- Wilson-Fowler, E.B., & Apel, K. (2015). "Influence of Morphological Awareness on College Students' Literacy Skills: A path Analytic Approach". Journal of Literacy Research. 47 (3): 405–32. doi:10.1177/1086296x15619730.
- Beard, Robert (1995). Lexeme-Morpheme Base Morphology: A General Theory of Inflection and Word Formation. Albany: NY: State University of New York Press. pp. 2, 3. ISBN 0-7914-2471-5.
- Åkesson 2001.
- Schleicher, August (1859). "Zur Morphologie der Sprache". Mémoires de l'Académie Impériale des Sciences de St.-Pétersbourg. VII°. I, N.7. St. Petersburg. p. 35.
- Haspelmath & Sims 2002, p. 15.
- Haspelmath & Sims 2002, p. 16.
- Anderson, Stephen R. A-Morphous Morphology. Cambridge: Cambridge University Press. p. 74, 75.
- Plag, Ingo (2003). "Word Formation in English" (PDF). Library of Congress. Cambridge. Retrieved 2016-11-30.
- Haspelmath, Martin; Sims, Andrea D. (2002). Understanding Morphology. London: Arnold. ISBN 0-340-76026-5.
- Beard 1995.
- Bloomfield 1933.
- Hockett 1947.
- Bybee, Joan L. (1985). Morphology: A Study of the Relation Between Meaning and Form. Amsterdam: John Benjamins. pp. 11, 13.
- Hattori, Ryoko (2012). Preverbal Particles in Pingelapese. pp. 31–33.
- Biber, D., Conrad, S., & Reppen, R. (1998). Corpus linguistics: Investigating language structure and use. Cambridge University Press.
- Gries, S. T., Wulff, S., & Davies, M. (2010). Corpus-linguistic applications: Current studies, new directions. BRILL.
- Xu, Y., Luo, T. (2011). Measuring article quality in Wikipedia: Lexical clue model. In: 2011 3rd Symposium on Web Society (SWS), pp. 141–146. IEEE.
- Lewoniewski, Włodzimierz; Węcel, Krzysztof; Abramowicz, Witold (2018). "Determining Quality of Articles in Polish Wikipedia Based on Linguistic Features". Communications in Computer and Information Science. 920: 546–558. doi:10.1007/978-3-319-99972-2_45. ISBN 978-3-319-99971-5. Retrieved 2019-01-13.
- Lewoniewski, Włodzimierz; Khairova, Nina; Węcel, Krzysztof; Stratiienko, Nataliia; Abramowicz, Witold (2017-09-23). "Using Morphological and Semantic Features for the Quality Assessment of Russian Wikipedia". Communications in Computer and Information Science. 756: 550–560. doi:10.1007/978-3-319-67642-5_46. ISBN 978-3-319-67641-8. Retrieved 2019-01-13.
Daisy V: Machine Language
A computer can be described, abstractly, by specifying and demonstrating its machine language capabilities. Seeing some low-level programs written in machine language helps us understand not only how to get the computer to do things, but also why its hardware was designed in a certain way. Machine language is the most profound interface in the overall computer enterprise: the fine line where hardware meets software. This is the point where abstract thoughts and symbolic instructions are turned into physical operations performed in silicon.
A machine language can be viewed as an agreed-upon formalism, designed to manipulate a memory using a processor and a set of registers.
- Memory: refers loosely to the collection of hardware devices that store data and instructions. From a programmer’s standpoint, a memory is simply a contiguous array of cells (words) of some fixed width. A particular word can be accessed by specifying its address.
- Processor: or Central Processing Unit (CPU), is a device capable of performing a fixed set of elementary operations. These typically include arithmetic and logic operations, memory access operations, and control (branching) operations.
- Registers: Memory access is a relatively slow process. So most processors are equipped with several registers, each capable of holding a single value. Located in the processor’s immediate proximity, the registers serve as a high-speed local memory, allowing the processor to manipulate data and instructions quickly.
A machine language program is a series of coded instructions. For example, a typical instruction in a 16-bit computer may be 1010001100011001. To understand what this means, we'll need to know the instruction set of the hardware.
The Daisy machine language is based on two 16-bit command types. The address instruction has the format 0vvvvvvvvvvvvvvv, each v being either 0 or 1. This instruction causes the computer to load the 15-bit constant vvv...v into the A register.

The compute instruction consists of three bit fields: the c-bits instruct the ALU which function to compute, the d-bits instruct where to store the ALU output, and the j-bits specify an optional jump condition.
Since binary codes are rather cryptic, machine languages are normally specified using both binary codes and symbolic labels. For example, the language designer could decide that the operation code 0010 will be represented by the mnemonic ADD and that the registers will be symbolically referred to as R0, R1, etc. Using these conventions, one can specify a slightly more readable instruction such as ADD R0, R1.

This symbolic notation is called assembly language, and the program that translates from assembly to binary (machine code) is called an assembler.
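To make the mnemonic-to-binary idea concrete, here is a minimal Python sketch of such a translator. It is not the actual Daisy assembler: apart from the ADD = 0010 pairing used as an example above, the opcode and register encodings below are assumptions chosen purely for illustration.

```python
# Toy translator from symbolic instructions such as "ADD R0, R1" to 16-bit words.
# The encodings are hypothetical, except that ADD = 0010 follows the example in the text.
OPCODES = {"ADD": "0010", "SUB": "0011", "AND": "0100"}   # assumed 4-bit operation codes
REGISTERS = {"R0": "000", "R1": "001", "R2": "010"}       # assumed 3-bit register codes

def assemble(line: str) -> str:
    """Translate one symbolic instruction into a 16-bit binary string."""
    mnemonic, operands = line.split(maxsplit=1)
    dst, src = (REGISTERS[r.strip()] for r in operands.split(","))
    word = OPCODES[mnemonic] + dst + src   # 4 + 3 + 3 = 10 bits used here
    return word.ljust(16, "0")             # pad the unused bits with zeros

print(assemble("ADD R0, R1"))  # -> 0010000001000000
```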
Daisy Language Specification
Memory Address Spaces
A Daisy programmer must be aware of two distinct address spaces: an instruction memory and a data memory. Both memories are 16 bits wide and have a 15-bit address space. This means the maximum addressable size of each memory is 2¹⁵ = 32K 16-bit words.
The CPU can only execute programs that reside in the instruction memory. The instruction memory is a read-only device, and programs are loaded into it using some exogenous means. Loading a new program is done by replacing the entire ROM chip, similar to replacing a cartridge in a game console.
A Daisy programmer can make use of two 16-bit registers called A and D. D is used solely to store data values, while A doubles as both a data register and an address register. These registers can be manipulated explicitly by arithmetic and logical instructions such as D=!A, where ! means a 16-bit, bitwise Not operation.

The A register facilitates direct access to the data memory (simply memory from now on). Since Daisy instructions are 16 bits wide, and since addresses are specified using 15 bits, it is impossible to pack both an operation code and an address in one instruction. Thus, the syntax of the Daisy language mandates that memory access instructions operate on an implicit memory location called M, which always refers to the memory word whose address is the current value of the A register. For example, to effect the operation D = Memory[516] - 1, we use one instruction to set the A register to 516 (@516), and a subsequent instruction to specify D = M - 1.
The A register also facilitates direct access to the instruction memory, which can be used to implement jump controls in code (if/else, goto, etc.). Similarly to the memory access convention, a Daisy jump instruction always effects a jump to the instruction located in the memory word addressed by A. For example, to effect the operation goto 36, we first set the A register to 36 (@36), and then issue a second instruction with a goto command without an address. This sequence causes the computer to fetch the instruction stored in address 36 of the ROM in the next clock cycle.
The Daisy assembly program to multiply two numbers:
// Multiplies R0 and R1 and stores the result in R2.
// (R0, R1, R2 refer to RAM[0], RAM[1], and RAM[2], respectively.)
    @R2
    M=0       // Initialise result memory
    @R1
    D=M
    @END
    D;JEQ     // If second number is 0, answer is 0, jump to end
    @R3
    M=D       // Set counter to second number
(LOOP)
    @R0
    D=M
    @END
    D;JEQ     // If first number is 0, answer is 0, jump to end
    @R2
    M=M+D
    @R3
    M=M-1
    @R3
    D=M
    @END
    D;JLE     // Exit loop if counter is less than or equal to 0
    @LOOP
    0;JMP
(END)
    @END
    0;JMP
|
densities for teeth, compact bone, and cancellous (porous) bone (Chapter 8) in modern vertebrates average about 2.0, 1.7, and 1.1 g/cm3, respectively. Consequently, movement of teeth and compact bone would have been more likely as bedload; such dragging would have caused visible pits and fractures in most exposed bone. Such telltale marks from the physical transport of dinosaur bones have indeed been interpreted among the bones in high-energy facies. This information provides evidence for whether dinosaur remains were reworked into deposits much younger than the time when a dinosaur was alive. However, if the bones had any flesh remaining, these parts might have been cushioned from the abrasive effects of stream transport, thus the absence of fractures is not necessarily diagnostic of an autochthonous fossil.
Most modern examples of bone are cancellous, which with included organic matter are less dense than solid dahllite; loss of the organic material results in more open spaces and correspondingly less density. Dinosaur bones were similar in this respect and some, such as those of theropods, were lightly built and noticeably less dense than those of other dinosaurs (Chapters 8 and 9). Different hard parts on the same individual could also have had different densities, such as the bone composing the parietals of pachycephalosaurs (Chapter 13), or the teeth of any toothed dinosaur versus their limb bones.
Size and shape of a body or bone are also important factors in transport. Well-rounded tarsals of dinosaurs, for example, were more likely to roll along a stream bottom than their femurs or tibias. Of course, smaller bones were more susceptible to transport, with all other factors in the bones and stream being equal. Nevertheless, shape is probably more important to consider than size, because equal density of a large or small body translates into equal buoyancy regardless of size. Shape can be measured by looking at the ratio of an object's surface area to its volume, which is expressed through the simple relation
S = A/V
where S is shape, A is surface area, and V is volume. Using the example of a sphere, surface area is calculated by the following equation:
A = 4πr²
and volume for that same sphere is
V = (4/3)πr³
Using a typical orange (before peeling or squeezing) with a diameter of 10 cm (radius of 5 cm) as an example, its surface area to volume ratio can be calculated through the following procedure:
A = 4π(5 cm)² ≈ 314 cm², V = (4/3)π(5 cm)³ ≈ 524 cm³, so S = A/V ≈ 314/524 ≈ 0.6 per cm.
This ratio is actually the smallest that can be derived for any sedimentary particle; any particle shape deviating from a perfect sphere will result in a larger number.
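To see how strongly shape alone changes the ratio, the short Python sketch below compares A/V for the 10-cm orange with a thin, flat slab of roughly the same volume; the slab dimensions are arbitrary illustrative choices, not measurements from any particular bone.

    import math

    def sphere_ratio(r):
        """Surface-area-to-volume ratio of a sphere of radius r (equals 3/r)."""
        area = 4 * math.pi * r ** 2
        volume = (4 / 3) * math.pi * r ** 3
        return area / volume

    def slab_ratio(length, width, thickness):
        """Surface-area-to-volume ratio of a rectangular slab."""
        area = 2 * (length * width + length * thickness + width * thickness)
        volume = length * width * thickness
        return area / volume

    print(round(sphere_ratio(5), 2))           # orange, radius 5 cm -> 0.6 per cm
    # A flat slab with about the same ~524 cm^3 volume (32.7 cm x 16 cm x 1 cm):
    print(round(slab_ratio(32.7, 16.0, 1.0), 2))  # -> about 2.19 per cm

The flat slab has more than three times the surface area per unit volume of the equal-volume sphere, which is why flattened particles are more readily lifted by a current.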
The important application of this measurement to stream transport is that spherical particles are less likely to be lifted by a current than long or flat particles. In the same way, Frisbees™ (which have a high A/V) can stay airborne longer than baseballs (low A/V). Thus, long, flat particles are lifted more easily than spherical particles because of an important principle first formulated by Swiss mathematician Daniel Bernoulli (1700-82). Bernoulli was a primary contributor to hydrodynamics, the physics of water flow, which is an important science to taphonomists interested in estimating transport of bodies in water. Bernoulli discovered that a moving fluid (either water or air) caused less pressure on an object than stagnant fluid, this lower pressure providing a lift force to the object affected by the flow. This principle is exemplified by wings on aircraft, which are designed so that the pressure caused by air moving rapidly over them is less on top, causing an aircraft to lift off the ground.
Of all of the bones mentioned in Chapter 5, none are spherical, which means that all dinosaur bones had higher A/V ratios than a sphere. Bones with the largest ratios were those that were long, flat, or both, such as some cranial bones (parietals, frontals), the femur, humerus, tibia, and scapula. Notice that a typical ilium is shaped more like an aeroplane wing than, say, a cervical vertebra. So an ilium was more likely to be lifted in a stream and transported far away from the original death site of a dinosaur than its semispherical parts.
Consequently, the densities, sizes, and shapes of bones varied enough that all of these factors have to be taken into account when looking at a final assemblage of dinosaur bones in a deposit. In fact, some taphonomists were industrious enough to experiment with various bones of modern vertebrates, calculating A/V ratios and proportion of compact to cancellous bone (which affects density) to categorize bones on the basis of how easily they could be transported by water. These data provide a hypothetical model to test when encountering dinosaur bones in the field and assessing their possible transport (Table 7.3).
Most of the preceding discussion on transport of dinosaur bodies was based on water as a medium, but wind was also a possible (albeit less probable) agent of transport. The physics of air and its movement is aerodynamics, an essential science for people who design and fly aircraft, but one that can also be applied to any effects of air movement on any objects. For example, modern hurricanes and tornadoes have carried large, multi-ton objects for considerable distances. Living animals also have been transported hundreds or thousands of meters away from their original environment. An example of the lift forces generated by some tornadoes is illustrated by the instance of a home freezer, which probably weighed about 200 kg, that was moved 2 km by a tornado in Mississippi in 1975, and a 70 metric-ton railroad car, which weighed more than most adult sauropods (Chapter 10), that was also moved a measurable distance by a tornado.
Storms have been interpreted in the geologic record on the basis of the distinctive deposits that they leave in marine and coastal sediments. Such storm deposits, called tempestites, are common in strata formed in shallow-marine environments from the Mesozoic, so dinosaurs certainly experienced violent storms. However, no one has ever provided evidence for transport of dinosaur bodies by wind, hence this is only an idea, not a hypothesis. Of course, observations of the impact of modern hurricanes, as well as interpreted Mesozoic tempestites, could lend themselves to the hypothesis that similar inland flooding occurred from the massive amounts of precipitation and coastal storm surges that accompanied Mesozoic hurricanes or other storms. These phenomena would have increased the amount of stream discharges and correspondingly increased the likelihood of dinosaurs either drowning or having their otherwise-dead bodies washed into water bodies and later buried.
How would a paleontologist look for clues of postmortem transport (or lack of it) once dinosaur bones are found in a Mesozoic deposit? One clue already
|
Grade Level: 9 (8-10)
Time Required: 30 minutes
Lesson Dependency: None
Subject Areas: Biology
Summary: Students learn about mutations to both DNA and chromosomes, which are uncontrolled changes to the genetic code. They are introduced to small-scale mutations (substitutions, deletions and insertions) and large-scale mutations (deletions, duplications, inversions, insertions, translocations and nondisjunctions). The effects of different mutations are studied, as well as environmental factors that may increase the likelihood of mutations. Students practice their understanding of the different mutation types and processes with the associated activity, based on the childhood game “telephone”. A PowerPoint® presentation and pre/post-assessments are provided.
Genetic engineers are able to manipulate the genomes of organisms; however, the consequences are not always beneficial. In order to prevent harmful and unwanted mutations, it is important for engineers to understand what effects result from certain changes to organisms’ genomes (several of which can be seen by studying natural mutations) and how environmental factors can affect the probability of mutations occurring.
After this lesson, students should be able to:
- List the different types of mutations.
- Describe some possible effects of mutations.
- Explain the role of mutations in genetic syndromes.
Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org).
In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.
Make and defend a claim based on evidence that inheritable genetic variations may result from: (1) new genetic combinations through meiosis, (2) viable errors occurring during replication, and/or (3) mutations caused by environmental factors.
(Grades 9 - 12 )
This Performance Expectation focuses on the following Three Dimensional Learning aspects of NGSS:
Science & Engineering Practices: Make and defend a claim based on evidence about the natural world that reflects scientific knowledge, and student-generated evidence.
Disciplinary Core Ideas: In sexual reproduction, chromosomes can sometimes swap sections during the process of meiosis (cell division), thereby creating new genetic combinations and thus more genetic variation. Although DNA replication is tightly regulated and remarkably accurate, errors do occur and result in mutations, which are also a source of genetic variation. Environmental factors can also cause mutations in genes, and viable mutations are inherited. Environmental factors also affect expression of traits, and hence affect the probability of occurrences of traits in a population. Thus the variation and distribution of traits observed depends on both genetic and environmental factors.
Crosscutting Concepts: Empirical evidence is required to differentiate between cause and correlation and make claims about specific causes and effects.
identify components of DNA, and describe how information for specifying the traits of an organism is carried in the DNA;
identify and illustrate changes in DNA and evaluate the significance of these changes;
Worksheets and Attachments: Visit [ ] to print or download.
More Curriculum Like This
Students learn how engineers apply their understanding of DNA to manipulate specific genes to produce desired traits, and how engineers have used this practice to address current problems facing humanity. Students fill out a flow chart to list the methods to modify genes to create GMOs and examine a...
As a class, students work through an example showing how DNA provides the "recipe" for making human body proteins. They see how the pattern of nucleotide bases (adenine, thymine, guanine, cytosine) forms the double helix ladder shape of DNA, and serves as the code for the steps required to make gene...
Students use DNA profiling to determine who robbed a bank. After they learn how the FBI's Combined DNA Index System (CODIS) is used to match crime scene DNA with tissue sample DNA, students use CODIS principles and sample DNA fragments to determine which of three suspects matches evidence obtained at ...
Students are introduced to the latest imaging methods used to visualize molecular structures and the method of electrophoresis that is used to identify and compare genetic code (DNA).
Students should have a good understanding of how DNA is copied from one cell to another through either meiosis or mitosis. They should also know that changes in the DNA or genes result in the alteration of proteins that may or may not cause noticeable changes to organisms’ traits.
(Be ready to show the class the 22-slide Mutations Presentation, a PowerPoint® file.)
(Slides 1-3) Introduction/Motivation: Who can tell me how Cyclops from the X-Men got his superpowers? (Answer: He’s a mutant and was born with his superpowers.) What about the Hulk? (Answer: Mutation due to exposure to gamma radiation.) And Spiderman? (Answer: Mutated when bitten by a radioactive spider.)
So, we have identified three superheroes who all gained some sort of special abilities from mutations. For Cyclops and any of the X-Men, the powers were caused by a pre-birth DNA or genome mutation. The Hulk and Spiderman powers happened a little differently since the mutations occurred later when they were exposed to radioactivity in some form or another.
Today we will discuss some of the science behind mutations. While the superpowers and abilities we just discussed may be fictional, it is true that mutations can have significant impacts on people and evidence exists that radiation exposure can lead to an increased rate of mutations. First, we will discuss the different types of mutations, then where or how they can occur. We will also talk about some environmental factors that can influence the rate of mutations, and finish by looking at some possible effects of mutations.
(Continue on, presenting the content in the Lesson Background section.)
Lesson Background and Concepts for Teachers
(Slide 4) Types of Mutations: Mutations can be classified several different ways. In this lesson, we will focus on sorting mutations by their effects on the structure of DNA or a chromosome. For this categorization, mutations can be organized into two main groups, each with multiple specific types. The two general categories are small-scale and large-scale mutations. Similar to the childhood game of "telephone" the Mutation Telephone activity helps students illustrate how mutations occur in nature.
Small-scale mutations are those that affect the DNA at the molecular level by changing the normal sequence of nucleotide base pairs. These types of mutations may occur during the process of DNA replication during either meiosis or mitosis. Three possible types of small-scale mutations may occur: substitutions, deletions and insertions.
(Slide 5) Also referred to as a “point” mutation, substitutions occur when a nucleotide is replaced with a different nucleotide in the DNA sequence. The most common substitutions involve the switching of adenine and guanine (A ↔ G) or cytosine and thymine (C ↔ T). Since the total number of nucleotides is conserved, this type of mutation only affects the codon for a single amino acid.
(Slide 6) A deletion is the removal of a nucleotide from the DNA sequence. Deletions are referred to as “frameshift” mutations because the removal of even a single nucleotide from a gene subsequently alters every codon after the mutation (it is said that the reading frame is “shifted”); this is illustrated in Figure 1 for both deletions and insertions. The change in the number of nucleotides changes which ones are normally read together.
(Slide 7) An insertion is the addition of a nucleotide to the DNA sequence. Similar to a deletion, insertions are also considered “frameshift” mutations and alter every codon that is read after the mutation.
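As a rough illustration of why insertions and deletions shift the reading frame, here is a short Python sketch using a made-up DNA sequence; the sequence and the deleted position are arbitrary examples, not taken from the lesson materials.

    def codons(seq):
        """Split a DNA sequence into consecutive three-letter codons."""
        return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

    original = "ATGGCAGATTTC"               # read as ATG GCA GAT TTC
    mutated = original[:4] + original[5:]   # delete the single nucleotide at index 4

    print(codons(original))  # ['ATG', 'GCA', 'GAT', 'TTC']
    print(codons(mutated))   # ['ATG', 'GAG', 'ATT'] - every codon after the deletion changes

Deleting one base leaves the first codon intact but changes every codon that follows, which is exactly the "shifted reading frame" described above.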
(Slide 8) Large-scale mutations are those that affect entire portions of a chromosome. Some large-scale mutations affect only single chromosomes, others occur across nonhomologous pairs. Some large-scale mutations in the chromosome are analogous to the small-scale mutations in DNA; the difference is that for large-scale mutations, entire genes or sets of genes are altered rather than only single nucleotides of the DNA. Single chromosome mutations are most likely to occur by some error in the DNA replication stage of cell growth, and therefore could occur during meiosis or mitosis. Mutations involving multiple chromosomes are more likely to occur in meiosis during the crossing-over that occurs during the prophase I. Most of these mutations are illustrated in Figure 2.
(Slide 9) Large-scale deletion is a single chromosome mutation involving the loss of one or more gene(s) from the parent chromosome.
(Slide 10) Duplication is the addition of one or more gene(s) that are already present in the chromosome. This is a single chromosome mutation.
(Slide 11) An inversion mutation involves the complete reversal of one or more gene(s) within a chromosome. The genes are present, but the order is backwards from the parent chromosome. This is also a single chromosome mutation.
(Slide 12) Large-scale insertion involves multiple chromosomes. For this type of insertion, one or more gene(s) are removed from one chromosome and inserted into another nonhomologous chromosome. This can occur by an error during the prophase I of meiosis when the chromosomes are swapping genes to increase diversity.
(Slide 13) Translocation also involves multiple nonhomologous chromosomes. Here, the chromosomes swap one or more gene(s) with another chromosome.
(Slide 14) A nondisjunction mutation does not involve any errors in DNA replication or crossing-over. Instead, these mutations occur during the anaphase and telophase when the chromosomes are not separated correctly into the new cells. Common nondisjunctions are missing or extra chromosomes. When gametes with nondisjunctions are produced during meiosis, it can result in offspring with monosomy or trisomy (a missing or extra homologous chromosome).
(Slide 15) The effects of mutations may range from nothing to the unviability of a cell. All mutations affect the proteins that are created during protein synthesis, but not all mutations have a significant impact. The effects can also be looked at differently between the small-scale and large-scale mutations.
(Slide 16) The effects of small-scale mutations: Frameshift mutations, insertions and deletions on genes have similar effects. When a nucleotide is added or removed from the DNA sequence, the sequence is shifted and every codon after the mutation is changed, as shown in Figure 1. This results in severe alterations to the proteins that are encoded by the DNA, which can lead to a loss of functionality for those proteins.
Substitutions, or point mutations, are much more subtle and have three possible effects. The table in Figure 3 shows how some point mutations may lead to common disorders.
- Silent: The nucleotide is replaced, but the codon still produces the same amino acid.
- Missense: The codon now results in a different amino acid, which may or may not significantly alter the protein’s function.
- Nonsense: The codon now results in a “stop” command, truncating the protein at the location where the mutated codon is read; this almost always leads to a loss of protein functionality.
These mutations may occur anywhere in the DNA, so the effect of the mutation really depends on its location. If the mutation occurs in a gene, the result is an altered protein, but the mutation can also occur in a nongenic region of the DNA. In the latter case, the mutation has no effect on the organism.
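The difference between silent, missense and nonsense substitutions can also be sketched in code with a few entries from the standard genetic code. The tiny codon table below is deliberately incomplete and covers only the example codons used here; it is an illustration, not part of the lesson materials.

    # A few entries from the standard genetic code (DNA coding-strand codons).
    CODON_TABLE = {
        "GAA": "Glu", "GAG": "Glu",   # both encode glutamic acid
        "GTG": "Val",                 # valine
        "TAG": "STOP",                # stop codon
    }

    def classify(original, mutated):
        """Label a single-codon substitution as silent, missense or nonsense."""
        before, after = CODON_TABLE[original], CODON_TABLE[mutated]
        if after == "STOP":
            return "nonsense"
        return "silent" if before == after else "missense"

    print(classify("GAA", "GAG"))  # silent   (still Glu)
    print(classify("GAG", "GTG"))  # missense (Glu -> Val)
    print(classify("GAG", "TAG"))  # nonsense (premature stop)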
(Slides 17-18) The effects of large-scale mutations are more obvious than those of small-scale mutations. Duplication of multiple genes causes those genes to be overexpressed while deletions result in missing or incomplete genes. Mutations that change the order of the genes on the chromosome—such as deletions, inversions, insertions and translocations—result in close-together genes that were previously separated either by a set of genes on the same chromosome or on another chromosome altogether. When certain genes are positioned closely together, they may encode for a “fusion protein,” which is a protein that would not normally exist but is created by a mutation in which two genes were combined. Some of these new proteins give cells a growth advantage leading to tumors and cancer. Astrocytoma, a type of brain tumor, is the result of a deletion that creates a new fusion gene that permits the cells to become cancerous.
(Slide 19) Often, large-scale mutations lead to cells that are not viable (and die due to the mutation). This is especially true with nondisjunction mutations in gametes in which entire chromosomes are missing or extra. In humans, when the gamete from a male (sperm) merges its chromosomes with the gamete from a female (egg), the offspring receive 23 chromosomes from each parent to form 23 homologous pairs, as shown in the karyotype in Figure 4. However, when one of the gametes has a nondisjunction mutation, the resulting offspring end up with only one homolog in a pair (monosomy) or with three homologs in a pair (trisomy). Most of the time, these offspring are not viable. The ones that do result in viable offspring will possess some noticeable differences due to the extra or missing chromosome; this alteration leads to a permanent syndrome in the offspring. The most well-known syndrome is trisomy 21, an extra 21st chromosome (this karyotype is shown in Figure 5); this particular nondisjunction mutation leads to Down syndrome.
(Slide 20) What can influence mutations? Mutations naturally occur over time, which is the underlying cause of evolution. As we can see, evolution is a very slow process with a net benefit to an organism, but some environmental factors may influence or induce additional mutations. These induced mutations often lead to harmful diseases, such as cancer.
Exposure to certain chemicals is one environmental factor that may induce DNA mutations. Typically, anything that we identify as carcinogenic (may cause cancer) has negative side effects on DNA, and may lead to cancer. This includes the chemicals found in cigarette smoke as well as those found in meats cooked on the grill. These chemicals belong to a larger class called mutagens, meaning they can lead to changes in genetic material.
Chemicals are not the only types of mutagens that we encounter; physical mutagens also exist in the environment, namely radiation. Ultraviolet radiation from the sun can damage genetic material by changing the properties of nucleotides in the DNA. Overexposure to ultraviolet radiation is known to lead to skin cancer. X-rays and gamma radiation are also physical mutagens and forms of ionizing radiation; this means that these types of radiation possess enough energy to remove electrons from atoms, thus forming ions and affecting how different biomolecules interact. While a typical dose of x-rays received during a medical procedure is low, it does marginally increase a person’s cancer risk.
Alternatively, retroviruses such as HIV naturally experience mutations at a much higher rate than other organisms, which can be attributed to the fact that they possess RNA instead of DNA. The process by which RNA is copied and replicated is not as precise as that of DNA. Therefore, by the time our immune system has adjusted to fight a virus like HIV, the HIV virus has already mutated again and the immune system must start over. The mutations in the HIV’s RNA lead to alterations in the protein markers on the virus that the immune system targets, and if the target is always changing, it is almost impossible for the immune system to remove the virus.
(Slides 21-22) Engineering Connection: While mutations occur naturally over time, biological engineers are able to genetically modify various organisms. Humans have been genetically modifying plants and animals for thousands of years. Humans have accomplished this by selectively breeding or inbreeding in order to produce and “improve” specific traits, such as breeding watermelons to be larger and have fewer seeds or breeding chickens to have more white meat and more breast meat.
With the advancement of technology, engineers can directly manipulate the genetic code of plants and animals. Some examples of genetically modified (and controversial) organisms include disease-resistant papaya, vitamin A-rich rice and drought-tolerant corn. Currently, researchers are studying gene editing in the womb. If it is determined that an unborn child has a disease or disability, then we may one day be able to edit the genes of the unborn child and prevent the issue from appearing in the child.
- Mutation Telephone - As a way to illustrate how DNA mutations can happen, students conduct an activity similar to the childhood “telephone” game that models the biological process related to the passage of DNA from one cell to another. Then, students act as predators to test how various mutation types (normal, substitution, deletion or insertion) affect the survivability of an organism in the wild, which serves as a demonstration of natural selection based on mutation.
chromosome: A long strand of DNA wrapped around a protein that stores instructions to create several proteins. Humans have 46 chromosomes composed of 23 pairs of homologous chromosomes.
disjunction: Normal separation of chromosomes during meiosis.
DNA: A molecule that contains an organism’s complete genetic information. Abbreviation for deoxyribonucleic acid.
DNA replication: The process by which DNA is copied and passed on to new cells.
gamete: A sex cell. In mammals, the sperm and eggs. Has half the chromosomes of the parent organism.
gene: A subset of DNA that provides instructions for a cell to build a single protein.
genome: The complete genetic information for an organism; it includes all of the chromosomes.
karyotype: A picture of an organism’s genome with the chromosomes organized by homologous pairs.
meiosis: A type of cell division that occurs in sexually reproducing organisms and typically results in four cells with half the number of chromosomes of the parent. In humans, meiosis results in the creation of sperm or eggs with 23 chromosomes each.
mitosis: A type of cell division that results in two identical cells with the same number of chromosomes as the parent.
monosomy: A situation in which a homolog is missing from a chromosome pair. For example, if only one homolog exists for chromosome 21, it is called monosomy 21.
mutagen: A physical or chemical agent that affects genetic material.
mutation: A permanent alteration in either the DNA nucleotide sequence during DNA replication or a chromosome during meiosis or mitosis.
nondisjunction: The abnormal separation of chromosomes during meiosis.
protein synthesis: A process by which the instructions contained in DNA are used to produce proteins for a cell or organism.
trisomy: A situation in which an extra chromosome is present. For example, if three homologs exist for chromosome 21, it is called trisomy 21 or Down syndrome.
Mutation Questions: At the beginning of class, have students write short answers to the three questions on the Pre-Lesson Worksheet. Tip: To save paper and ink, since the color of the tiger in the photograph is important for this assessment, display the worksheet via projector and have students write their answers on their own papers. Students’ answers reveal their base understanding of genetics, traits and mutations.
Lesson Summary Assessment
Mutation Questions: After the lesson, have students write short answers to the four questions on the Post-Lesson Worksheet. Tip: To save paper and ink, since the color of the tiger in the photograph is important for this assessment, display the worksheet via projector and have students write their answers on their own papers. Students’ answers reveal their comprehension of the lesson subject matter and content.
Research: Have students choose a syndrome caused by a mutation (such as extra or missing chromosomes) and write a brief, 3-5 sentence paragraph on it. Make sure they mention the specific mutation to the chromosome that leads to the syndrome and what effects that mutation causes.
ContributorsMatthew Zelisko; Kimberly Anderson; Kent Kurashima
Copyright© 2016 by Regents of the University of Colorado; original © 2015 University of Houston
Supporting ProgramNational Science Foundation GK-12 and Research Experience for Teachers (RET) Programs, University of Houston
This digital library content was developed by the University of Houston's College of Engineering under National Science Foundation GK-12 grant number DGE 0840889. However, these contents do not necessarily represent the policies of the NSF and you should not assume endorsement by the federal government.
Last modified: May 27, 2019 |
Geometry Grades 3-5
Goals: • Build an understanding of the mathematical concepts within Geometry, Measurement, and NBT Domains • Analyze and describe how concepts of geometry and geometric measurement progress through the grades • Engage in math tasks that make connections among domains • Explore Geometry Tasks and Resources to Take Back to Your Classroom
CCSS Goals for Elementary Geometry • Geometric shapes, their components (e.g. sides, angles, faces), their properties, and their categorization based on those properties. • Composing and decomposing geometric shapes. • Spatial relations and spatial structuring.
Progression of Geometry and Geometric Measurement • Domain: Geometry (G) and Measurement & Data (MD – measurement progression) • Read and make note of the main concepts of grades K-5 • Describe how K-5 Geometry Standards connect to the K-5 MD and NBT Standards • Identify how the concept changes and increases in rigor and understanding for the student • What benefits might be gained from understanding the progression?
Components, Properties, and Categorization of Geometric Shapes Concepts included build on our K-2 work from the morning. The activities are similar with increased sophistication. Examples: • Venn diagrams • Guess my rule • Sorting • Flow chart
Composing and Decomposing Geometric Shapes Assemble your tangram pieces into one large square.
Connecting the measurement and number • What if the area of the small square piece is ONE SQUARE UNIT? What is the area of each of the other pieces? What is the area of the entire large square? • What if the area of the large square is ONE SQUARE UNIT? What is the area of each of the seven smaller pieces?
Assume the perimeter of the small square is FOUR UNITS. What is the perimeter of each of the seven pieces, and what is the perimeter of the large square?
Jerry says, “I did a big triangle instead of a big square. Even if the small square is still ONE SQUARE UNIT, my answers for the area and perimeter of the big triangle will be different than we found for the big square.” Tom disagrees, “If the small square is still ONE SQUARE UNIT, my big rectangle will have the same area and perimeter as the big square because we are using all the same pieces even if they are making a different shape.”
Spatial Relations and Structuring • Mental operation of constructing an organization or form for an object or set of objects in space • Precedes meaningful mathematical use of multiplication, area, volume, number properties, and the coordinate plane
Quick Rectangles • Build the rectangle you see flashed on the screen with square tiles. • Draw the rectangle you see flashed on the screen. What mathematical issues are raised when you have to draw instead of build?
Commutative Property: a x b = b x a • How many rectangles can you make with an area of 12 square tiles?
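For teachers who want to check the possibilities quickly, here is a short Python sketch (an optional aside, not part of the original slides) that lists every whole-number rectangle with an area of 12 square tiles; counting 3 x 4 and 4 x 3 separately is exactly where the commutative property shows up.

    # List all whole-number rectangles (width, height) with area 12.
    area = 12
    rectangles = [(w, h) for w in range(1, area + 1)
                  for h in range(1, area + 1) if w * h == area]
    print(rectangles)
    # [(1, 12), (2, 6), (3, 4), (4, 3), (6, 2), (12, 1)] - six orientations, three distinct shapes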
How many cards are needed? A club wants to create a card section at the next football game by placing a card on each seat in a certain section. They are planning to use a section of seats that has 5 rows, and then plan to put 3 yellow cards, then 4 blue cards, and then 5 green cards down each row. How many cards altogether will be needed so that each seat in this section has a card on it? Write number sentences to show how you found your answer. From Benson, Wall, & Malm (2013)
How many cards are needed? From Benson, Wall, & Malm (2013)
Distributive Property 5 x 12 = 5 x (3 + 4 + 5) = (5 x 3) + (5 x 4) + (5 x 5) A x (B + C) = (A x B) + (A x C)
Multiplying Larger Numbers • How can you use the base ten blocks to show 12 x 14 using the least number of pieces possible?
Connecting to Written Algorithms: Partial Products Algorithm for 14 x 12
    14
  x 12
  ----
     8   (4 x 2)
    20   (10 x 2)
    40   (4 x 10)
 + 100   (10 x 10)
  ----
   168
[Slide also shows the matching area model: a 14-by-12 rectangle split into 10 and 4 by 10 and 2, giving regions of 100, 40, 20 and 8.]
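The same partial-products idea can be sketched in a few lines of Python, splitting each factor into tens and ones; the numbers 14 and 12 come from the slide, while the helper function itself is only an illustration.

    def partial_products(a, b):
        """Break a and b into tens and ones and list the four partial products."""
        a_tens, a_ones = (a // 10) * 10, a % 10
        b_tens, b_ones = (b // 10) * 10, b % 10
        parts = [a_ones * b_ones, a_tens * b_ones, a_ones * b_tens, a_tens * b_tens]
        return parts, sum(parts)

    print(partial_products(14, 12))  # ([8, 20, 40, 100], 168)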
3-D Spatial Structuring Build a Building • What patterns do you see as you continue to add stories to each building? • How could you use these patterns to figure out the number of rooms even without building?
3-D Spatial Structuring • Describe the rectangular prism on your table. • What are all the different ways to find the number of cubes it took to build it? • Look at the net for this prism. How does it relate to the 3-D shape?
How many other rectangular prisms can be made with 24 cubes? • Build • Draw the nets, cut out and make them • Use the cm paper to make the biggest rectangular prism possible in terms of volume (you must draw the net from the paper).
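As a quick reference while preparing this task, the following Python sketch lists every whole-number rectangular prism that can be built from 24 unit cubes, with dimensions sorted so each shape appears once; the surface-area column is an added convenience (not on the slide) that connects to the net-drawing step.

    # Enumerate rectangular prisms l x w x h (l <= w <= h) with volume 24.
    volume = 24
    for l in range(1, volume + 1):
        for w in range(l, volume + 1):
            if volume % (l * w):
                continue
            h = volume // (l * w)
            if h >= w:
                surface = 2 * (l * w + l * h + w * h)
                print(f"{l} x {w} x {h}  surface area = {surface}")
    # Prints six prisms: 1x1x24, 1x2x12, 1x3x8, 1x4x6, 2x2x6, 2x3x4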
Resources • Check out LAUNCH Blog for today’s power point and other websites and resources for teaching geometry. |
Taken from the International Space Station in 2016, this picture of the southern tip of Greenland shows the island’s vast ice sheet fringed by glaciers that flow into the sea. The glaciers and ice sheet of Greenland cover a land area greater than the European countries of Germany, France, Spain and Italy combined. If all Greenland’s ice melted, sea levels would rise by about 7 meters (23 feet).
A new analysis of Greenland’s past temperatures will help scientists figure out how fast the island’s vast ice sheet is melting, according to a new report from University of Arizona atmospheric scientists.
The ice sheet has been shrinking since 1900 and the yearly loss of ice has doubled since 2003, other researchers have shown. The accelerated melting of the Greenland ice sheet is contributing to sea level rise.
The glaciers and ice sheet of Greenland cover a land area greater than the European countries of Germany, France, Spain and Italy combined. If all Greenland’s ice melted, sea levels would rise by about 7 meters (23 feet).
Figuring out how fast the island’s ice has melted and will melt in the future requires knowing the past and the present surface air temperatures, according to UA researchers J. E. Jack Reeves Eyre and Xubin Zeng.
“Greenland is particularly important to global climate change because it has the potential to cause a big change in sea level,” lead author Reeves Eyre said. “Knowing how it’s going to change over the next century is important.”
Calculating an average yearly surface temperature for the whole of Greenland is difficult. During most of the 20th century, the only weather stations were along the coast. There was no network of weather stations in Greenland’s interior until 1995.
Other groups of researchers have used combinations of weather station readings, satellite remote sensing data, statistical analyses and climate models to calculate the island’s annual surface temperatures back to 1901. However, the results of those analyses disagree with one another substantially.
How Greenland’s massive ice sheet will respond to future warming is not well understood, said Zeng, a UA professor of hydrology and atmospheric sciences.
By combining the best two of the previous analyses, the UA study provides the most accurate estimates of Greenland’s 20th century temperatures, said Reeves Eyre, a doctoral student in the UA Department of Hydrology and Atmospheric Sciences.
The finding will help improve climate models so they more accurately project future global climate change and its effects.
“That’s why we look at the historical period — it’s not about the history. It’s about the future,” said Zeng, who holds the Agnese N. Haury Endowed Chair in Environment.
Reeves Eyre and Zeng’s research article, “Evaluation of Greenland near surface air temperature data sets,” is published online July 5 in the open-access journal The Cryosphere.
NASA, the U.S. Department of Energy and the UA Agnese Nelms Haury Program in Environment and Social Justice funded the research.
Knowing Greenland’s past temperatures is important for improving climate models, because scientists test regional and global climate models by seeing how well they predict what the climate was in the past.
Previous analyses of the island’s past temperatures came up with contradictory results: Some said the 1930s were warmer than present, while other analyses said the opposite.
To find the best estimate of 20th century temperatures, the UA scientists compared 16 different analyses. The UA team compared more datasets covering the period 1901 to 2014 and used more information from weather stations and field expeditions than previous studies.
“We are the first to bring all those datasets together,” Zeng said.
To avoid bias from lumping temperature data from different elevations, Reeves Eyre and Zeng divided the temperature data into three categories: data from coastal regions, data from elevations below 1,500 meters (about 4,900 feet), and data from above 1,500 meters.
The coastal regions of Greenland are ice-free year-round, whereas the glaciers and ice sheet at the intermediate elevation melt some in the summer, but refreeze in the winter, Reeves Eyre said. The ice sheet and glaciers at the intermediate elevations are shrinking a bit each year because temperatures are increasing.
Above 1,500 meters, the ice generally does not melt and may even gain mass, he said. However, the bit of ice gained at the highest elevations does not offset the loss of ice at the lower elevations.
The UA study resolves the discrepancies among the other analyses and provides the best estimates of Greenland’s past temperatures.
“The combination of the MERRA2 and GISTEMP (analyses) gives the most accurate results over the 20th century,” he said. “Putting them together is more than the sum of the parts. Neither of them individually can do what both of them together can do.”
Although some previous analyses suggest the 1930s were warmer than it is now, the UA analysis shows that current temperatures are warmer than the 1930s. The long-term trend for Greenland’s ice sheet appears to be for ever-higher surface temperatures, he said.
“By studying a wide range of available data and combining two of the best data sources, we’ve come up with a combination that best represents the whole distribution of temperatures over Greenland from 1880 to 2016,” Reeves Eyre said. “Using this dataset is the best way to evaluate climate models and their projection of temperature change over Greenland.”
Story Source: Materials provided by University of Arizona. Note: Content may be edited for style and length.
J. E. Jack Reeves Eyre, Xubin Zeng. Evaluation of Greenland near surface air temperature datasets. The Cryosphere, 2017; 11 (4): 1591 DOI: 10.5194/tc-11-1591-2017 |
Galaxy clusters are often described by superlatives. After all, they are huge conglomerations of galaxies, hot gas, and dark matter and represent the largest structures in the Universe held together by gravity.
Galaxy clusters tend to be poor at producing new stars in their centers. They generally have one giant galaxy in their middle that forms stars at a rate significantly slower than most galaxies – including our Milky Way. The central galaxy contains a supermassive black hole roughly a thousand times more massive than the one at the center of our galaxy. Without heating by outbursts from this black hole, the copious amounts of hot gas found in the central galaxy should cool, allowing stars to form at a high clip. It is thought that the central black hole acts as a thermostat, preventing rapid cooling of surrounding hot gas and impeding star formation.
New data provide more details on how the galaxy cluster SPT-CLJ2344-4243, nicknamed the Phoenix Cluster for the constellation in which it is found, challenges this trend. The cluster has shattered multiple records in the past: In 2012, scientists announced that the Phoenix cluster featured the highest rate of cooling hot gas and star formation ever seen in the center of a galaxy cluster, and is the most powerful producer of X-rays of all known clusters. The rate at which hot gas is cooling in the center of the cluster is also the largest ever observed.
New observations of this galaxy cluster at X-ray, ultraviolet, and optical wavelengths by NASA’s Chandra X-ray Observatory, the Hubble Space Telescope, and the Clay-Magellan telescope located in Chile, are helping astronomers better understand this remarkable object. Clay-Magellan’s optical data reveal narrow filaments from the center of the cluster where stars are forming. These massive cosmic threads of gas and dust, most of which had never been detected before, extend for 160,000 to 330,000 light-years. This is longer than the entire breadth of the Milky Way galaxy, making them the most extensive filaments ever seen in a galaxy cluster.
These filaments surround large cavities – regions with greatly reduced X-ray emission – in the hot gas. The X-ray cavities can be seen in this composite image that shows the Chandra X-ray data in blue and optical data from the Hubble Space Telescope (red, green, and blue). These are the cluster’s “inner cavities.” Astronomers think that the X-ray cavities were carved out of the surrounding gas by powerful jets of high-energy particles emanating from near a supermassive black hole in the central galaxy of the cluster. As matter swirls toward a black hole, an enormous amount of gravitational energy is released. Combined radio and X-ray observations of supermassive black holes in other galaxy clusters have shown that a significant fraction of this energy is released as jets of outbursts that can last millions of years. The observed size of the X-ray cavities indicates that the outburst that produced the cavities in SPT-CLJ2344-4243 was one of the most energetic such events ever recorded.
However, the central black hole in the Phoenix cluster is suffering from somewhat of an identity crisis, sharing properties with both “quasars”, very bright objects powered by material falling onto a supermassive black hole, and “radio galaxies” containing jets of energetic particles that glow in radio waves, and are also powered by giant black holes. Half of the energy output from this black hole comes via jets mechanically pushing on the surrounding gas (radio-mode), and the other half from optical, UV and X-radiation originating in an accretion disk (quasar-mode). Astronomers suggest that the black hole may be in the process of flipping between these two states.
X-ray cavities located farther away from the center of the cluster, labeled as “outer cavities”, provide evidence for strong outbursts from the central black hole about a hundred million years ago (neglecting the light travel time to the cluster). This implies that the black hole may have been in a radio mode, with outbursts, about a hundred million years ago, then changed into a quasar mode, and then changed back into a radio mode.
It is thought that rapid cooling may have occurred in between these outbursts, triggering star formation in clumps and filaments throughout the central galaxy at a rate of about 610 solar masses per year. By comparison, only a couple of new stars form every year in our Milky Way galaxy. The extreme properties of the Phoenix cluster system are providing new insights into various astrophysical problems, including the formation of stars, the growth of galaxies and black holes, and the co-evolution of black holes and their environment.
A paper describing these results, led by Michael McDonald (Massachusetts Institute of Technology), has been accepted for publication in The Astrophysical Journal and is available online. NASA’s Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA’s Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra’s science and flight operations.
Image credit: X-ray: NASA/CXC/MIT/M. McDonald et al.; Optical: NASA/STScI
Last Updated: Sept. 30, 2015
Editor: Lee Mohon |
Samuel Birley Rowbotham, an English inventor, created the Flat Earth Society in the early part of the 1800s. (1)
The Flat Earth Society mission statement was “to promote and initiate discussion of Flat Earth theory as well as archive Flat Earth literature.”
Even though the Flat Earth Theory was abandoned during the fourth century BC when the Greeks hypothesized that the earth was a spherical shape, there were still many who held onto the belief that the earth was flat. (2)
Rowbotham based his society on the Bedford Level Experiment (3) that attempted to determine the shape of the earth.
“Observations carried out along a six-mile length of the Old Bedford River on the Bedford Level, Norfolk, England” were taken during both the 19th and 20th centuries. Rowbotham took the first test results that concluded the earth was flat and ignored the following tests that attempted to recreate those first results to validate the original experiment results.
All subsequent tests “firmly supported the established view that the earth is a sphere”.
Using the erroneous results of the first round of testing in the Bedford Level Experiment, Rowbotham invented the Zetetic Astronomy. This proposed form of astronomy contends that:
“The earth is a flat disk centered at the North Pole and bounded along its southern edge by a wall of ice, with the sun, moon, planets, and stars only a few hundred miles above the surface of the earth”. (4)
Transatlantic Migration of Rowbotham’s Society
The UK-based society published volumes of newsletters, leaflets and even books. The society and its theory of a flat earth were eagerly embraced by John Alexander Dowie of Zion, Illinois, where he founded the Christian Catholic Apostolic Church. “Dowie was a restorationist and sought to recover the ‘primitive condition’ of the Church”. He was a believer in divine healing and somehow Flat Earth theory fit nicely into his brand of religion. (5)
Unlike many organizations, the society didn’t die when Rowbotham did in 1884.
“Lady Elizabeth Blount established the Universal Zetetic Society.” Her goal was to continue spreading the Flat Earth Theory as a “Natural Cosmogony in confirmation of the Holy Scriptures, based on practical scientific investigation”.
In 1956, Samuel Shenton of Dover Britain picked up the torch once more by forming the International Flat Earth Society. Shenton led the society away from the original focus of religion-based theory and imbued the organization with his personal interest in “alternative science and technology”.
Shenton Claims Satellite Photos Fake
As astounding as it might be, even the photos from the first satellite put into orbit around the earth didn’t deter the society’s stance against a spherical earth. In fact, Shenton’s response to images taken of earth from the satellite that clearly depicted a globe-like shape was:
“It’s easy to see how a photograph like that could fool the untrained eye.”
When Shenton died in 1971, Ellis Hillman, the president of Shenton’s organization, added a large portion of Shenton’s library to the archives of the Science Fiction Foundation (which Hillman helped to found). The SF Foundation’s function is to “promote science fiction, and bring together those who read, write, study, teach, research or archive science fiction in Britain and the rest of the world”. (6) Hillman’s actions are most curious.
Shenton’s wife retained the other portion of the library and gave it to Shenton’s successor, Charles K. Johnson, a former airplane mechanic. By now the organization was known as the International Flat Earth Society and Johnson ran it from his Lancaster, California home.
Society Claims Apollo Moon Landings Hoaxed
In 1969, when the first Apollo Moon Mission was successful with astronauts actually walking on the moon, Shenton accused the government of hoaxing the entire mission.
Even though the event was televised and photos were published in magazines and newspapers, the society claimed the landing and subsequent ones were all filmed on a Hollywood sound stage. Many people today believe this conspiracy theory without, perhaps, realizing its origins. (7)
His successor, Johnson stated:
“We maintain that what is called ‘Science’ today and ‘scientists’ consist of the same old gang of witch doctors, sorcerers, tellers of tales, the ‘Priest-Entertainers’ for the common people. ‘Science’ consists of a weird, way-out occult concoction of gibberish theory-theology…unrelated to the real world of facts, technology and inventions, tall buildings and fast cars, airplanes and other Real and Good things in life; technology is not in any way related to the web of idiotic scientific theory. ALL inventors have been anti-science.” (4)
The modern version of the Flat Earth Society claims that the United States government and its agencies, especially NASA have misrepresented the truth to the public about science, technology and, of course, the actual shape of the planet.
This accusation raises the question: if the moon and other telescopically visible planets within our solar system appear spherical, why would the earth not conform to this common cosmic shape? How can the Flat Earth Theory stand up to watching a lunar eclipse, when the earth’s shadow falls over the moon?
Another simple method that clearly reveals a spherical earth is one the ancient astronomers figured out. The stars rise at different times and at varying heights above the horizon. This can only happen with a spherical planet.
And, perhaps the easiest test is to view the horizon from a tall building, mountaintop or airplane. Any of these points of view reveals an undeniable spherical curve to the horizon, and has to leave the average person wondering why anyone would still believe in a Flat Earth Theory.
References & Image Credits:
(1) Wikipedia: Samuel Rowbotham
(2) Wikipedia: Flat Earth Society
(3) Wikipedia: Bedford Level Experiment
(4) Flat Earth Society
(5) Wikipedia: John Alexander Dowie
(6) Science Fiction Foundation
(7) Wikipedia: Moon Landing Conspiracy Theories
(8) Glenn Beck
(9) Earth Not a Globe
(10) Big Education Ape |
Indigenous peoples of the Americas
[Infobox: Mayan women in Guatemala, 2012. Total population: approximately 52 million (not including mixed race populations in Latin America), with regions of significant populations throughout the Americas.]
The indigenous peoples of the Americas are the pre-Columbian inhabitants of North and South America and their descendants. Pueblos indígenas (indigenous peoples) is a common term in Spanish-speaking countries. Aborigen (aboriginal/native) is used in Argentina, while "Amerindian" is used in Guyana, but not commonly used in other countries. Indigenous peoples are commonly known in Canada as Aboriginal peoples, which include First Nations, Inuit, and Métis peoples. Indigenous peoples of the United States are known as Native Americans or American Indians and Alaskan Natives.
According to a prevailing New World migration model, migrations of humans from Eurasia to the Americas took place via Beringia, a land bridge which connected the two continents across what is now the Bering Strait. The most recent migration could have taken place around 12,000 years ago, with the earliest period remaining a matter of some unresolved contention. These early Paleo-Indians soon spread throughout the Americas, diversifying into many hundreds of culturally distinct nations and tribes. According to the oral histories of many of the indigenous peoples of the Americas, they have been living there since their genesis, described by a wide range of traditional creation accounts.
Application of the term "Indian" originated with Christopher Columbus, who thought that he had arrived in the East Indies, while seeking Asia. Later, the Americas came to be known as the "West Indies," a name still used today to refer to the Caribbean. The use of the names "Indies" and "Indian" has served to imply some kind of racial or cultural unity for the aboriginal peoples of the Americas. Once created, the unified "Indian" was codified in law, religion, and politics. The unitary idea of "Indians" was not originally shared by indigenous peoples, but many over the last two centuries have embraced the identity. The term "Indian" does not include Aleuts, Inuit, or Yupik peoples.
While some indigenous peoples of the Americas were traditionally hunter-gatherers—and many, especially in Amazonia, still are—many groups practiced aquaculture and agriculture. The impact of their agricultural endowment to the world is a testament to their time and work in reshaping and cultivating the flora indigenous to the Americas. Some societies depended heavily on agriculture while others practiced a mix of farming, hunting, and gathering. In some regions the indigenous peoples created monumental architecture, large-scale organized cities, chiefdoms, states, and empires.
Many parts of the Americas are still populated by indigenous Americans; some countries have sizable populations, especially Bolivia, Peru, Mexico, Guatemala, Belize, Colombia, Ecuador, and Greenland. At least a thousand different indigenous languages are spoken in the Americas. Some, such as Quechua languages, Aymara, Guaraní, Mayan languages, and Nahuatl, count their speakers in millions. Many also maintain aspects of indigenous cultural practices to varying degrees, including religion, social organization and subsistence practices. Some indigenous peoples still live in relative isolation from Western society, and a few are still counted as uncontacted peoples.
Migration into the continents
The specifics of Paleo-Indian migration to and throughout the Americas, including the exact dates and routes traveled, are subject to ongoing research and discussion. The traditional Western theory has been that these early migrants moved into the Beringia land bridge between eastern Siberia and present-day Alaska around 40,000—16,500 years ago, when sea levels were significantly lowered due to the Quaternary glaciation. These people are believed to have followed herds of now-extinct Pleistocene megafauna along ice-free corridors that stretched between the Laurentide and Cordilleran ice sheets. Another route proposed is that, either on foot or using primitive boats, they migrated down the Pacific Northwest coast to South America. Evidence of the latter would since have been covered by a sea level rise of hundreds of meters following the last ice age. Some recent DNA studies suggest additional migration from Europe around the northern fringe of the Atlantic, possibly either 36,000 to 23,000 years ago or 17,000 to 12,000 years ago. However, this is also attributed to admixture of Europeans into northern Asia before the Beringian migration. Recent genetic studies have shown that Paleolithic Europeans and Native Americans share a genetic founder population and that there is strong evidence that the "population that crossed the Bering Strait from Siberia into the Americas more than 15,000 years ago was likely related to the ancient population of Europe."
The time range of 40,000—16,500 years ago is a hot topic of debate and will be for years to come. The few agreements achieved to date are the origin from Central Asia, with widespread habitation of the Americas during the end of the last glacial period, or more specifically what is known as the late glacial maximum, around 16,000 — 13,000 years before present.
Stone tools, particularly projectile points and scrapers, are the primary evidence of the earliest human activity in the Americas. Crafted lithic flaked tools are used by archaeologists and anthropologists to classify cultural periods.
Pre-Columbian era
The Pre-Columbian era incorporates all period subdivisions in the history and prehistory of the Americas before the appearance of significant European and African influences on the American continents, spanning the time of the original settlement in the Upper Paleolithic to European colonization during the Early Modern period.
While technically referring to the era before Christopher Columbus's voyages of 1492 to 1504, in practice the term usually includes the history of American indigenous cultures until they were either conquered or significantly influenced by Europeans, even if this happened decades or even centuries after Columbus' initial landing. Pre-Columbian is used especially often in the context of the great indigenous civilizations of the Americas, such as those of Mesoamerica (the Olmec, the Toltec, the Teotihuacano, the Zapotec, the Mixtec, the Aztec, and the Maya) and the Andes (Inca, Moche, Chibcha, Cañaris).
Many pre-Columbian civilizations established characteristics and hallmarks which included permanent or urban settlements, agriculture, civic and monumental architecture, and complex societal hierarchies. Some of these civilizations had long faded by the time of the first permanent European and African arrivals (ca. late 15th–early 16th centuries), and are known only through oral history and archaeological investigations. Others were contemporary with this period, and are also known from historical accounts of the time. A few, such as the Maya, Olmec, Mixtec, and Nahua peoples, had their own written records. However, the European colonists of the time viewed such texts as heretical, and much was destroyed in Christian pyres. Only a few hidden documents remain today, leaving contemporary historians with glimpses of ancient culture and knowledge.
According to both indigenous American and European accounts and documents, American civilizations at the time of European encounter possessed many impressive accomplishments. For instance, the Aztecs built one of the most impressive cities in the world, Tenochtitlan, on the site of present-day Mexico City, with an estimated population of 200,000. American civilizations also made notable achievements in astronomy and mathematics. Inuit, Alaskan Native, and American Indian creation myths tell of a variety of originations of their respective peoples. Some were "always there" or were created by gods or animals, some migrated from a specified compass point, and others came from "across the ocean."
So far, the only verifiable site of "pre-Columbian" European settlement anywhere in the Americas, outside of Greenland, is L'Anse aux Meadows, located near the very northern tip of the Canadian island of Newfoundland. It was settled by the Norse around the end of the 10th century.
European colonization
The European colonization of the Americas forever changed the lives, bloodlines and cultures of the peoples of the continent. Scholars of the population history of American indigenous peoples hold that infectious disease exposure, displacement, and warfare diminished populations, with disease the most significant cause. The first indigenous group encountered by Columbus were the 250,000 Taínos of Hispaniola, who were the dominant culture in the Greater Antilles and the Bahamas. Within thirty years, about 70% of the Taínos had died. They had no immunity to European diseases, so outbreaks of measles and smallpox ravaged their population. Although the encomienda system nominally included measures such as religious education and protection from warring tribes, settlers increasingly ignored these obligations and continued to punish the Taínos for revolting against forced labour, and this mistreatment eventually helped provoke the last great Taíno rebellion.
Mistreated, the Taínos began to adopt suicidal behaviors, with women aborting or killing their infants and men jumping from cliffs or ingesting manioc, a violent poison. Eventually, a Taíno cacique named Enriquillo managed to hold out in the mountain range of Bahoruco for thirteen years, inflicting serious damage on the Spanish and Carib-held plantations and on their Indian auxiliaries. After hearing of the seriousness of the revolt, Emperor Charles V sent captain Francisco Barrionuevo to negotiate a peace treaty with the ever-increasing number of rebels. Two months later, in consultation with the Audiencia of Santo Domingo, Enriquillo was offered any part of the island to live in peace.
The Laws of Burgos of 1512–1513 were the first codified set of laws governing the behavior of Spanish settlers in America, particularly with regard to the native Indians. They forbade the maltreatment of natives and endorsed their conversion to Catholicism. The Spanish crown found it difficult to enforce these laws in a distant colony.
Reasons for the decline of the Native American populations are variously theorized to be epidemic diseases, conflicts with Europeans, and conflicts among warring tribes. Scholars now believe that, among the various contributing factors, epidemic disease was the overwhelming cause of the population decline of the American natives. After first contacts with Europeans and Africans, some believe that the death of 90 to 95% of the native population of the New World was caused by Old World diseases. Half the native population of Hispaniola in 1518 was killed by smallpox. Within a few years smallpox killed between 60% and 90% of the Inca population, with other waves of European disease weakening them further. Smallpox was only the first epidemic. Typhus (probably) in 1546, influenza and smallpox together in 1558, smallpox again in 1589, diphtheria in 1614, measles in 1618—all ravaged the remains of Inca culture. Smallpox had killed millions of native inhabitants of Mexico. Unintentionally introduced at Veracruz with the arrival of Pánfilo de Narváez on April 23, 1520, smallpox ravaged Mexico in the 1520s, possibly killing over 150,000 in Tenochtitlan alone (the heartland of the Aztec Empire), and aided in the victory of Hernán Cortés over the Aztec empire at Tenochtitlan (present-day Mexico City) in 1521.
Over the centuries, the Europeans had developed high degrees of immunity to these diseases, while the indigenous Americans had no such immunity. Europeans had been ravaged in their own turn by diseases such as bubonic plague and influenza that moved west from Asia into Europe. They were, in turn, more vulnerable to malaria when they went to parts of Africa and Asia.
The repeated outbreaks of influenza, measles and smallpox probably resulted in a decline of between one-half and two-thirds of the Aboriginal population of eastern North America during the first 100 years of European contact. In 1617–1619, smallpox reportedly killed 90% of the Native Americans in the Massachusetts Bay area. In 1633, in Plymouth, the Native Americans were exposed to smallpox because of contact with Europeans. As it had done elsewhere, the virus wiped out entire population groups of Native Americans. It reached Lake Ontario in 1636, and the lands of the Iroquois by 1679. During the 1770s, smallpox killed at least 30% of the West Coast Native Americans. Smallpox epidemics in 1780–1782 and 1837–1838 brought devastation and drastic population depletion among the Plains Indians. In 1832, the federal government of the United States established a smallpox vaccination program for Native Americans (the Indian Vaccination Act of 1832).
Later explorations of the Caribbean led to the discovery of the Arawak peoples of the Lesser Antilles. Only 500 had survived by the year 1550, and the culture was destroyed by 1650, though the bloodlines continued through the modern populace. In Amazonia, indigenous societies weathered centuries of colonization.
The Spaniards and other Europeans brought horses to the Americas. Some of these animals escaped and began to breed and increase their numbers in the wild. The re-introduction of the horse, extinct in the Americas for over 7,500 years, had a profound impact on Native American culture in the Great Plains of North America and of Patagonia in South America. By domesticating horses, some tribes had great success: they expanded their territories, exchanged many goods with neighboring tribes, and more easily captured game, especially bison.
Over the course of thousands of years, American indigenous peoples domesticated, bred and cultivated a large array of plant species. These species now constitute 50–60% of all crops in cultivation worldwide. In certain cases, the indigenous peoples developed entirely new species and strains through artificial selection, as was the case in the domestication and breeding of maize from wild teosinte grasses in the valleys of southern Mexico. Numerous such agricultural products retain native names in the English and Spanish lexicons.
The South American highlands were a center of early agriculture. Genetic testing of the wide variety of cultivars and wild species suggests that the potato has a single origin in the area of southern Peru, from a species in the Solanum brevicaule complex. Over 99% of all modern cultivated potatoes worldwide are descendants of a subspecies indigenous to south-central Chile, Solanum tuberosum ssp. tuberosum, which was cultivated there as long as 10,000 years ago. According to George Raudzens, "It is clear that in pre-Columbian times some groups struggled to survive and often suffered food shortages and famines, while others enjoyed a varied and substantial diet." The persistent drought around 850 AD coincided with the collapse of Classic Maya civilization, and the famine of One Rabbit (AD 1454) was a major catastrophe in Mexico.
Natives of North America began practicing farming approximately 4,000 years ago, late in the Archaic period of North American cultures. Technology had advanced to the point that pottery was becoming common, and the small-scale felling of trees became feasible. Concurrently, the Archaic Indians began using fire in a widespread manner. Intentional burning of vegetation was used to mimic the effects of natural fires that tended to clear forest understories. It made travel easier and facilitated the growth of herbs and berry-producing plants, which were important for both food and medicines.
In the Mississippi River valley, Europeans noted that Native Americans managed groves of nut and fruit trees as orchards near villages and towns, in addition to their gardens and agricultural fields. Wildlife competition could be reduced by understory burning. Further away, prescribed burning would have been used in forest and prairie areas.
Many crops first domesticated by indigenous Americans are now produced and/or used globally. Chief among these is maize or "corn", arguably the most important crop in the world. Other significant crops include cassava, chia, squash (pumpkins, zucchini, marrow, acorn squash, butternut squash), the pinto bean, Phaseolus beans including most common beans, tepary beans and lima beans, tomatoes, potatoes, avocados, peanuts, cocoa beans (used to make chocolate), vanilla, strawberries, pineapples, peppers (species and varieties of Capsicum, including bell peppers, jalapeños, paprika and chili peppers), sunflower seeds, rubber, brazilwood, chicle, tobacco, coca, manioc and some species of cotton.
Studies of contemporary indigenous environmental management, including agro-forestry practices among Itza Maya in Guatemala and hunting and fishing among the Menominee of Wisconsin, suggest that longstanding "sacred values" may represent a summary of sustainable millennial traditions.
Cultural practices in the Americas seem to have been mostly shared within geographical zones where otherwise unrelated peoples might adopt similar technologies and social organizations. An example of such a cultural area could be Mesoamerica, where millennia of coexistence and shared development between the peoples of the region produced a fairly homogeneous culture with complex agricultural and social patterns. Another well-known example could be the North American plains area, where until the 19th century, several different peoples shared traits of nomadic hunter-gatherers primarily based on buffalo hunting.
Writing systems
An independent origin and development of writing is counted among the many achievements and innovations of pre-Columbian American cultures. The Mesoamerican region produced several indigenous writing systems from the 1st millennium BCE onwards. What may be the earliest-known example in the Americas of an extensive text thought to be writing is the Cascajal Block. This tablet of Olmec hieroglyphs has been indirectly dated from ceramic shards found in the same context to approximately 900 BCE, around the time that Olmec occupation of San Lorenzo Tenochtitlán began to wane.
The Maya writing system (often called hieroglyphs from a superficial resemblance to the Ancient Egyptian writing) was a combination of phonetic symbols and logograms. It is most often classified as a logographic or (more properly) a logosyllabic writing system, in which syllabic signs play a significant role. It is the only pre-Columbian writing system known to completely represent the spoken language of its community. In total, the script has more than one thousand different glyphs, although a few are variations of the same sign or meaning, and many appear only rarely or are confined to particular localities. At any one time, no more than around five hundred glyphs were in use, some two hundred of which (including variations) had a phonetic or syllabic interpretation.
Aztec codices (singular codex) are books written by pre-Columbian and colonial-era Aztecs. These codices provide some of the best primary sources for Aztec culture. The pre-Columbian codices differ from European codices in that they are largely pictorial; they were not meant to symbolize spoken or written narratives. The colonial-era codices not only contain Aztec pictograms, but also Classical Nahuatl (in the Latin alphabet), Spanish, and occasionally Latin.
Music and art
Native American music in North America is almost entirely monophonic, but there are notable exceptions. Traditional Native American music often centers around drumming. Rattles, clappersticks, and rasps were also popular percussive instruments. Flutes were made of rivercane, cedar, and other woods. The tuning of these flutes is not precise and depends on the length of the wood used and the hand span of the intended player, but the finger holes are most often around a whole step apart and, at least in Northern California, a flute was not used if it turned out to have an interval close to a half step. The Apache fiddle is a single stringed instrument.
Music from indigenous peoples of Central Mexico and Central America often was pentatonic. Before the arrival of the Spaniards and other Europeans it was inseparable from religious festivities and included a large variety of percussion and wind instruments such as drums, flutes, sea snail shells (used as a kind of trumpet) and "rain" tubes. No remnants of pre-Columbian stringed instruments were found until archaeologists discovered a jar in Guatemala, attributed to the Maya of the Late Classic Era (600–900 CE), which depicts a stringed musical instrument which has since been reproduced. This instrument is astonishing in at least two respects. First, it is one of the very few string instruments known in the Americas prior to the introduction of European musical instruments. Second, when played, it produces a sound virtually identical to a jaguar's growl.
Visual arts by indigenous peoples of the Americas constitute a major category of world art. Contributions include pottery, paintings, jewellery, weavings, sculptures, basketry, carvings and beadwork. Because many artists were falsely posing as Native Americans, the United States passed the Indian Arts and Crafts Act of 1990, requiring artists to prove that they are enrolled in a state or federally recognized tribe.
Demography of contemporary populations
The following table provides estimates of the per-country populations of indigenous peoples of the Americas and of people with partial indigenous ancestry, each expressed as a percentage of the country's overall population. The total percentage obtained by adding both of these categories is also given.
Note: these categories are inconsistently defined and measured differently from country to country. Some are based on the results of population wide genetic surveys, while others are based on self-identification or observational estimation.
History and status by country
Argentina's indigenous population in 2005 was about 600,329 (1.6% of the total population); this figure includes 457,363 people who self-identified as belonging to an indigenous ethnic group and 142,966 who identified themselves as first-generation descendants of an indigenous people. The ten most populous indigenous peoples are the Mapuche (113,680 people), the Kolla (70,505), the Toba (69,452), the Guaraní (68,454), the Wichi (40,036), the Diaguita-Calchaquí (31,753), the Mocoví (15,837), the Huarpe (14,633), the Comechingón (10,863) and the Tehuelche (10,590). Smaller but important peoples are the Quechua (6,739), the Charrúa (4,511), the Pilagá (4,465), the Chané (4,376), and the Chorote (2,613). The Selknam (Ona) people are now virtually extinct in their pure form. The languages of the Diaguita, Tehuelche, and Selknam nations are now extinct or virtually extinct: the Cacán language (spoken by the Diaguita) died out in the 18th century and the Selknam language in the 20th century, whereas one Tehuelche language (Southern Tehuelche) is still spoken by a small handful of elderly people.
In Belize, mestizos (of mixed European and indigenous ancestry) number about 34 percent of the population; unmixed Maya make up another 10.6 percent (Kekchi, Mopan, and Yucatec). The Garifuna, who came to Belize in the 19th century from Saint Vincent and the Grenadines and are of mixed African, Carib, and Arawak ancestry, make up another 6% of the population.
In Bolivia, a 62% majority of residents over the age of 15 self-identify as belonging to an indigenous people, while another 3.7% grew up with an indigenous mother tongue yet do not self-identify as indigenous. Including both of these categories, and children under 15, some 66.4% of Bolivia's population was registered as indigenous in the 2001 Census. The largest indigenous ethnic groups are: Quechua, about 2.5 million people; Aymara, 2.0 million; Chiquitano, 181 thousand; Guaraní, 126 thousand; and Mojeño, 69 thousand. Some 124 thousand belong to smaller indigenous groups. The Constitution of Bolivia, enacted in 2009, recognizes 36 cultures, each with its own language, as part of a plurinational state. Others, including CONAMAQ (the National Council of Ayllus and Markas of Qollasuyu), draw ethnic boundaries within the Quechua- and Aymara-speaking population, resulting in a total of fifty indigenous peoples native to Bolivia.
Large numbers of Bolivian highland peasants retained indigenous languages, culture, customs, and communal organization throughout the Spanish conquest and the post-independence period. They mobilized to resist various attempts at the dissolution of communal landholdings and used legal recognition of "empowered caciques" to further communal organization. Indigenous revolts took place frequently until 1953. While the National Revolutionary Movement government that began in 1952 discouraged self-identification as indigenous (reclassifying rural people as campesinos, or peasants), renewed ethnic and class militancy re-emerged in the Katarista movement beginning in the 1970s. Lowland indigenous peoples, mostly in the east, entered national politics through the 1990 March for Territory and Dignity organized by the CIDOB confederation. That march successfully pressured the national government to sign ILO Convention 169 and to begin a still-ongoing process of recognizing and titling indigenous territories. The 1994 Law of Popular Participation granted "grassroots territorial organizations" recognized by the state certain rights to govern local areas.
Radio and some television programming is produced in Quechua and Aymara. The constitutional reform of 1997 for the first time recognized Bolivia as a multilingual, pluri-ethnic society and introduced education reform. In 2005, for the first time in the country's history, an Aymara of indigenous descent, Evo Morales, was elected president.
Morales began work on his “indigenous autonomy” policy, which he launched in the eastern lowlands on 3 August 2009, making Bolivia the first country in the history of South America to declare the right of indigenous people to govern themselves. Speaking in Santa Cruz Department, the president called it "a historic day for the peasant and indigenous movement", saying that he might make errors but would "never betray the fight started by our ancestors and the fight of the Bolivian people". A vote on further autonomy was to take place in referendums expected to be held in December 2009. The issue has divided the country.
Indigenous peoples of Brazil make up 0.4% of Brazil's population, or about 700,000 people, even though millions of Brazilians have some indigenous ancestry. Indigenous peoples are found in the entire territory of Brazil, although the majority of them live in Indian reservations in the North and Centre-Western part of the country. On 18 January 2007, FUNAI reported that it had confirmed the presence of 67 different uncontacted tribes in Brazil, up from 40 in 2005. With this addition Brazil has now overtaken the island of New Guinea as the country having the largest number of uncontacted tribes.
In a 2007 news story, The Washington Post reported, "As has been proved in the past when uncontacted tribes are introduced to other populations and the microbes they carry, maladies as simple as the common cold can be deadly. In the 1970s, 185 members of the Panara tribe died within two years of discovery after contracting such diseases as flu and chickenpox, leaving only 69 survivors."
Aboriginal peoples in Canada comprise the First Nations, Inuit and Métis; the descriptors "Indian" and "Eskimo" are falling into disuse. Hundreds of Aboriginal nations evolved trade, spiritual and social hierarchies. The Métis culture of mixed ancestry originated in the mid-17th century when First Nations and Inuit people married European settlers. The Inuit had more limited interaction with European settlers during that early period. Various laws, treaties, and legislation have been enacted between European immigrants and First Nations across Canada. The Aboriginal right to self-government provides First Nations communities with the opportunity to manage their own historical, cultural, political, health-care and economic affairs.
Although not without conflict, European Canadians' early interactions with First Nations and Inuit populations were relatively peaceful compared with the experience of native peoples in the United States. Combined with relatively late economic development in many regions, this peaceful history has allowed Canadian Indigenous peoples to have a relatively strong influence on the national culture while preserving their own identity. National Aboriginal Day recognises the cultures and contributions of Aboriginal peoples of Canada. There are currently over 600 recognized First Nations governments or bands, encompassing 1,172,790 people (2006 Census) spread across Canada with distinctive Aboriginal cultures, languages, art, and music.
According to the 2002 Census, 4.6% of the Chilean population, including the Rapanui of Easter Island, was indigenous, although most show varying degrees of mixed ancestry. Many are descendants of the Mapuche and live in Santiago, Araucanía and the lake district. During the Arauco War the Mapuche successfully resisted conquest for the first 300–350 years of Spanish rule. Relations with the new Chilean Republic were good until the Chilean state decided to occupy their lands. During the Occupation of Araucanía the Mapuche surrendered to the country's army in the 1880s. Their land was then opened to settlement by Chileans and Europeans. Conflict over Mapuche land rights continues to the present day.
Other groups include the Aymara, who live mainly in the Arica-Parinacota and Tarapacá regions and most of whose kin live in Bolivia and Peru, and the Alacalufe survivors, who now reside mainly in Puerto Edén.
A minority today within Colombia's overwhelmingly Mestizo and Afro-Colombian population, Colombia's indigenous peoples nonetheless encompass at least 85 distinct cultures and more than 1,378,884 people. A variety of collective rights for indigenous peoples are recognized in the 1991 Constitution.
One of these is the Muisca culture, a subset of the larger Chibcha ethnic group, famous for their use of gold, which led to the legend of El Dorado. At the time of the Spanish conquest, the Chibchas were the largest native civilization between the Incas and the Aztecs.
Costa Rica
There are over 60,000 inhabitants of Native American origins, representing 1.5% of the population. Most of them live in secluded reservations, distributed among eight ethnic groups: Quitirrisí (In the Central Valley), Matambú or Chorotega (Guanacaste), Maleku (Northern Alajuela), Bribri (Southern Atlantic), Cabécar (Cordillera de Talamanca), Guaymí (Southern Costa Rica, along the Panamá border), Boruca (Southern Costa Rica) and Térraba (Southern Costa Rica).
These native groups are characterized by their work in wood, such as masks, drums and other artistic figures, as well as fabrics made of cotton.
Their subsistence is based on agriculture, with corn, beans and plantains as the main crops.
Ecuador was the site of many indigenous cultures and civilizations of different proportions. An early sedentary culture, known as the Valdivia culture, developed in the coastal region, while the Caras and the Quitus unified to form an elaborate civilization that ended with the founding of what is now the capital, Quito. The Cañaris near Cuenca were the most advanced, and most feared by the Inca, due to their fierce resistance to the Incan expansion. Their architectural remains were later destroyed by the Spaniards and the Incas.
Approximately 96.4% of Ecuador's indigenous population are Highland Quichuas living in the valleys of the Sierra region. Primarily consisting of the descendants of Incans, they are Kichwa speakers and include the Caranqui, the Otavalos, the Cayambi, the Quitu-Caras, the Panzaleo, the Chimbuelo, the Salasacan, the Tugua, the Puruhá, the Cañari, and the Saraguro. Linguistic evidence suggests that the Salasacan and the Saraguro may have been the descendants of Bolivian ethnic groups transplanted to Ecuador as mitimaes.
Coastal groups, including the Awá, the Chachi, and the Tsáchila, make up 0.24 percent of the indigenous population, while the remaining 3.35 percent live in the Oriente and consist of the Oriente Kichwa (the Canelo and the Quijos), the Shuar, the Huaorani, the Siona-Secoya, the Cofán, and the Achuar.
In 1986, indigenous people formed the first "truly" national political organization. The Confederation of Indigenous Nationalities of Ecuador (CONAIE) has been the primary political institution of the Indigenous since then and is now the second largest political party in the nation. It has been influential in national politics, contributing to the ouster of presidents Abdalá Bucaram in 1997 and Jamil Mahuad in 2000.
El Salvador
Much of El Salvador was home to the Pipil, the Lenca, and the Maya. The Pipil lived in western El Salvador, spoke Nahuat, and had many settlements there, most notably the Señorío of Cuzcatlán. The Pipil had no treasure but held land with rich and fertile soil, good for farming. This both disappointed and attracted the Spaniards, who were shocked not to find gold or jewels in El Salvador as they had in lands such as Guatemala or Mexico, but who later learned of the fertile land El Salvador had to offer and attempted to conquer it. Notable Mesoamerican indigenous leaders who rose militarily against the Spanish include Prince Atonal and Atlacatl of the Pipil people in central El Salvador and Princess Antu Silan Ulap of the Lenca people in eastern El Salvador, who saw the Spanish not as gods but as barbaric invaders. After fierce battles, the Pipil repelled the Spanish army led by Pedro de Alvarado and its Mexican Indian allies (the Tlaxcala), sending them back to Guatemala for a time. The Pipil resisted further Spanish attacks, but after the Spanish reinforced their army with Guatemalan Indian allies they were able to conquer Cuzcatlán. Later, after many struggles, the Spanish were also able to conquer the Lenca people. Eventually the Spaniards had children with Pipil and Lenca women, resulting in the mestizo population that would later become the majority of the Salvadoran people. Today many Pipil and other indigenous populations live in small towns of El Salvador such as Izalco, Panchimalco, Sacacoyo, and Nahuizalco.
In Guatemala, pure Maya account for some 40 percent of the population; although around 40 percent of the population speaks an indigenous language, those tongues (of which there are more than 20) enjoy no official status. White and Mestizo people (of mixed European and indigenous ancestry) make up 59.4% of the population. The area of Livingston, Guatemala is highly influenced by the Caribbean, and its population includes a combination of Mestizos and Garifuna people.
In Honduras, about 5 percent of the population are of full-blooded indigenous descent, but up to 80 percent of Hondurans are mestizo or part-indigenous with European admixture, and about 10 percent are of indigenous and/or African descent. The main concentrations of indigenous people are in the rural westernmost areas facing Guatemala, along the Caribbean Sea coastline, and on the Nicaraguan border. The largest indigenous groups are the Lenca, the Miskito (to the east), the Maya, the Pech, the Sumo, and the Tolupan.
The territory of modern-day Mexico was home to numerous indigenous civilizations prior to the arrival of the Spanish conquistadores: The Olmecs, who flourished from between 1200 BCE to about 400 BCE in the coastal regions of the Gulf of Mexico; the Zapotecs and the Mixtecs, who held sway in the mountains of Oaxaca and the Isthmus of Tehuantepec; the Maya in the Yucatán (and into neighbouring areas of contemporary Central America); the P'urhépecha or Tarascan in present day Michoacán and surrounding areas, and the Aztecs/Mexica, who, from their central capital at Tenochtitlan, dominated much of the centre and south of the country (and the non-Aztec inhabitants of those areas) when Hernán Cortés first landed at Veracruz.
In contrast to what was the general rule in the rest of North America, the history of the colony of New Spain was one of racial intermingling (mestizaje). Mestizos quickly came to account for a majority of the colony's population; however, significant numbers and communities of indígenas (as the native peoples are now known) survive to the present day. The CDI identifies 62 indigenous groups in Mexico, each with a unique language.
In the states of Chiapas and Oaxaca and in the interior of the Yucatán peninsula the majority of the population is indigenous. Large indigenous minorities, including Aztecs or Nahua, P'urhépechas, Mazahua, Otomi, and Mixtecs are also present in the central regions of Mexico. In Northern Mexico indigenous people are a small minority.
The "General Law of Linguistic Rights of the Indigenous Peoples" grants all indigenous languages spoken in Mexico, regardless of the number of speakers, the same validity as Spanish in all territories in which they are spoken, and indigenous peoples are entitled to request some public services and documents in their native languages. Along with Spanish, the law has granted them — more than 60 languages — the status of "national languages". The law includes all indigenous languages of the Americas regardless of origin; that is, it includes the indigenous languages of ethnic groups non-native to the territory. As such the National Commission for the Development of Indigenous Peoples recognizes the language of the Kickapoo, who immigrated from the United States, and recognizes the languages of the Guatemalan indigenous refugees. The Mexican government has promoted and established bilingual primary and secondary education in some indigenous rural communities. Nonetheless, of the indigenous peoples in Mexico, only about 67% of them (or 5.4% of the country's population) speak an indigenous language and about a sixth do not speak Spanish (1.2% of the country's population).
The indigenous peoples in Mexico have the right of free determination under the second article of the constitution. According to this article the indigenous peoples are granted:
- the right to decide the internal forms of social, economic, political and cultural organization;
- the right to apply their own normative systems of regulation as long as human rights and gender equality are respected;
- the right to preserve and enrich their languages and cultures;
- the right to elect representatives before the municipal council in which their territories are located;
amongst other rights.
The Miskito are a native people in Central America. Their territory extended from Cape Camarón, Honduras, to Rio Grande, Nicaragua along the Mosquito Coast. There is a native Miskito language, but large groups speak Miskito Coast Creole, Spanish, Rama and other languages. The Creole English came about through frequent contact with the British who colonized the area. Many are Christians.
Traditional Miskito society was highly structured with a defined political structure. There was a king, but he did not have total power. Instead, the power was split between himself, a governor, a general, and by the 1750s, an admiral. Historical information on kings is often obscured by the fact that many of the kings were semi-mythical.
Indigenous people make up around 30% of Peru's population. Native Peruvian traditions and customs have shaped the way Peruvians live and see themselves today. Cultural citizenship—or what Renato Rosaldo has called "the right to be different and to belong, in a democratic, participatory sense" (1996:243)—is not yet very well developed in Peru. This is perhaps nowhere more apparent than in the country's Amazonian regions, where indigenous societies continue to struggle against state-sponsored economic abuses, cultural discrimination, and pervasive violence.
United States
Indigenous peoples in what is now the contiguous United States are commonly called "American Indians", or simply "Indians" domestically, but are also referred to as "Native Americans". In Alaska, indigenous peoples, which include American Indians, Aleut, Inuit, and Yupik peoples, are referred to collectively as Alaska Natives.
Native Americans and Alaska Natives make up 2 percent of the population. In the 2010 census 2.9 million people identified as American Indian and Alaska Native alone, and 5.2 million people identified as American Indian and Alaska Native, either alone or in combination with one or more other races. 1.8 million are recognized as registered tribal members. Tribes have established their own criteria for membership, which are often based on blood quantum, lineal descent, or residency. A minority of U.S. Native Americans live in land units called Indian reservations. Some southwestern U.S. tribes, such as the Kumeyaay, Cocopa, Pascua Yaqui and Apache span both sides of the US–Mexican border. Haudenosaunee people have the legal right to freely cross the US–Canadian border. Tlingit, Haida, Tsimshian, Inuit, Blackfeet, Nakota, Cree, Anishinaabe, Huron, Lenape, Mi'kmaq, Penobscot, and Haudenosaunee, among others live in both Canada and the US.
Most Venezuelans have some indigenous heritage, but the indigenous population makes up only around 2% of the total population. They speak around 29 different languages and many more dialects, but some of the ethnic groups are very small and their languages are in danger of becoming extinct in the coming decades. The most important indigenous groups are the Ye'kuana, the Wayuu, the Pemon and the Warao. The most advanced native people to have lived in present-day Venezuela are thought to have been the Timoto-Cuica, who mainly lived in the Venezuelan Andes. In total, it is estimated that there were between 350,000 and 500,000 indigenous inhabitants, the most densely populated area being the Andean region (home of the Timoto-Cuica), thanks to the advanced agricultural techniques used there.
The 1999 constitution of Venezuela gives them special rights, although the vast majority of them still live in very critical conditions of poverty. The largest groups receive some basic primary education in their languages.
Other parts of the Americas
Indigenous peoples make up a majority of the population in Bolivia and a large proportion of the population in Peru, and are a significant element in most other former Spanish colonies. Exceptions include Uruguay, whose native Charrúa population was virtually wiped out. At least four indigenous American languages (Quechua and Aymara in Peru and Bolivia, Guaraní in Paraguay, and Greenlandic in Greenland) are recognized as official languages.
Native American name controversy
The Native American name controversy is an ongoing dispute over the acceptable ways to refer to the indigenous peoples of the Americas and to broad subsets thereof, such as those living in a specific country or sharing certain cultural attributes. Once-common terms like "Indian" remain in use, despite the introduction of terms such as "Native American" and "Amerindian" during the latter half of the 20th century.
Rise of indigenous movements
In recent years, there has been a rise of indigenous movements in the Americas, mainly in South America. These rights-driven groups organize themselves in order to achieve a degree of self-determination and the preservation of their cultures. Organizations like the Coordinator of Indigenous Organizations of the Amazon River Basin and the Indian Council of South America are examples of movements that are crossing national borders in order to obtain rights for Amazonian indigenous populations everywhere. Similar movements for indigenous rights can also be seen in Canada and the United States, with movements like the International Indian Treaty Council and the accession of native Indian groups to the Unrepresented Nations and Peoples Organization.
There has also been a recognition of indigenous movements on an international scale, with the United Nations adopting the Declaration on the Rights of Indigenous Peoples, despite dissent from the stronger countries of the Americas.
In Colombia, various indigenous groups protested the denial of their rights. People organized a march in Cali in October 2008 to demand the government live up to promises to protect indigenous lands, defend the indigenous against violence, and reconsider the free trade pact with the United States.
Legal prerogative
With the rise to power of new governments in Venezuela, Ecuador, Paraguay, and especially Bolivia, where Evo Morales became the country's first president of indigenous descent, the indigenous movement gained a strong foothold.
Representatives from indigenous and rural organizations from major South American countries, including Bolivia, Ecuador, Colombia, Chile and Brazil, started a forum in support of Morales' legal process of change. The meeting condemned what it described as plans by a European "foreign power elite" to destabilize the country. The forum also expressed solidarity with Morales and his economic and social changes in the interest of historically marginalized majorities and, in what participants saw as a blow to the US-backed elite, questioned US interference through diplomats and NGOs. The forum was suspicious of plots against Bolivia and other countries, including Cuba, Venezuela, Ecuador, Paraguay and Nicaragua.
The forum rejected what it called the violent methods used by regional civic leaders from the so-called "Crescent" departments in Bolivia to impose their autonomy statutes, applauded the decision to expel the US ambassador to Bolivia, and reaffirmed the sovereignty and independence of the presidency. Among others, representatives of CONAIE, the National Indigenous Organization of Colombia, the Chilean Council of All Lands, and the Brazilian Landless Movement participated in the forum.
Genetics
The genetic history of indigenous peoples of the Americas primarily focuses on human Y-chromosome DNA haplogroups and human mitochondrial DNA haplogroups. Y-DNA is passed solely along the patrilineal line, from father to son, while mtDNA is passed down the matrilineal line, from mother to offspring of both sexes. Neither recombines, and thus Y-DNA and mtDNA change only by chance mutation at each generation, with no intermixture of the parents' genetic material. Autosomal ("atDNA") markers are also used, but differ from mtDNA and Y-DNA in that they recombine and so reflect contributions from both parents. AtDNA is generally used to measure the average continent-of-ancestry genetic admixture across the entire human genome and in related isolated populations.
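The admixture estimation mentioned above can be illustrated with a toy calculation. The following Python sketch is not drawn from any of the studies cited in this article; the marker allele frequencies, the individual's genotypes, and the simple least-squares grid search are all invented for illustration. Real ancestry inference uses many thousands of markers and dedicated statistical software, but the basic idea of modelling an individual's autosomal markers as a mixture of reference populations is the same.

```python
# Toy admixture estimate: fit an individual's genotypes as a mixture of two
# hypothetical reference populations. All numbers below are made up.

# Hypothetical reference allele frequencies at five biallelic markers.
freq_pop_a = [0.10, 0.80, 0.30, 0.60, 0.20]   # reference panel A (hypothetical)
freq_pop_b = [0.70, 0.20, 0.90, 0.10, 0.60]   # reference panel B (hypothetical)

# Hypothetical individual's genotypes, coded as the fraction of the
# alternate allele carried at each marker: 0.0, 0.5, or 1.0.
genotypes = [0.5, 0.5, 0.5, 0.5, 0.5]

def admixture_proportion(geno, fa, fb, steps=1000):
    """Grid-search the mixing weight w (ancestry from population A) that best
    explains the genotypes as w*fa + (1-w)*fb, by least squares."""
    best_w, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        w = i / steps
        err = sum((g - (w * a + (1 - w) * b)) ** 2
                  for g, a, b in zip(geno, fa, fb))
        if err < best_err:
            best_w, best_err = w, err
    return best_w

w = admixture_proportion(genotypes, freq_pop_a, freq_pop_b)
print(f"Estimated ancestry from population A: {w:.2f}")  # about 0.52 for these toy numbers
```

For these made-up inputs the best-fitting weight is roughly 0.52, i.e., approximately equal ancestry from both reference panels; with real data the same least-squares idea is applied across far more markers and more than two reference populations.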
The genetic pattern indicates indigenous peoples of the Americas experienced two very distinctive genetic episodes; first with the initial-peopling of the Americas, and secondly with European colonization of the Americas. The former is the determinant factor for the number of gene lineages, zygosity mutations and founding haplotypes present in today's indigenous peoples of the Americas populations.
Human settlement of the New World occurred in stages from the Bering Sea coastline, with an initial 15,000-to-20,000-year layover on Beringia for the small founding population. The micro-satellite diversity and distribution of the Y lineage specific to South America indicate that certain indigenous peoples of the Americas have been isolated since the initial colonization of the region. The Na-Dené, Inuit and Indigenous Alaskan populations exhibit haplogroup Q (Y-DNA) mutations, but are distinct from other indigenous peoples of the Americas in various mtDNA and atDNA mutations. This suggests that the earliest migrants into the northern extremes of North America and Greenland derived from later migrant populations.
Scientific evidence links indigenous Americans to Asian peoples, specifically eastern Siberian populations. Indigenous peoples of the Americas have been linked to North Asian populations by linguistic factors, the distribution of blood types, and in genetic composition as reflected by molecular data, such as DNA.
See also
- Classification of indigenous peoples of the Americas
- Alaska Natives
- History of the west coast of North America
- Hyphenated American
- Indigenous arts of the Americas
- Indigenous languages of the Americas
- Indigenous Movements in the Americas
- Indigenous rights
- List of American Inuit
- List of Greenlandic Inuit
- List of indigenous artists of the Americas
- List of indigenous people of the Americas
- List of traditional territories of the indigenous peoples of North America
- List of writers from peoples indigenous to the Americas
- Native American Languages Act of 1990
- Native American religion
- Native Hawaiians
- Pacific Islander
- Population history of American indigenous peoples
- Uncontacted peoples
- "Proyecciones de indígenas de México y de las entidades federativas 2000-2010". Comisión Nacional para el Desarrollo de los Pueblos Indígenas. 2010. Retrieved 2013-4-11.
- "CIA, The World Factbook Peru" (PDF). Retrieved 2011-07-12.
- "CIA - The World Factbook". Cia.gov. Retrieved 2011-02-23.
- "CIA - The World Factbook". Cia.gov. Retrieved 2011-02-23.
- United States Census Bureau. The American Indian and Alaska Native Population: 2010
- DANE 2005 National Census
- Canada 2006 Census
- "Brazil urged to protect Indians". BBC News. 2005-03-30.
- 2002 Chilean Census
- INDEC: Encuesta Complementaria de Pueblos Indígenas (ECPI) 2004 - 2005
- CIA - The World Factbook - Honduras
- 2005 Census
- "CIA - The World Factbook". Cia.gov. Retrieved 2011-02-23.
- "Una comunidad indígena salvadoreña pide su reconocimiento constitucional en el país". soitu.es. Retrieved 2011-02-23.
- "Costa Rica: Ethnic groups". Cia.gov. Retrieved 2010-12-21.
- "Terminology." Survival International. Retrieved 30 March 2012. "Aborigen" Diccionario de la Real Academia Española. Retrieved 8 February 2012.
- "Terminology". Indian and Northern Affairs Canada. Retrieved 2009-11-11. "The Canadian Constitution recognizes three groups of Aboriginal people — Indians (First Nations), Métis and Inuit. These separate peoples have unique heritages, languages, cultural practices, and spiritual beliefs"
- "Terminology of First Nations Native, Aboriginal and Indian" (PDF). the Office of the Aboriginal Advisor for Aboriginals. Retrieved 2009-11-11. "Native is a word similar in meaning to Aboriginal. Native Peoples or First peoples is a collective term to describe the descendants of the original peoples of North America"
- Jacobs, James Q. (2001). "The Paleoamericans: Issues and Evidence Relating to the Peopling of the New World". Anthropology and Archaeology Pages. jqjacobs.net. Retrieved 14 September 2009.
- Jacobs, James Q. (2002). "Paleoamerican Origins: A Review of Hypotheses and Evidence Relating to the Origins of the First Americans". Anthropology and Archaeology Pages. jqjacobs.net. Retrieved 14 September 2009.
- Wilton, David (2004-12-02). Word myths: debunking linguistic urban legends. Oxford University Press, USA. p. 163. ISBN 978-0-19-517284-3. Retrieved 2011-07-03.
- Adams, Cecil (2001-10-25). "Does "Indian" derive from Columbus's description of Native Americans as "una gente in Dios"?". The Straight Dope. Retrieved 2011-07-03.
- Zimmer, Ben (2009-10-12). "The Biggest Misnomer of All Time?". VisualThesaurus.
- Hoxie, Frederick E. (1996). Encyclopedia of North American Indians. Houghton Mifflin Harcourt. p. 568. ISBN 978-0-395-66921-1.
- Herbst, Philip (1997). The Color of Words: An Encyclopaedic Dictionary of Ethnic Bias in the United States. Intercultural Press. p. 116. ISBN 978-1-877864-97-1.
- Gómez-Moriana, Antonio (1993-05-12). "The Emerging of a Discursive Instance:Columbus and the invention of the "Indian"". Discourse Analysis as Sociocriticism : The Spanish Golden Age. University Of Minnesota Press. pp. 124–132. ISBN 978-0-8166-2073-9. Retrieved 2011-07-04.
- Mann, Charles C. (2005). 1491: New Revelations of the Americas Before Columbus. New York: Knopf Publishing Group. ISBN 1-4000-4006-X. OCLC 56632601.
- Göran Burenhult: Die ersten Menschen, Weltbild Verlag, 2000. ISBN 3-8289-0741-5
- "Atlas of the Human Journey-The Genographic Project". National Geographic Society. 1996-2009. Retrieved 2009-10-06.
- Wells, Spencer; Read, Mark (2002). The Journey of Man - A Genetic Odyssey (Digitised online by Google books). Random House. pp. 138–140. ISBN 0-8129-7146-9. Retrieved 2009-11-21.
- "Introduction". Government of Canada. Parks Canada. 2009. Retrieved 2010-01-09. "Canada's oldest known home is a cave in Yukon occupied not 12,000 years ago like the U.S. sites, but at least 20,000 years ago"
- "Pleistocene Archaeology of the Old Crow Flats". Vuntut National Park of Canada. 2008. Retrieved 2010-01-10. "However, despite the lack of this conclusive and widespread evidence, there are suggestions of human occupation in the northern Yukon about 24,000 years ago, and hints of the presence of humans in the Old Crow Basin as far back as about 40,000 years ago."
- "Jorney of mankind". Brad Shaw Foundation. Retrieved 2009-11-17.
- Fitzhugh, Drs. William; Goddard, Ives; Ousley, Steve; Owsley, Doug; Stanford., Dennis. "Paleoamerican". Smithsonian Institution Anthropology Outreach Office. Retrieved 2009-01-15.
- "The peopling of the Americas: Genetic ancestry influences health". Scientific American. Retrieved 2009-11-17.
- "Alternate Migration Corridors for Early Man in North America". American Antiquity, Vol. 44, No. 1 (Jan., 1979), p2. Retrieved 2009-11-17.
- "68 Responses to "Sea will rise ‘to levels of last Ice Age’"". Center for Climate Systems Research, Columbia University. Retrieved 2009-11-17.
- Villiger, Maggie. "Coming into America: Tracing the Genes". Scientific American Frontiers. PBS. Retrieved November 30, 2012.
- "Native Americans and Northern Europeans More Closely Related Than Previously Thought". Science Daily. 2012-11-30. Retrieved 2012-12-03.
- "A single and early migration for the peopling of the Americas supported by mitochondrial DNA sequence data". The National Academy of Sciences of the US. National Academy of Sciences. Retrieved 2009-10-10.
- "Method and Theory in American Archaeology" (Digitised online by Questia Media). Gordon Willey and Philip Phillips. University of Chicago. 1958. Retrieved 2009-11-20.
- "Method and Theory in American Archaeology" (Digitised online by Questia Media). Gordon Willey and Philip Phillips. University of Chicago. 1958. Retrieved 2009-11-20.
- Fernández-Armesto, Felipe (1987). Before Columbus: Exploration and Colonisation from the Mediterranean to the Atlantic: 1229-1492. New studies in medieval history series. Basingstoke, Hampshire: Macmillan Education. ISBN 0-333-40382-7. OCLC 20055667.
- Sorenson, John L.; and Carl L. Johannessen (2006). "Biological evidence for pre-Columbian transoceanic voyages". In Victor H. Mair (ed.). Contact and Exchange in the Ancient World. Perspectives on the global past series. Honolulu: University of Hawaiʻi Press. pp. 238–297. ISBN 0-8248-2884-4. OCLC 62896389.
- Wright, Ronald (2005). Stolen Continents: 500 Years of Conquest and Resistance in the Americas (1st Mariner Books ed.). Boston, MA: Houghton Mifflin. ISBN 0-618-49240-2. OCLC 57511483.
- Richard Erdoes, Alfonso Ortiz, (Eds.) "American Indian Myths and Legends." Pantheon, 1985.
- "Native Americans of North America", Microsoft Encarta Online Encyclopedia 2006, Trudy Griffin-Pierce. Retrieved September 14, 2006. Archived 2009-11-01.
- "Espagnols-Indiens: le choc des civilisations" in L'Histoire, n°322, July–August 2007, pp.14–21
- "Smallpox Through History". Archived from the original on 2009-10-31.
- Junius P. Rodriguez (2007). Encyclopedia of slave resistance and rebellion, Volume 1. ISBN 978-0-313-33272-2. Retrieved 1 July 2010.
- David M. Traboulay (1994-09). Columbus and Las Casas: the conquest and Christianization of America, 1492-1566. ISBN 978-0-8191-9642-2. Retrieved 1 July 2010.
- "Laws of Burgos, 1512-1513". Faculty.smu.edu. Retrieved 2010-05-23.
- Cook, p. 1.
- "BBC Smallpox: Eradicating the Scourge". Bbc.co.uk. 2009-11-05. Retrieved 2010-05-23.
- "The Story Of... Smallpox – and other Deadly Eurasian Germs". Pbs.org. Retrieved 2010-05-23.
- "American Indian Epidemics". Kporterfield.com. Retrieved 2010-05-23.
- "Smallpox: The Disease That Destroyed Two Empires". Allicinfacts.com. Retrieved 2010-05-23.
- "Epidemics". Libby-genealogy.com. 2009-04-30. Retrieved 2010-05-23.
- American plague, New Scientist
- Oaxaca
- Smallpox's history in the world
- "Stacy Goodling, "Effects of European Diseases on the Inhabitants of the New World"". Millersville.edu. Archived from the original on May 10, 2008. Retrieved 2010-05-23.
- "Aboriginal Distributions 1630 to 1653". Natural Resources Canada.
- "David A. Koplow, '' Smallpox: The Fight to Eradicate a Global Scourge''". Ucpress.edu. Retrieved 2011-02-23.
- Dutch Children's Disease Kills Thousands of Mohawks
- W.B. Spaulding. "Smallpox". Thecanadianencyclopedia.com. Retrieved 2010-05-23.
- "Iroquois". Fourdir.com. Retrieved 2010-05-23.
- Lange, Greg (2003-01-23). "Smallpox epidemic ravages Native Americans on the northwest coast of North America in the 1770s". Historylink.org. Retrieved 2010-05-23.
- "The first smallpox epidemic on the Canadian Plains: In the fur-traders' words". Pubmedcentral.nih.gov. Retrieved 2010-05-23.
- "Mountain Man Plain Indian Fur Trade". Thefurtrapper.com. Retrieved 2010-05-23.
- "Lewis Cass and the Politics of Disease: The Indian Vaccination Act of 1832". Muse.jhu.edu. Retrieved 2010-05-23.
- "Wicazo Sa Review: Vol. 18, No. 2, The Politics of Sovereignty (Autumn, 2003), pp. 9–35". Links.jstor.org. Retrieved 2010-05-23.
- Fineberg, Gail. "'500 Years of Brazil's Discovery'". Loc.gov. Retrieved 2010-05-23.
- "Brazil urged to protect Indians". BBC News. 2005-03-30. Retrieved 2010-05-23.
- See Varese (2004), as reviewed in Dean (2006).
- Ancient Horse (Equus cf. E. complicatus), The Academy of Natural Sciences, Thomas Jefferson Fossil Collection, Philadelphia, PA, (See: species Equus scotti) Others died out at the end of the last ice age with other megafauna.
- ""Native Americans: The First Farmers." ''AgExporter'' October 1, 1999". Allbusiness.com. Retrieved 2010-05-23.
- Spooner, DM; et al. (2005). "A single domestication for potato based on multilocus amplified fragment length polymorphism genotyping". PNAS 102 (41): 14694–99. doi:10.1073/pnas.0507400102. PMC 1253605. PMID 16203994. Lay summary
- Miller, N (2008-01-29). "Using DNA, scientists hunt for the roots of the modern potato". American Association for the Advancement of Science. Retrieved 2008-09-10.
- Solis, JS; et al. (2007). "Molecular description and similarity relationships among native germplasm potatoes (Solanum tuberosum ssp. tuberosum L.) using morphological data and AFLP markers". Electronic Journal of Biotechnology 10 (3): 0. doi:10.2225/vol10-issue3-fulltext-14.
- John Michael Francis (2005). Iberia and the Americas. ABC-CLIO. ISBN 978-1-85109-421-9.
- "Technology, disease, and colonial conquests, sixteenth to eighteenth centuries: essays reappraising the guns and germs theories". George Raudzens (2003). BRILL. p.190. ISBN 0-391-04206-8
- "The great Maya droughts: water, life, and death". Richardson Benedict Gill (2000). UNM Press. p.123. ISBN 0-8263-2774-5
- Owen, Wayne (2002). "Chapter 2 (TERRA–2): The History of Native Plant Communities in the South". Southern Forest Resource Assessment Final Report. U.S. Department of Agriculture, Forest Service, Southern Research Station. Retrieved 2008-07-29.
- David L. Lentz, ed. (2000). Imperfect balance: landscape transformations in the Precolumbian Americas. New York: Columbia University Press. pp. 241–242. ISBN 0-231-11157-6.
- Michael Pollan, The Omnivore's Dilemma
- Atran, Scott: Medin, Douglas (2010) The Native Mind and the Cultural Construction of Nature, MIT Press
- Skidmore, Joel (2006). "The Cascajal Block: The Earliest Precolumbian Writing". Mesoweb Reports & News. pp. 1–4. Retrieved 14 September 2009.
- Elizabeth Hill Boone, "Pictorial Documents and Visual Thinking in Postconquest Mexico". p. 158.
- A sample of this sound is available at the Princeton Art Museum website.
- ""Hair Pipes in Plains Indian Adornment" by John C. Ewers". Sil.si.edu. Retrieved 2009-09-14.
- "North America: Greenland." CIA Factbook. Retrieved 7 October 2012.
- "Aboriginal Identity (8), Area of Residence (6), Age Groups (12) and Sex (3) for the Population of Canada, Provinces and Territories, 2006 Census - 20% Sample Data". Statistics Canada. 2010-05-19. Retrieved 2012-12-11.
- Lizcano (2005), pg 218, Martinez-Torres (2008)
- "Overview of Race and Hispanic Origin, 2010 US Census" (PDF). March 2011. pp. 6–7. Retrieved 2012-08-12.
- "Belize 2000 Housing and Population Census". Belize Central Statistical Office. 2000. Retrieved 2008-09-30.
- "CIA — The World Factbook — Costa Rica". Cia.gov. Retrieved 2009-09-14.
- "El Salvador". CIA World Fact Book. 26 Apr 2010. Retrieved 9 May 2010.
- Guatemala entry at The World Factbook
- Honduras entry at The World Factbook
- Nicaragua entry at The World Factbook
- Panama entry at The World Factbook
- Dominica entry at The World Factbook
- Grenada entry at The World Factbook
- Haiti entry at The World Factbook
- Puerto Rico entry at The World Factbook
- Bonilla et al., Ancestral proportions and their association with skin pigmentation and bone mineral density in Puerto Rican women from New York City. Hum Gen (2004) 115: 57-58, and Reconstructing the population history of Puerto Rico by means of mtDNA phylogeographic analysis, Martinez-Cruzado et al, Am J Phys Anthropol. 2005 NCBI.nlm.nih.gov
- Suriname entry at The World Factbook
- "''Primeros Resultados de la Encuesta Complementaria de Pueblos Indígenas (ECPI)''" (PDF). Retrieved 2010-05-23.
- "''El 56% de los argentinos tiene antepasados indígenas''". 2005-01-16. Retrieved 2012-03-10.
- Argentina entry at The World Factbook
- Bolivia entry at The World Factbook
- "População residente, por cor ou raça, segundo a situação do domicílio - Instituto Brasileiro de Geografia e Estatística" (PDF). Retrieved 2010-05-23.
- Chile entry at The World Factbook
- Colombia entry at The World Factbook
- Ecuador entry at The World Factbook
- Guyana entry at The World Factbook
- "Paraguay." Pan-American Health Organization. (retrieved 12 July 2011)
- Paraguay entry at The World Factbook
- "CIA World Factbook: Suriname". CIA. Retrieved 23 Mar 2010.
- Uruguay entry at The World Factbook
- "Resultado Básico del XIV Censo Nacional de Población y Vivienda 2011". Ine.gov.ve. p. 14. Retrieved 2012-02-18.
- Indigenous identification was treated in a complex way in the 2001 Census, which collected data on self-identification, capacity to speak an indigenous language, and learning an indigenous language as a child. CEPAL, "Los pueblos indígenas de Bolivia: diagnóstico sociodemográfico a partir del censo del 2001," 2005, p. 32
- CEPAL, "Los pueblos indígenas de Bolivia: diagnóstico sociodemográfico a partir del censo del 2001," 2005, p. 42
- CEPAL, "Los pueblos indígenas de Bolivia: diagnóstico sociodemográfico a partir del censo del 2001," 2005, p. 47
- Gotkowitz, Laura (2007). A revolution for our rights: Indigenous struggles for land and justice in Bolivia, 1880–1952. Durham: Duke University Press. ISBN 0-8223-4049-6.
- Rivera Cusicanqui, Silvia (1987). Oppressed but not defeated: Peasant struggles among the Aymara and Qhechwa in Bolivia, 1900-1980. Geneva: United Nations Research Institute for Social Development.
- "Bolivian president Morales launches the "indigenous autonomy"". MercoPress. 2009-08-03. Retrieved 2009-08-05.
- "Bolivian Indians in historic step". BBC. 2009-08-03. Retrieved 2009-08-05.
- Colitt, Raymond (2011-02-01). "Uncontacted Amazonian Tribe Spotted in Rare Photos: Big Pics h". Discovery.com. Retrieved 2012-02-12.
- "In Amazonia, Defending the Hidden Tribes," The Washington Post, 8 July 2007.
- "Civilization.ca-Gateway to Aboriginal Heritage-Culture". Canadian Museum of Civilization Corporation. Government of Canada. May 12, 2006. Retrieved 2009-09-18.
- "Inuit Circumpolar Council (Canada)-ICC Charter". Inuit Circumpolar Council > ICC Charter and By-laws > ICC Charter. 2007. Retrieved 2009-09-18.
- "In the Kawaskimhon Aboriginal Moot Court Factum of the Federal Crown Canada" (PDF). Faculty of Law. University of Manitoba. 2007. p. 2. Retrieved 2009-09-18.
- Kaplam, Lawrence (2002). "Inuit or Eskimo: Which names to use?". Alaska Native Language Center, University of Alaska Fairbanks. Retrieved 2007-04-06.
- "What to Search: Topics-Canadian Genealogy Centre-Library and Archives Canada". Ethno-Cultural and Aboriginal Groups. Government of Canada. 2009-05-27. Retrieved 2009-10-02.
- "Innu Culture 3. Innu-Inuit 'Warfare'". 1999, Adrian Tanner Department of Anthropology-Memorial University of Newfoundland. Retrieved 2009-10-05.
- A Dialogue on Foreign Policy. Department of Foreign Affairs and International Trade. 2003-01. pp. 15–16. Retrieved 2006-11-30.
- "National Aboriginal Day History" (PDF). Indian and Northern Affairs Canada. Retrieved 2009-10-18.
- "Assembly of First Nations - Assembly of First Nations-The Story". Assembly of First Nations. Retrieved 2009-10-02.
- "Civilization.ca-Gateway to Aboriginal Heritage-object". Canadian Museum of Civilization Corporation. May 12, 2006. Retrieved 2009-10-02.
- "Aboriginal Identity (8), Sex (3) and Age Groups (12) for the Population of Canada, Provinces, Territories, Census Metropolitan Areas and Census Agglomerations, 2006 Census - 20% Sample Data". Canada 2006 Census data products. Statistics Canada, Government of Canada. 06/12/2008. Retrieved 2009-09-18.
- "El gradiente sociogenético chileno y sus implicaciones ético-sociales". Medwave.cl. 2000-06-15. Retrieved 2010-05-23.
- DANE 2005 national census
- "Health equity and ethnic minorities in emergency situations", Pier Paolo Balladelli, José Milton Guzmán, Marcelo Korc, Paula Moreno, Gabriel Rivera, The Commission on Social Health Determinants, Pan American Health Organization, World Health Organization, Bogotá, Colombia, 2007
- Bourgois, Philippe (Anthropology Today, Vol. 2, No. 2 (Apr., 1986), pp. 4-9). The Miskitu of Nicaragua: Politicized Ethnicity. Royal Anthropological Institute of Great Britain and Ireland. JSTOR 3033029.
- Ley General de Derechos Lingüísticos de los Pueblos Indígenas (PDF).
- (Spanish) "Ley General de Derechos Lingüísticos de los Pueblos Indígenas (General Law of the Rights of the Indigenous Peoples)" (PDF). CDI México. Archived from the original on September 25, 2007. Retrieved 2007-10-02.
- "Kikapúes — Kikaapoa". CDI México. Retrieved 2007-10-02.
- "Aguacatecos, cakchiqueles, ixiles, kekchíes, tecos y quichés". CDI México. Archived from the original on 2007-09-26. Retrieved 2007-10-02.
- "Poblicación de 5 años y más por Entidad Federativa, sexo y grupos lengüa indígena quinquenales de edad, y su distribución según condición de habla indígena y habla española" (PDF). INEGI, México. Retrieved 2007-12-13.
- PDF (779 KB). Second article.
- Dean, Bartholomew 2009 Urarina Society, Cosmology, and History in Peruvian Amazonia, Gainesville: University Press of Florida ISBN 978-0-8130-3378-5, UPF.com
- The American Heritage Dictionary of the English Language. Boston: Houghton Mifflin. 2000. ISBN 0-395-82517-2 (hardcover), ISBN 0-618-08230-1hardcover with CD ROM)
- Mandel, Michael. The Charter of Rights and the Legalization of Politics in Canada. Revised, Updated and Expanded Edition. (Toronto: Thompson Educational Publishing, Inc., 1994), pp. 354-356.
- ( R.S., 1985, c. I-5 )Canadian Constitution Act, 1982, Section Twenty-five of the Canadian Charter of Rights and Freedoms 35.
- Africa.euters.com[dead link]
- Harten, Sven (2011). The Rise of Evo Morales. Zed Books. ISBN 978-1-84813-523-9.
- Plenglish.com[dead link]
- A Nomenclature System for the Tree of Human Y-Chromosomal Binary Haplogroups 12 (2). Genome Research. 2002. pp. Vol. 12(2), 339–348. doi:10.1101/gr.217602. PMC 155271. PMID 11827954. Retrieved 2010-01-19. (Detailed hierarchical chart)
- Griffiths, Anthony J. F. (1999). An Introduction to genetic analysis. New York: W.H. Freeman. ISBN 0-7167-3771-X. Retrieved 2010-02-03.
- "Learn about Y-DNA Haplogroup Q. Genebase Tutorials" (Verbal tutorial possible). Genebase Systems. 2008. Retrieved 2009-11-21.
- Orgel L (2004). "Prebiotic chemistry and the origin of the RNA world" (PDF). Crit Rev Biochem Mol Biol 39 (2): 99–123. doi:10.1080/10409230490460765. PMID 15217990. Retrieved 2010-01-19.
- First Americans Endured 20,000-Year Layover — Jennifer Viegas, Discovery News. Discovery Channel. Retrieved 2009-11-18 page 2
- Than, Ker (2008). "New World Settlers Took 20,000-Year Pit Stop". National Geographic Society. Retrieved 2010-01-23.
- "Summary of knowledge on the subclades of Haplogroup Q". Genebase Systems. 2009. Retrieved 2009-11-22.
- Ruhlen M (November 1998). "The origin of the Na-Dene". Proceedings of the National Academy of Sciences of the United States of America 95 (23): 13994–6. doi:10.1073/pnas.95.23.13994. PMC 25007. PMID 9811914.
- Zegura SL, Karafet TM, Zhivotovsky LA, Hammer MF (January 2004). "High-resolution SNPs and microsatellite haplotypes point to a single, recent entry of Native American Y chromosomes into the Americas". Molecular Biology and Evolution 21 (1): 164–75. doi:10.1093/molbev/msh009. PMID 14595095.
- "mtDNA Variation among Greenland Eskimos. The Edge of the Beringian Expansion". Laboratory of Biological Anthropology, Institute of Forensic Medicine, University of Copenhagen, Copenhagen, McDonald Institute for Archaeological Research,University of Cambridge, Cambridge, University of Hamburg, Hamburg. 2000. doi:10.1086/303038. Retrieved 2009-11-22.
- "The peopling of the New World — Perspectives from Molecular Anthropology". Department of Anthropology, University of Pennsylvania (Annual Review of Anthropology): Vol. 33, 551–583. 2004. doi:10.1146/annurev.anthro.33.070203.143932. Retrieved 2010-02-03.
- "Native American Mitochondrial DNA Analysis Indicates That the Amerind and the Nadene Populations Were Founded by Two Independent Migrations". Center for Genetics and Molecular Medicine and Departments of Biochemistry and Anthropology, Emory University School of Medicine, Atlanta, Georgia. Genetics Society of America. Vol 130, 153-162. Retrieved 2009-11-28.
- Peter N. Jones (October 2002). American Indian Mtdna, Y Chromosome Genetic Data, and the Peopling of North America. Bauu Institute. p. 4. ISBN 978-0-9721349-1-0. Retrieved 13 July 2011.
- Gaskins, S. (1999). Children’s daily lives in a Mayan village: A case study of culturally constructed roles and activities. Children’s engagement in the world: Sociocultural perspectives, 25-61.
- Nimmo, J. (2008). Young children's access to real life: An examination of the growing boundaries between children in child care and adults in the community. Contemporary Issues in Early Childhood, 9(1), 3-13.
- Morelli, G., Rogoff, B., & Angelillo, C. (2003). Cultural variation in young children's access to work or involvement in specialised child-focused activities. International Journal of Behavioral Development, 27(3), 264-274.
- Woodhead, M. (1998). Children's perspectives on their working lives: A participatory study in Bangladesh, Ethiopia, the Philippines, Guatemala, El Salvador and Nicaragua.
- Rogoff, B., Morelli, G. A., & Chavajay, P. (2010). Children’s Integration in Communities and Segregation From People of Differing Ages. Perspectives on Psychological Science, 5(4), 431-440.
- Gaskins, S. (2006). 13 The Cultural Organization of Yucatec Mayan Children’s Social Interactions. Peer relationships in cultural context, 283.== References ==
- König, Eva (2002). Indianer 1858-1928, Photographische Reisen von Alaska bis Feuerland. Museum für Volkerkunde Hamburg: Edition Braus. ISBN 3-89904-021-X.
- Cappel, Constance (2007). The Smallpox Genocide of the Odawa Tribe at L'Arbre Croche, 1763: The History of a Native American People. Lewiston, N.Y.: Edwin Mellen Press. ISBN 978-0-7734-5220-6. OCLC 175217515.
- Cappel, Constance,(editor) (2006). Odawa Language and Legends: Andrew J. Blackbird and Raymond Kiogima. Xlibris. ISBN 1-59926-920-1.
- Churchill, Ward (1997). A Little Matter of Genocide: Holocaust and Denial in the Americas, 1492 to the Present. San Francisco: City Lights Books. ISBN 978-0-87286-323-1. OCLC 35029491.
- Dean, Bartholomew (2002). "State Power and Indigenous Peoples in Peruvian Amazonia: A Lost Decade, 1990–2000". In Maybury-Lewis, David. The Politics of Ethnicity: Indigenous Peoples in Latin American States. David Rockefeller Center series on Latin American studies, Harvard University 9. Cambridge, Mass.: Harvard University/David Rockefeller Center for Latin American Studies. pp. 199–238. ISBN 0-674-00964-9. OCLC 427474742.
- Dean, Bartholomew; Levi, Jerome M. (2003). At the Risk of Being Heard: Identity, Indigenous Rights, and Postcolonial States. Ann Arbor: University of Michigan Press. ISBN 978-0-472-09736-4. OCLC 50841012.
- Dean, Bartholomew (January 2006). "Salt of the Mountain: Campa Asháninka History and Resistance in the Peruvian Jungle (review)". The Americas 62 (3): 464–466. doi:10.1353/tam.2006.0013. ISSN 0003-1615.
- Kane, Katie (1999). "Nits Make Lice: Drogheda, Sand Creek, and the Poetics of Colonial Extermination". Cultural Critique (University of Minnesota Press) 42 (42): 81–103. doi:10.2307/1354592. ISSN 0882-4371. JSTOR 1354592.
- Krech, Shepard III (1999). The Ecological Indian: Myth and History. New York: W. W. Norton & Company. ISBN 978-0-393-04755-4. OCLC 318358852.
- Varese, Stefano; Ribeiro, Darcy (2004) . Salt of the Mountain: Campa Ashaninka History and Resistance in the Peruvian Jungle. trans. Susan Giersbach Rascón. Norman: University of Oklahoma Press. ISBN 0-8061-3512-3. OCLC 76909908.
|Wikimedia Commons has media related to: Indigenous peoples of the Americas|
- The Peopling of the American Continents, Early California History
- North America, Indigenous Peoples Issues and Resources
- South America, Indigenous Peoples Issues and Resources
- Indigenous Peoples in Brazil. Instituto Socioambiental (ISA)
- America's Stone Age explorers, PBS Nova
- A history of Native people of Canada - The Canadian Museum of Civilization
- Alexander Francis Chamberlain (1911). "Indians, North American". In Chisholm, Hugh. Encyclopædia Britannica (11th ed.). Cambridge University Press. |
Heuristics in judgment and decision-making
In psychology, heuristics are simple, efficient rules which people often use to form judgments and make decisions. They are mental shortcuts that usually involve focusing on one aspect of a complex problem and ignoring others. These rules work well under most circumstances, but they can lead to systematic deviations from logic, probability or rational choice theory. The resulting errors are called "cognitive biases" and many different types have been documented. These have been shown to affect people's choices in situations like valuing a house, deciding the outcome of a legal case, or making an investment decision. Heuristics usually govern automatic, intuitive judgments but can also be used as deliberate mental strategies when working from limited information.
Cognitive scientist Herbert A. Simon originally proposed that human judgments are limited by available information, time constraints, and cognitive limitations, calling this bounded rationality. In the early 1970s, psychologists Amos Tversky and Daniel Kahneman demonstrated three heuristics that underlie a wide range of intuitive judgments. These findings set in motion the heuristics and biases research program, which studies how people make real-world judgments and the conditions under which those judgments are unreliable. This research challenged the idea that human beings are rational actors, but it provided a theory of information processing to explain how people make estimates and choices. The research first gained worldwide attention in 1974 with the Science paper "Judgment Under Uncertainty: Heuristics and Biases" and has guided almost all current theories of decision-making; although the originally proposed heuristics have been challenged in subsequent debate, the program has reshaped the field by permanently setting its research questions.
This heuristics-and-biases tradition has been criticised by Gerd Gigerenzer and others for being too focused on how heuristics lead to errors. The critics argue that heuristics can be seen as rational in an underlying sense. According to this perspective, heuristics are good enough for most purposes without being too demanding on the brain's resources. Another theoretical perspective sees heuristics as fully rational in that they are rapid, can be made without full information and can be as accurate as more complicated procedures. By understanding the role of heuristics in human psychology, marketers and other persuaders can influence decisions, such as the prices people pay for goods or the quantity they buy.
- 1 Types
- 2 Theories
- 3 Consequences
- 4 See also
- 5 Footnotes
- 6 Citations
- 7 References
- 8 Further reading
- 9 External links
In their initial research, Tversky and Kahneman proposed three heuristics—availability, representativeness, and anchoring and adjustment. Subsequent work has identified many more. Heuristics that underlie judgment are called "judgment heuristics". Another type, called "evaluation heuristics", are used to judge the desirability of possible choices.
In psychology, availability is the ease with which a particular idea can be brought to mind. When people estimate how likely or how frequent an event is on the basis of its availability, they are using the availability heuristic. When an infrequent event can be brought easily and vividly to mind, people tend to overestimate its likelihood. For example, people overestimate their likelihood of dying in a dramatic event such as a tornado or terrorism. Dramatic, violent deaths are usually more highly publicised and therefore have a higher availability. On the other hand, common but mundane events are hard to bring to mind, so their likelihoods tend to be underestimated. These include deaths from suicides, strokes, and diabetes. This heuristic is one of the reasons why people are more easily swayed by a single, vivid story than by a large body of statistical evidence. It may also play a role in the appeal of lotteries: to someone buying a ticket, the well-publicised, jubilant winners are more available than the millions of people who have won nothing.
When people judge whether more English words begin with T or with K, the availability heuristic gives a quick way to answer the question. Words that begin with T come more readily to mind, and so subjects give a correct answer without counting out large numbers of words. However, this heuristic can also produce errors. When people are asked whether there are more English words with K in the first position or with K in the third position, they use the same process. It is easy to think of words that begin with K, such as kangaroo, kitchen, or kept. It is harder to think of words with K as the third letter, such as lake, or acknowledge, although objectively these are three times more common. This leads people to the incorrect conclusion that K is more common at the start of words. In another experiment, subjects heard the names of many celebrities, roughly equal numbers of whom were male and female. The subjects were then asked whether the list of names included more men or more women. When the men in the list were more famous, a great majority of subjects incorrectly thought there were more of them, and vice versa for women. Tversky and Kahneman's interpretation of these results is that judgments of proportion are based on availability, which is higher for the names of better-known people.
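The base rates in this letter example are easy to check directly. The sketch below counts words with K in the first versus the third position in a word list; the file path and the use of a dictionary file are assumptions for illustration, and dictionary counts differ from frequencies in running text, which is what the original three-to-one claim refers to.

```python
# Minimal sketch: count words with "k" as the first letter versus the third letter.
# The word-list path is an assumption; any one-word-per-line file will do.
def k_position_counts(path="/usr/share/dict/words", letter="k"):
    first = third = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            word = line.strip().lower()
            if len(word) >= 3 and word.isalpha():
                first += word[0] == letter
                third += word[2] == letter
    return first, third

if __name__ == "__main__":
    first, third = k_position_counts()
    print(f"words with K first: {first}, words with K third: {third}")
```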
In one experiment that occurred before the 1976 U.S. Presidential election, some participants were asked to imagine Gerald Ford winning, while others did the same for a Jimmy Carter victory. Each group subsequently viewed their allocated candidate as significantly more likely to win. The researchers found a similar effect when students imagined a good or a bad season for a college football team. The effect of imagination on subjective likelihood has been replicated by several other researchers.
A concept's availability can be affected by how recently and how frequently it has been brought to mind. In one study, subjects were given partial sentences to complete. The words were selected to activate the concept either of hostility or of kindness: a process known as priming. They then had to interpret the behavior of a man described in a short, ambiguous story. Their interpretation was biased towards the emotion they had been primed with: the more priming, the greater the effect. A greater interval between the initial task and the judgment decreased the effect.
Tversky and Kahneman offered the availability heuristic as an explanation for illusory correlations in which people wrongly judge two events to be associated with each other. They explained that people judge correlation on the basis of the ease of imagining or recalling the two events together.
The representativeness heuristic is seen when people use categories, for example when deciding whether or not a person is a criminal. An individual thing has a high representativeness for a category if it is very similar to a prototype of that category. When people categorise things on the basis of representativeness, they are using the representativeness heuristic. "Representative" is here meant in two different senses: the prototype used for comparison is representative of its category, and representativeness is also a relation between that prototype and the thing being categorised. While it is effective for some problems, this heuristic involves attending to the particular characteristics of the individual, ignoring how common those categories are in the population (called the base rates). Thus, people can overestimate the likelihood that something has a very rare property, or underestimate the likelihood of a very common property. This is called the base rate fallacy. Representativeness explains this and several other ways in which human judgments break the laws of probability.
The representativeness heuristic is also an explanation of how people judge cause and effect: when they make these judgements on the basis of similarity, they are also said to be using the representativeness heuristic. This can lead to a bias, incorrectly finding causal relationships between things that resemble one another and missing them when the cause and effect are very different. Examples of this include both the belief that "emotionally relevant events ought to have emotionally relevant causes", and magical associative thinking.
Ignorance of base rates
A 1973 experiment used a psychological profile of Tom W., a fictional graduate student. One group of subjects had to rate Tom's similarity to a typical student in each of nine academic areas (including Law, Engineering and Library Science). Another group had to rate how likely it is that Tom specialised in each area. If these ratings of likelihood are governed by probability, then they should resemble the base rates, i.e. the proportion of students in each of the nine areas (which had been separately estimated by a third group). If people based their judgments on probability, they would say that Tom is more likely to study Humanities than Library Science, because there are many more Humanities students, and the additional information in the profile is vague and unreliable. Instead, the ratings of likelihood matched the ratings of similarity almost perfectly, both in this study and a similar one where subjects judged the likelihood of a fictional woman taking different careers. This suggests that rather than estimating probability using base rates, subjects had substituted the more accessible attribute of similarity.
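The normative benchmark implied here can be written out as Bayes' rule, which weights the evidence in the profile by the base rate of each field. This is a standard textbook restatement, not a formula stated in the original study.

```latex
P(\text{field}_i \mid \text{profile}) =
  \frac{P(\text{profile} \mid \text{field}_i)\, P(\text{field}_i)}
       {\sum_j P(\text{profile} \mid \text{field}_j)\, P(\text{field}_j)}
```

When the profile is vague and unreliable, the likelihood terms carry little information, so the posterior should stay close to the base rates — the opposite of what the similarity-matched ratings show.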
When people rely on representativeness, they can fall into an error which breaks a fundamental law of probability. Tversky and Kahneman gave subjects a short character sketch of a woman called Linda, describing her as, "31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations". People reading this description then ranked the likelihood of different statements about Linda. Amongst others, these included "Linda is a bank teller", and, "Linda is a bank teller and is active in the feminist movement". People showed a strong tendency to rate the latter, more specific statement as more likely, even though a conjunction of the form "Linda is both X and Y" can never be more probable than the more general statement "Linda is X". The explanation in terms of heuristics is that the judgment was distorted because, for the readers, the character sketch was representative of the sort of person who might be an active feminist but not of someone who works in a bank. A similar exercise concerned Bill, described as "intelligent but unimaginative". A great majority of people reading this character sketch rated "Bill is an accountant who plays jazz for a hobby", as more likely than "Bill plays jazz for a hobby".
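The probability rule being violated here, the conjunction rule, can be stated in one line:

```latex
P(X \wedge Y) \;=\; P(X)\,P(Y \mid X) \;\le\; P(X), \qquad \text{since } P(Y \mid X) \le 1 .
```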
Without success, Tversky and Kahneman used what they described as "a series of increasingly desperate manipulations" to get their subjects to recognise the logical error. In one variation, subjects had to choose between a logical explanation of why "Linda is a bank teller" is more likely, and a deliberately illogical argument which said that "Linda is a feminist bank teller" is more likely "because she resembles an active feminist more than she resembles a bank teller". Sixty-five percent of subjects found the illogical argument more convincing. Other researchers also carried out variations of this study, exploring the possibility that people had misunderstood the question. They did not eliminate the error. The error disappears when the question is posed in terms of frequencies. Everyone in these versions of the study recognised that out of 100 people fitting an outline description, the conjunction statement ("She is X and Y") cannot apply to more people than the general statement ("She is X").
Ignorance of sample size
Tversky and Kahneman asked subjects to consider a problem about random variation. Imagining for simplicity that exactly half of the babies born in a hospital are male, the ratio will not be exactly half in every time period. On some days, more girls will be born and on others, more boys. The question was, does the likelihood of deviating from exactly half depend on whether there are many or few births per day? It is a well-established consequence of sampling theory that proportions will vary much more day-to-day when the typical number of births per day is small. However, people's answers to the problem do not reflect this fact. They typically reply that the number of births in the hospital makes no difference to the likelihood of more than 60% male babies in one day. The explanation in terms of the heuristic is that people consider only how representative the figure of 60% is of the previously given average of 50%.
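The sampling-theory claim is easy to verify with a binomial model. The sketch below assumes, purely for illustration, daily birth counts of 15 and 45 and a 50% chance of a boy; the smaller hospital exceeds the 60%-boys threshold on a noticeably larger share of days.

```python
from math import comb

def prob_more_than_60_percent_boys(n, p=0.5):
    """Probability that strictly more than 60% of n independent births are boys."""
    threshold = int(0.6 * n)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(threshold + 1, n + 1))

# Illustrative small and large hospitals (the exact counts are assumptions).
for n in (15, 45):
    print(f"{n} births/day: P(>60% boys) = {prob_more_than_60_percent_boys(n):.3f}")
```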
Richard E. Nisbett and colleagues suggest that representativeness explains the dilution effect, in which irrelevant information weakens the effect of a stereotype. Subjects in one study were asked whether "Paul" or "Susan" was more likely to be assertive, given no other information than their first names. They rated Paul as more assertive, apparently basing their judgment on a gender stereotype. Another group, told that Paul's and Susan's mothers each commute to work in a bank, did not show this stereotype effect; they rated Paul and Susan as equally assertive. The explanation is that the additional information about Paul and Susan made them less representative of men or women in general, and so the subjects' expectations about men and women had a weaker effect. In other words, irrelevant, non-diagnostic information about a person can dilute the influence of the information that is actually relevant.
Misperception of randomness
Representativeness explains systematic errors that people make when judging the probability of random events. For example, in a sequence of coin tosses, each of which comes up heads (H) or tails (T), people reliably tend to judge a clearly patterned sequence such as HHHTTT as less likely than a less patterned sequence such as HTHTTH. These sequences have exactly the same probability, but people tend to see the more clearly patterned sequences as less representative of randomness, and so less likely to result from a random process. Tversky and Kahneman argued that this effect underlies the gambler's fallacy; a tendency to expect outcomes to even out over the short run, like expecting a roulette wheel to come up black because the last several throws came up red. They emphasised that even experts in statistics were susceptible to this illusion: in a 1971 survey of professional psychologists, they found that respondents expected samples to be overly representative of the population they were drawn from. As a result, the psychologists systematically overestimated the statistical power of their tests, and underestimated the sample size needed for a meaningful test of their hypotheses.
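For a fair coin with independent tosses, any specific sequence of six outcomes is equally likely, which is the point of the comparison:

```latex
P(\text{HHHTTT}) \;=\; P(\text{HTHTTH}) \;=\; \left(\tfrac{1}{2}\right)^{6} \;=\; \tfrac{1}{64} \approx 0.016 .
```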
Anchoring and adjustment
Anchoring and adjustment is a heuristic used in many situations where people estimate a number. According to Tversky and Kahneman's original description, it involves starting from a readily available number—the "anchor"—and shifting either up or down to reach an answer that seems plausible. In Tversky and Kahneman's experiments, people did not shift far enough away from the anchor. Hence the anchor contaminates the estimate, even if it is clearly irrelevant. In one experiment, subjects watched a number being selected from a spinning "wheel of fortune". They had to say whether a given quantity was larger or smaller than that number. For instance, they might be asked, "Is the percentage of African countries which are members of the United Nations larger or smaller than 65%?" They then tried to guess the true percentage. Their answers correlated well with the arbitrary number they had been given. Insufficient adjustment from an anchor is not the only explanation for this effect. An alternative theory is that people form their estimates on evidence which is selectively brought to mind by the anchor.
The anchoring effect has been demonstrated by a wide variety of experiments both in laboratories and in the real world. It remains when the subjects are offered money as an incentive to be accurate, or when they are explicitly told not to base their judgment on the anchor. The effect is stronger when people have to make their judgments quickly. Subjects in these experiments lack introspective awareness of the heuristic, denying that the anchor affected their estimates.
Even when the anchor value is obviously random or extreme, it can still contaminate estimates. One experiment asked subjects to estimate the year of Albert Einstein's first visit to the United States. Anchors of 1215 and 1992 contaminated the answers just as much as more sensible anchor years. Other experiments asked subjects if the average temperature in San Francisco is more or less than 558 degrees, or whether there had been more or fewer than 100,025 top ten albums by The Beatles. These deliberately absurd anchors still affected estimates of the true numbers.
Anchoring results in a particularly strong bias when estimates are stated in the form of a confidence interval. An example is where people predict the value of a stock market index on a particular day by defining an upper and lower bound so that they are 98% confident the true value will fall in that range. A reliable finding is that people anchor their upper and lower bounds too close to their best estimate. This leads to an overconfidence effect. One much-replicated finding is that when people are 98% certain that a number is in a particular range, they are wrong about thirty to forty percent of the time.
Anchoring also causes particular difficulty when many numbers are combined into a composite judgment. Tversky and Kahneman demonstrated this by asking a group of people to rapidly estimate the product 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1. Another group had to estimate the same product in reverse order; 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8. Both groups underestimated the answer by a wide margin, but the latter group's average estimate was significantly smaller. The explanation in terms of anchoring is that people multiply the first few terms of each product and anchor on that figure. A less abstract task is to estimate the probability that an aircraft will crash, given that there are numerous possible faults each with a likelihood of one in a million. A common finding from studies of these tasks is that people anchor on the small component probabilities and so underestimate the total. A corresponding effect happens when people estimate the probability of multiple events happening in sequence, such as an accumulator bet in horse racing. For this kind of judgment, anchoring on the individual probabilities results in an overestimation of the combined probability.
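The arithmetic behind these examples is straightforward to check. The sketch below computes the product from the estimation task and contrasts disjunctive and conjunctive composites; the fault count and the accumulator-leg probabilities are illustrative assumptions, not figures from the original studies.

```python
from math import prod

# The product both groups were asked to estimate; the order of the factors
# does not change the exact answer.
assert prod(range(1, 9)) == prod(range(8, 0, -1)) == 40320

# Disjunctive composite: probability that at least one of n independent faults
# occurs, each with a one-in-a-million chance (n is an assumed figure).
p_fault, n_faults = 1e-6, 1000
p_any_fault = 1 - (1 - p_fault) ** n_faults   # ~0.001, far above any single fault

# Conjunctive composite: an accumulator bet pays only if every leg wins, so the
# combined probability is the product of the (assumed independent) legs.
legs = [0.7, 0.7, 0.7, 0.7]
p_all_legs = prod(legs)                        # ~0.24, well below any single leg
print(p_any_fault, p_all_legs)
```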
People's valuation of goods, and the quantities they buy, respond to anchoring effects. In one experiment, people wrote down the last two digits of their social security numbers. They were then asked to consider whether they would pay this number of dollars for items whose value they did not know, such as wine, chocolate, and computer equipment. They then entered an auction to bid for these items. Those with the highest two-digit numbers submitted bids that were many times higher than those with the lowest numbers. When a stack of soup cans in a supermarket was labelled, "Limit 12 per customer", the label influenced customers to buy more cans. In another experiment, real estate agents appraised the value of houses on the basis of a tour and extensive documentation. Different agents were shown different listing prices, and these affected their valuations. For one house, the appraised value ranged from US$114,204 to $128,754.
Anchoring and adjustment has also been shown to affect grades given to students. In one experiment, 48 teachers were given bundles of student essays, each of which had to be graded and returned. They were also given a fictional list of the students' previous grades. The mean of these grades affected the grades that teachers awarded for the essay.
One study showed that anchoring affected the sentences in a fictional rape trial. The subjects were trial judges with, on average, more than fifteen years of experience. They read documents including witness testimony, expert statements, the relevant penal code, and the final pleas from the prosecution and defence. The two conditions of this experiment differed in just one respect: the prosecutor demanded a 34-month sentence in one condition and 12 months in the other; there was an eight-month difference between the average sentences handed out in these two conditions. In a similar mock trial, the subjects took the role of jurors in a civil case. They were either asked to award damages "in the range from $15 million to $50 million" or "in the range from $50 million to $150 million". Although the facts of the case were the same each time, jurors given the higher range decided on an award that was about three times higher. This happened even though the subjects were explicitly warned not to treat the requests as evidence.
"Affect", in this context, is a feeling such as fear, pleasure or surprise. It is shorter in duration than a mood, occurring rapidly and involuntarily in response to a stimulus. While reading the words "lung cancer" might generate an affect of dread, the words "mother's love" can create an affect of affection and comfort. When people use affect ("gut responses") to judge benefits or risks, they are using the affect heuristic. The affect heuristic has been used to explain why messages framed to activate emotions are more persuasive than those framed in a purely factual way.
There are competing theories of human judgment, which differ on whether the use of heuristics is irrational. A cognitive laziness approach argues that heuristics are inevitable shortcuts given the limitations of the human brain. According to the natural assessments approach, some complex calculations are already done rapidly and automatically by the brain, and other judgments make use of these processes rather than calculating from scratch. This has led to a theory called "attribute substitution", which says that people often handle a complicated question by answering a different, related question, without being aware that this is what they are doing. A third approach argues that heuristics perform just as well as more complicated decision-making procedures, but more quickly and with less information. This perspective emphasises the "fast and frugal" nature of heuristics.
In 2002 Daniel Kahneman and Shane Frederick proposed a process called attribute substitution which happens without conscious awareness. According to this theory, when somebody makes a judgment (of a target attribute) which is computationally complex, a rather more easily calculated heuristic attribute is substituted. In effect, a difficult problem is dealt with by answering a rather simpler problem, without the person being aware this is happening. This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean.
This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system. Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place.
In 1975, psychologist Stanley Smith Stevens proposed that the strength of a stimulus (e.g. the brightness of a light, the severity of a crime) is encoded by brain cells in a way that is independent of modality. Kahneman and Frederick built on this idea, arguing that the target attribute and heuristic attribute could be very different in nature.
Kahneman and Frederick propose three conditions for attribute substitution:
- The target attribute is relatively inaccessible.
Substitution is not expected to take place in answering factual questions that can be retrieved directly from memory ("What is your birthday?") or about current experience ("Do you feel thirsty now?").
- An associated attribute is highly accessible.
This might be because it is evaluated automatically in normal perception or because it has been primed. For example, someone who has been thinking about their love life and is then asked how happy they are might substitute how happy they are with their love life rather than other areas.
- The substitution is not detected and corrected by the reflective system.
For example, when asked "A bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do. Whether they feel that is the right answer will depend on whether they check the calculation with their reflective system.
Kahneman gives an example where some Americans were offered insurance against their own death in a terrorist attack while on a trip to Europe, while another group were offered insurance that would cover death of any kind on the trip. Even though "death of any kind" includes "death in a terrorist attack", the former group were willing to pay more than the latter. Kahneman suggests that the attribute of fear is being substituted for a calculation of the total risks of travel. Fear of terrorism for these subjects was stronger than a general fear of dying on a foreign trip. See Morewedge and Kahneman (2010) for a recent summary of attribute substitution.
Fast and frugal
Gerd Gigerenzer and colleagues have argued that heuristics can be used to make judgments that are accurate rather than biased. According to them, heuristics are "fast and frugal" alternatives to more complicated procedures, giving answers that are just as good. The benefits of heuristic or 'less is more' decision-making strategies have been observed in a variety of settings, ranging from food consumption, to the stock market to online dating.
Efficient decision heuristics
Warren Thorngate, an emeritus social psychologist, implemented ten simple decision rules, or heuristics, as computer subroutines in a simulation program; each subroutine chose an alternative in a series of randomly generated decision situations. He then determined how often each heuristic selected the alternative with the highest (through lowest) expected value. He found that most of the simulated heuristics usually selected alternatives with the highest expected value and almost never selected alternatives with the lowest expected value. More detail about the simulation can be found in his article "Efficient decision heuristics" (1980).
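The following is a minimal sketch of that kind of simulation. The two heuristics shown ("equiprobable", which ignores outcome probabilities and averages payoffs, and "maximin", which picks the option with the best worst case) and the randomly generated decision environments are illustrative assumptions, not Thorngate's exact ten rules or his original settings.

```python
import random

def expected_value(option, probs):
    return sum(p * payoff for p, payoff in zip(probs, option))

def equiprobable(options, probs):
    # Ignore the probabilities; pick the option with the highest average payoff.
    return max(options, key=lambda o: sum(o) / len(o))

def maximin(options, probs):
    # Ignore the probabilities; pick the option whose worst payoff is largest.
    return max(options, key=min)

def hit_rate(heuristic, trials=10_000, n_options=4, n_outcomes=3):
    """How often the heuristic picks the option with the highest expected value."""
    hits = 0
    for _ in range(trials):
        raw = [random.random() for _ in range(n_outcomes)]
        probs = [r / sum(raw) for r in raw]
        options = [[random.uniform(0, 100) for _ in range(n_outcomes)]
                   for _ in range(n_options)]
        best = max(options, key=lambda o: expected_value(o, probs))
        hits += heuristic(options, probs) is best
    return hits / trials

if __name__ == "__main__":
    for h in (equiprobable, maximin):
        print(f"{h.__name__}: {hit_rate(h):.2f}")
```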
Psychologist Benoît Monin reports a series of experiments in which subjects, looking at photographs of faces, have to judge whether they have seen those faces before. It is repeatedly found that attractive faces are more likely to be mistakenly labeled as familiar. Monin interprets this result in terms of attribute substitution. The heuristic attribute in this case is a "warm glow"; a positive feeling towards someone that might either be due to their being familiar or being attractive. This interpretation has been criticised, because not all the variance in familiarity is accounted for by the attractiveness of the photograph.
Judgments of morality and fairness
Legal scholar Cass Sunstein has argued that attribute substitution is pervasive when people reason about moral, political or legal matters. Given a difficult, novel problem in these areas, people search for a more familiar, related problem (a "prototypical case") and apply its solution as the solution to the harder problem. According to Sunstein, the opinions of trusted political or religious authorities can serve as heuristic attributes when people are asked their own opinions on a matter. Another source of heuristic attributes is emotion: people's moral opinions on sensitive subjects like sexuality and human cloning may be driven by reactions such as disgust, rather than by reasoned principles. Sunstein has been challenged as not providing enough evidence that attribute substitution, rather than other processes, is at work in these cases.
- Lewis, Alan (17 April 2008). The Cambridge Handbook of Psychology and Economic Behaviour. Cambridge University Press. p. 43. ISBN 978-0-521-85665-2. Retrieved 7 February 2013.
- Harris, Lori A. (21 May 2007). CliffsAP Psychology. John Wiley & Sons. p. 65. ISBN 978-0-470-19718-9. Retrieved 7 February 2013.
- Nevid, Jeffrey S. (1 October 2008). Psychology: Concepts and Applications. Cengage Learning. p. 251. ISBN 978-0-547-14814-4. Retrieved 7 February 2013.
- Bazerman, M. H. (2017). "Judgment and decision making". In R. Biswas-Diener & E. Diener. Noba textbook series: Psychology. Champaign, IL: DEF publishers.
- Kahneman, Daniel; Klein, Gary (2009). "Conditions for intuitive expertise: A failure to disagree". American Psychologist. 64 (6): 515–526. PMID 19739881. doi:10.1037/a0016755.
- Kahneman, Daniel (2011). "Introduction". Thinking, Fast and Slow. Farrar, Straus and Giroux. ISBN 978-1-4299-6935-2.
- Plous 1999, p. 109
- Fiedler, Klaus; von Sydow, Momme (2015). "Heuristics and Biases: Beyond Tversky and Kahneman's (1974) Judgment under Uncertainty" (PDF). In Eysenck, Michael W.; Groome, David. Cognitive Psychology: Revising the Classical Studies. Sage, London. pp. 146–161. ISBN 978-1-4462-9447-5.
- Gigerenzer, G. (1996). "On narrow norms and vague heuristics: A reply to Kahneman and Tversky". Psychological Review. 103 (3): 592–596. doi:10.1037/0033-295X.103.3.592.
- Hastie & Dawes 2009, pp. 210–211
- Tversky, Amos; Kahneman, Daniel (1973), "Availability: A Heuristic for Judging Frequency and Probability", Cognitive Psychology, 5: 207–232, ISSN 0010-0285, doi:10.1016/0010-0285(73)90033-9
- Morewedge, Carey K.; Todorov, Alexander (24 January 2012). "The Least Likely Act: Overweighting Atypical Past Behavior in Behavioral Predictions". Social Psychological and Personality Science. 3 (6): 760–766. doi:10.1177/1948550611434784.
- Sutherland 2007, pp. 16–17
- Plous 1993, pp. 123–124
- Tversky & Kahneman 1974
- Carroll, J. (1978). "The Effect of Imagining an Event on Expectations for the Event: An Interpretation in Terms of the Availability Heuristic". Journal of Experimental Social Psychology. 14 (1): 88–96. ISSN 0022-1031. doi:10.1016/0022-1031(78)90062-8.
- Srull, Thomas K.; Wyer, Robert S. (1979). "The Role of Category Accessibility in the Interpretation of Information About Persons: Some Determinants and Implications". Journal of Personality and Social Psychology. 37 (10): 1660–72. ISSN 0022-3514. doi:10.1037/0022-3514.37.10.1660.
- Plous 1993, pp. 109–120
- Nisbett, Richard E.; Ross, Lee (1980). Human inference: strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall. pp. 115–118. ISBN 9780134450735.
- Kahneman, Daniel; Amos Tversky (July 1973). "On the Psychology of Prediction". Psychological Review. American Psychological Association. 80 (4): 237–51. ISSN 0033-295X. doi:10.1037/h0034747.
- Tversky, Amos; Kahneman, Daniel (1983). "Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment.". Psychological Review. 90 (4): 293–315. doi:10.1037/0033-295X.90.4.293. reprinted in Gilovich, Thomas; Griffin, Dale; Kahneman, Daniel, eds. (2002), Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge: Cambridge University Press, pp. 19–48, ISBN 9780521796798, OCLC 47364085
- Poundstone 2010, p. 89
- Tentori, K.; Bonini, N.; Osherson, D. (1 May 2004). "The conjunction fallacy: a misunderstanding about conjunction?". Cognitive Science. 28 (3): 467–477. doi:10.1016/j.cogsci.2004.01.001.
- Moro, Rodrigo (29 July 2008). "On the nature of the conjunction fallacy". Synthese. 171 (1): 1–24. doi:10.1007/s11229-008-9377-8.
- Gigerenzer, Gerd (1991). "How to make cognitive illusions disappear: Beyond "heuristics and biases". European Review of Social Psychology. 2: 83–115. doi:10.1080/14792779143000033.
- Kunda 1999, pp. 70–71
- Kunda 1999, pp. 68–70
- Zukier, Henry (1982). "The dilution effect: The role of the correlation and the dispersion of predictor variables in the use of nondiagnostic information". Journal of Personality and Social Psychology. 43 (6): 1163–1174. doi:10.1037/0022-3514.43.6.1163.
- Kunda 1999, pp. 71–72
- Tversky, Amos; Kahneman, Daniel (1971). "Belief in the law of small numbers.". Psychological Bulletin. 76 (2): 105–110. doi:10.1037/h0031322. reprinted in Daniel Kahneman; Paul Slovic; Amos Tversky, eds. (1982). Judgment under uncertainty: heuristics and biases. Cambridge: Cambridge University Press. pp. 23–31. ISBN 9780521284141.
- Baron 2000, p. 235?
- Plous 1993, pp. 145–146
- Koehler & Harvey 2004, p. 99
- Mussweiler, Englich & Strack 2004, pp. 185–186,197
- Yudkowsky 2008, pp. 102–103
- Lichtenstein, Sarah; Fischoff, Baruch; Phillips, Lawrence D. (1982), "Calibration of probabilities: The state of the art to 1980", in Kahneman, Daniel; Slovic, Paul; Tversky, Amos, Judgment under uncertainty: Heuristics and biases, Cambridge University Press, pp. 306–334, ISBN 9780521284141
- Sutherland 2007, pp. 168–170
- Hastie & Dawes 2009, pp. 78–80
- George Loewenstein (2007), Exotic Preferences: Behavioral Economics and Human Motivation, Oxford University Press, pp. 284–285, ISBN 9780199257072
- Mussweiler, Englich & Strack 2004, p. 188
- Plous 1993, pp. 148–149
- Caverni, Jean-Paul; Péris, Jean-Luc (1990), "The Anchoring-Adjustment Heuristic in an 'Information-Rich, Real World Setting': Knowledge Assessment by Experts", in Caverni, Jean-Paul; Fabré, Jean-Marc; González, Michel, Cognitive biases, Elsevier, pp. 35–45, ISBN 9780444884138
- Mussweiler, Englich & Strack 2004, p. 183
- Finucane, M.L.; Alhakami, A.; Slovic, P.; Johnson, S.M. (January 2000). "The Affect Heuristic in Judgment of Risks and Benefits". Journal of Behavioral Decision Making. 13 (1): 1–17. doi:10.1002/(SICI)1099-0771(200001/03)13:1<1::AID-BDM333>3.0.CO;2-S.
- Keller, Carmen; Siegrist, Michael; Gutscher, Heinz (June 2006). "The Role of Affect and Availability Heuristics in Risk Analysis". Risk Analysis. 26 (3): 631–639. PMID 16834623. doi:10.1111/j.1539-6924.2006.00773.x.
- Kahneman, Daniel; Frederick, Shane (2002), "Representativeness Revisited: Attribute Substitution in Intuitive Judgment", in Gilovich, Thomas; Griffin, Dale; Kahneman, Daniel, Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge: Cambridge University Press, pp. 49–81, ISBN 9780521796798, OCLC 47364085
- Hardman 2009, pp. 13–16
- Shah, Anuj K.; Daniel M. Oppenheimer (March 2008). "Heuristics Made Easy: An Effort-Reduction Framework". Psychological Bulletin. American Psychological Association. 134 (2): 207–222. ISSN 1939-1455. PMID 18298269. doi:10.1037/0033-2909.134.2.207.
- Newell, Benjamin R.; David A. Lagnado; David R. Shanks (2007). Straight choices: the psychology of decision making. Routledge. pp. 71–74. ISBN 9781841695884.
- Kahneman, Daniel (December 2003). "Maps of Bounded Rationality: Psychology for Behavioral Economics" (PDF). American Economic Review. American Economic Association. 93 (5): 1449–1475. ISSN 0002-8282. doi:10.1257/000282803322655392.
- Morewedge, Carey K.; Kahneman, Daniel (October 2010). "Associative processes in intuitive judgment". Trends in Cognitive Sciences. 14 (10): 435–440. PMID 20696611. doi:10.1016/j.tics.2010.07.004.
- Kahneman, Daniel (2007). "Short Course in Thinking About Thinking". Edge.org. Edge Foundation. Retrieved 2009-06-03.
- Gerd Gigerenzer, Peter M. Todd, and the ABC Research Group (1999). Simple Heuristics That Make Us Smart. Oxford, UK, Oxford University Press. ISBN 0-19-514381-7
- van der Linden, S. (2011). "Speed Dating and Decision Making: Why Less is More". Scientific American – Mind Matters. Nature. Retrieved 2013-11-14.
- Thorngate, Warren (1980). "Efficient decision heuristics". Behavioral Science. 25 (3): 219–225. doi:10.1002/bs.3830250306.
- Monin, Benoît; Daniel M. Oppenheimer (2005), "Correlated Averages vs. Averaged Correlations: Demonstrating the Warm Glow Heuristic Beyond Aggregation" (PDF), Social Cognition, 23 (3): 257–278, ISSN 0278-016X, doi:10.1521/soco.2005.23.3.257
- Sunstein, Cass R. (2005). "Moral heuristics". Behavioral and Brain Sciences. Cambridge University Press. 28 (4): 531–542. ISSN 0140-525X. PMID 16209802. doi:10.1017/S0140525X05000099.
- Sunstein, Cass R. (2009). "Some Effects of Moral Indignation on Law" (PDF). Vermont Law Review. Vermont Law School. 33 (3): 405–434. SSRN . Archived from the original (PDF) on November 29, 2014. Retrieved 2009-09-15.
- Baron, Jonathan (2000), Thinking and deciding (3rd ed.), New York: Cambridge University Press, ISBN 0521650305, OCLC 316403966
- Fiedler, Klaus; von Sydow, Momme (2015), "Heuristics and Biases: Beyond Tversky and Kahneman's (1974) Judgment under Uncertainty" (PDF), in Eysenck, Michael W.; Groome, David, Cognitive Psychology: Revising the Classical Studies, Sage, London, pp. 146–161, ISBN 978-1-4462-9447-5
- Gigerenzer, G. (1996), "On narrow norms and vague heuristics: A reply to Kahneman and Tversky. Heuristic", Psychological Review, 103 (3): 592–596, doi:10.1037/0033-295X.103.3.592
- Gilovich, Thomas; Griffin, Dale W. (2002), "Introduction – Heuristics and Biases: Then and Now", in Gilovich, Thomas; Griffin, Dale W.; Kahneman, Daniel, Heuristics and biases: the psychology of intuitive judgement, Cambridge University Press, pp. 1–18, ISBN 9780521796798
- Hardman, David (2009), Judgment and decision making: psychological perspectives, Wiley-Blackwell, ISBN 9781405123983
- Hastie, Reid; Dawes, Robyn M. (29 September 2009), Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making, SAGE, ISBN 9781412959032
- Koehler, Derek J.; Harvey, Nigel (2004), Blackwell handbook of judgment and decision making, Wiley-Blackwell, ISBN 9781405107464
- Kunda, Ziva (1999), Social Cognition: Making Sense of People, MIT Press, ISBN 978-0-262-61143-5, OCLC 40618974
- Mussweiler, Thomas; Englich, Birte; Strack, Fritz (2004), "Anchoring effect", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 183–200, ISBN 9781841693514, OCLC 55124398
- Plous, Scott (1993), The Psychology of Judgment and Decision Making, McGraw-Hill, ISBN 9780070504776, OCLC 26931106
- Poundstone, William (2010), Priceless: the myth of fair value (and how to take advantage of it), Hill and Wang, ISBN 9780809094691
- Reber, Rolf (2004), "Availability", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 147–163, ISBN 9781841693514, OCLC 55124398
- Sutherland, Stuart (2007), Irrationality (2nd ed.), London: Pinter and Martin, ISBN 9781905177073, OCLC 72151566
- Teigen, Karl Halvor (2004), "Judgements by representativeness", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 165–182, ISBN 9781841693514, OCLC 55124398
- Tversky, Amos; Kahneman, Daniel (1974), "Judgments Under Uncertainty: Heuristics and Biases" (PDF), Science, 185 (4157): 1124–1131, PMID 17835457, doi:10.1126/science.185.4157.1124 reprinted in Daniel Kahneman; Paul Slovic; Amos Tversky, eds. (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press. pp. 3–20. ISBN 9780521284141.
- Yudkowsky, Eliezer (2008), "Cognitive biases potentially affecting judgment of global risks", in Bostrom, Nick; Ćirković, Milan M., Global catastrophic risks, Oxford University Press, pp. 91–129, ISBN 9780198570509
- Slovic, Paul; Melissa Finucane; Ellen Peters; Donald G. MacGregor (2002). "The Affect Heuristic". In Thomas Gilovich; Dale Griffin; Daniel Kahneman. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press. pp. 397–420. ISBN 9780521796798.
Imagine a world hot enough to turn lead into a puddle, where the atmospheric pressure can crush a nuclear-powered submarine. Now imagine sending a rover to explore that world.
Venus, ancient sister of Earth with a planetary environment just this side of hellish, has been visited by a handful of probes since the early days of space flight. Of the many missions to our celestial neighbor, only about a dozen have made contact with the surface of the planet. The longest-lived landers only managed to function for a couple of hours before succumbing to the relentlessly oppressive heat and pressure.
Despite the punishing conditions, previous missions to Venus have nevertheless delivered important information, such as:
NASA’s Jet Propulsion Laboratory (JPL), under a grant from the NASA Innovative Advanced Concepts (NIAC) program, is studying the Automaton Rover for Extreme Environments (AREE), a mission concept to return to the surface of Venus, something not accomplished since the Soviet Vega 2 landed in 1985.
Current, state-of-the-art, military-grade electronics fail at approximately 125°C, so mission scientists at JPL have taken their design cues from a different source: automatons and clockwork operations. Powered by wind, the AREE mission concept is intended to spend months, not minutes, exploring the landscape of our sister world. Built of advanced alloys, AREE will be able to collect valuable long-term longitudinal scientific data utilizing both indirect and direct sensors.
As the rover explores the surface of Venus, collecting and relaying data to an orbiter overhead, it must also detect obstacles in its path like rocks, crevices, and steep terrain. To assist AREE on its groundbreaking mission concept, JPL needs an equally groundbreaking obstacle avoidance sensor, one that does not rely on vulnerable electronic systems. For that reason, JPL is turning to the global community of innovators and inventors to design this novel avoidance sensor for AREE. JPL is interested in all approaches, regardless of technical maturity.
This sensor would be the primary mechanism by which the potential rover detects and navigates through dangerous situations during its operational life. By sensing obstacles such as rocks, crevices, and inclines, the rover could navigate around the obstruction, enabling it to continue exploring the surface of Venus and collecting more observational data.
JPL has issued this Challenge to the global community because the rover must have the ability to successfully navigate in such a demanding environment in order to qualify for additional developmental funding. While the mission to the surface of Venus may be years off, the development of a suitably robust rover sensor will strengthen the case for returning to Venus with a rover, something that has never been attempted before.
What You Can Do To Cause A Breakthrough
Using ancient approaches and modern materials science, design a mechanical obstacle avoidance sensor for use on an off-world planetary rover.
The goal of this single-stage challenge is to submit a design for a fully mechanical sensor that meets the performance criteria listed below and can be incorporated into the existing AREE model – competitors do not need to demonstrate how their sensor will connect to the rover, only that their design can provide the desired functionality.
Below are several profile images of the rover, as currently envisioned by the design team:
The actuator in any proposed sensor must be able to move a 6 cm diameter pin by a minimum of 3 cm with 25 N of force when an obstacle is encountered. This, in turn, will then trigger the rover to back off the obstacle and seek a new pathway forward.
The sensor must reliably respond when encountering:
To assist competitors, the following image demonstrates possible scenarios that the rover may encounter during its mission:
Additional performance criteria:
The Challenge offers up to $30,000 USD in prize money.
In addition to the above cash prizes, competitors may also be considered for the following non-monetary awards:
Open to submissions February 18, 2020
Submission deadline May 29, 2020 @ 5pm ET
Judging June 1 to July 2, 2020
Winners Announced July 6, 2020
To be eligible for an award, your proposal must, at minimum:
| Criterion | Question | Weight |
|---|---|---|
| A. Likelihood of Successful Operation | Is the concept likely to meet the challenge obstacle avoidance requirements? | 55% total |
| A.1 | Does this submission include a compelling diagram/schematic of the proposed sensor? | 15% |
| A.2 | Does this submission include appropriate justification or citations for the proposed sensor? | 5% |
| A.3 | Would the system detect rocks/holes/valleys greater than 0.35 meters tall/deep? | 10% |
| A.4 | Would the system detect slopes or combinations of slopes/obstacles that could result in an angle of greater than 30 degrees? | 10% |
| A.5 | Would the system ignore rocks/holes/valleys less than 0.3 meters tall/deep? | 10% |
| A.6 | Would the design produce a 3 cm displacement of a shaft/pin with 25 N of force? | 5% |
| | Is the design compatible with the current rover architecture? | |
| B. Is the concept feasible to construct? | | |
| | Is the design something that could actually be constructed? | |
| | Are there any practical limitations to implementing the design? | |
| C. Can the concept be adjusted to work in Venus conditions? | | |
| | Would the concept, if built out of the right materials, operate at Venus’s high temperatures? | |
| | Would the concept operate at Venus pressure? | |

(An illustrative encoding of the numeric thresholds in criteria A.3–A.6 is sketched below.)
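The following is an illustrative sketch, written by the editor and not part of the official challenge materials, that simply encodes the numeric thresholds from criteria A.3–A.6 as a quick design sanity check; the function names and the simplified one-number obstacle geometry are assumptions.

```python
# Hypothetical sketch of the AREE obstacle-avoidance thresholds (criteria A.3-A.6).
# Not part of the official challenge materials; names and geometry are illustrative.

MUST_DETECT_M = 0.35   # rocks/holes/valleys at least this tall/deep must trigger the sensor
MUST_IGNORE_M = 0.30   # features smaller than this must NOT trigger the sensor
MAX_SLOPE_DEG = 30.0   # slopes (or slope/obstacle combinations) beyond this must trigger
MIN_STROKE_M = 0.03    # required pin/shaft displacement (3 cm)
MIN_FORCE_N = 25.0     # required actuation force (25 N)

def must_trigger(feature_size_m: float, slope_deg: float) -> bool:
    """True if the sensor is required to signal an obstacle for this terrain feature."""
    return feature_size_m >= MUST_DETECT_M or slope_deg > MAX_SLOPE_DEG

def must_stay_quiet(feature_size_m: float, slope_deg: float) -> bool:
    """True if the sensor is required to ignore this terrain feature."""
    # Behaviour between 0.30 m and 0.35 m is not specified by the criteria.
    return feature_size_m < MUST_IGNORE_M and slope_deg <= MAX_SLOPE_DEG

def actuation_ok(stroke_m: float, force_n: float) -> bool:
    """Check the mechanical output requirement: at least 3 cm of travel at 25 N."""
    return stroke_m >= MIN_STROKE_M and force_n >= MIN_FORCE_N

print(must_trigger(0.40, 5.0))      # True: a 0.40 m rock must be detected
print(must_stay_quiet(0.20, 10.0))  # True: a 0.20 m rock on a gentle slope must be ignored
print(actuation_ok(0.03, 25.0))     # True: meets the displacement/force requirement
```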
NOTE: Competitors are encouraged to present citations (or other relevant supporting information) to bolster the case for their design’s suitability for this application. Citations may be made inline with text or may be included as a piece of supporting documentation.
NOTE: Responses should include a schematic or diagram of their proposed avoidance sensor design. The diagram should be attached as a supporting document. Acceptable file formats include: Word, PDF, or JPEG. Competitors wishing to include CAD files may do so: 2D CAD files can be shared as a PDF; 3D CAD files should be shared as a Parasolid .x_t file.
You may submit multiple solutions.
The Prize is open to anyone age 18 or older participating as an individual or as a team. Individual competitors and teams may originate from any country, as long as United States federal sanctions do not prohibit participation (see: https://www.treasury.gov/resource-center/sanctions/Programs/Pages/Programs.aspx). If you are a NASA employee, a Government contractor, or employed by a Government Contractor, your participation in this challenge may be restricted.
Submissions must be made in English. All challenge-related communication will be in English.
No specific qualifications or expertise in the field of mechanical sensors is required. NASA encourages outside individuals and non-expert teams to compete and propose new solutions.
To be eligible to compete, you must comply with all the terms of the challenge as defined in the Challenge-Specific Agreement.
Innovators who are awarded a prize for their submission must agree to grant NASA a royalty free, non-exclusive, irrevocable, world-wide license in all Intellectual Property demonstrated by the winning/awarded submissions. See the Challenge-Specific Agreement for complete details.
Registration and Submissions:
Submissions must be made online (only), via upload to the HeroX.com website, on or before 5:00 pm ET on May 29, 2020. No late submissions will be accepted.
Selection of Winners:
Based on the winning criteria, prizes will be awarded per the weighted Judging Criteria section above.
The determination of the winners will be made by HeroX based on evaluation by relevant NASA specialists. |
This Level 2 guided reader illustrates examples of patterns found in different foods. Students will develop word recognition and reading skills while learning to identify how repeating shapes, colors, or lines form a pattern.
This book offers readers insight into solving length word problems. Designed to support the Common Core State Standards, this title includes strategies such as using drawings, symbols, and number lines to solve problems. Real-world examples and engaging text make learning meaningful to young readers.
Word Problems: Mass and Volume uses an engaging narrative and authentic, real-world problems to teach readers strategies to solve one-step word problems involving mass and volume. The text models the problem-solving process for readers and provides hands-on opportunities for readers to apply their own problem-solving skills. Readers will discover that there is often more than one way to solve a problem.
This engaging title introduces young readers to the concept of equal sets. Decodable text and image support help readers identify familiar things that come in pairs, sets of threes, fours, fives, and more. The book uses relatable objects, such as four legs on a table and five fingers on a hand, to teach the concept of sets. Readers are also encouraged to find their own number set examples in their everyday lives.
This informative book introduces readers to the concept of perimeter using real-world examples. Through modeling, readers will learn that perimeter is calculated by adding together the length of each side of a figure. This title also explains how to measure the perimeters of familiar places, such as classrooms and football fields, by using addition and multiplication.
Trains take you where you need to go, and they move food and other goods from one place to another. Trains are full of shapes, too, which make them fun to look at. You can find circles on the train's wheels and rectangles on the railroad tracks and railroad signs. There are even shapes at the train station. Some ceilings have circles or triangles and some floors have squares. Look inside. Can you find any more?
Let’s make a Christmas card. I can make a round snowman, a pointy Christmas tree, and square Christmas presents. Here is my Christmas card.
Level 1 guided reader that introduces young students to the concept of weights while supporting the development of reading skills.
What things at the beach have stripes?
Shapes at School takes readers through a day at school, pointing out the many familiar shapes they encounter in the classroom, in the lunchroom, and on the playground. Vibrant, full-color photos and carefully leveled text engage emergent readers as they hunt for shapes at school. A labeled diagram helps readers identify shapes in a classroom, while a picture glossary reinforces new vocabulary. Children can learn more about shapes online using our safe search engine that provides relevant, age-appropriate websites. Shapes at School also features reading tips for teachers and parents, a table of contents, and an index.
Informs readers about different types of patterns, such as an ABAB pattern, through simple text, photographs, and matching activities. Additional features to aid comprehension include a phonetic glossary, an index, an answer key, sources for further research, and an introduction to the author.
Introduces readers to different shapes through simple text, photographs, and matching activities. Additional features to aid comprehension include a phonetic glossary, an index, an answer key, sources for further research, and an introduction to the author.
Look at the different animals in this e-book. Is one animal taller than another? Flip through the pages and describe what you see. Simple phrases, exact text-to-image relationships, large font, and vibrant photographs are flowed beautifully throughout this e-book to engage students from start to finish. Students will be introduced to basic measurement and data concepts with this e-book that aligns to mathematics standards.
What makes a group? How do you know? Look at what is the same to help you! Use this enlightening, nonfiction e-book to introduce students to classifying and counting. Simple phrases, exact text-to-image relationships, large font, and vibrant photographs are flowed beautifully throughout this e-book to engage students from start to finish. Students will be introduced to basic measurement and data concepts with this e-book that aligns to mathematics standards.
Learn how to sort like a scientist! Sort items by their look, smell, taste, touch, sound, and more! This dynamic science e-book will help kindergarten students make connections, categorize items, and identify similarities and differences. With a hands-on “Let's Do Science” lab activity that is aligned to the Next Generation Science Standards, this is a perfect tool to develop students' scientific practices and support STEM Education. Including a glossary and index, the helpful text features in this easy-to-read informational text support the development of content-area literacy while vibrant images keep readers engaged from cover to cover.
Learn how to sort with this engaging, colorful science e-book! Follow along as children play "I Spy" to identify objects by color, shape, and more. See how they sort a blue square by color, then by shape. This e-book includes more fun and easy examples to help guide young readers into understanding sorting and categorizing everyday items. With the help of vibrant images and easy-to-read text, students will be engaged from cover to cover! This e-book also includes instructions for an engaging science activity and practice problems to give students additional practice in sorting objects. A helpful glossary and index are also included for support.
Teach young students how to describe things with the help of this science reader! Describe familiar objects by their texture, temperature, shape, speed, size, age, and more! The easy-to-read text and vibrant images will keep young readers engaged from start to finish. This reader also includes instructions for a fun science activity and practice problems to give students additional practice in describing things. A helpful glossary and index are also included for support.
Beginning readers learn to recognize shapes such as circles, squares, and rectangles in this nonfiction reader that features simple text and bright, vivid images.
In this basic concept nonfiction book, bright photos and simple, informational text encourage beginning readers to compare sizes to find what's big and little in their world!
There are all sorts of ways to sort farm animals! This charming title teaches young readers how to recognize animals' different qualities and sort them into sets, familiarizing children with set theory, data analysis, and early STEM themes. With the help of familiar images, engaging "You Try It!" problems, and a glossary, children will be able to sort animals into many different categories--big or small, two-legged or four-legged, fast or slow!
There are all kinds of ways to sort wild animals! This fun title teaches young readers how to recognize animals' different qualities and sort them into sets, familiarizing children with set theory, data analysis, and early STEM themes. With the help of fun, familiar images, engaging "You Try It!" problems, and a glossary, children will be able to sort animals into many different categories--big or small, fast or slow!
Make patterns fun at recess time! This exciting title helps young readers recognize repeating patterns all around through helpful charts and familiar images of recess time. Children will better understand early STEM themes through the help of simple, applicable examples of patterns. This title will engage young readers with games and featured "You Try It!" problems!
Discover patterns in everyday games! This charming title helps young readers recognize repeating patterns in common games like checkers, cards, board games, and jacks. Children will enhance their understanding of patterns and early STEM themes with engaging examples and featured "You Try It" problems.
Learn geometry by taking a trip around town! This engaging title uses examples of town life to help young readers recognize basic shapes like circles, triangles, and rectangles. Vivid, familiar images of city life, engaging "You Try It!" problems, and a helpful glossary allow children to discover geometry all around them and improve their understanding of early STEM themes.
Learn about geometry right at home! This engaging title uses examples of household items like doors, napkins, and windows to help young readers recognize shapes like circles, triangles, and rectangles. These familiar images work in conjunction with engaging "You Try It!" problems and a helpful glossary to better children's understanding of geometry and early STEM concepts. |
6 Rules for Discussion and Debate in College Classes
You don't need a debate class to stage an educational faceoff; follow this framework to stage a student debate on any topic in your curriculum....
Instructions for Debate – Week 11 Debate
Rules, Forms & Manuals: you’ll find instructions here; for example, a student may not use a cutting from a work of literature the student has used before in National Speech & Debate events. Guidelines for Conducting a Debate: debate can be an effective instructional method for helping participants to present and evaluate positions clearly and logically. The best activity in the world will turn into a disappointing failure if students don't understand the instructions. Here are some practical tips for giving instructions.
ESL Cafe's Idea Cookbook Mini-Debate
Debate Formats: There are several different formats for debate practiced in high school and college debate leagues. Most of these formats share some general features. Instructions for in-class debates: The class has been divided into 6 teams (A–F) of 6 or 7 students. Each debate follows a point-counterpoint format.
Examples of Debate Up for Debate - Google Sites
Classroom debates help students learn through friendly competition, examine controversial topics, and “strengthen skills in the areas of leadership.” Teachers’ Instructions (1/2): for more fun English lesson worksheets, see The Great Balloon Debate lesson. Debates in the Classroom – Description: There's no debate about it! Debates are a great tool for engaging students and livening up classroom curriculum.
The Great Balloon Debate efl4U.com
Are you looking for classroom activities to get your students to use their critical thinking skills? Then you should try having a classroom debate. Mini-Debate: This is a good oral activity. Materials: debate topics – at least one for every two students. Instructions: Divide the class into two main groups.
Debate Competition Rules University of Windsor
The Middle School Public Debate Program has a YouTube Channel of example debates done by middle school students. The video linked to on the left is the 2006 final
4 Fast Debate Formats for the Classroom: hold quick debates in grades 7–12. During a debate, students take turns speaking in response to the arguments made by the other side. Guide to Judging – Times and Duties: to award a win and a loss in a given debate, you must give each student an individual score.
Voyager trailer brake controller instructions (Soup.io): Tekonsha Voyager electric brake controller, model 9030, new plug-and-play version; Australian version, made in the USA, free postage Australia-wide with Australia Post. The Voyager brake controller instructions: how to remove a Tekonsha Voyager control when having problems removing the brake control from the truck. - Cars & Trucks question. |
The cross product is a type of vector multiplication, defined only in three and seven dimensions, that outputs another vector. This operation, used almost exclusively in three dimensions, is useful for applications in physics and engineering. In this article, we will calculate the cross product of two three-dimensional vectors defined in Cartesian coordinates.
Method 1 of 2: Calculating the Cross Product
1. Consider two general three-dimensional vectors defined in Cartesian coordinates, a = a1 i + a2 j + a3 k and b = b1 i + b2 j + b3 k.
- Here, i, j, and k are unit vectors, and a1, a2, a3, b1, b2, and b3 are constants.
2. Set up the matrix. One of the easiest ways to compute a cross product is to set up the unit vectors with the two vectors in a matrix.
3. Calculate the determinant of the matrix. Below, we use cofactor expansion (expansion by minors).
- This vector is orthogonal to both a and b.
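The matrix and its expansion are not rendered in this copy of the article; for reference, the standard cofactor expansion described in step 3 is:

$$
\mathbf{a} \times \mathbf{b} =
\begin{vmatrix}
\mathbf{i} & \mathbf{j} & \mathbf{k} \\
a_1 & a_2 & a_3 \\
b_1 & b_2 & b_3
\end{vmatrix}
= (a_2 b_3 - a_3 b_2)\,\mathbf{i} - (a_1 b_3 - a_3 b_1)\,\mathbf{j} + (a_1 b_2 - a_2 b_1)\,\mathbf{k}
$$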
Method 2 of 2: Example
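The article's original worked example is not reproduced in this copy. As an illustrative stand-in (the specific vectors are arbitrary), the following short NumPy sketch computes a cross product by the cofactor expansion above and checks it against numpy.cross and the orthogonality property.

```python
import numpy as np

def cross_product(a, b):
    """Cross product of two 3D vectors via cofactor expansion."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return np.array([
        a2 * b3 - a3 * b2,     # i component
        -(a1 * b3 - a3 * b1),  # j component (note the minus sign from the expansion)
        a1 * b2 - a2 * b1,     # k component
    ])

a = np.array([2.0, 1.0, -1.0])
b = np.array([-3.0, 4.0, 1.0])

c = cross_product(a, b)
print(c)                            # [ 5.  1. 11.]
print(np.cross(a, b))               # same result from NumPy's built-in
print(np.dot(a, c), np.dot(b, c))   # both 0.0: c is orthogonal to a and b
```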
Question: How do I calculate the vector triple product?
Community Answer: Given vectors u, v, and w, the scalar triple product is u · (v × w). So, by order of operations, first find the cross product of v and w: set up a 3×3 determinant with the unit coordinate vectors (i, j, k) in the first row, v in the second row, and w in the third row, then evaluate the determinant (you'll get a three-dimensional vector). Then dot that with u to get a scalar. The dot product is commutative, so u · (v × w) = (v × w) · u. Interestingly, the absolute value of the scalar triple product yields the volume of the parallelepiped with three edges given by the vectors u, v, and w.
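To make the answer above concrete, here is a small sketch (with arbitrary example vectors) showing that u · (v × w) equals the determinant of the matrix whose rows are u, v, and w, and that its absolute value is the parallelepiped volume.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 4.0])
w = np.array([5.0, 6.0, 0.0])

triple = np.dot(u, np.cross(v, w))          # scalar triple product u . (v x w)
det = np.linalg.det(np.vstack([u, v, w]))   # determinant with u, v, w as rows

print(triple, det)   # equal up to floating-point rounding (both 1 here)
print(abs(triple))   # volume of the parallelepiped spanned by u, v, and w
```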
Question: What is the vector analog of distance?
Community Answer: The norm, sometimes also called magnitude, generalizes distance to vectors. The norm is denoted with vertical bars, like absolute values. For example, |(3, -4)| = 5, and |(1, 1, 1, 1)| = 2.
The cross product of a vector with any multiple of itself is 0. This is most easily seen by setting up the matrix: the second and third rows are linearly dependent, since you can write one as a multiple of the other, so the determinant of the matrix, and therefore the cross product, is 0.
One can show that the vector produced by the cross product of two vectors a and b is orthogonal to both a and b. To do so, compute the dot products a · (a × b) and b · (a × b). These products are called triple products; since the operation on the outside is a dot product, these are scalar triple products.
- These triple products follow something known as cyclic permutation - that is, if you cycle the positions of the three vectors without changing their cyclic order, the expressions are equivalent. Then, we can rewrite each product so that a vector is crossed with itself.
- However, we know that the cross product of a vector with itself is 0. Since both dot products therefore end up being 0, the cross product vector is orthogonal to both a and b. |
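Written out with the missing symbols restored, the argument in the two tips above is the standard one:

$$
\mathbf{a} \cdot (\mathbf{a} \times \mathbf{b})
= \mathbf{b} \cdot (\mathbf{a} \times \mathbf{a})
= \mathbf{b} \cdot \mathbf{0} = 0,
\qquad
\mathbf{b} \cdot (\mathbf{a} \times \mathbf{b})
= \mathbf{a} \cdot (\mathbf{b} \times \mathbf{b})
= \mathbf{a} \cdot \mathbf{0} = 0
$$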
The information on this page is courtesy of nih.gov
Date reviewed: 04/18/2001
Editorial changes made: 05/13/2002
Mesothelioma: Questions and Answers
Mesothelioma is a rare form of cancer in which malignant (cancerous) cells are found in the mesothelium, a protective sac that covers most of the body's internal organs. Most people who develop mesothelioma have worked on jobs where they inhaled asbestos particles.
What is the mesothelium?
The mesothelium is a membrane that covers and protects most of the internal organs of the body. It is composed of two layers of cells: One layer immediately surrounds the organ; the other forms a sac around it. The mesothelium produces a lubricating fluid that is released between these layers, allowing moving organs (such as the beating heart and the expanding and contracting lungs) to glide easily against adjacent structures.
The mesothelium has different names, depending on its location in the body. The peritoneum is the mesothelial tissue that covers most of the organs in the abdominal cavity. The pleura is the membrane that surrounds the lungs and lines the wall of the chest cavity. The pericardium covers and protects the heart. The mesothelial tissue surrounding the male internal reproductive organs is called the tunica vaginalis testis. The tunica serosa uteri covers the internal reproductive organs in women.
What is mesothelioma?
Mesothelioma (cancer of the mesothelium) is a disease in which cells of the mesothelium become abnormal and divide without control or order. They can invade and damage nearby tissues and organs. Cancer cells can also metastasize (spread) from their original site to other parts of the body. Most cases of mesothelioma begin in the pleura or peritoneum.
How common is mesothelioma?
Although reported incidence rates have increased in the past 20 years, mesothelioma is still a relatively rare cancer. About 2,000 new cases of mesothelioma are diagnosed in the United States each year. Mesothelioma occurs more often in men than in women and risk increases with age, but this disease can appear in either men or women at any age.
What are the risk factors for mesothelioma?
Working with asbestos is the major risk factor for mesothelioma. A history of asbestos exposure at work is reported in about 70 percent to 80 percent of all cases. However, mesothelioma has been reported in some individuals without any known exposure to asbestos.
Asbestos is the name of a group of minerals that occur naturally as masses of strong, flexible fibers that can be separated into thin threads and woven. Asbestos has been widely used in many industrial products, including cement, brake linings, roof shingles, flooring products, textiles, and insulation. If tiny asbestos particles float in the air, especially during the manufacturing process, they may be inhaled or swallowed, and can cause serious health problems. In addition to mesothelioma, exposure to asbestos increases the risk of lung cancer, asbestosis (a noncancerous, chronic lung ailment), and other cancers, such as those of the larynx and kidney.
Smoking does not appear to increase the risk of mesothelioma. However, the combination of smoking and asbestos exposure significantly increases a person's risk of developing cancer of the air passageways in the lung.
Who is at increased risk for developing mesothelioma?
Asbestos has been mined and used commercially since the late 1800s. Its use greatly increased during World War II. Since the early 1940s, millions of American workers have been exposed to asbestos dust. Initially, the risks associated with asbestos exposure were not known. However, an increased risk of developing mesothelioma was later found among shipyard workers, people who work in asbestos mines and mills, producers of asbestos products, workers in the heating and construction industries, and other tradespeople. Today, the U.S. Occupational Safety and Health Administration (OSHA) sets limits for acceptable levels of asbestos exposure in the workplace. People who work with asbestos wear personal protective equipment to lower their risk of exposure.
The risk of asbestos-related disease increases with heavier exposure to asbestos and longer exposure time. However, some individuals with only brief exposures have developed mesothelioma. On the other hand, not all workers who are heavily exposed develop asbestos-related diseases.
There is some evidence that family members and others living with asbestos workers have an increased risk of developing mesothelioma, and possibly other asbestos-related diseases. This risk may be the result of exposure to asbestos dust brought home on the clothing and hair of asbestos workers. To reduce the chance of exposing family members to asbestos fibers, asbestos workers are usually required to shower and change their clothing before leaving the workplace.
What are the symptoms of mesothelioma?
Symptoms of mesothelioma may not appear until 30 to 50 years after exposure to asbestos. Shortness of breath and pain in the chest due to an accumulation of fluid in the pleura are often symptoms of pleural mesothelioma. Symptoms of peritoneal mesothelioma include weight loss and abdominal pain and swelling due to a buildup of fluid in the abdomen. Other symptoms of peritoneal mesothelioma may include bowel obstruction, blood clotting abnormalities, anemia, and fever. If the cancer has spread beyond the mesothelium to other parts of the body, symptoms may include pain, trouble swallowing, or swelling of the neck or face.
These symptoms may be caused by mesothelioma or by other, less serious conditions. It is important to see a doctor about any of these symptoms. Only a doctor can make a diagnosis.
How is mesothelioma diagnosed?
Diagnosing mesothelioma is often difficult, because the symptoms are similar to those of a number of other conditions. Diagnosis begins with a review of the patient's medical history, including any history of asbestos exposure. A complete physical examination may be performed, including x-rays of the chest or abdomen and lung function tests. A CT (or CAT) scan or an MRI may also be useful. A CT scan is a series of detailed pictures of areas inside the body created by a computer linked to an x-ray machine. In an MRI, a powerful magnet linked to a computer is used to make detailed pictures of areas inside the body. These pictures are viewed on a monitor and can also be printed.
A biopsy is needed to confirm a diagnosis of mesothelioma. In a biopsy, a surgeon or a medical oncologist (a doctor who specializes in diagnosing and treating cancer) removes a sample of tissue for examination under a microscope by a pathologist. A biopsy may be done in different ways, depending on where the abnormal area is located. If the cancer is in the chest, the doctor may perform a thoracoscopy. In this procedure, the doctor makes a small cut through the chest wall and puts a thin, lighted tube called a thoracoscope into the chest between two ribs. Thoracoscopy allows the doctor to look inside the chest and obtain tissue samples. If the cancer is in the abdomen, the doctor may perform a peritoneoscopy. To obtain tissue for examination, the doctor makes a small opening in the abdomen and inserts a special instrument called a peritoneoscope into the abdominal cavity. If these procedures do not yield enough tissue, more extensive diagnostic surgery may be necessary.
If the diagnosis is mesothelioma, the doctor will want to learn the stage (or extent) of the disease. Staging involves more tests in a careful attempt to find out whether the cancer has spread and, if so, to which parts of the body. Knowing the stage of the disease helps the doctor plan treatment.
Mesothelioma is described as localized if the cancer is found only on the membrane surface where it originated. It is classified as advanced if it has spread beyond the original membrane surface to other parts of the body, such as the lymph nodes, lungs, chest wall, or abdominal organs.
How is mesothelioma treated?
Treatment for mesothelioma depends on the location of the cancer, the stage of the disease, and the patient's age and general health. Standard treatment options include surgery, radiation therapy, and chemotherapy. Sometimes, these treatments are combined.
Surgery is a common treatment for mesothelioma. The doctor may remove part of the lining of the chest or abdomen and some of the tissue around it. For cancer of the pleura (pleural mesothelioma), a lung may be removed in an operation called a pneumonectomy. Sometimes part of the diaphragm, the muscle below the lungs that helps with breathing, is also removed.
Radiation therapy, also called radiotherapy, involves the use of high-energy rays to kill cancer cells and shrink tumors. Radiation therapy affects the cancer cells only in the treated area. The radiation may come from a machine (external radiation) or from putting materials that produce radiation through thin plastic tubes into the area where the cancer cells are found (internal radiation therapy).
Chemotherapy is the use of anticancer drugs to kill cancer cells throughout the body. Most drugs used to treat mesothelioma are given by injection into a vein (intravenous, or IV). Doctors are also studying the effectiveness of putting chemotherapy directly into the chest or abdomen (intracavitary chemotherapy).
To relieve symptoms and control pain, the doctor may use a needle or a thin tube to drain fluid that has built up in the chest or abdomen. The procedure for removing fluid from the chest is called thoracentesis. Removal of fluid from the abdomen is called paracentesis. Drugs may be given through a tube in the chest to prevent more fluid from accumulating. Radiation therapy and surgery may also be helpful in relieving symptoms.
Are new treatments for mesothelioma being studied?
Yes. Because mesothelioma is very hard to control, the National Cancer Institute (NCI) is sponsoring clinical trials (research studies with people) that are designed to find new treatments and better ways to use current treatments. Before any new treatment can be recommended for general use, doctors conduct clinical trials to find out whether the treatment is safe for patients and effective against the disease. Participation in clinical trials is an important treatment option for many patients with mesothelioma.
People interested in taking part in a clinical trial should talk with their doctor. Information about clinical trials is available from the Cancer Information Service (CIS) (see below) at 1–800–4–CANCER. Information specialists at the CIS use PDQ®, NCI's cancer information database, to identify and provide detailed information about specific ongoing clinical trials. Patients also have the option of searching for clinical trials on their own. The clinical trials page on the NCI's http://www.cancer.gov Web site, located at http://www.cancer.gov/clinical_trials on the Internet, provides general information about clinical trials and links to PDQ.
People considering clinical trials may be interested in the NCI booklet Taking Part in Clinical Trials: What Cancer Patients Need To Know. This booklet describes how research studies are carried out and explains their possible benefits and risks. The booklet is available by calling the CIS, or from the NCI Publications Locator Web site at http://www.cancer.gov/publications on the Internet.
# # #
Sources of National Cancer Institute Information
Cancer Information Service
Toll-free: 1–800–4–CANCER (1–800–422–6237)
TTY (for deaf and hard of hearing callers): 1–800–332–8615
Use http://www.cancer.gov to reach NCI's Web site.
Cancer Information Specialists offer online assistance through the LiveHelp link on the NCI's Web site. |
What Is the History of Recall Elections?
The recall has always been at the forefront of a fundamental question about the role of an elected official, namely whether the official should act as a trustee and vote his own opinion or perform as a delegate and vote according to the wishes of his constituency. This long-running debate continues to this day with criticism of poll-driven politicians. This clash of ideologies was much in evidence during the debate about the recall's place in the new U.S. Constitution.
The actual origins of the recall are shrouded in conjecture. Its modern-day creator, Dr. John Randolph Haynes, claimed that it was "derived historically from Greek and Latin sources...." However, the authors of many of the works on the practice cite Haynes as expropriating the idea from the Swiss.
While the first instance of the recall can be found in the laws of the General Court of the Massachusetts Bay Colony of 1631, and again in the Massachusetts Charter of 1691, the recall gained a firm footing in American politics with the democratic ideals that burst forth from the American Revolution. After declaring their independence, 11 of the 13 colonies wrote new constitutions, and many of these documents showed the new spirit of democracy. They specifically spelled out the laws in their constitutions, a sharp departure from the unwritten British constitution. Most lessened the power of the executive and strengthened the legislature. Some opened up the right to vote to a larger portion of the population. And a few states wrote the recall into law as a method of controlling their elected representatives.
The states which adopted the recall were mainly concerned with the power of the representatives who served the states in the national government's congress. Unlike its modern-day counterpart, the seventeenth- and eighteenth-century versions of the recall involved the removal of an official by another elected body, such as a state legislature recalling its United States senator. While this form provides a different relationship between the elected official and the general population, the principles and the debates that engulfed the issue had not substantially changed.
The Revolution's success led the states to form a government under the Articles of Confederation, which were finally ratified in 1781. The government under the Articles was weak and at the mercy of the individual states. Unsurprisingly, the recall was included in the Articles of Confederation. According to recall proponent and New York delegate John Lansing, the recall was never exercised by any of the states throughout the brief history of the Confederation.
As the Articles of Confederation government proved a failure in leading the new country, some of the brightest lights in America met in Philadelphia in 1787 and drafted the new Constitution. There is a plethora of materials on the Constitutional Convention, the debates surrounding its adoption, and its eventual impact. However, the issue of the recall has been mostly ignored, despite the fact that the idea was discussed. It was proposed by Edmund Randolph in his presentation of the Virginia Plan on May 29. The plan would have allowed the recall of the members of the first house of the legislature, who were directly elected by the people. On June 12, the convention passed Charles Pinckney's motion to strike out the recall. The only other mention of the procedure in Madison's notes on the convention was a speech by future Vice President Elbridge Gerry exploring how the convention exceeded its mandate.
The argument for the recall was a strong component of the anti-federalist attack. The American Revolution was in many ways an attack on the existing power structure, or as Carl Becker said it was not just about home rule, but who rules at home. The new Constitution, in the view of many leading anti-federalists, was a conservative reaction to the American Revolution. One of the major opponents of the Constitution, Luther Martin, stressed the absence of a recall for senators, and the freedom from popular control that this absence represented, as a reason to reject the document. Martin was opposed to granting senators, who were elected by the state legislators and were seen as representing the more traditional aristocratic population, a large degree of freedom. He feared that senators would disregard their position as delegates of the people, and be free to work against the interests of their own states. Martin said: "Thus, sir, for six years, the senators are rendered totally and absolutely independent of their states, of whom they ought to be the representatives, without any bond or tie between them."
The idea of tightly binding the senators to their states was strongly opposed by the Federalists, most notably Alexander Hamilton. The topic gained new life when the Constitution was sent to the states to ratify. Each state elected a ratifying convention to approve or disapprove of the Constitution. The votes of nine of the thirteen states were required for ratification. The topic took up several days of debate in the New York Ratifying Convention and was also proposed in the Massachusetts Convention. Using arguments that opponents of the recall would still be making more than a century later, Hamilton feared that the recall "will render the senator a slave to all the capricious humors among the people."
In New York's Ratifying Convention on June 24, 1788, Gilbert Livingston introduced a measure calling for the recall of senators by state legislatures. Livingston was concerned that states would have "little or no check" on senators who have a six year term of office. John Lansing, an opponent of the new Constitution, said in words that echoed more than a century later, "they (the Senators) will lose their respect for the power from whom they receive their existence, and consequently disregard the great object for which they are instituted."
Hamilton denied the premise that the state legislatures would be more in tune with the will of the people, and argued that the recall would prevent the senators from being able to make difficult decisions. Hamilton said, "in whatever body the power of recall is vested, the senator will perpetually feel himself in such a state of vassalage and dependence, that he never can possess that firmness which is necessary to the discharge of his great duty to the Union."
By the time the New York Convention finally ratified the Constitution, enough states had ratified to form the government. However, there were still attempts to bring up various amendments to the new Constitution. Rhode Island, the last state to ratify in 1790, proposed 21 amendments, including granting state legislatures the power to recall their federal senators. However, the recall did not have the backing to continue as a major topic of debate after the failure of the anti-federalists. The recall of senators came up twice more, as the legislature in Virginia attempted to bring the topic up as a constitutional amendment in 1803 and 1808. The 1808 amendment was met by resolutions of disapproval from six states.
The recall received a considerable degree of support in America's early years. However, its proposed use as a weapon against the power of federal government officers failed to generate sufficient excitement to push its way through to adoption. With the Federalists' victory, the recall went into hibernation. It was not until the early part of the twentieth century, when the country was faced with a very different set of circumstances, that the recall reemerged as a viable political option. By that time, the field of debate had shifted to the state level, with the people themselves possessing the power of the recall. But the focus of the debates and the nature of the arguments had remained the same.
Eric D Frank - 2/2/2007
Do you by any chance know which of the 13 colonies actually wrote a recall provision in to their original constitutions?
JOHN - 1/18/2004
DOES NEW YORK STATE HAVE A RECALL PROVISION AND IF SO DOES IT APPLY TO ELECTED OFFICIAL (EX. LOCAL TO STATE LEVEL). WHAT ARE THE LAWS AND RULES THAT APPLY.
Rod Farmer - 10/30/2003
I enjoyed your web site. In case you are interested, I wrote the following article:
Farmer, Rod. "Power to the People: The Progressive Movement for the Recall, 1890s-1920," The New England Journal of History,
Winter, 2001, Vol. 57, No. 2, pp. 59-83.
bob adams - 10/8/2003
Which states have this provision? Does it apply
uniformly to all elected officials?
G Bozeman - 10/7/2003
The Iroquois practiced a form of the recall in the Iroquois Confederacy and the Six Nations. Traditionally, the women of each clan selected a male member of the clan as a representative. If this representative failed to perform his job to the benefit of his clan, he was "fired" by the women of the clan and another man was selected to replace him.
I find it interesting that, here in the most successful democracy in the world, the act of recalling an errant elected official has been practiced so few times. As a Political Science Teacher, I feel it is imperative that I stress the mechanisms by which "We The People" must maintain our authority over our government.
Joshua Spivak - 10/5/2003
In 1921, Lynn Frazier, the Governor of North Dakota was successfully recalled, along with the Attorney General and the Commissioner of Agriculture and Labor. It didn't hurt Frazier's career that much: He was elected to the U.S. Senate 2 years later.
In addition, the Governor of Arizona, Evan Mecham, was about to face a recall election in 1988, but he was impeached before the vote took place.
D. R. Taylor - 10/5/2003
I have read and heard several times that only one other governor in US history has faced a recall election. But none of the news clips or articles has stated who the governor was or when the event occurred.
Who was the first US governor to face a recall election?
Joshua Spivak - 10/3/2003
I'm not that well informed on the Omaha platform or on the Populist Party. However, I do know that the Socialist Labor and the Populist Party included the "Imperative Mandate," which was an earlier version of the recall, in their party platforms in the 1890s. Before John Randolph Haynes successful championing of the recall, many of the direct legislationists did not want to include on the same level as the intiative and referendum. They felt that it would be construed as a personal attack on an elected official.
John King - 10/3/2003
Is there any evidence that the recall was debated or considered as part of the drafting of the Omaha Platform which did endorse the referendum and the right of petition, as well as direct election of US Senators?
K. G. Schneider - 9/30/2003
Very interesting, useful article. Could you please tweak the following sentence: "Despite its infrequently usage, the recall has a long, if spotty, history in America..." We would like to feature this article in our database this Thursday, and that typo sticks out like a sore loser--I mean, thumb.
K. G. Schneider
Director, Librarians' Index to the Internet
Joshua Spivak - 9/16/2003
I agree. I appreciate you bringing the subject to my attention. I intend to mention it in any future writing I do on this subject.
Oscar Chamberlain - 9/15/2003
Thank you for the links.
I had understood the distinction. It simply struck me that the similarity was revealing.
Joshua Spivak - 9/7/2003
I should explain that the reason Instructions were not the equivalent to the recall is that Senators were under no legal obligation to either follow the instructions or resign. This is in marked contrast to the recall, which would cause the removal of a Senator from office by force of law.
Below are the links to two websites that discuss the use of Instructions in the 18th and 19th Century Senate.
Joshua Spivak - 9/5/2003
I've heard of such behavior, but I don't know that I would consider that a recall. I do remember a similar situation in 1880, when New York Senators Roscoe Conkling and Thomas Platt resigned over a disagreement with President Garfield regarding presidential appointments. They expected the NY State legislature to reappoint them in a show of force, but their gambit failed.
Oscar Chamberlain - 9/4/2003
In at least some states in the antebellum period, there was a tradition that a Senator would resign if he felt he could not follow the instructions of the legislature.
I know I have seen such a debate in Michigan, when one of its senators used the threat of retiring as a way of avoiding instructions being passed. (If memory serves, this occurred in the debate leading up to the Compromise of 1850).
I would be curious if anyone knows of other examples.
What is a Codec?
A codec, short for coder-decoder (or compressor-decompressor), is a software or hardware component that compresses and decompresses digital data. It is a vital tool in the world of multimedia, enabling efficient storage, transmission, and playback of various media files, including videos, audio recordings, and images.
When we talk about digital media, it is important to understand that files contain a vast amount of data. For example, a high-definition video can have thousands of frames and each frame contains millions of pixels. Similarly, an audio file consists of multiple channels and each channel contains thousands of samples per second.
Codecs use different algorithms to compress these large files into smaller sizes with little or no perceptible loss in quality. During the compression process, redundant or less important data is removed, resulting in a reduced file size that is more manageable for storage and transmission.
On the other hand, when the compressed file is played or accessed, the codec helps to decompress it so that it can be viewed or heard in its original form. This decoding process ensures that the media content is restored with minimal loss of quality.
It’s important to note that there are different types of codecs available, each designed for specific media formats and purposes. Some codecs are specifically designed for video compression and decompression, while others are tailored for audio compression and decompression.
Overall, codecs play a crucial role in enabling the seamless playback and transmission of media files, making it possible for us to enjoy high-quality videos, music, and other forms of digital media on our devices.
Understanding Compression and Decompression
Compression and decompression are fundamental concepts in the realm of codecs. To comprehend how codecs work, it’s important to understand the process of compression and decompression.
Compression is the technique used to reduce the size of a file by eliminating redundant or unnecessary data. When a file is compressed, it takes up less storage space, requires less bandwidth for transmission, and can be uploaded or downloaded more quickly.
There are two main types of compression: lossy and lossless. Lossy compression is generally used for multimedia files, such as videos and audio, where a certain amount of data can be discarded without significantly impacting the perceived quality. This type of compression achieves higher compression ratios but sacrifices some details and fidelity in the process.
On the other hand, lossless compression is used when preserving the exact quality of the file is paramount. It compresses the data without any loss of quality, resulting in a smaller file size. This type of compression is commonly used for text-based files, such as documents and spreadsheets.
Decompression, as the name suggests, is the process of reversing the compression and restoring the file to its original form. When a compressed file is opened or played, the codec responsible for decompression reads the compressed data and reconstructs it to its original state.
Codecs use algorithms to compress and decompress files. These algorithms vary depending on the type and purpose of the codec. Some codecs use simple algorithms, while others employ complex algorithms to achieve higher compression ratios or preserve specific aspects of the media.
Understanding these principles of compression and decompression can help you make informed decisions when choosing codecs and selecting the appropriate settings for compression and decompression processes.
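As a minimal illustration of this round trip, the sketch below uses Python's built-in zlib module, a general-purpose lossless compressor rather than a media codec, to compress a block of arbitrary, highly repetitive sample data and then restore it exactly.

```python
import zlib

original = b"the quick brown fox jumps over the lazy dog " * 200

compressed = zlib.compress(original, 9)   # encode: squeeze out redundancy (level 9 = max)
restored = zlib.decompress(compressed)    # decode: rebuild the original byte stream

print(len(original), len(compressed))     # the repetitive input shrinks dramatically
print(restored == original)               # True: the round trip is bit-for-bit exact
```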
Different Types of Codecs
There are numerous types of codecs available, each designed to handle specific media formats and achieve different compression results. Understanding the different types of codecs can help you choose the right one for your specific needs. Let’s explore some of the most commonly used codecs:
Video Codecs: Video codecs are specifically designed for compressing and decompressing video content. Some popular video codecs include H.264, H.265 (also known as HEVC), VP9, and AV1. These codecs use different compression algorithms and settings to achieve varying degrees of compression and quality.
Audio Codecs: Audio codecs are dedicated to compressing and decompressing audio files. Common audio codecs include AAC, MP3, FLAC, and Ogg Vorbis. Each audio codec has its own strengths and weaknesses in terms of compression efficiency and audio quality.
Image Codecs: Image codecs are used for compressing and decompressing image files. Popular image codecs include JPEG, PNG, GIF, and WebP. These codecs take different approaches to image compression, resulting in varying levels of file size reduction and image quality.
Container Formats: While not technically codecs, container formats play a crucial role in storing and delivering multimedia content. Container formats, such as MP4, MKV, and AVI, hold compressed audio and video streams, along with metadata and other related data. They provide the framework for storing and playing back multimedia files.
It’s important to note that the choice of codec depends on various factors, including the intended use of the media, the desired file size, and the compatibility with playback devices. Some codecs are more widely supported and compatible across different platforms and devices, while others may be more specialized or proprietary.
When working with media files, it’s essential to choose codecs that strike a balance between the desired compression and the quality of the output. This ensures that the media is efficiently compressed while still maintaining an acceptable level of visual or auditory fidelity.
By understanding the different types of codecs and their applications, you can make informed decisions when it comes to choosing the right codec for your specific multimedia needs.
Lossy vs Lossless Codecs
When it comes to compressing digital media, there are two main types of codecs: lossy and lossless. Understanding the differences between these two types is essential for selecting the appropriate codec based on your specific requirements.
Lossy Codecs: Lossy codecs are designed to achieve higher levels of compression by selectively discarding certain data that is considered less crucial to human perception. This data includes details that may not be easily noticeable or audibly discernible to the average viewer or listener. By removing this “unnecessary” data, lossy codecs can significantly reduce the file size while maintaining an acceptable level of perceived quality.
However, it’s important to note that the compression process of lossy codecs results in some loss of data and quality. This means that each time the content is re-encoded with a lossy codec, a little more of the original detail is lost; decompression itself is deterministic and adds no further loss. Despite this, lossy codecs are widely used for multimedia files such as videos and audio, as the loss in quality is often imperceptible to the average user and the reduction in file size enables efficient storage and transmission.
Lossless Codecs: In contrast, lossless codecs aim to compress the media files without any loss of quality. They achieve compression by identifying and eliminating redundancy within the data but retain all the original details, ensuring an exact reconstruction of the uncompressed file. Lossless codecs are commonly used for applications where every bit of data is crucial, such as archival storage, professional audio recording, or medical imaging.
Due to their preservation of quality, lossless codecs generally result in larger file sizes compared to lossy codecs. This is because they prioritize maintaining the integrity of the original media content over efficient compression. Lossless codecs are often preferred when the goal is to retain the highest possible quality or when there is a need for lossless transcoding, where the compressed file needs to be further processed or edited without any additional quality loss.
The choice between lossy and lossless codecs ultimately depends on the specific requirements and constraints of the project. If storage space, bandwidth usage, or compatibility with playback devices are critical considerations, lossy codecs may be the preferred choice. On the other hand, if preserving the exact quality of the media is crucial, such as for professional or archival purposes, lossless codecs are the recommended option.
Understanding the differences between lossy and lossless codecs enables you to make informed decisions and select the appropriate codec based on the desired level of compression and quality for your specific media files.
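One simple way to see the trade-off is to save the same picture once with a lossy codec (JPEG) and once with a lossless one (PNG) and compare the results. The sketch below assumes the Pillow imaging library is installed; the generated gradient is only a stand-in for real content, and for synthetic images like this the size gap between the two formats can be much smaller than it would be for a photograph.

```python
from PIL import Image
import os

# Build a simple 256x256 RGB gradient to act as stand-in image content.
img = Image.new("RGB", (256, 256))
img.putdata([(x, y, (x + y) % 256) for y in range(256) for x in range(256)])

img.save("sample.jpg", quality=60)   # lossy: the quality setting trades detail for size
img.save("sample.png")               # lossless: every pixel is preserved exactly

print(os.path.getsize("sample.jpg"), os.path.getsize("sample.png"))

# Reopening the PNG returns identical pixels; the JPEG generally does not.
print(list(Image.open("sample.png").getdata()) == list(img.getdata()))  # True
print(list(Image.open("sample.jpg").getdata()) == list(img.getdata()))  # usually False
```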
Popular Video Codecs
Video codecs play a crucial role in compressing and decompressing video files, enabling efficient storage, transmission, and playback of multimedia content. Let’s explore some of the most popular video codecs used today:
H.264: H.264, also known as AVC (Advanced Video Coding), is one of the most widely used video codecs. It offers a good balance between file size and visual quality, making it suitable for a wide range of applications, including streaming, broadcasting, and video conferencing. It is supported by virtually all modern devices and platforms, making it highly compatible.
H.265 (HEVC): H.265, also known as High-Efficiency Video Coding (HEVC), is a newer video codec that offers improved compression efficiency compared to H.264. It can significantly reduce file sizes while preserving video quality. This makes it ideal for high-resolution videos and high-quality streaming applications. However, wider adoption and support for H.265 are still growing.
VP9: VP9 is an open-source video codec developed by Google. It is designed to provide efficient compression and high video quality with broader browser and platform support. VP9 is commonly used for online video streaming, particularly on platforms like YouTube. It is considered to be a competitor to H.264 and H.265.
AV1: AV1 is an emerging video codec that aims to provide even better compression efficiency than existing codecs. Developed as a royalty-free and open-source alternative, AV1 offers high-quality video with smaller file sizes. It is increasingly being adopted by major streaming platforms and is expected to gain wider support in the future.
These video codecs leverage different compression algorithms and settings to strike a balance between compression efficiency and video quality. The choice of codec primarily depends on factors such as the intended use of the video, device compatibility, and available bandwidth.
It’s worth mentioning that codecs have evolved over time, with newer versions and improvements continually being developed. As technology advances, newer codecs may offer better compression ratios and improved video quality.
Understanding these popular video codecs can help you make informed decisions when it comes to encoding and decoding video files, ensuring optimal video quality and compatibility across various platforms and devices.
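In practice, the codec choice is usually made through an encoding tool such as FFmpeg. The sketch below shells out to the ffmpeg command-line tool, which is assumed to be installed and on the PATH, to re-encode a hypothetical input.mp4 with H.264; the CRF and preset values are common starting points rather than recommendations from this article.

```python
import subprocess

# Re-encode a video with H.264 (libx264) video and AAC audio in an MP4 container.
# Lower CRF means higher quality and a larger file; values around 18-28 are typical.
subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mp4",     # hypothetical source file
        "-c:v", "libx264",     # video codec: H.264
        "-crf", "23",          # constant rate factor (quality target)
        "-preset", "medium",   # encoder speed vs. compression efficiency trade-off
        "-c:a", "aac",         # audio codec: AAC
        "-b:a", "128k",        # audio bitrate
        "output_h264.mp4",
    ],
    check=True,
)
```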
Popular Audio Codecs
Audio codecs are essential for compressing and decompressing audio files, enabling efficient storage, transmission, and playback of audio content. Let’s explore some of the most popular audio codecs used today:
AAC: Advanced Audio Coding (AAC) is a widely used audio codec known for its efficient compression and high sound quality. It is the default audio format for many streaming platforms and is supported by a wide range of devices. AAC offers better compression ratios compared to older audio codecs like MP3, while delivering comparable or even better audio quality.
MP3: MPEG-1 Audio Layer 3 (MP3) is one of the most recognizable audio codecs. It gained popularity for its ability to compress audio files without substantial quality loss. MP3 files have become the standard format for music and audio playback, making them highly compatible with various devices and platforms. However, newer audio codecs like AAC have surpassed MP3 in terms of compression efficiency and audio quality.
FLAC: Free Lossless Audio Codec (FLAC) is a popular codec known for its ability to compress audio files without any loss of audio quality. Unlike lossy codecs, FLAC retains all the audio data, resulting in high-quality audio reproduction. FLAC is commonly used for archiving or storing audio files that require preservation of the original audio fidelity, such as professional music production or audiophile listening.
Ogg Vorbis: Ogg Vorbis is an open-source audio codec that offers a good balance between compression efficiency and audio quality. It provides comparable audio quality to MP3 while achieving smaller file sizes. Ogg Vorbis is commonly used for online streaming and is supported by popular media players.
These audio codecs employ different compression techniques to meet various needs for audio storage and transmission. The choice of codec depends on factors such as the desired compression ratio, audio quality requirements, and compatibility with playback devices and platforms.
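To make the trade-off concrete, here is a minimal sketch (assuming ffmpeg is installed; file names are placeholders) that encodes one WAV master to lossy AAC and to lossless FLAC so the resulting sizes can be compared.

```python
# Sketch: encode a WAV master to lossy AAC and lossless FLAC with ffmpeg.
# Assumes the ffmpeg CLI is installed; "master.wav" is a placeholder file name.
import os
import subprocess

SOURCE = "master.wav"

targets = {
    "music_aac.m4a": ["-c:a", "aac", "-b:a", "192k"],  # lossy, small files
    "music_flac.flac": ["-c:a", "flac"],               # lossless, larger files
}

for output, audio_args in targets.items():
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *audio_args, output], check=True)
    print(f"{output}: {os.path.getsize(output) / 1024:.0f} KiB")
```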
It’s important to note that ongoing advancements in audio codec technology have led to the emergence of newer codecs that offer even better compression efficiency and audio quality. These newer codecs may be more suitable for specific applications, depending on the desired outcome.
Understanding the popular audio codecs mentioned above can help you select the appropriate codec for your audio files, ensuring efficient storage, transmission, and playback of high-quality audio content.
Codecs and File Size
Codecs play a significant role in determining the file size of multimedia content. The choice of codec can have a considerable impact on the final file size, influencing storage requirements and transmission efficiency. Let’s explore how codecs affect file size:
Compression Efficiency: Codecs use different compression algorithms and settings to reduce the file size of media content. Lossy codecs achieve higher compression ratios by discarding data that is less noticeable to viewers or listeners; as a result, the compressed file is significantly smaller than the original uncompressed file. Lossless codecs, on the other hand, retain all the original data, leading to larger file sizes. It’s important to strike a balance between compression efficiency and desired quality when choosing a codec.
Bitrate: The bitrate represents the amount of data processed per unit of time, usually measured in kilobits per second (Kbps) or megabits per second (Mbps). Higher bitrates result in larger file sizes because they contain more data. Most codecs let users adjust the bitrate, prioritizing either quality (higher bitrate) or a smaller file size (lower bitrate). However, reducing the bitrate too much can lead to noticeable degradation in visual or auditory quality.
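Because bitrate is data per second, a rough file-size estimate is simply bitrate multiplied by duration. The snippet below is a back-of-the-envelope calculation that ignores container overhead, so treat the result as an approximation:

```python
# Rough estimate: file size ≈ (video bitrate + audio bitrate) × duration / 8.
# Container overhead and bitrate variation are ignored.
def estimate_size_mb(video_kbps: float, audio_kbps: float, duration_s: float) -> float:
    total_kilobits = (video_kbps + audio_kbps) * duration_s
    return total_kilobits / 8 / 1024  # kilobits -> kilobytes -> megabytes

# A 10-minute clip at 2,500 Kbps video plus 128 Kbps audio:
print(f"{estimate_size_mb(2500, 128, 600):.0f} MB")  # ≈ 192 MB
```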
Media Complexity: Different types of media have varying levels of complexity. For example, a video with fast-moving action or high levels of detail will require more data to accurately represent the content. Consequently, the file size of such videos might be larger compared to videos with less complexity or motion. Similarly, audio files with intricate soundscapes or multiple channels may result in larger file sizes compared to simpler audio recordings. Codecs need to account for these complexities and allocate appropriate data to accurately represent the media, which can impact the resulting file size.
Format and Container: The choice of format and container can also affect the final file size. Different container formats, such as MP4, MKV, or AVI, have their own overhead for storing metadata, audio, and video streams. The codec used within the container can further impact the compression efficiency and, consequently, the final file size.
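If you are unsure which container and codecs a given file actually uses, a quick inspection helps. The sketch below assumes the ffprobe tool (bundled with ffmpeg) is installed and uses a placeholder file name:

```python
# Sketch: inspect which container format and codecs a media file uses.
# Assumes ffprobe (shipped with ffmpeg) is installed; "movie.mkv" is a placeholder.
import json
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "error", "-print_format", "json",
     "-show_format", "-show_streams", "movie.mkv"],
    capture_output=True, text=True, check=True,
)
info = json.loads(result.stdout)

print("container:", info["format"]["format_name"])
for stream in info["streams"]:
    print(stream["codec_type"], "codec:", stream["codec_name"])
```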
It’s important to consider the intended use and storage capacity when selecting a codec. While smaller file sizes are desirable for efficient storage and transmission, it’s crucial to strike a balance between compression and quality. Compressing a file too much may result in noticeable loss of audio or video fidelity. Conversely, larger file sizes may be acceptable for professional or archival purposes that prioritize maintaining the utmost quality.
By understanding how codecs affect file sizes, you can make informed decisions when choosing the appropriate codec settings to achieve the desired balance between file size and quality for your specific media content.
Codecs and Quality
When it comes to codecs, the quality of the media is a crucial aspect to consider. Codecs affect the quality of audio and video content in different ways, and understanding these factors can help you make informed decisions when selecting the right codec for your needs. Let’s explore how codecs impact quality:
Lossy Compression and Quality: Lossy codecs, as the name suggests, discard certain data during compression to achieve higher levels of compression. While this reduction in data significantly reduces file sizes, it can result in a loss of quality. The extent of this quality loss depends on factors such as the bitrate, the specific compression algorithms used, and the limits of human perception. Generally, lossy codecs aim to keep the loss of quality at a level that is imperceptible or acceptable to the average viewer or listener.
Compression Artifacts: In lossy compression, there is a possibility of introducing compression artifacts, which are visual or auditory distortions that occur due to the compression process. These artifacts can include pixelation, blurring, blocking, or color banding in videos, or distortion, noise, or loss of subtle details in audio. The severity of these artifacts depends on the specific codec and the compression settings. Higher compression ratios or lower bitrates can result in more noticeable artifacts, impacting the perceived quality of the media.
Bitrate Allocation: Codecs allow for adjustments in bitrate allocation, which determines the amount of data allocated for different elements of the media, such as video frames or audio samples. A higher bitrate allocation helps in maintaining better quality, as more data is dedicated to accurately represent the details. However, allocating too high a bitrate may lead to larger file sizes. Finding the right balance between bitrate allocation and file size is crucial to achieving the desired quality.
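One rough way to reason about bitrate allocation for video is bits per pixel: the bitrate divided by the number of pixels processed per second. The helper below is only a heuristic, and the quoted target is a commonly cited rule of thumb rather than a guarantee:

```python
# Heuristic: bits per pixel (bpp) = bitrate / (width × height × frame rate).
# Values around 0.1 bpp are often cited as a reasonable starting point for
# H.264-class codecs; the right target depends on the content and the codec.
def bits_per_pixel(bitrate_kbps: float, width: int, height: int, fps: float) -> float:
    return (bitrate_kbps * 1000) / (width * height * fps)

print(f"{bits_per_pixel(2500, 1920, 1080, 30):.3f} bpp")  # ≈ 0.040
```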
Perceptual Coding: Many codecs utilize perceptual coding techniques to optimize the compression process. These techniques take advantage of human perceptual limitations by allocating more bits to the elements that are more noticeable to viewers or listeners while reducing the allocation for elements that are less perceptible. This approach allows for effective reduction in file size while preserving the perceived quality of the media.
It’s important to note that the perceived quality of media can vary from person to person, and factors such as individual perception, viewing or listening conditions, and personal preferences can influence how someone perceives the quality of compressed media. Additionally, the quality of the source media, such as the original recording or video, can also impact the final perceived quality after compression.
By understanding how codecs impact the quality of media, you can choose a codec and adjust the settings that best align with your desired quality requirements for your specific audio or video content.
Codecs and Device Compatibility
When working with multimedia content, it’s essential to consider the compatibility of codecs with the devices on which you plan to play or transmit the media. Different devices may have varying levels of support for different codecs and formats. Let’s explore how codecs and device compatibility intersect:
Supported Codecs: Each device, whether it’s a smartphone, computer, or television, has a specific list of supported codecs. This means that the device’s hardware and software are designed to decode and play media files encoded with these codecs. It’s important to ensure that the codecs used to encode your media files are supported by the devices on which you intend to play them. Otherwise, the media may not play at all or may only play with limited functionality.
Platform-Specific Codecs: Some devices or platforms may have their own preferred or proprietary codecs and formats that are optimized for their hardware and software. These codecs may offer specific features, higher performance, or better compatibility. For example, Apple devices are heavily optimized for the AAC codec (an open MPEG standard rather than an Apple-proprietary format), while certain Android devices may have better support for other codecs. It’s crucial to consider the platform and the target device to ensure compatibility and an optimal playback experience.
Transcoding and Format Conversion: In cases where the codec used in the media file is not supported by a particular device, transcoding or format conversion may be necessary. Transcoding involves decoding the media file using one codec and then re-encoding it using a different codec that is supported by the target device. Format conversion, on the other hand, involves converting the media file from one file format to another while keeping the same codec. While these processes can ensure device compatibility, they may also result in a loss of quality or increased file size, and should be done with caution.
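The difference between the two approaches is easy to see with ffmpeg (assumed installed here; file names are placeholders): a remux changes only the container, while a transcode decodes and re-encodes the streams.

```python
# Sketch of the two options discussed above, using the ffmpeg CLI (assumed installed).
import subprocess

# Format conversion (remux): change the container, keep the existing codecs.
# Fast and lossless, but only works if the target container supports those codecs.
subprocess.run(["ffmpeg", "-y", "-i", "input.mkv", "-c", "copy", "output.mp4"], check=True)

# Transcoding: decode and re-encode with codecs the target device supports.
# Slower, and lossy if the target codecs are lossy.
subprocess.run(
    ["ffmpeg", "-y", "-i", "input.mkv", "-c:v", "libx264", "-c:a", "aac", "output_h264.mp4"],
    check=True,
)
```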
Streaming and Online Platforms: When it comes to streaming media or uploading to online platforms, compatibility with the chosen platform’s requirements is crucial. Different platforms may have specific codec and format requirements for media files to ensure efficient streaming and playback across various devices. It’s important to check the platform’s guidelines and recommendations to adhere to their compatibility standards.
By understanding the supported codecs and formats of your target devices and platforms, you can choose the appropriate codecs and ensure compatibility for your media files. It’s advisable to utilize widely supported codecs and formats to maximize the chances of seamless playback across a range of devices.
Remember that device compatibility can evolve over time as new codecs and formats are introduced or as devices receive software updates. Staying informed about the compatibility of codecs and periodically checking for updates can help ensure that your media remains compatible with the latest devices and technologies.
Codecs and Transcoding
Transcoding is the process of converting media files encoded with one codec into another codec. It is often necessary when the original codec used in the media file is not supported by the desired playback device or platform. Let’s explore the concept of transcoding and its implications:
Reasons for Transcoding: There are several reasons why transcoding may be required. One common reason is device compatibility. If a device doesn’t support the original codec used in the media file, transcoding allows for conversion to a supported codec, ensuring playback on the device. Transcoding can also be done to optimize the file for specific requirements, such as reducing the file size for efficient streaming or adjusting the resolution and bitrate for a particular delivery target; when only the container needs to change, format conversion (remuxing) can be used instead, preserving the original codec.
Transcoding Process: Transcoding involves two main steps: decoding and encoding. First, the original media file is decoded using the original codec, converting it into an uncompressed or raw format. Then, the raw file is re-encoded using the desired codec. During the encoding process, various parameters can be adjusted, such as the bitrate, resolution, or audio settings, to meet specific requirements or optimize the result. It’s important to note that each codec has its own characteristics and settings, and transcoding may result in a change in quality, file size, or compatibility.
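In practice, the decode and encode steps are usually handled by a single tool invocation. The sketch below, again assuming ffmpeg is available and using illustrative file names and settings, re-encodes a clip while adjusting the video bitrate, resolution, and audio settings:

```python
# Sketch: one ffmpeg invocation performs both steps (decode, then re-encode),
# with bitrate, resolution, and audio settings adjusted during encoding.
# Assumes the ffmpeg CLI is installed; file names and values are illustrative.
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "camera_original.avi",
     "-c:v", "libx264", "-b:v", "2500k",   # target video bitrate
     "-vf", "scale=1280:-2",               # downscale to 1280 px wide, keep aspect ratio
     "-c:a", "aac", "-b:a", "128k",        # re-encode audio at 128 Kbps
     "delivery.mp4"],
    check=True,
)
```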
Quality Considerations: Transcoding can introduce quality loss or artifacts, especially when transcoding from a lossy codec to another lossy codec. Each codec has its own algorithms and compression techniques, which may not perfectly align with those of the original codec. Each transcoding process involves a generation loss, meaning that the quality may degrade slightly with each cycle of transcoding. To minimize quality loss, it’s generally advised to transcode from a lossless source or to perform transcoding processes sparingly and only when necessary.
File Size and Compression: Transcoding can also impact the file size. Different codecs have varying compression efficiencies, resulting in different file sizes for the same media content. When transcoding, it’s important to consider the target file size and the desired compression ratio. Depending on the settings used during transcoding, the resulting file size may differ from the original file size. Adjustments in bitrate, resolution, or other parameters can be made during the encoding process to optimize the file size while maintaining an acceptable level of quality.
Transcoding Considerations: Before transcoding, it’s vital to have a clear understanding of your requirements and the implications of the process. Consider the compatibility of the target device or platform, the desired quality, and the file size considerations. Additionally, it’s recommended to keep backups of the original media files to avoid unnecessary generation loss and to retain the highest possible quality for future use or re-encoding.
Transcoding between codecs allows for greater flexibility and compatibility in managing media files. However, it’s important to be mindful of the potential quality loss and to consider the settings and parameters used during transcoding to achieve the desired balance between compatibility and the highest possible quality.
If you are wondering what a volcanic arc is then you have come to the right place! In this article I will teach you all about volcanic arcs and why they exist. Ready to learn more? Read on…
What is a volcanic arc?
A volcanic arc is a curved line of volcanoes that forms in a specific area.
It happens when one big piece of Earth’s crust slides beneath another.
The volcanoes in a volcanic arc can be really tall and explosive because of the intense heat and pressure created when the plates collide.
It’s a place where lots of volcanic activity happens, and it can also have a lot of earthquakes.
Process of plate tectonics
Plate tectonics is the process that describes how Earth’s outer layer, called the lithosphere, is made up of several large pieces called tectonic plates. These plates are constantly moving, albeit very slowly, floating on the semi-fluid layer beneath called the asthenosphere.
When two tectonic plates meet, different things can happen depending on the type of boundary between them. In the case of a subduction zone, where one plate is forced beneath another, the process begins. The denser plate (usually an oceanic plate) sinks into the hotter and more plastic mantle beneath the less dense plate (usually a continental plate).
As the subducting plate sinks deeper into the Earth, it experiences increasing temperature and pressure. This causes the subducting plate to release water and other volatile substances trapped within it. These released volatiles rise into the overlying mantle wedge, causing it to partially melt.
The melted material from the mantle wedge, called magma, is less dense than the surrounding rocks, so it rises towards the surface. Eventually, it reaches the Earth’s crust, forming a series of volcanoes along the subduction zone. This line of volcanoes is known as a volcanic arc.
The volcanic arc consists of explosive volcanoes because the magma contains a lot of gas and other materials that build up pressure. This leads to eruptions that can be quite powerful. Additionally, the intense heat and pressure in the subduction zone can cause the crust to deform and create earthquakes.
So, in simple terms, plate tectonics is when Earth’s outer layer moves around, and when certain plates collide, one goes under the other. This creates a place where one plate sinks into the Earth and makes volcanoes. The volcanoes form a line called a volcanic arc, and they can erupt explosively and cause earthquakes.
Three main sections of a volcanic arc
In a volcanic arc, there are three main sections: the forearc, volcanic front, and back-arc.
The forearc is the region located between the trench (where the subduction occurs) and the volcanic front. It is typically a broad area and is often characterized by sediment deposition from the eroding land above. The forearc is not as active volcanically as the other sections but can experience shallow earthquakes due to the interaction of the subducting and overriding plates.
The volcanic front is the central part of the volcanic arc where most of the active volcanoes are found. It is the area where the subducted plate melts and creates magma that rises to the surface, leading to volcanic eruptions. The volcanic front is known for its tall and explosive volcanoes, such as stratovolcanoes, which are formed by layers of lava and ash building up over time.
The back-arc is the region located on the side opposite to the trench, behind the volcanic front. It is characterised by a different tectonic setting compared to the volcanic front. In the back-arc, the overriding plate may experience extension or stretching, which can lead to the formation of features like basins, rifts, or even smaller volcanoes. The volcanism in the back-arc is often less intense compared to the volcanic front.
Examples of volcanic arcs around the world
Now that we know what a volcanic arc is, let’s take a look at some examples from around the world.
The Andean Volcanic Arc
The Andean Volcanic Arc is a prominent volcanic arc in South America, stretching over 7,000 kilometres along the western coast.
It is formed by the subduction of the Nazca Plate beneath the South American Plate. This volcanic arc is known for its numerous active stratovolcanoes, such as Cotopaxi in Ecuador and Villarrica in Chile.
The Cascade Volcanic Arc
The Cascade Volcanic Arc is located in the western part of North America, extending from northern California through Oregon and Washington up to British Columbia in Canada.
It is formed by the subduction of the Juan de Fuca Plate beneath the North American Plate.
The Cascade Volcanic Arc is famous for its explosive and iconic volcanoes, including Mount St. Helens and Mount Rainier.
The Japanese Volcanic Arc
The Japanese Volcanic Arc is a volcanic arc that runs through the islands of Japan. It is created by the subduction of the Pacific Plate beneath the Eurasian Plate.
This volcanic arc is characterised by a high density of volcanoes, including iconic peaks like Mount Fuji and Mount Aso.
The Japanese Volcanic Arc is known for its frequent volcanic activity and seismic events.
The Central American Volcanic Arc
The Central American Volcanic Arc runs along the Pacific coast of Central America. It is formed by the subduction of the Cocos Plate beneath the Caribbean Plate. This volcanic arc is marked by a string of active volcanoes, such as Arenal in Costa Rica and Masaya in Nicaragua.
The Central American Volcanic Arc is associated with both explosive eruptions and volcanic hazards.
The Aleutian Volcanic Arc
The Aleutian Volcanic Arc stretches along the Alaska Peninsula and across the Aleutian Islands, extending westward toward Russia’s Kamchatka Peninsula.
It is formed by the subduction of the Pacific Plate beneath the North American Plate. The Aleutian Volcanic Arc is known for its volcanic activity and hosts numerous volcanic peaks, including Mount Shishaldin and Mount Pavlof.
It is characterised by its remote location and rugged volcanic landscapes.
These examples demonstrate the diverse and geologically active nature of volcanic arcs around the world, each shaped by the specific tectonic processes occurring in their respective regions.
Creating rich soils
Volcanic arcs play a crucial role in creating rich soils that contribute to high levels of biodiversity. When volcanoes erupt in volcanic arcs, they release lava and ash that contain various minerals and nutrients. Over time, these volcanic materials break down and weather, forming fertile soils.
The rich soils formed from volcanic activity are highly fertile and contain essential elements such as nitrogen, phosphorus, potassium, and trace minerals. These nutrients provide a favorable environment for plant growth and support a diverse range of vegetation.
The lush vegetation, in turn, attracts a wide array of animal species. The abundant plant life provides food and shelter for insects, birds, mammals, and other organisms, fostering a complex web of interactions and relationships. The high levels of biodiversity in volcanic arc regions can include unique plant species, endemic wildlife, and specialized adaptations to volcanic environments.
Additionally, the volcanic soils have excellent water retention properties, allowing plants to thrive even during dry periods. The moisture-retaining capacity of these soils helps sustain vegetation, creating microhabitats and promoting further biodiversity.
The presence of diverse plant communities and abundant food sources in volcanic arc areas supports a cascade of life. Herbivores feed on plants, which, in turn, attract predators and scavengers. The interdependence of species within these ecosystems contributes to the overall richness and diversity of life.
In summary, volcanic arcs provide the foundation for fertile soils rich in nutrients, fostering the growth of diverse plant communities. This abundance of plant life supports a wide range of animal species, resulting in high levels of biodiversity. The interconnectedness and complexity of these ecosystems contribute to the unique and vibrant natural environments found in volcanic arc regions.
Hazards and risks
Hazards and risks are important concepts to understand when it comes to potential dangers and uncertainties in our environment.
Hazards refer to natural or man-made events or conditions that have the potential to cause harm, damage, or disruption to people, property, or the environment. These hazards can take various forms, such as natural disasters like earthquakes, hurricanes, floods, volcanic eruptions, or human-made hazards like industrial accidents or chemical spills.
Risks, on the other hand, are the chances or probabilities of experiencing negative consequences or harm due to exposure to a hazard. It involves assessing the likelihood and potential severity of an event occurring and the potential impacts it may have.
For example, if we consider a volcanic eruption as a hazard, the associated risks would involve evaluating factors such as the volcano’s history, monitoring data, and the proximity of populated areas to determine the likelihood and potential impacts of an eruption. The risks associated with the volcanic hazard would include the potential for ashfall, pyroclastic flows, lahars (mudflows), lava flows, and other volcanic phenomena.
Understanding hazards and risks is essential for risk management and mitigation. It involves identifying potential hazards, assessing the risks they pose, and implementing measures to reduce or avoid those risks. This can include measures such as building codes and regulations, early warning systems, emergency preparedness, evacuation plans, and public education.
By recognising hazards and evaluating risks, we can take steps to minimise their potential impacts and increase our resilience to such events. This helps to protect lives, safeguard communities, and promote sustainable development in the face of potential dangers.
Now that we understand what a volcanic arc is, let’s summarise the key points that we should know:
- Volcanic arcs are curving chains of volcanoes that form above subduction zones, where one tectonic plate is forced beneath another.
- They are characterized by intense volcanic activity and the formation of stratovolcanoes, which are steep-sided cones composed of alternating layers of lava, ash, and volcanic debris.
- Volcanic arcs often occur in areas of intense seismic activity and are associated with the collision or subduction of tectonic plates.
- The three main sections of a volcanic arc are the forearc, volcanic front, and back-arc. The forearc is located between the trench and volcanic front, the volcanic front is where most active volcanoes are found, and the back-arc is behind the volcanic front.
- Volcanic arcs play a crucial role in creating fertile soils due to the volcanic materials released during eruptions. These rich soils support diverse plant communities and contribute to high levels of biodiversity.
- The abundant vegetation in volcanic arc regions attracts a wide range of animal species, leading to complex ecosystems and interactions.
- Volcanic arcs can pose hazards, including explosive volcanic eruptions, ashfall, pyroclastic flows, lahars, and seismic activity. Assessing and managing these risks is essential for safeguarding lives and minimizing the potential impacts.
- Understanding volcanic arcs provides insights into the dynamic nature of Earth’s tectonic processes, the formation of landscapes, and the coexistence of natural hazards and biodiversity.
Lastly, here are 10 frequently asked questions about volcanic arcs along with their answers:
What is a volcanic arc?
A volcanic arc is a curving chain of volcanoes that forms above a subduction zone, where one tectonic plate is forced beneath another.
How are volcanic arcs formed?
Volcanic arcs form due to the subduction of one tectonic plate beneath another, which leads to the melting of the subducting plate and the subsequent volcanic activity.
Where can volcanic arcs be found?
Volcanic arcs are found in various locations worldwide, including the Andes, the Cascades, Japan, Central America, and the Aleutian Islands.
What types of volcanoes are commonly found in volcanic arcs?
Stratovolcanoes, also known as composite volcanoes, are commonly found in volcanic arcs due to their explosive nature. However, other volcano types like shield volcanoes or calderas can also be present.
Are volcanic arcs dangerous?
Volcanic arcs can pose hazards such as volcanic eruptions, ashfall, pyroclastic flows, lahars, and earthquakes. However, with proper monitoring and preparedness, the risks can be minimised.
Do volcanic arcs contribute to biodiversity?
Yes, volcanic arcs create fertile soils that support diverse plant communities. The abundant vegetation attracts various animal species, leading to high levels of biodiversity.
How long do volcanic arcs typically last?
Volcanic arcs can persist for millions of years as long as the subduction process continues and tectonic forces drive the movement of the plates involved.
Can volcanic arcs cause tsunamis?
While volcanic arcs themselves do not directly cause tsunamis, large volcanic eruptions, particularly those occurring in or near bodies of water, can trigger tsunamis through associated events like underwater landslides or the collapse of volcanic edifices.
Are volcanic arcs related to earthquakes?
Yes, volcanic arcs are often associated with intense seismic activity. As tectonic plates interact and subduct, the resulting pressure and movement can lead to earthquakes.
How are volcanic arcs monitored for potential eruptions?
Volcanic arcs are monitored using various techniques, including seismometers to detect earthquake activity, gas monitoring to track changes in gas emissions, ground deformation measurements, and satellite-based remote sensing to monitor volcanic activity and changes in thermal patterns.
As we can see, volcanic arcs are incredible feats of nature that are found in various parts of the world. If you enjoyed learning about these types of volcanoes, I am sure you will like these posts too:
CCSS.Math.Content.8.EE.B.5 - Graph proportional relationships, interpreting the unit rate as the slope of the graph. Compare two different proportional relationships represented in different ways. For example, compare a distance-time graph to a distance-time equation to determine which of two moving objects has greater speed.
Authors: National Governors Association Center for Best Practices, Council of Chief State School Officers
Title: CCSS.Math.Content.8.EE.B.5 Graph Proportional Relationships, Interpreting The Unit Rate... Expressions and Equations - 8th Grade Mathematics Common Core State Standards
Publisher: National Governors Association Center for Best Practices, Council of Chief State School Officers, Washington D.C.
Copyright Date: 2010
(Page last edited 10/08/2017)
- Finding the X Intercept and Y Intercept to Graph Standard Form Equations - Five practice problems to practice using what was learned in the lesson above, followed by an answer key
- Fun and Sun Rent-a-Car - Students use tables, graphs, linear functions to solve a real-world problem
- Graphing linear equations - Practice from Math.com
- Practice Converting Linear Equations into Slope-Intercept Form - This show provides several opportunities for student practice [25 slides]
- Proportional Relationships - Proportional relationships: word problems
- Proportional Relationships - Graph a proportional relationship
- Proportional Relationships - Find the constant of variation: graphs
- Rate - Understand that rate compares two quantities of different kinds of units and learn how to express rates as unit rates. Work through the lesson, experiment with the interactive figure and then answer ten questions. [caution: do not submit until you have selected answers to all of the questions]
- Rate of Change Practice Problems - Nine practice problems (including a graph to interpret) followed by an answer key
- Rate of Change: Connecting Slope to Real Life - A lesson on using slope to find the rate of change accompanied by graphs and explanations
- Ratios and Proportions - Solve proportions: word problems
- Ratios and Proportions - Solve proportions
- Ratios and Proportions - Do the ratios form a proportion: word problems
- Ratios and Proportions - Do the ratios form a proportion?
- Ratios and Proportions - Unit rates
- Reading Charts and Graphs - Read bar graphs, pie charts, and grid charts, review percentages in pie charts, and compare types of information shown in different kinds of charts
- Representations of Linear Functions: Cab fares - The charge for a cab ride is determined by the length of travel. Some companies also charge a basic, fixed amount. At this site you will find descriptions of fares of three competing cab companies and you are asked to suggest a new fare.
- Slope Intercept Jeopardy - This show is in Jeopardy format, with five categories each containing five questions - good for a whole-class review
- Straight Lines and Slopes - y-Intercept - Designed as a bellwork assignment, this show gives practice at reading slope and intercept from graphs [15 slides]. Explanation from Regents Prep assessment preparation site [from the Internet archive, the Wayback Machine.]
- Tallying, collecting and grouping data - Find out how to tally, collect and group data in this activity.
- The Hot Tub - This is a fun activity where students tell the story behind a graph and relate slope to rate of change.
- Using the X and Y Intercept to Graph Linear Equations - This follow up to the lesson on slope and rate of change deals with linear equations written in standard form rather than slope intercept form
- Word Problems with Katie - Two levels are available: addition and subtraction in level 1, and multiplication and division in level 2 - questions start out easy and get a little harder as you go
In our exhibition Time and Navigation visitors can set their watches by a working cesium frequency standard, commonly known as an “atomic clock,” on loan from the National Museum of American History. The exhibit allows visitors to see different methods of measuring time, including mechanical and electrical clocks. A digital display on the atomic clock shows the global reference known as the Coordinated Universal Time or UTC. A separate display connected to the clock shows local time, which visitors can use to set their watches. While the device is not connected to outside time sources, it will keep accurate time within a tiny fraction of a second over the foreseeable future. We jokingly called it our “Box of Time.”
What is an Atomic Clock?
Atomic clocks maintain very stable time references at specialized laboratories such as the U.S. Naval Observatory and the National Institute of Standards and Technology. The time is distributed all over the world by satellites, radio signals, fiber optic connections, and computer networks. These time standards are essential for synchronizing data connections, communications, transportation, and countless other aspects of modern society.
Atomic “clocks” can be more precisely called frequency standards. They maintain stable frequencies by measuring changes in the energy state of heavy elements such as cesium. These devices know exactly the length of each second with a precision of a billionth of a second. The unit sends out pulses exactly one second apart. By itself, the frequency standard doesn’t actually know the time of day. Keeping track of that requires a second piece of equipment: A time code generator. This device takes the pulses from the frequency standard to keep track of hours, minutes, and seconds. (Jump to minute 17:30 of this STEM in 30 episode to see the U.S. Naval Observatory and learn more about how an atomic clock works.)
Setting Up the Clock in the Museum
Our atomic clock was the last thing to be installed in our exhibition in 2014. Before it could be installed, we needed the frequency standard to be calibrated because each atomic clock can run at a slightly different rate. To determine how ours was working, we wanted to compare its operation to the national reference time. Fortunately for us, this originates right in Washington, DC at the U.S. Naval Observatory. The staff there agreed to let us bring our clock in for calibration. It was a tricky procedure. We had to calibrate both the frequency standard and the time code generator and then bring them back to the Museum. During all this, electrical power to the clock had to be maintained. We brought along a battery power unit used for computers. Along with the internal battery backup in the frequency standard, we hoped this would give us about 90 minutes of power. To be safe, we planned to plug in the whole system to the vehicle’s power supply.
With a plan in place, we picked up the clock from the National Museum of American History, along with its curator Roger Sherman. The unit had a helpful note on top that said, “Roger’s Atomic Clock.” The battery backup worked flawlessly as we made our way up Massachusetts Avenue to the Naval Observatory.
Once at the Observatory’s time service building, we plugged in the necessary cables to compare our clock with the U.S. master clock. The initial comparison showed that our clock was running about 24 nanoseconds (billionths of a second) slow. After a couple of hours, this offset had changed to less than a nanosecond. This told us the frequency standard was running well. Over the next 10 years it will drift out of sync with the national time reference by less than 1/10,000th of a second. That sounded good enough for museum visitors to set their watches. While there, we also set the time of day on the time code generator.
We packed up the frequency standard, the time code generator, the battery backup, and began the drive back to the Museum. I was behind the wheel with the power supply plugged into the dashboard. Roger was in the back seat with the equipment. At one point, driving down Independence Avenue, something began to emit ear-splitting cries. Roger and I tried to determine which piece of equipment was complaining. It turned out to be the overloaded power supply. I pulled the plug out of the dashboard port, which was so hot it almost burned my fingers. Then the UPS on the floor started beeping loudly because it wasn’t getting power. Everything was confusion in the vehicle as we shouted above the noisy equipment while checking all the units and cables. But after that brief moment of excitement, we had enough juice in the battery backup to make it the rest of the way to the Museum. After some careful coordination with all the cables, we got it mounted in its display case where it continues to display the time.
This wouldn’t be the last time we needed to adjust our atomic clock. In June 2015 we had to account for a leap second. In my next post I’ll explain what a leap second is and how we updated our atomic clock.
A special thanks to everyone at the U.S. Naval Observatory, the people at Symmetricom (now Microsemi) who manufactured the clock, and Roger Sherman at the National Museum of American History.
Andrew Johnston was a research associate in our Center for Earth and Planetary Studies. He is now the Vice President of Astronomy & Collections at the Adler Planetarium in Chicago.
Last October, we announced that we had acquired the collection of Sally K. Ride, the first American woman in space. Now, we can share that the archival portion of the collection has been processed and is available for research! See our finding aid for more detailed information.
The Sally K. Ride collection consists of more than 23 cubic feet of papers, photographs, certificates, and film created or collected by Ride chronicling her career from the 1970s through the 2010s. The papers document Ride’s lifetime of professional achievements and include material relating to her astronaut training and duties; her contributions to space policy; her work as a physicist; and her work as an educator.
A significant portion of the collection highlights her iconic role as a NASA astronaut from 1978 to 1987. Ride spent 343 hours in space, as a mission specialist on space shuttle missions STS-7 and STS-41G, where she operated a variety of orbiter systems and experiment payloads. She also operated the Remote Manipulator System (RMS) arm to maneuver, release, and retrieve a free-flying satellite.
But Ride’s NASA career and legacy extend well beyond her missions in space. Ride was training for her third flight when the Space Shuttle Challenger disaster occurred, and she was named to the Rogers Commission, the presidential commission investigating the accident. Ride later served on the Columbia Accident Investigation Board as well. She was the only person assigned to both shuttle disaster committees that investigated the causes and recommended remedies after the tragic losses.
In 1987, Ride left NASA to become a full-time educator. The collection mirrors those professional changes with material relating to her work as a physics professor at University of California at San Diego (UCSD) and later endeavors to improve science education for elementary and middle school students, with a special focus on science education for girls.
The Museum is proud to play a role in securing Ride’s legacy by making this collection available to researchers for years to come. And, on a personal note, it was a wonderful honor to process the papers. I leave you with my favorite image from the collection. It shows a very young Sally Ride looking at a book. A “thought bubble” caption has been adhered to the photo as though Ride is reading a technical manual. I found this image attached on the inside cover of one of her STS-7 manuals.
Patti Williams is the acquisition archivist for the National Air and Space Museum
If you were going to fly non-stop for 33½ hours, what kind of chair would you want to sit in?
For Charles Lindbergh, it was this simple wicker chair. The Ryan NYP Spirit of St. Louis was a modified version of the Ryan M-2 aircraft created specifically for the long flight from New York to Paris. In an effort to save weight, Lindbergh opted for this wicker seat for the historic flight. Discover more about the Ryan NYP Spirit of St. Louis.
Tom Paone is a Museum Technician in the Aeronautics Department at the National Air and Space Museum.
Who would think that a damaged, old leather glove, with the thumb badly torn, could be a valuable item? But if that damaged glove belonged to Luftwaffe pilot Günther Rall, who, with 275 aerial victories, was the third-highest scoring ace in aviation history, then it becomes an item of unique historic value. And now that item has found a home at the National Air and Space Museum. In addition to the glove, the Museum also received Rall’s diary from 1942, documenting his actions on the Eastern Front, and a portrait of the pilot in summer 1945, created by another prisoner of war, Wolfgang Willrich, during their time in captivity in Fouquainville, France.
Günther Rall was born in 1918 at the end of World War I and became a pilot with the Luftwaffe in 1938. During World War II, he fought in the skies over France, Great Britain, Yugoslavia, Greece, Russia, and later in the air defense over Germany against the American and British strategic bombardment campaign—always flying the Messerschmitt Bf 109. In November 1941, after 37 air victories, Rall was shot down for the first time and rescued by a German tank crew, his back broken in three places. Told that he would never be able to walk (let alone fly) again, Rall returned to combat just one year later.
In April 1944, Major Günther Rall was made Group Commander of the 2nd group of Fighter Wing 11, defending the skies over Germany against the overwhelming powers of the Allied Air Forces. At that time, the Allies had seven to 10 times more aircraft in the air over Germany than Germany did. Even worse, U.S. pilots had about 400 flight hours of training when they were sent into battle, while German pilots, due to lack of instructors and fuel, had almost none. Many of these young, inexperienced German pilots were shot down before their 10th sortie.
On May 12, 1944, Rall led his group against an American air raid. His pilots flew two different aircraft types. Some flew Me 109s with engines equipped with special superchargers that allowed them to reach altitudes of 8,000 to 10,000 meters, where they were able to attack the P-51 Mustangs and P-47 Thunderbolts that protected Allied bomber units. Other pilots flew Fw 190s and attacked the lower-flying U.S. bomber aircraft. Rall shot down two Thunderbolts, but then other P-47s arrived. One of them fired at Rall’s Me 109. Bullets from a .50 caliber machine gun hit his cockpit, his engine, his radiator, and his left hand on the control stick, shooting off his thumb. The glove donated to the Museum is the very glove worn by Rall during that engagement, and it clearly shows the damage from the machine gun round. Günther Rall bailed out and landed in a field. He was taken to a hospital, where his left thumb was amputated. Due to infections, he was not able to fly for months.
The air battles of that day marked the beginning of a systematic U.S. offensive against the German fuel industry, one of the weakest links in the German war economy. The 8th and 9th USAAF with 886 bombers, and 980 accompanying fighters, flew attacks against refineries and production sites for synthetic fuel in the heart of Germany. Facing heavy German resistance, the U.S. lost 46 bombers and 12 fighters. On the German side, 28 pilots were killed and 26 wounded that day, among them was the entirety of Rall’s group. Later, Albert Speer, Reich Minister of Armament and War Production, would declare: “On that day, the fate of Germany’s technical warfare was decided.”
In November 1944, Rall returned to active duty. He spent the last months of the war with Fighter Wing 300, which mostly sat idle due to lack of fuel and supplies. At the end of the war, after 621 missions flown, 275 confirmed aerial victories, shot down eight times, and wounded three times, Rall became a prisoner of war of the American Forces. Released in August 1945, he had to adjust to civilian life and became a representative for the Siemens Company. In 1956, he joined the newly established Armed Forces of the Federal Republic with the rank of Major in the Luftwaffe. He was put in charge of modifying the F-104 fighter jet for the Luftwaffe’s requirements and worked his way to the position of the Luftwaffe’s Inspector General, a post he held from 1971 to 1974. That year, he was made the German military representative in NATO’s Military Committee at Brussels, with the rank of Lieutenant General.
In 1977, Günther Rall visited a meeting of U.S. fighter pilots. While inquiring about the 1944 incident where he lost his thumb, he learned that he had encountered the notorious “Wolf Pack” on that fateful day: the 56th Fighter Group under Col. Hubert “Hub” Zemke. Zemke’s pilots were by far the most successful American fighter group in the European theatre, and Zemke himself was known as a supreme tactician. From that meeting, a close friendship developed between Rall, Hub Zemke, Zemke’s 2nd Lieutenant Robert “Shortie” Rankin, and other U.S. pilots. During his visits to the U.S., Rall frequently gave talks about his life as a pilot, often together with U.S. pilots like Hub Zemke or Chuck Yeager. In May 1996, he joined the Gathering of Eagles at the Museum and talked about his wartime experiences. In 2003, he was made an honorary member of the prestigious Society of Experimental Test Pilots, and one year later published his memoirs, Mein Flugbuch [edited by Kurt Braatz, Moosburg/Germany: Edition NeunundzwanzigSechs]. In them, the third-highest scoring ace of all time said:
“Nothing is further from my mind than to join into the praise for the last Knights of the Air which you hear so often when people talk about World War II fighter pilots. The sober truth […] is that we fought each other for life and death, although we wanted nothing but to live, and that these fights became the more relentless the longer this terrible war lasted. […] War is not the continuation of politics with other means, but an infamy; it is the utter failure of political action.”
Günther Rall died in 2009. The Museum plans to incorporate his glove, his diary, and his portrait in a new exhibition on World War II.
Evelyn Crellin is the curator for European Aviation in the Museum’s Aeronautics Department
As the curator for the Museum’s Martin B-26B Marauder, I’ve become obsessed with the proper way to designate the name given to it by its first pilot Jim Farrell in August 1943. It all centers on the pesky use of a hyphen. Is it Flak Bait or Flak-Bait? You see both in archival documents, historical references and books, and all over the internet. Which one is correct? In my quest to get that one detail right, I learned that the use of the term “flak bait” referred to more than just the name of the World War II medium bomber that is now undergoing preservation treatment in the Mary Baker Engen Restoration Hangar at the Steven F. Udvar-Hazy Center.
American aircrew that were going into combat would describe themselves as “flak bait,” meaning they were at the mercy of enemy anti-aircraft artillery. “Flak,” which was short for the German Fliegerabwehrkanone, or literally “flyer defense cannon,” was the primary threat to bomber crews over their targets. U.S. Army Air Forces Ninth Air Force medium bomber crews, specifically those flying the B-26 Marauder, adopted that name collectively for themselves as they risked their lives over Nazi-occupied Europe.
At least three other American aircraft went into battle over Europe with the name Flak Bait in World War II. A Douglas C-47 Dakota in the 437th Troop Carrier Group of the Ninth was one. Lt. Bill Barlow of the 353rd Fighter Group in the Eighth Air Force named his Republic P-47D Thunderbolt Flak Bait because it always came back from a mission with a few holes somewhere on the airframe. I also found one mention of an Eighth Air Force Boeing B-17 Flying Fortress that carried the name.
Does anyone have a personal connection with these aircraft, have any more details, or know of any other World War II aircraft that flew with the name Flak Bait?
How did the Museum’s B-26 get its name? Pilot Jim Farrell took inspiration from the nickname his brother gave to Boots the family dog back home, “Flea Bait,” and adapted it to reflect the combat environment over Western Europe.
With the approval of the crew, Farrell took those two words and sketched them popping out of a flak burst. Squadron artist Ted Simonaitis painted the now iconic nose art in yellow, red, and white on the left forward fuselage. See the narrow hyphen between “Flak” and “Bait?” While combat crews rightly called themselves “flak bait” and there were other aircraft that carried the name, there is only one Flak-Bait, the airplane that flew more missions than any other American aircraft during World War II.
Jeremy Kinney is a curator in the Aeronautics Department at the National Air and Space Museum.
The Viceroyalty of New Spain (Spanish: Virreinato de Nueva España, Spanish pronunciation: [birei̯ˈnato ðe ˈnweβa esˈpaɲa]) was an integral territorial entity of the Spanish Empire, established by Habsburg Spain during the Spanish colonization of the Americas. It covered a huge area that included territories in North America, South America, Asia and Oceania. It originated in 1521 after the fall of Mexico-Tenochtitlan, the main event of the Spanish conquest, which did not properly end until much later, as its territory continued to grow to the north. It was officially created on 8 March 1535 as a viceroyalty (Spanish: virreinato), the first of four viceroyalties Spain created in the Americas. Its first viceroy was Antonio de Mendoza y Pacheco, and the capital of the viceroyalty was Mexico City, established on the site of the ancient Mexico-Tenochtitlan.
It included what is now Mexico plus the current U.S. states of California, Nevada, Colorado, Utah, New Mexico, Arizona, Texas, Oregon, Washington, Florida and parts of Idaho, Montana, Wyoming, Kansas, Oklahoma and Louisiana; as well as the southwestern part of British Columbia of present-day Canada; plus the Captaincy General of Guatemala (which included the current countries of Guatemala, the Mexican state of Chiapas, Belize, Costa Rica, El Salvador, Honduras, Nicaragua); the Captaincy General of Cuba (current Cuba, Dominican Republic, Puerto Rico, Trinidad and Tobago and Guadeloupe); and the Captaincy General of the Philippines (including the Philippines, Guam, the Caroline Islands, the Mariana Islands and the short lived Spanish Formosa in modern-day northern Taiwan).
The political organization divided the viceroyalty into kingdoms and captaincies general. The kingdoms were those of New Spain (different from the viceroyalty itself); Nueva Galicia (1530); Captaincy General of Guatemala (1540); Nueva Vizcaya (1562); New Kingdom of León (1569); Santa Fe de Nuevo México (1598); Nueva Extremadura (1674) and Nuevo Santander (1746). There were four captaincies: Captaincy General of the Philippines (1574), Captaincy General of Cuba, Captaincy General of Puerto Rico and Captaincy General of Santo Domingo. These territorial subdivisions had a governor and captain general (who in New Spain was the viceroy himself, who added this title to his other dignities). In Guatemala, Santo Domingo and Nueva Galicia, these officials were called presiding governors, since they presided over royal audiencias (high courts). For this reason, these audiencias were considered "praetorial."
There were two great estates. The most important was the Marquisate of the Valley of Oaxaca, property of Hernán Cortés and his descendants that included a set of vast territories where marquises had civil and criminal jurisdiction, and the right to grant land, water and forests and within which were their main possessions (cattle ranches, agricultural work, sugar mills, fulling houses and shipyards). The other estate was the Duchy of Atlixco, granted in 1708, by King Philip V to José Sarmiento de Valladares, former viceroy of New Spain and married to the Countess of Moctezuma, with civil and criminal jurisdiction over Atlixco, Tepeaca, Guachinango, Ixtepeji and Tula de Allende. King Charles III introduced reforms in the organization of the viceroyalty in 1786, known as Bourbon reforms, which created the intendencias, which allowed to limit, in some way, the viceroy's attributions.
New Spain developed highly regional divisions, reflecting the impact of climate, topography, indigenous populations, and mineral resources. The areas of central and southern Mexico had dense indigenous populations with complex social, political, and economic organization. The northern area of Mexico, a region of nomadic and semi-nomadic indigenous populations, was not generally conducive to dense settlements, but the discovery of silver in Zacatecas in the 1540s drew settlement there to exploit the mines. Silver mining not only became the engine of the economy of New Spain, but vastly enriched Spain and transformed the global economy. New Spain was the New World terminus of the Philippine trade, making the viceroyalty a vital link between Spain's New World empire and its Asian empire.
From the beginning of the 19th century, the viceroyalty fell into crisis, aggravated by the Peninsular War and its direct consequence in the viceroyalty, the political crisis in Mexico in 1808, which brought down the government of viceroy José de Iturrigaray and later gave rise to the Conspiracy of Valladolid and the Conspiracy of Querétaro. The latter was the direct antecedent of the Mexican War of Independence, which, when it concluded in 1821, dissolved the viceroyalty and gave way to the Mexican Empire, in which Agustín de Iturbide would finally be crowned.
Viceroyalty of New Spain (Virreinato de la Nueva España)
- Motto: Plus Ultra
- Anthem: Marcha Real
- Map caption: Maximum extension of the Viceroyalty of New Spain, with the incorporation of Louisiana (1764–1803). In light green, the territory not controlled effectively but claimed as part of the Viceroyalty.
- Common languages: Spanish (official), Nahuatl, Mayan, Indigenous languages, French (Spanish Louisiana), Philippine languages
- Monarchs: Charles I (first); Ferdinand VII (last)
- Viceroys: Antonio de Mendoza (first); Juan O'Donojú, political chief superior (not viceroy)
- Legislature: Council of the Indies
- Historical era: Colonial era
- Key dates: Viceroyalty created; 27 May 1717; acquisition of Louisiana from France; 1 October 1800; 22 February 1819; Trienio Liberal abolished the viceroyalty of New Spain; 31 May 1820
- Population: 5 to 6.5 million
- Currency: Spanish colonial real
The Kingdom of New Spain was established following the Spanish conquest of the Aztec Empire in 1521 as a New World kingdom dependent on the Crown of Castile, since the initial funds for exploration came from Queen Isabella. Although New Spain was a dependency of Spain, it was a kingdom, not a colony, subject to the presiding monarch on the Iberian Peninsula. The monarch had sweeping power in the overseas territories:
The king possessed not only the sovereign right but the property rights; he was the absolute proprietor, the sole political head of his American dominions. Every privilege and position, economic, political, or religious, came from him. It was on this basis that the conquest, occupation, and government of the [Spanish] New World was achieved.
The Viceroyalty of New Spain was established in 1535 in the Kingdom of New Spain. It was the first New World viceroyalty and one of only two in the Spanish empire until the 18th century Bourbon Reforms.
The Spanish Empire comprised the territories of the northern overseas 'Septentrion', from North America and the Caribbean to the Philippine, Mariana and Caroline Islands. At its greatest extent, the Spanish crown claimed on the mainland of the Americas much of North America south of Canada, that is: all of present-day Mexico and Central America except Panama; most of the present-day United States west of the Mississippi River, plus the Floridas.
To the west of the continent, New Spain also included the Spanish East Indies (the Philippine Islands, the Mariana Islands, the Caroline Islands, parts of Taiwan, and parts of the Moluccas). To the east of the continent, it included the Spanish West Indies (Cuba, Hispaniola (comprising the modern states of Haiti and the Dominican Republic), Puerto Rico, Jamaica, the Cayman Islands, Trinidad, and the Bay Islands).
Until the 18th century, when Spain saw its claims in North America threatened by other European powers, much of what were called the Spanish borderlands consisted of territory now part of the United States. This was not occupied by many Spanish settlers and was considered more marginal to Spanish interests than the most densely populated and lucrative areas of central Mexico. To shore up its claims in North America, starting in the late 18th century Spanish expeditions to the Pacific Northwest explored and claimed the coast of what is now British Columbia and Alaska. On the mainland, the administrative units included Las Californias, that is, the Baja California peninsula, still part of Mexico and divided into Baja California and Baja California Sur; Alta California (present-day Arizona, California, Nevada, Utah, western Colorado, and southern Wyoming); (from the 1760s) Louisiana (including the western Mississippi River basin and the Missouri River basin); Nueva Extremadura (the present-day states of Coahuila and Texas); and Santa Fe de Nuevo México (parts of Texas and New Mexico).
The Caribbean islands and early Spanish explorations around the circum-Caribbean region had not been of major political, strategic, or financial importance until the conquest of the Aztec Empire in 1521. However, important precedents of exploration, conquest, settlement, and crown rule had been initially worked out in the Caribbean, and they long affected subsequent regions, including Mexico and Peru. The indigenous societies of Mesoamerica brought under Spanish control were of unprecedented complexity and wealth compared with what the Spanish had encountered in the Caribbean. This presented both an important opportunity and a potential threat to the power of the Crown of Castile, since the conquerors were acting independently of effective crown control. These societies could provide the conquistadors, especially Hernán Cortés, a base from which they could become autonomous, or even independent, of the Crown.
As a result, the Holy Roman Emperor and King of Spain, Charles V, created the Council of the Indies[Note 1] in 1524 as the crown entity to oversee the crown's interests in the New World. Since the time of the Catholic Monarchs, central Iberia had been governed through councils appointed by the monarch with particular jurisdictions. The Council of the Indies thus became another, but extremely important, advisory body to the monarch.
The crown had set up the Casa de Contratación (House of Trade) in 1503 to regulate contacts between Spain and its overseas possessions. A key function was to gather information about navigation to make trips less risky and more efficient. Philip II sought systematic information about his overseas empire and mandated reports, known as the Relaciones geográficas, with text on topography, economic conditions, and populations among other information. They were accompanied by maps of the area discussed, many of which were drawn by indigenous artists. The Francisco Hernández Expedition (1570–77), the first scientific expedition to the New World, was sent to gather information on medicinal plants and practices.
The crown created the first mainland high court, or Audiencia, in 1527 to regain control of the administration of New Spain from Cortés, who, as the premier conqueror of the Aztec empire, was ruling in the name of the king but without crown oversight or control. An earlier Audiencia had been established in Santo Domingo in 1526 to deal with the Caribbean settlements. That court, housed in the Casas Reales in Santo Domingo, was charged with encouraging further exploration and settlement with the authority granted it by the crown. Management by the Audiencia, which was expected to make executive decisions as a body, proved unwieldy. Therefore, in 1535, King Charles V named Don Antonio de Mendoza as the first Viceroy of New Spain.
Because the Roman Catholic Church had played such an important role in the Reconquista (Christian reconquest) of the Iberian peninsula from the Moors, the Church in essence became another arm of the Spanish government. The Spanish Crown granted it a large role in the administration of the state, and this practice became even more pronounced in the New World, where prelates often assumed the role of government officials. In addition to the Church's explicit political role, the Catholic faith became a central part of Spanish identity after the conquest of the last Muslim kingdom in the peninsula, the Emirate of Granada, and the expulsion of all Jews who did not convert to Christianity.
The conquistadors brought with them many missionaries to promulgate the Catholic religion. Amerindians were taught the Roman Catholic religion and the language of Spain. Initially, the missionaries hoped to create a large body of Amerindian priests, but this did not come to be. Moreover, efforts were made to preserve those aspects of Amerindian culture that did not violate Catholic traditions. As an example, most Spanish priests committed themselves to learning the most important Amerindian languages (especially during the 16th century) and wrote grammars so that missionaries could learn the languages and preach in them. This was similarly practiced by the French colonists.
At first, conversion seemed to be happening rapidly. The missionaries soon found that most of the natives had simply adopted "the god of the heavens," as they called the Christian god, as just another one of their many gods. While they often held the Christian god to be an important deity because it was the god of the victorious conquerors, they did not see the need to abandon their old beliefs. As a result, a second wave of missionaries began an effort to completely erase the old beliefs, which they associated with the ritualized human sacrifice found in many of the native religions, eventually putting an end to this practice common before the arrival of the Spaniards. In the process many artifacts of pre-Columbian Mesoamerican culture were destroyed. Hundreds of thousands of native codices were burned, native priests and teachers were persecuted, and the temples and statues of the old gods were torn down. Even some foods associated with the native religions, like amaranth, were forbidden.
Many clerics, such as Bartolomé de las Casas, also tried to protect the natives from de facto and outright enslavement to the settlers, and obtained from the Crown decrees and promises to protect native Mesoamericans, most notably the New Laws. Unfortunately, the royal government was too far away to fully enforce them, and many abuses against the natives, even among the clergy, continued. Eventually, the Crown declared the natives to be legal minors and placed them under the guardianship of the Crown, which was responsible for their indoctrination. It was this status that barred the native population from the priesthood. During the following centuries, under Spanish rule, a new culture developed that combined the customs and traditions of the indigenous peoples with those of Catholic Spain. Numerous churches and other buildings were constructed by native labor in the Spanish style, and cities were named after various saints or religious topics such as San Luis Potosí (after Saint Louis) and Vera Cruz (the True Cross).
The Spanish Inquisition, and its New Spanish counterpart, the Mexican Inquisition, continued to operate in the viceroyalty until Mexico declared its independence. During the 17th and 18th centuries, the Inquisition worked with the viceregal government to block the diffusion of liberal ideas during the Enlightenment, as well as the revolutionary republican and democratic ideas of the United States War of Independence and the French Revolution.
Even before the establishment of the viceroyalty of New Spain, conquerors in central Mexico founded new Spanish cities and embarked on further conquests, a pattern that had been established in the Caribbean. In central Mexico, the Aztec capital of Tenochtitlan was transformed into the main settlement of the territory; thus, the history of Mexico City is of huge importance to the whole colonial enterprise. Spaniards founded a new settlement at Puebla de los Angeles (founded 1531) at the midway point between Mexico City (founded 1521–24) and the Caribbean port of Veracruz (1519). Colima (1524), Antequera (1526, now Oaxaca City), and Guadalajara (1532) were all new Spanish settlements. North of Mexico City, the city of Querétaro was founded (ca. 1531) in what was called the Bajío, a major zone of commercial agriculture. Guadalajara was founded northwest of Mexico City (1531–42) and became the dominant Spanish settlement in the region. West of Mexico City the settlement of Valladolid (Michoacan) was founded (1529–41). In the densely indigenous South, as noted, Antequera (1526) became the center of Spanish settlement in Oaxaca; Santiago de Guatemala was founded in 1524; and in Yucatán, Mérida (1542) was founded inland, with Campeche founded as a small Caribbean port in 1541. There was sea trade between Campeche and Veracruz. During the first twenty years, before the establishment of the viceroyalty, some of the important cities of the colonial era that remain important today were founded. The discovery of silver in Zacatecas in the far north was a transformative event. The settlement of Zacatecas was founded in 1547 deep in the territory of the nomadic and fierce Chichimeca, whose resistance to the Spanish presence led to the protracted Chichimeca War.
During the 16th century, many Spanish cities were established in North and Central America. Spain attempted to establish missions in what is now the southern United States including Georgia and South Carolina between 1568 and 1587. These efforts were mainly successful in the region of present-day Florida, where the city of St. Augustine was founded in 1565, the oldest European city in the United States.
Upon his arrival, Viceroy Don Antonio de Mendoza vigorously took up the duties entrusted to him by the King and encouraged the exploration of Spain's new mainland territories. He commissioned the expeditions of Francisco Vásquez de Coronado into the present-day American Southwest in 1540–1542. The Viceroy commissioned Juan Rodríguez Cabrillo to lead the first Spanish exploration up the Pacific coast in 1542–1543. Cabrillo sailed far up the coast, becoming the first European to see present-day California, United States. The Viceroy also sent Ruy López de Villalobos to the Spanish East Indies in 1542–1543. As these new territories were brought under control, they came under the purview of the Viceroy of New Spain. Spanish settlers expanded into Nuevo México, and the major settlement of Santa Fe was founded in 1610.
The establishment of religious missions and military presidios on the northern frontier became the nucleus of Spanish settlement and the founding of Spanish towns.
Seeking to develop trade between the East Indies and the Americas across the Pacific Ocean, Miguel López de Legazpi established the first Spanish settlement in the Philippine Islands in 1565, which became the town of San Miguel (present-day Cebu City). Andrés de Urdaneta discovered an efficient sailing route from the Philippine Islands to Mexico that took advantage of the Kuroshio Current. In 1571, the city of Manila became the capital of the Spanish East Indies, with trade soon beginning via the Manila–Acapulco galleons. The Manila–Acapulco trade route shipped products such as silk, spices, silver, porcelain, and gold from Asia to the Americas. Recent scholarship indicates growing interest in this trans-Pacific trade. The importance of the Philippines to the Spanish empire can be seen in its establishment as a separate captaincy general.
The Spanish crown created a system of convoys of ships (called the flota) to prevent attacks by European privateers. English and Dutch pirates and privateers carried out isolated attacks on these shipments in the Gulf of Mexico and the Caribbean Sea. One such act of piracy was led by Francis Drake in 1586, and another by Thomas Cavendish in 1587. In one episode, the cities of Huatulco (Oaxaca) and Barra de Navidad, in the province of Jalisco, were sacked. However, these maritime routes, both across the Pacific and the Atlantic, were successful in the defensive and logistical role they played in the history of the Spanish Empire. For over three centuries the Spanish Navy escorted the galleon convoys that sailed around the world.
Don Lope Díez de Armendáriz, born in Quito (in present-day Ecuador), was the first Viceroy of New Spain born in the 'New World'. He formed the 'Navy of Barlovento' (Armada de Barlovento), based in Veracruz, to patrol coastal regions and protect the harbors, port towns, and trade ships from pirates and privateers.
After the conquest of central Mexico, there were only two major Indian revolts challenging Spanish rule. In the Mixtón War of 1541, the viceroy Don Antonio de Mendoza led an army against an uprising by the Caxcanes. In the 1680 Pueblo Revolt, Indians in 24 settlements in New Mexico expelled the Spanish, who withdrew to Texas, an exile lasting a decade. The Chichimeca War lasted over fifty years, 1550–1606, between the Spanish and various indigenous groups of northern New Spain, particularly in the silver mining regions and along the transportation trunk lines. Non-sedentary or semi-sedentary northern Indians were difficult to control once they acquired the mobility of the horse. In 1616, the Tepehuan revolted against the Spanish, but the revolt was relatively quickly suppressed. The Tarahumara Indians were in revolt in the mountains of Chihuahua for several years. In 1670 Chichimecas invaded Durango, and the governor, Francisco González, abandoned its defense.
In the southern area of New Spain, the Tzeltal Maya and other indigenous groups, including the Tzotzil and Chol, revolted in 1712. It was a multiethnic revolt sparked by religious issues in several communities. In 1704 the viceroy Francisco Fernández de la Cueva suppressed a rebellion of Pima Indians in Nueva Vizcaya.
During the era of the conquest, in order to pay off the debts incurred by the conquistadors and their companies, the new Spanish governors awarded their men grants of native tribute and labor, known as encomiendas. In New Spain these grants were modeled after the tribute and corvee labor that the Mexica rulers had demanded from native communities. This system came to signify the oppression and exploitation of natives, although its originators may not have set out with such intent. In short order the upper echelons of patrons and priests in the society lived off the work of the lower classes. Due to some horrifying instances of abuse against the indigenous peoples, Bishop Bartolomé de las Casas suggested bringing black slaves to replace them. Fray Bartolomé later repented when he saw the even worse treatment given to the black slaves.
In Peru, the system of forced labor known as the mit'a was perpetuated by the other great discovery, the enormously rich silver mine at Potosí, but in New Spain labor recruitment differed significantly. With the exception of silver mines worked in the Aztec period at Taxco, southwest of Tenochtitlan, Mexico's mining regions were outside the area of dense indigenous settlement. The mines in the north of Mexico relied on a workforce of black slave labor and indigenous wage labor, not draft labor. The indigenous workers drawn to the mining areas came from different regions of central Mexico, with a few from the north itself. With such diversity they did not have a common ethnic identity or language and rapidly assimilated to Hispanic culture. Although mining was difficult and dangerous, the wages were good, which is what drew the indigenous labor.
The Viceroyalty of New Spain was the principal source of income for Spain in the eighteenth century, with the revival of mining under the Bourbon Reforms. Important mining centers like Zacatecas, Guanajuato, San Luis Potosí, and Hidalgo had been established in the sixteenth century and suffered decline for a variety of reasons in the seventeenth century, but silver mining in Mexico outperformed all other Spanish overseas territories in revenues for the royal coffers.
The fast red dye cochineal was an important export from areas such as central Mexico and Oaxaca, in terms of both revenues to the crown and stimulation of the internal market of New Spain. Cacao and indigo were also important exports for New Spain, but owing to piracy and smuggling they were traded largely within and among the viceroyalties rather than directly with European countries. The indigo trade in particular, along with the smuggling it attracted, helped to temporarily unite communities throughout the Kingdom of Guatemala.
There were two major ports in New Spain: Veracruz, the viceroyalty's principal port on the Atlantic, and Acapulco on the Pacific, the terminus of the Manila Galleon. In the Philippines, Manila, on the South China Sea, was the main port. These ports were fundamental for overseas trade, anchoring a route that stretched from Asia, via the Manila Galleon, to the Spanish mainland.
These were ships that made voyages from the Philippines to Mexico, whose goods were then transported overland from Acapulco to Veracruz and later reshipped from Veracruz to Cádiz in Spain. Thus, the ships that set sail from Veracruz were generally loaded with merchandise from the East Indies originating from the commercial centers of the Philippines, plus the precious metals and natural resources of Mexico, Central America, and the Caribbean. During the 16th century, Spain received from New Spain gold and silver equivalent to roughly US$1.5 trillion (in 1990 terms).
However, these resources did not translate into development for the metropolis (mother country), due to the Spanish monarchy's frequent preoccupation with European wars (enormous amounts of this wealth were spent hiring mercenaries to fight the Protestant Reformation), as well as the incessant losses in overseas transportation caused by assaults from companies of British buccaneers, Dutch corsairs, and pirates of various origins. These companies were financed at first by the Amsterdam stock market, the first in history, whose origin owed precisely to the need for funds to finance pirate expeditions, and later by the London market. This is what some authors call the "historical process of the transfer of wealth from the south to the north."
The Bourbon monarchy embarked upon a far-reaching program to revitalize the economy of its territories, both on the peninsula and its overseas possessions. The crown sought to enhance its control and administrative efficiency, and to decrease the power and privilege of the Roman Catholic Church vis-a-vis the state.
The British capture and occupation of both Manila and Havana in 1762, during the global conflict of the Seven Years' War, meant that the Spanish crown had to rethink its military strategy for defending its possessions. The Spanish crown had engaged with Britain for a number of years in low-intensity warfare, with ports and trade routes harassed by English privateers. The crown had strengthened the defenses of Veracruz and San Juan de Ulúa, Jamaica, Cuba, and Florida, but the British had sacked ports in the late seventeenth century: Santiago de Cuba (1662), St. Augustine in Spanish Florida (1665), and Campeche (1678). With the loss of Havana and Manila, Spain realized it needed to take significant steps. The Bourbons created a standing army in New Spain, beginning in 1764, and strengthened defensive infrastructure, such as forts.
Seeking reliable information about New Spain, the crown dispatched José de Gálvez as Visitador General (inspector general) in 1765; he observed conditions needing reform in order to strengthen crown control over the kingdom.
An important feature of the Bourbon Reforms was that they ended the significant amount of local control that had characterized the bureaucracy under the Habsburgs, especially through the sale of offices. The Bourbons sought a return to the monarchical ideal of staffing the higher echelons of regional government with administrators not directly connected with local elites, who in theory would be disinterested. In practice this meant a concerted effort to appoint mostly peninsulares, usually military men with long records of service (as opposed to the Habsburg preference for prelates), who were willing to move around the global empire. The intendancies were one new office that could be staffed with peninsulares, but throughout the 18th century significant gains were made in the numbers of governors-captains general, audiencia judges, and bishops, in addition to other posts, who were Spanish-born.
In 1766, the crown appointed Carlos Francisco de Croix, marqués de Croix, as viceroy of New Spain. One of his early tasks was to implement the crown's decision to expel the Jesuits from all its territories, accomplished in 1767. Since the Jesuits had significant power, owning large, well-managed haciendas, educating New Spain's elite young men, and resisting crown control as a religious order, they were a major target for the assertion of crown authority. Croix closed the religious autos-de-fe of the Holy Office of the Inquisition to public viewing, signaling a shift in the crown's attitude toward religion. Another significant accomplishment under Croix's administration was the founding of the College of Surgery in 1768, part of the crown's push to introduce institutional reforms that regulated professions. The crown was also interested in generating more income for its coffers, and Croix instituted the royal lottery in 1769. Croix also initiated improvements in the capital and seat of the viceroyalty, increasing the size of its central park, the Alameda.
Another activist viceroy carrying out reforms was Antonio María de Bucareli y Ursúa, marqués de Valleheroso y conde de Jerena, who served from 1771 to 1779 and died in office. José de Gálvez, now Minister of the Indies following his service as Visitor General of New Spain, briefed the newly appointed viceroy about reforms to be implemented. In 1776, a new northern territorial division was established, the Commandancy General of the Provincias Internas (Commandancy General of the Internal Provinces of the North, Spanish: Comandancia y Capitanía General de las Provincias Internas). Teodoro de Croix (nephew of the former viceroy) was appointed the first Commander General of the Provincias Internas, independent of the Viceroy of New Spain, to provide better administration for the northern frontier provinces. They included Nueva Vizcaya, Nuevo Santander, Sonora y Sinaloa, Las Californias, Coahuila y Tejas (Coahuila and Texas), and Nuevo México. Bucareli was opposed to Gálvez's plan to implement the new administrative organization of intendancies, which he believed would burden sparsely populated areas with excessive costs for the new bureaucracy.
The new Bourbon kings did not split the Viceroyalty of New Spain into smaller administrative units as they did with the Viceroyalty of Peru, carving out the Viceroyalty of Río de la Plata and the Viceroyalty of New Granada, but New Spain was reorganized administratively and elite American-born Spanish men were passed over for high office. The crown also established a standing military, with the aim of defending its overseas territories.
The Spanish Bourbon monarchs' prime innovation was the introduction of intendancies, an institution emulating that of Bourbon France. They were first introduced on a large scale in New Spain in the 1770s by the Minister of the Indies José de Gálvez, who originally envisioned that they would replace the viceregal system (viceroyalty) altogether. With broad powers over tax collection and the public treasury and with a mandate to help foster economic growth in their districts, intendants encroached on the traditional powers of viceroys, governors, and local officials, such as the corregidores, which were phased out as intendancies were established. The Crown saw the intendants as a check on these other officers. Over time accommodations were made. For example, after a period of experimentation in which an independent intendant was assigned to Mexico City, the office was thereafter given to the same person who simultaneously held the post of viceroy. Nevertheless, the creation of scores of autonomous intendancies throughout the Viceroyalty produced a great deal of decentralization, and in the Captaincy General of Guatemala in particular, the intendancy laid the groundwork for the future independent nations of the 19th century. In 1780, Minister of the Indies José de Gálvez sent a royal dispatch to Teodoro de Croix, Commandant General of the Internal Provinces of New Spain (Provincias Internas), asking all subjects to donate money to help the American Revolution. Millions of pesos were given.
The focus on the economy (and the revenues it provided to the royal coffers) was also extended to society at large. Economic associations were promoted, such as the Economic Society of Friends of the Country. Similar "Friends of the Country" economic societies were established throughout the Spanish world, including Cuba and Guatemala.
The Bourbon Reforms were not a unified or entirely coherent program, but a series of crown initiatives designed to revitalize the economies of its overseas possessions and make administration more efficient and firmly under control of the crown. Record keeping improved and records were more centralized. The bureaucracy was staffed with well-qualified men, most of them peninsular-born Spaniards. The preference for them meant that there was resentment from American-born elite men and their families, who were excluded from holding office. The creation of a military meant that some American Spaniards became officers in local militias, but the ranks were filled with poor, mixed-race men, who resented service and avoided it if possible.
The first century that saw the Bourbons on the Spanish throne coincided with a series of global conflicts that pitted primarily France against Great Britain. Spain, as an ally of Bourbon France, was drawn into these conflicts. In fact, part of the motivation for the Bourbon Reforms was the perceived need to prepare the empire administratively, economically, and militarily for the next expected war. The Seven Years' War proved to be the catalyst for most of the reforms in the overseas possessions, just as the War of the Spanish Succession had been for the reforms on the Peninsula.
In 1720, the Villasur expedition from Santa Fe met and attempted to parley with French-allied Pawnee in what is now Nebraska. Negotiations were unsuccessful, and a battle ensued; the Spanish were badly defeated, with only thirteen managing to return to New Mexico. Although this was a small engagement, it is significant in that it was the deepest penetration of the Spanish into the Great Plains, establishing the limit of Spanish expansion and influence there.
The War of Jenkins' Ear broke out in 1739 between the Spanish and British and was confined to the Caribbean and Georgia. The main action of the war was a large amphibious attack launched by the British under Admiral Edward Vernon in March 1741 against Cartagena de Indias, one of Spain's major gold-trading ports in the Caribbean (in present-day Colombia). Although this episode is largely forgotten, it ended in a decisive victory for Spain, which managed to prolong its control of the Caribbean and indeed secure the Spanish Main until the 19th century.
Following the French and Indian War/Seven Years' War, British troops invaded and captured the Spanish cities of Havana in Cuba and Manila in the Philippines in 1762. The Treaty of Paris (1763) gave Spain control over the Louisiana part of New France, including New Orleans, creating a Spanish empire that stretched from the Mississippi River to the Pacific Ocean; but Spain also ceded Florida to Great Britain in order to regain Cuba, which the British had occupied during the war. In the bloodless Rebellion of 1768, Louisiana settlers, hoping to restore the territory to France, forced the Louisiana governor Antonio de Ulloa to flee to Spain. The rebellion was crushed in 1769 by the next governor, Alejandro O'Reilly, who executed five of the conspirators. The Louisiana territory was administered by superiors in Cuba, with a governor on site in New Orleans.
The 21 northern missions in present-day California (U.S.) were established along California's El Camino Real from 1769. In an effort to exclude Britain and Russia from the eastern Pacific, King Charles III of Spain sent forth from Mexico a number of expeditions to the Pacific Northwest between 1774 and 1793. Spain's long-held claims and navigation rights were strengthened, and a settlement and fort were built at Nootka Sound, on Vancouver Island.
Spain entered the American Revolutionary War as an ally of the United States and France in June 1779. From September 1779 to May 1781, Bernardo de Galvez led an army in a campaign along the Gulf Coast against the British. Galvez's army consisted of Spanish regulars from throughout Latin America and a militia made up mostly of Acadians, along with Creoles, Germans, and Native Americans. Galvez's army engaged and defeated the British in battles fought at Manchac and Baton Rouge, Louisiana; Natchez, Mississippi; Mobile, Alabama; and Pensacola, Florida. The loss of Mobile and Pensacola left the British with no bases along the Gulf Coast. In 1782, forces under Galvez's overall command captured the British naval base at Nassau on New Providence Island in the Bahamas. Galvez was angry that the operation had proceeded against his orders to cancel it, and ordered the arrest and imprisonment of Francisco de Miranda, aide-de-camp of Juan Manuel Cajigal, the commander of the expedition. Miranda later ascribed this action on the part of Galvez to jealousy of Cajigal's success.
In the second Treaty of Paris (1783), which ended the American Revolution, Great Britain returned control of Florida back to Spain in exchange for the Bahamas. Spain then had control over the Mississippi River south of 32°30' north latitude, and, in what is known as the Spanish Conspiracy, hoped to gain greater control of Louisiana and all of the west. These hopes ended when Spain was pressured into signing Pinckney's Treaty in 1795. France re-acquired Louisiana from Spain in the secret Treaty of San Ildefonso in 1800. The United States bought the territory from France in the Louisiana Purchase of 1803.
New Spain claimed the entire west coast of North America and therefore considered the Russian fur trading activity in Alaska, which began in the middle to late 18th century, an encroachment and threat. Likewise, the exploration of the northwest coast by Captain James Cook of the British Navy and the subsequent fur trading activities by British ships was considered an encroachment on Spanish territory. To protect and strengthen its claim, New Spain sent a number of expeditions to the Pacific Northwest between 1774 and 1793. In 1789, a naval outpost called Santa Cruz de Nuca (or just Nuca) was established at Friendly Cove in Nootka Sound (now Yuquot), Vancouver Island. It was protected by an artillery land battery called Fort San Miguel. Santa Cruz de Nuca was the northernmost establishment of New Spain. It was the first European colony in what is now the province of British Columbia and the only Spanish settlement in what is now Canada. Santa Cruz de Nuca remained under the control of New Spain until 1795, when it was abandoned under the terms of the third Nootka Convention. Another outpost, intended to replace Santa Cruz de Nuca, was partially built at Neah Bay on the southern side of the Strait of Juan de Fuca in what is now the U.S. state of Washington. Neah Bay was known as Bahía de Núñez Gaona in New Spain, and the outpost there was referred to as "Fuca." It was abandoned, partially finished, in 1792. Its personnel, livestock, cannons, and ammunition were transferred to Nuca.
In 1789, at Santa Cruz de Nuca, a conflict occurred between the Spanish naval officer Esteban José Martínez and the British merchant James Colnett, triggering the Nootka Crisis, which grew into an international incident and the threat of war between Britain and Spain. The first Nootka Convention averted the war but left many specific issues unresolved. Both sides sought to define a northern boundary for New Spain. At Nootka Sound, the diplomatic representative of New Spain, Juan Francisco de la Bodega y Quadra, proposed a boundary at the Strait of Juan de Fuca, but the British representative, George Vancouver, refused to accept any boundary north of San Francisco. No agreement could be reached, and the northern boundary of New Spain remained unspecified until the Adams–Onís Treaty with the United States (1819). That treaty also ceded Spanish Florida to the United States.
The Third Treaty of San Ildefonso ceded to France the vast territory that Napoleon then sold to the United States in 1803, known as the Louisiana Purchase. The United States obtained Spanish Florida in 1819 in the Adams–Onís Treaty. That treaty also defined a northern border for New Spain at the 42nd parallel north (now the northern boundary of the U.S. states of California, Nevada, and Utah).
In the 1821 Declaration of Independence of the Mexican Empire, both Mexico and Central America declared their independence after three centuries of Spanish rule and formed the First Mexican Empire, although Central America quickly rejected the union. After priest Miguel Hidalgo y Costilla's 1810 Grito de Dolores (call for independence), the insurgent army began an eleven-year war. At first, the Criollo class fought against the rebels. But in 1820, a military coup in Spain forced Ferdinand VII to accept the authority of the liberal Spanish Constitution. The specter of liberalism that could undermine the authority and autonomy of the Roman Catholic Church made the Church hierarchy in New Spain view independence in a different light. In an independent nation, the Church anticipated retaining its power. Royalist military officer Agustín de Iturbide proposed uniting with the insurgents with whom he had battled, and gained the alliance of Vicente Guerrero, leader of the insurgents in a region now bearing his name. Royal government collapsed in New Spain and the Army of the Three Guarantees marched triumphantly into Mexico City in 1821.
The new Mexican Empire offered the crown to Ferdinand VII or to a member of the Spanish royal family that he would designate. After the refusal of the Spanish monarchy to recognize the independence of Mexico, the ejército Trigarante (Army of the Three Guarantees), led by Agustín de Iturbide and Vicente Guerrero, cut all political and economic ties with Spain and crowned Iturbide as emperor Agustín of Mexico. Central America was originally planned to be part of the Mexican Empire; but it seceded peacefully in 1823, forming the United Provinces of Central America under the Constitution of 1824.
The Viceroyalty of New Spain united many regions and provinces of the Spanish Empire across half the world. On the North American mainland these included central Mexico, Nueva Extremadura, Nueva Galicia, the Californias, Nueva Vizcaya, Nuevo Reyno de León, Texas, and Nuevo Santander, as well as the Captaincy General of Guatemala.
In the Caribbean it included Cuba, Santo Domingo, most of the Venezuelan mainland and the other islands in the Caribbean controlled by the Spanish. In Asia, the Viceroyalty ruled the Captaincy General of the Philippines, which covered all of the Spanish territories in the Asia-Pacific region. The outpost at Nootka Sound, on Vancouver Island, was considered part of the province of California.
Therefore, the Viceroyalty's former territories include what are now the countries of Mexico, Guatemala, El Salvador, Honduras, Nicaragua, Belize, and Costa Rica; the United States regions of California, Texas, New Mexico, Arizona, Puerto Rico, Guam, the Northern Mariana Islands, Nevada, Utah, Colorado, Wyoming, and Florida; a portion of the Canadian province of British Columbia; the Caribbean territories of Cuba, the Dominican Republic and the rest of the island of Hispaniola, Jamaica, and Antigua and Barbuda; and the Asia-Pacific nations of the Philippine Islands, Palau, and the Caroline Islands.
The Viceroyalty was administered by a viceroy residing in Mexico City and appointed by the Spanish monarch, who had administrative oversight of all of these regions, although most matters were handled by the local governmental bodies, which ruled the various regions of the viceroyalty. First among these were the audiencias, which were primarily superior tribunals, but which also had administrative and legislative functions. Each of these was responsible to the Viceroy of New Spain in administrative matters (though not in judicial ones), but they also answered directly to the Council of the Indies.
Audiencia districts further incorporated the older, smaller divisions known as governorates (gobernaciones, roughly equivalent to provinces), which had been originally established by conquistador-governors known as adelantados. Provinces which were under military threat were grouped into captaincies general, such as the Captaincies General of the Philippines (established 1574) and Guatemala (established in 1609) mentioned above, which were joint military and political commands with a certain level of autonomy. (The viceroy was captain-general of those provinces that remained directly under his command).
At the local level there were over two hundred districts, in both Indian and Spanish areas, which were headed by either a corregidor (also known as an alcalde mayor) or a cabildo (town council), both of which had judicial and administrative powers. In the late 18th century the Bourbon dynasty began phasing out the corregidores and introduced intendants, whose broad fiscal powers cut into the authority of the viceroys, governors and cabildos. Despite their late creation, these intendancies so affected the formation of regional identity that they became the basis for the nations of Central America and the first Mexican states after independence.
The high courts, or audiencias, were established in major areas of Spanish settlement. In New Spain the high court was established in 1527, prior to the establishment of the viceroyalty. The First Audiencia was headed by Hernán Cortés's rival Nuño de Guzmán, who used the court to deprive Cortés of power and property. The First Audiencia was dissolved and the Second Audiencia established.
Captaincies general, with dates of creation:
1. Santo Domingo (1535)
2. Philippines (1574)
3. Puerto Rico (1580)
5. Guatemala (1609)
6. Yucatán (1617)
7. Commandancy General of the Provincias Internas (1776) (Analogous to a dependent captaincy general.)
As part of the sweeping eighteenth-century administrative and economic changes known as the Bourbon Reforms, the Spanish crown created new administrative units called intendancies. In New Spain, these units generally corresponded to the regions or provinces that had developed earlier in the Center, South, and North. In turn, many of the intendancy boundaries became Mexican state boundaries after independence.
|Year of creation||Intendancy|
|1764||Havana (Presumably, the West Florida intendancy fits here.)|
|Puerto Príncipe (separated from the Intendancy of Havana)|
|Santiago de Cuba (separated from the Intendancy of Havana)|
|San Luis Potosí|
In the colonial period, basic patterns of regional development emerged and strengthened. European settlement and institutional life were built in the Mesoamerican heartland of the Aztec Empire in Central Mexico. The South (Oaxaca, Michoacan, Yucatán, and Central America) was a region of dense Mesoamerican indigenous settlement, but without exploitable resources of interest to Europeans it attracted few of them, and the indigenous presence remained strong. The North was outside the area of complex indigenous populations, inhabited primarily by nomadic and often hostile indigenous groups. With the discovery of silver in the north, the Spanish sought to conquer or pacify those peoples in order to exploit the mines and develop enterprises to supply them. Nonetheless, much of northern New Spain had a sparse indigenous population and attracted few Europeans. The Spanish crown and later the Republic of Mexico did not effectively exert sovereignty over the region, leaving it vulnerable to the expansionism of the United States in the nineteenth century.
Regional characteristics of colonial Mexico have been the focus of considerable study within the vast scholarship on centers and peripheries. For those based in the viceregal capital of Mexico City itself, everywhere else was "the provinces." Even in the modern era, "Mexico" for many refers solely to Mexico City, with the pejorative view that anywhere but the capital is a hopeless backwater: "Fuera de México, todo es Cuauhtitlán" ["outside of Mexico City, it's all Podunk"], that is, poor, marginal, and backward, in short, the periphery. The picture is far more complex, however; while the capital is enormously important as the center of power of various kinds (institutional, economic, social), the provinces played a significant role in colonial Mexico. Regions (provinces) developed and thrived to the extent that they were sites of economic production and tied into networks of trade. "Spanish society in the Indies was import-export oriented at the very base and in every aspect," and the development of many regional economies was usually centered on support of that export sector.
Mexico City was the center of the Central region and the hub of New Spain. The development of Mexico City itself is extremely important to the development of New Spain as a whole. It was the seat of the Viceroyalty of New Spain, the Archdiocese of the Catholic Church, the Holy Office of the Inquisition, and the merchants' guild (consulado), and home to the most elite families in the Kingdom of New Spain. Mexico City was the single most populous city, not just in New Spain but for many years in the entire Western Hemisphere, with a high concentration of mixed-race castas.
Significant regional development grew along the main transportation route from the capital east to the port of Veracruz. Alexander von Humboldt called this area the "Mesa de Anahuac", which can be defined as the adjacent valleys of Puebla, Mexico, and Toluca, enclosed by high mountains, along with their connections to the Gulf Coast port of Veracruz and the Pacific port of Acapulco; over half the population of New Spain lived in this area. These valleys were linked by trunk lines, or main routes, facilitating the movement of vital goods and people to key areas. However, even in this relatively richly endowed region of Mexico, the difficulty of transit of people and goods in the absence of rivers and level terrain remained a major challenge to the economy of New Spain. This challenge persisted during the post-independence years until the late nineteenth-century construction of railroads. In the colonial era and up until the railroads were built in key areas, mule trains were the main mode of transporting goods. Mules were used because unpaved roads and mountainous terrain could not generally accommodate carts.
In the late eighteenth century the crown devoted some resources to studying and remedying the problem of poor roads. The Camino Real (royal road) between the port of Veracruz and the capital had some short sections paved and bridges constructed. The construction was done despite protests from some Indian villages, since the infrastructure improvements sometimes included rerouting the road through communal lands. The Spanish crown finally decided that road improvement was in the interests of the state for military purposes, as well as for fomenting commerce, agriculture, and industry, but the lack of state involvement in the development of physical infrastructure had lasting effects, constraining development until the late nineteenth century. Despite some improvements, the roads still made transit difficult, particularly for heavy military equipment.
Although the crown had ambitious plans for both the Toluca and Veracruz portions of the king's highway, actual improvements were limited to a localized network. Even where infrastructure was improved, transit on the Veracruz-Puebla main road had other obstacles, with wolves attacking mule trains, killing animals, and rendering some sacks of foodstuffs unsellable because they were smeared with blood. The north-south Acapulco route remained a mule track through mountainous terrain.
Veracruz was the first Spanish settlement founded in what became New Spain, and it endured as the only viable Gulf Coast port, the gateway for Spain to New Spain. The difficult topography around the port affected local development and New Spain as a whole. Going from the port to the central plateau entailed a daunting 2000 meter climb from the narrow tropical coastal plain in just over a hundred kilometers. The narrow, slippery road in the mountain mists was treacherous for mule trains, and in some cases mules were hoisted by ropes. Many tumbled with their cargo to their deaths. Given these transport constraints, only high-value, low-bulk goods continued to be shipped in the transatlantic trade, which stimulated local production of foodstuffs, rough textiles, and other products for a mass market. Although New Spain produced considerable sugar and wheat, these were consumed exclusively in the colony even though there was demand elsewhere. Philadelphia, not New Spain, supplied Cuba with wheat.
The Caribbean port of Veracruz was small, with its hot, pestilential climate not a draw for permanent settlers: its population never topped 10,000. Many Spanish merchants preferred living in the pleasant highland town of Jalapa (1,500 m). For a brief period (1722–76) the town of Jalapa became even more important than Veracruz, after it was granted the right to hold the royal trade fair for New Spain, serving as the entrepot for goods from Asia via Manila Galleon through the port of Acapulco and European goods via the flota (convoy) from the Spanish port of Cádiz. Spaniards also settled in the temperate area of Orizaba, east of the Citlaltepetl volcano. Orizaba varied considerably in elevation from 800 metres (2,600 ft) to 5,700 metres (18,700 ft) (the summit of the Citlaltepetl volcano), but "most of the inhabited part is temperate." Some Spaniards lived in semitropical Córdoba, which was founded as a villa in 1618, to serve as a Spanish base against runaway slave (cimarrón) predations on mule trains traveling the route from the port to the capital. Some cimarrón settlements sought autonomy, such as one led by Gaspar Yanga, with whom the crown concluded a treaty leading to the recognition of a largely black town, San Lorenzo de los Negros de Cerralvo, now called the municipality of Yanga.
European diseases immediately affected the multiethnic Indian populations in the Veracruz area, and for that reason Spaniards imported black slaves as either an alternative to indigenous labor or its complete replacement in the event of a repetition of the Caribbean die-off. A few Spaniards acquired prime agricultural lands left vacant by the indigenous demographic disaster. Portions of the province could support sugar cultivation, and as early as the 1530s sugar production was underway. New Spain's first viceroy, Don Antonio de Mendoza, established a hacienda on lands taken from Orizaba.
Indians resisted cultivating sugarcane themselves, preferring to tend their subsistence crops. As in the Caribbean, black slave labor became crucial to the development of sugar estates. During the period 1580–1640, when Spain and Portugal were ruled by the same monarch and Portuguese slave traders had access to Spanish markets, African slaves were imported in large numbers to New Spain, and many of them remained in the region of Veracruz. But even when that connection was broken and prices rose, black slaves remained an important component of Córdoba's labor sector after 1700. Rural estates in Córdoba depended on African slave laborers, who made up 20% of the population there, a far greater proportion than in any other area of New Spain, greater even than in nearby Jalapa.
In 1765 the crown created a monopoly on tobacco, which directly affected agriculture and manufacturing in the Veracruz region. Tobacco was a valuable, high-demand product. Men, women, and even children smoked, something commented on by foreign travelers and depicted in eighteenth-century casta paintings. The crown calculated that tobacco could produce a steady stream of tax revenues by supplying the huge Mexican demand, so the crown limited zones of tobacco cultivation. It also established a small number of manufactories of finished products, and licensed distribution outlets (estanquillos). The crown also set up warehouses to store up to a year's worth of supplies, including paper for cigarettes, for the manufactories. With the establishment of the monopoly, crown revenues increased and there is evidence that despite high prices and expanding rates of poverty, tobacco consumption rose while at the same time, general consumption fell.
Founded in 1531 as a Spanish settlement, Puebla de los Angeles quickly rose to the status of Mexico's second-most important city. Its location on the main route between the viceregal capital and the port of Veracruz, in a fertile basin with a dense indigenous population, largely not held in encomienda, made Puebla a destination for many later arriving Spaniards. If there had been significant mineral wealth in Puebla, it could have been even more prominent a center for New Spain, but its first century established its importance. In 1786 it became the capital of an intendancy of the same name.
It became the seat of the richest diocese in New Spain in its first century, with the seat of the first diocese, formerly in Tlaxcala, moved there in 1543. Bishop Juan de Palafox asserted that the income of the diocese of Puebla was twice that of the archbishopric of Mexico, due to the tithe income derived from agriculture. In its first hundred years, Puebla was prosperous from wheat farming and other agriculture, as the ample tithe income indicates, plus the manufacture of woolen cloth for the domestic market. Merchants, manufacturers, and artisans were important to the city's economic fortunes, but its early prosperity was followed by stagnation and decline in the seventeenth and eighteenth centuries.
The foundation of the town of Puebla was a pragmatic social experiment to settle Spanish immigrants without encomiendas so that they would pursue farming and industry. Puebla was privileged in a number of ways, starting with its status as a Spanish settlement not founded on an existing indigenous city-state, yet with a significant indigenous population. It was located in a fertile basin on a temperate plateau at the nexus of the key trade triangle of Veracruz–Mexico City–Antequera (Oaxaca). Although there were no encomiendas in Puebla itself, encomenderos with nearby labor grants settled in Puebla. And despite its foundation as a Spanish city, sixteenth-century Puebla had Indians resident in the central core.
Administratively Puebla was far enough away from Mexico City (approximately 160 km or 100 mi) so as not to be under its direct influence. Puebla's Spanish town council (cabildo) had considerable autonomy and was not dominated by encomenderos. The administrative structure of Puebla "may be seen as a subtle expression of royal absolutism, the granting of extensive privileges to a town of commoners, amounting almost to republican self-government, in order to curtail the potential authority of encomenderos and the religious orders, as well as to counterbalance the power of the viceregal capital."
During the "golden century" from its founding in 1531 until the early 1600s, Puebla's agricultural sector flourished, with small-scale Spanish farmers plowing the land for the first time, planting wheat and vaulting Puebla to importance as New Spain's breadbasket, a role assumed by the Bajío (including Querétaro) in the seventeenth century, and Guadalajara in the eighteenth. Puebla's wheat production was the initial element of its prosperity, but it emerged as a manufacturing and commercial center, "serving as the inland port of Mexico's Atlantic trade." Economically, the city received exemptions from the alcabala (sales tax) and almojarifazgo (import/export duties) for its first century (1531–1630), which helped promote commerce.
Puebla built a significant manufacturing sector, mainly in textile production in workshops (obrajes), supplying New Spain and markets as far away as Guatemala and Peru. Transatlantic ties between a particular Spanish town, Brihuega, and Puebla demonstrate the close connection between the two settlements. Immigration from Brihuega did not simply coincide with the take-off of Puebla's manufacturing sector; it was crucial to "shaping and driving Puebla's economic development, especially in the manufacturing sector." Brihuega immigrants not only came to Mexico with expertise in textile production, but the transplanted briocenses also provided capital to create large-scale obrajes. Although obrajes in Brihuega were small-scale enterprises, quite a number of them in Puebla employed up to 100 workers. Supplies of wool, water for fulling mills, and labor (free indigenous, incarcerated Indians, black slaves) were available. Although much of Puebla's textile output was rough cloth, it also produced higher quality dyed cloth with cochineal from Oaxaca and indigo from Guatemala. But by the eighteenth century, Querétaro had displaced Puebla as the mainstay of woolen textile production.
Mexico City dominated the Valley of Mexico, but the valley continued to have dense indigenous populations challenged by growing, increasingly dense Spanish settlement. The Valley of Mexico had many former Indian city-states that became Indian towns in the colonial era. These towns continued to be ruled by indigenous elites under the Spanish crown, with an indigenous governor and town councils. These Indian towns close to the capital were the most desirable ones for encomenderos to hold and for the friars to evangelize.
The capital was provisioned by the indigenous towns, and their labor was available for enterprises that ultimately created a colonial economy. The gradual drying up of the central lake system created more dry land for farming, but the sixteenth-century population declines allowed Spaniards to expand their acquisition of land. One region that retained strong Indian landholding was the southern freshwater area, home to important suppliers of fresh produce to the capital. The area was characterized by intensely cultivated chinampas, man-made extensions of cultivable land into the lake system. These chinampa towns retained a strong indigenous character, and Indians continued to hold the majority of that land, despite its closeness to the Spanish capital. A key example is Xochimilco.
Texcoco in the pre-conquest period was one of the three members of the Aztec Triple Alliance and the cultural center of the empire. It fell on hard times in the colonial period as an economic backwater. Spaniards with any ambition or connections would be lured by the closeness of Mexico City, so that the Spanish presence was minimal and marginal.
Tlaxcala, the major ally of the Spanish against the Aztecs of Tenochtitlan, also became something of a backwater, but like Puebla it did not come under the control of Spanish encomenderos. No elite Spaniards settled there, but like many other Indian towns in the Valley of Mexico, it had an assortment of small-scale merchants, artisans, farmers and ranchers, and textile workshops (obrajes).
Since portions of northern New Spain became part of the United States' Southwest region, there has been considerable scholarship on the Spanish borderlands in the north. The motor of the Spanish colonial economy was the extraction of silver. In Bolivia, it was from the single rich mountain of Potosí; but in New Spain, there were two major mining sites, one in Zacatecas, the other in Guanajuato.
The region farther north of the main mining zones attracted few Spanish settlers. Where there were settled indigenous populations, such as in the present-day state of New Mexico and in coastal regions of Baja and Alta California, indigenous culture retained considerable integrity.
The Bajío, a rich, fertile lowland just north of central Mexico, was nonetheless a frontier region between the densely populated plateaus and valleys of Mexico's center and south and the harsh northern desert controlled by nomadic Chichimeca. Devoid of settled indigenous populations in the early sixteenth century, the Bajío did not initially attract Spaniards, who were much more interested in exploiting labor and collecting tribute whenever possible. The region did not have indigenous populations that practiced subsistence agriculture. The Bajío developed in the colonial period as a region of commercial agriculture.
The discovery of mining deposits in Zacatecas and Guanajuato in the mid-sixteenth century and later in San Luis Potosí stimulated the Bajío's development to supply the mines with food and livestock. A network of Spanish towns was established in this region of commercial agriculture, with Querétaro also becoming a center of textile production. Although there were no dense indigenous populations or networks of settlements, Indians migrated to the Bajío to work as resident employees on the region's haciendas and ranchos or to rent land (as terrazgueros). From diverse cultural backgrounds and with no sustaining indigenous communities, these indios were quickly hispanized, but they largely remained at the bottom of the economic hierarchy. Although Indians migrated willingly to the region, they did so in such small numbers that labor shortages prompted Spanish hacendados to provide incentives to attract workers, especially in the initial boom period of the early seventeenth century. Land owners lent workers money, which could be seen as perpetual indebtedness, but it can also be seen not as coercion to keep Indians on the estates but as a way estate owners sweetened the terms of employment beyond the basic wage. For example, in 1775 the Spanish administrator of a San Luis Potosí estate "had to scour both Mexico City and the northern towns to find enough blue French linen to satisfy the resident employees." Other types of goods they received on credit were textiles, hats, shoes, candles, meat, beans, and a guaranteed ration of maize. However, where labor was more abundant or market conditions depressed, estate owners paid lower wages. The more sparsely populated northern Bajío tended to pay higher wages than the southern Bajío, which was increasingly integrated into the economy of central Mexico. The credit-based employment system often privileged those holding higher-ranked positions on the estate (supervisors, craftsmen, other specialists), who were mostly white, and the estates did not demand repayment.
In the late colonial period, renting complemented estate employment for many non-Indians in the more central areas of the Bajío with access to markets. Like hacendados, renters produced for the commercial market. While these Bajío renters could prosper in good times and achieve a degree of independence, drought and other disasters could make their choice more risky than beneficial.
Many renters retained ties to the estates, diversifying their households' sources of income and their level of economic security. In San Luis Potosí, rentals were fewer and estate employment was the norm. After several years of drought and bad harvests in the first decade of the nineteenth century, Hidalgo's 1810 grito found greater appeal in the Bajío than in San Luis Potosí. In the Bajío, estate owners had been evicting tenants in favor of renters better able to pay for land, disrupting earlier patterns of mutual benefit between estate owners and renters.
Areas of northern New Spain, generally known in U.S. scholarship as the "Spanish Borderlands," were incorporated into the United States in the mid-nineteenth century following Texas independence and the Mexican–American War (1846–48). Scholars in the United States have extensively studied this northern region, which became the states of Texas, New Mexico, Arizona, and California. During the period of Spanish rule, the area was sparsely populated, even by indigenous peoples.
Presidios (forts), pueblos (civilian towns), and misiones (missions) were the three major agencies employed by the Spanish crown to extend its borders and consolidate its colonial holdings in these territories.
The town of Albuquerque (present-day Albuquerque, New Mexico) was founded in 1706. Other towns in the region included Paso del Norte (present-day Ciudad Juárez), founded in 1667; Santiago de la Monclova in 1689; Panzacola, Tejas in 1681; and San Francisco de Cuéllar (present-day city of Chihuahua) in 1709. From 1687, Father Eusebio Francisco Kino, with funding from the Marqués de Villapuente, founded over twenty missions in the Sonoran Desert (in present-day Sonora and Arizona). From 1697, Jesuits established eighteen missions throughout the Baja California Peninsula. Between 1687 and 1700 several missions were founded in Trinidad, but only four survived as Amerindian villages through the 18th century. In 1691, explorers and missionaries visited the interior of Texas and came upon a river and an Amerindian settlement on June 13, the feast day of St. Anthony, and named the location and river San Antonio in his honor.
During the term of viceroy Don Luis de Velasco, marqués de Salinas, the crown ended the long-running Chichimeca War by making peace with the semi-nomadic Chichimeca indigenous tribes of northern Mexico in 1591. This allowed expansion into the 'Province of New Mexico' or Provincia de Nuevo México. In 1595, Don Juan de Oñate, son of one of the key figures in the silver-mining region of Zacatecas, received official permission from the viceroy to explore and conquer New Mexico. As was the pattern of such expeditions, the leader assumed the greatest risk but would reap the largest rewards, so Oñate would become capitán general of New Mexico with the authority to distribute rewards to those in the expedition. Oñate pioneered 'The Royal Road of the Interior Land' or El Camino Real de Tierra Adentro between Mexico City and the Tewa village of Ohkay Owingeh, or San Juan Pueblo. He also founded the Spanish settlement of San Gabriel de Yungue-Ouinge on the Rio Grande near the Native American pueblo, just north of the present-day city of Española, New Mexico.
By naming the region "New Mexico," the Spanish likely hoped to incorporate a region as rich as Mexico had proven to be. However, while New Mexico had a settled indigenous population, there were no silver mines, little arable land, and few other resources to exploit that would merit large scale colonization. Oñate resigned as governor in 1607 and left New Mexico, having spent much of his personal wealth on the enterprise.
In 1610, Pedro de Peralta, a later governor of the Province of New Mexico, established the settlement of Santa Fe near the southern end of the Sangre de Cristo mountain range. Missions were established to convert the local people and to manage agricultural production. The territory's indigenous population resented the Spanish prohibition of their traditional religion and the encomienda system of forced labor. The unrest led to the Pueblo Revolt of 1680, which forced the Spanish to retreat to Paso del Norte (modern-day Ciudad Juárez). After the Spanish returned in 1692, the final settlement included a marked reduction in Spanish efforts to eradicate native culture and religion, the issuing of substantial communal land grants to each Pueblo, and a public defender to represent their rights and legal cases in Spanish courts. In 1776 the province came under the new Provincias Internas jurisdiction. In the late 18th century, Spanish land grants encouraged individuals to settle large land parcels outside mission and pueblo boundaries, many of which became ranchos.
In 1602, Sebastián Vizcaíno, the first Spanish presence in the 'New California' (Nueva California) region of the frontier Las Californias province since Cabrillo in 1542, sailed up the Pacific Coast as far north as present-day Oregon and named California coastal features from San Diego to Monterey Bay.
Not until the eighteenth century was California of much interest to the Spanish crown, since it had no known rich mineral deposits or indigenous populations sufficiently organized to render tribute and do labor for Spaniards. The discovery of huge deposits of gold in the Sierra Nevada foothills did not come until after the U.S. had incorporated California following the Mexican–American War (1846–48).
By the middle of the 1700s, the Catholic order of Jesuits had established a number of missions on the Baja (lower) California peninsula. Then, in 1767, King Charles III ordered all Jesuits expelled from all Spanish possessions, including New Spain. New Spain's Visitador General José de Gálvez replaced them with the Dominican Order in Baja California, and the Franciscans were chosen to establish new northern missions in Alta (upper) California.
In 1768, Gálvez received the following orders: "Occupy and fortify San Diego and Monterey for God and the King of Spain." The Spanish colonization there, with far fewer known natural resources and less cultural development than Mexico or Peru, was to combine establishing a presence for defense of the territory with a perceived responsibility to convert the indigenous people to Christianity.
The method used to "occupy and fortify" was the established Spanish colonial system: missions (misiones; twenty-one were established between 1769 and 1833) aimed at converting the Native Californians to Christianity, forts (presidios; four in total) to protect the missionaries, and secular municipalities (pueblos; three in total). Due to the region's great distance from supplies and support in Mexico, the system had to be largely self-sufficient. As a result, the colonial population of California remained small, widely scattered, and near the coast.
In 1776, the north-western frontier areas came under the administration of the new 'Commandancy General of the Internal Provinces of the North' (Provincias Internas), designed to streamline administration and invigorate growth. The crown created two new provincial governments from the former Las Californias in 1804; the southern peninsula became Baja California, and the ill-defined northern mainland frontier area became Alta California.
Once missions and protective presidios were established in an area, large land grants encouraged settlement and establishment of California ranchos. The Spanish system of land grants was not very successful, however, because the grants were merely royal concessions—not actual land ownership. Under later Mexican rule, land grants conveyed ownership, and were more successful at promoting settlement.
Rancho activities centered on cattle-raising; many grantees emulated the Dons of Spain, with cattle, horses and sheep the source of wealth. The work was usually done by Native Americans, sometimes displaced and/or relocated from their villages. Native-born descendants of the resident Spanish-heritage rancho grantees, soldiers, servants, merchants, craftsmen and others became the Californios. Many of the less-affluent men took native wives, and many daughters married later English, French and American settlers.
After the Mexican War of Independence (1821) and subsequent secularization ("disestablishment") of the missions (1834), Mexican land grant transactions increased the spread of the rancho system. The land grants and ranchos established mapping and land-ownership patterns that are still recognizable in present-day California and New Mexico.
The Yucatán peninsula can be seen as a cul-de-sac, and it does indeed have unique features, but it also has strong similarities to other areas in the south. The peninsula extends into the Gulf of Mexico and was connected to Caribbean trade routes and to Mexico City far more than some other southern regions, such as Oaxaca. There were three main Spanish settlements: the inland city of Mérida, where Spanish civil and religious officials had their headquarters and where most of the province's Spaniards lived, and the villas of Campeche and Valladolid. Campeche was the peninsula's port, the key gateway for the whole region. A merchant group developed there and expanded dramatically as trade flourished during the seventeenth century. Although that period was once characterized as New Spain's "century of depression," for Yucatán this was certainly not the case, with sustained growth from the early seventeenth century to the end of the colonial period.
With dense indigenous Maya populations, Yucatán's encomienda system was established early and persisted far longer than in central Mexico, since fewer Spaniards migrated to the region than to the center. Although Yucatán was a more peripheral area of the colony, lacking rich mining areas and any comparable agricultural or other export product, it did have a complex of Spanish settlement, with a whole range of social types in the main settlements of Mérida and the villas of Campeche and Valladolid. There was an important sector of mixed-race castas, some of whom were fully at home in both the indigenous and Hispanic worlds. Blacks were an important component of Yucatecan society. The largest population in the province was the indigenous Maya, who lived in their own communities, which were nonetheless in contact with the Hispanic sphere via labor demands and commerce.
In Yucatán, Spanish rule was largely indirect, allowing these communities considerable political and cultural autonomy. The Maya community, the cah, was the means by which indigenous cultural integrity was maintained. In the economic sphere, unlike many other regions and ethnic groups in Mesoamerica, the Yucatec Maya did not have a pre-conquest network of regular markets for exchanging different types of food and craft goods. Perhaps because the peninsula was uniform in its ecosystem, local niche production did not develop. Production of cotton textiles, largely by Maya women, helped pay households' tribute obligations, but basic crops were the basis of the economy. The cah retained considerable land under the control of religious brotherhoods or confraternities (cofradías), the device by which Maya communities prevented colonial officials, the clergy, or even indigenous rulers (gobernadores) from diverting community revenues held in their cajas de comunidad (literally, community-owned chests with locks and keys). Cofradías were traditionally lay pious organizations and burial societies, but in Yucatán they became significant holders of land, a source of revenue for pious purposes kept under cah control. "[I]n Yucatán the cofradía in its modified form was the community." Local Spanish clergy had no reason to object to the arrangement, since much of the revenue went to pay for masses and other spiritual matters controlled by the priest.
A limiting factor in Yucatán's economy was the poor limestone soil, which could support crops for only two to three years after land was cleared through slash-and-burn (swidden) agriculture. Access to water was another constraint: the limestone gives way to water-filled sinkholes (cenotes), but rivers and streams are generally absent on the peninsula. Individuals had rights to land so long as they cleared and tilled it, and when the soil was exhausted they repeated the process elsewhere. In general, the Indians lived in a dispersed pattern, which Spanish congregación, or forced resettlement, attempted to alter. Collective labor cultivated the confraternities' lands, raising the traditional maize, beans, and cotton. But confraternities also later pursued cattle ranching, as well as mule and horse breeding, depending on the local situation. There is evidence that cofradías in southern Campeche were involved in interregional trade in cacao as well as cattle ranching. Although the revenues from crops and animals were generally devoted to expenses in the spiritual sphere, cofradías' cattle were also used for direct aid to community members during droughts, stabilizing the community's food supply.
In the seventeenth century, patterns shifted in Yucatán and Tabasco as the English took territory the Spanish claimed but did not control, especially in what became British Honduras (now Belize), where they cut dyewood, and in Laguna de Términos (Isla del Carmen), where they cut logwood. In 1716–17 the viceroy of New Spain organized enough ships to expel the foreigners, and the crown subsequently built a fortress at Isla del Carmen. But the British held onto their territory in the eastern portion of the peninsula into the twentieth century. In the nineteenth century, the enclave supplied guns to the rebellious Maya in the Caste War of Yucatán.
Since Oaxaca lacked mineral deposits and had an abundant sedentary indigenous population, its development was notable for the absence of a large European or mixed-race population, the absence of large-scale Spanish haciendas, and the survival of indigenous communities. These communities retained their land, indigenous languages, and distinct ethnic identities. Antequera (now Oaxaca City) was a Spanish settlement founded in 1529, but the rest of Oaxaca consisted of indigenous towns. Despite its remoteness from Mexico City, "throughout the colonial era, Oaxaca was one of Mexico's most prosperous provinces."[Note 2] In the eighteenth century, the value of crown offices (alcalde mayor or corregidor) was the highest for two Oaxaca jurisdictions, with Jicayan and Villa Alta each worth 7,500 pesos; Cuicatlan-Papalotipac, 4,500; and Teposcolula and Chichicapa, 4,200 pesos each.[Note 3]
The most important commodity for Oaxaca was cochineal red dye. Cochineal's commodity chain is an interesting one, with indigenous peasants in remote areas of Oaxaca ultimately linked to Amsterdam and London commodity exchanges and the European production of luxury cloth. The most extensive scholarly work on Oaxaca's eighteenth-century economy deals with the nexus between local crown officials (alcaldes mayores), merchant investors (aviadores), the repartimiento, and indigenous products, particularly cochineal. The rich, colorfast red dye was produced from insects harvested from nopal cacti. Cochineal was a high-value, low-volume product that became the second-most valuable Mexican export after silver. Although it could be produced elsewhere in central and southern Mexico, its main region of production was Oaxaca. For the indigenous in Oaxaca, cochineal was the only commodity "with which the [tributaries] maintain themselves and pay their debts," but it also had other advantages for them.[Note 4] Producing cochineal was time-consuming labor, but it was not particularly difficult and could be done by the elderly, women, and children. It was also important to households and communities because it did not initially require the indigenous to displace their existing crops or migrate elsewhere.
Although the repartimiento has historically been seen as an imposition on the indigenous, forcing them into economic relations they would rather have avoided and maintaining those relations by force, recent work on eighteenth-century Oaxaca analyzes the nexus of crown officials (the alcaldes mayores), Spanish merchants, and the indigenous via the repartimiento: cash was loaned by local crown officials (the alcalde mayor and his teniente), usually to individual Indians but sometimes to communities, in exchange for a fixed amount of a good (cochineal or cotton mantles) delivered at a later date. Indigenous elites were an integral part of the repartimiento, often being recipients of large extensions of credit. As authority figures in their communities, they were in a good position to collect on the debt, the riskiest part of the business from the Spanish point of view.
The Isthmus of Tehuantepec region of Oaxaca was important for its short transit between the Gulf Coast and the Pacific, facilitating both overland and sea trade. The province of Tehuantepec was the Pacific side of the isthmus and the headwaters of the Coatzacoalcos River. Hernán Cortés acquired strategically located holdings entailed in the Marquesado including Huatulco,[Note 5] once the main Pacific Coast port before Acapulco replaced it in 1563.
Gold mining was an early draw for Spaniards, who directed indigenous labor to its extraction, but it did not continue beyond the mid-sixteenth century. Over the long run, ranching and commerce were the most important economic activities, with the settlement of Tehuantepec becoming the hub. The region's history can be divided into three distinct periods. The first was an initial period of engagement with Spanish colonial rule to 1563, during which there was a working relationship with the Zapotec ruling line and the establishment of Cortés's economic enterprises; it came to a close with the death of the last native king in 1562 and the escheatment of Cortés's Tehuantepec encomiendas to the crown in 1563. The second period, of approximately a century (1563–1660), saw the decline of the indigenous entailed estate (cacicazgo) and of indigenous political power, along with the development of the colonial economy and the imposition of Spanish political and religious structures. The final period (1660–1750) was the maturation of these structures. The 1660 rebellion can be seen as a dividing line between the two later periods.
The Villa of Tehuantepec, the largest settlement on the isthmus, was an important prehispanic Zapotec trade and religious center, which was not under the jurisdiction of the Aztecs. The early colonial history of Tehuantepec and the larger province was dominated by Cortés and the Marquesado, but the crown realized the importance of the area and concluded an agreement in 1563 with the second Marqués by which the crown took control of the Tehuantepec encomienda. The Marquesado continued to have major private holdings in the province. The Villa of Tehuantepec became a center of Spanish and mixed-race settlement, crown administration, and trade.
The Cortés haciendas in Tehuantepec were key components of the province's economy, and they were directly linked to other Marquesado enterprises in greater Mexico in an integrated fashion. The Dominicans also had significant holdings in Tehuantepec, but there has been little research on these. However important the Marquesado and Dominican enterprises were, there were also other economic players in the region, including individual Spaniards as well as existing indigenous communities. Ranching emerged as the dominant rural enterprise in most of Tehuantepec, with a ranching boom in the period 1580–1640. Since Tehuantepec experienced significant indigenous population loss in the sixteenth century, conforming to the general pattern, ranching made it possible for Spaniards to thrive there because it did not depend on significant amounts of indigenous labor.
The most detailed economic records for the region are those of the Marquesado's ranching haciendas, which produced draft animals (horses, mules, burros, and oxen) as well as sheep and goats for meat and wool. Cattle ranching for meat, tallow, and leather was also important. Tallow for candles used in churches and residences and leather used in a variety of ways (saddles, other tack, boots, furniture, machinery) were significant items in the larger colonial economy, finding markets well beyond Tehuantepec. Since the Marquesado operated as an integrated enterprise, draft animals were used in its other holdings for transport, agriculture, and mining in Oaxaca, Morelos, Toluca, and Mexico City, as well as being sold. Raised in Tehuantepec, the animals were driven to other Marquesado holdings for use and distribution.
Although colonial population decline affected the indigenous in Tehuantepec, their communities remained important in the colonial era and remain distinctly Indian to the current era. There were differences in the three distinct linguistic and ethnic groups in colonial Tehuantepec, the Zapotec, the Zoque, and the Huave. The Zapotecs concluded an alliance with the Spaniards at contact, and they had already expanded their territory into Zoque and Huave regions.
Under Spanish rule, the Zapotecs not only survived but flourished, unlike the other two groups. They continued to pursue agriculture, some of it irrigated, which was not disrupted by the growing ranching economy. Zapotec elites generally protected their communities from Spanish incursions, and community cohesion remained strong, as shown in members' performance of regular community service for social ends. Zapotec elites engaged in the market economy early on, which undermined to an extent the bonds between commoners and the elites who colluded with the Spanish. In contrast to the Zapotecs, the Zoque generally declined as a group during the ranching boom, with interloping animals eating their maize crops. The Zoque response was to take up work as vaqueros themselves, and they had access to the trade route to Guatemala. Of the three indigenous groups, the Huave were the most isolated from the Spanish ranching economy and labor demands. With little arable or grazing land, they exploited the lagoons of the Pacific coast, using shore and beach resources. They traded dried shrimp and fish, as well as purple dye from shells, to Oaxaca, likely acquiring foodstuffs that they were unable to cultivate themselves.
Not well documented is the number of African slaves and their descendants, who were artisans in urban areas and did hard manual labor in rural areas. In a pattern recognizable elsewhere, coastal populations were mainly African, including an unknown number of cimarrón (runaway slave) settlements, while inland the indigenous communities were more prominent. On the Cortés haciendas, blacks and mulattoes were essential to the profitability of the enterprises.
In general, Tehuantepec was not a site of major historical events, but in 1660–61 there was a significant rebellion stemming from increased Spanish repartimiento demands.
Spanish settlers brought to the American continent smallpox, measles, typhoid fever, and other infectious diseases. Most of the Spanish settlers had developed immunity to these diseases in childhood, but the indigenous peoples lacked the needed antibodies, since these diseases were entirely new to the native population. There were at least three separate, major epidemics that decimated the population: smallpox (1520 to 1521), measles (1545 to 1548), and typhus (1576 to 1581). In the course of the 16th century, the native population of Mexico fell from an estimated pre-Columbian population of 8 to 20 million to less than two million. At the start of the 17th century, continental New Spain was therefore a depopulated country with abandoned cities and maize fields. These diseases did not affect the Philippines in the same way because they were already present there; pre-Hispanic Filipinos had contact with other foreign peoples before the arrival of the Spaniards.
Following the Spanish conquests, new ethnic groups were created, primary among them the Mestizo. The Mestizo population emerged as a result of the Spanish colonizers having children with indigenous women, both within and outside of wedlock, which brought about the mixing of both cultures.
Initially, if a child was born in wedlock, the child was considered, and raised as, a member of the prominent parent's ethnicity. (See Hyperdescent and Hypodescent.) Because of this, the term "Mestizo" was associated with illegitimacy. Mestizos do not appear in large numbers in official censuses until the second half of the 17th century, when a sizable and stable community of mixed-race people with no claims to being either Indian or Spanish appeared, although, of course, a large population of biological Mestizos had already existed for over a century in Mexico.
The Spanish conquest also brought the migration of people of African descent to many regions of the viceroyalty. Some came as free blacks, but the vast majority came because of the introduction of African slavery. As the native population was decimated by epidemics and forced labor, black slaves were imported. Unions with Europeans and indigenous peoples also occurred, resulting in new racial categories such as Mulattos and Zambos to account for the offspring. As with the term Mestizo, these other terms were associated with illegitimacy, since a majority, though not all, of these people were born outside of wedlock.
Eventually a caste system was created to describe the various mixes and to assign them a different social level. In theory, each different mix had a name and different sets of privileges or prohibitions. In reality, mixed-race people were able to negotiate various racial and ethnic identities (often several ones at different points in their lives) depending on the family ties and wealth they had. In its general outline, the system reflected reality. The upper echelons of government were staffed by Spaniards born in Spain (peninsulares), the middle and lower levels of government and other higher paying jobs were held by Criollos (Criollos were Spaniards born in the Americas, or—as permitted by the casta system—Spaniards with some Amerindian or even other ancestry.) The best lands were owned by Peninsulares and Criollos, with Native communities for the most part relegated to marginal lands. Mestizos and Mulattos held artisanal positions and unskilled laborers were either more mixed people, such as Zambos, recently freed slaves or Natives who had left their communities and settled in areas with large Hispanic populations. Native populations tended to have their own legally recognized communities (the repúblicas de indios) with their own social and economic hierarchies. This rough sketch must be complicated by the fact that not only did exceptions exist, but also that all these "racial" categories represented social conventions, as demonstrated by the fact that many persons were assigned a caste based on hyperdescent or hypodescent.
Even if mixes were common, the white population tried to keep their higher status, and were largely successful in doing so. With Mexican and Central American independence, the caste system and slavery were theoretically abolished. However, it can be argued that the Criollos simply replaced the Peninsulares in terms of power. In modern Mexico, "Mestizo" has become more a cultural term, since Indigenous people who abandon their traditional ways are considered Mestizos. Also, most Afro-Mexicans prefer to be considered Mestizo, since they identify closely with this group.
While individual intendancies carried out censuses to obtain detailed information about their inhabitants (occupation, number of persons per household, ethnicity, and so on), it was not until 1793 that the results of the first national census were published. The census is also known as the "Revillagigedo census" because its creation was ordered by the count of that name. Most of the census's original datasets have reportedly been lost; thus most of what is known about it today comes from essays and field investigations by scholars who had access to the census data and used it as a reference for their own work, such as the Prussian geographer Alexander von Humboldt. Each author gives a different estimate of the total population, ranging from 3,799,561 to 6,122,354 (more recent data suggests that the actual population of New Spain in 1810 was closer to 5 or 5.5 million individuals), as well as of the ethnic composition of the country, although there is not much variation: Europeans range from 18% to 22% of New Spain's population, Mestizos from 21% to 25%, Indians from 51% to 61%, and Africans number between 6,000 and 10,000. The conclusion is that across nearly three centuries of colonization, the population growth trends of whites and mestizos were even, while the share of the indigenous population decreased at a rate of 13%–17% per century. The authors assert that rather than whites and mestizos having higher birthrates, the decrease in indigenous numbers resulted from their higher mortality rates, a consequence of living in remote locations rather than in the cities and towns founded by the Spanish colonists, or of being at war with them. For the same reasons, the number of Indigenous Mexicans shows the greatest variation between publications, as in some cases their numbers in a given location were estimated rather than counted, leading to possible overestimations in some provinces and underestimations in others.
| Intendancy/territory | European population (%) | Indigenous population (%) | Mestizo population (%) |
| San Luis Potosí | 13.0% | 51.2% | 35.7% |
~ Europeans are included within the Mestizo category.
Regardless of the possible imprecisions in counting Indigenous peoples living outside the colonized areas, the effort that New Spain's authorities put into counting them as subjects is worth mentioning, as censuses made by other colonial or post-colonial countries did not consider American Indians to be citizens or subjects. For example, the censuses made by the Viceroyalty of the Río de la Plata counted only the inhabitants of the colonized settlements. Another example is the censuses made by the United States, which did not include Indigenous peoples living among the general population until 1860, and Indigenous peoples as a whole until 1900.
Once New Spain achieved its independence, the legal basis of the colonial caste system was abolished and mentions of a person's caste in official documents were abandoned, which led to the exclusion of racial classification from subsequent censuses and made it difficult to track the demographic development of each ethnic group in the country. More than a century would pass before Mexico conducted a new census in which a person's race was taken into account, in 1921, but even then, because it showed large inconsistencies with other official registers as well as because of its historical context, modern researchers have deemed it inaccurate. Almost a century after that census, Mexico's government has begun to conduct ethno-racial surveys again, with results suggesting that the population growth trends for each major ethnic group have not changed significantly since the 1793 census was taken.
The capital of Viceroyalty of New Spain, Mexico City, was one of the principal centers of European cultural expansion in the Americas. Some of the most important early buildings in New Spain were churches and other religious architecture. Civil architecture included the viceregal palace, now the National Palace, and the Mexico City town council (cabildo), both located on the main square in the capital.
The first printing press in the New World was brought to Mexico in 1539, by printer Juan Pablos (Giovanni Paoli). The first book printed in Mexico was entitled "La escala espiritual de San Juan Clímaco". In 1568, Bernal Díaz del Castillo finished La Historia Verdadera de la Conquista de la Nueva España. Figures such as Sor Juana Inés de la Cruz, Juan Ruiz de Alarcón, and don Carlos de Sigüenza y Góngora, stand out as some of the viceroyalty's most notable contributors to Spanish Literature. In 1693, Sigüenza y Góngora published El Mercurio Volante, the first newspaper in New Spain.
Architects Pedro Martínez Vázquez and Lorenzo Rodríguez produced some fantastically extravagant and visually frenetic architecture known as Mexican Churrigueresque in the capital, Ocotlán, Puebla, and remote silver-mining towns. Composers including Manuel de Zumaya, Juan Gutiérrez de Padilla, and Antonio de Salazar were active through the Baroque period of music.
The Adams–Onís Treaty of 1819, also known as the Transcontinental Treaty, the Florida Purchase Treaty, or the Florida Treaty, was a treaty between the United States and Spain in 1819 that ceded Florida to the U.S. and defined the boundary between the U.S. and New Spain. It settled a standing border dispute between the two countries and was considered a triumph of American diplomacy. It came in the midst of increasing tensions related to Spain's territorial boundaries in North America against the United States and Great Britain in the aftermath of the American Revolution; it also came during the Latin American wars of independence.
Florida had become a burden to Spain, which could not afford to send settlers or garrisons, so the Spanish government decided to cede the territory to the United States in exchange for settling the boundary dispute along the Sabine River in Spanish Texas. The treaty established the boundary of U.S. territory and claims through the Rocky Mountains and west to the Pacific Ocean, in exchange for the U.S. paying residents' claims against the Spanish government up to a total of $5,000,000 and relinquishing the U.S. claims on parts of Spanish Texas west of the Sabine River and other Spanish areas, under the terms of the Louisiana Purchase.
The treaty remained in full effect for only 183 days: from February 22, 1821, to August 24, 1821, when Spanish military officials signed the Treaty of Córdoba acknowledging the independence of Mexico; Spain repudiated that treaty, but Mexico effectively took control of Spain's former colony. The Treaty of Limits between Mexico and the United States, signed in 1828 and effective in 1832, recognized the border defined by the Adams–Onís Treaty as the boundary between the two nations.

Alta California
Alta California ('Upper California'), known sometimes unofficially as Nueva California ('New California'), California Septentrional ('Northern California'), California del Norte ('North California') or California Superior ('Upper California'), began in 1804 as a province of New Spain. Along with the Baja California peninsula, it had previously comprised the province of Las Californias, but was split off into a separate province in 1804. Following the Mexican War of Independence, it became a territory of Mexico in April 1822 and was renamed "Alta California" in 1824. The claimed territory included all of the modern US states of California, Nevada and Utah, and parts of Arizona, Wyoming, Colorado and New Mexico.
Neither Spain nor Mexico ever colonized the area beyond the southern and central coastal areas of present-day California, and small areas of present-day Arizona, so they exerted no effective control in modern-day California north of the Sonoma area, or east of the California Coast Ranges. Most interior areas such as the Central Valley and the deserts of California remained in de facto possession of indigenous peoples until later in the Mexican era when more inland land grants were made, and especially after 1841 when overland immigrants from the United States began to settle inland areas.
Large areas east of the Sierra Nevada and Coast Ranges were claimed to be part of Alta California, but were never colonized. To the southeast, beyond the deserts and the Colorado River, lay the Spanish settlements in Arizona. Alta California ceased to exist as an administrative division separate from Baja California in 1836, when the Siete Leyes constitutional reforms in Mexico re-established Las Californias as a unified department, granting it more autonomy. Most of the areas formerly comprising Alta California were ceded to the United States in the Treaty of Guadalupe Hidalgo that ended the Mexican–American War in 1848. Two years later, California joined the union as the 31st state. Other parts of Alta California became all or part of the later U.S. states of Arizona, Nevada, Utah, Colorado, and Wyoming.

Captaincy General of the Philippines
The Captaincy General of the Philippines (Spanish: Capitanía General de las Filipinas [kapitaˈni.a xeneˈɾal ðe las filiˈpinas]; Filipino: Kapitaniyang Heneral ng Pilipinas) was an administrative district of the Spanish Empire in Southeast Asia governed by a Governor-General. The Captaincy General encompassed the Spanish East Indies, which included among others the Philippine Islands and the Caroline Islands. It was founded in 1565 with the first permanent Spanish settlements.
For centuries all the political and economic aspects of the Captaincy were administered in Mexico City by the Viceroyalty of New Spain, while administrative issues had to be consulted with the Spanish Crown or the Council of the Indies through the Real Audiencia of Manila. However, in 1821, following the independence of Mexico, all control was transferred to Madrid. It was succeeded by the short-lived First Philippine Republic following its independence through the Philippine Revolution.

Casta
A casta (Spanish: [ˈkasta]) was a term used to describe mixed-race individuals in Spanish America, resulting from unions of European whites (españoles), Amerindians (indios), and Africans (negros). Racial categories had legal and social consequences, since racial status was an organizing principle of Spanish colonial rule. During the seventeenth and eighteenth centuries, European elites created a complex hierarchical system of race classification. The sistema de castas or sociedad de castas was used in the 17th and 18th centuries in New Spain, a vast area of land stretching from just below Alaska all the way to the Isthmus of Panama, plus the entire Caribbean, the Floridas, and the Spanish Philippines, to formally rank the mixed-race people born during the post-Conquest period. The process of mixing ancestries in the union of people of different races was known as mestizaje (Portuguese: mestiçagem [meʃtʃiˈsaʒẽj], [mɨʃtiˈsaʒɐ̃j]). In Spanish colonial law, mixed-race castas were classified as part of the república de españoles and not the república de indios, which set Amerindians outside the Hispanic sphere. Another classification scheme, based on the degree of acculturation to Hispanic culture, distinguished between gente de razón (Hispanics, literally "people of reason") and gente sin razón (non-acculturated natives); it existed concurrently with, and supported, the racial classification system.
Created by Hispanic elites, the sistema de castas or sociedad de castas ranked individuals according to their birth, color, race, and ethnic origin. The system of castas was more than a socio-racial classification. It had an effect on every aspect of life, including economics and taxation. Both the Spanish colonial state and the Church required more tax and tribute payments from those of lower socio-racial categories. Related to Spanish ideas about purity of blood (which historically also related to the reconquest of Spain from the Moors), the colonists established a caste system in Latin America by which a person's socio-economic status generally correlated with race or the racial mix in the known family background, or simply with phenotype (physical appearance) if the family background was unknown. From the colonial period, when the Spanish imposed control, many wealthy persons and high government officials were of peninsular (Iberian) or other European background, while African or indigenous ancestry, or dark skin, was generally correlated with inferiority and poverty. The "whiter" the heritage a person could claim, the higher in status they could climb; conversely, darker features meant less opportunity.
Casta paintings were a new, secular art form primarily produced in eighteenth-century Mexico. A notable exception to the secular nature of the genre is Luis de Mena's 1750 painting of the Virgin of Guadalupe with castas.

Hernando de Soto
Hernando de Soto (Spanish: [eɾˈnãndo ðe ˈsoto]; c. 1500 – May 21, 1542) was a Spanish explorer and conquistador who was involved in expeditions in Nicaragua and the Yucatán Peninsula and played an important role in Pizarro's conquest of the Inca Empire in Peru, but is best known for leading the first Spanish and European expedition deep into the territory of the modern-day United States (through Florida, Georgia, Alabama, Mississippi, and most likely Arkansas). He is the first European documented as having crossed the Mississippi River. De Soto's North American expedition was a vast undertaking. It ranged throughout the southeastern United States, both searching for gold, which had been reported by various Indian tribes and earlier coastal explorers, and for a passage to China or the Pacific coast. De Soto died in 1542 on the banks of the Mississippi River; different sources disagree on the exact location, whether it was what is now Lake Village, Arkansas, or Ferriday, Louisiana.

Hernán Cortés
Hernán Cortés de Monroy y Pizarro Altamirano, Marquis of the Valley of Oaxaca (Spanish: [eɾˈnaŋ koɾˈtes ðe monˈroj i piˈθaro]; 1485 – December 2, 1547) was a Spanish conquistador who led an expedition that caused the fall of the Aztec Empire and brought large portions of what is now mainland Mexico under the rule of the King of Castile in the early 16th century. Cortés was part of the generation of Spanish colonizers who began the first phase of the Spanish colonization of the Americas.
Born in Medellín, Spain, to a family of lesser nobility, Cortés chose to pursue adventure and riches in the New World. He went to Hispaniola and later to Cuba, where he received an encomienda (the right to the labor of certain subjects). For a short time, he served as alcalde (magistrate) of the second Spanish town founded on the island. In 1519, he was elected captain of the third expedition to the mainland, an expedition which he partly funded. His enmity with the Governor of Cuba, Diego Velázquez de Cuéllar, resulted in the recall of the expedition at the last moment, an order which Cortés ignored.
Arriving on the continent, Cortés executed a successful strategy of allying with some indigenous people against others. He also used a native woman, Doña Marina, as an interpreter. She later bore his first son. When the Governor of Cuba sent emissaries to arrest Cortés, he fought them and won, using the extra troops as reinforcements. Cortés wrote letters directly to the king asking to be acknowledged for his successes instead of being punished for mutiny. After he overthrew the Aztec Empire, Cortés was awarded the title of Marqués del Valle de Oaxaca, while the more prestigious title of Viceroy was given to a high-ranking nobleman, Antonio de Mendoza. In 1541 Cortés returned to Spain, where he died six years later of natural causes, embittered.
Because of the controversial undertakings of Cortés and the scarcity of reliable sources of information about him, it is difficult to describe his personality or motivations. Early lionizing of the conquistadores did not encourage deep examination of Cortés. Later reconsideration of the conquistadores in the context of modern anti-colonial sentiment has done little to enlarge the understanding of Cortés. As a result of these historical trends, descriptions of Cortés tend to be simplistic, and either damning or idealizing.

History of the Philippines (1521–1898)
The history of the Philippines from 1521 to 1898, also known as the Spanish colonial period, was a period during which Spain controlled the Philippine islands as the Captaincy General of the Philippines, initially under New Spain until Mexican independence in 1821, which gave Madrid direct control over the area. It was also known as the Spanish East Indies to the colonialists. It started with the arrival in 1521 of European explorer Ferdinand Magellan sailing for Spain, which heralded the period when the Philippines was a colony of the Spanish Empire, and ended with the outbreak of the Philippine Revolution in 1898, which marked the beginning of the American colonial era of Philippine history.

List of viceroys of New Spain
The following is a list of Viceroys of New Spain.
In addition to viceroys, the following lists the highest Spanish governors of the colony of New Spain, before the appointment of the first viceroy or when the office of viceroy was vacant. Most of these individuals exercised most or all of the functions of viceroy, usually on an interim basis.

Louisiana (New Spain)
Louisiana (Spanish: Luisiana) was the name of an administrative district of the Viceroyalty of New Spain from 1763 to 1801 that consisted of territory west of the Mississippi River basin, plus New Orleans. Spain acquired the territory from France, which had named it La Louisiane in honor of King Louis XIV in 1682. It is sometimes known as Spanish Louisiana. The district was retroceded to France, under the terms of the Third Treaty of San Ildefonso (1800) and the Treaty of Aranjuez (1801). In 1802, King Charles IV of Spain published a royal bill on 14 October, effecting the transfer and outlining the conditions.
However, Spain agreed to continue administering the colony until French officials arrived and formalized the transfer (1803). The ceremony was conducted at the Cabildo in New Orleans on 30 November 1803, just three weeks before the formalities of cession from France to the United States pursuant to the Louisiana Purchase.

Manila galleon
The Manila Galleons (Spanish: Galeón de Manila; Filipino: Kalakalang Galyon ng Maynila at Acapulco) were Spanish trading ships which for two and a half centuries linked the Philippines with Mexico across the Pacific Ocean, making one or two round-trip voyages per year between the ports of Acapulco and Manila, which were both part of New Spain. The name of the galleon changed to reflect the city that the ship sailed from. The term Manila Galleons is also used to refer to the trade route itself between Acapulco and Manila, which lasted from 1565 to 1815.
The Manila Galleons were also known in New Spain as "La Nao de la China" (The China Ship) on their return voyage from the Philippines because they carried mostly Chinese goods, shipped from Manila.
The Manila Galleon trade route was inaugurated in 1565 after Augustinian friar and navigator Andrés de Urdaneta discovered the tornaviaje or return route from the Philippines to Mexico. The first successful round trips were made by Urdaneta and by Alonso de Arellano that year. The route lasted until 1815 when the Mexican War of Independence broke out. The Manila galleons sailed the Pacific for 250 years, bringing to the Americas cargoes of luxury goods such as spices and porcelain, in exchange for silver. The route also created a cultural exchange that shaped the identities and culture of the countries involved.
In 2015, the Philippines and Mexico began preparations for the nomination of the Manila-Acapulco Galleon Trade Route to the UNESCO World Heritage List, with backing from Spain. Spain has also suggested the tri-national nomination of the Archives on the Manila-Acapulco Galleons to the UNESCO Memory of the World Register.

Mexican War of Independence
The Mexican War of Independence (Spanish: Guerra de Independencia de México) was an armed conflict, and the culmination of a political and social process which ended the rule of Spain in 1821 in the territory of New Spain. The war had its antecedent in Napoleon's French invasion of Spain in 1808; it extended from the Cry of Dolores by Father Miguel Hidalgo y Costilla on September 16, 1810, to the entrance of the Army of the Three Guarantees led by Agustín de Iturbide to Mexico City on September 27, 1821. September 16 is celebrated as Mexican Independence Day.
The movement for independence was inspired by the Age of Enlightenment and the American and French Revolutions. By that time the educated elite of New Spain had begun to reflect on the relations between Spain and its colonial kingdoms. Changes in the social and political structure occasioned by Bourbon Reforms and a deep economic crisis in New Spain caused discomfort among the native-born Creole elite.
The dramatic political events in Europe, the French Revolutionary Wars and the conquests of Napoleon deeply influenced events in New Spain. In 1808, Charles IV and Ferdinand VII were forced to abdicate in favor of the French Emperor, who then made his elder brother Joseph king. The same year, the ayuntamiento (city council) of Mexico City, supported by viceroy José de Iturrigaray, claimed sovereignty in the absence of the legitimate king. That led to a coup against the viceroy; when it was suppressed, the leaders of the movement were jailed.
Despite the defeat in Mexico City, small groups of rebels met in other cities of New Spain to raise movements against colonial rule. In 1810, after being discovered, Querétaro conspirators chose to take up arms on September 16 in the company of peasants and indigenous inhabitants of Dolores (Guanajuato), who were called to action by the secular Catholic priest Miguel Hidalgo, former rector of the Colegio de San Nicolás Obispo.
After 1810 the independence movement went through several stages, as leaders were imprisoned or executed by forces loyal to Spain. At first the rebels disputed the legitimacy of the French-installed Joseph Bonaparte while recognizing the sovereignty of Ferdinand VII over Spain and its colonies, but later the leaders took more radical positions, rejecting the Spanish claim and espousing a new social order, including the abolition of slavery. The secular priest José María Morelos called on the separatist provinces to form the Congress of Chilpancingo, which gave the insurgency its own legal framework. After the defeat of Morelos, the movement survived as a guerrilla war under the leadership of Vicente Guerrero. By 1820, only a few rebel groups survived, most notably in the Sierra Madre del Sur and in Veracruz.
The reinstatement of the liberal Constitution of Cadiz in 1820 caused a change of mind among the elite groups who had supported Spanish rule. Monarchist Creoles affected by the constitution decided to support the independence of New Spain; they sought an alliance with the former insurgent resistance. Agustín de Iturbide led the military arm of the conspirators and in early 1821 he met Vicente Guerrero. Both proclaimed the Plan of Iguala, which called for the union of all insurgent factions and was supported by both the aristocracy and clergy of New Spain. It called for a monarchy in an independent Mexico. Finally, the independence of Mexico was achieved on September 27, 1821. After that, the mainland of New Spain was organized as the Mexican Empire. This ephemeral Catholic monarchy changed to a federal republic in 1823, due to internal conflicts and the separation of Central America from Mexico.
After some Spanish reconquest attempts, including the expedition of Isidro Barradas in 1829, Spain under the rule of Isabella II recognized the independence of Mexico in 1836.

Provincias Internas
The Provincias Internas, also known as the Comandancia y Capitanía General de las Provincias Internas (Commandancy and Captaincy General of the Internal Provinces), was an administrative district of the Spanish Empire created in 1776 to provide more autonomy for the frontier provinces of the Viceroyalty of New Spain, present-day northern Mexico and the Southwestern United States. The goal of its creation was to establish a unified government in political, military and fiscal affairs. Nevertheless, the Commandancy General experienced significant changes in its administration because of experimentation to find the best government for the frontier region as well as bureaucratic in-fighting. Its creation was part of the Bourbon Reforms and was part of an effort to invigorate economic and population growth in the region to stave off encroachment on the region by foreign powers. During its existence, the Commandancy General encompassed the Provinces of Sonora y Sinaloa, Nueva Vizcaya, Las Californias, Nuevo México, Nuevo Santander, Nuevo Reyno de León, Coahuila (formerly Nueva Extremadura) and Texas.

Santa Fe de Nuevo México
Santa Fe de Nuevo México (English: Santa Fe [Holy Faith] of New Mexico; shortened as Nuevo México or Nuevo Méjico, and translated as New Mexico in English) was a province of the Viceroyalty of New Spain, and later a territory of independent Mexico. The first capital was San Juan de los Caballeros from 1598 until 1610, and from 1610 onward the capital was La Villa Real de la Santa Fe de San Francisco de Asís. The name, the capital, the Palace of the Governors, and the rule of law were retained as the region became the New Mexico Territory, and subsequently the U.S. state of New Mexico, as part of the United States. The New Mexican citizenry, primarily consisting of Hispano, Pueblo, Navajo, Apache, and Comanche peoples, became citizens of the United States as a result of the Treaty of Guadalupe Hidalgo.
Nuevo México is often incorrectly believed to have taken its name from the nation of Mexico. However, it was named by Spanish explorers who believed the area contained wealthy Amerindian cultures similar to those of the Aztec Empire (centered in the Valley of Mexico), and called the land the "Santa Fe de Nuevo México".

Spanish East Indies
The Spanish East Indies were the colonies of the Spanish Empire in Asia and Oceania from 1565 until 1899. At one time or another, they included the Philippines, Marianas, Carolines, Palaos and Guam, as well as parts of Formosa (Taiwan), Sulawesi (Celebes) and the Moluccas (Maluku). The King of Spain traditionally styled himself "King of the East and West Indies". Administratively, the Spanish East Indies was part of the Captaincy General of the Philippines and the Real Audiencia of Manila. Cebu was the first seat of government, later transferred to Manila. From 1565 to 1821 these territories, together with the Spanish West Indies, were administered through the Viceroyalty of New Spain based in Mexico City. After Mexican independence, they were ruled directly from Madrid.
As a result of the Spanish–American War in 1898, the Philippines and Guam were occupied by the United States while about 6,000 of the remaining smaller islands were sold to Germany in the German–Spanish Treaty of 1899. The few remaining islands were ceded to the United States when the Treaty of Washington was ratified in 1901.

Spanish Florida
Spanish Florida (Spanish: La Florida) was the first major European land claim and attempted settlement in North America during the European Age of Discovery. La Florida formed part of the Captaincy General of Cuba, the Viceroyalty of New Spain, and the Spanish Empire during Spanish colonization of the Americas. While its boundaries were never clearly or formally defined, the territory was much larger than the present-day state of Florida, extending over much of what is now the southeastern United States, including all of present-day Florida plus portions of Georgia, Alabama, Mississippi, South Carolina, and southeastern Louisiana. Spain's claim to this vast area was based on several wide-ranging expeditions mounted during the 16th century. A number of missions, settlements, and small forts existed in the 16th and to a lesser extent in the 17th century; eventually they were abandoned due to pressure from the expanding English and French colonial projects, the collapse of the native populations, and the general difficulty in becoming agriculturally or economically self-sufficient (which also affected some early English colonies). By the 18th century, Spain's control over La Florida did not extend much beyond its three forts, all located in present-day Florida: St. Augustine, St. Marks, and Pensacola.
Florida was never more than a backwater region for Spain. In contrast with Mexico and Peru, there was no gold to be found. There was insufficient native population to set up the encomienda system of forced agricultural labor, and Spaniards did not set up plantations in Florida. The missions did supply St. Augustine with maize, and were required to send laborers to St. Augustine every year to work in the fields and perform other labor. Spanish officials established cattle ranches which supplied both the local and the Cuban markets. It provided ports where ships needing water or supplies could call, and it had strategic importance as a buffer between Mexico (New Spain), whose undefined northeastern border was somewhere near the Mississippi River, Spain's Caribbean colonies, and the expanding English colonies to the north.
Spanish Florida was established in 1513, when Juan Ponce de León claimed peninsular Florida for Spain during the first official European expedition to North America. This claim was enlarged as several explorers (most notably Pánfilo de Narváez and Hernando de Soto) landed near Tampa Bay in the mid-1500s and wandered as far north as the Appalachian Mountains and as far west as Texas in largely unsuccessful searches for gold and other riches. The presidio of St. Augustine was founded on Florida's Atlantic coast in 1565; a series of missions were established across the Florida panhandle, Georgia, and South Carolina during the 1600s; and Pensacola was founded on the western Florida panhandle in 1698, strengthening Spanish claims to that section of the territory.
Spanish control of the Florida peninsula was much facilitated by the collapse of native cultures during the 17th century. Several Native American groups (including the Timucua, Calusa, Tequesta, Apalachee, Tocobaga, and the Ais people) had been long-established residents of Florida, and most resisted Spanish incursions onto their land. However, conflict with Spanish expeditions, raids by the English and their native allies, and (especially) diseases brought from Europe resulted in a drastic decline in the population of all the indigenous peoples of Florida, and large swaths of the peninsula were mostly uninhabited by the early 1700s. During the mid-1700s, small bands of Creek and other Native American refugees began moving south into Spanish Florida after having been forced off their lands by English settlements and raids. They were later joined by African-Americans fleeing slavery in nearby colonies. These newcomers – plus perhaps a few surviving descendants of indigenous Florida peoples – eventually coalesced into a new Seminole culture.
The extent of Spanish Florida began to shrink in the 1600s, and the mission system was gradually abandoned due to native depopulation. Between disease, poor management, and ill-timed hurricanes, several Spanish attempts to establish new settlements in La Florida ended in failure. With no gold or silver in the region, Spain regarded Florida (and particularly the heavily fortified town of St. Augustine) primarily as a buffer between its more prosperous colonies to the south and west and several newly established rival European colonies to the north. The establishment of the Province of Carolina by the English in 1663, New Orleans by the French in 1718, and the Province of Georgia by Great Britain in 1732 limited the boundaries of Florida over Spanish objections. The War of Jenkins' Ear (1739–1748) included a British attack on St. Augustine and a Spanish invasion of Georgia, both of which were repulsed. At the conclusion of the war, the northern boundary of Spanish Florida was set near the current northern border of modern-day Florida.
Great Britain temporarily gained control of Florida beginning in 1763 as a result of the Anglo-Spanish War, but while Britain occupied the territory, it did not develop it further. Sparsely populated British Florida stayed loyal to the Crown during the American Revolutionary War, and by the terms of the Treaty of Paris which ended the war, the territory was returned to Spain in 1783. After a brief diplomatic border dispute with the fledgling United States, the countries set a territorial border and allowed Americans free navigation of the Mississippi River by the terms of Pinckney's Treaty in 1795.
France sold Louisiana to the United States in 1803. The U.S. claimed that the transaction included West Florida, while Spain insisted that the area was not part of Louisiana and was still Spanish territory. In 1810, the United States intervened in a local uprising in West Florida, and by 1812, the Mobile District was absorbed into the U.S. territory of Mississippi, reducing the borders of Spanish Florida to that of modern Florida.
In the early 1800s, tensions rose along the unguarded border between Spanish Florida and the state of Georgia as settlers skirmished with Seminoles over land and American slave-hunters raided Black Seminole villages in Florida. These tensions were exacerbated when the Seminoles aided Great Britain against the United States during the War of 1812 and led to American military incursions into northern Florida beginning in late 1814 during what became known as the First Seminole War. As with earlier American incursions into Florida, Spain protested this invasion but could not defend its territory, and instead opened diplomatic negotiations seeking a peaceful transfer of land. By the terms of the Adams–Onís Treaty of 1819, Spanish Florida ceased to exist in 1821, when control of the territory was officially transferred to the United States.

Spanish Formosa
Spanish Formosa (Spanish: Formosa Española) was a small Spanish colony established in the northern tip of the island known to Europeans at the time as Formosa (now Taiwan) from 1626 to 1642. It was conquered by the Dutch in the Eighty Years War.
The Portuguese were the first Europeans to reach the island off the southern coast of China in 1544, and named it Formosa (Portuguese for "beautiful") due to the beautiful landscape as seen from the sea. The Spanish colony was meant to protect the regional trade with the Philippines from interference by the Dutch base in the south of the island. The colony was short-lived due to the unwillingness of Spanish colonial authorities in Manila to commit more men and materiel to its defense.
After seventeen years, the last fortress of the Spanish was besieged by Dutch forces and eventually fell, giving the Dutch control over much of the island.

Spanish West Indies
The Spanish West Indies or the Spanish Antilles (also known as "Las Antillas Occidentales" or simply "Las Antillas Españolas" in Spanish) was the former name of the Spanish colonies in the Caribbean. In terms of governance of the Spanish Empire, The Indies was the designation for all its overseas territories and was overseen by the Council of the Indies, founded in 1524 and based in Spain. When the crown established the Viceroyalty of New Spain in 1535, the islands of the Caribbean came under its jurisdiction.
The islands claimed by Spain were Hispaniola, modern Haiti and the Dominican Republic; Cuba, Puerto Rico, Saint Martin, the Virgin Islands, Anguilla, Montserrat, Guadalupe and the Lesser Antilles, Jamaica, the Cayman Islands, Venezuela (Margarita island), Trinidad, and the Bay Islands.
The islands that later became the Spanish West Indies were the focus of the voyages of the Spanish expedition of Christopher Columbus in America. Largely due to the familiarity that Spaniards gained from Columbus's voyages, the islands were also the first lands to be permanently colonized by Europeans in the Americas. The Spanish West Indies were also the most enduring part of Spain's American Empire, only being surrendered in 1898 at the end of the Spanish–American War. For over three centuries, Spain controlled a network of ports in the Caribbean including Havana (Cuba), San Juan (Puerto Rico), Cartagena de Indias (Colombia), Veracruz (Mexico), and Portobelo, Panama, which were connected by galleon routes.
Some smaller islands were seized or ceded to other European powers as a result of war, or diplomatic agreements during the 17th and 18th centuries. Others, such as the Dominican Republic, gained their independence in the 19th century.

Spanish colonization of the Americas
The overseas expansion under the Crown of Castile was initiated under royal authority and first accomplished by the Spanish conquistadors. The Americas were incorporated into the Spanish Empire, with the exception of Brazil, Canada, the eastern United States and several other small countries in South America and the Caribbean. The crown created civil and religious structures to administer the region. The motivations for colonial expansion were trade and the spread of the Catholic faith through indigenous conversions.
Beginning with the 1492 arrival of Christopher Columbus in the Caribbean and continuing control of vast territory for over three centuries, the Spanish Empire would expand across the Caribbean Islands, half of South America, most of Central America and much of North America (including present day Mexico, Florida and the Southwestern and Pacific Coastal regions of the United States). It is estimated that during the colonial period (1492–1832), a total of 1.86 million Spaniards settled in the Americas and a further 3.5 million immigrated during the post-colonial era (1850–1950); the estimate is 250,000 in the 16th century, and most during the 18th century as immigration was encouraged by the new Bourbon Dynasty. In contrast, the indigenous population plummeted by an estimated 80% in the first century and a half following Columbus's voyages, primarily through the spread of Afro-Eurasian diseases. This has been argued to be the first large-scale act of genocide in the modern era, although this claim is largely disputed due to the unintended nature of the disease introduction, which is considered a byproduct of Columbian exchange. Racial mixing was a central process in the Spanish colonization of the Americas, and ultimately led to the Latin American identity, which combines Hispanic and native American ethnicities.
Spain enjoyed a cultural golden age in the sixteenth and seventeenth centuries when silver and gold from American mines increasingly financed a long series of European and North African wars. In the early 19th century, the Spanish American wars of independence resulted in the secession and subsequent balkanization of most Spanish colonies in the Americas, except for Cuba and Puerto Rico, which were finally given up in 1898, following the Spanish–American War, together with Guam and the Philippines in the Pacific. Spain's loss of these last territories politically ended Spanish rule in the Americas.

Spanish conquest of the Aztec Empire
The Spanish conquest of the Aztec Empire, or the Spanish–Mexica War (1519–21), was the conquest of the Aztec Empire by the Spanish Empire within the context of the Spanish colonization of the Americas. There are multiple 16th-century narratives of the events by Spanish conquerors, their indigenous allies and the defeated Aztecs. It was not solely a contest between a small contingent of Spaniards defeating the Aztec Empire but rather the creation of a coalition of Spanish invaders with tributaries to the Aztecs, and most especially the Aztecs' indigenous enemies and rivals. They combined forces to defeat the Mexica of Tenochtitlan over a two-year period. For the Spanish, the expedition to Mexico was part of a project of Spanish colonization of the New World after twenty-five years of permanent Spanish settlement and further exploration in the Caribbean.
Following an earlier expedition led by Juan de Grijalva to Yucatán in 1518, the Spanish settler Hernán Cortés led an expedition (entrada) to Mexico. The following year, in 1519, Cortés and his retinue set sail from Cuba for Mexico. The Spanish campaign against the Aztec Empire had its final victory on August 13, 1521, when a coalition army of Spanish forces and native Tlaxcalan warriors led by Cortés and Xicotencatl the Younger captured the emperor Cuauhtemoc and Tenochtitlan, the capital of the Aztec Empire. The fall of Tenochtitlan marks the beginning of Spanish rule in central Mexico, and they established their capital of Mexico City on the ruins of Tenochtitlan.
Cortés made alliances with tributary city-states (altepetl) of the Aztec Empire as well as their political rivals, particularly the Tlaxcalteca and Texcocans, a former partner in the Aztec Triple Alliance. Other city-states also joined, including Cempoala and Huexotzinco and polities bordering Lake Texcoco, the inland lake system of the Valley of Mexico. Particularly important to the Spanish success was a multilingual (Nahuatl, a Maya dialect, and Spanish) indigenous slave woman, known to the Spanish conquistadors as Doña Marina, and generally as La Malinche. After eight months of battles and negotiations, which overcame the diplomatic resistance of the Aztec Emperor Moctezuma II to his visit, Cortés arrived in Tenochtitlan on November 8, 1519, where he took up residence with fellow Spaniards and their indigenous allies. When news reached Cortés of the death of several of his men during the Aztec attack on the Totonacs in Veracruz, he took Moctezuma captive, along with Cuitláhuac, his kinsman. Capturing the cacique or indigenous ruler was standard operating procedure for Spaniards in their expansion in the Caribbean, so capturing Moctezuma had considerable precedent.
When Cortés left Tenochtitlan to return to the coast and deal with the expedition of Pánfilo de Narváez, sent to rein in Cortés's expedition that had exceeded its specified limits, Cortés's right-hand man Pedro de Alvarado was left in charge. Alvarado allowed a significant Aztec feast to be celebrated in Tenochtitlan and, on the pattern of the earlier massacre in Cholula, closed off the square and massacred the celebrating Aztec noblemen. The official biography of Cortés by Francisco López de Gómara contains a description of the massacre. The Alvarado massacre at the Main Temple of Tenochtitlan precipitated rebellion by the population of the city. Moctezuma was killed, although the sources do not agree on who murdered him. According to one account, when Moctezuma, now seen by the population as a mere puppet of the invading Spaniards, attempted to calm the outraged populace, he was killed by a projectile. According to an indigenous account, the Spanish killed Moctezuma. Cortés had returned to Tenochtitlan, and he and his men fled the capital city during the Noche Triste in June 1520. The Spanish, Tlaxcalans and reinforcements returned a year later, on August 13, 1521, to a civilization that had been weakened by famine and smallpox. This made it easier to conquer the remaining Aztecs.

Many of those on the Cortés expedition of 1519 had never seen combat before, including Cortés. A whole generation of Spaniards later participated in expeditions in the Caribbean and Tierra Firme (Central America), learning strategy and tactics of successful enterprises. The Spanish conquest of Mexico had antecedents with established practices.

The fall of the Aztec Empire was the key event in the formation of the Spanish Empire overseas, with New Spain, which later became Mexico.
If you spend any time editing or outputting video, you will come across the term Codec. Because there are so many of them, and it is difficult to tell the difference between them, we put together a quick look to help you get started. If you can understand certain terms, you can better decide which one fits your needs. Let’s start at the beginning with a simple definition.
Codec is really the meshing of two words: coder and decoder (co/dec). What do they do? In the simplest terms, because video files are so large, you need a way to make them smaller. The codec encodes, compressing the data for storage or sending, then decompresses for playback or editing.
A codec is a computer code that performs its function whenever the file is called up by a piece of software. Codecs can also be used in a physical piece of hardware, like your camera, turning incoming video and audio into a digital format. This happens in real time, either at the point of capture or the point of playback. The codec also reverses the function and turns digital video and audio signals into a playback format. Unless you’re a broadcast engineer, however, you will rely on your computer or device to select a codec. The hardware compresses your video and audio data into a manageable size for viewing, transfer or storage.
Types of Codecs
Now that you know what codecs are, let’s look at the variety of codecs out there. Then you can decide which one best fits your needs.
You’ll find thousands of codecs, grouped under a variety of umbrellas. Lossless codecs are just like they sound: they reproduce video exactly as it is, without any loss in quality. Lossy codecs, on the other hand, lose a small amount of information but can compress material into a much smaller file. Lossy codecs are great for compressing data that needs to be sent via e-mail or uploaded to the internet. Use caution when choosing a lossy codec, though; some formats introduce visible color shifting.
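To make the lossless/lossy distinction concrete, here is a toy Python sketch (not tied to any real video codec): the standard-library zlib module round-trips the data exactly, while a crude lossy step quantizes the values first, shrinking the result further at the cost of detail that can never be recovered.

```python
import zlib

# A toy "signal": 1,000 samples that vary slowly, like neighboring pixels.
signal = bytes((i // 10) % 256 for i in range(1000))

# Lossless: zlib round-trips the data exactly.
packed = zlib.compress(signal)
assert zlib.decompress(packed) == signal  # every byte recovered

# "Lossy": throw away the low bits (quantize), then compress.
# The result is smaller, but the original can no longer be recovered exactly.
quantized = bytes(b & 0b11110000 for b in signal)
packed_lossy = zlib.compress(quantized)

print(len(signal), len(packed), len(packed_lossy))
# e.g. 1,000 original bytes, fewer after lossless packing, fewer still after the lossy step
```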
Transformative codecs cut the material into smaller chunks before actually compressing it. Predictive codecs compare the data being compressed with adjacent data and get rid of unnecessary data, which creates a smaller file. Overall, all codecs work toward the same end: put your data into a manageable file type with as little loss of quality as possible.
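As a rough illustration of the predictive idea (a toy sketch, not any real codec), delta encoding stores each value as its difference from the previous one; slowly changing data turns into runs of small numbers that compress far better.

```python
def delta_encode(values):
    """Store the first value, then only the change from each previous value."""
    encoded = [values[0]]
    for prev, cur in zip(values, values[1:]):
        encoded.append(cur - prev)
    return encoded

def delta_decode(encoded):
    """Rebuild the original values by accumulating the stored differences."""
    values = [encoded[0]]
    for diff in encoded[1:]:
        values.append(values[-1] + diff)
    return values

# Slowly rising "brightness" values, like adjacent frames of video.
samples = [100, 101, 101, 102, 104, 104, 105]
assert delta_decode(delta_encode(samples)) == samples
print(delta_encode(samples))  # [100, 1, 0, 1, 2, 0, 1] -- mostly tiny numbers
```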
The most widely recognized family of codecs is based on MPEG standards. MPEG is an acronym for Moving Picture Experts Group, the organization that sets and codifies the standards. There are a number of primary MPEG formats and a multitude of derivative types.
MPEG-1 is an early standard that compresses video and audio into a single data stream, originally designed for bit rates around 1.5 Mbit/s. The MP3 (MPEG-1 Layer 3) standard for audio compression, developed by Fraunhofer, is an application of the MPEG-1 standard, although MPEG-1 video does not always include MP3 audio.
Almost all computers and consumer DVD players support both MPEG-1 and the MP3 digital audio encoding formats. One drawback is that MPEG-1 allows only for progressive scanning. Progressive scanning is a method of storing and displaying moving images where all of the lines of the image are drawn in sequence. This is in contrast to interlaced scanning, where all the odd lines of an image are drawn first, then all of the even lines are drawn. MP3, while lossy and quite small, is the standard for nearly all digital music storage devices, audio players and retail sites. The typical MP3 audio file is encoded at 128 kilobits per second, around 1/11th of the size of the original audio data that would be on a CD.
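The 1/11th figure can be checked with quick arithmetic: uncompressed CD audio runs at 44,100 samples per second, 16 bits per sample, in two channels.

```python
# Uncompressed CD audio bitrate vs. a typical 128 kbit/s MP3.
cd_bitrate = 44_100 * 16 * 2     # samples/s * bits/sample * channels = 1,411,200 bit/s
mp3_bitrate = 128_000            # 128 kbit/s

print(cd_bitrate / mp3_bitrate)  # ~11.0, i.e. roughly an 11:1 reduction
```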
MPEG-4 files use both progressive and interlaced video. It employs better compression techniques than MPEG-1 and is a widely-accepted compression standard. In fact, there are a number of codecs that are derived from MPEG-4. One is the H.264 codec, which is another option for encoding video for Blu-ray Disc, as well as for videos found on the iTunes store. H.264 is a family of standards with great flexibility and a wide variety of applications. H.264 enables compression for high and low bit rates and both high and low video resolutions. Adjusting size allows users to use this same standard for compressing for broadcast, multimedia usage and large file storage.
ProRes is another widely used codec. Developed by Apple as Apple ProRes, it is found in Apple products such as Final Cut Pro and iMovie. It comes in several variants, including ProRes 422, ProRes 4444 and ProRes RAW. Developers boast that it handles up to 8K media with superior playback, and superior color resolution is also a main feature.
Another well-known codec, or family of codecs, is WMV, or Windows Media Video. With the glut of Windows users out there, it is no wonder this codec family is so popular.
Originally designed to compress files for internet streaming, WMV was introduced as a competitor to the RealVideo compression codec. Microsoft’s WMV 9 has been around for quite some time at this point, and Microsoft claims that it provides a compression ratio that is two times better than MPEG-4 and three times better than MPEG-2. WMV 9 is also the basis of the SMPTE VC-1 video compression standard, which is another format that can be used for encoding video for Blu-ray Disc.
Other Codecs and Containers
Be sure to note the difference between a codec and a container. So what is a container? It is a lot like the wrapping on a present: it refers to the way in which information is stored, but not how it is coded. For example, QuickTime is a container that can be wrapped around a variety of compression codecs, like MPEG-4, k3g, skm and others.
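One way to see the container/codec split is to look at the container's own structure. A QuickTime or MP4 file is a sequence of typed "boxes", each starting with an 8-byte header (a 4-byte big-endian size followed by a 4-byte type tag); the codec-compressed frames sit inside the 'mdat' box, while 'moov' carries the metadata describing which codec was used. Below is a minimal sketch that lists top-level boxes; it skips edge cases such as 64-bit box sizes, and the file name is only a placeholder.

```python
import struct

def list_top_level_boxes(path):
    """Yield (type, size) for each top-level box in an MP4/QuickTime file.

    Minimal sketch: skips 64-bit extended sizes and boxes that claim to
    run to end-of-file, which a real parser would have to handle.
    """
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size < 8:        # extended or EOF-sized box: bail out in this sketch
                break
            yield box_type.decode("ascii", errors="replace"), size
            f.seek(size - 8, 1)  # skip the payload of this box

# Example (placeholder file name): a typical file shows boxes such as
# 'ftyp', 'moov' and 'mdat'; the codec-encoded frames live inside 'mdat'.
for kind, size in list_top_level_boxes("example.mov"):
    print(kind, size)
```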
What Do YOU Need?
So, which codec do you choose? You will need a little trial and error to discover the right one. Ask yourself a few questions: Are you compressing for storage or for high-quality viewing? Are you okay with a little data loss, or do the finished files need to be clean and pristine? Work backwards and do some research. Find out what the pros are doing to get the same results; they use the best tools, so you are likely to find the codec that is right for the job.
So now that you have at least a slightly better understanding of some of the more popular codecs, it might also be helpful to know which formats utilize those codecs. Read our associated story Transmission Formats for more.
According to long-term observations, most star systems in the universe appear to contain at least two stars. In 1984, the American physicist Richard Muller proposed the conjecture that the sun has a companion star: a distant red dwarf or brown dwarf that orbits the sun in an elliptical orbit and passes through the Oort cloud (a spherical cloud of icy planetesimals surrounding the solar system) roughly every 26 million years, disturbing a large number of comets into the inner solar system and causing periodic mass extinctions on Earth. The existence of this hypothesized companion star, called Nemesis, has not been confirmed, but the Oort Cloud seems to provide clues for finding it.
The Oort Cloud is a spherical cloud cluster surrounding the solar system.
The Oort cloud can be understood as the boundary of the solar system; its farthest reaches lie roughly 100,000 times as far from the sun as the Earth does. The icy planetesimals that make up the Oort Cloud are bound by the sun’s weak gravitational pull and gather into a shell that surrounds the solar system. Astronomers reason that if the sun had always been a single star, the theoretical density of the matter that constitutes the Oort cloud should be less than the density actually inferred today, whereas the stronger gravitational capture of a binary star system could precisely explain this gap between the Oort cloud's theoretical and observed density.
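For scale, here is a minimal sketch converting the quoted 100,000 Earth–sun distances (astronomical units) into light-years, using the standard values of the astronomical unit and the light-year.

```python
AU_M = 1.496e11          # one astronomical unit in meters
LIGHT_YEAR_M = 9.461e15  # one light-year in meters

outer_oort_m = 100_000 * AU_M
print(outer_oort_m / LIGHT_YEAR_M)  # ~1.6 light-years, a sizable fraction of the
                                    # ~4.2 light-years to the nearest star
```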
If evidence can be found that the Oort cloud was captured with the help of a binary companion, then the theory of solar system formation will have to be rewritten, and it will help answer all kinds of questions about the origin of life on Earth. Comets originating from the Oort Cloud brought water, a source of life, to the Earth, and may also have caused the later extinction of the dinosaurs. So why did the companion star that supposedly helped the sun capture the Oort Cloud leave, and where did it go? Astronomers speculate that the companion was pulled away from the sun by a star passing near the solar system. This sort of encounter often occurs in young star clusters, and most of the Oort cloud’s material would have been stripped away at the same time. Today, this possible companion star may have drifted to a corner of the Milky Way that we do not know.
In 2006, the International Astronomical Union officially expelled Pluto from the ranks of the planets and reclassified it as a dwarf planet. The concept of eight planets became established, but astronomers are still working on finding an unknown “Planet Nine.” Interestingly, astronomers’ various simulations of “Planet Nine” give fairly consistent results. Some simulations suggest a massive body roughly 2 to 4 times the size of the Earth, with a mass about 10 times that of the Earth, taking 10,000 to 20,000 years to orbit the sun. Other simulations indicate that the planet is likely an exoplanet captured by the young sun about 4.5 billion years ago. When the sun formed together with other stars in a cluster, the positions of the stars were not fixed and they often passed close to one another. During such an encounter, the sun may have “stolen” one or more planets from another star, and these gradually left the cluster together and formed today’s solar system. This also seems to fit the companion-star theory: if the sun captured the Oort Cloud with the help of a companion star, it may also have captured “Planet Nine.”
“Planet Nine” may be a small black hole.
In recent years, astronomers have discovered some strange celestial bodies in the distant outer solar system that orbit beyond Neptune and have nearly the same perihelion. These bodies are called trans-Neptunian objects (TNOs). Some astronomers believe these TNOs are too far from Neptune for it to shape their orbits; it would therefore not be Neptune but a “Planet Nine” of 5 to 15 Earth masses that drives their extreme orbits. “Planet Nine” may lie far beyond the orbit of Pluto, at a distance from the sun hundreds of times greater than that of the Earth.
Another explanation even suggests that “Planet Nine” may be a primordial black hole. Primordial black holes are thought to have formed within seconds after the Big Bang, but their existence has not yet been confirmed. The basis for this hypothesis comes from gravitational anomalies discovered by the Optical Gravitational Lensing Experiment (OGLE). Using OGLE, astronomers monitor the sky and look for gravitational microlensing events: when a massive foreground object such as a black hole passes in front of a background object (such as a star), it distorts and magnifies the light of the background object like a lens. Through such observations, scientists have found 6 microlensing events, occurring about 26,000 light-years away toward the center of the Milky Way, roughly the same as the distance from the sun to the galactic center. Black holes are extremely dense: a black hole about 5 times the mass of the Earth would be only the size of a persimmon, yet nothing that crosses its horizon, not even light, can escape. If the sun had captured such a black hole, it too would affect the orbits of the TNOs.
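The persimmon comparison can be checked against the Schwarzschild radius formula r_s = 2GM/c^2; a minimal sketch with standard physical constants (the five-Earth-mass figure comes from the passage above):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_EARTH = 5.972e24   # Earth mass, kg

mass = 5 * M_EARTH
r_s = 2 * G * mass / C**2

print(r_s)           # ~0.044 m, i.e. an event horizon roughly 9 cm across
```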
In 2022, the Vera Rubin Observatory will officially start operation. One of its missions is to find outer solar system objects other than Pluto. We look forward to more new discoveries about “Planet Nine” by then.
Vera Rubin Observatory
The Vera Rubin Observatory (also known as the Large Synoptic Survey Telescope, or “LSST”) is named after the late American astronomer Vera Rubin, who confirmed the existence of dark matter in galaxies. LSST is located on the El Peñón peak of Cerro Pachón in the Coquimbo region of northern Chile, at an altitude of 2,682 meters, next to the Gemini South telescope and the Southern Astrophysical Research Telescope. Construction began on August 1, 2014, and a ten-year program of observational work is scheduled to officially begin in 2022.
LSST is equipped with a primary mirror 8.4 meters in diameter. On September 8, 2020, the LSST camera team released the first batch of 3,200-megapixel digital photos; they are not only the largest images ever taken in a single shot, but also a successful test of the camera's focal plane. The focal plane of the LSST camera contains 3.2 billion pixels, each about 10 microns wide, and the focal plane itself is extremely flat, varying by no more than one-tenth the width of a human hair. This allows the LSST camera to produce very high-resolution images. The images taken by the LSST camera are so huge that 378 4K ultra-high-definition TV screens would be required to display one at its true full size. The focal plane is large enough to cover a patch of sky the size of 40 full moons, which will enable LSST to image the entire visible southern sky every few nights.
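As a rough sanity check of the figures above, dividing 3.2 billion pixels by the pixel count of a single 4K UHD screen (3840 × 2160) gives a number of screens in the same ballpark as the 378 quoted.

```python
lsst_pixels = 3.2e9
uhd_4k_pixels = 3840 * 2160          # ~8.3 million pixels per 4K UHD screen

print(lsst_pixels / uhd_4k_pixels)   # ~386 screens, in line with the ~378 cited above
```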
All About Pediatric Leukemia
What is leukemia?
In general terms, leukemia is a cancer of blood cells. Since all blood cells are derived from the bone marrow—the spongy space inside of bones—this is the anatomic origin of leukemia. But, leukemia is considered a "liquid" tumor in that it travels in the bloodstream (and therefore to all tissues) and is not based solely in one organ the way some other cancers are. Leukemia is classified in two different ways: (1) based on the type of bone marrow cell from which it originates, either lymphoid or myeloid, and (2) based on the stage of maturation of the tumor cell. Leukemia that arises from immature cells—so-called acute leukemia—is much more common in children than leukemia that arises from more mature cells—chronic leukemia. In acute leukemia, the leukemia cell is called a "blast" because it is very immature. This overview will focus on the two most common types of childhood leukemia, acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML). Within these two broad groups there are many subtypes, some of which are described below.
Leukemia is the most common type of cancer in children - ALL accounts for approximately 75% of all childhood leukemia, and AML about 20%. While both of these major subtypes have similar presenting symptoms and initial diagnostic evaluations (and will be reviewed together in these respects), the treatments differ dramatically.
Who is at risk for leukemia?
Like most childhood cancers, leukemia most commonly occurs randomly—or sporadically—in otherwise healthy children without known risk factors. There is a peak in the occurrence of ALL in children aged 2-4 years, but it is not known why this occurs. Certain subtypes of ALL also seem to affect different age groups; for example, T-cell ALL is more common in adolescents compared to younger children.
Certain underlying medical conditions can predispose to the development of leukemia. Children with certain rare inherited immunodeficiency syndromes are at increased risk for ALL. Exposure to specific types of chemotherapy or radiation (to treat cancer) increases a child's risk of developing AML. Children with Trisomy 21 (Down syndrome) have a greatly increased risk of developing both ALL and AML.
What are the symptoms of leukemia?
Patients with leukemia may have many different types of symptoms before they are diagnosed, but there are a few common clinical features that make providers suspect that leukemia may be present. Because leukemia arises from the bone marrow where normal blood cells (white blood cells, red blood cells, and platelets) are made, one of the most common features associated with the diagnosis of acute leukemia is a decrease in these normal cells. This occurs when the leukemia blasts literally take up too much space in the bone marrow for the other normal cells to be produced. When normal white blood cells are low, children can develop unexplained fevers or infections; low red blood cell count can cause children to develop signs of anemia such as sleepiness, fatigue, pale skin/eyes, headaches, or dizziness; and low platelets can cause bruising, nosebleeds, or gum bleeding. Frequently, leukemia cells (blasts) overflow from the marrow and can be seen by a hematologist or oncologist when the blood is examined under the microscope. Leukemia cells can also cause enlargement of liver and spleen, as well as lymph nodes in the neck, armpit, and groin.
In addition, because leukemia cells expand in the bone marrow space, they can exert pressure on the surrounding normal bone, causing pain. In small children this may manifest as a limp or refusal to walk or use an extremity.
How is leukemia diagnosed?
If a doctor suspects that a child may have leukemia, the first step after a thorough history and physical exam, is typically to obtain a panel of blood tests (done through a lab draw in which a small amount of blood is taken with a needle), including blood counts (to determine if leukemia cells are present in the blood, and if the white cells, red cells, or platelets are low) and testing of electrolytes, kidney, and liver function. Frequently, a chest X-ray is performed to see if there is a collection of leukemia cells in an immune organ of the upper chest, called the thymus. In order to definitively establish the diagnosis of acute leukemia in children a test called a bone marrow examination is performed, in which an oncologist inserts a needle into the hip bone while the child is sedated. This involves two parts performed as one procedure: an aspirate (pulling liquid marrow out of the bone) and a biopsy (taking a small piece of solid marrow). At the same time, a sample of spinal fluid is usually obtained to check to see if leukemia cells are present in the central nervous system (CNS)—this is done by a lumbar puncture.
At this point, it is possible to determine what type of leukemia a child has—usually either ALL or AML.
How is acute leukemia staged?
Because leukemia is a "liquid" tumor and is present in the bloodstream, imaging tests like CT and MRI scans are not typically used for diagnosis or staging. Instead, certain characteristics of the patient, the clinical presentation, and the leukemia cells themselves are used to assign patients to a risk category, which in turn determines what treatment is needed.
In children with ALL, the following features are important in determining risk category:
- Age: Children ≥10 years are at higher risk than children ages 1-9 years. Children <1 year represent a unique group of patients and are treated differently because ALL in infants is very challenging to cure.
- White blood cell count at diagnosis: In many patients, leukemia cells (blasts) have leaked into the bloodstream and cause the white blood cell count to be elevated (even though normal, healthy white blood cells are actually decreased). When the total level of blasts is very high, this puts a patient into a higher risk group.
- Cell of origin: ALL can originate from two different types of lymphoblasts, either B cells (80% of ALL cases) or T cells (20%). T cell disease is more difficult to cure than B cell disease, and is therefore treated more aggressively.
- Central Nervous System (CNS) status: If leukemia cells are found in the spinal fluid, this requires an increased intensity of therapy and sometimes radiation.
- Genetics of the leukemia cells: All cancer cells undergo genetic changes that contribute to the initial transformation from normal cell to cancer cell. The type of genetic changes can help determine how aggressive (or not) a leukemia is. In ALL, certain changes indicate a better prognosis and others indicate that the leukemia is more difficult to treat.
AML treatment is determined predominately by the genetics of the leukemia cells (see above). As is true for patients with ALL, if AML is found in the CNS, increased intensity therapies are used.
For both types of acute leukemia, these features will enable the oncologist to categorize the patient into a specific risk group and subsequently select an initial treatment plan. Afterwards, the response to therapy is monitored closely, and measured by looking at the blood and the bone marrow. After the first phase of therapy (see below), tests are performed to determine if a child with leukemia is in remission and the quality of this remission. Based on this determination, treatment may or may not be altered. As with many types of childhood cancer, this response to therapy has major prognostic implications.
How is acute leukemia treated?
For both ALL and AML, chemotherapy is the major form of treatment, and is divided into different phases of therapy. The structure and intensity of these phases differs significantly. This chemotherapy is administered intravenously and/or orally (systemically) so it can attack the leukemia cells in the bloodstream and bone marrow. In addition, all patients receive some chemotherapy administered directly into the spinal fluid (intrathecal chemotherapy), even if no leukemia cells are found in the CNS at diagnosis. The drugs are given directly into the spinal fluid because most drugs that are given by mouth or vein do not travel well into the CNS. Placing chemotherapy into the spinal fluid serves to prevent the leukemia from coming back in the CNS. In certain cases, radiation to the brain will be administered, although the frequency with which radiation is used in childhood acute leukemia treatment is diminishing.
The initial treatment of acute leukemia involves not just appropriate chemotherapy, but management of complications that the leukemia cells cause, such as giving transfusions of red blood cells or platelets for low blood counts. Occasionally these complications constitute medical emergencies and can be life-threatening; sometimes these improve with prompt initiation of anti-leukemia therapy, but frequently additional measures are taken to ensure that these emergencies are not exacerbated early in the course of treatment.
Treatment for ALL is divided into several major phases—while the names and details of these phases vary depending on the specific regimen that the oncologist selects (and the risk group of the patient), the general structure is the same for almost all forms of ALL. The early phases of therapy are generally intensive chemotherapy (although still mostly outpatient), and are followed by a long period of low-intensity therapy called maintenance (or continuation) that is all outpatient and mostly oral (by mouth) chemotherapy.
- Induction: Initially, patients receive a month-long period of chemotherapy that is designed to put the leukemia into remission, which means that no leukemia can be seen when the bone marrow is examined under the microscope. This typically involves 3-4 medications, including at least a steroid medicine (prednisone, prednisolone, or dexamethasone), vincristine, and asparaginase, and sometimes other medicines like daunorubicin. Special tests can be used that can detect leukemia at a much higher rate than even a microscope, and they are used to determine the quality of the remission. This information is used to guide further therapy decisions.
- Consolidation (1-2 months): Once remission has been achieved with induction, patients receive different chemotherapy agents to "shore up" the remission and to prophylactically treat the CNS. During this phase, oral and IV medications are given with several rounds of intrathecal chemotherapy.
- Intensification (2 months): This portion of therapy is intensive and essentially repeats some of the earlier phases of therapy in an effort to intensify therapy one last time before entering maintenance.
- Interim Maintenance (1-2 months): These blocks of treatment are inserted between more intensive phases (e.g. between Consolidation and Intensification). They deliver important chemotherapy, but are relatively less intensive and serve as a sort of break.
- Maintenance (1.5-2.5 years): During this period of therapy, children with ALL receive chemotherapy by mouth on a daily and weekly basis, and return to clinic monthly to receive IV medications. During this period children often feel very well, can return to school and normal activities, and typically regrow their hair.
In contrast to ALL, where treatment occurs over several years, AML is treated more briefly (over 6-8 months). Because AML is generally more difficult to cure than ALL, the treatment is intensive and associated with more side effects, particularly from infection. For this reason patients largely remain hospitalized during chemotherapy cycles. Treatment is divided into blocks, which are 4-6 weeks long with brief breaks in between.
In certain situations, the oncologist may decide that a child's AML is not going to be successfully cured with conventional chemotherapy. This decision is based on the genetics of the leukemia cells and/or the individual's response to the initial phases of therapy. If this is the case, a bone marrow transplant (BMT) may be utilized. For AML, successful BMT utilizes a donor other than the patient (called allogeneic BMT), either a sibling, an unrelated volunteer donor, or unrelated umbilical cord blood. The oncologist will determine if the child has a match from any of these donor sources, based on special tests that determine the likelihood of a patient successfully accepting a transplant. Generally, if a patient has a sibling match, this is the preferred donor source; any given full sibling has an approximately 25% chance of being a match.
If BMT is the treatment of choice, this procedure will be performed after 3-4 cycles of intensive chemotherapy. A BMT is performed by administering very high doses of chemotherapy and/or radiation that serve the purpose of treating the AML and allowing for the successful acceptance of the donated blood/marrow, followed by infusion of the donated blood/marrow. BMT is associated with many risks—particularly infection and organ damage—and patients are monitored very closely in the hospital until it is felt that they are safe to go home (typically 4-6 weeks after transplant).
There is a special form of AML, called APML (or APL), that is treated very differently. Because this type of AML responds to medications targeted at very specific aspects of the cancer cells, it is easier to treat, using less intensive regimens.
How successful is treatment for childhood leukemia?
The survival rate for childhood leukemia depends on what general type of leukemia (AML or ALL) a child has, and the specific features of the leukemia that determine the risk category. One of the most important markers for prognosis is response to therapy. It is important to remember that all risk assignment is treatment-based, meaning that a child with so-called "high risk ALL" may have the same chance of cure as a child with a lower risk assignment, but only if they receive more intensive therapy. The child’s oncology team can discuss the specific prognosis based on that child’s disease and treatment plan.
Other types of childhood leukemia
While acute leukemia is by far the most common form of childhood leukemia, other types can occur. Of these, the most common is chronic myelogenous leukemia (CML). This type of leukemia is common in adults and presents in a different way than acute leukemia. Because the leukemia cells in CML almost all have a specific genetic change that responds very well to a specific drug (imatinib mesylate), this drug (or a similar one) is the treatment of choice for control of CML. Children with CML are treated with imatinib, but since this drug only controls the disease without achieving a true cure, more definitive therapy with BMT is sometimes needed.
While not technically classified as leukemia, children can develop myelodysplastic syndrome (MDS), which is a problem with the development and maturation of bone marrow cells that can progress to AML. These patients present with symptoms of low blood counts as well, but bone marrow testing does not reveal acute leukemia cells. These children require BMT for cure.
Follow-up Care and Survivorship
After treatment for childhood cancer, the patient will be followed closely to monitor for the cancer coming back, to help them heal from ongoing side effects, and to help them to transition to survivorship. Initially they will be seen often and have ongoing tests to monitor their health. As time goes on, these visits and testing will become less frequent. The oncology team will discuss each patient’s individual follow up plan with them.
Survivors often wonder what steps they can take to live healthier after cancer. There is no supplement or specific food you can eat to assure good health, but there are things you can do to live healthier, prevent other diseases, detect any subsequent cancers early and work with the social and emotional issues, including insurance, employment, relationships, sexual functioning, and fertility, that a prior cancer diagnosis sometimes brings with it. Your oncology team is there to support you and can help you find support resources.
It is important to have a plan for who will provide your cancer-focused follow up care (an oncologist, survivorship doctor or primary care doctor). Talk with your oncology team about developing a survivorship care plan. If you would like to find a survivorship doctor to review your history and provide recommendations, you can contact cancer centers in your area to see if they have a survivor's clinic or search for a clinic on OncoLink's survivorship clinic list.
National Cancer Institute's Childhood Cancers Page
Childhood Leukemia Information from Children's Hospital of Philadelphia
• The Constitution of India provides for a legislature in each State and entrusts it with the responsibility to make laws for the state.
• Articles 168 to 212 in Part VI of Constitution of India deal with the organisation, composition, duration, officers, procedures, privileges and powers of the state legislature.
Organisation of State Legislature
• In the political system of India, there are two types of states with regard to state legislature.
o Most of the states in India have a unicameral system, while a few others have a bicameral system. A unicameral system has only one House, known as the Legislative Assembly (Vidhan Sabha).
o In a bicameral system, the State has two Houses: the Upper House is known as the Legislative Council (Vidhan Parishad) and the Lower House is known as the Legislative Assembly (Vidhan Sabha).
Method of Abolition or Creation of a State Legislative Council (Vidhan Parishad):
• Article 169 in Constitution of India provides for the method of abolition or creation of a State Legislative Council.
• If a state Legislature passes a resolution by a special majority in favour of the creation of the second chamber, and if Parliament approves such a resolution by a simple majority, the concerned State can have two Houses in its Legislature.
Different types of state legislatures:
• Unicameral Legislature
o As of November 2019, 22 States in India have a unicameral system of state legislature (including Rajasthan). Here, the state legislature consists of the Governor and the Vidhan Sabha.
o Other than these states, two Union territories – Delhi and Puducherry have State Legislatures (Both Unicameral).
• Bicameral Legislature
o 6 States in India, namely Bihar, Andhra Pradesh, Telangana, Karnataka, Maharashtra and Uttar Pradesh, have a bicameral system of state legislature. Here, the state legislature consists of the Governor, the Vidhan Sabha and the Vidhan Parishad.
Composition of State Legislature
Legislative Assembly (Vidhan Sabha)
• Strength:
o The strength can be a maximum of 500 and a minimum of 60 members, varying according to the population of the state.
o Rajasthan Legislative Assembly has 200 members.
o Special Case: For Goa, Arunachal Pradesh & Sikkim the number is fixed at 30, and for Mizoram & Nagaland at 40 & 46 respectively.
• Manner of Election:
o Members of legislative assembly are elected directly by people on basis of Universal Adult Franchise.
• Territorial Constituencies:
o The demarcation of territorial constituencies is to be done in such a manner that the ratio between the population of each constituency and the number of seats allotted to it, as far as practicable, is the same throughout the State.
o Constitution makes special provisions regarding the representation of Scheduled Castes and Scheduled Tribes on basis of population ratios.
• Nominated Members:
o Provision has also been made to nominate one member of the Anglo-Indian Community, if the Governor is of the opinion that the community is not adequately represented in the Assembly.
Legislative Council (Vidhan Parishad)
• The system of composition of the Council as laid down in the Constitution is not final.
o The final power of providing the composition of this Chamber of the State Legislature is given to the Union Parliament.
• But until Parliament Legislates on the matter, the composition shall be as given in the Constitution, which is as follows.
o Strength of legislative Council cannot be more than one-third of the total number of members in the Legislative Assembly of the State and in no case less than 40 members.
o Rajasthan does not have legislative council.
• Manner of Election:
o 1/3rd of total number of members of the Council shall be elected by electorates consisting of members of local bodies, such as municipalities, district boards.
o 1/12th shall be elected by electorates consisting of graduates of three years’ standing residing in that State
o 1/12th shall be elected by electorates consisting of persons engaged for at least three years in teaching in educational institutions within the State, not lower in standard than secondary schools.
o 1/3rd shall be elected by members of the Legislative Assembly from amongst persons who are not members of the Assembly.
o Remainder shall be nominated by the Governor from persons having knowledge or practical experience in respect of such matters as literature, science, art, cooperative movement and social service.
o Thus 5/6th of the members of the Legislative Council are indirectly elected and 1/6th are nominated by the Governor, as the quick check below confirms.
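A minimal arithmetic sketch (using Python's standard fractions module) confirming that the elected shares listed above total 5/6, leaving 1/6 for the Governor's nominees:

```python
from fractions import Fraction

# Shares elected by local bodies, graduates, teachers, and MLAs respectively.
elected = Fraction(1, 3) + Fraction(1, 12) + Fraction(1, 12) + Fraction(1, 3)
nominated = 1 - elected

print(elected, nominated)  # 5/6 1/6
```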
Duration of State Legislature
Legislative Assembly (Vidhan Sabha):
• The duration of the Legislative Assembly is five years from date of its first meeting after the general elections. The Governor has the power to dissolve the Assembly even before the expiry of its term.
• Additionally, during National Emergency, the Parliament by law can extend the term of assembly for a period not exceeding one year at a time and not extending in any case beyond a period of six months after proclamation has ceased to operate.
Legislative Council (Vidhan Parishad):
• Like Rajya Sabha, Legislative council is a continuing chamber. It is a permanent body, unless abolished by the Legislative Assembly and Parliament by due procedure.
• One-third of the members of the Council retire on the expiry of every second year, which means, a term of six years for each member. There is no bar on a member getting re-elected on the expiry of his/her term.
Membership of State Legislature
A person shall not be qualified to be chosen to fill a seat in the Legislature of a State unless he
• is a citizen of India
• is, in the case of a seat in the Legislative Assembly, not less than twenty-five years of age and, in the case of a seat in the Legislative Council, not less than thirty years of age and
• makes and subscribes before a person authorized by the Election Commission, an oath or affirmation, according to the form prescribed in Third Schedule.
• possesses such other qualifications as may be prescribed in that behalf by or under any law made by Parliament.
Accordingly, the Parliament by the Representation of the People Act, 1951, has provided additional qualifications that:
• A person shall not be elected either to the Legislative Assembly or the Council, unless he is himself an elector (registered as Voter) for any Legislative Assembly constituency in that State.
• To contest a seat reserved for SC/ST, a person must be a member of SC/ST. However, a member of SC/ST can also contest a seat not reserved for them.
A person shall be disqualified for being chosen as and for being a member of the Legislative Assembly or Legislative Council of a State if he
• holds any office of profit under the Government of India or the Government of any State, other than that of a Minister for the Indian Union or for a State or an office declared by a law of the State not to disqualify its holder.
• is of unsound mind as declared by a competent Court
• is an un-discharged insolvent
• is not a citizen of India or has voluntarily acquired the citizenship of a foreign State or is under any acknowledgment of allegiance or adherence to a foreign State
• is so disqualified by or under any law made by Parliament
Accordingly, the Parliament through, the Representation of the People Act, 1951, has laid down some grounds of disqualification:
• Conviction by a court; having been found guilty of a corrupt or illegal practice in relation to an election; or being a director or managing agent of a corporation in which the Government has a financial interest (under conditions laid down in that Act).
• Art. 192 lays down that if any question arises as to whether a member of a House of the Legislature of a State has become subject to any of the disqualifications mentioned above, the question shall be referred to the Governor of that State for decision who will act according to the opinion of Election Commission.
Disqualification on ground of defection:
• The Tenth Schedule to Constitution provides for disqualification of the members on ground of defection.
o Defection refers to desertion of one’s party in favor of an opposing one.
• The question of disqualification under Tenth Schedule is decided by Speaker in Vidhan Sabha and Chairman in Vidhan Parishad.
Officers of The State Legislature
Presiding Officers of State Legislature:
Each house of state legislature has its own presiding officer.
• Legislative Assembly:
o Speaker of Assembly
o Deputy Speaker of Assembly
• Legislative Council
o Chairman of Council
o Deputy Chairman of Council
Speaker of Rajasthan Vidhan Sabha
• The assembly itself elects the Speaker from amongst its members.
• Usually, the Speaker remains in office during the life of the assembly. However, he vacates his office earlier in any of the three following cases.
(a) Shall vacate his office if he ceases to be a member of the Assembly
(b) May at any time by writing under his hand addressed, if such member is the Speaker, to the Deputy Speaker, and if such member is the Deputy Speaker, to the Speaker, resign his office and
(c) May be removed from his office by a resolution of the Assembly passed by a majority of all the then members of the Assembly. Such a resolution can be moved only after giving 14 days advance notice.
Powers & Functions of Speaker of Legislative Assembly:
• Maintain order and decorum in assembly for conducting its business
• Final interpreter of:
o Constitution of India
o Rules of procedure and conduct of business of assembly
o Legislative precedents within the assembly
• Adjourns assembly or suspends meeting in absence of quorum
• Does not vote in the first instance but can exercise a casting vote in case of a tie
• At request of leader of assembly, allow secret sitting of the assembly
• Decides whether a bill is money bill
• Decides on cases of disqualification of members on ground of defection under X Schedule
• Appoints Chairman of all committees of assembly and supervises their working
• Himself is the Chairman of:
o Business Advisory Committee
o Rules Committee
o General Purpose Committee
• First Speaker of Rajasthan Assembly: Narottam Lal Joshi
Deputy Speaker of Rajasthan Vidhan Sabha
• Like the Speaker, the Deputy Speaker is also elected by the assembly itself from among its members. The election for Deputy Speaker takes place after the election of the Speaker.
• Article 180: Power of the Deputy Speaker or other person to perform the duties of the office of, or to act as, Speaker.
• While the office of Speaker is vacant, the duties of the office shall be performed by the Deputy Speaker or, if the office of Deputy Speaker is also vacant, by such member of the Assembly as the Governor may appoint for the purpose.
• The Speaker nominates from among the members a panel of chairmen. Any one of them can preside over the assembly in the absence of both the Speaker and the Deputy Speaker.
Pro-tem Speaker of Rajasthan Vidhan Sabha
• Pro tem is a Latin phrase (short for pro tempore) that best translates to “for the time being” in English. A legislative body can have a presiding officer pro tempore who holds the post temporarily.
• As per the Constitution, the Speaker of the last Assembly vacates his office immediately before the first meeting of the newly- elected assembly.
• Therefore, the Governor appoints a member of the assembly as the Speaker Pro-tem. Usually, the senior-most member is selected for this.
• The Governor himself administers the oath to the Speaker Pro-tem.
Functions of the Pro-tem Speaker:
• He presides over the first sitting of the newly elected assembly.
• His main duty is to administer oath to the new members.
• He also enables the House to elect the new Speaker. When the House elects the new Speaker, the office of the Speaker Pro-tem ceases to exist.
• First Pro-tem Speaker of Rajasthan Assembly: Maharav Sangram Singh
Powers and Functions of the State Legislature
• Each State Legislature exercises law-making powers over the subjects of the State List and the Concurrent List.
• In case a state has only Legislative Assembly, all the powers are exercised by it.
• However, even in case it is a bicameral state legislature with both Vidhan Sabha and Parishad, the Vidhan Sabha exercises almost all the powers.
• The Vidhan Parishad plays only a secondary, advisory and minor role.
Legislative Procedure in State Legislature
• The Governor of Rajasthan summons the House from time to time, ensuring that the interval between the last sitting of one session and the first sitting of the next session does not exceed six months.
• As per the Rules, Rajasthan Legislative Assembly shall have at least three sessions in a calendar year. The business of the House is decided by the House on the recommendation of the Business Advisory Committee.
There are two types of procedural devices available: Questions and Motions.
• Questions: There are three categories of questions: Starred Questions, Unstarred Questions and Short Notice Questions. Questions must be given in the prescribed form, with 14 clear days' notice for Starred and Unstarred Questions and less than 10 clear days' notice for Short Notice Questions.
• Motions: Besides questions, the members may raise the matters of urgent and current public importance before the House through the devices like Half an Hour Discussion, Calling Attention Motion, Notice Under Rule 295 (Special Mention Procedure) for Short Duration Discussion, Adjournment Motion etc.
• All the legislative proposals are to be brought in the form of Bills before the legislature.
• These can either be Government Bills or Private Members Bills.
• Government Bills are prepared and drafted by the Law Department of the State government.
Process of Passing of Bill:
There are three readings (stages) for passing a Bill.
• First Reading: The first reading means motion for leave to introduce a Bill and its adoption.
• Second Reading: The second reading consists of discussion on the principles of the Bill and clause by clause consideration.
• Third Reading: The third reading is completed when a motion for passing a Bill is adopted by the House.
• After the House passes a Bill, it is presented to the Governor/ President for assent. With such assent and its publication in the official gazette, it becomes law of the State.
• Money Bill can be introduced in State Assembly only after recommendation of the Governor and can only be introduced by a Minister.
Assent of the Governor:
Every bill, after it is passed by the assembly, is presented to the Governor. There are four options before the Governor:
• May give his assent to the Bill
• May withhold his assent
• May return the Bill to the Legislative Assembly for reconsideration.
• May reserve the Bill for the consideration of the President
• If the Governor gives his assent to the Bill then it becomes an Act.
• If the Governor withholds his assent, the Bill does not become an Act.
• If the Governor returns the Bill to the assembly and the assembly passes it again, with or without amendments, the Governor has to give his assent to the Bill.
Assent of the President:
When the Governor reserves the Bill for consideration of the President, then President can either:
• Give his assent to the bill
• Withhold his assent to the bill
• Return the bill for reconsideration of assembly
• When the Bill is returned to the assembly by the President, the assembly must reconsider it within six months. If the assembly passes the Bill again, with or without amendments, it is presented to the President once more; however, the President is not bound to give his assent this time.
Legislative Committees
• Legislative Committees can be divided into two categories: the Standing Committees and the Ad-hoc Committees.
• In the Rajasthan Legislative Assembly, there are 18 Standing Committees, of which four are financial and the rest relate to various other subjects.
The financial committees are –
• Public Accounts Committee
o Examine the Secretaries to Government on various irregularities in their Departments as pointed out in the Report of the Comptroller and Auditor General
• Public Undertakings Committee
o The Public Undertakings Committee is required to go into the functioning of the various public undertakings under its purview and to examine them on the irregularities pointed out in the Report of the Comptroller and Auditor General
• Two Estimates Committees
o Report on what economies can be effected and what improvements in organisation may be made, and also suggest alternative policies in order to bring about efficiency and economy in administration, as well as changes in the form of the budget estimates.
• The financial committees are elected on the basis of proportional representation through single transferable vote and the rest are nominated by the Speaker.
• The Chairmen of all these committees are nominated by the Speaker from among the members of these committees.
Besides the above-mentioned four financial committees, the Rajasthan Legislative Assembly has the following 17 other standing committees:
1. Committee on Subordinate Legislation
2. Committee on Welfare of Scheduled Tribes
3. Committee on Welfare of Scheduled Castes
4. Business Advisory Committee
5. House Committee
6. Rules Committee
7. Library Committee
8. Committee on Petitions
9. Committee on Privileges
10. Committee on Government Assurances
11. General Purposes Committee
12. Question & Reference Committee
13. Committee on Welfare of Women & Children
14. Committee on Welfare of Backward Classes
15. Committee on Welfare of Minorities
16. Committee on Local Bodies and Panchayat Raj Institutions
17. Committee on Environment
General features of the Committees:
• These committees are constituted from the members of the ruling as well as opposition parties generally in proportion to their strength in the House.
• The term of office of the members of the committee is generally one year.
• No minister can be a member of the committee except in the case of Select Committees on Government Bills.
• Normally, the Chairman of the Committees presents the Reports of these committees to the House but in inter-session period the Chairman may submit the Report to the Speaker.
Origin & Growth of Rajasthan Vidhan Sabha
• Though the Rajasthan Vidhan Sabha came into existence in March 1952, the people of Rajasthan had experienced some form of parliamentary democracy even under princely rule.
• Maharaja Ganga Singh of Bikaner was one such progressive ruler, who gave a House of Representatives to the people of Bikaner State in 1913.
• Maharaja Ummed Singh of Jodhpur accepted the principle of people’s participation in the administration in the 1940s and accorded his approval to the setting up of the Central and District Advisory Boards.
• In Jaipur State, a Vidhan Samiti consisting of both official and non-official members was created in 1923. Later, Maharaja Mansingh constituted a Central Advisory Board in 1939 with a view to eliciting public opinion through representatives on matters of public interest and importance.
• Under the pressure of the changed political situation in Udaipur, a Reforms Committee headed by Shri Gopal Singh was constituted in May 1946. The Committee consisted of official and non-official members, including five representatives of the Praja Mandal. The Maharana eventually agreed to the setting up of an Executive Council in October 1946, to which he appointed Shri Mohan Lal Sukhadia and Shri Hira Lal Kothari as the representatives of the Praja Mandal and Shri Raghubir Singh as the representative of the Regional Council.
• Maharaja Ishwar Singh of Bundi set up the ‘Dhara Sabha’ on 18 October, 1943. The members of the Tehsil Advisory Boards and the Town Council elected members to the ‘Sabha’. The Dhara Sabha had the power to ask questions to the Government and to adopt Resolutions on matter of Public interest.
• The Maharaja of Banswara formed a ‘Rajya Parishad’ on 3 February, 1939. All the 32 members of the Council were nominated members, which included seven employees and eight ‘Jagirdars’. The ‘Rajya Parishad’ had the power to put questions, adopt Resolutions and enforce laws with the assent of the Maharaja.
• At the time of Independence, Rajputana comprised twenty-two small and big princely states. Though these princely states were declared to have acceded to the Union of India on 15th August 1947, the process of their merger and unification was completed only in April 1949, in five phases.
• The process of the creation of the Legislative Council had started during the final phase of the formation of Rajasthan. This process continued up to the beginning of 1952.
• The First Rajasthan Legislative Assembly (1952-57) was inaugurated on 31 March 1952 and had a strength of 160 members.
Strength of Rajasthan Legislative Assembly:
• The strength was increased to 190 after the merger of the erstwhile Ajmer State with Rajasthan in 1956.
• The Second (1957-62) and Third (1962-67) Legislative Assemblies had a strength of 176.
• The Fourth (1967-72) and Fifth (1972-77) Legislative Assembly comprised 184 members each.
• The strength became 200 from the Sixth (1977-1980) Legislative Assembly onwards.
Strength & Constituencies of Rajasthan Vidhan Sabha:
• The strength of the Rajasthan Legislative Assembly, which is determined by Delimitation Commission, was 160 in 1952.
• Currently, there are a total of 200 Assembly constituencies in Rajasthan, which are represented by 200 MLAs or Members of Legislative Assembly.
• At present, 34 constituencies are reserved for the candidates belonging to the Scheduled castes and 25 are reserved for the candidates belonging to the Scheduled tribes.
15th Assembly of Rajasthan: Important Persons
Speaker: C. P. Joshi (INC)
Chief Minister: Ashok Gehlot (INC)
Deputy Chief Minister: Sachin Pilot (INC)
Leader of the Opposition: Gulab Chand Kataria (BJP)
Deputy Leader of the Opposition: Rajendra Singh Rathore (BJP)
Party-wise Members in the 15th Assembly (2018):
Indian National Congress: 107
Bharatiya Janata Party: 72
Rashtriya Loktantrik Party: 3
Communist Party of India (Marxist): 2
Bharatiya Tribal Party: 2
Rashtriya Lok Dal: 1
Total Seats: 200
In the democratic setup of India, there are three main types of Elections.
• First, election to the Parliament – MP’s
• Second, election to legislative assembly – MLA’s
• Third, election to Local Bodies – PRI’s & ULB’s
• While the first and second types of elections are conducted by the Election Commission of India (a constitutional body under Article 324), elections to local bodies are conducted by the State Election Commission (again a constitutional body).
Elections conducted by Election Commission of India
• Under Article 324(1) of the Constitution of India, the Election Commission of India, is vested with the power of superintendence, direction and control of conducting the elections to both Houses of Parliament.
• Detailed provisions are made under the Representation of the People Act, 1951 and the rules made there under this Act.
• Additionally, Article 324 (1) also vests in the Election Commission of India with the powers of superintendence, direction and control of the elections to the State Legislature.
• Detailed provisions of these elections are also made under the Representation of the People Act, 1951 and the rules made there under this Act.
• At present, the Election Commission of India is a three-member body, with one Chief Election Commissioner and two Election Commissioners.
• At State level, the Chief Electoral Officer of the State supervises the election work under the overall superintendence, direction and control of the Election Commission.
• At District level, the District Election Officer (DEO) supervises the election work of a district.
• At Constituency level (Parliamentary or Assembly), the Returning Officer of a parliamentary or assembly constituency is responsible for the conduct of elections.
• On a Polling Station within the Constituency, the Presiding Officer with the assistance of polling officers conducts the poll.
• Additionally, Under section 20B of the Representation of the People Act 1951, the Election Commission of India nominates officers of Government as Observers (General Observers and Election Expenditure Observers) for parliamentary and assembly constituencies.
• The Electoral Registration officer (ERO) is responsible for the preparation of electoral rolls for a parliamentary / assembly constituency.
Parliamentary Constituencies in Rajasthan:
Lok Sabha Constituencies:
• There are 25 Lok Sabha Constituencies in State of Rajasthan.
Rajya Sabha Seats from Rajasthan:
• There are 10 Rajya Sabha seats from Rajasthan.
• Members to Rajya Sabha are elected by the Legislative Assembly of States and Union territories by means of Single transferable vote through Proportional representation.
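A rough worked illustration of how the single transferable vote quota operates (the figures below are hypothetical, and the detailed vote-value and surplus-transfer rules used in actual Rajya Sabha elections are ignored here): the winning quota is commonly computed as quota = floor(valid votes / (seats to be filled + 1)) + 1. So if all 200 MLAs cast valid votes and 4 Rajya Sabha seats are being filled together, the quota is floor(200 / 5) + 1 = 41, meaning a candidate needs 41 votes (first preferences plus transfers) to be declared elected.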
Elections conducted by State Election Commission
• The State Election Commissions, constituted under the Constitution (Seventy-third and Seventy-fourth Amendment) Acts, 1992 for each State/Union Territory, are vested with the powers of conduct of elections to the Corporations, Municipalities, Zilla Parishads, District Panchayats, Panchayat Samitis, Gram Panchayats and other local bodies.
• It is independent of the Election Commission of India.
• The SEC is a single member Commission headed by the State Election Commissioner.
• It has a Secretary who is also the Chief Electoral Officer for the State.
• The Commission discharges its Constitutional duty by way of preparing electoral rolls and holding elections for Panchayati Raj Institutions as well as for Municipal bodies.
List of Chief Ministers of Rajasthan

| S. No. | Chief Minister | Took office | Left office | Party | Tenure |
|---|---|---|---|---|---|
| 1 | Heera Lal Shastri | 07-Apr-49 | 05-Jan-51 | Congress | 639 days |
| 2 | C. S. Venkatachari | 06-Jan-51 | 25-Apr-51 | | 110 days |
| 3 | Jai Narayan Vyas | 26-Apr-51 | 03-Mar-52 | | 313 days |
| 4 | Tika Ram Paliwal | 03-Mar-52 | 31-Oct-52 | | 243 days |
| #3 | Jai Narayan Vyas | 01-Nov-52 | 12-Nov-54 | | 742 days (total 1055 days) |
| 5 | Mohan Lal Sukhadia | 13-Nov-54 | 13-Mar-67 | | 4503 days |
| #5 | Mohan Lal Sukhadia | 26-Apr-67 | 09-Jul-71 | Congress | 1535 days (total 6380 days) |
| 6 | Barkatullah Khan | 09-Jul-71 | 11-Aug-73 | | 765 days |
| 7 | Hari Dev Joshi | 11-Aug-73 | 29-Apr-77 | | 1389 days |
| 8 | Bhairon Singh Shekhawat | 22-Jun-77 | 16-Feb-80 | Janata Party | 970 days |
| 9 | Jagannath Pahadia | 06-Jun-80 | 13-Jul-81 | Congress | 403 days |
| 10 | Shiv Charan Mathur | 14-Jul-81 | 23-Feb-85 | | 1320 days |
| 11 | Hira Lal Devpura | 23-Feb-85 | 10-Mar-85 | | 16 days |
| #7 | Hari Dev Joshi | 10-Mar-85 | 20-Jan-88 | | 1046 days |
| #10 | Shiv Charan Mathur | 20-Jan-88 | 04-Dec-89 | | 684 days (total 2004 days) |
| #7 | Hari Dev Joshi | 04-Dec-89 | 04-Mar-90 | | 91 days (total 2526 days) |
| #8 | Bhairon Singh Shekhawat | 04-Mar-90 | 15-Dec-92 | BJP | 1017 days |
| #8 | Bhairon Singh Shekhawat | 04-Dec-93 | 29-Nov-98 | BJP | 1821 days (total 3808 days) |
| 12 | Ashok Gehlot | 01-Dec-98 | 08-Dec-03 | Congress | 1834 days |
| 13 | Vasundhara Raje | 08-Dec-03 | 11-Dec-08 | BJP | 1831 days |
| #12 | Ashok Gehlot | 12-Dec-08 | 13-Dec-13 | Congress | 1822 days |
| #13 | Vasundhara Raje | 13-Dec-13 | 16-Dec-18 | BJP | 1829 days |
State Politics of Rajasthan
• The predominant political parties in Rajasthan are the Indian National Congress and Bharatiya Janata Party (BJP).
• Other parties in Rajasthan include the Communist Party of India (Marxist), Bahujan Samaj Party, Indian National Lok Dal, Janata Dal (United), Lok Jan Shakti Party and Rajasthan Samajik Nyaya Manch.
• In the post-1990 phase, electoral politics in the state of Rajasthan has been marked by routine oscillation of power between the two principal contenders: Indian National Congress (INC) and the Bhartiya Janata Party (BJP).
• The Rajasthan Vidhan Sabha or Legislative Assembly was formed in 1952 and is situated in the capital city of Jaipur. It has a total of 200 seats, of which 34 are reserved for the Scheduled Castes and 25 for the Scheduled Tribes. The tenure of the Vidhan Sabha is five years.
• The representatives belong to various political parties which are but groups of people with similar ideologies and interests. These political parties need to register with the election commission which assigns them a symbol to contest the polls with.
• A party recognised as a state party in four or more states is accorded the status of a national party, while one recognised in fewer states remains a state party.
Different phases of Political competition in Rajasthan
• Rajasthan's politics has mainly been dominated by two national parties, the Bharatiya Janata Party and the Indian National Congress. In the early years, politics was dominated by the Congress party. The main opposition parties were the Bharatiya Jana Sangh, headed by Rajasthan's most popular leader Bhairon Singh Shekhawat, and the Swatantra Party, headed by former rulers of Rajasthan. Congress rule remained unchallenged till 1962.
• In 1967, the Jana Sangh headed by Shekhawat and the Swatantra Party headed by Rajmata Gayatri Devi of Jaipur together came close to a majority, but could not form a government. In 1972, the Congress won a landslide victory following India's victory in the 1971 war. After the declaration of the Emergency, Shekhawat became immensely popular, especially after he was arrested and sent to Rohtak Jail in Haryana.
• As soon as the Emergency was lifted, the joint-opposition Janata Party won a thundering landslide victory, winning 151 of the 200 seats, and Shekhawat became the Chief Minister. The government was dismissed by Indira Gandhi in 1980 after she returned to power in Delhi. In the 1980 elections, the Janata Party split at the centre, giving the Congress a victory in Rajasthan.
• Indira Gandhi was assassinated in 1984, and in 1985 a sympathy wave let the Congress sail through the elections. In 1989, in what could be called a Shekhawat wave, the BJP-JD alliance won all 25 Lok Sabha seats and 140 of the 200 assembly seats, and Shekhawat became the Chief Minister for the second time. Though the Janata Dal later withdrew its support from the Shekhawat government, Shekhawat split the JD and continued as Chief Minister, earning the title of master manipulator.
• After the demolition of the Babri Masjid in Ayodhya, the Shekhawat government was dismissed by the Prime Minister, Narasimha Rao, and President's rule was imposed in Rajasthan. Elections took place in 1993, in which his party won even after the breaking of its alliance with the Janata Dal. The then Governor Bali Ram Bhagat initially did not allow Shekhawat to form the government, but after Shekhawat secured the support of independents and crossed the majority mark of 101 seats in the assembly, he became the Chief Minister for the third time.
• This time he ran a successful third term. This was perhaps the diamond phase for Rajasthan, as it saw all-round development and the state gained an identity on the global map as a rapidly developing and beautiful destination. Shekhawat introduced heritage, desert, rural and wildlife tourism to Rajasthan. In the 1998 elections, the BJP lost heavily due to the onion price rise issue, and Ashok Gehlot went on to run a five-year Congress government, although the Congress lost the Lok Sabha elections in 1999, only six months after its victory in the assembly elections.
• Shekhawat became the Vice-President of India in 2002, so he had to leave Rajasthan politics and the BJP. He appointed Vasundhara Raje as his successor. She led the BJP to victory in the 2003 elections and was the Chief Minister of Rajasthan from 2003 to 2008. The tables turned in December 2008, when infighting within the BJP, Raje's perceived autocratic and despotic style, and the police excesses during the Gurjar-Meena agitation combined to overcome the incumbent government's development and growth planks, and the Congress emerged victorious with the support of some independent MLAs. Ashok Gehlot was sworn in as the new Chief Minister of Rajasthan.
• In 2013, the Bharatiya Janata Party won by a very large margin, taking 163 of the 200 seats, while the Congress won only 21. Vasundhara Raje became the Chief Minister for the second time.
• In 2018, Congress again came to power, with 107 seats. Ashok Gehlot was sworn in as the Chief Minister for the third time.
Panchayati Raj in Rajasthan
- After Independence, Rajasthan was the first state to establish Panchayati Raj.
- The scheme was inaugurated by then Prime Minister Nehru on October 2, 1959 in Bagdari village of Nagaur district.
- Panchayati Raj in India aims to build democracy at grass-root level and signifies the system of rural self-government. Panchayats are an effective vehicle for people’s participation in administration, planning and democratic process and so organisation of village Panchayats has been made a Directive Principle of State Policy (Article 40).
- After the 73rd Constitutional Amendment Act of 1992, these institutions have received Constitutional status.
73RD CONSTITUTIONAL AMENDMENT ACT OF 1992
Significance of the Act:
• The act has given a practical shape to Article 40 of the Constitution which says that
‘The State shall take steps to organise village panchayats and endow them with such powers and authority as may be necessary to enable them to function as units of self-government’.
• The act gives a constitutional status to the panchayati raj institutions. It has brought them under the purview of the justiciable part of the Constitution. The state governments are under constitutional obligation to adopt the new panchayati raj system in accordance with the provisions of the act.
• Additionally, neither the formation of panchayats nor the holding of elections at regular intervals depends on the will of the state government anymore. The act transfers the representative democracy into participatory democracy. It is a revolutionary concept to build democracy at the grass-root level in the country.
Major Features of the Act:
• This act has added a new Part-IX to the Constitution of India. It is entitled 'The Panchayats' and consists of provisions from Articles 243 to 243-O. Additionally, the act has also added the Eleventh Schedule to the Constitution, which contains the 29 functional items of the panchayats.
• The provisions of the act can be grouped into two categories-Compulsory and Voluntary.
o The compulsory (mandatory or obligatory) provisions of the act have to be included in the state laws creating the new panchayati raj system.
o The voluntary provisions, on the other hand, may be included at the direction of the states.
• Gram Sabha:
o The act provides for a Gram Sabha as the foundation of the Panchayati Raj system. It is a body consisting of all the registered voters in the area of the panchayat.
o A Gram Sabha may exercise such powers and perform such functions at the village level as the Legislature of a State may, by law, provide (Article 243A).
o There shall be at least two meetings of the Gram Sabha every year.
o The quorum for a meeting of the Gram Sabha shall be one-tenth of the total number of members
• Three-Tier System:
o The act provides for a three-tier system of Panchayati Raj in every state, with Panchayats at the village, intermediate and district levels. In Rajasthan, the nomenclature used is:

| Level of Panchayat | Name used |
|---|---|
| District Panchayat | Zila Parishad |
| Intermediate Panchayat | Panchayat Samiti |
| Village Panchayat | Gram Panchayat |
• Elected members & chairpersons:
o All members of the Panchayats at village, intermediate and district level shall be elected directly by the people.
o The Sarpanch of a Gram Panchayat is elected directly by the adult voters.
o Chairpersons of panchayats at the intermediate and district levels shall be elected indirectly, by and from amongst the elected members. The manner of election of the Chairperson at the village level is decided by the State legislature.
• Elections to the Panchayats :
o The superintendence, direction and control of preparation of electoral rolls for, and the conduct of, all elections to the Panchayats shall be vested in a State Election Commission.
• Duration of Panchayats:
o Every Panchayat shall continue for five years from the date appointed for its first meeting and no longer.
• Reservations of Seats:
o Seats shall be reserved for-
(a) the Scheduled Castes and
(b) the Scheduled Tribes, in every Panchayat, in proportion to their population
o Not less than one-third of the total number of seats to be filled by direct election in every Panchayat shall be reserved for women and such seats may be allotted by rotation to different constituencies in a Panchayat.
• Exempted Areas:
o The Act did not apply to Jammu & Kashmir and certain scheduled areas in some states. However, the act provided power to Parliament to extend the Act to these scheduled areas with certain special provisions.
o Accordingly, Parliament passed "The Provisions of the Panchayats (Extension to Scheduled Areas) Act, 1996", or the PESA Act.
o Rajasthan passed its conformity legislation in accordance with PESA on 30th Sept. 1999.
• Finance Commission:
o The Finance Commission shall be constituted under Article 243-I to review the financial positions of Panchayati Raj Institutions and make recommendations to the Governor.
Panchayats in Rajasthan:
Rajasthan has a three-tier system of Panchayati Raj with
- 33 Zila Parishads (District level)
- 343 Panchayat Samities (Block level) and
- 11,152 Gram Panchayats (village level, comprising a village or a group of villages)
Additional Rules for PRI’s in Rajasthan:
• Rajasthan was the first state to impose the two-child norm as a bar to standing for elections and as a disqualification for occupying a Panchayat elected seat.
• Rajasthan is also the first State in the country to fix a minimum educational qualification for contesting elections to the Panchayati Raj Institutions.
The Assembly passed the Rajasthan Panchayati Raj (amendment) Bill, 2015, which makes Class VIII pass mandatory for the post of sarpanch — except in tribal reserved areas, where the minimum qualification is Class V — and Class X for Zila Parishad or Panchayat Samiti elections.
The amendments to Section 19 of the Rajasthan Panchayat Raj Act, 1994 also make a functional toilet mandatory in the house of a contestant.
• Fifteen States, including Rajasthan, have enacted legislation for 50% reservation of women in PRI’s.
Composition of Panchayats in Rajasthan:
Gram Panchayat:
• A Sarpanch, and directly elected Panchas from as many wards as are determined.
• The Sarpanch is assisted by a Gram Sevak and a Clerk Grade II.
Panchayat Samiti:
• Directly elected members from as many territorial constituencies as are determined.
• All members of the Legislative Assembly of the State representing constituencies which comprise wholly or partly the Panchayat Samiti area.
• Chairpersons of all the Panchayats falling within the Panchayat Samiti area.
• The Pradhan is assisted by the Block Development Officer, who has an Assistant Engineer, an Assistant Accounts Officer and a Block Primary Education Officer at his disposal.
Zila Parishad:
• Directly elected members from as many territorial constituencies as are determined.
• All members of the Lok Sabha and of the State Legislative Assembly representing constituencies which comprise wholly or partly the Zila Parishad area.
• All members of the Rajya Sabha registered as electors within the Zila Parishad area.
• Chairpersons of all Panchayat Samitis falling within the Zila Parishad area.
Eleventh schedule of Indian Constitution contains 29 functional items placed within the purview of the Panchayats:
1. Agriculture, including agricultural extension
2. Land improvement, implementation of land reforms, land consolidation and soil conservation.
3. Animal husbandry, dairying and poultry
4. Fisheries industry
5. Minor irrigation, water management and watershed development
6. Social forestry and farm forestry
7. Small-scale industries, including food processing industries
8. Minor forest produce
9. Safe water for drinking
10. Khadi, village and cottage industries
11. Rural housing
12. Fuel and fodder
13. Rural electrification, including distribution of electricity
14. Roads, culverts, bridges, ferries, waterways and other means of communication
15. Education including primary and secondary schools
16. Non-conventional sources of energy
17. Technical training and vocational education
18. Adult and non-formal education
19. Public distribution system
20. Maintenance of community assets
21. Welfare of the weaker sections of society, in particular of the Scheduled Castes and Scheduled Tribes
22. Social welfare, including welfare of the handicapped and mentally retarded
23. Family welfare
24. Women and child development
25. Markets and Fairs
26. Health and sanitation including hospitals, primary health centres and dispensaries
27. Cultural activities
28. Libraries
29. Poverty Alleviation Programmes
Implementation of PESA in Rajasthan
Rajasthan passed its conformity legislation in accordance with PESA on 30th September 1999. The details of the notified FSA/PESA areas in the State of Rajasthan are as under:
• Number of PESA District (Fully & Partly covered): 8
- PESA District (Fully covered): 3 (Banswara, Dungarpur and Pratapgarh)
- PESA District (Partly covered): 5 (Udaipur, Rajsamand, Chittorgarh, Pali and Sirohi)
Urban Local Government
• The term Urban local Government in India signifies the governance of urban area by people through elected representatives.
• There are eight types of urban local governments currently existing in India:
1. Municipal Corporation
2. Municipality
3. Notified Area Committee
4. Town Area Committee
5. Cantonment Board
6. Township
7. Port Trust
8. Special Purpose Agency
Historical Background of Urban Local Government
• The origin of Municipal Administration in India dates back to 1687 when a Municipal Corporation was set up in Madras. In 1726, Municipal Corporations were setup in Bombay and Calcutta.
• Lord Ripon issued a resolution for local self-government that continued to influence the development of local self-government in India till 1947.
o He is thus called the ‘father of local self-government in India’.
• After Independence, Rajasthan Town Municipalities Act was promulgated in 1951 by repealing the existing princely States’ municipal laws.
• Subsequently, due to reorganisation of the State of Rajasthan, all the existing municipal laws, including the Act of 1951 were replaced by the Rajasthan Municipalities Act, 1959 (Act).
• The Constitution (74th Amendment) Act, 1992 inserted new Articles 243-P to 243-ZG, providing for the State Legislature to endow municipalities with such powers and duties as relate to the 18 matters mentioned in the Twelfth Schedule.
Urban Local Government: Constitutional Provisions
• The Constitution (74th Amendment) Act, 1992 inserted new Part- IX A to the Constitution of India. It is entitled as ‘The Municipalities’ and consists of Articles 243-P to 243-ZG .
• Additionally, the act has also added the Twelfth Schedule, which contains the 18 functional items of municipalities.
Significance of the Act:
• Earlier, State Governments were free to manage their local bodies as they wished.
• The Amendment made statutory provisions for the establishment, empowerment and functioning of urban local self- governing institutions.
Salient Features of the Act:
• Three types of Municipalities: It provides for the constitution of 3 types of Municipalities depending upon the size and area namely:
o Municipal Corporation – for a larger Urban area.
o Municipal Council – for smaller Urban area
o Nagar Panchayat – (by whatever name called) for a transitional area
• Composition of Municipal Bodies:
o All seats shall be filled by direct elections from the territorial constituencies known as wards.
o The Member of the Rajasthan Legislative Assembly representing a constituency which comprises wholly or partly the area of a Municipality.
o Three persons or ten percent of the number of elected members of the Municipality, whichever is less, having special knowledge or experience in municipal administration, to be nominated by the State Government by notification in the Official Gazette
o The Member of the House of the People representing a constituency which comprises wholly or partly the area of the Municipality
• Wards Committees:
o The Act provides for the constitution of Ward Committees, consisting of one or more wards, within the territorial area of a Municipality, with a population of 3 lakhs or more.
• Reservation of seats:
o In order to provide for adequate representation of Scheduled Caste/ Scheduled Tribe (SC/ST) and of women in the Municipal Bodies, provisions have been made for reservation of seats in every Municipality.
• Duration of Municipalities:
o The Municipality has a fixed term of 5(five) years from the date appointed for its first meeting.
o The State Election Commission of Rajasthan discharges its constitutional duty by way of preparing electoral rolls and holding elections for Municipal bodies under Article 243ZA.
o The manner of election of Chairperson of Municipalities has been left to be specified by the State Legislature.
• Finance Commission:
o The Finance Commission constituted under Article 243-I to review the financial positions of Panchayati Raj Institutions shall also review the financial position of the Municipalities and will make recommendations to the Governor.
• Committee for District Planning:
o There shall be constituted in every State at the District level a District Planning Committee to consolidate the plans prepared by the Panchayats and the Municipalities in the District and to prepare a Draft Development Plan for the District as a whole.
• Metropolitan Planning Committees:
o It is provided in Article 243-ZE of the Constitution that there shall be constituted in every Metropolitan area a Metropolitan Planning Committee to prepare a Draft Development Plan for the Metropolitan area as a whole.
Major Features of the Act:
• The main provisions of this Act can be grouped under two categories – compulsory and voluntary.
• Some of the compulsory provisions which are binding on all States are:
(i) Constitution of Nagar panchayats, municipal councils and municipal corporations in small, big and very big urban areas respectively
(ii) Reservation of seats in urban local bodies for Scheduled Castes / Scheduled Tribes roughly in proportion to their population
(iii) Reservation of not less than one-third of the seats for women
(iv) The State Election Commission, constituted in order to conduct elections in the Panchayati Raj bodies (see 73rd Amendment) will also conduct elections to the urban local self- governing bodies
(v) The State Finance Commission, constituted to deal with financial affairs of the Panchayati raj bodies also looks into the financial affairs of the local urban self-governing bodies
(vi) Tenure of urban local self-governing bodies is fixed at five years and in case of earlier dissolution fresh elections are held within six months
• Some of the voluntary provisions which are not binding, but are expected to be observed by the States are:
(i) Giving voting rights to members of the Union and State Legislatures in these bodies
(ii) Providing reservation for backward classes
(iii) Giving financial powers in relation to taxes, duties, tolls and fees, etc
(iv) Making the municipal bodies autonomous and devolution of powers to these bodies to perform some or all of the functions enumerated in the Twelfth Schedule added to the Constitution through this Act and/or to prepare plans for economic development.
Urban Local Government in Rajasthan
• In Rajasthan, urban local bodies are called Municipalities, Municipal Councils and Municipal Corporations.
• There are a total of 196 Urban Local Bodies in Rajasthan:
o 10 Municipal Corporations (Nagar Nigam)
o 34 Municipal Councils (Nagar Parishad)
o 152 Municipalities (Nagar Palika)
Six new Nagar Nigams were formed:
• Jaipur Heritage and Jaipur Greater
• Jodhpur North and Jodhpur South
• Kota North and Kota South
Each Municipality has three authorities:
• The Council
o The Council is the deliberative and legislative wing.
o It consists of Councillors directly elected by the people.
o The Council is headed by a Chairperson, who presides over all meetings of the council.
• The Standing Committees
o Standing Committees are created to facilitate the working of the council.
o They deal with public works, taxation, health, finance etc.
• The Chief Executive Officer
o The CEO is responsible for day-to-day administration.
o He/She is appointed by the State government.
The list of 18 items entrusted to ULBs under the Twelfth Schedule is as follows:
1. Regulation of land use and construction of buildings
2. Urban planning, including town planning
3. Planning for economic and social development
4. Urban poverty alleviation
5. Water supply for domestic, industrial and commercial purposes
6. Fire services
7. Public health sanitation, conservancy and solid waste management
8. Slum improvement and up-gradation
9. Safeguarding the interests of the weaker sections of society, including the physically handicapped and mentally unsound
10. Urban forestry, protection of environment and promotion of ecological aspects
11. Construction of roads and bridges
12. Provision of urban amenities and facilities such as parks, gardens and playgrounds
13. Promotion of cultural, educational and aesthetic aspects
14. Burials and burial grounds, cremations and cremation grounds, and electric crematoriums
15. Cattle pounds, prevention of cruelty to animals
16. Regulation of slaughter houses and tanneries
17. Public amenities including street lighting, parking spaces, bus stops and public conveniences
18. Vital statistics including registration of births and deaths
Identity Politics in India: Caste, Religion, Language And Ethnicity
• Identity Politics has become a prominent subject in the Indian politics in the past few years. Rise of low castes, religious identities, linguistic groups and ethnic conflicts have contributed to the significance of identity politics in India.
• The discourse on identity, many scholars feel, is distinctly a modern phenomenon, primarily because emphasis on identity based on a central organising principle of ethnicity, religion, language, gender, sexual preference or caste position is seen as a sort of “compelling remedy for anonymity” in an otherwise impersonal modern world.
• Nonetheless, the concern with individual and collective identity, which simultaneously seeks to emphasise differences and to establish commonality with others similarly distinguished, has become a universal venture.
• Identity Politics is said to “signify a wide range of political activity and theorising founded in the shared experiences of injustice of members of certain social groups”.
• As a political activity it is thus considered to signify a body of political projects that attempts a “recovery from exclusion and denigration” of groups hitherto marginalised on the basis of differences based on their determining characteristics like ethnicity, gender, sexual preferences, caste positions, etc.
• Identity politics thus attempts to attain empowerment, representation and recognition of social groups by asserting the very same markers that distinguished and differentiated them from the others and utilise those markers as an assertion of selfhood and identity, based on difference rather than equality.
• The proponents of identity politics assign the primacy of some “essence” or a set of core features shared only by members of the collective and no others.
• The adherents of identity politics utilise the power of myths, cultural symbols and kinship relations to mould the feeling of shared community and subsequently politicise these aspects to claim recognition of their particular identities.
• Identity Politics as a field of study can be said to have gained intellectual legitimacy since the second half of the twentieth century, i.e., between 1950s and 1960s in the United States when large scale political movements of the second wave-feminists, Black Civil Rights, Gay and Lesbian Liberation movements and movements of various Indigenous groups were being justified and legitimated on the basis of claims about injustices done to their respective social groups.
Identity Politics in India
• In India we find that despite adoption of a liberal democratic polity after independence, communities and collective identities have remained powerful and continue to claim recognition.
• It was probably this claim for and granting of recognition of particular identities by the post-independence state of India that led many scholars to believe that a material basis for the enunciation of identity claims has been provided by the post-independent state and its structures and institutions.
• In other words, the state is seen as an “active contributor to identity politics through the creation and maintenance of state structures which define and then recognise people in terms of certain identities”.
• Caste-based discrimination and oppression have been a pernicious feature of Indian society, and in the post-independence period their imbrication with politics has not only made it possible for hitherto oppressed caste groups to be accorded political freedom and recognition but has also raised consciousness about caste's potential as political capital.
• In fact, the Mandal Commission can be considered the intellectual inspiration for transforming caste-based identity into an asset that may be used as a basis for securing political and economic gains.
• The caste system, which is based on the notions of purity and pollution, hierarchy and difference, has despite social mobility, been oppressive towards the Shudras and the outcastes who suffered the stigma of ritual impurity and lived in abject poverty, illiteracy and denial of political power.
• The origin of confrontational identity politics based on caste may be traced to the issue of providing the oppressed caste groups with state support in the form of protective discrimination. This group identity based on caste, reinforced by the emergence of political consciousness around caste identities, is institutionalised by caste-based political parties that profess to uphold and protect the interests of specific identities, including the castes.
• Consequently, we have the upper-caste-dominated BJP and the lower-caste-dominated BSP (Bahujan Samaj Party) and SP (Samajwadi Party), along with the fact that the left parties have tacitly followed the caste pattern to extract mileage in electoral politics.
• The cumulative result of the politicisation can be summarised by arguing that caste-based identity politics has had a dual role in Indian society and polity. It relatively democratised the caste-based Indian society but simultaneously undermined the evolution of class-based organisations.
• In all, caste has become an important determinant in Indian society and politics; the new lessons of organised politics and the consciousness of caste affiliations learnt by the hitherto despised caste groups have transformed the contours of Indian politics, where shifting caste-class alliances are now encountered.
• The net effect of these mobilisations along caste identities has been not only the empowerment of newly emerging groups but also an increased intensity of confrontational politics, possibly leading to a growing crisis of governability.
• Another form of identity politics is that effected through the construction of a community on the shared bond of religion.
• In India, Hinduism, Islam, Sikhism, Christianity, and Zoroastrianism are some of the major religions practised by the people.
• Numerically, the Hindus are considered to be the majority, which inspires many Hindu loyalist groups like the RSS (Rashtriya Swayamsevak Sangh) or the Shiv Sena, and political parties like the BJP (Bharatiya Janata Party) or the Hindu Mahasabha, to claim that India is a Hindu State.
• These claims are countered by other religious groups who foresee the possibility of losing autonomy of practise of their religious and cultural life under such homogenising claims.
• This initiates contestations that have often resulted in communal riots. The generally accepted myths that process the identity divide on religious lines centre on the ‘appeasement theory’, ‘forcible religious conversions’, general ‘anti-Hindu’ and thus ‘anti-India’ attitude of the minority religious groups, the ‘hegemonic aspirations’ of majority groups and ‘denial of a socio-cultural space’ to minority groups.
• Historically, the Hindu revivalist movement of the 19th century is considered to be the period that saw the demarcation of two separate cultures on religious basis—the Hindus and the Muslims that deepened further because of the partition.
• This division which has become institutionalised in the form of a communal ideology has become a major challenge for India’s secular social fabric and democratic polity.
• The rise of religion-based national assertiveness, politics of representational government, persistence of communal perceptions, and competition for the socio-economic resources are considered some of the reasons for the generation of communal ideologies and their transformation into communal disturbances.
• The rise of majoritarian assertiveness is considered to have become institutionalised after the BJP, that along with its ‘Hindu’ constituents gave political cohesiveness to a consolidating Hindu consciousness, formed a coalition ministry in March 1998.
• However, like all identity schemes the forging of a religious community glosses over internal differences within a particular religion to generate the “we are all of the same kind” emotion. Thus differences of caste groups within a homogenous Hindu identity, linguistic and sectional differences within Islam are shelved to create a homogenous unified religious identity.
• In post-independence India the majoritarian assertion has generated its own antithesis in the form of minority religions assertiveness and a resulting confrontational politics that undermines the syncretistic dimensions of the civil society in India.
• Identity claims based on the perception of a collective bound together by language may be said to have their origin in the pre-independence politics of the Congress, which had promised the reorganisation of states on a linguistic basis after independence.
• It was, however, the “JVP” (Jawaharlal Nehru, Vallabhbhai Patel and Pattabhi Sitaramayya) Committee that conceded that if public sentiment was “insistent and overwhelming”, the formation of Andhra from the Telugu-speaking region of the then Madras State could be accepted.
• Ironically, the claim of separate states for linguistic collectivities did not end in 1956 and even today continues to confront the concerns of the Indian leadership.
• But the problem has been that none of the created or claimed states are mono-ethnic in composition and some even have numerically and politically powerful minorities.
• This has resulted in a cascading set of claims that continue to threaten the territorial limits of existing states and disputes over boundaries between linguistic states have continued to stir conflicts, as for instance the simmering tensions between Maharashtra and Karnataka over the district of Belgaum or even the claims of the Nagas to parts of Manipur.
• The linguistic divisions have been complicated by the lack of a uniform language policy for the entire country. Since in each state the dominant regional language is often used as the medium of instruction and social communication, the consequent affinity and allegiance that develops towards one’s own language gets expressed even outside one’s state of origin.
• For instance, the formation of linguistic cultural and social groups outside one’s state of origin helps to consolidate the unity and sense of community in a separate linguistic society.
• Though it is generally felt that linguistic states provide freedom and autonomy for collectivities within a heterogeneous society, critics argue that linguistic states have reinforced regionalism and provided a platform for the articulation of a phenomenal number of identity claims in a country that has 1,652 ‘mother tongues’ and only twenty-two recognised languages.
• They argue that the net effect of recognition for linguistic groups has been to weaken the feeling of national unity and national spirit, in a climate where “Maharashtra for Marathis, Gujarat for Gujaratis”, etc. has reinforced linguistic mistrust and defined economic and political goods in linguistic terms.
• There are two ways in which the concept of ethnic identity is used: one, it considers the formation of identity on the basis of a single attribute (language, religion, caste, region, etc.); two, it considers the formation of identity on the basis of multiple attributes taken cumulatively.
• However, it is the second way, the formation of identity on the basis of more than one characteristic such as culture, customs, region, religion or caste, which is considered the most common way in which ethnic identity is formed.
• The relations between more than one ethnic identities can be both harmonious and conflictual.
• Whenever there is competition among ethnic identities, on a real or imaginary basis, it is expressed in the form of autonomy movements, demands for secession, or ethnic riots.
• Identity has become an important phenomenon in modern politics. The identification of the members of a group on the basis of shared attributes, such as language, gender, religion, culture or ethnicity, indicates the existence or formation of an identity.
• The mobilisation on the basis of these markers is called identity politics. Identity politics gained legitimacy in the 1950s and 1960s in the United States and Europe.
• In India, identity politics has become an important aspect of politics.
• The rise of Dalit politics, especially the BSP, and of backward-class politics following the implementation of the Mandal Commission Report; the linguistic reorganisation of Indian states from the 1950s; the rise of the BJP and the active role of organisations like the RSS; and the ethnic conflicts, insurgency and autonomy movements in several parts of the country are examples of identity politics in India.
• The democratic political system in India enables various groups to organise and assert themselves on the basis of the common attributes which they share. Identity politics has played both negative and positive roles in Indian politics.
Three-In-A-Row supports students who have begun mentally subtracting numbers less than 15. With each turn, children must consider several subtraction problems in order to be able to place their chip in a strategically beneficial spot. Since this game allows students to create their own problem, they must consider the part-whole relationship of numbers. When considering strategy, students will begin with a sum and have to work backwards; effectively thinking about the difference between two numbers. This helps to solidify the concept of subtraction, and aids in overall mathematical fluency.
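An illustrative turn, with made-up numbers rather than anything taken from the printed game boards: suppose a player wants to cover the 7 space. Working backwards from 7, the child must find a whole and a part whose difference is 7, for example 15 - 8 = 7 or 12 - 5 = 7, and can then choose whichever pairing gives the most strategically useful placement. Thinking "8 plus what makes 15?" turns the subtraction into a missing-addend question, which is exactly the part-whole reasoning described above.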
To print out game boards, rules and notes, please click here.
Common Core Standards
*Click on the category title (ex: "Counting and Cardinality") to view the entire standards of the Common Core.
Know number names and the count sequence.
- K.CC.1. Count to 100 by ones and by tens.
- K.CC.2. Count forward beginning from a given number within the known sequence (instead of having to begin at 1).
Count to tell the number of objects.
- K.CC.5. Count to answer "how many?" questions about as many as 20 things arranged in a line, a rectangular array, or a circle, or as many as 10 things in a scattered configuration; given a number from 1 to 20, count out that many objects.
- K.CC.6. Identify whether the number of objects in one group is greater than, less than, or equal to the number of objects in another group, e.g., by using matching and counting strategies.
- K.CC.7. Compare two numbers between 1 and 10 presented as written numerals.
Understand addition as putting together and adding to, and understand subtraction as taking apart and taking from.
- K.OA.1. Represent addition and subtraction with objects, fingers, mental images, drawings, sounds (e.g., claps), acting out situations, verbal explanations, expressions, or equations.
- K.OA.2. Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem.
- K.OA.3. Decompose numbers less than or equal to 10 into pairs in more than one way, e.g., by using objects or drawings, and record each decomposition by a drawing or equation (e.g., 5 = 2 + 3 and 5 = 4 + 1).
- K.OA.4. For any number from 1 to 9, find the number that makes 10 when added to the given number, e.g., by using objects or drawings, and record the answer with a drawing or equation.
- K.OA.5. Fluently add and subtract within 5.
Work with numbers 11-19 to gain foundations for place value.
- K.NBT.1. Compose and decompose numbers from 11 to 19 into ten ones and some further ones, e.g., by using objects or drawings, and record each composition or decomposition by a drawing or equation (such as 18 = 10 + 8); understand that these numbers are composed of ten ones and one, two, three, four, five, six, seven, eight, or nine ones.
Represent and solve problems involving addition and subtraction.
- 1.OA.1. Use addition and subtraction within 20 to solve word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem.
- 1.OA.2. Solve word problems that call for addition of three whole numbers whose sum is less than or equal to 20, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem.
Understand and apply properties of operations and the relationship between addition and subtraction.
- 1.OA.3. Apply properties of operations as strategies to add and subtract. Examples: If 8 + 3 = 11 is known, then 3 + 8 = 11 is also known. (Commutative property of addition.) To add 2 + 6 + 4, the second two numbers can be added to make a ten, so 2 + 6 + 4 = 2 + 10 = 12. (Associative property of addition.)
- 1.OA.4.Understand subtraction as an unknown-addend problem. For example, subtract 10 - 8 by finding the number that makes 10 when added to 8. Add and subtract within 20.
Add and subtract within 20.
- 1.OA.5. Relate counting to addition and subtraction (e.g., by counting on 2 to add 2).
- 1.OA.6. Add and subtract within 20, demonstrating fluency for addition and subtraction within 10. Use strategies such as counting on; making ten (e.g., 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14); decomposing a number leading to a ten (e.g., 13 - 4 = 13 - 3 - 1 = 10 - 1 = 9); using the relationship between addition and subtraction (e.g., knowing that 8 + 4 = 12, one knows 12 - 8 = 4); and creating equivalent but easier or known sums (e.g., adding 6 + 7 by creating the known equivalent 6 + 6 + 1 = 12 + 1 = 13).
Work with addition and subtraction equations.
- 1.OA.7. Understand the meaning of the equal sign, and determine if equations involving addition and subtraction are true or false. For example, which of the following equations are true and which are false? 6 = 6, 7 = 8 - 1, 5 + 2 = 2 + 5, 4 + 1 = 5 + 2.
- 1.OA.8. Determine the unknown whole number in an addition or subtraction equation relating three whole numbers. For example, determine the unknown number that makes the equation true in each of the equations 8 + ? = 11, 5 = _ - 3, 6 + 6 = _.
Extend the counting sequence.
- 1.NBT.1. Count to 120, starting at any number less than 120. In this range, read and write numerals and represent a number of objects with a written numeral.
Understand place value.
- 1.NBT.2. Understand that the two digits of a two-digit number represent amounts of tens and ones. Understand the following as special cases:
- 10 can be thought of as a bundle of ten ones called a ten.
- The numbers from 11 to 19 are composed of a ten and one, two, three, four, five, six, seven, eight, or nine ones.
Add and subtract within 20.
- 2.OA.2. Fluently add and subtract within 20 using mental strategies. By end of Grade 2, know from memory all sums of two one-digit numbers.
The following activities can be used to promote fruit and vegetables in the classroom during Kick Start to Crunch&Sip (term one), Fruit & Veg September or at any other time during the school year.
- Plan and write a feature article about Fruit & Veg September or healthy eating initiatives at your school for a local newspaper or your school's newsletter.
- Class debates or persuasive writing. Students write a persuasive text or conduct a debate on topics such as 'all Western Australian students should have a daily Crunch&Sip break' or 'all junk food advertising should be banned.' Students should consider their point of view and write to convince a reader or listener of their opinions.
- Create a class cookbook with healthy original recipes or family favourites. Students can explore how recipes are grouped in cookbooks and how they are structured. Include the production of the cookbook and how it can be promoted.
- Writing fruit and veg adventures. Characters such as Apple Blossom, Celerina Celery, Ben D Banana, Crazy Carrot and Squashed Orange can be developed to create an adventure story. The adventures might centre around how being nutritious can make them a hero.
- Invent, describe and illustrate a new fruit or vegetable.
- Write sensory poems to describe the look, feel, taste, smell and sound of fruit and veg, contrasting the inside and outside of a piece of fruit.
- Compile a fruit and veg crossword.
- Writing descriptions. Describe characteristics of a chosen fruit or vegetable such as colour, shape, size, similar objects, and taste. Include the food group it belongs to and other foods that are also in this food group according to the Australian Guide to Healthy Eating.
- Measurement. Using a selection of fruit and vegetables, use and respond to comparative language such as: which piece of fruit is longest/heaviest/biggest? Compare weight with volume: how many pumpkins would you need for 500g? What about 500g of lettuce leaves?
- Crunch&Sip surveys. Students collect, organise, summarise and represent data pertaining to Crunch&Sip or fruit and veg such as: what is your favourite fruit or vegetable, what our class has for our Crunch&Sip break, favourite fruits and vegetables, foods eaten at lunch time, class intake of fruit and vegetables etc. The results can be graphed and displayed and also published in the school newsletter. Surveys can be extended to include several classes, parents and the school community. This video explains data collection (for junior primary).
- Fruity maths. Using bulk common fruit such as apples or oranges, encourage students to pose mathematical questions such as: How thick is the skin of the fruit? How long does it take to eat the piece of fruit? What is the average time taken to peel? What is the average number of seeds? What is the average surface area/weight? How much does the peel cost? What percentage/fraction of the fruit is peel, seeds, juice and pulp? Conduct a class investigation to find out the answers.
- Parts and wholes. Offer a mixed fruit and veg platter at recess by cutting fruit and vegetables into halves and quarters, relating to the concept of whole and parts.
- Modify fruit and vegetable recipes so they feed a family or a class. Use price lists from supermarket websites to work out the cost of the recipe.
HEALTH & PHYSICAL EDUCATION
- Food balance online game. Help Peach and Basil get across the tightrope safely by choosing healthy meals and snacks from the five food groups.
- Discuss the importance of a balanced diet. Refer to the Australian Dietary Guidelines. Encourage students to increase their fruit and vegetable intake and decrease unhealthy snacks.
- Discuss the benefits of eating fruit and vegetables. Include questions on personal intake of fruit and vegetables and what influences their food choices. Discuss ways of increasing consumption of fruit and vegetables. Publish this information for class use, as posters or snippets in the school newsletter.
- Use a decision making model to increase consumption of fruit and vegetables by students. For example: review the situation, plan before deciding, decide and act, monitor and evaluate.
- Snack record. Over a week, set aside time each day for students to make a daily record of snacks they eat and the times they eat them. At the end of the week, have students examine their snack record and with a partner identify if they need to improve their snacking habits. If the answer is yes, ask students to think of ways to improve their snacking habits. Students can decide on a snacking goal and make a plan to achieve their goal. Emphasise that snacking goals need to be realistic and achievable. Have students check their progress towards their snacking goal every few days.
- Hygiene. Discuss the importance of washing fruit and vegetables and personal hygiene before cooking and eating. Useful interactive websites include Be a Soaper Hero and Scrub Club.
- Health promotion. Students create an action plan to promote a healthy behaviour at the school e.g. healthy lunchboxes or reducing litter.
- Monitor fruit, vegetable and water intake. Use the Crunch&Sip tally charts and ask students to reflect on their fruit/veg and water intake. Older students could set up their own spreadsheet for their intake or to collate the whole school results, or use the tally chart on a SMART Board.
- Crunch&Sip and dental health. The Western Australian Dental Health Service has a number of teacher resources relating to the benefits of eating fruit and veg and drinking water on dental health.
- Choosing drinks. Students compare the sugar and caffeine content of common drinks (juice, milk, cordial, soft drink, sports drinks and flavoured mineral waters). Research the benefits of drinking water as opposed to other drinks (the only drinks recommended to children are water and milk). Ask students to suggest ways to increase their water consumption. Check out Rethink Sugary Drinks for further information and to access posters, videos and fact sheets.
- Weigh up your lunch. Students can learn about what makes up a healthy lunchbox through this interactive online game.
- Crunch&Sip collage. Create a class collage of foods that can be eaten for Crunch&Sip.
- Print making with vegetables. Click here for details.
- Still life. Students sketch a still life of a basket of fruit or vegetables, focusing on light and shading.
- 'You are what you eat' collage. Share some images of Giuseppe Arcimboldo's paintings of fruit and vegetable faces or look at the Go for 2&5 fruit and veg characters to recreate or design new portraits. Portraits can be collage, drawn or created from fresh fruit and veg and photographed.
- Colouring in. Colour in or create an electronic Vegie Man on the Go for 2&5 website. Alternatively download and print colouring in sheets from Fresh for Kids.
- Put on a show. Perform the Crunch&Sip rap or learn 'Veggie Believer', available here.
- Lyrics swap. Using a common or popular song, students change the lyrics to promote eating fruit and vegetables.
- Listen and discuss. Play students some of the songs from The Vegetable Plot or the Formidable Vegetable Sound System. Use this as a basis for a discussion on different methods to promote healthy eating to children.
HUMANITIES & SOCIAL SCIENCES
- Research project topics could include: why fruit and vegetables are good for you, ways to get more fruit and veg into your diet, Indigenous foods, the pros and cons of agriculturalists using sprays or research a specific fruit or vegetable. Information on the vegetable industry in Australia can be found here.
- Research could be presented as a poster, mobile, rap song, booklet, radio play, PowerPoint presentation or other innovative ways.
- Identify the origins of fruit and vegetables on a world map.
- Investigate the journey of certain fruits and vegetables from paddock to plate. Information could be presented in a flow chart. Discover where food comes from and how it gets to our plate.
- Interview an older person about what they used to eat as a child.
- Investigate scurvy, how it affected sailors and how a cure was discovered.
- Dissect fruit and vegetables to identify parts. Learn about the differences between fruit and vegetables and classify by whether they grow on a tree, vine, bush or in the ground.
- Lifecycle of a fruit or vegetable e.g. the apple tree or bean. Explore plant parts through drawing.
- Grow a variety of vegetables. Plant vegetable seeds to observe and record the changes. Create a fair test for the best way to grow vegetables using variables (e.g. no sunlight, no water) and control conditions. Do this activity online here. Broad beans are great fun and fast growing. Sprouts are easy and can be eaten in a salad. Other ideas for growing include a pizza garden and companion planting.
- Investigate decomposition of fruit and vegetable scraps. Ask students what might happen to the food scraps after a few days if they are thrown in the bin. Conduct an experiment where food scraps are left for a few days in a jar. Students can record any changes to the food and answer questions such as: What happened to the food scraps? Why do you think this happened? Have small groups of students discuss ways to use food scraps.
- Cook fruit and veg (e.g. vegetable pizzas) and discuss how the food changes in colour, texture and taste when cooked and factors that influence the rate of change. Great recipe books for kids can be downloaded from Foodbank WA's Superhero Foods HQ.
- Fruit preservation. Using apple slices, design and conduct an experiment to delay the browning process. Variables to rub on the apple could include crushed vitamin C tablets, lemon juice and water. Observe differences after 1 hour.
- Floating and sinking fruit. Using oranges and a selection of other fruit, guess which fruits will float or sink in a container of water. After testing, try again with whole orange peel and flesh.
- Why do plants make fruit? Find out with this video from the ABC.
- Origins of food. Students construct a flow chart indicating the origins of the fruit or veg for their Crunch&Sip break. The flow chart may include: growing, processing, transporting, buying, eating, and recycling/composting.
- Crunch&Sip sort. Group/sort students' fruit and veg for their Crunch&Sip break according to how it is grown e.g. on a tree, on a vine, under the ground etc.
- Testing water. Students conduct a blind tasting of different types of water (bottled, tap, rain, flavoured, mineral) with another class to determine if they can identify which is which and preferences.
- Water theme. Incorporate hydration and the body's use of water into a wider water theme. Visit the Water Corporation's website for information on becoming a Waterwise School.
- Design a fruit and vegetable board game. Modify a game to include fruit and vegetables e.g. Snakes and Ladders.
- Design a healthy food vending machine for your school. Factors to consider: Include five food groups, will it have refrigerated or heated sections, does food need replacing regularly, how would you make food appealing?
- Design a structure for tomato vines and other vegetables to grow.
- Design and build garden beds.
- Design efficient packaging for Crunch&Sip snacks that will prevent bruising of fragile fruit.
- Design and make scarecrows or other structures to deter crows and other garden pests.
- Develop a marketing plan to sell a particular fruit or vegetable. Include a menu, make healthy recipes and design packaging. Check these websites for examples: Australian Avocados, Australian Pears, Australian Bananas, Potato Growers, Australian Mushroom Growers.
- Fruit and vegetables from different countries and cultures.
- Learn names of fruit and vegetables in languages other than English.
- Discuss how different cultures prepare and cook various fruit and vegetables.
- Cook in class: Cook some traditional recipes of different countries.
- A to Z of fruit & veg brainstorm. Challenge students to create a comprehensive A to Z list of fruit and vegetables. This could be completed as an individual, small group or whole class activity. Younger grades can use pictures from catalogues or magazines or their own drawings and glue on the relevant initial letter. Upper grades may have a time limit to complete their lists or in small groups, rotate through each letter of the alphabet to add to the list. Completed lists can be compared with these links: Market Fresh, Fresh for Kids and Go for 2&5
- Have a class fruit and veg tasting. Seek donations from local businesses or growers, or ask students to bring something from home for a shared Crunch&Sip snack. Encourage students to try fruit and veg that they have not tasted before or would like to taste again, such as golden kiwi fruit, persimmon, fig and guava. Children are often more likely to try new foods in group situations, as everyone else is doing it! Appropriate measures to check for food allergies should be taken.
- Complete a PMI (Plus, Minus, Interesting) chart on the topic 'eating 5 serves of vegetables and 2 serves of fruit a day' to gauge students' thoughts on eating fruit and vegetables.
- Crunch&Sip garden. Students can supplement the fruit and veg bought from home with produce grown in the school patch. Alternatively, plant a Crunch&Sip garden using this garden guide. This video from the ABC explains how to plant a vegie garden and these NSW resources are fantastic!
- Name the fruit and veg. Print off photos of a variety of fruit and vegetables and ask students to name each one. Alternatively, see if they can name all of Vegieman's parts.
- Share students' stories of how they enjoy fruit and vegetables at home and at school. Students can also discuss fruit and vegetable dishes that can be eaten for specific occasions e.g. birthday parties, religious days, sporting events.
- Cooking. Invite parents to cook with your class, focusing on fruit and vegetable based recipes. Click here for Go for 2&5 recipes.
- Fruit and vegetable 20 questions. Tell students you are thinking of a fruit or vegetable and invite them to guess what it is. Only 'yes' or 'no' answers are allowed.
- Supermarket/fruit and vegetable retailer/farm visits. Your local fresh produce store may be keen to form links with local communities and work around healthy eating campaigns. A visit would aim to develop positive links and create positive images around fruit and vegetables. Geraldton Fruit & Veg (email email@example.com) offers school tours. Farm sites offering school visits include: Landsdale Farm School and Kelmscott Senior High School Farm.
- Establish a Crunch&Sip break. Involve students in the decision making process by thinking through key questions to form an action plan: What needs to be done? What risks and barriers exist? What strategies can be used to implement changes? Who is responsible? What are the timeframes? This process aims to allow students to have ownership of the Crunch&Sip break and increase participation.
- Tally fruit and vegetable consumption for a week. Students aim for 2 serves of fruit and 5 serves of vegetables each day and reflect on their achievements.
- Fruit and vegetable sort. Collect pictures of fruit and vegetables from magazines or supermarket catalogues for students to sort according to colour, size, shape, tried/not tried, eaten cooked/raw/either etc.
- Aussie Apples have a number of curriculum activities for primary school students. Visit www.aussieapples.com.au
Oct 5, 2022
The US government wants to take its space junk out of Earth's orbit. It has taken the first legal steps to do so. Parts of US spacecraft are still in Earth's orbit. That junk threatens the safety of space missions.
Last week, the US Federal Communications Commission (FCC) said new satellites can only remain in space for five years. After that, a satellite must leave orbit. Or, it can be sent to burn up in Earth’s atmosphere.
Tens of thousands of large pieces of debris are orbiting Earth. The buildup comes from decades of rocket missions and satellites launched into space. They threaten working satellites. And the trash can be a danger to astronauts in space. Even stray debris as small as a nickel can pose dangers to spacecraft. After all, such fragments orbit at speeds of roughly 15,000 mph and could number in the millions.
Experts warn there are about 30,000 pieces of space junk that could cause issues for a mission. They say this junk could cause a disaster.
But just who regulates space litter is up in the air. Figuring out who will be responsible for identifying and removing such debris has been a big challenge. Analysts hope the FCC’s action is the first step of many to clean up space. They hope other countries will take similar actions.
“It’s about establishing rules for space and having a legal framework that people have to adhere to,” an associate professor of space engineering told NBC News. “That’s a big step.”
Photo from NASA courtesy of Unsplash.
Graphing the Rise in Earth's Carbon Dioxide
This lesson plan prompts students to graph the increase in atmospheric carbon dioxide concentrations throughout their lifetime.
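As a rough illustration of the kind of graph this lesson asks for (a minimal sketch, not part of the lesson materials), the snippet below plots approximate annual mean CO2 concentrations from the Mauna Loa record over two recent decades; the values are rounded, illustrative figures, and a classroom exercise would substitute the published NOAA/Scripps data for the years of the students' own lifetimes.

```python
# Minimal sketch: plot approximate atmospheric CO2 over a student's lifetime.
# The values are rounded annual means from the Mauna Loa record; replace them
# with the published NOAA/Scripps data for a real lesson.
import matplotlib.pyplot as plt

years = [2000, 2005, 2010, 2015, 2020]
co2_ppm = [369, 380, 390, 401, 414]  # approximate annual means, in parts per million

plt.plot(years, co2_ppm, marker="o")
plt.title("Atmospheric CO2 concentration (approximate annual means)")
plt.xlabel("Year")
plt.ylabel("CO2 (ppm)")
plt.grid(True)
plt.show()
```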
Pixels on Fire
In this lesson, students will examine how NASA remotely detects wildfires from space, how satellite data about the Earth's climate are displayed, and information about three Californian wildfires of the 21st century.
NASA Global Warming Demonstration
This video demonstrates the heat capacity difference between air and water using a flame and balloons, which teachers in the classroom can recreate.
Greater India, or the Indian cultural sphere, is an area composed of many countries and regions in South and Southeast Asia that were historically influenced by Indian culture, which itself formed from the various distinct indigenous cultures of these regions. Specifically, Southeast Asian influence on early India had lasting impacts on the formation of Hinduism and Indian mythology. Hinduism itself formed from various distinct folk religions, which merged during the Vedic period and following periods. The term Greater India as a reference to the Indian cultural sphere was popularised by a network of Bengali scholars in the 1920s. It is an umbrella term encompassing the Indian subcontinent and surrounding countries which are culturally linked through a diverse cultural cline. These countries have been transformed to varying degrees by the acceptance and induction of cultural and institutional elements from each other. Since around 500 BCE, Asia's expanding land and maritime trade had resulted in prolonged socio-economic and cultural stimulation and diffusion of Hindu and Buddhist beliefs into the region's cosmology, in particular in Southeast Asia and Sri Lanka. In Central Asia, transmission of ideas was predominantly of a religious nature. The spread of Islam significantly altered the course of the history of Greater India.
By the early centuries of the common era, most of the principalities of Southeast Asia had effectively absorbed defining aspects of Hindu culture, religion and administration. The notion of divine god-kingship was introduced by the concept of Harihara, and Sanskrit and other Indian epigraphic systems, like those of the south Indian Pallava dynasty and Chalukya dynasty, were declared official. These Indianized Kingdoms, a term coined by George Cœdès in his work Histoire ancienne des états hindouisés d'Extrême-Orient, were characterized by surprising resilience, political integrity and administrative stability.
To the north, Indian religious ideas were assimilated into the cosmology of Himalayan peoples, most profoundly in Tibet and Bhutan and merged with indigenous traditions. Buddhist monasticism extended into Afghanistan, Uzbekistan and other parts of Central Asia, and Buddhist texts and ideas were readily accepted in China and Japan in the east. To the west, Indian culture converged with Greater Persia via the Hindukush and the Pamir Mountains.
The concept of the Three Indias was in common circulation in pre-industrial Europe. Greater India was the southern part of South Asia, Lesser India was the northern part of South Asia, and Middle India was the region near the Middle East. The Portuguese form (Portuguese: India Maior) was used at least since the mid-15th century. The term, which seems to have been used with variable precision, sometimes meant only the Indian subcontinent; Europeans used a variety of terms related to South Asia to designate the South Asian peninsula, including High India, Greater India, Exterior India and India aquosa.
However, in some accounts of European nautical voyages, Greater India (or India Major) extended from the Malabar Coast (present-day Kerala) to India extra Gangem (lit. "India, beyond the Ganges," but usually the East Indies, i.e. present-day Malay Archipelago) and India Minor, from Malabar to Sind. Farther India was sometimes used to cover all of modern Southeast Asia. Until the fourteenth century, India could also mean areas along the Red Sea, including Somalia, South Arabia, and Ethiopia (e.g., Diodorus of Sicily of the first century BC says that "the Nile rises in India" and Marco Polo of the fourteenth century says that "Lesser India ... contains ... Abash [Abyssinia]").
Greater India, or the Greater India Basin, also signifies "the Indian Plate plus a postulated northern extension", the product of the India–Asia collision. Although its usage in geology pre-dates plate tectonic theory, the term has seen increased usage since the 1970s. It is not known precisely when and where the India–Asia (Indian and Eurasian Plate) convergence began, at or before 52 million years ago. The plates have converged by up to 3,600 km (2,200 mi) ± 35 km (22 mi). The upper crustal shortening documented in the geological record of Asia and the Himalaya is up to approximately 2,350 km (1,460 mi) less.
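Read as a rough arithmetic check (assuming, as these figures seem to imply, that the 2,350 km deficit is measured relative to the total convergence):

$$\text{documented crustal shortening} \approx 3{,}600\ \text{km} - 2{,}350\ \text{km} \approx 1{,}250\ \text{km}$$

The roughly 2,350 km of convergence not recorded as shortening is what the postulated northern extension of the Indian Plate, the geological "Greater India", is invoked to account for.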
Indianization is different from direct colonialism in that these Indianized lands were not inhabited by organizations or state elements from the Indian subcontinent, with exceptions such as the Chola invasions of medieval times. Instead, Indian cultural influence from trade routes and language use slowly permeated through Southeast Asia, making the traditions a part of the region. The interactions between India and Southeast Asia were marked by waves of influence and dominance. At some points, the Indian culture solely found its way into the region, and at other points, the influence was used to take over. A reason for the fast acceptance of Indian culture in Southeast Asia was that Indian culture already had striking similarities to indigenous cultures of Southeast Asia, which can be explained by earlier Southeast Asian (specifically Austroasiatic, such as early Munda and Mon Khmer groups) and Himalayan (Tibetic) cultural and linguistic influence on local Indian peoples. Several scholars, such as Professor Przyluski, Jules Bloch, and Lévi, among others, concluded that there is a significant cultural, linguistic, and political Mon-Khmer (Austroasiatic) influence on early India. Genetic evidence further found noteworthy East Asian-related ancestry among various Indian ethnic groups. The East Asian-related ancestry component forms the major ancestry among specific populations in the Himalayan foothills and Northeast India, and is generally distributed throughout the Indian subcontinent, peaking among Austroasiatic-speaking groups, as well as among Sinhalese and Bengalis.
The concept of the Indianized kingdoms, a term coined by George Coedès, describes Southeast Asian principalities that flourished from the early common era as a result of centuries of socio-economic interaction having incorporated central aspects of Indian institutions, religion, statecraft, administration, culture, epigraphy, literature and architecture.
Iron Age trade expansion caused regional geostrategic remodeling. Southeast Asia was now situated in the central area of convergence of the Indian and the East Asian maritime trade routes, the basis for economic and cultural growth. The earliest Hindu kingdoms emerged in Sumatra and Java, followed by mainland polities such as Funan and Champa. Adoption of Indian civilization elements and individual adaptation stimulated the emergence of centralized states and the development of highly organized societies. Ambitious local leaders realized the benefits of Hinduism and Indian methods of administration, culture, literature, etc. Rule in accord with universal moral principles, represented in the concept of the devaraja, was more appealing than the Chinese concept of intermediaries.
Two Indian ships from Eastern Indian coast, 1st–3rd century AD.
A Siamese painting depicting the Chola raid on Kedah
As conclusive evidence is missing, numerous Indianization theories of Southeast Asia have emerged since the early 20th century. The central question usually revolves around the main propagator of Indian institutional and cultural ideas in Southeast Asia.
One theory of the spread of Indianization focuses on the caste of Vaishya traders and their role in spreading Indian culture and language into Southeast Asia through trade. There were many trade incentives that brought Vaishya traders to Southeast Asia, the most important of which was gold. During the 4th century CE, when the first evidence of Indian traders in Southeast Asia appears, the Indian subcontinent faced a shortage of gold due to extensive control of overland trade routes by the Roman Empire. This made many Vaishya traders look to the seas to acquire new gold, which Southeast Asia had in abundance. However, the conclusion that Indianization spread through trade alone is insufficient, as Indianization permeated all classes of Southeast Asian society, not just the merchant classes.
Another theory states that Indianization spread through the warrior class of Kshatriya. This hypothesis effectively explains state formation in Southeast Asia, as these warriors came with the intention of conquering the local peoples and establishing their own political power in the region. However, this theory hasn't attracted much interest from historians as there is very little literary evidence to support it.
The most widely accepted theory for the spread of Indianization into Southeast Asia is through the class of Brahman scholars. These Brahmans brought with them many of the Hindu religious and philosophical traditions and spread them to the elite classes of Southeast Asian polities. Once these traditions were adopted by the elite classes, they disseminated throughout the lower classes, thus explaining the Indianization present in all classes of Southeast Asian society. Brahmans were also experts in art, architecture, and political affairs, thus explaining the adoption of many Indian-style law codes and architectural forms into Southeast Asian society.
Angkor Wat in Cambodia is the largest Hindu temple in the world
It is unknown how immigration, interaction, and settlement took place, whether by key figures from India or through Southeast Asians visiting India who took elements of Indian culture back home. It is likely that Hindu and Buddhist traders, priests, and princes traveled to Southeast Asia from India in the first few centuries of the Common Era and eventually settled there. A strong impulse most certainly came from the region's ruling classes, who invited Brahmans to serve at their courts as priests, astrologers and advisers. Divinity and royalty were closely connected in these polities as Hindu rituals validated the powers of the monarch. Brahmans and priests from India proper played a key role in supporting ruling dynasties through exact rituals. Dynastic consolidation was the basis for more centralized kingdoms that emerged in Java, Sumatra, Cambodia, Burma, and along the central and south coasts of Vietnam from the 4th to 8th centuries.
Art, architecture, rituals, and cultural elements such as the Rāmāyaṇa and the Mahābhārata had been adopted and customized increasingly with a regional character. The caste system, although adopted, was never applied universally and was reduced to serving a select group of nobles only. Many struggle to date and determine when Indianization in Southeast Asia occurred because of the structures and ruins found that were similar to those in India.
States such as Srivijaya, Majapahit and the Khmer empire had territorial continuity, resilient population and surplus economies that rivaled those in India itself. Borobudur in Java and Angkor in Cambodia are, apart from their grandeur, examples of a distinctly developed regional culture, style, and expression.
Southeast Asia is called Suvarnabhumi or Sovannah Phoum (the golden land) and Suvarnadvipa (the golden islands) in Sanskrit. It was frequented by traders from eastern India, particularly Kalinga. Cultural and trading relations between the powerful Chola dynasty of South India and the Southeast Asian Hindu kingdoms led the Bay of Bengal to be called "The Chola Lake", and the Chola attacks on Srivijaya in the 11th century CE are the sole example of military attacks by Indian rulers against Southeast Asia. The Pala dynasty of Bengal, which controlled the heartland of Buddhist India, maintained close economic, cultural and religious ties, particularly with Srivijaya.
Religion, authority and legitimacy
Balinese Ramayana dance drama, performed in Sarasvati Garden in Ubud.
The pre-Indic political and social systems in Southeast Asia were marked by a relative indifference towards lineage descent. Hindu god-kingship enabled rulers to supersede loyalties and forge cosmopolitan polities, and the worship of Shiva and Vishnu was combined with ancestor worship, so that Khmer, Javanese, and Cham rulers claimed semi-divine status as descendants of a god. Hindu traditions, especially the relationship to the sacrality of the land and social structures, are inherent in Hinduism's transnational features. The epic traditions of the Mahābhārata and the Rāmāyaṇa further legitimized a ruler identified with a god who battled and defeated the wrongdoers who threatened the ethical order of the world.
Hinduism does not have a single historical founder, a centralized imperial authority in India proper, nor a bureaucratic structure, thus ensuring relative religious independence for the individual ruler. It also allows for multiple forms of divinity, centered upon the Trimurti, the triad of Brahma, Vishnu, and Shiva, the deities responsible for the creation, preservation, and destruction of the universe.
Hinduism and Buddhism had a tremendous impact on the many civilizations inhabiting Southeast Asia and significantly shaped the development of their written traditions. An essential factor in the spread and adaptation of these religions was the trading network of the third and fourth centuries. In order to spread the message of these religions, Buddhist monks and Hindu priests joined the mercantile classes in the quest to share their religious and cultural values and beliefs. Along the Mekong delta, evidence of Indianized religious models can be observed in the communities labeled Funan, where the earliest records were found engraved on a rock at Vocanh. The engravings consist of Buddhist archives written in Sanskrit in a south Indian script and have been dated to the first half of the third century. Indian religion was profoundly absorbed by local cultures that formed their own distinctive variations of these structures in order to reflect their own ideals.
The Indianized kingdoms had, by the 1st to 4th centuries CE, adopted Hinduism's cosmology and rituals, the devaraja concept of kingship, and Sanskrit as official writing. Despite this fundamental cultural integration, these kingdoms were autonomous in their own right and functioned independently.
The dissipation of Indianization began shortly after the 12th century, and the Khmer kingdom was one of the first in which it set in. After Jayavarman VII, who had expanded the kingdom's territory substantially and gone to war with Champa, the Khmer political and cultural zones were overrun and overthrown in the lead-up to the fall of the Khmer kingdom. The decline changed not only many cultural and political aspects but also the spiritual realm, giving rise to a type of northern culture that began in the early 14th century and was marked by the rapid decline of the Indianized kingdoms. The decline of the Hindu kingdoms and the rise of Buddhist kingdoms led to the formation of orthodox Sinhalese Buddhism and was a key factor in the decline of Indianization; Sukhothai and Ceylon became the prominent centres of this Buddhism, which grew more popular than Hinduism.
Rise of Islam
The rise of Buddhism was not the only force bringing Indianization to an end: from the middle of the thirteenth century, Islamic influence also spread at the expense of the Hindu kingdoms. As Islam reached the traditionally Hindu kingdoms through trade, Muslim Indian merchants became active all over Southeast Asia, and the regions where Indianization had once persisted became increasingly Muslim. This Islamic influence spread to many of the trading centers across Southeast Asia, including one of the most dominant centers, Malacca, driving a widespread rise of Islamization.
Funan: Funan was a polity that encompassed the southernmost part of the Indochinese peninsula during the 1st to 6th centuries. The name Funan is not found in any texts of local origin from the period, and so is considered an exonym based on the accounts of two Chinese diplomats, Kang Tai and Zhu Ying, who sojourned there in the mid-3rd century CE. It is not known what name the people of Funan gave to their polity. Some scholars believe ancient Chinese scholars transcribed the word Funan from a word related to the Khmer word bnaṃ or vnaṃ (modern: phnoṃ, meaning "mountain"), while others thought that Funan may not be a transcription at all, rather it meant what it says in Chinese, meaning something like "Pacified South". Centered at the lower Mekong, Funan is noted as the oldest Hindu culture in this region, which suggests prolonged socio-economic interaction with India and maritime trading partners of the Indosphere. Cultural and religious ideas had reached Funan via the Indian Ocean trade route. Trade with India had commenced well before 500 BC, as Sanskrit hadn't yet replaced Pali. Funan's language has been determined to have been an early form of Khmer, and its written form was Sanskrit.
Chenla was the successor polity of Funan that existed from around the late 6th century until the early 9th century in Indochina, preceding the Khmer Empire. Like its predecessor, Chenla occupied a strategic position where the maritime trade routes of the Indosphere and the East Asian cultural sphere converged, resulting in prolonged socio-economic and cultural influence, along with the adoption of the Sanskrit epigraphic system of the south Indian Pallava dynasty and Chalukya dynasty. Chenla's first ruler Vīravarman adopted the idea of divine kingship and deployed the concept of Harihara, the syncretistic Hindu "god that embodied multiple conceptions of power". His successors continued this tradition, thus obeying the code of conduct Manusmṛti, the Laws of Manu for the Kshatriya warrior caste and conveying the idea of political and religious authority.
Langkasuka: Langkasuka (from Sanskrit langkha, "resplendent land", and sukkha, "bliss") was an ancient Hindu kingdom located on the Malay Peninsula. The kingdom, along with the Old Kedah settlement, is probably among the earliest territorial footholds founded on the Malay Peninsula. According to tradition, the founding of the kingdom happened in the 2nd century; Malay legends claim that Langkasuka was founded at Kedah, and later moved to Pattani.
Champa: The kingdoms of Champa controlled what is now south and central Vietnam. The earliest kingdom, Lâm Ấp, was described by Chinese sources around 192 CE. The dominant religion was Hinduism, and the culture was heavily influenced by India. By the late fifteenth century, the Vietnamese – proponents of the Sinosphere – had eradicated the last remaining traces of the once powerful maritime kingdom of Champa. The last surviving Chams began their diaspora in 1471, many re-settling in Khmer territory.
Kambuja: The Khmer Empire was established by the early 9th century in a mythical initiation and consecration ceremony by founder Jayavarman II at Mount Kulen (Mount Mahendra) in 802 CE. A succession of powerful sovereigns, continuing the Hindu devaraja tradition, reigned over the classical era of Khmer civilization until the 11th century. Buddhism was then introduced temporarily into royal religious practice, with discontinuities and decentralisation resulting in its subsequent removal. The royal chronology ended in the 14th century. During this period of the Khmer empire, societal functions of administration, agriculture, architecture, hydrology, logistics, urban planning, literature and the arts saw an unprecedented degree of development, refinement and accomplishment from the distinct expression of Hindu cosmology.
Sukhothai: The first Tai peoples to gain independence from the Khmer Empire and start their own kingdom in the 13th century. Sukhothai was a precursor for the Ayutthaya Kingdom and the Kingdom of Siam. Though ethnically Thai, the Sukhothai kingdom in many ways was a continuation of the Buddhist Mon-Dvaravati civilizations, as well as the neighboring Khmer Empire.
Salakanagara: Salakanagara kingdom is the first historically recorded Indianized kingdom in Western Java, established by an Indian trader after marrying a local Sundanese princess. This Kingdom existed between 130 and 362 CE.
Tarumanagara was an early Sundanese Indianized kingdom, located not far from modern Jakarta, and according to the Tugu inscription, its ruler Purnavarman apparently built a canal that changed the course of the Cakung River and drained a coastal area for agriculture and settlement. In his inscriptions, Purnavarman associated himself with Vishnu, and Brahmins ritually secured the hydraulic project.
Kalingga: Kalingga (Javanese: Karajan Kalingga) was the 6th century Indianized kingdom on the north coast of Central Java, Indonesia. It was the earliest Hindu-Buddhist kingdom in Central Java, and together with Kutai and Tarumanagara are the oldest kingdoms in Indonesian history.
Malayu was a classical Southeast Asian kingdom. The primary sources for much of the information on the kingdom are the New History of the Tang, and the memoirs of the Chinese Buddhist monk Yijing who visited in 671 CE, and states that it was "absorbed" by Srivijaya by 692 CE, but had "broken away" by the end of the eleventh century according to Chao Jukua. The exact location of the kingdom is the subject of studies among historians.
Srivijaya: From the 7th to 13th centuries Srivijaya, a maritime empire centered on the island of Sumatra in Indonesia, had adopted Mahayana and Vajrayana Buddhism under a line of rulers from Dapunta Hyang Sri Jayanasa to the Sailendras. A stronghold of Vajrayana Buddhism, Srivijaya attracted pilgrims and scholars from other parts of Asia. I Ching reports that the kingdom was home to more than a thousand Buddhist scholars. A notable Buddhist scholar of local origin, Dharmakirti, taught Buddhist philosophy in Srivijaya and Nalanda (in India), and was the teacher of Atisha. Most of the time, this Buddhist Malay empire enjoyed cordial relationship with China and the Pala Empire in Bengal, and the 860 CE Nalanda inscription records that Maharaja Balaputra dedicated a monastery at Nalanda university near Pala territory. The Srivijaya kingdom ceased to exist in the 13th century due to various factors, including the expansion of the Javanese, Singhasari, and Majapahit empires.
Tambralinga was an ancient kingdom located on the Malay Peninsula that at one time came under the influence of Srivijaya. The name had been forgotten until scholars recognized Tambralinga as Nagara Sri Dharmaraja (Nakhon Si Thammarat). Early records are scarce but its duration is estimated to range from the seventh to the fourteenth century. Tambralinga first sent tribute to the emperor of the Tang dynasty in 616 CE. In Sanskrit, Tambra means "red" and linga means "symbol", typically representing the divine energy of Shiva.
Mataram: The Mataram Kingdom flourished between the 8th and 11th centuries. It was first centered in central Java before moving later to east Java. This kingdom produced a number of Hindu-Buddhist temples in Java, including the Borobudur Buddhist mandala and the Prambanan Trimurti Hindu temple dedicated mainly to Shiva. The Sailendras were the ruling family of this kingdom at an earlier stage in central Java, before being replaced by the Isyana Dynasty.
Kadiri: In the 10th century, Mataram challenged the supremacy of Srivijaya, resulting in the destruction of the Mataram capital by Srivijaya early in the 11th century. Restored by King Airlangga (c. 1020–1050), the kingdom split on his death; the new state of Kediri, in eastern Java, became the centre of Javanese culture for the next two centuries, spreading its influence to the eastern parts of Southeast Asia. The spice trade was now becoming increasingly important, as demand from European countries grew. Before Europeans learned to keep sheep and cattle alive through the winter, they had to eat salted meat, made palatable by the addition of spices. One of the main sources was the Maluku Islands (or "Spice Islands") in Indonesia, and so Kediri became a strong trading nation.
Singhasari: In the 13th century, however, the Kediri dynasty was overthrown by a revolution, and Singhasari arose in east Java. The domains of this new state expanded under the rule of its warrior-king Kertanegara. He was killed by a prince of the previous Kediri dynasty, who then established the last great Hindu-Javanese kingdom, Majapahit. By the middle of the 14th century Majapahit controlled most of Java, Sumatra and the Malay peninsula, part of Borneo, the southern Celebes and the Moluccas. It also exerted considerable influence on the mainland.
Majapahit: The Majapahit empire, centered in East Java, succeeded the Singhasari empire and flourished in the Indonesian archipelago between the 13th and 15th centuries. Noted for their naval expansion, the Javanese spanned west–east from Lamuri in Aceh to Wanin in Papua. Majapahit was one of the last and greatest Hindu empires in Maritime Southeast Asia. Most of Balinese Hindu culture, traditions and civilisations were derived from Majapahit legacy. A large number of Majapahit nobles, priests, and artisans found their home in Bali after the decline of Majapahit to Demak Sultanate.
Galuh was an ancient Hindu kingdom in the eastern Tatar Pasundan (now west Java province and Banyumasan region of central Java province), Indonesia. It was established following the collapse of the Tarumanagara kingdom around the 7th century. Traditionally the kingdom of Galuh was associated with the eastern Priangan cultural region, around the Citanduy and Cimanuk rivers, with its territory spanning from Citarum river on the west, to the Pamali (present-day Brebes river) and Serayu rivers on the east. Its capital was located in Kawali, near present-day Ciamis city.
Sunda: The Kingdom of Sunda was a Hindu kingdom located in western Java from 669 CE to around 1579 CE, covering the area of present-day Banten, Jakarta, West Java, and the western part of Central Java. According to primary historical records, the Bujangga Manik manuscript, the eastern border of the Sunda Kingdom was the Pamali River (Ci Pamali, the present day Brebes River) and the Serayu River (Ci Sarayu) in Central Java.
The eastern regions of Afghanistan were politically considered part of India. Buddhism and Hinduism held sway over the region until the Muslim conquest. Kabul and Zabulistan, which housed Buddhism and other Indian religions, offered stiff resistance to the Muslim advance for two centuries, with the Kabul Shahi and Zunbils remaining unconquered until the Saffarid and Ghaznavid conquests. The significance of the realm of Zun and its Zunbil rulers lay in their blocking the path of the Arabs toward the Indus Valley.
According to historian André Wink, "In southern and eastern Afghanistan, the regions of Zamindawar (Zamin I Datbar or land of the justice giver, the classical Arachosia) and Zabulistan or Zabul (Jabala, Kapisha, Kia pi shi) and Kabul, the Arabs were effectively opposed for more than two centuries, from 643 to 870 AD, by the indigenous rulers the Zunbils and the related Kabul-Shahs of the dynasty which became known as the Buddhist-Shahi. With Makran and Baluchistan and much of Sindh this area can be reckoned to belong to the cultural and political frontier zone between India and Persia." He also wrote, "It is clear however that in the seventh to ninth centuries the Zunbils and their kinsmen the Kabulshahs ruled over a predominantly Indian rather than a Persianate realm. The Arab geographers, in effect, commonly speak of 'that king of al-Hind ... (who) bore the title of Zunbil."
Archaeological sites such as the 8th-century Tapa Sardar and Gardez show a blend of Buddhism with strong Shaivite iconography. Around 644 CE, the Chinese travelling monk Xuanzang made an account of Zabul (which he called by its Sanskrit name Jaguda), which he describes as mainly pagan, though also respecting Mahayana Buddhism, which although in the minority had the support of its royals. In terms of other cults, the god Śuna is described as the prime deity of the country.
The Caliph Al-Ma'mun (r. 813–833 A.D.) led the last Arab expeditions on Kabul and Zabul, after which the long-drawn conflict ended with the dissolution of the empire. The Rutbil was made to pay double the tribute to the Caliph. The king of Kabul was captured by him and converted to Islam. The last Zunbil was killed by Ya'qub bin al-Layth along with his former overlord Salih b. al-Nadr in 865. Meanwhile, the Hindu Shahi of Kabul were defeated under Mahmud of Ghazni. Indian soldiers were a part of the Ghaznavid army; Baihaki mentioned Hindu officers employed by Ma'sud. The 14th-century Muslim scholar Ibn Battuta described the Hindu Kush as meaning "slayer of Indians", because large numbers of slaves brought from India died from its treacherous weather.
Zabulistan, a historical region in southern Afghanistan roughly corresponding to the modern provinces of Zabul and Ghazni, was a collection of loose suzerainties under Hindu rulers when it fell to the Turk Shahis in the 7th century, though this suzerainty continued up to the 11th century. The Hindu kingdom of Kapisha had split up as its western part formed a separate state called the kingdom of Zabul. It was a family division because there were consanguineous and political relationships between the states of Kabul and Zabul.
The Zunbils, a royal dynasty south of the Hindu Kush in present-day southern Afghanistan region, worshiped the Zhuna, possibly a sun god connected to the Hindu god Surya and is sometimes referred to as Zoor or Zoon. He is represented with flames radiating from his head on coins. Statues were adorned with gold and used rubies for eyes. Huen Tsang calls him "sunagir". It has been linked with the Hindu god Aditya at Multan, pre-Buddhist religious and kingship practices of Tibet as well as Shaivism. His shrine lay on a sacred mountain in Zamindawar. Originally it appears to have been brought there by the Hephthalites, displacing an earlier god on the same site. Parallels have been noted with the pre-Buddhist monarchy of Tibet, next to Zoroastrian influence on its ritual. Whatever its origins, it was certainly superimposed on a mountain and on a pre-existing mountain god while merging with Shaiva doctrines of worship.
The area had been under the rule of the Turk Shahi, who took over the rule of Kabul in the seventh century and were later attacked by the Arabs. The Turk Shahi dynasty was Buddhist and was followed by a Hindu dynasty shortly before the Saffarid conquest in 870 A.D.
The Turk Shahi were a Buddhist Turkic dynasty that ruled from Kabul and Kapisa in the 7th to 9th centuries. They replaced the Nezak – the last dynasty of Bactrian rulers. Kabulistan was the heartland of the Turk Shahi domain, which at times included Zabulistan and Gandhara. The last Shahi ruler of Kabul, Lagaturman, was deposed by a Brahmin minister, possibly named Vakkadeva, in c. 850, signaling the end of the Buddhist Turk Shahi dynasty, and the beginning of the Hindu Shahi dynasty of Kabul.
Vakkadeva: According to The Mazare Sharif Inscription of the Time of the Shahi Ruler Veka, recently discovered from northern Afghanistan and reported by the Taxila Institute of Asian Civilisations, Islamabad, Veka (sic.) conquered the northern region of Afghanistan 'with eightfold forces' and ruled there. He established a Shiva temple there, which was inaugurated by Parimaha Maitya (the Great Minister). He also issued copper coins of the Elephant and Lion type with the legend Shri Vakkadeva. Nine principal issues of Bull and Horseman silver coins and only one issue of corresponding copper coins of Spalapatideva have become available. As many as five Elephant and Lion type copper coins of Shri Vakkadeva are available, and curiously the copper issues of Vakka are contemporaneous with the silver issues of Spalapati.
Kamalavarman: During the reign of Kamalavarman, the Saffarid rule weakened precipitately and ultimately Sistan became a part of the Samanid Empire. The disorder generally prevailed and the control of Zabulistan changed hands frequently. Taking advantage of the situation, the Shahis stepped up activities on their western frontier. The result was the emergence of a small Hindu power at Ghazni, supported by the Shahis. "The authorities either themselves of early date or enshrining early information mention Lawik", a Hindu, as the ruler at Ghazni, before this place was taken over by the Turkish slave governor of the Samanids.
Jayapala: With Jayapala, a new dynasty started ruling over the former Shahi kingdom of southeastern Afghanistan, and the changeover was smooth and consensual. On his coronation, Jayapala used the additional name-suffix Deva from his predecessor's dynasty in addition to the pala name-ending of his own family. (With Kabul lost during the lifetime of Jayapaladeva, his successors – Anandapala, Trilochanapala and Bhimapala – reverted to their own family pala-ending names.) Jayapala did not issue any coins in his own name. Bull and Horseman coins with the legend Samantadeva, in billon, seem to have been struck during Jayapala's reign. As the successor of Bhima, Jayapala was a Shahi monarch of the state of Kabul, which now included Punjab. Minhaj-ud-din describes Jayapala as "the greatest of the Rais of Hindustan."
From historical evidence, it appears Tokharistan (Bactria) was the only area heavily colonized by Arabs where Buddhism flourished and the only area incorporated into the Arab empire where Sanskrit studies were pursued up to the conquest. Hui'Chao, who visited around 726, mentions that the Arabs ruled it and all the inhabitants were Buddhists. Balkh's final conquest was undertaken by Qutayba ibn Muslim in 705. Among Balkh's Buddhist monasteries, the largest was Nava Vihara, later Persianized to Naw Bahara after the Islamic conquest of Balkh. It is not known how long it continued to serve as a place of worship after the conquest. Accounts of early Arabs offer contradictory narratives.
The vast area extending from modern Nuristan to Kashmir (styled "Peristan" by A. M. Cacopardo) contained a host of "Kafir" cultures and Indo-European languages that became Islamized over a long period. Earlier, it was surrounded by Buddhist areas. The Islamization of nearby Badakhshan began in the 8th century, and Peristan was completely surrounded by Muslim states in the 16th century with the Islamization of Baltistan. The Buddhist states temporarily brought literacy and state rule into the region. The decline of Buddhism resulted in the region becoming heavily isolated.
Successive waves of Pashtun immigration, before or during the 16th and 17th centuries, displaced the original Kafirs and Pashayi people from the Kunar Valley and Laghman valley, the two eastern provinces near Jalalabad, to the less fertile mountains. Before their conversion, the Kafir people of Kafiristan practiced a form of ancient Hinduism infused with locally developed accretions. They were called Kafirs due to their enduring paganism, remaining politically independent until being conquered and forcibly converted by Afghan Amir Abdul Rahman Khan in 1895–1896, while others also converted to avoid paying jizya.
In 1020–21, Sultan Mahmud of Ghazna led a campaign against Kafiristan and the people of the "pleasant valleys of Nur and Qirat", according to Gardizi. These people worshipped the lion. Mohammad Habib, however, considers they might have been worshipping the Buddha in the form of a lion (Sakya Sinha). Ramesh Chandra Majumdar states they had a Hindu temple which was destroyed by Mahmud's general.
The use of Greater India to refer to an Indian cultural sphere was popularised by a network of Bengali scholars in the 1920s who were all members of the Calcutta-based Greater India Society. The movement's early leaders included the historian R. C. Majumdar (1888–1980); the philologists Suniti Kumar Chatterji (1890–1977) and P. C. Bagchi (1898–1956), and the historians Phanindranath Bose and Kalidas Nag (1891–1966). Some of their formulations were inspired by concurrent excavations in Angkor by French archaeologists and by the writings of French Indologist Sylvain Lévi. The scholars of the society postulated a benevolent ancient Indian cultural colonisation of Southeast Asia, in stark contrast – in their view – to the Western colonialism of the early 20th century.
The term Greater India and the notion of an explicit Hindu expansion of ancient Southeast Asia have been linked to both Indian nationalism and Hindu nationalism. However, many Indian nationalists, like Jawaharlal Nehru and Rabindranath Tagore, although receptive to "an idealisation of India as a benign and uncoercive world civiliser and font of global enlightenment," stayed away from explicit "Greater India" formulations. In addition, some scholars have seen the Hindu/Buddhist acculturation in ancient Southeast Asia as "a single cultural process in which Southeast Asia was the matrix and South Asia the mediatrix." In the field of art history, especially in American writings, the term survived due to the influence of art theorist Ananda Coomaraswamy. Coomaraswamy's view of pan-Indian art history was influenced by the "Calcutta cultural nationalists."
By some accounts Greater India consists of "lands including Burma, Java, Cambodia, Bali, and the former Champa and Funan polities of present-day Vietnam," in which Indian and Hindu culture left an "imprint in the form of monuments, inscriptions and other traces of the historic "Indianizing" process." By some other accounts, many Pacific societies and "most of the Buddhist world including Ceylon, Tibet, Central Asia, and even Japan were held to fall within this web of Indianizing culture colonies". This particular usage – implying a cultural "sphere of influence" of India – was promoted by the Greater India Society, formed by a group of Bengali men of letters, and is not found before the 1920s. The term Greater India was used in historical writing in India into the 1970s.
Culture spread via the trade routes that linked India with southern Burma, central and southern Siam, the Malay peninsula and Sumatra to Java, lower Cambodia and Champa. The Pali and Sanskrit languages and the Indian script, together with Theravada and Mahayana Buddhism, Brahmanism and Hinduism, were transmitted from direct contact as well as through sacred texts and Indian literature. Southeast Asia had developed some prosperous and very powerful colonial empires that contributed to Hindu-Buddhist artistic creations and architectural developments, creations that rivaled those built in India in their sheer size, design and aesthetic achievement. Notable examples are Borobudur in Java and the Angkor monuments in Cambodia. The Srivijaya Empire to the south and the Khmer Empire to the north competed for influence in the region.
A defining characteristic of the cultural link between Southeast Asia and the Indian subcontinent was the adoption of ancient Indian Vedic/Hindu and Buddhist culture and philosophy into Myanmar, Tibet, Thailand, Indonesia, Malaya, Laos and Cambodia. Indian scripts are found in Southeast Asian islands ranging from Sumatra, Java, Bali, South Sulawesi and the Philippines. The Ramayana and the Mahabharata have had a large impact on South Asia and Southeast Asia. One of the most tangible pieces of evidence of dharmic Hindu traditions is the widespread use of the Añjali Mudrā gesture of greeting and respect. It is seen in the Indian namasté and similar gestures known throughout Southeast Asia; its cognates include the Cambodian sampeah, the Indonesian sembah, the Japanese gassho and the Thai wai.
Beyond the Himalaya and Hindukush mountains in the north, along the Silk Route, Indian influence was linked with Buddhism. Tibet and Khotan were direct heirs of Gangetic Buddhism, despite the difference in languages. Many Tibetan monks even knew Sanskrit very well. In Khotan the Ramayana circulated widely in the Khotanese language, though the narrative is slightly different from the Gangetic version. In Afghanistan, Uzbekistan and Tajikistan many Buddhist monasteries were established. These countries served as a kind of springboard for the monks who brought Indian Buddhist texts and images to China. Further north, in the Gobi Desert, statues of Ganesha and Kartikeya were found alongside Buddhist imagery in the Mogao Caves.
Hinduism is practised by the majority of Bali's population. The Cham people of Vietnam still practice Hinduism as well. Though officially Buddhist, many Thai, Khmer, and Burmese people also worship Hindu gods in a form of syncretism.
Brahmins have had a large role in spreading Hinduism in Southeast Asia. Even today many monarchies such as the royal court of Thailand still have Hindu rituals performed for the King by Hindu Brahmins.
Indians spread their religion to Southeast Asia, establishing Hindu and Buddhist cultures there. They introduced the caste system to the region, especially to Java, Bali, Madura, and Sumatra. The adopted caste system was not as strict as in India, being tempered to the local context. The two caste systems nonetheless share the premise that no one is equal within society and that everyone has his or her own place. The system also promoted the rise of highly organized central states. In this way Indian religion, political ideas, literature, mythology, and art took root in the region.
Borobudur in Central Java, Indonesia, is the world's largest Buddhist monument. It takes the shape of a giant stone mandala crowned with stupas and is believed to combine Indian-origin Buddhist ideas with the earlier megalithic tradition of the native Austronesian step pyramid.
The Batu Caves in Malaysia are one of the most popular Hindu shrines outside India. They are the focal point of the annual Thaipusam festival in Malaysia and attract over 1.5 million pilgrims, making the festival one of the largest religious gatherings in history.
A map of East, South and Southeast Asia. Red signifies current and historical (Vietnam) distribution of Chinese characters. Green signifies current and historical (Malaysia, Pakistan, the Maldives, Indonesia, the Philippines, and Vietnam) distribution of Indic scripts. Blue signifies current use of non-Sinitic and non-Indic scripts.
Scholars like Sheldon Pollock have used the term Sanskrit Cosmopolis to describe the region and argued for millennium-long cultural exchanges without necessarily involving migration of peoples or colonisation. Pollock's 2006 book The Language of the Gods in the World of Men makes a case for studying the region as comparable with Latin Europe and argues that the Sanskrit language was its unifying element.
Sanskrit inscriptions discovered from the early centuries of the Common Era are the earliest known forms of writing to have extended all the way to Southeast Asia. The language's gradual influence ultimately made it a widespread medium of expression in regions from Bangladesh to Cambodia, Malaysia and Thailand, as well as a few of the larger Indonesian islands. In addition, the alphabets used for Burmese, Thai, Lao, and Khmer are localized variations derived from Indian scripts.
The spread of Buddhism to Tibet allowed many Sanskrit texts to survive only in Tibetan translation (in the Tanjur). Buddhism was similarly introduced to China by Mahayanist missionaries sent by the Indian Emperor Ashoka mostly through translations of Buddhist Hybrid Sanskrit and Classical Sanskrit texts, and many terms were transliterated directly and added to the Chinese vocabulary.
In Southeast Asia, languages such as Thai and Lao contain many loan words from Sanskrit, as does Khmer to a lesser extent. For example, in Thai, Rāvaṇa, the legendary emperor of Sri Lanka, is called 'Thosakanth' which is derived from his Sanskrit name 'Daśakaṇṭha' ("having ten necks").
A Sanskrit loanword encountered in many Southeast Asian languages is the word bhāṣā, or spoken language, which is used to mean language in general, for example bahasa in Malay, Indonesian and Tausug, basa in Javanese, Sundanese, and Balinese, phasa in Thai and Lao, bhasa in Burmese, and phiesa in Khmer.
The use of Sanskrit was prevalent in all aspects of life, including legal purposes. Sanskrit terminology and vernacular appear in ancient courts to establish procedures structured on Indian models, such as a code of laws. The concept of legislation demonstrated through codes of law and institutions, particularly the idea of the "God King," was embraced by numerous rulers of Southeast Asia. Rulers of this period, such as the Lin-I dynasty of Vietnam, adopted the Sanskrit language and dedicated sanctuaries to the Indian divinity Shiva. Many later rulers even viewed themselves as "reincarnations or descendants" of the Hindu gods. However, once Buddhism began entering these countries, this view was eventually altered.
Southeast Asian languages are traditionally written with Indic alphabets and therefore have extra letters not pronounced in the local language, so that the original Sanskrit spelling can be preserved. An example is how the name of the late King of Thailand, Bhumibol Adulyadej, is spelled in Sanskrit as "Bhumibol" (ภูมิพล), yet is pronounced in Thai as "Phumipon" (พูมิพน) using Thai-Sanskrit pronunciation rules, since the original Sanskrit sounds do not exist in Thai.
Ruins of Ayutthaya in Thailand; Ayutthaya derives its name from the ancient Indian city of Ayodhya, which has had wide cultural significance
Suvarnabhumi is a toponym that has been historically associated with Southeast Asia. In Sanskrit, it means "The Land of Gold". Thailand's Suvarnabhumi Airport is named after this toponym.
Some Thai toponyms also often have Indian parallels or Sanskrit origin, although the spellings are adapted to the Siamese tongue, such as Ratchaburi from Raja-puri ("king's city"), and Nakhon Si Thammarat from Nagara Sri Dharmaraja.
^Mehta, Jaswant Lal (1979). Advanced Study in the History of Medieval India. Vol. I (1st ed.). Sterling Publishers. p. 31. OCLC 557595150. Modern Afghanistan was part of ancient India; the Afghans belonged to the pale of Indo-Aryan civilisation. In the eighth century, the country was known by two regional names—Kabul and Zabul. The northern part, called Kabul (or Kabulistan) was governed by a Buddhist dynasty. Its capital and the river on the banks of which it was situated, also bore the same name. Lalliya, a Brahmin minister of the last Buddhist ruler Lagaturman, deposed his master and laid the foundation of the Hindushahi dynasty in c. 865.
^Chandra, Satish (2006). Medieval India: From Sultanat to the Mughals. Har-Anand Publications. p. 41. ISBN 9788124110669. Although Afghanistan was considered an integral part of India in antiquity, and was often called "Little India" even in medieval times, politically it had not been a part of India after the downfall of the Kushan empire, followed by the defeat of the Hindu Shahis by Mahmud Ghazni.
^Stark, Miriam T.; Griffin, Bion; Phoeurn, Chuch; Ledgerwood, Judy; et al. (1999). "Results of the 1995–1996 Archaeological Field Investigations at Angkor Borei, Cambodia" (PDF). Asian Perspectives. University of Hawai'i-Manoa. 38 (1). Archived from the original (PDF) on 23 September 2015. Retrieved 5 July 2015. The development of maritime commerce and Hindu influence stimulated early state formation in polities along the coasts of mainland Southeast Asia, where passive indigenous populations embraced notions of statecraft and ideology introduced by outsiders...
^(Beazley 1910, p. 708) Quote: "Azurara's hyperbole, indeed, which celebrates the Navigator Prince as joining Orient and Occident by continual voyaging, as transporting to the extremities of the East the creations of Western industry, does not scruple to picture the people of the Greater and the Lesser India"
^(Beazley 1910, p. 708) Quote: "Among all the confusion of the various Indies in Mediaeval nomenclature, "Greater India" can usually be recognized as restricted to the "India proper" of the modern [c. 1910] world."
^(Wheatley 1982, p. 13) Quote: "Subsequently the whole area came to be identified with one of the "Three Indies," though whether India Major or Minor, Greater or Lesser, Superior or Inferior, seems often to have been a personal preference of the author concerned. When Europeans began to penetrate into Southeast Asia in earnest, they continued this tradition, attaching to various of the constituent territories such labels as Further India or Hinterindien, the East Indies, the Indian Archipelago, Insulinde, and, in acknowledgment of the presence of a competing culture, Indochina."
^Lévi, Sylvain; Przyluski, Jean; Bloch, Jules (1993). Pre-Aryan and Pre-Dravidian in India. Asian Educational Services. ISBN 978-81-206-0772-9. It has been further proved that not only linguistic but also certain cultural and political facts of ancient India, can be explained by Austroasiatic (Mon-Khmer) elements.
^Chaubey, Gyaneshwer (January 2015). "East Asian ancestry in India" (PDF). Indian Journal of Physical Anthropology and Human Genetics. 34 (2): 193–199. Here the analysis of genome wide data on Indian and East/Southeast Asian demonstrated their restricted distinctive ancestry in India mainly running along the foothills of Himalaya and northeastern part.
^Wolters, O. W. (1973). "Jayavarman II's Military Power: The Territorial Foundation of the Angkor Empire". The Journal of the Royal Asiatic Society of Great Britain and Ireland. Cambridge University Press. 105 (1): 21–30. doi:10.1017/S0035869X00130400. JSTOR 25203407.
^Abdur Rahman, Last Two Dynasties of the Shahis: "In about AD 680, the Rutbil was a brother of the Kabul Shahi. In AD 726, the ruler of Zabulistan (Rutbil) was the nephew of Kabul Shah. Obviously the Kabul Shahs and the Rutbils belonged to the same family" – pp. 46 and 79, quoting Tabri, I, 2705-6 and Fuch, von W.
^Gopal, Ram; Paliwal, KV (2005). Hindu Renaissance: Ways and Means. New Delhi, India: Hindu Writers Forum. p. 83. We may conclude with a broad survey of the Indian colonies in the Far East. For nearly fifteen hundred years, and down to a period when the Hindus had lost their independence in their own home, Hindu kings were ruling over Indo-China and the numerous islands of the Indian Archipelago, from Sumatra to New Guinea. Indian religion, Indian culture, Indian laws, and Indian government moulded the lives of the primitive races all over this wide region, and they imbibed a more elevated moral spirit and a higher intellectual taste through the religion, art, and literature of India. In short, the people were lifted to a higher plane of civilisation.
^Review by 'SKV' of The Hindu Colony of Cambodia by Phanindranath Bose [Adyar, Madras: Theosophical Publishing House 1927] in The Vedic Magazine and Gurukula Samachar 26: 1927, pp. 620–1.
^Lyne Bansat-Boudon, Roland Lardinois, and Isabelle Ratié, Sylvain Lévi (1863–1935), page 196, Brepols, 2007, ISBN 9782503524474. Quote: "The ancient Hindus of yore were not simply a spiritual people, always busy with mystical problems and never trouble themselves with the questions of 'this world'... India also has its Napoleons and Charlemagnes, its Bismarcks and Machiavellis. But the real charm of Indian history does not consist in these aspirants after universal power, but in its peaceful and benevolent Imperialism – a unique thing in the history of mankind. The colonisers of India did not go with sword and fire in their hands; they used... the weapons of their superior culture and religion... The Buddhist age has attracted special attention, and the French savants have taken much pains to investigate the splendid monuments of the Indian cultural empire in the Far East."
^Keenleyside (1982, pp. 213–214) Quote: "Starting in the 1920s under the leadership of Kalidas Nag – and continuing even after independence – a number of Indian scholars wrote extensively and rapturously about the ancient Hindu cultural expansion into and colonisation of South and Southeast Asia. They called this vast region "Greater India" – a dubious appellation for a region which to a limited degree, but with little permanence, had been influenced by Indian religion, art, architecture, literature and administrative customs. As a consequence of this renewed and extensive interest in Greater India, many Indians came to believe that the entire South and Southeast Asian region formed the cultural progeny of India; now that the sub-continent was reawakening, they felt, India would once again assert its non-political ascendancy over the area... While the idea of reviving the ancient Greater India was never officially endorsed by the Indian National Congress, it enjoyed considerable popularity in nationalist Indian circles. Indeed, Congress leaders made occasional references to Greater India while the organisation's abiding interest in the problems of overseas Indians lent indirect support to the Indian hope of restoring the alleged cultural and spiritual unity of South and Southeast Asia."
^Thapar (1968, pp. 326–330) Quote: "At another level, it was believed that the dynamics of many Asian cultures, particularly those of Southeast Asia, arose from Hindu culture, and the theory of Greater India derived sustenance from Pan-Hinduism. A curious pride was taken in the supposed imperialist past of India, as expressed in sentiments such as these: "The art of Java and Kambuja was no doubt derived from India and fostered by the Indian rulers of these colonies." (Majumdar, R. C. et al. (1950), An Advanced History of India, London: Macmillan, p. 221) This form of historical interpretation, which can perhaps best be described as being inspired by Hindu nationalism, remains an influential school of thinking in present historical writings."
^Bayley (2004, pp. 735–736) Quote:"The Greater India visions which Calcutta thinkers derived from French and other sources are still known to educated anglophone Indians, especially but not exclusively Bengalis from the generation brought up in the traditions of post-Independence Nehruvian secular nationalism. One key source of this knowledge is a warm tribute paid to Sylvain Lévi and his ideas of an expansive, civilising India by Jawaharlal Nehru himself, in his celebrated book, The Discovery of India, which was written during one of Nehru's periods of imprisonment by the British authorities, first published in 1946, and reprinted many times since.... The ideas of both Lévi and the Greater India scholars were known to Nehru through his close intellectual links with Tagore. Thus Lévi's notion of ancient Indian voyagers leaving their invisible 'imprints' throughout east and southeast Asia was for Nehru a recapitulation of Tagore's vision of nationhood, that is an idealisation of India as a benign and uncoercive world civiliser and font of global enlightenment. This was clearly a perspective which defined the Greater India phenomenon as a process of religious and spiritual tutelage, but it was not a Hindu supremacist idea of India's mission to the lands of the Trans-Gangetic Sarvabhumi or Bharat Varsha."
^Narasimhaiah (1986) Quote: "To him (Nehru), the so-called practical approach meant, in practice, shameless expediency, and so he would say, "the sooner we are not practical, the better". He rebuked a Member of Indian Parliament who sought to revive the concept of Greater India by saying that 'the honorable Member lived in the days of Bismarck; Bismarck is dead, and his politics more dead!' He would consistently plead for an idealistic approach and such power as the language wields is the creation of idealism—politics' arch enemy—which, however, liberates the leader of a national movement from narrow nationalism, thus igniting in the process a dead fact of history, in the sneer, "For him the Bastille has not fallen!" Though Nehru was not to the language born, his utterances show a remarkable capacity for introspection and sense of moral responsibility in commenting on political processes."
^Wheatley (1982, pp. 27–28) Quote: "The tide of revisionism that is currently sweeping through Southeast Asian historiography has in effect taken us back almost to the point where we have to consider reevaluating almost every text bearing on the protohistoric period and many from later times. Although this may seem a daunting proposition, it is nonetheless supremely worth attempting, for the process by which the peoples of western Southeast Asia came to think of themselves as part of Bharatavarsa (even though they had no conception of "India" as we know it) represents one of the most impressive instances of large-scale acculturation in the history of the world. Sylvain Levi was perhaps overenthusiastic when he claimed that India produced her definitive masterpieces – he was thinking of Angkor and the Borobudur – through the efforts of foreigners or on foreign soil. Those masterpieces were not strictly Indian achievements: rather were they the outcome of a Eutychian fusion of natures so melded together as to constitute a single cultural process in which Southeast Asia was the matrix and South Asia the mediatrix."
^Handy (1930, p. 364) Quote: "An equally significant movement is one that brought about among the Indian intelligentsia of Calcutta a few years ago the formation of what is known as the "Greater India Society," whose membership is open "to all serious students of the Indian cultural expansion and to all sympathizers of such studies and activities." Though still in its infancy, this organisation has already a large membership, due perhaps as much as anything else to the enthusiasm of its Secretary and Convener, Dr. Kalidas Nag, whose scholarly affiliations with the Orientalists in the University of Paris and studies in Indochina, Insulindia and beyond, have equipped him in an unusual way for the work he has chosen, namely stimulating interest in and spreading knowledge of Greater Indian culture of the past, present and future. The Society's President is Professor Jadunath Sarkar, Vice-Chancellor of Calcutta University, and its Council is made up largely of professors on the faculty of the University and members of the staff of the Calcutta Museum, as well as of Indian authors and journalists. Its activities have included illustrated lecture series at the various universities throughout India by Dr. Nag, the assembling of a research library, and the publication of monographs of which four very excellent examples have already been printed: 1) Greater India, by Kalidas Nag, M.A., D.Litt. (Paris), 2) India and China, by Prabodh Chandra Bagchi, M.A., D.Litt., 3) Indian Culture in Java and Sumatra, by Bijan Raj Chatterjee, D.Litt. (Punjab), PhD (London), and 4) India and Central Asia, by Niranjan Prasad Chakravarti, M.A., PhD(Cantab.)."
^Abraham Valentine Williams Jackson (1911), From Constantinople to the home of Omar Khayyam: travels in Transcaucasia and northern Persia for historic and literary research, The Macmillan company, ... they are now wholly substantiated by the other inscriptions.... They are all Indian, with the exception of one written in Persian... dated in the same year as the Hindu tablet over it... if actual Gabrs (i.e. Zoroastrians, or Parsis) were among the number of worshipers at the shrine, they must have kept in the background, crowded out by Hindus, because the typical features Hanway mentions are distinctly Indian, not Zoroastrian... met two Hindu Fakirs who announced themselves as 'on a pilgrimage to this Baku Jawala Ji'....
^Richard Delacy, Parvez Dewan (1998), Hindi & Urdu phrasebook, Lonely Planet, ISBN 978-0-86442-425-9, ... The Hindu calendar (Vikramaditya) is 57 years ahead of the Christian calendar. Dates in the Hindu calendar are prefixed by the word: samvat संवत ...
Caverhill, John (1767), "Some Attempts to Ascertain the Utmost Extent of the Knowledge of the Ancients in the East Indies", Philosophical Transactions, 57: 155–178, doi:10.1098/rstl.1767.0018, S2CID 186214598
Guha-Thakurta, Tapati (1992), The making of a new 'Indian' art. Artists, aesthetics and nationalism in Bengal, c. 1850–1920, Cambridge, UK: Cambridge University Press
Handy, E. S. Craighill (1930), "The Renaissance of East Indian Culture: Its Significance for the Pacific and the World", Pacific Affairs, University of British Columbia, 3 (4): 362–369, doi:10.2307/2750560, JSTOR 2750560
Keenleyside, T. A. (Summer 1982), "Nationalist Indian Attitudes Towards Asia: A Troublesome Legacy for Post-Independence Indian Foreign Policy", Pacific Affairs, University of British Columbia, 55 (2): 210–230, doi:10.2307/2757594, JSTOR 2757594
Wheatley, Paul (November 1982), "Presidential Address: India Beyond the Ganges—Desultory Reflections on the Origins of Civilisation in Southeast Asia", The Journal of Asian Studies, Association for Asian Studies, 42 (1): 13–28, doi:10.2307/2055365, JSTOR 2055365
Language variation: Papers on variation and change in the Sinosphere and in the Indosphere in honour of James A. Matisoff, David Bradley, Randy J. LaPolla and Boyd Michailovsky eds., pp. 113–144. Canberra: Pacific Linguistics.
Lokesh, Chandra, & International Academy of Indian Culture. (2000). Society and culture of Southeast Asia: Continuities and changes. New Delhi: International Academy of Indian Culture and Aditya Prakashan.
R. C. Majumdar, Study of Sanskrit in South-East Asia |
Data presentation is the process of displaying data and analysis results in a way that is understandable and interpretable to the intended audience. It’s a crucial step in the data analysis process because it allows the insights gained from the analysis to be shared and acted upon.
Data can be presented in a variety of formats, including:
- Tables: Tables are a simple and effective way to present data. They allow for precise values to be displayed and are especially useful when the audience needs to know exact figures.
- Graphs and Charts: Graphs and charts are often used when it’s helpful to visualize trends, comparisons, or relationships in the data. This includes bar graphs, line graphs, pie charts, scatter plots, and more.
- Maps: For data with a geographical component, maps can be an effective way to present data. Heat maps, choropleth maps, and geographic information system (GIS) maps are all common types of data maps.
- Infographics: Infographics combine graphics and text to present data in a visually engaging way. They are often used to present a narrative or to illustrate complex concepts or relationships.
- Dashboards: Dashboards present key data points or metrics in a way that can be quickly understood. They are often interactive, allowing the user to drill down into the data or view different aspects of the data.
- Reports: Written reports often include a combination of text, tables, and figures to present data. They typically provide a more detailed and comprehensive presentation of the data than other formats.
The choice of data presentation format depends on the nature of the data, the results of the analysis, and the needs and preferences of the audience. Good data presentation is clear, accurate, and tailored to its audience. It highlights the important findings and implications of the data without overwhelming the audience with unnecessary details or complexity.
Example of Data Presentation
Let’s consider a health department that has conducted a survey to understand the rates of various health conditions in different neighborhoods of a city.
- Tables: The department could present the raw data in a table, with each row representing a different neighborhood and each column representing a different health condition. The table could show the percentage of survey respondents in each neighborhood who reported each condition.
- Bar Graph: To highlight the differences in health conditions between neighborhoods, they could use a bar graph. Each bar could represent a neighborhood, and the height of the bar could represent the total percentage of respondents in that neighborhood who reported any health condition.
- Pie Charts: To break down the types of health conditions in each neighborhood, they could use pie charts. Each pie chart could represent a neighborhood, and each slice could represent a different health condition, with the size of the slice proportional to the percentage of respondents who reported that condition.
- Heat Map: To visualize the geographic distribution of health conditions, they could use a heat map. The map could show the city, with each neighborhood color-coded based on the total percentage of respondents who reported any health condition.
- Dashboard: To provide an interactive tool for exploring the data, they could create a dashboard. The dashboard could include the table, bar graph, pie charts, and heat map, with controls that allow users to filter and drill down into the data.
- Report: Finally, to provide a comprehensive overview of the survey results, they could write a report. The report could include the table, graphs, and maps, along with text that explains the methodology, summarizes the findings, and discusses the implications.
In this way, the health department can present the survey data in multiple formats, each highlighting different aspects of the data and catering to different audience preferences. |
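As a rough illustration of how such a presentation might be produced in practice, here is a minimal sketch using pandas and matplotlib. All neighborhood names, condition columns, figures, and the output file name are hypothetical placeholders, not values from the survey described above.

```python
# Minimal sketch: turn survey-style results into a table and a bar graph.
# Every number and name here is a made-up placeholder.
import pandas as pd
import matplotlib.pyplot as plt

# Percentage of respondents reporting each condition, by neighborhood.
data = pd.DataFrame(
    {
        "asthma": [12.1, 8.4, 15.3],
        "diabetes": [9.7, 11.2, 7.8],
        "hypertension": [22.5, 18.9, 25.1],
    },
    index=["Northside", "Riverview", "Old Town"],
)

# Table: exact figures for readers who need precise values.
print(data)

# Bar graph: compare neighborhoods on a single summary measure
# (here, the highest condition rate in each neighborhood, purely for illustration).
summary = data.max(axis=1)
ax = summary.plot(kind="bar", title="Highest reported condition rate by neighborhood")
ax.set_ylabel("% of respondents")
plt.tight_layout()
plt.savefig("health_survey_bar.png")  # hypothetical output file
```

The same DataFrame could just as easily feed the other formats listed above, such as a choropleth map or an interactive dashboard, without changing the underlying data.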
The Perseus spiral arm, the nearest spiral arm in the Milky Way outside the Sun's orbit, lies only half as far from Earth as some previous results had suggested. An international team of astronomers including scientists from the Max-Planck-Institut für Radioastronomie has recently achieved the most accurate distance measurement ever to the Perseus arm.
Figure 1: Our Milky Way galaxy as an observer located far above its plane would see it. Shown are the known spiral arms. The locations of our solar system and of W3OH are indicated.
This was done using a vast array of radio telescopes in the USA, called the Very Long Baseline Array, to observe very bright spots within clouds of gas that contain methyl alcohol in the placental material surrounding a newly formed star called W3OH.
Dr. Xu Ye, an astronomer at Shanghai Observatory now working at the Max-Planck-Institut für Radioastronomie and one of the members of the international team who made the measurements, stated that "we measured distance by the simplest and most direct method in astronomy - essentially the technique used by surveyors called triangulation." Specifically, the team used the changing vantage point of the Earth as it orbits the Sun to form one leg of a triangle. Measuring the change in apparent position of a source, they could calculate the source's distance by simple trigonometry (resulting in 6357±130 light years).
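The arithmetic behind this triangulation is straightforward. The sketch below converts an annual parallax angle into a distance; the parallax value used (about 0.51 milliarcseconds) is an assumed input chosen to land near the ~6,357-light-year figure quoted above, not a number taken from the paper itself.

```python
# Back-of-the-envelope parallax-to-distance conversion.
# The parallax value is an assumed illustrative input, not the published measurement.
parallax_mas = 0.512                  # annual parallax in milliarcseconds (assumed)
parallax_arcsec = parallax_mas / 1000.0

# For small angles, distance in parsecs is the reciprocal of the parallax in arcseconds.
distance_pc = 1.0 / parallax_arcsec
distance_ly = distance_pc * 3.26156   # light years per parsec

print(f"{distance_pc:.0f} pc, roughly {distance_ly:.0f} light years")
# -> about 1950 pc, roughly 6370 light years
```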
This result resolves the longstanding problem of the distance to this spiral arm. In the past, different methods of measuring distance have disagreed by more than a factor of 2. Prof. Karl Menten, another member of the team, states that "this confirms distances based on the apparent luminosity of young stars but disagrees with distances based on a model of the rotation of the Milky Way. The reason for the discrepancy is that young stars in the Perseus spiral arm have unexpectedly large motions."
The astronomers found that the young star is not moving in a circular orbit around the Milky Way, but deviates by 10% from circular. It is rotating more slowly and "falling" toward the center of the Milky Way. Team member Zheng Xing-Wu of Nanjing University points out that "the simplest explanation is that the cloud of gas out of which the star formed was gravitationally attracted by excess mass of material in the Perseus spiral arm."
"Studies such as ours are the first steps to accurately map the Milky Way," says Dr. Mark Reid, a member of the team from the Harvard-Smithsonian Center for Astrophysics. "We have established that the radio telescope we used, the Very Long Baseline Array, can measure distances with unprecedented accuracy--nearly a factor of 100 times better than previously accomplished." To get a feeling for this measurement one may visualize a person standing on the moon, holding a torch in his stretched-out hand. Let her turn around herself like an ice scater, but only making a single turn in the course of one year. The VLBA measurement is equivalent to measuring the torch's motion with an accuracy comparable to the torch's size.
The technique used is Very Long Baseline Interferometry (VLBI), where observations made with many telescopes are combined to achieve the resolution of an extraordinarily large telescope nearly the size of the Earth. The VLBA telescopes stretch from Hawaii over the continental United States to the Virgin Island of St. Croix, producing the resolution of an 8000 km diameter telescope. While the VLBA has extremely high resolution, it requires extremely bright and very compact radio sources such as masers for such measurements (a maser is the microwave equivalent of a laser). Along with water, methanol is the most widespread maser molecule found in star-forming regions. The methanol spectral line used for the present experiment was discovered in the course of Prof. Menten's dissertation in the 1980s. In 1988, working with Dr. Reid, he conducted the first VLBI observations of methanol masers; the target then was also W3OH. "Already then we dreamt of observations such as this one," says Menten.
In fact similar VLBA observations have also been made on water masers in W3OH. This effort, led by the MPIfR's Kazuya Hachisuka, yielded a distance similar to the methanol masers. "A splendid confirmation!" says Hachisuka. His team also includes Reid and Menten and a number of Japanese scientists.
The methanol observations are only the start of a very large-scale project that Reid and Menten have initiated. It will determine distances and motions of methanol masers all over the Milky Way. It has been granted a large block of VLBA observing time. In addition to the motions on the sky these observations also yield the star's velocity toward or away from the observer by measuring the Doppler shift of the methanol lines. The resulting three dimensional motions will deliver unique constraints not only on the rotation of the Milky Way but also on the distribution of the unseen Dark Matter that is postulated to surround it.
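To illustrate the Doppler measurement mentioned here, a minimal sketch follows. The rest frequency is approximately that of the 12.2 GHz methanol maser transition commonly used in this kind of work; the "observed" frequency is an invented example value, not a measurement from the project.

```python
# Minimal sketch of deriving a line-of-sight velocity from a Doppler shift.
# The observed frequency below is an invented example, not a real measurement.
c_km_s = 299_792.458          # speed of light in km/s

rest_freq_mhz = 12_178.6      # ~12.2 GHz methanol maser line (approximate rest frequency)
observed_freq_mhz = 12_180.0  # hypothetical observed frequency

# Radio-convention Doppler formula (non-relativistic); positive = receding.
v_radial = c_km_s * (rest_freq_mhz - observed_freq_mhz) / rest_freq_mhz
print(f"radial velocity: {v_radial:.1f} km/s")   # negative here, i.e. approaching us
```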
While the method - simple trigonometry - sounds basic, the transformation into practical results requires a comprehensive understanding of the VLBA and of all aspects of the observations, including thorough modeling of the Earth's atmosphere, which affects the incoming radio waves. Dr. Reid has dedicated many years of his life to reaching the point where programs such as this one can be performed.
Original Paper: "The Distance to the Perseus Spiral Arm in the Milky Way", Y. Xu, M. J. Reid, X. W. Zheng, K. M. Menten (Science, published online December 8, 2005).
|
Multiplication and Division Practice Unit.
Please click the images below for more information. Multiplication and Division Practice Unit homeschool math lessons are designed to help parents teach students multiplication and division through examples and practice problems. Go to Class Lessons and download the lesson plan and the first lesson for either multiplication or division.
Start with the Day 1 assignment. Follow the instructions each day on the lesson plan and check them off when completed. These Multiplication and Division Practice Unit homeschool math lessons use an area model, or box model, to teach multiplication. This approach is an excellent way to build conceptual understanding that the standard algorithm for multiplication does not provide.
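For anyone unfamiliar with the approach, here is a tiny sketch of the idea behind the area (box) model; the numbers are arbitrary examples, not problems taken from the unit.

```python
# The area ("box") model: split each factor by place value, multiply the parts,
# and add the partial products. Example numbers are arbitrary.
a_parts = [20, 3]    # 23 = 20 + 3
b_parts = [40, 7]    # 47 = 40 + 7

partial_products = [x * y for x in a_parts for y in b_parts]
print(partial_products)        # [800, 140, 120, 21]
print(sum(partial_products))   # 1081
print(23 * 47)                 # 1081 -- the same answer as the standard algorithm
```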
Multiplication and Division Practice Unit homeschool math problems intentionally follow the same pattern to allow students time to master each problem type. Every lesson builds on the lesson before with a new level of complexity.
In addition to building computational skills, the Multiplication and Division Practice Unit also sprinkles in lessons on how to interpret the remainder in real-life situations.
The table shows the monthly revenue of a business rising exponentially since it opened an online store.
When the bacteria population reaches12 hours have passed since the colony was placed on the petri dish. Three hours after the colony is placed on the petri dish, there are about bacteria in the colony. Between 8 a.
Writing Neutralization Reactions, Part 1
Show your reasoning. A piece of paper has area How many times does it need to be folded in half before the area is less than 1 square inch? Explain how you know. The area covered by an invasive tropical plant triples every year.
By what factor does the area covered by the plant increase every month? Lesson 5 Changes Over Rational Intervals. Problem 1. Find the monthly revenue 1 month after the online store opened. Record the value in the table.
Explain your reasoning. Problem 2. Select all statements that are true about the bacteria population. A: When the bacteria population reaches12 hours have passed since the colony was placed on the petri dish. B: Three hours after the colony is placed on the petri dish, there are bacteria. C: Three hours after the colony is placed on the petri dish, there are about bacteria in the colony.
Decompose the rectangle along the diagonal, and recompose the two pieces to make a different shape. How does the area of this new shape compare to the area of the original rectangle? Explain how you know. Priya decomposed a square into 16 smaller, equal-size squares and then cut out 4 of the small squares and attached them around the outside of the original square to make a new figure. The first is a square comprised of 16 small squares arranged in four rows of 4.
The second image is a copy of the first image, but it has the center four squares removed and a square added to the outside of each side of the square. The area of the square is 1 square unit. Select all that apply. Figure A is composed of three small triangles, figure B is composed of three small triangles in a different arrangement, figure C is composed of one medium triangle and one small triangle, and figure D is composed of two small triangles and one square.
The area of a rectangular playground is 78 square meters.
If the length of the playground is 13 meters, what is its width? The sides on top measure 10 units, 35 units, and 15 units. Two of the three sides on the left measure 10 units. One of the two sides on the right measures 10 units. One of the two sides on the bottom measures 15 units. The total width of the figure is 60 units, and the total height is 30 units.
All angles are right angles. Lesson Practice. Problem 1. The diagonal of a rectangle is shown. In this lesson students will review how to write formulas for ionic compounds and how to balance chemical equations. These topics are essential to the new material in this lesson, which is learning how to write balanced chemical equations for neutralization reactions.
This lesson aligns to the NGSS Practices of the Scientist of Developing and Using Models because the balanced chemical reaction is a written depiction of a chemical reaction. The process of using the equations to show what is happening in the test tube introduces a level of complexity beyond what students would observe from seeing color changes or even pH changes.
It aligns to the NGSS Crosscutting Concept of Cause and Effect in the sense that balanced chemical equations for neutralization reactions are best understood by examining the smaller scale mechanisms within the system—in this case the formation of a salt and water. In terms of prior knowledge or skills, students have seen in a previous lesson that mixing an acid and a base together produces a chemical reaction.
They have also learned about differences between acids and bases in two earlier lessons. I reason that this is a good way to begin class because it introduces students to the material for today. The textbook is set up so that every student can extract this information, even if they do not completely understand it. By getting this early exposure, students will have begun thinking about neutralization reactions before I give my lesson on the subject; it is my hope that they will have a little momentum going into the lesson.
After I take attendance, I walk around the room to see how students are doing, and they confirm my expectations. Activator: After students have had a chance to do this assignment, I ask a student to show their answers to the class. The salt is the NaCl.
Mini-lesson: My first challenge is to help students remember how to write salts. I ask them to look at pages in their textbook. The first thing I ask them to look at is the idea of balancing the charge, as shown in the middle of the page: the total positive charge has to be equal and opposite to the total negative charge.
I point out that the way to achieve this balancing of charge is to add subscripts. I then remind students that individual atoms can have a charge, and I remind them that the different groups on the periodic table provide us with a hint about what the charge is. I then point out that page shows how to deal with polyatomic ions.
I note that these are groups of atoms that have a charge when they are bonded together, and that there is a list on the page. They do not have to memorize the list, but they do need to know how to use the list to find the charge of polyatomic ions. This instructional choice reflects my desire to slowly and methodically build up to the skill of writing neutralization reactions. Being able to write the chemical formulas for salts seems like a good first step.
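As an aside, the charge-balancing rule described above can be captured in a few lines of code. The sketch below is only an illustration of the rule with arbitrary example ions; it is not an exercise from the textbook pages referenced.

```python
# The "balance the charges" rule for writing an ionic formula:
# use enough of each ion that the total positive and negative charges cancel.
from math import gcd

def subscripts(cation_charge: int, anion_charge: int) -> tuple[int, int]:
    """Return (number of cations, number of anions) giving a neutral compound."""
    lcm = abs(cation_charge * anion_charge) // gcd(abs(cation_charge), abs(anion_charge))
    return lcm // abs(cation_charge), lcm // abs(anion_charge)

print(subscripts(+3, -2))  # Al(3+) with SO4(2-) -> (2, 3), i.e. Al2(SO4)3
print(subscripts(+1, -1))  # Na(+) with Cl(-)    -> (1, 1), i.e. NaCl
```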
Guided Practice: I ask students to write the salts for problems from the Ionic Bonding Practice problems. I then show the class the answers using the Ionic Bonding Practice answer key.
Most students met with success on this task, and so I release them to finish these practice problems. While they are working I walk around and answer students' questions, help them stay on task, and look for common mistakes. I want students doing this work so that they have a chance to practice the skill that they have just learned.
Catch and Release Opportunities: The one common mistake I see from walking around is that students have forgotten how to use the periodic table to determine the charge of a monatomic ion.The goal is to recall some features of exponential change, such as:. In addition, students briefly explore the meaning of an exponential function at a non-whole number input and how they could determine the value of the function for that input. In future lessons, students will focus on making sense of the meaning of rational inputs in other contexts before using the principle that exponential functions change by equal amounts over equal intervals to calculate things like growth factors over different intervals of time.Uicollectionview update cell
Students may represent exponential changes in different ways. They reason abstractly and quantitatively by using descriptions to write expressions, create a table, or make a graph in order to answer questions about a situation (MP2).
They may also use expressions to capture regularity in repeated reasoning (MP8). This work will support students throughout the unit, as they deepen their knowledge of exponential functions and extend it to include any type of rational input, with an emphasis on non-whole number input, later in the unit.
We recommend making technology available (MP5). In particular, provide students access to calculators that can process exponential expressions for all lessons in this unit.
Lesson 1: Growing and Shrinking. Lesson Narrative. The goal is to recall some features of exponential change, such as:
- Exponential change involves repeatedly multiplying a quantity by the same factor, rather than adding the same amount.
- Exponential growth happens when the factor is greater than 1, and exponential decay happens when the factor is less than 1.
- A quantity that grows exponentially may appear to increase slowly at first but then increases very rapidly later.
Learning Goals (Teacher Facing): Compare and contrast orally exponential growth and decay. Student Facing. Required Materials. Required Preparation. Learning Targets: I understand how to calculate values that are changing exponentially.
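To make the growth-versus-decay distinction concrete, here is a tiny sketch of repeated multiplication by a constant factor; the starting value and factors are arbitrary illustrative choices, not numbers from the unit.

```python
# Repeated multiplication by a constant factor: growth if factor > 1, decay if factor < 1.
start = 100
growth_factor = 1.5
decay_factor = 0.5

growth = [start * growth_factor ** n for n in range(5)]
decay = [start * decay_factor ** n for n in range(5)]

print(growth)  # [100.0, 150.0, 225.0, 337.5, 506.25]
print(decay)   # [100.0, 50.0, 25.0, 12.5, 6.25]
```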
Unit 4: Practice Problem Sets
Print Formatted Materials.Give two examples of dimensions for rectangles that could be scaled versions of this rectangle. One rectangle measures 2 units by 7 units. A second rectangle measures 11 units by 37 units. Are these two figures scaled versions of each other? If so, find the scale factor. If not, briefly explain why. Ants have 6 legs. Do you agree with either of the equations?
Explain your reasoning.
The height of the model train is millimeters. What is the actual height of the train in meters? The state of Wyoming has old railroad tracks that are 4. Can the modern train travel on those tracks? Which meat is the least expensive per pound? Which meat is the most expensive per pound?
Explain how you know. Jada has a scale map of Kansas that fits on a page in her book. The page is 5 inches by 8 inches. Kansas is about miles by miles. Select all scales that could be a scale of the map. There are 2.
Unit 2: Practice Problem Sets
At that rate, in how many days will the ant farm consume 3 apples? How much green paint should be mixed with 4 cups of black paint to make jasper green? What is the area of this circle? Suppose Quadrilaterals A and B are both squares. Are A and B necessarily scale copies of one another? Elena walked 12 miles. How far did she walk all together? Select all that apply. Jada is making circular birthday invitations for her friends. The diameter of the circle is 12 cm. She bought cm of ribbon to glue around the edge of each invitation.
How many invitations can she make? At the beginning of the month, there were 80 ounces of peanut butter in the pantry. Since then, the family ate 0. How many ounces of peanut butter are in the pantry now? A person's resting heart rate is typically between 60 and beats per minute.
Noah looks at his watch, and counts 8 heartbeats in 10 seconds. Then determine the percent increase or decrease. Find the measures of the following angles. Grade 6 Unit 4, Lesson 1 Practice Problem Review
Explain your reasoning. Measure the longest side of each of the three triangles. What do you notice?
Clare says the two triangles are congruent, because their angle measures are the same. Do you agree? Explain how you know. Describe a sequence of translations, rotations, and reflections that takes Polygon P to Polygon Q. What are the measures of the other two angles?
Each diagram has a pair of figures, one larger than the other. For each pair, show that the two figures are similar by identifying a sequence of translations, rotations, reflections, and dilations that takes the smaller figure to the larger one. For each pair, describe a point and a scale factor to use for a dilation moving the larger triangle to the smaller one. Use a measurement tool to find the scale factor.
Explain why they are similar. Draw two polygons that are not similar but could be mistaken for being similar. Explain why they are not similar. These two triangles are similar. Note: the two figures are not drawn to scale.
In each pair, some of the angles of two triangles in degrees are given. Use the information to decide if the triangles are similar or not.
|
Mercury is the smallest planet in our solar system and nearest to the Sun. It is only slightly larger than Earth’s Moon.
From the surface of Mercury, the Sun would appear more than three times as large as it does when viewed from Earth. The sunlight would be as much as seven times brighter. Despite its proximity to the Sun, Mercury is not the hottest planet in our solar system.
Because of Mercury’s elliptical – egg-shaped – orbit, and sluggish rotation, the Sun appears to rise briefly, set, and rise again from some parts of the planet’s surface. The same thing happens in reverse at sunset.
Are you curious to know more about planet Mercury? Read the article below by Pritish Kumar Halder to enhance your knowledge:
Mercury’s surface temperatures are both extremely hot and cold. Because the planet is so close to the Sun, day temperatures can reach highs of 800°F (430°C). Without an atmosphere to retain that heat at night, temperatures can dip as low as -290°F (-180°C).
Despite its proximity to the Sun, Mercury is not the hottest planet in our solar system – that title belongs to nearby Venus, thanks to its dense atmosphere. But Mercury is the fastest planet, zipping around the Sun every 88 Earth days.
Size and Distance
With a radius of 1,516 miles (2,440 kilometers), Mercury is a little more than 1/3 the width of Earth. If Earth were the size of a nickel, Mercury would be about as big as a blueberry.
From an average distance of 36 million miles (58 million kilometers), Mercury is 0.4 astronomical units away from the Sun. One astronomical unit (abbreviated as AU), is the distance from the Sun to Earth. From this distance, it takes sunlight 3.2 minutes to travel from the Sun to Mercury.
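The 3.2-minute figure follows directly from that distance; a quick sketch of the arithmetic, using the approximate average distance quoted above:

```python
# Light travel time from the Sun to Mercury at its average distance.
# The distance is the approximate value given in the text.
speed_of_light_km_s = 299_792.458
avg_distance_km = 58_000_000          # ~36 million miles

travel_time_min = avg_distance_km / speed_of_light_km_s / 60
print(f"{travel_time_min:.1f} minutes")   # -> 3.2 minutes
```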
Mercury’s environment is not conducive to life as we know it. The temperatures and solar radiation that characterize this planet are most likely too extreme for organisms to adapt to.
Orbit and Rotation
Mercury’s highly eccentric, egg-shaped orbit takes the planet as close as 29 million miles (47 million kilometers) and as far as 43 million miles (70 million kilometers) from the Sun. It speeds around the Sun every 88 days, traveling through space at nearly 29 miles (47 kilometers) per second, faster than any other planet.
Mercury spins slowly on its axis and completes one rotation every 59 Earth days. But when Mercury is moving fastest in its elliptical orbit around the Sun (when it is closest to the Sun), each rotation is not accompanied by sunrise and sunset like it is on most other planets. The morning Sun appears to rise briefly, set, and rise again from some parts of the planet's surface. The same thing happens in reverse at sunset for other parts of the surface. One Mercury solar day (one full day-night cycle) equals 176 Earth days – just over two years on Mercury.
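The 176-day solar day can be checked from the rotation and orbital periods. The sketch below uses slightly more precise period values than the rounded figures in the text (approximate values, assumed here) and assumes prograde rotation.

```python
# Length of Mercury's solar day from its sidereal rotation and orbital periods.
# Period values are approximate assumptions; the formula assumes prograde rotation.
rotation_days = 58.646   # sidereal rotation period, in Earth days
orbit_days = 87.969      # orbital period, in Earth days

solar_day = (rotation_days * orbit_days) / (orbit_days - rotation_days)
print(f"{solar_day:.0f} Earth days")   # -> about 176
```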
Mercury’s axis of rotation is tilted just 2 degrees with respect to the plane of its orbit around the Sun. That means it spins nearly perfectly upright and so does not experience seasons as many other planets do.
Mercury has no moons, and it has no rings either.
Formation and Structure
Mercury formed about 4.5 billion years ago when gravity pulled swirling gas and dust together to form this small planet nearest the Sun. Like its fellow terrestrial planets, Mercury has a central core, a rocky mantle, and a solid crust.
Mercury is the second densest planet, after Earth. It has a large metallic core with a radius of about 1,289 miles, about 85 percent of the planet’s radius. There is evidence that it is partly molten or liquid. Mercury’s outer shell, comparable to Earth’s outer shell (called the mantle and crust), is only about 400 kilometers thick.
Most of Mercury’s surface would appear greyish-brown to the human eye. The bright streaks are called “crater rays.” They are formed when an asteroid or comet strikes the surface. The tremendous amount of energy that is released in such an impact digs a big hole in the ground and crushes a huge amount of rock under the point of impact. Some of this crushed material is thrown far from the crater and then falls to the surface, forming the rays. Fine particles of crushed rock are more reflective than large pieces, so the rays look brighter. The space environment – dust impacts and solar-wind particles – causes the rays to darken with time.
Mercury’s surface resembles that of Earth’s Moon, scarred by many impact craters resulting from collisions with meteoroids and comets. Craters and features on Mercury are named after famous deceased artists, musicians, or authors, including children’s author Dr. Seuss and dance pioneer Alvin Ailey.
Very large impact basins, including Caloris (960 miles or 1,550 kilometers in diameter) and Rachmaninoff (190 miles, or 306 kilometers in diameter), were created by asteroid impacts on the planet’s surface early in the solar system’s history. While there are large areas of smooth terrain, there are also cliffs, some hundreds of miles long and soaring up to a mile high. They rose as the planet’s interior cooled and contracted over the billions of years since Mercury formed.
Temperatures on Mercury are extreme. During the day, temperatures on the surface can reach 800 degrees Fahrenheit (430 degrees Celsius). Because the planet has no atmosphere to retain that heat, nighttime temperatures on the surface can drop to minus 290 degrees Fahrenheit (minus 180 degrees Celsius).
Mercury may have water ice at its north and south poles inside deep craters, but only in regions in permanent shadows. In those shadows, it could be cold enough to preserve water ice despite the high temperatures on sunlit parts of the planet.
Mercury possesses a thin exosphere instead of an atmosphere. It is made up of atoms blasted off the surface by the solar wind and striking meteoroids. Mercury’s exosphere is composed mostly of oxygen, sodium, hydrogen, helium, and potassium.
Mercury’s magnetic field is offset relative to the planet’s equator. Though Mercury’s magnetic field at the surface has just 1% the strength of Earth’s, it interacts with the magnetic field of the solar wind to sometimes create intense magnetic tornadoes that funnel the fast, hot solar wind plasma down to the surface of the planet. When the ions strike the surface, they knock off neutrally charged atoms and send them on a loop high into the sky. |
The precautions taken to prevent or reduce the occurrence of fire, which can result in death, injuries, or the destruction of property worth millions, are referred to as fire safety (Jerry 2008). These measures are aimed at alerting people in a structure when an uncontrolled fire occurs, so that individuals threatened by the fire can evacuate the affected areas or help put the fire out. Prevention helps to reduce the chances of a fire occurring. Fire safety is threatened by what are called fire hazards: circumstances that increase the probability of a fire breaking out, or that may prevent victims from escaping if one does occur. For instance, improper use and maintenance of a gas stove creates a fire hazard. Other common fire hazards include overloaded electrical systems, combustible storage areas with insufficient protection, fireplace chimneys that are not properly cleaned, combustibles near equipment that generates heat or sparks, and personal ignition sources such as matches, among many others.
Fire safety measures may be incorporated during the construction of a building, or implemented in buildings that are already standing. This involves educating the occupants on fire and life safety: they are taught how to prevent dangerous incidents from occurring and how to deal with the situation if they are caught unaware. Fire safety falls under building safety. Before the construction of a building begins, building safety requirements are laid down for the security of the future occupants. Fire prevention officers are responsible for inspecting buildings for violations of the fire code; they also have the responsibility of visiting learning institutions to educate students on fire safety topics.
Any fire safety policy has key elements, which include constructing a building according to the rules provided by the local building code and maintaining the structure in agreement with the requirements of the fire code (Maclean, 2007). Examples of fire safety policies include placing and maintaining firefighting equipment in easily accessible areas, prohibiting the use of flammable materials in certain locations in the building, using fire alarm systems for the detection and warning of fire, maintaining high standards of training and awareness among the users of the building to avoid mistakes, maintaining proper fire exits, and ensuring that spray fireproofing remains intact.
The United States has been committed to providing a fire-safe environment for its residents and to protecting their property through effective fire prevention, protection, and response programs. The aim of these programs is to help citizens work together to achieve and maintain an environment with reduced risks of fire hazards. The fire and safety policies outline how the United States aims to protect lives and property from destruction caused by fire arising from the use of materials or equipment, and from circumstances hazardous to life and property (Amernic, 2008).
Fire and life safety guidance was developed to provide information on how to implement the requirements of the United States’ fire and life safety policy. It also enhanced building-specific life safety and Emergency Management Plans and offered guidance on some common issues which can become fire code violations if not appropriately addressed. This essay addresses four major areas: the effects of fire, safety issues, fire and safety issues affecting the elderly, and ways to educate and train people to prevent fires and to handle them when they do occur. It explores these areas in detail and proposes concrete ways to curb this catastrophic hazard, which leads to loss of life and the destruction of property worth millions of dollars.
Effects of Fire
Fire is essential to human beings and, at the same time, a danger to them. Since early times, fire has held a peculiar place for human beings: they discovered that fire can be dangerous, and they also figured out that it could be used for heating, lighting, and cooking food. Nowadays, people have gained knowledge of more dangers of fire as well as more uses for it.
In addition, fire has been a dangerous thing, which causes burns and has capabilities to destroy houses, wildfire, and buildings. It causes many injuries and casualties if not used well. Over the past years, fire has caused the death of many people. This makes it the most perilous form of natural event in the United States of America. Researchers have proven that most fire deaths are caused by smoke inhalation rather than the burns. If exits are not properly placed, the smoke incapacitates so rapidly that it overcomes the victims’ bearing in mind that most of our homes today uses synthetic materials, which produces very dangerous substances if burnt (Vilar, 2001). As the fire grows inside the building, it consumes the available oxygen, thereby slowing the burning process. Unavailability of sufficient oxygen leads to incomplete combustion, which emits toxic gases into the environment.
In the United States of America, the majority of fire impacts occur in densely populated areas. These fire incidences have been proved that they mainly occur in most summers as compared to other seasons. Necessary complements to effective bush fire readiness and response in minimizing the risks of fire to people, their health, property, infrastructure and production systems need to be implemented. Better community awareness and understanding of how to prepare for and response to fire incidences are very important. Other necessary complements are better building design and maintenance, and good planning development.
Fire is among the ecological factors affecting plant life of the south eastern parts of the United States. In most parts of this region, fire seems to exert an influence in determining persistent type of vegetation as compared to the climate and soil. Frequent rate of fire has a great effect on the ecology of the United States’ pine ecosystems. The Fire Ecology Program has been conducting research to find out better understanding of the appropriate cause of this frequent fire (York, 2003).
The frequent occurrence of fire in the United States has contributed to changes in air quality. This is due to the emission of large quantities of smoke into the environment in a given area within a short period of time. Smoke comprises of small particulate of ash, partly consumed fuels, and invisible gases, such as carbon dioxide, carbon monoxide, hydrocarbons and small quantities of nitrogen oxides. These impurities change the quality of air. These toxic gases cause respiratory diseases when breathed in by the human beings. Existence of some gases in the air, such as carbon dioxide and sulphur dioxide causes acidic rains. This reduces the life span of building and many other structures. It also causes skin diseases to humans.
Basically, though fire is vital for human daily activities, it has a lot of negative impacts if not used appropriately. It affects them either directly or indirectly through environmental changes, such as emission of toxic gases into the air. This makes it very crucial to come up with educative plans to teach people about fire and life safety in order to counter the high rate of occurrence of fire.
Fire and Safety Issues
Fire safety and management play a vital role in preventing and fighting fires. When a fire breaks out, it must be attacked and controlled quickly to reduce the resulting damage, and a fire can be very difficult to handle without appropriate equipment and personnel trained in fire safety. Several areas need to be considered in fire prevention, including identifying places where fire can break out easily, the possible size and spread of a fire, and its likely impacts. High-risk areas within a building include large storage compartments, stores of flammable goods, plant rooms, kitchens, manufacturing areas, and covered car parks.
In the United States, fire departments use many terms to describe fire education and prevention programs (Cote, 2008). These programs share the same goals but differ in size and approach. As noted earlier, fire can lead to massive loss of life and widespread loss of property, so a response plan should be developed to deal with unpredictable fire incidents. This is why all firefighting equipment should always be properly installed and regularly maintained.
Devices such as smoke alarms can roughly double the chances of surviving a fire when properly maintained, which requires ensuring that all alarms are correctly installed and in good condition. It is advisable to install smoke alarms in each bedroom if occupants sleep with the doors closed. Alarms should be tested monthly and their batteries replaced annually, and specialized alarms, such as vibrating or flashing models, should likewise be tested once a month to ensure they are functioning properly.
Unsafe heating practices are also a major cause of fire incidents in the United States (NFPA, 2005). Heating equipment should be installed correctly, and flammable materials, such as curtains or furniture, should be kept at least three feet away. A stove should never be used as a substitute for a space heater.
Unsafe cooking habits are another major factor leading to accidental fires that cost lives and destroy property. Food cooking on a stove should never be left unattended. Tight-fitting clothes should be worn when cooking over an open flame, because a dangling sleeve can easily catch fire. When a candle is used for lighting, it should be placed in a safe position far away from flammable materials.
Unsafe smoking habits are another safety issue that deserves attention. Careless smoking is among the leading causes of fire in the United States, as smokers often pay little attention to where they smoke or how they discard their ashes. For safety, smokers should remain alert, never smoke in bed, soak ashes in water before discarding them, and never leave smoking materials unattended.
Planning and practicing a home fire escape is crucial. Escape routes should be kept up to date and occupants informed of any changes, so that in case of a fire they know which routes to take. When preparing escape routes, consider the different capabilities of the occupants and make sure each room has at least two ways out.
Fire and Safety Issues for Elderly People
In the United States, the elderly population is growing at a moderate rate, with those aged 65 and above representing 12.5 percent of the total population. Unfortunately, elderly people are statistically more at risk in their homes. The physical and cognitive changes associated with aging place them at high risk of injury from fire. Sensory impairments are among the common complications of old age (Harwood, 1989): diminished vision, hearing, and sense of smell all increase exposure to the dangers of fire and burns. For instance, an inability to smell smoke, for example due to respiratory problems, increases the chances of suffocating from toxic fumes and smoke inhalation, and the elderly often suffer from one or more of these deficiencies.
Older people also exhibit behaviors that place them at greater risk of fire. Many rely on alternative sources of heat, such as space heaters and electric blankets, to keep warm. A space heater that is not properly maintained increases the chances of starting a fire, and electric blankets that are washed repeatedly can have their wiring compromised, creating fire risks. In addition, many elderly people live alone and are therefore unlikely to receive prompt help in a fire emergency.
A large percentage of older people in the United States are on medication (Pollock, 1998), and an individual may be taking several prescribed medications at the same time. Taken together, medications may cause drowsiness or impaired judgment, effects that reduce the ability to detect and escape a fire and increase the likelihood of accidentally starting one. For these reasons, it is unwise for elderly people to live alone.
Almost a quarter of those aged 65 and above are reported to drink alcoholic beverages nearly every day. Combining prescribed medicines with alcohol further reduces the cognitive and physical abilities of older adults (Fahey and Miller, 1989), which is why specialists advise against drinking alcohol while on medication.
Roughly 20 percent of older adults in the United States live at or below the poverty line (Fahey and Miller, 1989). Poverty increases fire risk because individuals lack the income to equip their houses with fire protection; even where equipment such as smoke alarms is available, they may be unable to afford its maintenance. Many also rely on dangerous heat sources, such as open flames or space heaters, which carry a high risk of fire. Moreover, in low-income households the electrical wiring may be unsafe or fall short of code standards.
Older adults need advice on fire safety, and individuals with physical disabilities or impairments should be the first consideration when designing fire safety programs. Smoke alarms for people with hearing problems should be fitted with vibrating pads that go under the pillow and with strobe lights, so that they are alerted in time to escape or fight the fire.
Elderly people living alone should arrange for a firefighter or a Community Safety Advocate to assess their homes (SINTEF, 2008). During the visit, guidance is given on how to make the home safer, which reduces the chances of an accidental fire.
Older adults are encouraged to wear a dressing gown and take warm drinks to keep warm rather than relying on space heaters during the cold winter months, and portable gas heaters should be checked regularly to ensure they are safe. All paths in and around the house should be kept clear (and outdoor walkways salted in icy weather) so that, in case of a fire, occupants can escape without obstruction. In addition, elderly people are advised not to stay alone; the company of family members, such as grandchildren, is also valuable.
Automatic extinguishing systems should be installed in houses where older people live. In the event of a fire, these systems activate automatically, even without the occupants' knowledge, making them one of the best ways to deal with fire in a household occupied mainly by the elderly.
Kitchen safety devices that turn off the stove when someone forgets it and walks away should also be installed. Such devices give peace of mind to aging seniors and their family members, and because memory often declines with age, they greatly help to prevent fires.
Educating and Training Fire Safety
To curb this disaster, the United States has developed various methods of fire safety training, and programs have been initiated at different academic levels to emphasize the issue. Fire prevention is a vital topic for children, particularly in the younger grades, because it establishes a strong foundation for preventing accidental fires. Even though lessons can be fun, the subject needs to be taught for safety's sake: learning about fire safety at an early age increases children's chances of staying safe, as they learn not to panic in the event of a fire. Instructors are advised to introduce the topic using a firehouse DVD, which gives children a look inside a firehouse and shows the protective gear firefighters wear.
Inviting guest speakers to the classroom is another important measure, as it reinforces the message for children. Visitors from fire departments bring safety trailers, which they use to demonstrate fire hazards and safety tips to kids.
Once children are familiar with fire safety, it is important that they practice it, since rehearsing the procedures matters greatly. "Stay Low and Go" is a good slogan for remembering to crawl to safety; it helps not only the kids but also others avoid breathing in the smoke that rises from a fire. Regular fire drills reinforce proper behavior both during a drill and in the event of a real fire.
Additionally, many colleges and universities in the United States offer fire science degree programs for fire department administrators, inspectors, and aspiring firefighters. Students are taught how to deal with accidental fires, arson, and hazardous materials. Because many institutions offer these courses, it is advisable to research them thoroughly to find the best-equipped school. To ensure that classes match their career aspirations, prospective students should review program descriptions at potential schools. Those working in management positions might seek training in leadership, communications, and arson investigation, while those who want a job in public administration may prefer to study policymaking, fire prevention education, and the legal aspects of fire science. Programs that focus on developing improved fire protection and suppression systems may be ideal for aspiring engineering professionals.
Some fire science colleges also offer distance learning programs online, enabling those already employed to enhance their skills. In the United States, online programs have encouraged many to undertake these courses because they can study from home. Those who want to improve their skills can seek advice from experienced firefighters, who can recommend the best schools. As enrollment in fire-related courses grows, a larger share of the population will have fire safety knowledge, which reduces the chances of fires occurring and provides better techniques for fighting them.
Most fire departments in the United States offer fire safety programs to the general public. Through seminars, a large number of people have acquired fire safety skills. Seminars are held in different localities, where fire experts meet the intended groups and teach them how to prevent fires and how to deal with them when they occur; selected members of the public also receive training from these experts (Proulx, 2007). Training is a key element that enables trainees to act proactively before an incident takes place. Even when households are equipped with all the required fire equipment, people without training may respond in ways that endanger themselves rather than using the equipment as intended. Students are among the most frequently targeted groups for training because they are easy to train; according to Proulx, involving students is a key to success, since practical training is interactive and engaging. For example, handling a fire extinguisher is a hands-on experience that captures the attention of many students who have never used one before.
During training, trainees are taught the reasons behind the rules and why they should be followed. The key to instilling fire safety is changing people's behavior and attitudes toward fire. Training puts a person in a better position to choose fire-safe housing, follow the rules and regulations, and react properly in case of a fire.
Diverse programs are provided to suit each age group, and their messages generally cover injury prevention, safety, escape in case of a fire, and fire prevention in general. Media programs have also reached a large share of the public: fire department representatives present various programs through the media to educate people, and demonstrations that appeal to the public make many eager to learn more. For instance, showing real fire incidents captures widespread attention, so the intended message is conveyed effectively. The media has proved to be one of the most effective ways to attract the public's attention and persuade them.
The United States has also developed strategies to ensure that buildings are designed with people's likely reactions to an emergency in mind (Rita, 2009). The National Fire Protection Association has carried out research on how people react to fire and emergencies, which has enabled it to devise different methods of educating and training the public about fire safety. Only by training and educating people about fire and life safety can this hazard be curbed.
In recent years, fire has caused massive loss of life and the destruction of property worth millions of dollars. The young and adults over 65 are the groups most vulnerable to this hazard. As seen earlier, the elderly are at high risk even compared with the young, because they are often in close contact with heat sources to keep warm and have physical impairments that hinder their escape in the event of a fire. When their clothing catches fire, those impairments reduce their ability to extinguish it or get away, and they often prevent elderly people from performing life-saving actions such as stop, drop, and roll.
To counter this tragedy, safety policies have been implemented as key safeguards, including constructing buildings according to the local building code and maintaining structures in accordance with the fire code (Maclean, 2007). Implementing these policies in the United States has greatly contributed to reducing the number of deaths caused by fire.
Fire not only causes death and destroys property; the continuous emission of toxic substances into the environment also affects weather patterns. The frequent occurrence of fire in the United States has contributed to changes in air quality through the release of toxic gases into the atmosphere, and fire is also one of the ecological factors disrupting plant life in the country.
In addition, the inappropriate use of heating equipment contributes to a high percentage of fire cases in the United States. Heating equipment should always be installed properly and maintained regularly, and flammable materials should be kept away from heat sources. Safe cooking habits must be practiced: food on a stove should never be left unattended. Unsafe smoking habits also need to be stopped for safety's sake.
Individuals should plan and practice their home fire escape so that they are familiar with the escape routes. Each room should have at least two possible routes, and the occupants should be made aware of them.
Lastly, the United States has played a major role in implementing ways to educate and train people about fire and life safety. Through a range of programs, a large number of people have been trained and equipped with the knowledge to handle fire incidents. Educating the public on fire and life safety will reduce the number of fire incidents that occur.
The world's collective imagination about the age-old question, "Are we alone?", has been reignited now that we understand exoplanets – planets in orbit around stars other than Earth's Sun – are not uncommon. There is increased urgency to develop capabilities for directly photographing exoplanets around nearby stars and characterizing their surface conditions, and Alpha Centauri, the nearest star system to our own, has understandably become a focal point of current scientific study.
Alpha Centauri features two Sun-like stars, each with the chance of having one or more exoplanets orbiting in its habitable zone - the range where temperatures would allow liquid water to exist on a planet's surface - making it an even more compelling target. From a technical point of view, as our nearest neighboring star system, Alpha Centauri is also the easiest system in which to resolve important physical scales, including the separation between a star and its habitable zone.
Being the closest neighboring system to Earth also makes Alpha Centauri our best prospect to eventually explore when we have the technology to travel across multiple light years in a reasonable amount of time. As a point of comparison, we're still just beginning our exploration of Mars, and it's typically only a few light minutes, or ~40 million miles, away at its closest approach from Earth. Alpha Centauri is the closest star system, but it's still 4.37 light years, or ~25 trillion miles, away.
Although we know that Earth-size planets are common in our Milky Way, the technology to photograph them directly in light visible to the human eye has not been available until now. The challenge in capturing this kind of image has been figuring out how to effectively block the light of a star in order to view its orbiting planets. A star is over a billion times brighter than its planets, and its light needs to be suppressed in order to see and capture pictures of them.
Since Alpha Centauri is a binary system, this challenge is even more complicated, as we have to suppress the light from two stars. Project Blue is a mission to put a special-purpose telescope, capable of suppressing starlight and capturing images of exoplanets, into low Earth orbit. The Project Blue telescope will use a technique called 'direct imaging' to dim the light from Alpha Cen A and B, enabling us to see any surrounding exoplanets in their orbits. The specialized starlight suppression system consists of:
- An instrument called a coronagraph to block starlight, using either the Phase Induced Amplitude Apodization (PIAA) or Vector Vortex technique;
- A deformable mirror, low-order wavefront sensors, and software control algorithms to manipulate the incoming light and achieve multi-star wavefront control (MSWC); and
- Post-processing methods, called Orbital Differential Imaging (ODI), to enhance image contrast.
While every space mission is complex, difficult, and time-consuming, Project Blue has a relatively short lifecycle of about six years. The Project Blue team, made up of technical experts and resources from BoldlyGo Institute, Mission Centaur, the SETI Institute, University of Massachusetts Lowell, and other institutions, hopes to launch the mission into low-Earth orbit by 2021 and observe the Alpha Centauri system for 2 years with its coronagraphic camera. By contrast, if we were to design a probe to send to Alpha Centauri and launch it today using conventional rocket technology, it would take approximately 75,000 years to get there before it could transmit images of any discoveries back to Earth.
Project Blue discoveries would be able to inform other missions, such as future large ground- and space-based telescopes, and even Breakthrough Starshot, which is in the early stages of developing technology to reach speeds of ⅕ light speed in order to send probes to Alpha Centauri. If successful, Breakthrough Starshot would still take decades to get to Alpha Centauri, and an additional 4 years for the first images to return to Earth. Project Blue might be able to provide a "roadmap" to help make sure future missions like these look in the right places.
Perhaps most important is the possibility that Project Blue identifies and captures the first image of a rocky blue exoplanet - a 'pale blue dot' - like the picture of Earth that the Voyager 1 spacecraft sent back to us on February 14, 1990, from a distance of 4 billion miles as it was leaving our solar system.
From a scientific standpoint, this discovery would be on par with other major discoveries of the past 500 years. It would enable us to learn about and study the composition of what could be another planet with oceans of water and a thick atmosphere capable of supporting life as we know it: A sister Earth.
From a philosophical standpoint, this discovery would be even more profound. Identifying another cerulean planet could tell us that Earth is not unique in the universe. When Voyager 1 sent the pale blue dot images of Earth, Carl Sagan offered:
"Look again at that dot. That's here. That's home. That's us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives."
While finding another blue planet would enable us to answer many questions, it would lead to many more, undoubtedly starting with: "Are we alone?"
If we were to determine that there is life, all of Sagan's musings about Earth would suddenly take on a new dimension we've never had to consider. We would have discovered someone else's home.
"Project Blue is an ambitious space mission, designed to answer to a fundamental question, but surprisingly the technology to collect an image of a "Pale Blue Dot" around Alpha Centauri stars is there," said Franck Marchis, Senior Planetary Astronomer at the SETI Institute and Project Blue Science Operation Lead. "The technology that we will use to reach to detect a planet 1 to 10 billion times fainter than its star has been tested extensively in lab, and we are now ready to design a space-telescope with this instrument."
"We are extremely excited to partner with BoldlyGo Institute on Project Blue. We share a common goal to incorporate citizen science into our initiatives," said SETI Institute President and CEO Bill Diamond. "Project Blue builds on recent research in seeking to show that Earth is not alone in the cosmos as a planet capable of supporting life, and wouldn't it be amazing to see such a planet in our nearest neighboring star system? This is the fundamental reason we search."
"The future of space exploration holds boundless potential for answering profound questions about our existence and destiny. Space-based science is a cornerstone for investigating such questions," said BoldlyGo Institute CEO Jon Morse. "Project Blue seeks to engage a global community in a mission to search for habitable planets and life beyond Earth."
Project Blue is currently raising private and citizen funding to complete the initial mission architecture design. The design is based on detailed requirements laid out by the Project Blue science team, which is made up of leading exoplanet researchers from a variety of institutions. This architecture orchestrates how the telescope, coronagraphic camera, spacecraft bus, and ground system all work together to acquire, store, transmit, and process the pictures that the Project Blue mission will take. After initial design, we'll run mission simulations to predict performance, and will make a Mission Performance Simulator (MPS) available online for interested citizen scientists to run their own simulations.
To learn more about Project Blue, visit projectblue.org
Achieving superconductivity at room temperature has represented one of the holy grails of physics for decades. A practical material with zero electrical resistance would not only represent a major advance in physics, but also revolutionize technologies from power grids to electric motors. However, the mechanism behind so-called ‘high-temperature’ superconductors, which are superconducting above approximately -240 Celsius, has been unclear, and the highest temperature at which superconductivity has been observed remains at a frigid -108 Celsius.
Now, the mechanism responsible for superconductivity in an important class of high-temperature superconducting materials, discovered in 2008, has been revealed by Tetsuo Hanaguri and colleagues at the RIKEN Advanced Science Institute, the Japan Science and Technology Agency (JST), The University of Electro-Communications in Tokyo, and The University of Tokyo1.
The researchers studied the mechanism behind a key property of all superconductors: electron pairing. In an ordinary material, electrons travel independently and their motion is regularly disrupted, or scattered, by defects and by vibrations (or phonons) of the atomic lattice they are traveling through. This leads to electrical resistance, so that any flowing current must be ‘pushed’ along by an applied voltage. In superconductors, electrons travel in pairs, rather than individually, making them less prone to scattering. A minimum amount of energy called the ‘superconducting gap’ energy must then be expended to break an electron pair. Since this energy is unavailable at low temperatures, the motion of the electron pairs remains unperturbed, and the material’s resistance is zero. This means a current can flow perpetually without any applied voltage.
Hanaguri and colleagues focused on understanding how electron pairing occurs in iron-based superconductors, one of the two major classes of high-temperature superconductors. In conventional, low-temperature superconductors, electrons are paired because phonons create attractions between them, overcoming the natural repulsion the electrons have as a result of their identical negative charges. In iron-based superconductors, however, superconductivity is associated with a particular ordering of the atomic magnets found in the materials. This generated speculation among physicists that these tiny magnets, or spins, may be involved in the pairing mechanism. The work by Hanaguri and colleagues provides strong evidence that these spins are indeed responsible for electron pairing in iron-based superconductors.
Out of phase
The researchers leveraged their expertise with scanning tunneling microscopes (STMs) to gather this evidence. Traditionally used to map the shapes of nanostructures and atoms, these microscopes measure the current between a sharp nanoscale tip and a surface just beneath it. They can also be used to measure the momentum of electrons traveling across a surface. Just before the discovery of iron-based superconductors, Hanaguri had developed a method at RIKEN in Hidenori Takagi’s laboratory to use STMs to measure the phase of electrons, and this capability was the key to their work on superconductors.
Hanaguri and colleagues were able to measure the interference pattern of electron pairs by purposefully scattering them from magnetic vortices that they created in the superconductor Fe(Se,Te) using an applied magnetic field. Electron pairs behave like waves at very small scales so, like all waves, they have a phase. For example, two water waves traveling across a pond at the same speed have different phases if one wave is slightly behind the other. If they collide, they make an interference pattern that is affected by the phase difference between them. Similarly, the interference pattern made by electron pairs is affected by the phase difference between those pairs.
The researchers measured and interpreted these interference patterns to understand iron-based superconductors. After initial measurements on high-quality crystals grown by their collaborator Seiji Niitaka, they began the task of data interpretation. Unfortunately, they made an early mistake with the coordinate system that stymied their progress until Kazuhiko Kuroki from The University of Electro-Communications realized the error at a presentation. Kuroki later joined the collaboration and helped interpret the measured interference patterns.
The team found that the patterns could be explained by assuming that the phase of an electron pair, and its associated superconducting gap, depends on the momentum of the pair (Fig. 2). This telltale sign of spin-mediated electron pairing had been predicted theoretically but never realized experimentally. By confirming the role of spins in iron-based superconductors, the team's data lay the foundation for an understanding of superconductivity that, unlike in more conventional superconductors, is not based on lattice vibrations.
Past and future
Hanaguri says his group was in a lucky position at the outset. “My ‘aha!’ moment came when I realized that the phase-sensitive STM technique that I had already developed could be applied to iron superconductors, which had just been discovered.” He also counts openness as a key to the success of the work: had Hanaguri not comprehensively described his preliminary results at a conference, Kuroki would not have identified his mistake. “My policy is that all the data, techniques and plans that I have must be as open as possible,” Hanaguri says.
Hanaguri also notes that the phase-sensitive scanning tunneling microscope developed by his team yielded a significant result in only its first years of operation, and can be expected to produce important results in other realms of physics, including magnetism. Ultimately, Hanaguri would be most satisfied by finding something completely new. “Our equipment is capable of studying matter under extreme conditions, and it is under extreme conditions that many new physical phenomena have been discovered,” he explains. “To discover a new phenomenon would be much more exciting than the elucidation of an existing phenomenon’s mechanism.”
About the Researcher
Tetsuo Hanaguri was born in Tokyo, Japan, in 1965. He graduated from the Department of Applied Physics at Tohoku University in 1988, and received his PhD in applied physics from the same university in 1993. He then worked as a research associate and associate professor at The University of Tokyo until he joined RIKEN. Since 2004, he has held the position of senior research scientist in the Takagi Magnetic Materials Laboratory at RIKEN. He works in the field of experimental condensed-matter physics at low temperatures, and his current research focus is on spectroscopic imaging scanning tunneling microscopy of complex electron systems including superconductors and topological insulators. He is also interested in measurement science and technology and enjoys building scientific apparatus.
Overview of algorithms
Introduction to Algorithms
Definition of an algorithm
An algorithm is a set of step-by-step instructions for solving a problem or performing a task. In computer programming, algorithms are used to manipulate data, perform calculations, and automate tasks. The goal of an algorithm is to produce a correct result in a reasonable amount of time, using a finite amount of memory and other resources.
Algorithms can be expressed in natural language, pseudocode, flowcharts, or programming languages. The choice of representation depends on the audience and the purpose of the algorithm. Natural language and pseudocode are more abstract and easier to understand, while flowcharts and programming languages are more concrete and precise.
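To make the contrast between representations concrete, here is a minimal sketch (the function name and sample data are illustrative only, not taken from any particular source) showing the same simple algorithm first as pseudocode and then in Python:

```python
# Pseudocode:
#   function max_value(items):
#       largest <- first element of items
#       for each remaining item in items:
#           if item > largest then largest <- item
#       return largest

def max_value(items):
    """Return the largest element of a non-empty list."""
    largest = items[0]
    for item in items[1:]:
        if item > largest:
            largest = item
    return largest

print(max_value([3, 7, 2, 9, 4]))  # prints 9
```

The pseudocode conveys the idea at a glance, while the Python version is precise enough to run and test.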
To be effective, an algorithm must have certain characteristics. It must be correct, meaning it produces the expected result for all inputs. It must be efficient, meaning it uses as little time and space as possible. It must be general, meaning it can handle a wide range of inputs. It must be robust, meaning it can handle unexpected inputs or errors gracefully. It must be maintainable, meaning it can be modified or extended easily. And it must be understandable, meaning it can be comprehended by other programmers or users.
Designing an algorithm is a creative and iterative process that involves understanding the problem, breaking it down into subproblems, identifying patterns and structures, and selecting appropriate data structures and algorithms. Common algorithm design techniques include brute force, divide-and-conquer, dynamic programming, and greedy algorithms.
Analyzing an algorithm involves measuring its performance in terms of time complexity, space complexity, and other metrics. This helps to compare different algorithms, predict their behavior on large inputs, and optimize their implementation.
Importance of algorithms in computer programming
Algorithms are essential in computer programming as they enable developers to write efficient and reliable code. They provide a systematic approach to solving problems and performing tasks, and they allow programmers to abstract away the details of the underlying hardware and focus on the logic of the program. By encapsulating complex operations in reusable and modular code, algorithms make it easier to maintain, test, and debug software.
Algorithms are used in a wide range of applications, from simple scripts to complex systems. They are used to manipulate data, search and sort records, perform calculations, generate random numbers, parse and validate input, and much more. Without algorithms, computer programs would be limited to simple and repetitive tasks, and they would not be able to handle the complexity of modern computing.
In addition to their practical benefits, algorithms are also a fascinating intellectual pursuit. They offer a rich and diverse field of study that combines mathematics, logic, and computer science. They challenge our intuition and creativity, and they inspire us to develop new and innovative solutions to old problems.
Characteristics of a good algorithm
A good algorithm has several key characteristics that make it effective and reliable.
First and foremost, a good algorithm must be correct. This means that it produces the expected result for all inputs. A correct algorithm must handle all possible edge cases and error conditions in a way that produces the correct output. If an algorithm produces incorrect results, it is useless and potentially harmful.
In addition to being correct, a good algorithm must also be efficient. This means that it uses as little time and space as possible to produce the correct output. Efficiency is important because it determines how quickly an algorithm can process large inputs or handle a high volume of requests. An inefficient algorithm can become a bottleneck or a source of frustration for users.
A good algorithm must also be general. This means that it can handle a wide range of inputs, not just the ones that were used to design and test it. A general algorithm is flexible and adaptable, and can be used in a variety of contexts. An algorithm that is too specific may be useless or require significant modification to be useful in a different context.
Another important characteristic of a good algorithm is robustness. This means that it can handle unexpected inputs or errors gracefully. A robust algorithm should not crash or produce incorrect outputs when given unexpected or invalid input. Instead, it should handle errors in a way that is predictable and understandable.
A good algorithm must also be maintainable. This means that it can be modified or extended easily as requirements change or new features are added. A maintainable algorithm should be well-organized, modular, and easy to understand. It should have clear interfaces and well-defined responsibilities. A non-maintainable algorithm can become a source of technical debt and make it difficult to evolve the software.
A good algorithm must be understandable. This means that it can be comprehended by other programmers or users. An understandable algorithm should have clear and concise documentation, variable names, and comments. It should follow established coding conventions and be consistent with the overall design of the system. An algorithm that is difficult to understand can be a source of confusion and errors.
Common algorithm design techniques
Common algorithm design techniques include:
- Brute force: This technique involves trying every possible solution until a correct one is found. While it is simple and easy to implement, it can be very inefficient for large inputs. Brute force algorithms are often used as a baseline to compare against more sophisticated techniques.
- Divide and conquer: This technique involves breaking a problem down into smaller subproblems, solving them recursively, and combining the results. Divide and conquer algorithms are often efficient and can exploit parallelism, but they may require more complex data structures and can be difficult to implement correctly.
- Dynamic programming: This technique involves breaking a problem down into smaller subproblems and caching intermediate results to avoid redundant computation. Dynamic programming algorithms can be very efficient and are often used in optimization problems, but they require careful design and analysis.
- Greedy algorithms: This technique involves making locally optimal choices at each step, with the hope of finding a global optimum. Greedy algorithms can be very efficient and easy to implement, but they may not always produce the best solution and can be difficult to analyze.
Other algorithm design techniques include backtracking, branch and bound, randomization, and approximation algorithms. The choice of technique depends on the characteristics of the problem, the available resources, and the desired trade-offs between correctness, efficiency, and simplicity.
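As a rough illustration of two of these techniques, the sketch below contrasts a brute-force recursive computation of Fibonacci numbers with a dynamic-programming version that caches intermediate results; the function names are illustrative assumptions rather than standard terminology:

```python
from functools import lru_cache

def fib_brute_force(n):
    """Naive recursion: recomputes the same subproblems, roughly exponential time."""
    if n < 2:
        return n
    return fib_brute_force(n - 1) + fib_brute_force(n - 2)

@lru_cache(maxsize=None)
def fib_dynamic(n):
    """Dynamic programming via memoization: each subproblem is solved once, O(n) time."""
    if n < 2:
        return n
    return fib_dynamic(n - 1) + fib_dynamic(n - 2)

print(fib_brute_force(10), fib_dynamic(10))  # both print 55
```

Both functions return the same result; the cached version simply avoids the redundant computation that makes the brute-force version impractical for large inputs.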
Analysis of algorithms
Analysis of algorithms is the process of measuring the performance of an algorithm in terms of time complexity, space complexity, and other metrics. The goal of algorithm analysis is to compare different algorithms, predict their behavior on large inputs, and optimize their implementation.
Time complexity is a measure of the amount of time an algorithm takes to run as a function of the input size. It is usually expressed as a function of the worst-case input size, and it ignores constant factors and lower-order terms. The most common notation used for time complexity is big O notation, which provides an upper bound on the running time of an algorithm. For example, an algorithm with a time complexity of O(n) means that its running time grows linearly with the input size.
Space complexity is a measure of the amount of memory an algorithm uses as a function of the input size. It is usually expressed as a function of the worst-case input size, and it ignores constant factors and lower-order terms. The most common notation used for space complexity is also big O notation, which provides an upper bound on the space used by an algorithm. For example, an algorithm with a space complexity of O(n) means that it uses at most a constant multiple of n units of memory.
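For example, the following sketch (illustrative only) compares a linear search, whose running time grows linearly with the input size, against a binary search on sorted data, whose running time grows logarithmically; both use only a constant amount of extra memory:

```python
def linear_search(items, target):
    """O(n) time, O(1) extra space: examine each element in turn."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

def binary_search(sorted_items, target):
    """O(log n) time, O(1) extra space: halve the search range at each step."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [2, 5, 8, 12, 16, 23, 38]
print(linear_search(data, 23), binary_search(data, 23))  # both print 5
```

Binary search is asymptotically faster, but it requires the input to be sorted, which is itself an O(n log n) cost if the data arrive unsorted.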
Other metrics used for algorithm analysis include best-case running time, average-case running time, worst-case space usage, and worst-case number of comparisons. These metrics provide a more detailed picture of the performance of an algorithm and can be used to compare different algorithms on specific inputs or scenarios.
Algorithm analysis can be done theoretically, by analyzing the code and deriving expressions for the time and space complexity, or empirically, by measuring the running time and memory usage of the algorithm on real inputs. Theoretical analysis provides a more abstract and general view of the algorithm, while empirical analysis provides a more concrete and specific view. Both approaches are useful and can be combined to get a comprehensive understanding of the algorithm.
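A simple way to perform an empirical analysis is to time an algorithm on inputs of increasing size and observe how the running time grows. The sketch below (the helper name and the sample workload are illustrative assumptions) uses Python's standard perf_counter for this purpose:

```python
import time

def time_call(func, *args):
    """Measure the wall-clock time of a single call; a crude empirical measurement."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

def sum_of_squares(n):
    """A linear-time workload used only to demonstrate the measurement."""
    return sum(i * i for i in range(n))

for size in (10_000, 100_000, 1_000_000):
    _, seconds = time_call(sum_of_squares, size)
    print(f"n = {size:>9}: {seconds:.4f} s")
```

In practice, repeated runs and averaging (for example with the standard timeit module) give more reliable measurements than a single call.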
The results of algorithm analysis can be used to optimize the implementation of the algorithm. For example, if the time complexity of an algorithm is O(n^2), it may be possible to redesign the algorithm to have a time complexity of O(n log n) or even O(n) by using a more efficient data structure or algorithmic technique. Similarly, if the space complexity of an algorithm is too high, it may be possible to reduce it by using a more compact data structure or by reusing memory.
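As a hypothetical illustration of such an optimization, the sketch below reduces a quadratic-time duplicate check to an expected linear-time version by trading extra memory for speed; the function names are illustrative only:

```python
def has_duplicates_quadratic(items):
    """O(n^2) time, O(1) extra space: compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """O(n) expected time, O(n) extra space: remember elements seen so far in a set."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicates_quadratic([1, 2, 3, 2]))  # True
print(has_duplicates_linear([1, 2, 3, 2]))     # True
```

The faster version shows the typical trade-off mentioned above: it lowers time complexity at the cost of higher space complexity.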
Attachment theory states that children develop expectations about the extent to which they will receive support when stressed, and these expectations shape the relationships they form later in life. For example, individuals who seldom received warmth, approval, and support when needed during childhood feel uneasy with intimacy. Instead, they prefer to rely on their own resources and abilities to redress threats and, as a consequence, become inclined to suppress their limitations, striving to perceive themselves as competent and resilient.
Attachment theory was initially applied almost exclusively to the study of children and their caregivers. In the 1980s, the theory was extended to understand adult romantic relationships and, then, eventually to all friendships.
Bowlby (1969/1982) contended that humans are born with an attachment system, which motivates individuals to seek proximity, comfort, and assistance from parents and, later, from other protective figures such as teachers, friends, romantic partners, and counselors, especially when some threat or adversity is imminent.
Specifically, attachments refer to bonds between a person and an attachment figure. These bonds first develop in infancy and childhood as a means of ensuring that the need for safety and protection is fulfilled. That is, children instinctively form bonds with a caregiver. According to Bowlby (1969), when individuals perceive a threat or danger, they experience a sense of alarm, which activates the attachment system and elicits behaviors that promote support from an attachment figure, usually the mother or the father. These behaviors are intended to maintain proximity to this caregiver. If the attachment figure is not available or not responsive, individuals experience a sense of anxiety.
Later, Bowlby (1973) discussed how this attachment system evolves over time, shaped by experiences with these protective figures. During their life, individuals develop specific tendencies, called their attachment style, which govern how they seek and maintain proximity to a person who can facilitate their capacity to cope with threats and dangers (Bowlby, 1973, 1988).
Ainsworth and her colleagues conducted naturalistic, longitudinal research, as well as experimental studies, to examine these attachment styles in infants and their mothers (Ainsworth, Blehar, Waters, & Wall, 1978). In this research, infants and mothers were observed in a room. Occasionally, the mother was asked to leave the room and then return.
These studies uncovered three distinct attachment styles, or means of seeking and maintaining proximity to an attachment figure: secure, insecure-ambivalent, and insecure-avoidant. Secure infants showed moderate distress when their mother left the room, approached her when she returned, accepted comfort from her, and explored the room adventurously provided she was present. Ambivalent infants showed elevated distress when the mother departed and, although they sought proximity upon her return, could not be comforted; these infants also showed anxiety when exploring the room, even when the mother was present. Finally, avoidant infants showed minimal distress when the mother departed and no excitement when she returned. In short, only infants who had formed a secure attachment style seemed to desire proximity and perceive the mother as a secure base from which to explore the world.
According to Ainsworth (1979; Ainsworth et al., 1978), the behavior of caregivers at least partly determines the attachment style of infants. When mothers were responsive to the needs of their children, the infants developed a secure attachment style. When the responses of mothers were inconsistent, often interfering with the activities of their children, the infants developed an ambivalent style. Finally, when mothers rejected their children's attempts to establish physical contact, the children showed an avoidant style.
Several key assumptions underpin attachment theory. First, attachment or bonding behaviors are considered to be adaptive, increasing the capacity of individuals to survive (Bowlby, 1969). Examples of these behaviors include the inclination of toddlers to remain proximal to familiar individuals. Hence, cues that coincide with potential threats, such as unfamiliar events or rapidly approaching objects, activate the attachment system in infants or children, invoking behaviors that maintain proximity to caregivers.
Second, the development of these tendencies is primarily shaped during specific phases in life, perhaps especially during the first three years. That is, the development of these inclinations is especially sensitive to cues and events during these early years (Bowlby, 1958).
Third, the preference of individuals towards specific figures, such as their parents, is not inherent. Instead, children develop this need to seek their primary attachment figure as a consequence of their experiences with this person (Bowlby, 1958). The person who is most available and responsive to the infant, especially during stressful or threatening contexts, becomes the primary attachment figure.
Fourth, the infant usually develops a hierarchy of relationships, ranging from the person whom they favor most when seeking proximity and support to other individuals whom they favor less (e.g., Rutter, 1995). This position diverges slightly from an earlier perspective promulgated by Bowlby (1958), called monotropy, which assumed that infants primarily seek support from a single individual, usually the mother.
Fifth, this preference towards a primary attachment figure or caregiver primarily evolves from the provision of support and sensitivity during social interactions, especially in threatening contexts. Accordingly, the mere provision of food or relief from discomfort does not appreciably affect this preference (Bowlby, 1958).
Sixth, these experiences with caregivers, over time, coalesce to shape the thoughts, beliefs, expectations, emotions, memories, and behaviors about the self and about other individuals, called internal working models of social relationships. These internal working models guide the social interactions of individuals (Bowlby, 1973), facilitating the formation of friendships, marriage, parental behaviors, and so forth. For example, the knowledge that younger children should be treated differently from older children is guided by these working models. Similarly, the recognition that both teachers and parents can provide support also represents a manifestation of these working models. Children who perceive themselves as worthy of support and their caregivers as receptive to offering such assistance are more likely to assume these attachment figures will be responsive to their needs.
Seventh, persistent separation from a familiar caregiver, or continuous changes in who the primary caregiver is, can preclude the formation of adaptive attachment behaviors. These disruptions can manifest as problems later in life (Bowlby, 1958).
From these early experiences with caregivers, individuals develop perceptions of themselves and expectations about the support they will receive. These perceptions and expectations are represented as schemas, or internal working models, which govern their behavior (e.g., Hazan & Shaver, 1994).
According to Bartholomew and Horowitz (1991), the internal working model comprises two main facets: a model of the self and a model of others. The model of self refers to whether or not individuals perceive themselves as worthy of love or support from attachment figures. If the activities of individuals are often interrupted by a caregiver, implying their behavior was unsuitable, they might develop the belief they are not worthy of approval. The model of others refers to whether individuals perceive caregivers and other figures in their life as available and supportive or unreliable and rejecting.
As Bartholomew and Horowitz (1991) contend, these two internal working models imply that individuals can adopt one of four, not three, attachment styles, depending on whether the self and the other are regarded positively or negatively. Secure individuals, both children and adults, perceive both themselves and other figures positively. That is, they perceive themselves as worthy of love and approval, raising their self-esteem (Collins & Read, 1990; Feeney & Noller, 1990), and they regard other individuals as available and trustworthy. Ambivalent individuals perceive themselves negatively but other figures positively, which diminishes their self-esteem but increases the likelihood that they will seek support from relatives, friends, and colleagues (Collins & Read, 1990; Feeney & Noller, 1990), often inciting obsession and preoccupation with relationships (Hazan & Shaver, 1987).
Two avoidant styles can be differentiated. Dismissing-avoidant individuals perceive themselves positively but other figures negatively. That is, because they regard other individuals as unavailable and unsupportive, they do not seek close relationships. Fearful-avoidant individuals, however, perceive both themselves and other figures negatively. They might feel an urge to seek proximity, but remain detached to protect their emotions.
In an extensive series of studies (Baldwin, 1992, 1997; Baldwin, Keelan, Fehr, Enns, & Koh-Rangarajoo, 1996; Baldwin & Meunier, 1999), Baldwin and colleagues applied the concept of relational schemas to characterize the properties and consequences of internal working models. In particular, for any regular pattern of interactions, such as a marital conflict about parenthood, individuals form a schema that represents information about themselves, their partner, and the usual dynamics of this event.
Suppose, for example, a woman often asks her husband to become more involved in the role of parenting, but the husband never seems to modify his behavior. The woman will form a relational schema that includes information about herself, such as I need assistance, about her partner, such as He is unreliable, and about how events unfold in response to specific occurrences, such as "If I act emotionally, he will agree but not fulfill his promise".
According to Baldwin and colleagues, internal working models comprise these relational schemas. Therefore, in addition to information about the self and others, these models entail information about how interactions unfold. These expectations about how interactions unfold, for example, also seem to vary across attachment styles. To illustrate, in a study conducted by Baldwin, Fehr, Keedian, Seidel, and Thompson (1993), participants were asked to imagine the scenario in which they express their deep feelings towards their partner. They were then asked to indicate whether the partner would be accepting or rejecting. Relative to the other attachment styles, individuals who reported a secure attachment style were more likely to anticipate their partner would be accepting.
These relational schemas are also organized hierarchically (e.g., see Baldwin, 1992). That is, individuals form general working models that impinge on all relationships. Nevertheless, the working model varies across the various classes of relationships, such as romantic relationships, work relationships, and so forth. Furthermore, even within each class, the working model varies across every person with whom the individuals have formed a relationship, as well as the contexts in which these interactions unfold (for a similar perspective, see Collins & Read, 1994; Overall, Fletcher, & Friesen, 2003; Pietromonaco & Barrett, 2000; Trinke & Bartholomew, 1997).
According to this proposition that relational schemas are organized hierarchically, attachment style should vary slightly across relationships. This proposition has indeed been supported (e.g., La Guardia, Ryan, Couchman, & Deci, 2000; Trinke & Bartholomew, 1997).
Secure working models were assumed to be represented as declarative or explicit knowledge (Mikulincer & Shaver, 2004; Shaver & Mikulincer, 2007). Nevertheless, internal working models might also comprise procedural knowledge, such as how to manage negative emotions in the context of relationships (e.g., Bretherton, 1987, 1990; Waters, Rodrigues, & Ridgeway, 1998).
In other words, the working model might entail schemas, models, or scripts that underpin interactions with other individuals, intended to regulate emotions (e.g., Mikulincer, Shaver, Sapir-Lavid, & Avihou-Kanza, 2009; Waters & Waters, 2006). According to Waters and Waters (2006), the script comprises three facets: seeking support from an attachment figure when distressed, the availability and responsiveness of that figure, and the subsequent relief of distress.
To illustrate, a prototypical script representing a secure attachment might be "If a difficulty arises, I can approach my partner, who will be available and supportive as well as curb my distress". These scripts enhance the emotional regulation of individuals.
The prompt word outline method has been used to assess these secure scripts (e.g., Waters & Hou, 1987; Waters, Rodrigues, & Ridgeway, 1998; Waters & Waters, 2006). Specifically, participants receive a story title, such as "Baby's Morning", and then 12 to 14 prompts. The first few prompts indicate the actors, such as baby and mother. The next few prompts relate to a key context, disruption, and resolution, such as play, blanket, hug, smile, story, pretend, teddy bear, lost, found, and nap (Waters & Waters, 2006). The participants use these words to construct a story. Judges receive training to ascertain the extent to which the narrative refers to the key features of secure scripts. Past research indicates that individuals who exhibit a secure attachment, as gauged by the Adult Attachment Interview (see Hesse, 2008) or Experiences in Close Relationships questionnaire, also present narratives that refer to facets of a secure script (e.g., Coppola, Vaughn, Cassibba, & Constantini, 2006; Dykas, Woodhouse, Cassidy, & Waters, 2006).
Mikulincer, Shaver, Sapir-Lavid, and Avihou-Kanza (2009) developed another measure of these scripts in an adult sample. Participants were asked to write stories to describe the events in two sequences of three pictures, corresponding to a hospital and work environment respectively. In each sequence, the first picture depicted a person in distress. The second picture showed someone offering assistance. The third picture showed the person, who had been originally distressed, seemingly feeling better. Hence, each picture corresponded to one of the key facets of secure scripts.
Independent judges then rated the extent to which these narratives alluded to these characteristics of secure scripts. Participants also wrote about neutral pictures and completed other measures to preclude alternative explanations. Participants who exhibited a secure attachment style, as gauged by the Experiences in Close Relationships scale, tended to allude to the elements that epitomize a secure script.
In a second study, conducted by Mikulincer, Shaver, Sapir-Lavid, and Avihou-Kanza (2009), participants received only the first picture of the hospital sequence and were encouraged to derive their narrative from this scene only. Individuals who reported an anxious attachment style seldom referred to the relief that should ensue when attachment figures are sought--consistent with their anticipation of rejection. Individuals who reported an avoidant attachment style often alluded to a sense of relief even if the protagonist did not seek support. A similar pattern of findings was observed even after extraversion and neuroticism were controlled.
In a fourth study, Mikulincer, Shaver, Sapir-Lavid, and Avihou-Kanza (2009) showed how these secure scripts manifest in dreams. After awakening, participants reported their dreams, each morning over a month. Using content analysis, judges first uncovered the dreams in which the person experienced distress. Next, they evaluated the degree to which these dreams allude to the features of a secure script: seeking support, availability of support, and distress relief.
Individuals who reported an avoidant attachment seldom dreamt about attempts to seek support. Individuals who reported an anxious attachment were somewhat less inclined to dream about relief after distress.
The fifth study, reported by Mikulincer, Shaver, Sapir-Lavid, and Avihou-Kanza (2009), substantiated that such scripts demonstrate another feature of scripts in general--the generation of inferences that are consistent with secure schemas (cf., Markus, Smith, & Moreland, 1985). In particular, participants read a story that entailed the key facets of a secure script, in which an athlete was injured and then received a visit from a romantic partner. Information that was unrelated to attachment, such as the ambitions of this athlete, was also included.
Next, participants were asked to recall the principal facts as well as share their inferences, feelings, and opinions about the anecdote. Participants who had reported an anxious or avoidant attachment style were less likely than participants who had reported a secure attachment style to allude to inferences that relate to a secure script.
In a subsequent study, Mikulincer, Shaver, Sapir-Lavid, and Avihou-Kanza (2009) also showed that the retrieval of scripts that entail secure facets is relatively automatic and effortless, but only in participants who report a secure attachment style. That is, if participants reported a secure attachment, they could generate inferences about a secure script--even if their attention was distracted by the need to suppress thoughts of a white bear for six minutes.
According to social defense theory (e.g., Ein-Dor, Mikulincer, & Shaver, 2011; see also Ein-Dor, Mikulincer, Doron, & Shaver, 2010), if individuals have developed an anxious attachment style, a particular schema or script governs their behavior in threatening situations, largely because of inconsistent attachment figures. This script comprises several key features. First, in unfamiliar or ambiguous settings, these individuals tend to remain especially vigilant to threats. Second, these individuals react rapidly, and often prematurely, to signs of threat, such as unusual noises. Third, they alert other people to imminent danger. Fourth, if they do not receive the support they seek, they intensify their efforts to attract this assistance. Finally, these individuals strive to become as close to other people as possible in threatening situations. These features are called a sentinel schema, because these individuals alert other people.
Ein-Dor, Mikulincer, and Shaver (2011) undertook a series of studies that establish this sentinel schema. In one study, participants wrote a story about a picture that depicted a group of people in a threatening situation. In addition, they completed a measure of attachment style. Independent judges evaluated the extent to which the participants referred to the features of a sentinel schema, such as sensitivity to ambiguous signs of threat and danger as well as warning other people about these hazards. If participants reported an anxious attachment style, they were more likely to allude to the features of a sentinel schema. This association was observed even when personality, social desirability bias, and verbal ability were controlled.
The second study showed that anxious attachment increases the likelihood that individuals will remember information that aligns to a sentinel schema. Participants observed a woman answer questions about threatening events as well as neutral events. Participants were more likely to recognize correctly and rapidly answers that epitomize a sentinel schema, such as "I would be very scared. I would scream with fear, turn around, and yell to warn the others". They were not as likely to recognize correctly more composed responses, including "I would turn around and look where the roar was coming from. I am usually calm in these situations. I act according to what I see".
A further study revealed that anxious attachment augments the probability that individuals will process information that aligns to a sentinel schema more extensively and comprehensively. That is, participants were exposed to a story that epitomizes a sentinel schema. They were instructed to present the actual facts of this story and then to share their impressions. Individuals with an anxious attachment tended to extrapolate more extensively, inferring more insights about the thoughts, feelings, and traits of these protagonists. Presumably, their sentinel schema facilitated the processing of schema-congruent information.
The final study examined whether or not anxious attachment actually evokes behaviors that correspond to this sentinel schema. That is, participants were exposed to a threatening situation: a room that was replete with smoke because of a malfunctioning computer. As predicted, if individuals reported an anxious attachment, they were more likely to detect the presence of smoke, after controlling neuroticism and extraversion.
As social defense theory indicates (e.g., Ein-Dor, Mikulincer, & Shaver, 2011), if individuals demonstrate an avoidant attachment style, a particular schema or script guides their behavior in threatening situations, primarily as a consequence of negligent or disapproving attachment figures. This script comprises several key features. First, because these individuals want to deny their vulnerability or reliance on other people, they tend to trivialize threats. Second, when dangers or threats are unambiguous, they respond rapidly, attempting to protect themselves immediately, either by fleeing or conquering this hazard. Finally, they do not tend to coordinate their efforts with anyone else. This schema is called the rapid fight-flight schema.
Ein-Dor, Mikulincer, and Shaver (2011) conducted a set of studies that establish this rapid fight-flight schema. In one study, participants wrote a story about a picture that depicted a group of people in a threatening situation. Furthermore, they completed a measure of attachment style. Independent judges rated the degree to which the participants referred to the features of a rapid fight-flight schema, such as fleeing the situation without assisting anyone else as well as failing to coordinate, cooperate, or deliberate with anyone else. If participants reported an avoidant attachment style, they were especially likely to allude to the features of this schema. This association was observed even when personality, social desirability bias, and verbal ability were controlled.
The second study revealed that an avoidant attachment increases the probability that individuals will remember information that aligns to this rapid fight-flight schema. The participants watched a woman answer questions about threatening events and neutral events. Participants were more likely to recognize correctly and rapidly answers that epitomize a rapid fight-flight schema, such as "I deal with the threat by myself. I do not trust others to do the job". They were not as likely to recognize correctly more composed responses, including "I see what others are doing and act accordingly".
A further study showed that avoidant attachment augments the likelihood that individuals will process information that aligns to a rapid fight-flight schema more extensively and comprehensively. That is, participants were exposed to a story that epitomizes a rapid fight-flight schema. They were instructed to present the actual facts of this story and then to share their impressions. Individuals with an avoidant attachment tended to extrapolate more extensively, inferring more insights about the thoughts, feelings, and traits of these protagonists.
Some researchers argue the categorical classification is too restrictive, prohibiting an exploration of gradations in attachment style (e.g., Simpson & Rholes, 1998; Simpson, Rholes, & Nelligan, 1992). Researchers, therefore, have developed continuous scales to differentiate attachment styles. Brennan, Clark, and Shaver (1998), for example, characterized two continuous dimensions of attachment style that differentiate individuals. The first facet, referred to as anxious or ambivalent attachment, relates to the extent to which individuals are concerned that protective figures might not be available or supportive when threats or adversities impend and, therefore, strive to maintain proximity. They do not perceive themselves as able to resolve issues alone and, hence, develop unfavorable attitudes towards themselves (Bartholomew & Horowitz, 1991), showing elevated levels of neuroticism (Erez, Mikulincer, van Ijzendoorn, & Kroonenberg, 2008).
The second facet, designated as avoidant attachment, relates to the degree to which individuals assume that protective figures are untrustworthy and hostile, provoking a yearning for independence and discomfort with intimacy as well as unfavorable attitudes towards other people (Bartholomew & Horowitz, 1991) and corresponding to lower levels of agreeableness and extraversion (Erez, Mikulincer, van Ijzendoorn, & Kroonenberg, 2008).
This yearning for independence could encourage self enhancement, in which individuals overestimate their capacity to withstand threats and challenges. To illustrate, Bekker, Bachrach, and Croon (2007) found that university students who report an avoidant attachment style also felt they could readily accommodate unexpected changes and novel contexts.
Individuals who demonstrate neither anxious nor avoidant attachment are designated as secure. Because these individuals assume that protective figures will be supportive, they experience an enduring sense of security, as discussed by Mikulincer and Shaver (2003, 2007). In contrast, individuals who report anxious attachment seek proximity, support, and approval excessively, especially when stressed, perceiving themselves as unable to resolve issues alone. Individuals who report avoidant attachment, however, reject friends, family, and colleagues, preferring to address problems alone.
Hence, attachment style affects the formation and maintenance of relationships. Relative to their insecure counterparts, individuals who report secure attachment demonstrate more commitment, trust, and satisfaction in romantic relationships (Simpson, 1990; Simpson, Rholes, & Phillips, 1996). They can also overcome problems in relationships more effectively (Blustein, Prezioso, & Schultheiss, 1995; Lopez, 1996).
In short, attachment style does predict satisfaction in relationships as well as duration. A secure attachment style, for example, tends to be associated with more satisfaction with relationships (e.g., Brennan & Shaver, 1995; Feeney, 1994; Keelan, Dion, & Dion, 1998).
Several mechanisms might underpin this association. First, a secure attachment style seems to promote more intimate disclosure, which in turn can facilitate the formation of trusting, committed, and satisfying relationships (Feeney, 1994; Keelan, Dion, & Dion, 1998). Second, a secure attachment style might promote the expression of more positive, suitable, and adaptive emotions, rather than negative affective states, which also facilitates relationship satisfaction (Davila, Bradbury, & Fincham, 1998; Feeney, 1999). Third, individuals with a secure attachment style are more likely to interpret the behavior of their partner as supportive, which enhances relationship satisfaction (Cobb, Davila, & Bradbury, 2001; Meyers & Landsberger, 2002).
Secure attachment style also coincides with more enduring relationships. That is, this attachment style might coincide with the expression of commitment and the experience of satisfaction, both of which can affect the longevity of relationships (Duemmler & Kobak, 2001; Simpson, 1990). Kirkpatrick and Davis (1994), however, showed that individuals who report an anxious-preoccupied attachment style often experienced enduring, but unhappy, relationships.
Attachment style also impinges on the experience and expression of intimacy (e.g., Collins & Feeney, 2004). Intimacy, as defined by Collins and Feeney (2004), encompasses the willingness of individuals to disclose private thoughts, feelings, hopes, and concerns, to seek the emotional support and care of partners, and to engage in physical affection. In general, secure attachment style corresponds to elevated levels of intimacy on these three facets.
Nevertheless, if sexual intercourse is frequent, the relationship between insecure attachment and marital dissatisfaction tends to subside (Little, McNulty, & Russell, 2009). That is, if sex is satisfactory, the detrimental impact of insecure attachment dissipates. Presumably, sex facilitates the intimacy that is needed to inhibit, rather than prime, the attachment system of these insecurely attached individuals.
Mikulincer, Shaver, Bar-on, and Ein-Dor (2010) showed that anxious attachment coincides with ambivalent attitudes towards relationships. These researchers utilized a range of measures, including both explicit and implicit techniques, to gauge the attitudes of individuals towards their romantic partners.
To illustrate, in one task, a series of words was presented. In one set of trials, participants needed to press the lever forward, a movement associated with avoidance, whenever they recognized a word (see approach and avoidance motivation). In another set of trials, participants needed to press the lever towards themselves, a movement associated with approach, whenever they recognized a word.
Participants who had previously reported anxious attachment exhibited a fascinating pattern of results. Whenever they needed to press the lever forward, corresponding to avoidance, they responded especially quickly to negative words that were related to closeness in relationships, such as "intrude". Whenever they needed to press the lever towards themselves, corresponding to approach, they responded especially quickly to positive words that were related to closeness in relationships, such as "hug". Therefore, anxious attachment seems to coincide with a motivation to both approach and avoid relationships, epitomizing ambivalence. These findings persisted even after controlling ambivalence to other matters, such as euthanasia, as well as need for cognition.
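One way to make this ambivalence pattern concrete is to compute mean reaction times by lever direction and word type and compare them within a person. The sketch below is a minimal illustration of that scoring logic only; the trial data, condition labels, and numbers are hypothetical and are not taken from Mikulincer, Shaver, Bar-on, and Ein-Dor (2010).

```python
from statistics import mean

# Hypothetical trials: (lever movement, word type, reaction time in ms)
trials = [
    ("avoid",    "negative_closeness", 480),   # e.g., a word like "intrude"
    ("avoid",    "negative_closeness", 470),
    ("avoid",    "positive_closeness", 560),
    ("approach", "positive_closeness", 465),   # e.g., a word like "hug"
    ("approach", "positive_closeness", 475),
    ("approach", "negative_closeness", 555),
]

def mean_rt(movement, word_type):
    """Mean reaction time for one combination of lever direction and word type."""
    return mean(rt for m, w, rt in trials if m == movement and w == word_type)

# Fast avoidance of negative closeness words AND fast approach to positive
# closeness words, within the same person, is the signature of ambivalence
# toward closeness described in the text.
print("avoid / negative-closeness:", mean_rt("avoid", "negative_closeness"))
print("approach / positive-closeness:", mean_rt("approach", "positive_closeness"))
```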
Participants who reported an avoidant attachment also demonstrated hints of ambivalence as well. They exhibited positive and negative attitudes--that is, approach and avoidance--towards words that relate to distance in relationships, such as lonely and independence.
As Bowlby (1982) highlighted, a caregiving system most likely evolved in parallel to an attachment system. That is, the tendency of children to seek proximity to parents, called the attachment system, is unlikely to be effective unless parents feel motivated to offer protection, called the caregiver system. When this system is activated, individuals experience positive emotions after they provide support and protection: they feel a sense of accomplishment, kindness, and morality.
These two systems may sometimes interact with each other. For example, when adults experience a sense of threat, the attachment system may be unduly activated. They may seek immediate support from someone else. Consequently, the caregiver system may be deactivated. These individuals may not offer support when appropriate (Mikulincer, Shaver, Gillath, & Nitzberg, 2005).
Reizer, Ein-Dor, and Shaver (2014) uncovered another context in which the attachment system and caregiver system interact. According to these researchers, when individuals experience an avoidant attachment style, they strive to avoid intimacy. Accordingly, they are sometimes reluctant to offer care, because such acts may foster intimacy--a state these individuals shun. In addition, this support may not be reciprocated, culminating in disappointment--a feeling to which these individuals are particularly sensitive. Consequently, their caregiver system may be deactivated. Their provision of care does not elicit positive emotions and, therefore, does not enhance relationship satisfaction.
Reizer, Ein-Dor, and Shaver (2014) conducted three studies to validate these premises. In one study, individuals answered questions that assess their attachment style, their caregiving style, and satisfaction with their relationship. To gauge attachment style, participants completed the Experiences of Close Relationships scale. To gauge caregiving style, participants responded to questions that revolve around the deactivating caregiver style, such as "Thinking about helping others does not excite me very much", and the hyperactivating caregiver style, such as "When I am unable to help a person who is in distress, I feel worthless". While answering these questions, participants considered a situation in which someone else needed help and assistance.
As hypothesized, if participants reported low levels of avoidant attachment, caregiving deactivation was negatively associated with relationship satisfaction. Presumably, these individuals feel more satisfied in relationships after they offer support. Caregiving instills positive emotions, and these emotions are projected onto the relationship. In contrast, if participants reported high levels of avoidant attachment, caregiving deactivation was not associated with relationship satisfaction.
Paulssen (2009) showed that attachment style with romantic partners can also translate to the business environment. Specifically, attachment style with romantic partners was related to trust, satisfaction, and loyalty in business relationships. In other words, working models of attachment, as manifested in romantic relationships, also affect perceptions of business partners.
If people experience a secure attachment, they are more likely to have acquired extensive experience in leadership positions (Popper & Amit, 2009). Specifically, whenever individuals report a secure attachment, they feel that somebody will offer assistance in response to problems; their anxiety, therefore, tends to diminish quickly. Consequently, they embrace risks and assume initiative, increasing their willingness to become a leader.
In addition, because of their trusting relationships, people who report a secure attachment do not feel the need to monitor their friendships vigilantly. Instead, they can explore future possibilities, again facilitating initiative and leadership. Consistent with these possibilities, secure attachment to partners or friends has been shown to be positively associated with leadership experience--and this association is mediated by both limited anxiety and openness to experience (Popper & Amit, 2009).
If managers experience an insecure attachment style, their subordinates are more likely to report burnout and dissatisfaction at work. This possibility was verified by Ronen and Mikulincer (2013). In this study, managers answered questions that gauge the degree to which they experience an anxious or avoidant attachment. They also answered questions that measure their caring orientation--specifically, whether they avoid caring for other people (e.g., "Thinking about helping others doesn't excite me very much") and whether they feel an intense need to care for other people (e.g., "When I'm unable to help a person who is in distress, I feel worthless"). These two caring orientations are called deactivated and hyperactivated caring respectively. Finally, the subordinates of these managers indicated the degree to which they feel exhausted or dissatisfied at work.
If managers reported elevated levels of anxious attachment, their subordinates were more likely to experience the symptoms of burnout. Hyperactivated caring mediated this relationship. Presumably, when managers experience anxious attachment, they feel the intense need to help their subordinates. They may, therefore, become intrusive, interfering with the autonomy of employees. That is, these intrusions impede the capacity of employees to engross themselves in their work. Instead, these employees tend to be vigilant, culminating in burnout and dissatisfaction.
When individuals feel stressed or threatened, the attachment system is activated, eliciting behaviors that promote safety and, usually, interfering with other motivations, such as the desire to improve the welfare of other people. Because individuals who report neither an anxious nor avoidant attachment style experience an enduring sense of security, their stress tends to diminish rapidly and hence the likelihood they engage in altruistic and prosocial behavior rises.
In contrast, as Mikulincer and Shaver (2003, 2007) maintain, because individuals with an anxious attachment style feel vulnerable--unable to resolve issues alone--they focus on their own distress, which compromises their capacity to experience empathy towards other people. Furthermore, because individuals with an avoidant attachment style shun intimacy, they separate themselves from the emotions of other people, which also curbs their capacity to experience empathy--ultimately impeding altruistic acts (Batson, 1991).
Gillath, Shaver, Mikulincer, Nitzberg, Erez, and van IJzendoorn (2005) demonstrated that avoidant attachment is inversely related to the frequency of philanthropic activities that individuals undertake--consistent with the proposition that such avoidance fosters distancing, which hinders empathy. Anxious attachment was not related to the frequency of philanthropic activities, but was associated with egocentric motivations to engage in this behavior, which aligns with the self focus these individuals are purported to exhibit. Erez, Mikulincer, van Ijzendoorn, and Kroonenberg (2008) extended these findings, revealing that individuals with anxious attachment will volunteer only when these acts improve their own self esteem, career prospects, social networks, or wellbeing, not because these behaviors align with their values.
Some studies have shown that parental availability fosters cooperative tendencies even in young children. Volling, McElwain, Notaro, and Herrera (2002), for example, monitored the reactions of infants to aversive parental behaviors. Interactions between parents and their infants were recorded on two separate occasions--when the infant was 12 and 16 months old respectively. Researchers coded the behaviors of infants and parents, assessing the availability of parents, as well as whether the infant cooperated with the caregiver. The availability of parents when the infant was 12 months old was positively related to the cooperation these infants demonstrated with their parents 4 months later.
A secure attachment style is also related to both self-actualization and self-transcendence. Self-actualization refers to the extent to which people are able to pursue their calling. That is, when self-actualization is high, individuals actualize their potential or passion and, consequently, feel authentic and absorbed in their endeavors. Self-transcendence refers to the tendency of some individuals to identify with a cause, such as social justice, or with a force, such as truth, that exceeds their own personal needs.
Otway and Carnelley (2013) showed that a secure attachment style is positively associated with both self-actualization and self-transcendence. In this study, participants completed the Experiences in Close Relationships scale to measure attachment style. They also completed a series of measures to gauge self-actualization (e.g., "I am motivated to achieve my full potential") and self-transcendence (e.g., "I identify with something that transcends or extends beyond myself"). The participants also answered questions that assess the degree to which they perceive themselves as likeable and competent.
Both anxious and avoidant attachment were inversely related to self-actualization, and avoidant attachment was inversely related to self-transcendence. These relationships were partly mediated by limited levels of self-liking or self-competence. Presumably, if people experience anxious attachment, they are too preoccupied with reinforcing relationships to actualize their potential. Likewise, if people experience avoidant attachment, they strive to suppress their concerns about relationships, potentially distracting themselves from personal aspirations; furthermore, their unfavorable perspective of other people may limit self-transcendence.
As Gillath, Sesko, Shaver, and Chan (2010) showed, attachment style also affects the extent to which individuals experience a sense of authenticity and behave honestly. In particular, if individuals report an anxious attachment style, they often strive to comply with the needs of other people. They will, therefore, sometimes conceal their natural inclinations, sometimes manifesting as insincerity and other defensive responses. In addition, if individuals report an avoidant attachment style, they often strive to detach themselves from dependent relationships. They might, thus, lie to maintain this independence.
Gillath, Sesko, Shaver, and Chan (2010) undertook a series of eight studies to verify these propositions. For example, in the first study, participants completed questionnaires that assess attachment style and authenticity. Some of the facets to gauge authenticity included accurate awareness, such as "For better or worse, I am aware of who I truly am", and authentic behavior, such as "I frequently pretend to enjoy something when in actuality I don't" (reverse scored). Insecure attachment styles, especially avoidant attachment, were inversely related to authenticity.
In a subsequent study, attachment style was primed rather than measured. That is, participants undertook a task in which they needed to judge the extent to which two pieces of furniture were similar. Embedded within this task were subliminal presentations of the word "love", to evoke a secure style, or "table", representing a control. Next, participants were granted an opportunity to list their positive traits, negative traits, and previous occasions in which they behaved shamefully. Exposure to the word love, evoking a secure style, increased the likelihood that individuals would concede their negative traits or shameful acts in the past. Secure attachment, thus, seems to be associated with a form of candor or honesty. In a separate study, reminiscing about a time when a close friend or partner was available, supportive, and loving also fostered this openness.
Attachment style affects not only the level of support and altruism that individuals enact, but also the extent to which they perceive other figures in their life as supportive or cooperative. Collins and Read (1990), for example, examined romantic couples in which one member from each pair was instructed to present a speech. The other member prepared one of two letters: a supportive message or an unsupportive message. This message was distributed to their partner before the speech was presented. Speakers who reported high, rather than low, levels of avoidant attachment were more inclined to perceive the message as unsupportive, upsetting, inconsiderate, and hostile.
Individuals who report an anxious attachment anticipate rejection and perceive themselves as unworthy of love and support (Collins & Read, 1990, 1994; Main et al., 1985). Accordingly, these individuals tend to rate faces as less friendly than do secure individuals (Meyer, Pilkonis, & Beevers, 2004).
According to Bowlby (1969), three sets of events can provoke anxiety or alarm in children. The first set of events revolves around the child and includes discomfort, illness, hunger, and fatigue. The second set of events revolves around the caregivers--such as absent, inattentive, departing, or rejecting parents. The third set of events relates to changes in the environment, such as criticisms or exclusion from friends. To regulate these adverse emotions, children are motivated to engage in behaviors that garner proximity and support from their caregivers.
Many studies have shown that secure attachment relates to emotional regulation in children. If individuals exhibit a secure attachment, they report less anxiety and frustration. They also apply more constructive practices to regulate their emotions (for reviews, see Thompson, 2008; Weinfield, Sroufe, Egeland, & Carlson, 2008).
Since the 1980s, a similar perspective has been applied to understand emotional regulation in relationships between romantic partners--rather than merely between parents and children. That is, in short, the same three classes of events can provoke anxiety or alarm in adults, and these feelings can promote behaviors that garner proximity and support from their partners.
For example, Mikulincer, Shaver and Pereg (2003) differentiated three classes of strategies that individuals can apply to seek support from their romantic partners. The first class of strategies assumes the relationship is secure. In these instances, individuals seek, and then receive, proximity and support from their partners when anxious. The support they receive reinforces their sense of security and enables individuals to then return to their ongoing pursuits. Individuals develop positive expectations about the availability and support of partners, family, or friends, which can foster more adaptive and creative responses to future threats.
The second class of strategies epitomizes avoidant attachment. In these instances, when individuals seek proximity and support from their partners, their needs remain unfulfilled. That is, their partner is unavailable or unsupportive. Over time, individuals thus learn to suppress their anxiety and detach themselves emotionally from their partner. In other words, they deactivate their attachment system (cf. Mikulincer & Shaver, 2007).
The third class of strategies epitomizes the concept of hyperactivation or anxious attachment. In these instances, when individuals seek proximity and support from their partners, again their needs remain unfulfilled. That is, the partner rejects this summons to be close and supportive. However, because their anxiety merely amplifies, these individuals attempt to magnify their pursuit of closeness and support, and thus seek reassurance excessively (Shaver, Schachner, & Mikulincer, 2005), which merely exacerbates the rejection from their partner.
When individuals who are anxiously attached experience threats to their relationship, they orient almost all of their attention to the attachment figure, to restore stability (Crisp, Farrow, Rosenthal, Walsh, Blisset, & Penn, 2009). Hence, their sense of identity with other friendship groups actually declines. In contrast, when individuals who are not anxiously attached experience threats to their relationship, they orient some of their attention to other potential sources of social support, because these friendships or collectives can also provide many benefits to wellbeing (Crisp, Farrow, Rosenthal, Walsh, Blisset, & Penn, 2009). Their sense of identity with friendship circles increases. Crisp, Farrow, Rosenthal, Walsh, Blisset, and Penn (2009) substantiated these propositions, using both explicit and implicit measures of group identification.
Individuals who report anxious attachment seek support. As a consequence, they often inflate their distress to elicit support from attachment figures (Cassidy, 2000). In contrast, individuals who report avoidant attachment prefer to perceive themselves as independent, to ensure they do not need to rely on support. Accordingly, they maintain distance and do not rely on other individuals (Cassidy, 2000).
Attachment style also affects sensitivity to threats. For example, individuals who report an anxious attachment style are especially sensitive to unfavorable evaluations from other individuals. As Srivastava and Beer (2005) showed, individuals with an anxious attachment were very sensitive to the evaluations of their partners. That is, they rated themselves negatively especially if they were evaluated harshly by their partners.
Experimental studies have also emphasized the role of attachment style in emotional regulation. When individuals reflect upon a supportive attachment figure, called security priming (Mikulincer & Shaver, 2007), their mood and wellbeing tend to improve; furthermore, their attitudes and behaviors towards other individuals become more favorable (see Mikulincer, Hirschberger, Nachmias, & Gillath, 2001; Mikulincer & Shaver, 2001; Mikulincer, Shaver, & Horesh, 2006).
Thoughts about attachment figures, such as a supportive mother, have been shown to alleviate negative emotions that were evoked by stressful events. Selcuk, Zayas, Gunaydin, Hazan, and Kross (2012) showed that thoughts about attachment figures also alleviate negative emotions that were evoked by upsetting memories.
For example, in one study, participants vividly wrote about two upsetting events in their life. They also identified a couple of words that may cue or evoke each memory later. A day or so later, participants were exposed to these memory cues and asked to reflect upon the corresponding upsetting experience as deeply and vividly as possible. Either before or after this memory was evoked, participants were asked to remember a supportive interaction with their mother or with a distant acquaintance. Their emotions were assessed at various times during this session.
Unsurprisingly, upsetting memories evoked negative emotions. However, if participants had recalled an interaction with their mother after this memory had been evoked, these negative emotions abated rapidly. Yet, if participants had recalled an interaction with their mother before this memory had been evoked, these negative emotions did not abate as rapidly. Furthermore, recollections of a supportive interaction with an acquaintance did not alleviate these negative emotions. Thoughts about a mother, therefore, can alleviate, but not prevent, these negative emotions.
Three other studies extended and clarified these findings, rejecting alternative explanations. For example, the same pattern of results was observed even if participants were exposed only to a photograph of their mother and even if mood was assessed implicitly, with a technique called the IPANAT (see implicit measures of mood and emotions). Similarly, the same pattern of results was observed even if participants reflected upon their romantic partner instead of their mother. Finally, these effects were not as pronounced if participants reported an avoidant attachment.
When individuals experience a secure attachment, their physiological response to stress declines rapidly. In particular, compared to individuals with an insecure attachment style, individuals with a secure attachment style exhibit a pronounced increase in physiological indices of stress in response to threatening events: for instance, their pupils dilate abruptly and substantially. However, in these individuals, these physiological indices dissipate more rapidly as well.
This pattern was observed in a study conducted by Borelli, Crowley, David, Sbarra, Anderson, and Mayers (2010). In this study, participants were children, aged between 8 and 12. First, the child attachment interview was conducted to establish the attachment style of these individuals. They received 19 questions about their experiences with caregivers. The answers were coded on eight nine-point scales, such as idealization, preoccupying anger, and balance of positive and negative references. The children were then divided into four categories: secure, dismissing, preoccupied, and disorganized.
In addition, individuals were exposed to a series of pictures, while auditory probes were presented occasionally. Some of these pictures coincided with a puff of air to the neck, representing a subtle threat or discomfort. Changes in pupil size, in response to each auditory probe, were assessed. Finally, levels of cortisol were assessed before and after the interviews and before and after the probe task. In addition, an explicit measure of mood was administered as well.
Relative to the other children, the children who demonstrated a secure attachment style exhibited a larger change in pupil size--that is, a more pronounced startle reflex--in response to auditory tones that coincided with the threatening pictures. That is, if the photograph sometimes coincided with a puff of air, the auditory tone elicited an elevated startle, particularly in secure children. Nevertheless, pupil size diminished more rapidly in these children as well.
Presumably, secure children feel that caregivers will resolve stressful events. Hence, they are more receptive to these stressful events, but also experience a sense of safety after a fleeting delay (Borelli, Crowley, David, Sbarra, Anderson, & Mayers, 2010). These findings are consistent with the discovery that secure children, during social interactions, exhibit less frustration and aggression (Sroufe, Schork, Motti, Lawroski, & LaFreniere, 1984).
Anxious attachment might compromise the immune system. For example, in one study, conducted by Jaremka, Glaser, Loving, Malarkey, Stowell, and Kiecolt-Glaser (2013), married couples completed the Experiences in Close Relationships, a measure of attachment style. Saliva samples were collected several times across three days, primarily to measure cortisol levels. Blood samples were collected twice to measure cellular immune markers. When anxious attachment was elevated, cortisol levels tended to be elevated. These elevated levels of cortisol were inversely associated with the number of CD3+, CD45+, CD3+CD4+ helper, and CD3+CD8+ cytotoxic T cells.
As these results show, if individuals experience anxious attachment, they are more inclined to perceive events as threatening. Consequently, they often experience stress. This stress seems to diminish the activation of T cells. These T cells are integral to many facets of the immune response. When the number of T cells is limited, the body cannot as readily defend against pathogens.
In particular, T cells, like B cells and Natural Killer cells, are a subset of white blood cells called lymphocytes. These T cells mature in an organ, near the top of the rib cage, called the thymus. T helper cells facilitate the maturation of B cells and also activate cytotoxic T cells and macrophages. Cytotoxic T cells eliminate cells that are infected by viruses.
Wei, Liao, Ku, and Shaffer (2011) examined the mechanisms that underpin the relationship between attachment style and wellbeing. They found that self compassion, in which individuals maintain a caring and compassionate attitude towards themselves particularly during challenging times, mediates the association between anxious attachment and indices of wellbeing. In contrast, limited empathy towards other people mediates the association between avoidant attachment and wellbeing.
Specifically, if parents are unpredictable, their children become vigilant, always striving to monitor and modify their behavior to avoid punishment, manifesting as anxious attachment. When punished, they assume they had not acted suitably, and thus attempt to be more vigilant in the future. They will, therefore, blame difficulties on their own shortfalls, culminating in self castigation instead of self compassion (Wei, Liao, Ku, & Shaffer, 2011). As this self compassion dissipates, individuals do not buttress their emotions with tender and kind words to themselves. Their mood and wellbeing thus diminish.
In contrast, if parents are entirely unsupportive, their children learn to become independent, curbing their reliance on other people, manifesting as avoidant attachment. They do not, therefore, attempt to modify their behavior to accommodate other people. Hence, they are not as likely to monitor the intentions and emotions of other individuals, curbing empathy (Wei, Liao, Ku, & Shaffer, 2011). As empathy diminishes, the capacity of individuals to respond suitably in social situations diminishes, and wellbeing declines.
Wei, Liao, Ku, and Shaffer (2011) confirmed these arguments in both college students and in a community sample of adults. Structural equation modeling confirmed that self compassion mediated the association between anxious attachment and wellbeing, as gauged by affectivity and life satisfaction. Furthermore, limited empathy mediated the association between avoidant attachment and wellbeing. Nevertheless, both attachment styles also directly affected wellbeing. Mediation was thus partial only.
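The mediation logic in these studies can be illustrated with a minimal regression-based sketch, using the product-of-coefficients approach rather than the full structural equation models the authors estimated. The data below are simulated and the variable names are placeholders; the sketch only shows how indirect, direct, and total effects relate, not the original results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated standardized scores (placeholders, not the original data)
anxious_attachment = rng.normal(size=n)
# Mediator: self-compassion declines as anxious attachment rises (path a)
self_compassion = -0.5 * anxious_attachment + rng.normal(size=n)
# Outcome: wellbeing depends on the mediator (path b) and directly on attachment (path c')
wellbeing = 0.4 * self_compassion - 0.2 * anxious_attachment + rng.normal(size=n)

def slope(x, y):
    """Ordinary least squares slope of y on a single predictor x."""
    x = x - x.mean()
    return (x @ (y - y.mean())) / (x @ x)

a = slope(anxious_attachment, self_compassion)  # attachment -> mediator
# b and c' come from regressing wellbeing on the mediator and attachment together
X = np.column_stack([np.ones(n), self_compassion, anxious_attachment])
coefs, *_ = np.linalg.lstsq(X, wellbeing, rcond=None)
b, c_prime = coefs[1], coefs[2]

indirect = a * b                              # mediated (indirect) effect
total = slope(anxious_attachment, wellbeing)  # total effect
print(f"indirect = {indirect:.2f}, direct = {c_prime:.2f}, total = {total:.2f}")
# A nonzero direct effect alongside a nonzero indirect effect corresponds to
# partial mediation, the pattern reported in the text.
```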
As Lanciano, Curci, Kafetsios, Elia, and Lucia (2012) showed, attachment style is also associated with measures of emotional intelligence. In this study, participants first completed the Relationship Questionnaire, in which four prototypical attachment styles are described, and individuals indicate the degree to which these descriptions align to their tendencies. Next, they completed a scale that assesses three facets of rumination: brooding (e.g., "I often think 'Why do I always react this way?'"), depressive rumination (e.g., "I often think about how sad I feel"), and reflection. Finally, the MSCEIT, designed to assess the capacity of individuals to perceive and recognize emotions accurately, utilize emotions to solve problems, understand the sources of emotions, and manage or regulate emotions, was administered.
Two facets of emotional intelligence--emotion perception and emotion management--mediated the association between both insecure attachment styles and brooding. Two other facets of emotional intelligence--utilizing and understanding emotions--mediated the association between both insecure attachment styles and depressive rumination.
Presumably, when individuals experience insecure attachment, they either suppress or overreact to emotional cues. Consequently, they may not perceive or recognize emotions accurately. Their capacity to predict, utilize, and regulate emotions thus diminishes, manifesting as impaired emotional intelligence. Because of this impairment, negative emotions persist, increasing the likelihood of brooding and rumination about problems.
Attachment style also has been shown to be associated with the experience of various positive emotions. As Shiota, Keltner, and John (2006) showed, an anxious attachment style is negatively associated with joy, contentment, pride, and love. Avoidant attachment, however, was negatively associated with love and compassion.
To gauge these emotions, Shiota, Keltner, and John (2006) asked individuals to evaluate their dispositional affective traits, using the Dispositional Positive Emotion Scales. Typical items for various emotions include:
The measure comprises 38 items. The level of Cronbach's alpha for each scale was: .82 for joy, .92 for contentment, .80 for pride, .80 for love, .80 for compassion, .75 for amusement, and .78 for awe.
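To make these reliability figures concrete, the sketch below shows how Cronbach's alpha is computed from item-level responses. The item scores and the subscale shown are hypothetical; only the standard alpha formula is illustrated, not data from Shiota, Keltner, and John (2006).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of five people to a four-item subscale (1-7 ratings)
joy_items = np.array([
    [6, 5, 6, 7],
    [4, 4, 5, 4],
    [7, 6, 7, 6],
    [3, 4, 3, 4],
    [5, 5, 6, 5],
])

print(round(cronbach_alpha(joy_items), 2))  # about .94 for these consistent ratings
```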
Several mechanisms could underpin the inverse association between avoidant attachment and emotional regulation. Individuals who report an avoidant attachment shun intimacy and, therefore, might be less inclined to divulge their feelings and concerns. They might not disclose their anxieties, doubts, or problems to their friends. These individuals, hence, do not enjoy the benefits of disclosure--satisfaction with life (Kahn & Hessling, 2001) and alleviation of depression (Berg & McQuinn, 1989).
The importance of self disclosure, and hence the drawbacks of avoidant attachment, are especially pronounced when the implicit and explicit motives of individuals diverge. That is, some individuals often project their motives onto their interpretations of events. They might, for example, frequently ascribe the behavior of individuals to the pursuit of power or influence. These interpretations imply a motive to seek power. Nevertheless, these individuals might contend they prefer to maintain solid relationships rather than to seek power or influence.
This incongruence tends to undermine life satisfaction and mood. This problem, however, dissipates if individuals divulge their feelings to friends (Langan-Fox, Sankey, & Canty, 2009).
Attachment style might affect the distribution of attention. To illustrate, individuals who report an avoidant attachment tend to direct their attention away from cues that relate to relationships.
This inclination was substantiated by Edelstein and Gillath (2008). In this study, participants completed a Stroop task. In this task, a series of words was presented. Participants were asked to name the font color of these words. When the words were related to facets of relationships, such as "adore", "abandon", "divorce", "lonely", or "loving", participants who reported an avoidant attachment were relatively proficient at this task.
According to Edelstein and Gillath (2008), some words, such as negative terms, attract attention. Participants thus focus on the semantics of this word, instead of the color, which compromises performance on this task. Conversely, other words do not attract attention. Participants can orient attention to the color, which facilitates performance. The results of this study, therefore, indicate that individuals who demonstrate avoidant attachment tend to direct their attention away from symbols that relate to relationships.
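The attentional interpretation described here is typically quantified as an interference score: the difference between mean color-naming times for relationship-related words and for neutral words. The sketch below illustrates that scoring logic with hypothetical reaction times; it is not Edelstein and Gillath's (2008) data or analysis code.

```python
from statistics import mean

# Hypothetical color-naming reaction times (ms) from one participant
trials = [
    {"word": "adore",   "category": "attachment", "rt": 610},
    {"word": "divorce", "category": "attachment", "rt": 598},
    {"word": "lonely",  "category": "attachment", "rt": 605},
    {"word": "chair",   "category": "neutral",    "rt": 595},
    {"word": "pencil",  "category": "neutral",    "rt": 600},
    {"word": "window",  "category": "neutral",    "rt": 597},
]

def interference_score(trials):
    """Mean RT to attachment-related words minus mean RT to neutral words.

    Larger positive scores indicate attention captured by the word's meaning;
    scores near zero or below suggest attention directed away from
    relationship-related content, the pattern described for avoidant adults.
    """
    attachment_rt = mean(t["rt"] for t in trials if t["category"] == "attachment")
    neutral_rt = mean(t["rt"] for t in trials if t["category"] == "neutral")
    return attachment_rt - neutral_rt

print(interference_score(trials))  # interference in ms for this hypothetical participant
```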
In general, individuals who report avoidant attachment can shift attention more rapidly, as well as resist distractions more effectively, than other people. For example, in one study, conducted by Gillath, Giesbrecht, and Shaver (2009), participants undertook a task that assesses the psychological refractory period. On each trial, a yellow or blue square first appeared. Then, 50 ms, 150 ms, 250 ms, or 350 ms later, an X or O appeared. After these stimuli appeared, participants needed to press one of two buttons to indicate the color of this square and then to press one of two other buttons to indicate the letter.
If participants reported elevated levels of avoidant attachment, as measured by the Experiences in Close Relationships inventory, they performed more effectively on this task. That is, unlike people who did not report an avoidant attachment, they could rapidly identify the letter that appeared only 50 or 150 ms after the square. They could, therefore, switch rapidly from the square to the letter. Even after behavioral activation and behavioral inhibition (reinforcement sensitivity theory), personality as gauged by the five factor model, and trait anxiety were controlled, this relationship persisted.
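The psychological refractory period effect can be summarized as the extra time needed for the second response at short stimulus onset asynchronies (SOAs) relative to long ones. A minimal sketch of that summary is given below; the reaction times are invented for illustration and do not reproduce the study's results.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical trials: (SOA between square and letter in ms, RT to the letter in ms)
trials = [
    (50, 720), (50, 700), (150, 660), (150, 640),
    (250, 600), (250, 590), (350, 560), (350, 570),
]

rt_by_soa = defaultdict(list)
for soa, rt2 in trials:
    rt_by_soa[soa].append(rt2)

mean_rt = {soa: mean(rts) for soa, rts in sorted(rt_by_soa.items())}

# The refractory-period cost is the slowing at the shortest SOA relative to the
# longest SOA; a smaller cost indicates faster switching between the two tasks,
# the pattern described for participants high in avoidant attachment.
prp_cost = mean_rt[50] - mean_rt[350]
print(mean_rt)
print("refractory-period cost (ms):", prp_cost)
```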
In a subsequent study, Gillath, Giesbrecht, and Shaver (2009) administered a flanker task instead to gauge selective attention. On each trial, a sequence of arrows, like < < > < <, appeared. Participants had to press one of two buttons depending on whether the central arrow pointed to the left or right. The surrounding arrows were either congruent or incongruent with the central arrow. If participants reported elevated levels of avoidant attachment, their reaction time was not as dependent on whether the surrounding arrows were congruent or incongruent, but only if they also reported elevated levels of anxious attachment. Thus, avoidant attachment, when combined with anxious attachment, enhanced the capacity of individuals to disregard extraneous information.
In the final study reported by Gillath, Giesbrecht, and Shaver (2009), participants performed the same task after reflecting upon either a secure, supportive, and trusting or an insecure, unsupportive, and unstable relationship. After an insecure relationship was primed, the benefits of avoidant attachment on selective attention dissipated.
Taken together, these findings imply that individuals with an avoidant attachment develop the capacity to suppress unwanted thoughts effectively. They can rapidly switch their attention from cues of rejection, for example, to more desirable events. They can disregard unpleasant experiences as well. However, insecure relationships might evoke powerful, negative cognitions in these individuals, compromising their capacity to switch attention and disregard information. A practical implication is that, perhaps, employees might be less likely to be distracted if they remember a time in which they solved problems independently, evoking an avoidant attachment style.
When people feel that close friends or family are unreliable--that is, they are not always available when needed--they become more attached to objects, such as mobile telephones. They feel especially uneasy when these objects are removed from their possession even momentarily. Specifically, these people experience a natural urge to depend on something that is reliable and predictable. Objects obviously fulfill this need.
This possibility was assessed by Keefer, Landau, Rothschild, and Sullivan (2012) across three studies. In each study, some participants wrote about a time in which a close friend or relative was not available when needed or wrote about their uncertainties about these relationships. In the control conditions, participants wrote about a time in which a stranger was not available when needed or in which they felt disappointed or uncertain about themselves.
Subsequently, participants completed a measure that assesses the extent to which they exhibit object attachment. For example, in one study, they completed a scale that gauges whether they experience separation anxiety when this object is removed, feel a need to remain close to this object, and depend on this object. A sample question is "I feel lost if I'm upset and my belongings are not around". In another study, participants were informed their mobile phone would be returned after they completed another essay. Participants who wrote only a short essay were assumed to feel a stronger need to be reunited with their phone, providing a behavioral measure of object attachment.
All these studies showed that, compared to the control conditions, writing about an unreliable close friend or relative amplified object attachment. This relationship was mediated by anxious attachment, as measured by the ECR. Furthermore, this relationship persisted even after use of the mobile phone to maintain relationships was controlled.
Similarly, in addition to objects, favored TV programs can also fulfill this need to belong, called the social surrogacy hypothesis (Derrick, Gabriel, & Hugenberg, 2009). To illustrate, after people wrote about a conflict with a close friend or relative, amplifying their need to belong, they dedicated more time to writing about their favorite TV program (Derrick, Gabriel, & Hugenberg, 2009). Similarly, after people wrote about their favorite TV programs, they became more resilient. They reported fewer feelings of rejection, as well as a more robust self-esteem, even after reflecting upon a major conflict in their lives. Finally, after people reflected upon their favorite TV program, words that are associated with rejection were not as accessible (Derrick, Gabriel, & Hugenberg, 2009). That is, if instructed to uncover the word that corresponds to exc - - - - , they tended to write excite rather than exclude.
The attachment style of people also correlates with the shoes they wear. For example, as Gillath, Bahns, Ge, and Crandall (2012) showed, if people report an anxious attachment, in which they are often worried they may be rejected or excluded, their shoes are not as likely to be colorful or worn. Presumably, these individuals do not want their shoes to be conspicuous or inappropriate. In contrast, if people report an avoidant attachment, their shoes tend to be higher.
Hazan and Shaver (1990) argued that secure attachment should promote satisfaction with work. Specifically, work can be conceptualized as an exploratory behavior, demanding a secure base. A secure attachment affords the secure base that is necessary to foster exploratory behavior and elevate job satisfaction.
Consistent with these premises, they discovered that secure individuals, relative to their insecure counterparts, were more inclined to feel satisfied with their job, to feel confident about their work, to adjust more adaptively to work environments, and to be liked by colleagues. Ambivalent employees, however, worried about work performance, conceptualized work as a means to satisfy unmet needs for love, and preferred roles in which they worked with other colleagues. Avoidant employees, however, preferred to work alone and were dissatisfied with colleagues.
Since this study, other research has confirmed the pertinence of attachment style to the work environment (see Joplin, Nelson, & Quick, 1999; Krausz, Bizman, & Braslavsky, 2001). Attachment insecurity tends to coincide with burnout (Pines, 2004), work stress (Schirmer & Lopez, 2001), and deficits in engagement (Ronen & Mikulincer, 2007).
A secure attachment is also associated with elevated levels of organizational citizenship behavior but limited levels of turnover. In particular, as Richards and Schat (2011) showed, anxious attachment was inversely associated with organizational citizenship behavior directed at the organization, but positively associated with turnover, even after controlling for personality. Presumably, these individuals cannot readily resolve the complications and demands of work; they cannot, therefore, devote their effort to other activities, such as discretionary tasks that improve the organization. Their strain also curbs their loyalty to the workplace.
People often need to negotiate on behalf of someone else. The person who negotiates is called the agent. The person the agent represents is called the principal. Sometimes, however, agents will undertake an action that enhances their own interests to the detriment of their principal. As Lee and Thompson (2011) showed, a secure attachment tends to curb the likelihood of this problem.
In this study, participants assumed the role of an agent negotiating a real estate deal for a principal, either the vendor or the buyer. The agents needed to resolve a dispute between the buyer and seller over a historic heritage site in New York. The seller wanted to ensure the buyer would preserve the original site and prohibit commercial use. The buyer wanted to build luxury hotels on the site. If the agents considered the needs of their principals, they would not reach a deal. If they disregarded these needs, they might reach a deal and earn a bonus.
Before the negotiation, attachment style with the principal was manipulated. For example, to evoke a secure attachment, agents imagined a time in which they felt secure and comfortable with a client. To evoke an anxious attachment, they imagined a time in which they were reliant on a client who did not value the relationship. To evoke an avoidant attachment, they imagined a time in which the client was too dependent.
If a secure attachment was evoked, agents tended to reach decisions that aligned with the interests of the principal: they did not accept the deal. If an avoidant attachment was evoked, agents were especially likely to disregard the needs of their principal; an anxious attachment generated a tendency that was midway between these extremes (Lee & Thompson, 2011).
A secure attachment, presumably, increases the likelihood that people want to establish and maintain a shared relationship. That is, their personal needs come to integrate the desires and concerns of the other person. Thus, this attachment style promotes honesty and openness. Anxious attachment shifts people's attention to their own preoccupations, undermining this appreciation of the needs and concerns of the principal. Avoidant attachment elicits a need for independence, curbing any obligation to the principal.
As Bodner and Cohen-Fridel (2014) showed, insecure attachment is positively associated with negative attitudes to older people. Yet, the mechanisms that underpin this relationship differ between avoidant attachment and anxious attachment. In particular, when people experience avoidant attachment, they are not as willing to adopt the perspective or feelings of other people. They are not, therefore, as likely to appreciate the obstacles that older people experience. Their empathy towards older people diminishes, compromising their attitudes to this age group. Consistent with this possibility, Bodner and Cohen-Fridel (2014) showed that limited empathy mediates the association between avoidant attachment and negative attitudes towards elderly people.
In contrast, when people experience anxious attachment, they become more fearful of death. These individuals do not tend to feel confident in themselves and, therefore, do not feel they will be valued after they die. They experience existential angst as a consequence. This fear of death translates to negative attitudes to reminders of death, such as aging. Bodner and Cohen-Fridel (2014) indeed showed that fear of death mediates the association between anxious attachment and negative attitudes towards elderly people.
When people need to solve a problem, such as eliminating a virus from a computer, they are often distracted by other obstacles or opportunities. Anxious attachment, however, diminishes the likelihood that people will be distracted as they strive to solve problems. People who report an anxious attachment are more persistent and committed to the resolution of these problems (Ein-Dor & Tal, 2012).
This observation can be ascribed to the strategies that anxiously attached individuals adopt to resolve problems. During childhood, anxious attachment is associated with the tendency of individuals to direct all their attention to their main attachment figure, often their mother, while stressed. Over time, therefore, they learn to focus their attention on one solution to their problems and disregard other cues in the environment.
To illustrate, in one study, conducted by Ein-Dor and Tal (2012), undergraduate students completed a questionnaire that gauges attachment style and then began another task. Midway through this task, an error message appeared on the screen, indicating that a pernicious virus might soon delete the hard drive. The experimenter pretended to be distraught and asked the participants to seek help from the Dean's assistant manager.
Four obstacles were arranged to impede participants. For example, a person asked participants to complete a short survey, another person asked them to photocopy some papers, a sign on the door asked visitors to wait, and so forth. Participants who reported an anxious attachment, however, were not as likely to be impeded by these obstacles and were more likely to seek help successfully.
Individuals with an anxious attachment are especially motivated to establish intimate relationships and avoid rejection. Consequently, they may strive to comply with the needs of therapists, counsellors, and other professionals. This motivation may increase their retention in programs that are designed to address problems.
This possibility was raised and validated by Fowler, Groat, and Ulanday (2013). These authors showed that an anxious attachment style, as gauged by the Relationship Questionnaire, predicted retention in a program intended to treat substance abuse. This relationship persisted even after controlling for other psychiatric disorders.
Attachment style might also affect the extent to which individuals feel certain about their choice of career. In particular, as Blustein, Prezioso, and Schultheiss (1995) maintain, when attachment is secure, individuals feel they can explore their identity and interests, embracing the risks this pursuit might entail. As a consequence of this pursuit, they form a more comprehensive, complete, and coherent representation of themselves. They become more sensitive to their personal values and needs. Furthermore, because of this secure attachment style, they perceive themselves as worthy of support; they perceive themselves positively. This coherent and positive perception of themselves ensures they are more confident with career decisions.
Several studies confirm this argument. In a study conducted by Emmanuelle (2009), for example, a sample of 241 adolescents, aged between 15 and 19, completed a scale that assesses their attachment with their parents; specifically, this questionnaire ascertained the extent to which they had developed a trusting, candid relationship with their mother and father rather than a sense of alienation. In addition, these individuals answered questions that assess whether they can readily reach career decisions, with questions like "I find it easy to make decisions". Finally, they completed a measure of global self-esteem.
For boys, a secure attachment with their father was positively related to self-esteem, which in turn was inversely associated with career indecision. In contrast, for girls, a secure attachment with their mother was positively related to self-esteem, which in turn was negatively associated with career indecision. Attachment with the parent of the same sex, thus, seems to be the main determinant of both self-esteem and the capacity to reach career decisions.
Anxious attachment also promotes materialism, a tendency to attach significant weight to money or material goods (Norris, Lambert, Dewall, & Fincham, 2012). That is, when individuals experience an anxious attachment, they tend to direct all their attention toward maintaining proximity to one significant person, such as their mother. Alternatively, people often substitute a significant object for a significant person. Consequently, anxious attachment may be associated with an inclination to direct attention to objects, manifesting as materialism.
This possibility was confirmed by Norris, Lambert, Dewall, and Fincham (2012). In one study, participants completed the Experiences in Close Relationships scale, designed to gauge attachment style. They also completed a scale that gauges materialism, epitomized by items such as "I admire people who own expensive cars" and "The things I own say a lot about how I am doing in life". An anxious attachment was positively associated with materialism, r = .34. A subsequent study showed that feelings of loneliness mediated this relationship. Therefore, materialism may, at least momentarily, reflect an attempt to override loneliness, but it ultimately tends to impair wellbeing.
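For readers who want to see how such a zero-order correlation is computed, the sketch below calculates a Pearson r between anxious attachment and materialism scores in Python. The data are simulated, so the resulting coefficient only roughly resembles the r = .34 reported by Norris et al. (2012); the variable names and distributions are hypothetical.

```python
# Illustrative Pearson correlation between anxious attachment and materialism scores.
# Simulated data; the resulting r only approximates the published value of .34.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 150
anxious_attachment = rng.normal(3.5, 1.0, n)                      # hypothetical mean scale scores
materialism = 0.34 * anxious_attachment + rng.normal(0, 1.0, n)   # outcome correlated with anxiety

r, p = pearsonr(anxious_attachment, materialism)
print(f"r = {r:.2f}, p = {p:.3f}")
```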
Attachment style affects the dreams of individuals. Specifically, as Contelmo, Hart, and Levine (2013) showed, when people exhibit anxious attachment, they are more inclined to become fixated on their dreams. For example, they concede they often feel unsafe in their dreams and are even apprehensive about dreaming. They are also more inclined to confuse their dreams and reality, uncertain of whether a memory of an event was real or not. In addition, they are more likely to feel their dreams are significant rather than random thoughts and may consult books to interpret these dreams.
In contrast, when people report an avoidant attachment, their experience of dreams is quite different. They are not as likely as the average person to confuse dreams and reality or to experience lucid dreams. They are also more inclined to perceive dreams as random thoughts rather than significant. Finally, they are not likely to recall many dreams.
Presumably, anxious attachment, and the corresponding hyper-activating strategies, motivates individuals to analyze emotional and social information carefully. They often strive to interpret subtle cues, a tendency that evolves from their sensitivity to rejection. Dreams include social and emotional cues and, therefore, are perceived as significant and important by these individuals. Avoidant attachment, and the corresponding deactivating strategies, motivates individuals to dismiss or suppress social information in general and thus to trivialize their dreams in particular.
Similarly, many individuals overestimate the value of their possessions. That is, they feel that some object they own, like a pen or jewelry, is worth more than similar objects they do not own. This tendency, called the ownership bias, is assumed to promote materialism. That is, individuals who value their possessions highly like to own and retain more goods.
To some extent, a sense of insecurity could amplify this ownership bias. Specifically, throughout human evolution, to survive and to thrive, people have needed to rely on either the support of their community or their own personal resources. That is, if support is likely to be withdrawn, individuals must utilize their own resources, such as weapons or power. If these resources are limited, they must rely on the support of family, friends, and other members of their community. Accordingly, if people feel insecure, and thus concerned they might be rejected or abandoned, personal resources become more important. They become more likely to value their possessions and provisions.
Consistent with these arguments, Clark, Greenberg, Hill, Lemay, Clark-Polner, and Roosth (2011) showed that a sense of security, akin to a secure attachment style, curbs the ownership bias. In one study, some participants were instructed to recall a time in which they felt supported or secure. Other participants, in the control condition, were instructed to recall a time in which they enjoyed a pleasant experience in a restaurant, an experience that is not explicitly related to security. Next, participants specified the minimum amount they would accept to sell the blanket on their bed. If participants had imagined a time in which they felt secure, they did not value the blanket as appreciably; they did not show the ownership bias.
The second study by Clark, Greenberg, Hill, Lemay, Clark-Polner, and Roosth (2011) was similar, but different procedures were utilized to elicit a sense of security and to measure the ownership bias. Specifically, a sentence unscrambling task was applied to evoke a sense of security, positive emotions, or no emotions. For example, to elicit a sense of security, participants were exposed to words like hug, love, reassuring, shares, support, commitment, and comfort. To elicit positive emotions, they were exposed to words such as laughter, festive, merry, triumphant, and victory. Then, participants were given a pen bearing the university logo. From a list of options, they were asked to indicate the price at which they would return the pen. Again, consistent with hypotheses, when security had been elicited, the ownership bias dissipated: participants were willing to return the pen at a reduced price.
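The ownership bias in such designs is usually expressed as a difference in asking prices between conditions, which can be tested with an independent-samples t-test. The sketch below is a generic illustration in Python with simulated prices; the means, sample sizes, and prices are hypothetical, not those reported by Clark et al. (2011).

```python
# Sketch: compare minimum selling prices (willingness to accept) between a security-primed
# group and a control group. All prices are simulated and hypothetical.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
security_prices = rng.normal(3.0, 1.0, 60)   # hypothetical asking prices after the security prime
control_prices = rng.normal(4.2, 1.0, 60)    # hypothetical asking prices in the control condition

t, p = ttest_ind(security_prices, control_prices)
print(f"mean (security) = {security_prices.mean():.2f}, mean (control) = {control_prices.mean():.2f}")
print(f"t = {t:.2f}, p = {p:.3f}")
```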
As Warren, Bost, Roisman, Silton, Spielberg, Engels, Choi, Sutton, Miller, and Heller (2011) showed, the brain regions that are activated by emotional stimuli depend on attachment style. In their study, participants completed the emotion-word Stroop task. That is, a series of emotional words, like "death", was presented. The task of participants was to name the color of these words. As they completed this task, functional magnetic resonance imaging was utilized to ascertain which regions were especially active.
In addition, to assess the attachment style of individuals, participants were granted opportunities to report narratives about threatening events. Some participants, for example, referred to the availability of supportive figures in their life and the likelihood the situation would be resolved, reflecting a secure attachment style.
If participants did not report a secure attachment style, the emotional Stroop task was especially likely to increase activity in the right orbitofrontal cortex and other regions that mediate emotional regulation. Furthermore, in these individuals, this task also increased activity in regions that underpin cognitive control: the left dorsolateral prefrontal cortex, the anterior cingulate cortex, and the superior frontal gyrus. As these findings imply, if individuals exhibit an insecure attachment style, they seem to be especially susceptible to emotional stimuli. Hence, regions that resolve these emotions are mobilized. In addition, regions that enable individuals to maintain their orientation on other tasks, in the midst of emotional information, need to be activated more intensely.
The attachment system tends to be activated in response to threatening events (Mikulincer, Birnbaum, Woddis, & Nachmias, 2000). Hence, the relationship between attachment style and its consequences tends to be more pronounced in stressful or threatening environments.
Often, romantic couples engage in discussions in which one person wants the other person to change some behavior or habit. The person who is asked to change will often feel angry, and may also feel motivated to withdraw from this discussion, especially if this individual exhibits avoidant attachment. Yet, as Overall, Simpson, and Struthers (2013) showed, if the person who seeks the change utilizes a specific tactic, called softening communication, this problem subsides.
Softening communication entails two key facets (Overall, Simpson, & Struthers, 2013). First, the person needs to offer unambiguous support, primarily to challenge the assumption of avoidant individuals that caregivers are unsupportive. That is, this person needs to demonstrate they value the avoidant individual. The person could emphasize the qualities of this avoidant individual, for example. Second, the person needs to demonstrate sensitivity towards the preference of avoidant individuals to sustain autonomy. This person should, for example, downplay the severity of this problem and validate the perspective of the avoidant individual.
Overall, Simpson, and Struthers (2013) validated these arguments empirically. In this study, romantic couples discussed an issue in which one person wanted to change the other person. The person who was encouraged to change also completed measures of attachment style as well as feelings of anger and withdrawal. Independent judges coded the degree to which the person who wanted to change the other person utilized softening tactics. Avoidance was associated with anger and withdrawal, but this association diminished when softening tactics were utilized.
The internal working models of individuals partly shape the interpretation of all social interactions and thus permeate most, if not all, relationships (Bowlby, 1988; Shaver, Collins, & Clark, 1996). As a consequence, individuals will often exhibit similar attachment styles in different relationships. Indeed, as individuals grow, the separate working models that correspond to each attachment figure begin to merge, at least partly, cultivating generalized expectations of other individuals (Main, Kaplan, & Cassidy, 1985).
Nevertheless, the attachment style of each relationship is not only governed by internal working models, but also depends on the profile of unique experiences the dyad shares (Cummings & Cicchetti, 1990; Kobak, 1994). Consistent with this proposition, the attachment style of individuals has been shown to vary across their relationships (La Guardia, Ryan, Couchman, & Deci, 2000).
Baldwin, Carrell, and Lopez (1990) also showed how relational schemas are specific to particular relationships. In their study, participants were subliminally exposed to photographs of a disapproving or approving individual. The disapproving individual compromised the confidence of participants, but only if this person was pertinent to their life. To illustrate, a photograph of a disapproving Pope compromised the self-evaluations of participants, but only if these individuals were Catholic.
The internal working models can change over time, albeit gradually and slowly (Bowlby, 1988). Specifically, these schemas may be modified according to fundamental learning principles, such as classical conditioning (Baldwin & Dandeneau, 2005; Shaver et al., 1996).
Indeed, many studies have examined the extent to which attachment style, usually defined as internal working models of relationships in general, changes over time (e.g., Baldwin & Fehr, 1995; Kirkpatrick & Hazan, 1994; Scharfe & Bartholomew, 1994). The general consensus is that attachment style seems to be relatively stable across time, even across years or decades (Waters, Merrick, Treboux, Crowell, & Albersheim, 2000). Nevertheless, approximately 25% of individuals do show changes in their attachment styles.
That is, traumatic events or upheaval in close relationships can damage attachment styles (Rothbard & Shaver, 1994). Conversely, very supportive relationships can improve the security of attachment styles as well (Pearson, Cohn, Cowan, & Cowan, 1994).
Other factors also seem to provoke changes in attachment style (see Davila, Karney, & Bradbury, 1999; Waters, Weinfield, & Hamilton, 2000; Weinfield, Sroufe, & Egeland, 2000). For example, negative life events (Waters, Weinfield, & Hamilton, 2000), and perhaps other experiences (see Davila, Karney, & Bradbury, 1999), seem to coincide with changes in attachment style. Personality might also affect the likelihood of shifts in attachment style in response to specific life events.
Several factors could affect the capacity of individuals to form trusting and solid relationships, and these relationships could ultimately impinge on the attachment style of individuals. For example, Nahrgang, Morgeson, and Ilies (2009) examined the factors that affected the development of trusting and stable relationships between employees and their supervisors, called leader-member exchange.
The results of this study were very informative. First, over time, supervisors develop stronger relationships with some employees than with others. In particular, during the early phases of these relationships, leaders developed stronger relationships with extraverted employees, and employees developed stronger relationships with agreeable leaders. Second, relationships tend to become increasingly strong over time but eventually plateau. Once the relationship began to plateau, the performance of each individual was the key determinant of relationship trust and stability.
When individuals trust their romantic partner and feel this person validates their goals, they become more likely to develop a secure attachment style. This possibility was verified in a study, conducted by Arriaga, Kumashiro, Finkel, VanderDrift, and Luchies (2014). In this study, the participants were 134 committed couples who completed a questionnaire three times, separated by at least a year. In particular, this questionnaire assessed the degree to which they perceive their partner as available and dependable (e.g., "I can rely on my partner to keep the promises he/she makes to me"), the extent to which their partner validates their goals (e.g., "My partner is doubtful that I can achieve my goals", reverse scored), and attachment style. The Experiences in Close Relationships scale was utilized to assess attachment style in relationships.
Trust in the availability and dependability of partners was negatively associated with anxious attachment at the time but negatively associated with avoidant attachment in the future. Validation of goals was negatively associated with avoidant attachment at the time but negatively associated with anxious attachment in the future.
Arguably, when people exhibit anxious attachment, their main concern revolves around whether partners will be available and dependable, primarily because their attachment figures had tended to be inconsistent in the past. Consequently, they are more sensitive to limited availability. Anxious attachment, therefore, should be negatively related to trust in the availability and dependability of partners now. Likewise, when people exhibit an avoidant attachment, they are more concerned about someone who stifles their independence. Consequently, avoidant attachment may be negatively related to validation of their goals.
However, to overcome anxious attachment, people need to enhance their perception or model of themselves. That is, anxious attachment tends to coincide with a negative model of self. If individuals feel their partner validates their goals, their model of themselves improves, and anxious attachment should diminish in the future. Similarly, to overcome avoidant attachment, people need to enhance their perception of other individuals; after all, avoidant attachment tends to coincide with a negative model of others. They need to recognize that other people can be supportive and available. If individuals feel their partner is supportive, this need is fulfilled, and avoidant attachment should subsequently dissipate.
As Fraley, Roisman, Booth-LaForce, Owen, and Holland (2013) showed, variations in attachment style can primarily be ascribed to the social environment of individuals, such as the behavior of caregivers, emerging social competence, and quality of friendships. In contrast, few of the genetic polymorphisms that have been examined in this literature correlate with attachment style; one exception is a polymorphism in the serotonin receptor gene (HTR2A rs6313), which correlates modestly with anxious attachment. Oxytocinergic and dopaminergic genes did not seem to be related to attachment style in this study.
In particular, the participants were 18 year old individuals whose behavior and environment had been studied from birth to age 15. At 18, these individuals completed the Relationships Scales Questionnaire, to measure their general levels of anxious and avoidant attachment, and the Experiences in Close Relationships scale. Previously, about 11 times throughout their lives, three facets of their caregiving environment were assessed: the degree to which their mother was sensitive to their needs, as gauged from watching interactions on video, maternal depression, and absence of the father. Their mothers and teachers had also rated their social competence throughout their lives, and the children themselves rated the quality of their friendships. Furthermore, when the children were 54 months old, the mothers rated their temperament on several attributes, such as their level of activity or restlessness, shyness, focused attention, passivity, and fear. Finally, genotyping was conducted.
Regression analysis showed that avoidant attachment was associated with low maternal sensitivity, social incompetence, and impaired friendships, although the measured variables explained only 29% of the variation; caregiver experiences and genetics do not entirely explain attachment style. Anxious attachment was associated with maternal depression and social incompetence. Yet, improved friendships over time were also associated with anxious attachment in relationships, although the explanation of this result is unknown.
Furthermore, individuals carrying two, rather than one or zero, C alleles of the HTR2A rs6313 SNP, a variant associated with sensation seeking and reward dependence, reported more anxious attachment. In addition, the association between maternal sensitivity and avoidant attachment was not as strong in individuals with C rather than T alleles. Temperament was not associated with attachment style.
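The 29% figure is an R-squared value: the proportion of variance in avoidant attachment explained by the predictors entered into the regression. A minimal sketch of such a multiple regression, using simulated data and hypothetical predictor names rather than the actual measures from Fraley et al. (2013), is shown below.

```python
# Minimal sketch of a multiple regression predicting avoidant attachment from environmental
# predictors and reporting R-squared. Data and coefficients are simulated and hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
maternal_sensitivity = rng.normal(size=n)
social_competence = rng.normal(size=n)
friendship_quality = rng.normal(size=n)
avoidant = (-0.3 * maternal_sensitivity - 0.3 * social_competence
            - 0.2 * friendship_quality + rng.normal(size=n))

X = sm.add_constant(np.column_stack([maternal_sensitivity, social_competence, friendship_quality]))
model = sm.OLS(avoidant, X).fit()
print(f"R-squared = {model.rsquared:.2f}")   # proportion of variance in avoidant attachment explained
```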
Individuals whose parents divorced are often more likely to exhibit insecure attachment styles later in life (Fraley & Heffernan, 2013). If the divorce occurred while the children were young, below 7 or so, this pattern is especially pronounced, consistent with the sensitive period hypothesis. Furthermore, divorce primarily affects attachment style to parents rather than attachment style to romantic partners or friends.
In the first study conducted by Fraley and Heffernan (2013), over 12 000 people completed a survey over the internet. These individuals completed measures of anxious and avoidant attachment with their mothers, fathers, romantic partners, and friends. In addition, individuals indicated if and when their parents divorced. Divorce was especially likely to be associated with anxious and avoidant attachment to fathers and, to a lesser extent, mothers. If the divorce had occurred earlier, rather than later, in the life of these participants, these patterns were especially pronounced. But divorce was not significantly related to anxious or avoidant attachment to romantic partners or friends.
The second study was conducted to replicate the first study. The measures were the same, except participants were also asked to indicate which parent was granted primary custody of the children. Interestingly, custody with one parent compromised attachment to the other parent. Because individuals tended to live with their mothers, divorce was more likely to undermine relationships with the father. When individuals lived with their fathers instead, attachment to their mothers was more likely to be undermined by divorce.
In this study, divorce compromised attachment to friends and romantic partners only slightly. This finding is consistent with the notion that experiences in one domain only marginally affect expectations of relationships in other domains.
Several accounts can explain the finding that divorce during early childhood is especially consequential. During the first few years of life, the nervous system is especially plastic and thus sensitive to adversities. In addition, early experiences can shape expectations, and hence the effects, of subsequent experiences.
Two main classes of measures have been developed to assess attachment style (for more information and details, see measures and manipulations of attachment style). First, some researchers apply narrative reports, such as the Adult Attachment Interview. During these interviews, participants discuss past experiences with attachment figures, primarily their parents. These interviews are intended to characterize the unconscious processes that individuals apply to regulate their emotions during these discussions (Jacobvitz, Curran, & Moller, 2002). Second, self-report measures assess the extent to which participants explicitly feel they seek close relationships and fear rejection.
Most self-report measures, such as the Experiences in Close Relationships Revised scale (Fraley, Waller, & Brennan, 2000), assess two dimensions. The first dimension relates to anxious attachment, representing the extent to which individuals fear rejection (e.g., "I worry about being abandoned"). The second dimension relates to avoidant attachment, representing the extent to which individuals attempt to evade close relationships (e.g., "I prefer not to show a partner how I feel deep down").
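Scores on these two dimensions are typically computed by averaging the items that load on each dimension, after reverse-scoring items worded in the opposite direction. The Python sketch below illustrates that procedure with a handful of made-up items; the item names, dimension assignments, and ratings are hypothetical and do not reproduce the actual ECR-R scoring key.

```python
# Illustrative scoring of a two-dimension attachment questionnaire (anxiety and avoidance).
# Item names, dimension assignments, and ratings are hypothetical, not the real ECR-R key.
from statistics import mean

responses = {                      # 1-7 Likert ratings for one hypothetical respondent
    "worry_abandoned": 6,
    "fear_rejection": 5,
    "prefer_not_show_feelings": 4,
    "uncomfortable_opening_up": 3,
    "comfortable_depending": 2,    # reverse-scored avoidance item
}

def reverse(score, scale_max=7):
    """Reverse-score an item on a 1..scale_max Likert scale."""
    return scale_max + 1 - score

anxiety = mean([responses["worry_abandoned"], responses["fear_rejection"]])
avoidance = mean([
    responses["prefer_not_show_feelings"],
    responses["uncomfortable_opening_up"],
    reverse(responses["comfortable_depending"]),
])
print(f"anxiety = {anxiety:.2f}, avoidance = {avoidance:.2f}")
```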
Ainsworth, M. D. S. (Ed.). (1991). Attachments and other affectional bonds across the life cycle. New York, NY: Tavistock/Routledge.
Ainsworth, M. D. S., Blehar, M. C., Waters, E., & Wall, S. (1978). Patterns of attachment: A psychological study of the strange situation. Hillsdale, NJ: Erlbaum.
Ainsworth, M. S. (1979). Infant-mother attachment. American Psychologist, 34, 932-937.
Allen, E. S., & Baucom, D. H. (2004). Adult attachment and patterns of extradyadic involvement. Family Processes, 43, 467-488.
Andersen, S. M., Reznick, I., & Manzella, L. M. (1996). Eliciting facial affect, motivation, and expectancies in transference: Significant-other representations in social relations. Journal of Personality and Social Psychology, 71, 1108-1129.
Armsden, G. C., & Greenberg, M. T. (1987). The inventory of parent and peer attachment: Individual differences and their relationship to psychological wellbeing in adolescence. Journal of Youth and Adolescence, 16, 427-454.
Arriaga, X. B., Kumashiro, M., Finkel, E. J., VanderDrift, L. E., & Luchies, L. B. (2014). Filling the void: Bolstering attachment security in committed relationships. Social Psychological and Personality Science, 5, 398-406. doi: 10.1177/1948550613509287
Baldwin, M. W. (1992). Relational schemas and the processing of social information. Psychological Bulletin, 112, 461-484.
Baldwin, M. W. (1994). Primed relational schemas as a source of self-evaluative reactions. Journal of Social and Clinical Psychology, 13, 380-403.
Baldwin, M. W. (1995). Relational schemas and cognition in close relationships. Journal of Social & Personal Relationships, 12, 547-552.
Baldwin, M. W. (1997). Relational schemas as a source of if-then self-inference procedures. Review of General Psychology, 1, 326-335.
Baldwin, M. W., & Dandeneau, S. D. (Eds.). (2005). Understanding and modifying the Relational Schemas underlying insecurity. New York, NY: Guilford Press.
Baldwin, M. W., & Fehr, B. (1995). On the instability of attachment style ratings. Personal Relationships, 2, 247-261.
Baldwin, M. W., & Meunier, J. (1999). The cued activation of attachment relational schemas. Social Cognition, 17, 209-227.
Baldwin, M. W., Carrell, S. E., & Lopez, D. F. (1990). Priming relationship schemas: My advisor and the Pope are watching me from the back of my mind. Journal of Experimental Social Psychology, 26, 435-454.
Baldwin, M. W., Fehr, B., Keedian, E., Seidel, M., & Thompson, D. W. (1993). An exploration of the relational schemata underlying attachment styles: Self-report and lexical decision approaches. Personality and Social Psychology Bulletin, 19, 746-754.
Baldwin, M. W., Keelan, J. P. R., Fehr, B., Enns, V., & Koh-Rangarajoo, E. (1996). Social-cognitive conceptualization of attachment working models: Availability and accessibility effects. Journal of Personality and Social Psychology, 71, 94-109.
Bartholomew, K. (1990). Avoidance of intimacy: An attachment perspective. Journal of Social and Personal Relationships, 7, 147-178.
Bartholomew, K., & Horowitz, L. M. (1991). Attachment styles among young adults: A test of a four-category model. Journal of Personality and Social Psychology, 61, 226-244.
Bartholomew, K., & Shaver, P. R. (1998). Measures of attachment: Do they converge? In J. A. Simpson & W. S. Rholes (Eds.), Attachment theory and close relationships (pp. 25-45). New York: Guilford Press.
Bartholomew, K., Cobb, R. J., & Poole, J. A. (1997). Adult attachment patterns and social support processes. In G. R. Pierce, B. Lakey, I. G. Sarason, & B. R. Sarason (Eds.), Sourcebook of social support and personality (pp. 359-378). New York: Plenum Press.
Bartz, J. A., & Lydon, J. E. (2004). Close relationships and the working self-concept: Implicit and explicit effects of priming attachment on agency and communion. Personality and Social Psychology Bulletin, 30, 1389-1401.
Batson, C. D. (1991). The altruism question: Toward a social-psychological answer. Hillsdale, NJ: Erlbaum.
Bekker, M. H. J., Bachrach, N., & Croon, M. (2007). The relationships of antisocial behavior with attachment styles, autonomy-connectedness, and alexithymia. Journal of Clinical Psychology, 63, 507-527.
Berg, J. H., & McQuinn, R. D. (1989). Loneliness and aspects of social support networks. Journal of Social and Personal Relationships, 6, 359-372.
Berry, K., Barrowclough, C., & Wearden, A. (2007). A review of the role of adult attachment style in psychosis: Unexplored issues and questions for further research. Clinical Psychology Review, 27, 458-478.
Berson, Y., Dan, O., & Yammarino, F. J. (2006). Attachment style and individual differences in leadership perceptions and emergence. Journal of Social Psychology, 146, 165-182.
Blustein, D. L., Prezioso, M. S., & Schultheiss, D. P. (1995). Attachment theory and career development: Current status and future directions. Counseling Psychologist, 23, 416-432.
Bodner, E., & Cohen-Fridel, S. (2014). The paths leading from attachment to ageism: A structural equation model approach. Death Studies, 38, 423-429. doi: 10.1080/07481187.2013.766654
Bordin, E. S. (1983). Supervision in counseling: II. Contemporary models of supervision: A working alliance based model of supervision. Counseling Psychologist, 11, 35-42.
Borelli, J. L., Crowley, M. J., David, D. H., Sbarra, D. A., Anderson, G. M., & Mayers, L. C. (2010). Attachment and emotion in school-aged children. Emotion, 10, 475-485.
Bost, K. K., Shin, N., McBride, B. A., Brown, G. L., Vaughn, B. E., Coppola, G., et al. (2006). Maternal secure base scripts, children's attachment security, and mother-child narrative styles. Attachment & Human Development, 8, 241-260.
Bowlby, J. (1969/1982). Attachment and loss (Vol. 1, attachment). New York: Basic Books.
Bowlby, J. (1973). Separation: Anxiety & anger (Vol. 2 of attachment and loss). London: Hogarth Press.
Bowlby, J. (1988). A secure base: Parent-child attachment and healthy human development. New York: Basic Books.
Brennan, K. A., Clark, C. L., & Shaver, P. R. (1998). Self-report measurement of adult romantic attachment: An integrative overview. In J. A. Simpson and W. S. Rholes (Eds.), Attachment theory and close relationships (pp. 46-76). New York: Guilford Press.
Brennan, K. A., Shaver, P. R., & Tobey, A. E. (1991). Attachment styles, gender, and parental problem drinking. Journal of Social and Personal Relationships, 8, 451-466.
Bresnahan, C. G., & Mitroff, I. I. (2007). Leadership and attachment theory. American Psychologist, 62, 607-608.
Campbell, W. K., & Foster, C. (2002). Narcissism and commitment in romantic relationships: An investment model analysis. Personality and Social Psychology Bulletin, 28, 484-495.
Carnelley, K. B., Pietromonaco, P. R., & Jaffe, K. (1994). Depression, working models of others, and relationship functioning. Journal of Personality and Social Psychology, 66, 127-140.
Cassidy, J. (2000). Adult romantic attachments: A developmental perspective on individual differences. Review of General Psychology, 4, 111-131.
Clark, M. S., Greenberg, A., Hill, E., Lemay, E. P., Clark-Polner, E., & Roosth, D. (2011). Heightened interpersonal security diminishes the monetary value of possessions. Journal of Experimental Social Psychology, 47, 359-364.
Collins, N. L., & Feeney, B. C. (2004). Working models of attachment shape perceptions of social support: Evidence from experimental and observational studies. Journal of Personality and Social Psychology, 87, 363-383.
Collins, N. L., & Read, S. J. (1990). Adult attachment, working models, and relationship quality in dating couples. Journal of Personality and Social Psychology, 58, 644-663.
Collins, N. L., & Read, S. J. (1994). Cognitive representations of adult attachment: The structure and function of working models. In K. Bartholomew & D. Perlman (Eds.), Advances in personal relationships: Vol. 5. Attachment processes in adulthood (pp. 53-90). London: Jessica Kingsley.
Collins, N. L., Ford, M. B., Guichard, A. C., & Allard, L. (2006). Working models of attachment and attribution processes in intimate relationship. Personality and Social Psychology Bulletin, 32, 201-219.
Contelmo, G., Hart, J., & Levine, E. H. (2013). Dream orientation as a function of hyperactivating and deactivating attachment strategies. Self and Identity, 12, 357-369. doi: 10.1080/15298868.2012.673281
Coppola, G., Vaughn, B. E., Cassibba, R., & Constantini, A. (2006). The attachment script representation procedure in an Italian sample: Associations with adult attachment interview scales and with maternal sensitivity. Attachment & Human Development, 8, 209-219.
Crisp, R. J., Farrow, C. V., Rosenthal, H. E. S., Walsh, J., Blisset, J., & Penn, N. M. K. (2009). Interpersonal attachment predicts identification with groups. Journal of Experimental Social Psychology, 45, 115-122.
Crowell, J. A., & Treboux, D. (1995). A review of adult attachment measures: Implications for theory and research. Social Development, 4, 294-327.
Crowell, J. A., Fraley, R. C., & Shaver, P. R. (1999). Measures of individual differences in adolescent and adult attachment. In J. Cassidy & P. R. Shaver (Eds.), Handbook of attachment: Theory, research, and clinical applications (pp. 434-465). New York: Guilford.
Crowell, J. A., Waters, E., Treboux, D., O'Connor, E., Colon-Downs, C., Feider, O., et al. (1996). Discriminant validity of the Adult Attachment Interview. Child Development, 67, 2584-2599.
Cummings, E. M., & Cicchetti, D. (Eds.). (1990). Toward a transactional model of relations between attachment and depression. Chicago, IL: University of Chicago Press.
Davidovitz, R., Mikulincer, M., Shaver, P., Izsak, R., & Popper, M. (2007). Leaders as attachment figures: Leaders' attachment orientations predict leadership-related mental representations and followers' performance and mental health. Journal of Personality and Social Psychology, 93, 632-650.
Davila, J., Bradbury, T. N., & Fincham, F. (1998). Negative affectivity as a mediator of the association between adult attachment and marital satisfaction. Personal Relationships, 5, 467-484.
Davila, J., Karney, B. R., & Bradbury, T. N. (1999). Attachment change processes in the early years of marriage. Journal of Personality and Social Psychology, 76, 783-802.
De Wolff, M. S., & Van Ijzendoorn, M. H. (1997). Sensitivity and attachment: A meta-analysis on parental antecedents of infant attachment. Child Development, 68, 571-591.
Derrick, J. L., Gabriel, S., & Hugenberg, K. (2009). Social surrogacy: How favored television programs provide the experience of belonging. Journal of Experimental Social Psychology, 45, 352-362.
Downey, G., Freitas, A. L., Michaelis, B., & Khouri, H. (1998). The self-fulfilling prophecy in close relationships: Rejection sensitivity and rejection by romantic partners. Journal of Personality and Social Psychology, 75, 545-560.
Dykas, M. J., Woodhouse, S. S., Cassidy, J., & Waters, H. S. (2006). Narrative assessment of attachment representations: Links between secure base scripts and adolescent attachment. Attachment & Human Development, 8, 221-240.
Edelstein, R. S., & Gillath, O. (2008). Avoiding interference: Adult attachment and emotional processing biases. Personality and Social Psychology Bulletin, 34, 171-181.
Ein-Dor, T., Mikulincer, M., Doron, G., & Shaver, P. R. (2010). The attachment paradox: How can so many of us (the insecure ones) have no adaptive advantages? Perspectives on Psychological Science, 5, 123-141. doi:10.1177/1745691610362349
Ein-Dor, T., Mikulincer, M., & Shaver, P. R. (2011). Attachment insecurities and the processing of threat-related information: Studying the schemas involved in insecure people's coping strategies. Journal of Personality and Social Psychology, 101, 78-93. doi: 10.1037/a0022503
Ein-Dor, T., & Tal, O. (2012). Scared saviors: Evidence that people high in attachment anxiety are more effective in alerting others to threat. European Journal of Social Psychology, 42, 667-671. doi: 10.1002/ejsp.1895
Emmanuelle, V. (2009). Inter-relationships among attachment to mother and father, self-esteem, and career indecision. Journal of Vocational Behavior, 75, 91-99.
Epstein, S., & Meier, P. (1989). Constructive thinking: A broad coping variable with specific components. Journal of Personality and Social Psychology, 57, 332-350.
Erez, A., Mikulincer, M., van Ijzendoorn, M. H., & Kroonenberg, P. M. (2008). Attachment, personality, and volunteering: Placing volunteerism in an attachment-theoretical framework. Personality and Individual Differences, 44, 64-74.
Etcheverry, P. E., & Le, B. (2005). Thinking about commitment: Accessibility of commitment and prediction of relationship persistence, accommodation, and willingness to sacrifice. Personal Relationships, 12, 103-123.
Feeney, J. A., & Noller, P. (1990). Attachment style as a predictor of adult romantic relationships. Journal of Personality and Social Psychology, 58, 281-291.
Florian, V., Mikulincer, M., & Bucholtz, I. (1995). Effects of adult attachment style on the perception and search for social support. Journal of Psychology, 129, 665-676.
Fraley, R. C. (2002). Attachment stability from infancy to adulthood: A meta-analysis and dynamic modelling of developmental mechanisms. Personality and Social Psychology Review, 6, 123-151.
Fraley, R. C., & Heffernan, M. E. (2013). Attachment and parental divorce: A test of the diffusion and sensitive period hypotheses. Personality and Social Psychology Bulletin, 39, 1199-1213. doi: 10.1177/0146167213491503
Fraley, R. C., Roisman, G. I., Booth-LaForce, C., Owen, M. T., & Holland, A. S. (2013). Interpersonal and genetic origins of adult attachment styles: A longitudinal study from infancy to early adulthood. Journal of Personality and Social Psychology, 104, 817-838. doi: 10.1037/a0031435
Fraley, R. C., & Shaver, P. R. (1997). Adult attachment and the suppression of unwanted thoughts. Journal of Personality and Social Psychology, 73, 1080-1091.
Fraley, R. C., & Waller, N. G. (1998). Adult attachment patterns: A test of the typological model. In J. A. Simpson & W. S. Rholes (Eds.), Attachment theory and close relationships (pp. 77-114). New York: Guilford Press.
Fraley, R. C., Waller, N. G., & Brennan, K. A. (2000). An item-response theory analysis of self-report measures of adult attachment. Journal of Personality and Social Psychology, 78, 350-365.
Fowler, J. C., Groat, M., & Ulanday, M. (2013). Attachment style and treatment completion among psychiatric inpatients with substance use disorders. American Journal on Addictions, 22, 14-17. doi: 10.1111/j.1521-0391.2013.00318.x
Gallo, L. C., & Smith. T. W. (2001). Attachment style in marriage: Adjustment and responses to interaction. Journal of Social and Personal Relationships, 18, 263-289.
George, C., & Solomon, J. (2008). The caregiving system: A behavioral systems approach to parenting. In J. Cassidy & P. R. Shaver (Eds.), Handbook of attachment: Theory, research, and clinical applications (2nd ed., pp. 833-856). New York: Guilford Press.
Gillath, O., Bahns, A. J., Ge, F., & Crandall, C. S. (2012). Shoes as a source of first impressions. Journal of Research in Personality, 46, 423-430. doi: 10.1016/j.jrp.2012.04.003
Gillath, O., Giesbrecht, B., & Shaver, P. R. (2009). Attachment, attention, and cognitive control: Attachment style and performance on general attention tasks. Journal of Experimental Social Psychology, 45, 647-654.
Gillath, O., Hart, J., Noftle, E. E., & Stockdale, G. D. (2009). Development and validation of a state adult attachment measure (SAAM). Journal of Research in Personality, 43, 362-373.
Gillath, O., Sesko, A. K., Shaver, P. R., & Chan, D. S. (2010). Attachment, authenticity, and honesty: Dispositional and experimentally induced security can reduce self- and other-deception. Journal of Personality and Social Psychology, 98, 841-855.
Gillath, O., Shaver, P. R., Mikulincer, M., Nitzberg, R. E., Erez, A., & van IJzendoorn, M. H. (2005). Attachment, caregiving, and volunteering: Placing volunteerism in an attachment-theoretical framework. Personal Relationships, 12, 425-446.
Green, J. D., & Campbell, W. (2000). Attachment and exploration in adults: Chronic and contextual accessibility. Personality and Social Psychology Bulletin, 26, 452-461.
Hazan, C., & Shaver, P. (1987). Romantic love conceptualised as an attachment process. Journal of Personality and Social Psychology, 52, 511-523.
Hazan, C., & Shaver, P. R. (1990). Love and work: An attachment-theoretical perspective. Journal of Personality and Social Psychology, 59, 270-280.
Hazan, C., & Shaver, P. R. (1994). Attachment as an organizational framework for research on close relationships. Psychological Inquiry, 5, 1-22.
Hesse, E. (2008). The Adult Attachment Interview: Protocol, method of analysis, and empirical studies. In J. Cassidy & P. R. Shaver (Eds.), Handbook of attachment: Theory, research, and clinical applications (2nd ed., pp. 552-598). New York: Guilford Press.
Hexel, M. (2003). Alexithymia and attachment style in relation to locus of control. Personality and Individual Differences, 35, 1261-1270.
Horowitz, L. M., Rosenberg, S. E., & Bartholomew, K. (1993). Interpersonal problems, attachment styles, and outcome in brief dynamic psychotherapy. Journal of Consulting and Clinical Psychology, 61, 549-560.
Jacobvitz, D., Curran, M., & Moller, N. (2002). Measurement of adult attachment: The place of self-report and interview methodologies. Attachment and Human Development, 4, 207-215.
Jaremka, L. M., Glaser, R., Loving, T. J., Malarkey, W. B., Stowell, J. R., & Kiecolt-Glaser, J. K. (2013). Attachment anxiety is linked to alterations in cortisol production and cellular immunity. Psychological Science, 24, 272-279. doi: 10.1177/0956797612452571
Joplin, J. R., Nelson, D. L., & Quick, J. C. (1999). Attachment behavior and health: Relationships at work and home. Journal of Organizational Behavior, 20, 783-796.
Judge, T. A., Piccolo, R. F., & Ilies, R. (2004). The forgotten ones? The validity of consideration and initiating structure in leadership research. Journal of Applied Psychology, 89, 36-51.
Kahn, J. H., & Hessling, R. M. (2001). Measuring the tendency to conceal versus disclose psychological distress. Journal of Social and Clinical Psychology, 20, 41-65.
Keefer, L. A., Landau, M. J., Rothschild, Z. K., & Sullivan, D. (2012). Attachment to objects as compensation for close others' perceived unreliability. Journal of Experimental Social Psychology, 48, 912-917. doi:10.1016/j.jesp.2012.02.007
Kirkpatrick, L. A., & Davis, K. E. (1994). Attachment style, gender, and relationship stability: A longitudinal analysis. Journal of Personality and Social Psychology, 66, 502-512.
Kirkpatrick, L., & Hazan, C. (1994). Attachment styles and close relationships: A four-year prospective study. Personal Relationships, 1, 123-142.
Kobak, R. (1994). Adult attachment: A personality or relationship construct? Psychological Inquiry, 5, 42-44.
Kobak, R., & Sceery, A. (1988). Attachment in late adolescence: Working models, affect regulation, and representations of self and others. Child Development, 59, 135-146.
Kobak, R., Cole, H., Ferenz-Gillies, R., & Fleming, W. (1993). Attachment and emotional regulation during mother-teen problem solving: A control theory analysis. Child Development, 64, 231-245.
Krausz, M., Bizman, A., & Braslavsky, D. (2001). Effects of attachment style on preferences for and satisfaction with different employment contracts: An exploratory study. Journal of Business and Psychology, 16, 299-316.
La Guardia, J. G., Ryan, R. M., Couchman, C. E., & Deci, E. L. (2000). Within-person variation in security of attachment: A self-determination theory perspective on attachment, need fulfillment, and well-being. Journal of Personality and Social Psychology, 79, 367-384.
Lanciano, T., Curci, A., Kafetsios, K., Elia, L., & Lucia, V. (2012). Attachment and dysfunctional rumination: The mediating role of Emotional Intelligence abilities. Personality and Individual Differences, 53, 753-758. doi:10.1016/j.paid.2012.05.027
Langan-Fox, J., Sankey, M. J., & Canty, J. M. (2009). Incongruence between implicit and self-attributed achievement motives and psychological well-being: The moderating role of self-directedness, self-disclosure and locus of control. Personality and Individual Differences, 47, 99-104.
Larose, S., Bernier, A., & Soucy, N. (2005). Attachment as a moderator of the effect of security in mentoring on subsequent perceptions of mentoring and relationship quality with college teachers. Journal of Social and Personal Relationships, 22, 399-415.
Lee, S., & Thompson, L. (2011). Do agents negotiate for the best (or worst) interest of principals? Secure, anxious and avoidant principal-agent attachment. Journal of Experimental Social Psychology, 47, 681-684. doi:10.1016/j.jesp.2010.12.023
Levy, M. B., & Davis, K. E. (1988). Lovestyles and attachment styles compared: Their relations to each other and to various relationship characteristics. Journal of Social and Personal Relationships, 5, 439-471.
Little, K. C., McNulty, J. K., & Russell, V. M. (2009). Sex buffers intimates against the negative implications of attachment insecurity. Personality and Social Psychology Bulletin, 36, 484-498. doi: 10.1177/0146167209352494
Lopez, F. G. (1996). Attachment-related predictors of constructive thinking among college students. Journal of Counseling and Development, 75, 58-63.
Lopez, F. G. (2001). Adult attachment orientations, self-other boundary regulation, and splitting tendencies in a college sample. Journal of Counseling Psychology, 48, 440-446.
Lopez, F. G., Mitchell, P., & Gormley, B. (2002). Adult attachment orientations and college student distress: Test of a mediational model. Journal of Counseling Psychology, 49, 460-467.
Main, M., Kaplan, N., & Cassidy, J. (1985). Security in infancy, childhood, and adulthood: A move to the level of representation. Monographs of the Society for Research in Child Development, 50, 66-104.
Markus, H., Smith, J., & Moreland, R. L. (1985). The role of the self-concept in the perception of others. Journal of Personality and Social Psychology, 49, 1494-1512.
McGowan, S. (2002). Mental representations in stressful situations: The calming and distressing effects of significant others. Journal of Experimental Social Psychology, 38, 152-161.
Wei, M., Vogel, D. L., Ku, T., & Zakalik, R. A. (2005). Adult attachment, affect regulation, negative mood, and interpersonal problems: The mediating roles of emotional reactivity and emotional cutoff. Journal of Counseling Psychology, 52, 14-24.
Meyer, J. P., & Allen, N. J. (1997). Commitment in the workplace: Theory, research, and application. Thousand Oaks, CA: Sage Publications, Inc.
Meyers, S. A., & Landsberger, S. A. (2002). Direct and indirect pathways between adult attachment style and marital satisfaction. Personal Relationships, 9, 159-172.
Mickelson, K. D., Kessler, R. C., & Shaver, P. R. (1997). Adult attachment in a nationally representative sample. Journal of Personality and Social Psychology, 73, 1092-1106.
Mikulincer, M. (1995). Attachment style and the mental representation of the self. Journal of Personality and Social Psychology, 69, 1203-1215.
Mikulincer, M. (1997). Adult attachment style and information processing: Individual differences in curiosity and cognitive closure. Journal of Personality and Social Psychology, 72, 1217-1230.
Mikulincer, M. (1998a). Adult attachment style and individual differences in functional versus dysfunctional experiences of anger. Journal of Personality and Social Psychology, 74, 513-524.
Mikulincer, M. (1998b). Attachment working models and the sense of trust: An exploration of interaction goals and affect regulation. Journal of Personality and Social Psychology, 74, 1209-1224.
Mikulincer, M., & Arad, D. (1999). Attachment working models and cognitive openness in close relationships: A test of chronic and temporary accessibility effects. Journal of Personality and Social Psychology, 77, 710-725.
Mikulincer, M., Birnbaum, G., Woddis, D., & Nachmias, O. (2000). Stress and accessibility of proximity-related thoughts: Exploring the normative and intraindividual components of attachment theory. Journal of Personality and Social Psychology, 78, 509-523.
Mikulincer, M., & Florian, V. (1997). Are emotional and instrumental supportive interactions beneficial in times of stress? The impact of attachment style. Anxiety, Stress & Coping: An International Journal, 10, 109-127.
Mikulincer, M., & Florian, V. (1999). The association between spouses' self-reports of attachment styles and representations of family dynamics. Family Process, 38, 69-83.
Mikulincer, M., & Florian, V. (2000). Exploring individual differences in reactions to mortality salience: Does attachment style regulate terror management mechanisms? Journal of Personality and Social Psychology, 79, 260-273.
Mikulincer, M., Florian, V., & Tolmacz, R. (1990). Attachment styles and fear of personal death: A case study of affect regulation. Journal of Personality and Social Psychology, 58, 273-280.
Mikulincer, M., Florian, V., Birnbaum, G., & Malishkevich, S. (2002). The death anxiety buffering function of close relationships: Exploring the effects of separation reminders on death-thought accessibility. Personality and Social Psychology Bulletin, 28, 287-299.
Mikulincer, M., Gillath, O., Halevy, V., Avihou, N., Avidan, S., & Eshkoli, N. (2001). Attachment theory and reactions to others needs: Evidence that activation of the sense of attachment security promotes empathic responses. Journal of Personality and Social Psychology, 81, 1205-1224.
Mikulincer, M., Hirschberger, G., Nachmias, O., & Gillath, O. (2001b). The affective component of the secure base schema: Affective priming with representations of attachment security. Journal of Personality and Social Psychology, 81, 305-321.
Mikulincer, M., Orbach, I., & Iavnieli, D. (1998). Adult attachment style and affect regulation: Strategic variations in subjective self-other similarity. Journal of Personality and Social Psychology, 75, 436-448.
Mikulincer, M., & Shaver, P. R. (2001). Attachment theory and intergroup bias: Evidence that priming the secure base schema attenuates negative reactions to out-groups. Journal of Personality and Social Psychology, 81, 97-115.
Mikulincer, M., & Shaver, P. R. (2003). The attachment behavioral system in adulthood: Activation, psychodynamics, and interpersonal processes. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 35, pp. 53-152). San Diego, CA: Academic Press.
Mikulincer, M., & Shaver, P. R. (2004). Security-based self-representations in adulthood: Contents and processes. In W. S. Rholes & J. A. Simpson (Eds.), Adult attachment: Theory, research, and clinical implications (pp. 159-195). New York: Guilford Press.
Mikulincer, M., & Shaver, P. R. (2005). Attachment theory and emotions in close relationships: Exploring the attachment-related dynamics of emotional reactions to relational events. Personal Relationships, 12, 149-168.
Mikulincer, M., & Shaver, P. R. (2007). Attachment patterns in adulthood: Structure, dynamics, and change. New York: Guilford Press.
Mikulincer, M., Shaver, P. R., Bar-on, N., & Ein-Dor, T. (2010). The pushes and pulls of close relationships: Attachment insecurities and relational ambivalence. Journal of Personality and Social Psychology, 98, 450-468. doi: 10.1037/a0017366
Mikulincer, M., Shaver, P. R., & Horesh, N. (2006). Attachment bases of emotion regulation and posttraumatic adjustment. In D. K. Snyder, J. A. Simpson, & J. N. Hughes (Eds.), Emotion regulation in families: Pathways to dysfunction and health (pp. 77-99). Washington, DC: American Psychological Association.
Mikulincer, M., Shaver, P. R., & Pereg, D. (2003). Attachment theory and affect regulation: The dynamics, development, and cognitive consequences of attachment-related strategies. Motivation and Emotion, 27, 77-102.
Mikulincer, M., Shaver, P. R., & Slav, K. (2006). Attachment, mental representations of others, and gratitude and forgiveness in romantic relationships. In M. Mikulincer & G. S. Goodman (Eds.), Dynamics of romantic love: Attachment, caregiving, and sex (pp. 190-215). New York: Guilford Press.
Mikulincer, M., Shaver, P. R., Gillath, O., & Nitzberg, R. A. (2005). Attachment, caregiving, and altruism: Boosting attachment security increases compassion and helping. Journal of Personality and Social Psychology, 89, 817-839.
Mikulincer, M., Shaver, P. R., Sapir-Lavid, Y., & Avihou-Kanza, N. (2009). What's inside the minds of securely and insecurely attached people? The secure-base script and its associations with attachment-style dimensions. Journal of Personality and Social Psychology, 97, 615-633.
Nahrgang, J. D., Morgeson, F. P., & Ilies, R. (2009). The development of leader-member exchanges: Exploring how personality and performance influence leader and member relationships over time. Organizational Behavior and Human Decision Processes, 108, 256-266.
Niedenthal, P. M., Brauer, M., Robin, L., Innes-Ker, A. H. (2002). Adult attachment and the perception of facial expression of emotion. Journal of Personality and Social Psychology, 82, 419-433.
Noftle, E. E., & Shaver, P. R. (2006). Attachment dimensions and the Big Five personality traits: Associations and comparative ability to predict relationship quality. Journal of Research in Personality, 40, 179-208.
Norris, J. I., Lambert, N. M., Dewall, C. N., & Fincham, F. D. (2012). Can?t buy me love?: Anxious attachment and materialistic values. Personality and Individual Differences, 53, 666-669. doi:10.1016/j.paid.2012.05.009
Otway, L. J., & Carnelley, K. B. (2013). Exploring the associations between adult attachment security and self-actualization and self-transcendence. Self and Identity, 12, 217-230. doi: 10.1080/15298868.2012.667570
Overall, N. C., Fletcher, G. J. O., & Friesen, M. D. (2003). Mapping the intimate relationship mind: Comparisons between three models of attachment representations. Personality and Social Psychology Bulletin, 29, 1479-1493.
Overall, N. C., Simpson, J. A., & Struthers, H. (2013). Buffering attachment-related avoidance: softening emotional and behavioral defenses during conflict discussions. Journal of Personality and Social Psychology, 104, 854-871. doi: 10.1037/a0031798
Paulssen, M. (2009). Attachment orientations in business-to-business relationships. Psychology & Marketing, 26, 507-533.
Pearson, J. L., Cohn, D. A., Cowan, P. A., & Cowan, C. P. (1994). Earned-security and continuous-security in adult attachment- relation to depressive symptomatology and parenting style. Development and Psychopathology, 6, 359-373.
Pereg, D., & Mikulincer, M. (2004). Attachment style and the regulation of negative affect: Exploring individual differences in mood congruency effects on memory and judgment. Personality and Social Psychology Bulletin, 30, 67-80.
Pietromonaco, P. R., & Feldman Barrett, L. (1997). Working models of attachment and daily social interactions. Journal of Personality and Social Psychology, 73, 1409-1423.
Pietromonaco, P. R., & Feldman Barrett, L. (2000). Internal working models: What do we really know about the self in relation to others? Review of General Psychology, 4, 155-175.
Pines, A. M. (2004). Adult attachment styles and their relationship to burnout: A preliminary, cross-cultural investigation. Work & Stress, 18, 66-80.
Pistole, M. C. (1989). Attachment in adult romantic relationships: Style of conflict resolution and relationship satisfaction. Journal of Social and Personal Relationships, 6, 505-510.
Popper, M., & Amit, K. (2009). Attachment and leader?s development via experiences. Leadership Quarterly, 20, 749-763. doi:10.1016/j.leaqua.2009.06.005
Radecki-Bush, C., Farrell, A. D., & Bush, J. I. (1993). Predicting jealous responses: The influence of adult attachment and depression on threat appraisal. Journal of Social and Personal Relationships, 10, 569-588.
Reis, H. T., & Shaver, P. R. (1988). Intimacy as an interpersonal process. In S. Duck (Ed.), Handbook of research in personal relationships (pp. 367-389). London: Wiley.
Reizer, A., Ein-Dor, T., & Shaver, P. (2014). The avoidance cocoon: Examining the interplay between attachment and caregiving in predicting relationship satisfaction. European Journal of Social Psychology, 44, 774-786. doi: 10.1002/ejsp.2057
Rholes, W. S., Simpson, J. A., & Orina, M. M. (1999). Attachment and anger in an anxiety-provoking situation. Journal of Personality and Social Psychology, 76, 940-957.
Rholes, W. S., Simpson, J. A., Campbell, L., & Grich, J. (2001). Adult attachment and the transition to parenthood. Journal of Personality and Social Psychology, 81, 421-435.
Richards, D. A., & Schat, A. C. (2011). Attachment at (not to) work: Applying attachment theory to explain individual behavior in organizations. Journal of Applied Psychology, 96, 169-182. doi: 10.1037/a0020372
Ronen, S., & Mikulincer, M. (2013). Predicting employees? satisfaction and burnout from managers' attachment and caregiving orientations. European Journal of Work and Organizational Psychology, 21, 828-849. doi: 10.1080/1359432X.2011.595561
Rothbard, J. C., & Shaver, P. R. (1994). Continuity of attachment across the life span. In M.B. Sperling & W.H. Berman (Eds.), Attachment in adults (pp. 31-71). New York: Guildford.
Rowe, A., & Carnelley, K. B. (2003). Attachment style differences in the processing of attachment-relevant information: Primed-style effects on recall, interpersonal expectations, and affect. Personal Relationships, 10, 59-75.
Rusbult, C. E., Verette, J., Whiteney, G. A., Slovik, L. F., & Lipkus, I. (1991). Accommodation processes in close relationships: Theory and preliminary empirical evidence. Journal of Personality and Social Psychology, 60, 53-78.
Rutter, M. (1995). Clinical Implications of attachment Concepts: Retrospect and Prospect. Journal of Child Psychology & Psychiatry, 36, 549-571.
Scharfe, E., & Bartholomew, K. (1994). Reliability and stability of adult attachment patterns. Personal Relationships, 1, 23-43.
Scharfe, E., & Bartholomew, K. (1995). Accommodation and attachment representations in young couples. Journal of Social and Personal Relationships, 12, 389-401.
Schirmer, L. L., & Lopez, F. G. (2001). Probing the social support and work strain relationship among adult workers: Contributions of adult attachment orientations. Journal of Vocational Behavior, 59, 17-33.
Selcuk, E., Zayas, V., Gunaydin, G., Hazan, C., & Kross, E. (2012). Mental representations of attachment figures facilitate recovery following upsetting autobiographical memory recall. Journal of Personality and Social Psychology, 103, 362-378. doi: 10.1037/a0028125
Sharpsteen, D. J., & Kirkpatrick, L. A. (1997). Romantic jealousy and adult romantic attachment. Journal of Personality and Social Psychology, 72, 627-640.
Shaver, P. R., & Hazan, C. (1988). A biased overview of the study of love. Journal of Social and Personal Relationships, 5, 473-501.
Shaver, P. R., & Mikulincer, M. (2002). Attachment-related psychodynamics. Attachment and human development, 4, 133-161.
Shaver, P. R., & Mikulincer, M. (2004). What do self-report attachment measures assess? In W. S. Rholes & J. A. Simpson (Eds.), Adult attachment: Theory, research and clinical implications (pp. 17-54). London: Guilford Press.
Shaver, P. R., Belsky, J., & Brennan, K. A. (2000). Comparing measures of adult attachment: An examination of interview and self-report methods. Personal Relationships, 7, 25-43.
Shaver, P. R., Collins, N., & Clark, C. L. (Eds.). (1996). Attachment styles and internal working models of self and relationship partners. Hillsdale, NJ, England: Lawrence Erlbaum Associates, Inc.
Shaver, P. R., Schachner, D. A., & Mikulincer, M. (2005). Attachment style, excessive reassurance seeking, relationship processes, and depression. Personality and Social Psychology Bulletin, 31, 343-359.
Shiota, M. N., Keltner, D., & John, O. P. (2006). Positive emotion dispositions differentially associated with Big Five personality and attachment style. Journal of Positive Psychology, 1, 61-71.
Sibley, C. G., & Liu, J. H. (2004). Short-term temporal stability and factor structure of the revised experiences in close relationships (ECR-R) measure of adult attachment. Personality and Individual Differences, 36, 969-975.
Simpson, J. A. (1990). Influence of attachment styles on romantic relationships. Journal of Personality and Social Psychology, 59, 971-980.
Simpson, J. A., & Rholes, W. S. (1998). Attachment theory and close relationships: (1998). Attachment theory and close relationships.
Simpson, J. A., Rholes, W. S., & Nelligan, J. S. (1992). Support seeking and support giving within couples in an anxiety-provoking situation: The role of attachment styles. Journal of Personality and Social Psychology, 62, 434-446.
Simpson, J. A., Rholes, W., & Phillips, D. (1996). Conflict in close relationships: An attachment perspective. Journal of Personality and Social Psychology, 71, 899-914.
Simpson, J. A., Winterheld, H. A., Rholes, W. S., & Orina, M. M. (2007). Working models of attachment and reactions to different forms of caregiving from romantic partners. Journal of Personality and Social Psychology, 93, 466-477.
Smith, E. R., Murphy, J., & Coates, S. (1999). Attachment to groups: Theory and measurement. Journal of Personality and Social Psychology, 77, 94-110.
Srivastava, S., & Beer, J. S. (2005). How self-evaluations relate to being liked by others: Integrating sociometer and attachment perspectives. Journal of Personality and Social Psychology, 89, 966-977.
Sroufe, L. A., Schork, E., Motti, F., Lawroski, N., & LaFreniere, P. (1984). The role of affect in social competence. In C. E. Izard, J. Kagan, & R. Zajonc (Eds.), Emotions, cognition, and behavior (pp. 289-319). New York: Plenum Press.
Sumer, H. C., & Knight, P. A. (2001). How do people with different attachment styles balance work and family? A personality perspective on work-family linkage. Journal of Applied Psychology, 86, 653-663.
Thompson, R. A. (2008). Early attachment and later development: Familiar questions, new answers. In J. Cassidy & P. R. Shaver (Eds.), Handbook of attachment: Theory, research, and clinical applications (2nd ed., pp. 348-365). New York: Guilford Press.
Trinke, S. J., & Bartholomew, K. (1997). Hierarchies of attachment relationships in young adulthood. Journal of Social and Personal Relationships, 14, 603-625.
Van Lange, P., Rusbult, C. E., Drigotas, S. M., Arriaga, X. B., Witcher, B. S., & Cox, C. L. (1997). Willingness to sacrifice in close relationships. Journal of Personality and Social Psychology, 72, 1373-1395.
Vermigli, P., & Alessandro, T. (2006). Attachment and field dependence: Individual differences in information processing. European Psychologist, 9, 43-55.
Volling, B. L., McElwain, N. L., Notaro, P. C., & Herrera, C. (2002). Parents' emotional availability and infant emotional competence: Predictors of parent-infant attachment and emerging self-regulation. Journal of Family Psychology, 16, 447-465.
Wallace, J. L., & Vaux, A. (1994). Social support network orientation: The role of adult attachment style. Journal of Social and Clinical Psychology, 12, 354-365.
Warren, S. L., Bost, K. K., Roisman, G. I., Silton, R. L., Spielberg, J. M., Engels, A. S., Choi, E., Sutton, B. P., Miller, G. A., & Heller, W. (2011). Effects of adult attachment and emotional distractors on brain mechanisms of cognitive control. Psychological Science, 21, 1818-1826.
Waters, E., Merrick, S., Treboux, D., Crowell J., & Albersheim, L. (2000). Attachment security in infancy and early adulthood: A 20-year longitudinal study. Child Development, 71, 684-689.
Waters, H. S., & Hou, F. (1987). Children's production and recall of narrative passages. Journal of Experimental Child Psychology, 44, 348-363.
Waters, H. S., & Waters, E. (2006). The attachment working models concept: Among other things, we build script-like representations of secure base experiences. Attachment & Human Development, 8, 185-198.
Waters, H. S., Rodrigues, L. M., & Ridgeway, D. (1998). Cognitive underpinnings of narrative attachment assessment. Journal of Experimental Child Psychology, 71, 211-234.
Wei, M., Liao, K. Y., Ku, T., & Shaffer, P. A. (2011). Attachment, self-compassion, empathy, and subjective well-being among college students and community adults. Journal of Personality, 79, 191-221. doi: 10.1111/j.1467-6494.2010.00677.x
Weinfield, N. S., Sroufe, L. A., & Egelund, B. (2000). Attachment from infancy to early adulthood in a high-risk sample: Continuity, discontinuity, and their correlates. Child Development, 71, 695-702.
Weinfield, N. S., Sroufe, L. A., Egeland, B., & Carlson, E. (2008). Individual differences in infant-caregiver attachment: Conceptual and empirical aspects of security. In J. Cassidy & P. R. Shaver (Eds.), Handbook of attachment: Theory, research, and clinical applications (2nd ed., pp. 78-101). New York: Guilford Press.
Wieselquist, J., Rusbult, C. E., Foster, C. A., & Agnew, C. R. (1999). Commitment, pro-relationship behavior, and trust in close relationships. Journal of Personality and Social Psychology, 77, 942-966.
Yrle, A. C., Hartman, S., & Galle, W. P. (2002). An investigation of relationships between communication style and leader-member exchange. Journal of Communication Management, 6, 257-268.
Last Update: 5/25/2016 |
All right triangles have one right (90-degree) angle, and the hypotenuse is the side opposite that right angle; it is always the longest side of the triangle. The hypotenuse is also very easy to find using a couple of different methods. This article will teach you how to find the length of the hypotenuse using the Pythagorean theorem when you know the lengths of the other two sides of the triangle. It will then teach you to recognize the hypotenuse of some special right triangles that often appear on tests. Finally, it will teach you to find the length of the hypotenuse using the Law of Sines when you only know the length of one side and the measure of one additional angle.
Using the Pythagorean Theorem
1. Learn the Pythagorean Theorem. The Pythagorean Theorem describes the relationship between the sides of a right triangle. It states that for any right triangle with sides of length a and b, and hypotenuse of length c, a² + b² = c².
2. Make sure that your triangle is a right triangle. The Pythagorean Theorem only works on right triangles, and by definition only right triangles can have a hypotenuse. If your triangle contains one angle that is exactly 90 degrees, it is a right triangle and you can proceed.
- Right angles are often notated in textbooks and on tests with a small square in the corner of the angle. This special mark means "90 degrees."
3. Assign variables a, b, and c to the sides of your triangle. The variable "c" will always be assigned to the hypotenuse, or longest side. Choose one of the other sides to be a, and call the other side b (it doesn't matter which is which; the math will turn out the same). Then copy the lengths of a and b into the formula, according to the following example:
- If your triangle has sides of 3 and 4, and you have assigned letters to those sides such that a = 3 and b = 4, then you should write your equation out as: 3² + 4² = c².
4. Find the squares of a and b. To find the square of a number, you simply multiply the number by itself, so a² = a x a. Find the squares of both a and b, and write them into your formula.
- If a = 3, a² = 3 x 3, or 9. If b = 4, then b² = 4 x 4, or 16.
- When you plug those values into your equation, it should now look like this: 9 + 16 = c².
5. Add together the values of a² and b². Enter this into your equation, and this will give you the value for c². There is only one step left to go, and you will have that hypotenuse solved!
- In our example, 9 + 16 = 25, so you should write down 25 = c².
6. Find the square root of c². Use the square root function on your calculator (or your memory of the multiplication table) to find the square root of c². The answer is the length of your hypotenuse!
- In our example, c² = 25. The square root of 25 is 5 (5 x 5 = 25, so Sqrt(25) = 5). That means c = 5, the length of our hypotenuse!
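If you want to do the same calculation in code, here is a minimal C sketch of these steps; the function name hypotenuse and the 3-4-5 example values are just illustrative.

```c
#include <math.h>
#include <stdio.h>

/* c = sqrt(a^2 + b^2), the Pythagorean Theorem solved for the hypotenuse. */
double hypotenuse(double a, double b) {
    return sqrt(a * a + b * b);
    /* The C math library's hypot(a, b) computes the same value. */
}

int main(void) {
    /* The 3-4-5 example from the steps above. */
    printf("c = %.1f\n", hypotenuse(3.0, 4.0)); /* prints c = 5.0 */
    return 0;
}
```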
Finding the Hypotenuse of Special Right Triangles
1. Learn to recognize Pythagorean Triple Triangles. The side lengths of a Pythagorean triple are integers that fit the Pythagorean Theorem. These special triangles appear frequently in geometry textbooks and on standardized tests like the SAT and the GRE. If you memorize the first 2 Pythagorean triples in particular, you can save yourself a lot of time on these tests because you can immediately know the hypotenuse of one of these triangles just by looking at the side lengths!
- The first Pythagorean triple is 3-4-5 (3² + 4² = 5², 9 + 16 = 25). When you see a right triangle with legs of length 3 and 4, you can instantly be certain that the hypotenuse will be 5 without having to do any calculations.
- The ratio of a Pythagorean triple holds true even when the sides are multiplied by another number. For example, a right triangle with legs of length 6 and 8 will have a hypotenuse of 10 (6² + 8² = 10², 36 + 64 = 100). The same holds true for 9-12-15, and even 1.5-2-2.5. Try the math and see for yourself!
- The second Pythagorean triple that commonly appears on tests is 5-12-13 (5² + 12² = 13², 25 + 144 = 169). Also be on the lookout for multiples like 10-24-26 and 2.5-6-6.5.
2. Memorize the side ratios of a 45-45-90 right triangle. A 45-45-90 right triangle has angles of 45, 45, and 90 degrees, and is also called an isosceles right triangle. It occurs frequently on standardized tests, and is a very easy triangle to solve. The ratio between the sides of this triangle is 1:1:Sqrt(2), which means that the lengths of the legs are equal, and the length of the hypotenuse is simply the leg length multiplied by the square root of two.
- To calculate the hypotenuse of this triangle based on the length of one of the legs, simply multiply the leg length by Sqrt(2).
- Knowing this ratio comes in especially handy when your test or homework question gives you the side lengths in terms of variables instead of integers.
3. Learn the side ratios of a 30-60-90 right triangle. This triangle has angle measurements of 30, 60, and 90 degrees, and occurs when you cut an equilateral triangle in half. The sides of the 30-60-90 right triangle always maintain the ratio 1:Sqrt(3):2, or x:Sqrt(3)x:2x. If you are given the length of one leg of a 30-60-90 right triangle and are asked to find the hypotenuse, it is very easy to do:
- If you are given the length of the shortest leg (opposite the 30-degree angle), simply multiply the leg length by 2 to find the length of the hypotenuse. For instance, if the length of the shortest leg is 4, you know that the hypotenuse length must be 8.
- If you are given the length of the longer leg (opposite the 60-degree angle), multiply that length by 2/Sqrt(3) to find the length of the hypotenuse. For instance, if the length of the longer leg is 4, you know that the hypotenuse length must be about 4.62.
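For readers who prefer code, here is a small C sketch of these two special-triangle shortcuts; the function names are made up for this example.

```c
#include <math.h>
#include <stdio.h>

/* 45-45-90 triangle: hypotenuse = leg * sqrt(2). */
double hyp_45_45_90(double leg) {
    return leg * sqrt(2.0);
}

/* 30-60-90 triangle: hypotenuse = 2 * shorter leg, or longer leg * 2 / sqrt(3). */
double hyp_30_60_90_short(double short_leg) {
    return 2.0 * short_leg;
}

double hyp_30_60_90_long(double long_leg) {
    return long_leg * 2.0 / sqrt(3.0);
}

int main(void) {
    printf("%.2f\n", hyp_45_45_90(1.0));       /* about 1.41 */
    printf("%.2f\n", hyp_30_60_90_short(4.0)); /* 8.00 */
    printf("%.2f\n", hyp_30_60_90_long(4.0));  /* about 4.62 */
    return 0;
}
```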
Finding the Hypotenuse Using the Law of Sines
1. Understand what "sine" means. The terms "sine," "cosine," and "tangent" all refer to various ratios between the angles and/or sides of a right triangle. In a right triangle, the sine of an angle is defined as the length of the side opposite the angle divided by the hypotenuse of the triangle. The abbreviation for sine found in equations and on calculators is sin.
2. Learn to calculate sine. Even a basic scientific calculator will have a sine function. Look for a key marked sin. To find the sine of an angle, you will usually press the sin key and then enter the angle measurement in degrees. On some calculators, however, you must enter the degree measurement first and then the sin key. You will have to experiment with your calculator or check the manual to find out which it is.
- To find the sine of an 80-degree angle, you will either need to key in sin 80 followed by the equals sign or enter key, or 80 sin. (The answer is approximately 0.985. If your calculator returns -0.9939, it is set to radians rather than degrees.)
- You can also type in "sine calculator" into a web search, and find a number of easy-to-use calculators that will remove any guesswork.
3. Learn the Law of Sines. The Law of Sines is a useful tool for solving triangles. In particular, it can help you find the hypotenuse of a right triangle if you know the length of one side, and the measure of one other angle in addition to the right angle. For any triangle with sides a, b, and c, and angles A, B, and C, the Law of Sines states that a / sin A = b / sin B = c / sin C.
- The Law of Sines can actually be used to solve any triangle, but only a right triangle will have a hypotenuse.
4. Assign the variables a, b, and c to the sides of your triangle. The hypotenuse (longest side) must be "c". For the sake of simplicity, label the side with the known length as "a," and the other "b". Then assign variables A, B, and C to the angles of the triangle. The right angle opposite the hypotenuse will be "C". The angle opposite side "a" is angle "A," and the angle opposite side "b" is "B".
5. Calculate the measurement of the third angle. Because it is a right angle, you already know that C = 90 degrees, and you also know the measure of A or B. Since the internal degree measurement of a triangle must always equal 180 degrees, you can easily calculate the measurement of the third angle using the following formula: 180 – (90 + A) = B. You can also reverse the equation such that 180 – (90 + B) = A.
- For example, if you know that A = 40 degrees, then B = 180 – (90 + 40). Simplify this to B = 180 – 130, and you can quickly determine that B = 50 degrees.
6. Examine your triangle. At this point, you should know the degree measurements of all three angles, and the length of side a. It is now time to plug this information into the Law of Sines equation to determine the lengths of the other two sides.
- To continue our example, let's say that the length of side a = 10. Angle C = 90 degrees, angle A = 40 degrees, and angle B = 50 degrees.
7. Apply the Law of Sines to your triangle. We just need to plug our numbers in and solve the following equation to determine the length of hypotenuse c: length of side a / sin A = length of side c / sin C. This might still look a bit intimidating, but the sine of 90 degrees is a constant, and always equals 1! Our equation can thus be simplified to: a / sin A = c / 1, or just a / sin A = c.
8. Divide the length of side a by the sine of angle A to find the length of the hypotenuse! You can do this in two separate steps, by first calculating sin A and writing it down, and then dividing a by that value. Or you can key it all into the calculator at the same time. If you do, remember to include parentheses after the division sign. For example, key in either 10 / (sin 40) or 10 / (40 sin), depending on your calculator.
- Using our example, we find that sin 40 = 0.64278761. To find the value of c, we simply divide the length of a by this number, and learn that 10 / 0.64278761 = 15.6, the length of our hypotenuse!
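The same Law of Sines shortcut can be written as a short C sketch; the function name and the 10 / 40-degree example are taken from the walkthrough above. Note that C's sin() expects radians, so the angle is converted first.

```c
#include <math.h>
#include <stdio.h>

/* Hypotenuse from one leg and its opposite angle, in degrees:
   c = a / sin(A), because sin(C) = sin(90 degrees) = 1. */
double hypotenuse_from_angle(double side_a, double angle_a_degrees) {
    const double pi = 3.14159265358979323846;
    return side_a / sin(angle_a_degrees * pi / 180.0); /* convert to radians */
}

int main(void) {
    /* a = 10, A = 40 degrees, as in the example above. */
    printf("c = %.1f\n", hypotenuse_from_angle(10.0, 40.0)); /* about 15.6 */
    return 0;
}
```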
Q: How do I find the area of a triangle if only the length of the hypotenuse is given?
A: To get the area of a triangle, you need the base and the height, or the lengths of all three sides. Since the other sides and angles are unknown, it is impossible from the hypotenuse alone.
Sources and Citations
- http://www.mathsisfun.com/definitions/hypotenuse.html
- http://mathematica.ludibunda.ch/pythagoras6.html
- https://www.wikihow.com/Use-the-Pythagorean-Theorem
- http://www.dummies.com/how-to/content/working-with-pythagorean-triple-triangles.html
- http://www.regentsprep.org/regents/math/algtrig/att2/ltri45.htm
- http://www.dummies.com/how-to/content/identifying-the-30-60-90-degree-triangle.html
- https://www.mathsisfun.com/definitions/sine.html
- http://www.rapidtables.com/calc/math/Sin_Calculator.htm
- http://www.mathsisfun.com/algebra/trig-sine-law.html
DNA microarrays are tools used to analyze and measure the activity of genes. Researchers can use microarrays and other methods to measure changes in gene expression and thereby learn how cells respond to a disease or to some other challenge.
Humans have 30,000 to 70,000 genes, each consisting of a sequence of bases, the building blocks of the hereditary material DNA. Before they can carry out their function, genes are copied to make messenger RNA (mRNA), in a process called transcription. This molecule is in turn used as a template for the synthesis of a protein molecule (translation). This entire process, including transcription of RNA and translation of protein, is referred to as gene expression. Only a subset of the full set of genes is expressed in a given tissue at a given time. In fact, this differential pattern of gene expression is ultimately what distinguishes lung tissue from skin, liver, and muscle tissue.
Even within a given tissue type, different genes are expressed at different times. For example, there is a very tightly controlled sequence of gene expression during the course of embryonic development. Tissues also respond to metabolic and other challenges. The pattern of gene expression changes in the liver in response to the consumption of a large meal. Similarly, muscle gene expression changes in response to vigorous exercise or injury. Drugs can also affect gene expression. Researchers can use microarrays and other methods to measure these changes in gene expression, and from them learn about how cells respond to disease or to other challenges.
Microarrays measure gene expression by taking advantage of the process of molecular hybridization. DNA is made up of four bases: guanine, adenine, cytosine, and thymine, which are abbreviated G, A, C, and T, respectively. G and C can bind to one another, forming a base pair, as can A and T, but no other combinations of bases can form base pairs. G and C are said to be "complementary" bases, as are A and T.
The bases on each of the two strands of DNA that make up a chromosome are complementary to the bases on the opposite strand. Long pieces of DNA will not bind to each other (or "hybridize") unless they are complementary. Hybridization allows researchers to test whether two pieces of DNA are complementary. If they bind to one another (hybridize) then they are opposite strands of a single gene. If they do not bind to one another, then they are unrelated.
Hybridization can be used to measure the levels of hundreds of different mRNAs within a given tissue, thereby providing a picture of gene expression within that tissue. RNA is isolated from the tissue of interest and allowed to hybridize to a solid support to which many different DNA pieces, from many different genes, have been attached. Because the RNA is labeled with a fluorescent tag, the amount bound to a given spot can be measured. The fluorescent intensity of each spot is a measure of the level of that mRNA that was expressed in the original tissue. In this way, the levels of expression of up to 12,000 different genes can be measured with a single microarray.
There are two basic types of microarrays. One type is created by a company called Affymetrix. Affymetrix manufactures silicon and glass chips that resemble semiconductor chips and that are manufactured using the same photolithographic techniques. These chips have sets of very short (20 basepair) stretches of DNA representing each gene. A second type of microarray is commonly called a printed array and is made by spotting small amounts of DNA on glass slides. These arrays frequently have smaller numbers of genes on each slide, but researchers can easily modify them for specific experiments.
Microarrays produce enormous amounts of data, and the analysis of that data can be quite complex. The sheer volume of data requires special software and a database in which to store both the measurements and the results of the analyses. The exact form that the analysis takes depends on the nature of the experiment being performed. If just two samples are being directly compared (for example, gene expression in mouse heart tissue is compared with and without the administration of a drug), relatively straightforward statistical tests can be performed. If larger numbers of samples are being measured, the same tests can be performed between two samples at a time, but more sophisticated, "clustering" analyses can be performed as well.
Clustering analysis identifies groups of genes that react the same way across several different samples. For example, researchers might analyze gene expression in heart tissue from a set of mouse embryos that range in age from five to fifteen days. A clustering analysis would be able to detect a group of genes whose expression levels all increase slowly from days five to nine, peak at day ten and then fall to zero by day twelve. Only genes that have this precise pattern of expression would cluster together, in this type of analysis.
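As a concrete illustration of the simplest two-sample comparison, the sketch below computes a per-spot log2 ratio of two fluorescence intensities in C. This is only a toy example with made-up numbers, not part of any particular analysis package, and it assumes the intensities have already been background-corrected and normalized.

```c
#include <math.h>
#include <stdio.h>

/* For each spot, the log2 ratio of "treated" to "control" intensity:
   +1 means roughly two-fold higher expression in the treated sample,
   -1 means roughly two-fold lower. */
void log2_ratios(const double *treated, const double *control,
                 double *out, int n_spots) {
    for (int i = 0; i < n_spots; i++) {
        out[i] = log2(treated[i] / control[i]);
    }
}

int main(void) {
    double treated[] = { 1200.0, 300.0, 800.0 }; /* made-up intensities */
    double control[] = {  600.0, 300.0, 100.0 };
    double ratio[3];
    log2_ratios(treated, control, ratio, 3);
    for (int i = 0; i < 3; i++) {
        printf("spot %d: log2 ratio = %.2f\n", i, ratio[i]);
    }
    return 0;
}
```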
The Role of Bioinformatics
One of the tremendous difficulties in performing any kind of expression analysis is the manipulation of very large amounts of biological data, a field of study called bioinformatics. The usefulness of gene expression data depends on how much information is available for each identified gene. In other words, the identities of the genes associated with each spot on a microarray must be accessible as the analysis is done.
Descriptions and classifications of each gene on the array must be readily available, as no researcher can remember such details about the tens of thousands of genes that may be involved in the analysis. An analysis might be done many times, with slight changes in the parameters of the clustering algorithm each time. The genes that cluster together are examined at the end of each analysis, to look for reproducible patterns. This analysis must be done with the full understanding of the biology of the system being studied. Clusters of genes are most informative if they group in a biologically reasonable way. For this reason, microarray expression analysis is frequently exploratory. The results of the analysis are used to suggest additional, corroborative experiments.
Another bioinformatics challenge in gene expression studies is collecting information about the samples under analysis and storing the information in databases. If gene expression patterns of one hundred different tumor samples are being examined, it may be necessary to restrict the analysis to subgroups of the tumors in order to observe patterns in the data. This subgrouping or stratification of the samples is best performed on the basis of independently determined properties of those samples. For example, samples from only metastatic cancer cells could be grouped together for analysis and compared with those from nonmetastatic cancer cells, or the age of the patient at the onset of disease could be used to segregate the samples into different groups. Such subgroup analysis can only be done if complete information is collected and stored for all samples.
Applications of Microarray Analysis
Microarrays are new enough that their applications are still being developed. Microarray expression analysis can be used to help study complex, multigenic diseases such as Parkinson's disease (PD). The great challenge in understanding the genetics of such disorders is identifying susceptibility genes, which are genes that increase a person's risk of developing the disease. Frequently, the first step in discovering a susceptibility gene is linkage analysis. This technique can identify regions of a chromosome that harbor such a gene, but the regions that are identified are frequently very large, containing hundreds of genes. Screening through all of these genes individually is tremendously slow and labor-intensive. Expression analysis using microarrays can help prioritize these genes for further analysis by providing independent lines of evidence that specific genes are involved in the disease process.
Brain tissue can be collected through anatomical donations from patients with Parkinson's disease and from unaffected individuals, for example. Regions of the brain that are especially affected in Parkinson's patients can be compared to the same regions from unaffected individuals. Genes whose levels of expression vary can be identified. Hundreds or thousands of genes may be identified in this way, but they can then be compared to those that are found, through linkage analysis, to be linked to Parkinson's disease. There may be only tens of genes common to both groups. These genes can be prioritized for detailed examination through other methods. The key here is that expression analysis and linkage analysis provide independent evidence of a given gene's involvement in a disease process. It is the synthesis of information from these two independent lines of evidence that makes this approach powerful.
Another very powerful application of microarray expression data is called classification analysis. This technique uses gene expression data to separate tissue samples into two or more groups. For example, one type of tumor may respond very well to an aggressive program of chemotherapy treatment, while another type may respond better to surgical removal followed by radiation therapy. Further, these two types of tumors may be difficult or impossible to tell apart under a microscope. Choosing the correct method of treatment and applying that treatment early in the course of disease could significantly improve a patient's chances of survival.
In such a case, expression analysis can be used to give a detailed picture of the genes that are expressed in the two types of tumor. A training set (a small set of samples in each category) can be used to find specific patterns of gene expression that are characteristic for each type of tumor. New tumors can then be analyzed, and their expression profiles can be used to predict the group to which they belong. These approaches are used with great success to refine the clinical management of cancer patients. A 2001 study by S. Dhanasekaran, "Delineation of Prognostic Biomarkers in Prostate Cancer," offers an example of this kind of work. Additional applications of microarrays are still being developed.
Gene expression analysis can also be done using a powerful technique called serial analysis of gene expression (SAGE). Like microarrays, SAGE starts by isolating RNA from the tissue of interest. This RNA is then processed through a long series of steps resulting in the isolation of a set of very short sequences, called tags, from each transcript in the cell. These tags are converted into corresponding segments of DNA. These pieces of DNA, which are 14 base pairs long, are then linked together into long chains, and their sequence of bases is determined. Tens of thousands of these SAGE tags are sequenced from each tissue that is being studied. The tags corresponding to a given gene from one tissue are counted and compared to those from the same gene in another tissue.
For example, a colon cancer tumor sample might generate 50,000 SAGE tags, thirty-three of which correspond to a specific gene. A second library made from normal colon cells might also contain 50,000 tags, eleven of which correspond to the same gene. This would indicate that the gene is expressed at a level three times as great in tumor cells as in normal cells.
SAGE data is significantly more difficult and expensive to produce than microarray data, but it offers the advantage of providing very precise and quantitative measurements of expression levels. SAGE has the further advantage that it can detect genes that have not been previously characterized. Such unknown genes cannot be detected by microarrays, because researchers must first know their sequence before they can place them on the array. SAGE therefore can be used as a gene discovery tool.
SAGE has been used most extensively in cancer research. Investigators in the Cancer Genome and Anatomy Project have created more than one hundred SAGE libraries from normal and cancerous tissue. Analysis of these libraries has revealed a great deal about the way that gene expression changes in cancerous tissue, which in turn has provided insight into new diagnostic and treatment options.
SAGE has also been used as a tool to help calculate the total number of genes in the human body, as well as to describe the ways in which genes are regulated and processed at different times. Microarrays and SAGE analysis are only two of the many ways that scientists have examined gene expression. As these techniques become more refined, and as new techniques are developed, they will provide a powerful tool to investigate how the incredible diversity and complexity of our tissues can arise, even though every cell in our bodies contains exactly the same set of genes.
see also Bioinformatics; Cancer; Complex Traits; Gene Discovery; In Situ Hybridization; Linkage and Recombination; Mapping.
Michael A. Hauser
Bloom, Mark V., Greg A. Freyer, and David A. Micklos. Laboratory DNA Science: An Introduction to Recombinant DNA Techniques and Methods of Genome Analysis. Menlo Park, CA: Addison-Wesley, 1996.
Dhanasekaran, S. M., et al. "Delineation of Prognostic Biomarkers in Prostate Cancer." Nature 412 (2001): 822-826.
Cancer Genome Anatomy Project. National Cancer Institute. <http://cgap.nci.nih.gov>.
"DNA Microarrays." Genetics. . Encyclopedia.com. (August 18, 2017). http://www.encyclopedia.com/medicine/medical-magazines/dna-microarrays
"DNA Microarrays." Genetics. . Retrieved August 18, 2017 from Encyclopedia.com: http://www.encyclopedia.com/medicine/medical-magazines/dna-microarrays
DNA Chips and Microarrays
DNA chips and microarrays
A DNA (deoxyribonucleic acid) chip is a solid support (typically glass or nylon) onto which are fixed single strands of DNA sequences. The sequences are made synthetically and are arranged in a pattern that is referred to as an array. DNA chips are a means by which a large amount of DNA can be screened for the presence of target regions. Furthermore, samples can be compared to assess the effects of a treatment, environmental condition, or other factor on gene activity. One example of the use of a DNA microarray is screening for the development of a mutation in a gene. The original gene would be capable of binding to the synthetic DNA target, whereas the mutated gene does not bind. Such an experiment has been exploited in the search for genetic determinants of antibiotic resistance, and in the manufacture of compounds to which the resistant microorganisms will be susceptible.
A gene chip is wafer-like in appearance, and resembles a microtransistor chip. However, instead of transistors, a DNA chip contains an orderly and densely packed array of DNA species. Arrays are made by spotting DNA samples over the surface of the chip in a patterned manner. The spots can be applied by hand or with robotic automation. The latter can produce very small spots, which collectively are termed a microarray.
Each spot in an array is, in reality, a single-stranded piece of DNA. Depending upon the sequence of the tethered piece of DNA, a complementary region of sample DNA can specifically bind to it. The design of the array depends on the nature of the experiment.
The synthetic DNA is constructed so that known sequences are presented to whatever sample is subsequently applied to the chip. DNA, or ribonucleic acid (typically messenger RNA), from the samples being examined is treated so as to separate the double helix of DNA into its two single strands, followed by enzymatic treatment that cuts the DNA into smaller pieces. The pieces are labeled with fluorescent dyes. For example, the DNA from one sample of bacteria could be tagged with a green fluorescent dye (a dye that fluoresces green under illumination with a certain wavelength of light) and the DNA from a second sample of bacteria could be tagged with a red fluorescent dye (which fluoresces red under illumination with the same wavelength of light). Both sets of DNA are flooded over the chip. Where the sample DNA finds a complementary piece of synthetic DNA, binding will occur. Finally, the nature of the bound sample DNA is determined by illuminating the chip and observing the presence and pattern of green and red regions (usually dots).
A microarray can also be used to determine the level of expression of a gene. For example, an array can be constructed such that the messenger RNA of a particular gene will bind to the target. Thus, the bound RNAs represent genes that were being actively transcribed, or at least had been recently. By monitoring genetic expression, the response of microorganisms to a treatment or condition can be examined. As an example, DNA from a bacterial species growing in suspension can be compared with the same species growing as a surface-adherent biofilm in order to probe the genetic nature of the alterations that occur in the bacteria upon association with a surface. Since the method detects DNA, the survey can be all-encompassing, assaying for genetic changes to protein, carbohydrate, lipid, and other constituents in the same experiment.
The power of DNA chip technology has been recently illustrated in the Human Genome Project. This effort began in 1990, with the goal of sequencing the complete human genome. The projected time for the project's completion was 40 years. Yet, by 2001, the sequencing was essentially complete. The reason for the project's rapid completion is the development of the gene chip.
Vast amounts of information are obtained from a single experiment. Up to 260,000 genes can be probed on a single chip. The analysis of this information has spawned a new science called bioinformatics, where biology and computing mesh.
Gene chips are having a profound impact on research. Pharmaceutical companies are able to screen for gene-based drugs much faster than before. In the future, DNA chip technology will extend to the office of the family physician. For example, a patient with a sore throat could be tested with a single-use, disposable, inexpensive gene chip in order to identify the source of the infection and its antibiotic susceptibility profile. Therapy could commence sooner and would be precisely targeted to the causative infectious agent.
See also DNA (Deoxyribonucleic acid); DNA chips and micro arrays; DNA hybridization; Genetic identification of microorganisms; Laboratory techniques in immunology; Laboratory techniques in microbiology; Molecular biology and molecular genetics
"DNA Chips and Microarrays." World of Microbiology and Immunology. . Encyclopedia.com. (August 18, 2017). http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/dna-chips-and-microarrays
"DNA Chips and Microarrays." World of Microbiology and Immunology. . Retrieved August 18, 2017 from Encyclopedia.com: http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/dna-chips-and-microarrays
"DNA microarray." A Dictionary of Biology. . Encyclopedia.com. (August 18, 2017). http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/dna-microarray
"DNA microarray." A Dictionary of Biology. . Retrieved August 18, 2017 from Encyclopedia.com: http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/dna-microarray
"DNA chip." A Dictionary of Biology. . Encyclopedia.com. (August 18, 2017). http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/dna-chip
"DNA chip." A Dictionary of Biology. . Retrieved August 18, 2017 from Encyclopedia.com: http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/dna-chip
"DNA probe." A Dictionary of Biology. . Encyclopedia.com. (August 18, 2017). http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/dna-probe
"DNA probe." A Dictionary of Biology. . Retrieved August 18, 2017 from Encyclopedia.com: http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/dna-probe |
In this tutorial, you will learn what a binary search tree is, how different operations like insertion, deletion, and searching are done in a binary search tree (with examples in C), and what the applications of binary search trees are.
A Binary Search Tree is a special binary tree used for the efficient storage of data. The nodes in a binary search tree are arranged in order. It also allows for the simple insertion and deletion of items.
- It is a binary tree where each node can have a maximum of two children.
- It is called a search tree, since an element can be searched for and found with an average running time of f(n) = O(log₂ n).
A binary search tree has the following properties:
- All the elements in the left subtree are less than the root node.
- All the elements in the right subtree are greater than the root node.
- These properties hold recursively for the left and right subtrees, i.e., they are also BSTs.
An example for a binary search tree is given below:
Operations on a Binary Search Tree
A binary search tree can perform three basic operations: searching, insertion, and deletion.
Searching in a Binary Search Tree
The search operation finds whether or not a particular value exists in a tree. Since the binary search tree is ordered, the search can be easily made.
Suppose we want to find X in a binary tree having root R.
- Compare the item, X, with the root, R, of the tree.
- If X < R, then recursively search the left subtree of the tree.
- If X > R, then recursively search the right subtree of the tree.
Repeat these steps until we reach an empty subtree or locate a node R such that R = X. That is, we continue the search from the root down the tree until we reach X or a terminal node.
IF ROOT = NULL
    Return NULL
ELSE IF ROOT -> DATA = X
    Return ROOT
ELSE IF X < ROOT -> DATA
    Return Search(ROOT -> LEFT, X)
ELSE
    Return Search(ROOT -> RIGHT, X)
[END OF IF]
Consider the following BST and suppose we want to find 43. Start from the root node, 50.
Since 43 < 50, move to the left subtree of the root.
Now compare 43 and 41. Since
43 > 41, move to the right subtree.
Now compare 43 and 47. Since 43 < 47, move to the left subtree, where 43 is found.
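Since the tutorial promises examples in C, here is a minimal sketch of the search operation following the pseudocode above; the Node structure is a simple assumed definition, not a prescribed one.

```c
#include <stdlib.h>

typedef struct Node {
    int data;
    struct Node *left, *right;
} Node;

/* Returns the node containing x, or NULL if x is not in the tree. */
Node *search(Node *root, int x) {
    if (root == NULL || root->data == x)
        return root;
    if (x < root->data)
        return search(root->left, x);
    return search(root->right, x);
}
```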
Inserting in a Binary Search Tree
The insert operation is similar to the search operation. This is because first, you have to find the correct position where the new element is to be inserted.
- If the tree is NULL, insert new element as the root.
- Otherwise, depending on the value, we continue to the right or left subtree, and when we reach a position where the left or right subtree is null, we insert the new node there.
IF TREE = NULL
    Allocate memory for TREE
    SET TREE -> DATA = VAL
    SET TREE -> LEFT = TREE -> RIGHT = NULL
ELSE IF VAL < TREE -> DATA
    Insert(TREE -> LEFT, VAL)
ELSE
    Insert(TREE -> RIGHT, VAL)
[END OF IF]
Consider the following BST and suppose we want to insert a new node 42.
Start searching for element 42 from the root. The search will stop when we reach node 43.
Since 43 > 42 and node 43 has no left subtree, we insert 42 as the left child of node 43.
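A matching C sketch of the insert operation is shown below, reusing the same assumed Node structure; the small main() rebuilds the example tree and adds 42.

```c
#include <stdlib.h>

typedef struct Node {
    int data;
    struct Node *left, *right;
} Node;

/* Inserts val and returns the (possibly new) root of the subtree. */
Node *insert(Node *root, int val) {
    if (root == NULL) {
        Node *n = malloc(sizeof(Node));
        n->data = val;
        n->left = n->right = NULL;
        return n;
    }
    if (val < root->data)
        root->left = insert(root->left, val);
    else
        root->right = insert(root->right, val);
    return root;
}

int main(void) {
    int keys[] = { 50, 41, 47, 43, 58 };
    Node *root = NULL;
    for (int i = 0; i < 5; i++)
        root = insert(root, keys[i]);
    root = insert(root, 42); /* 42 ends up as the left child of 43 */
    return 0;
}
```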
Deleting from a Binary Search Tree
The deletion operation must be done very carefully so that the properties of the binary search tree are not violated and no nodes are lost.
Deletion is done in the following three cases:
- Deleting a node that has no children.
- Deleting a node with one child.
- Deleting a node with two children.
Case 1: Deleting a node that has no children
Here the node we have to delete is a leaf node. So, we can simply delete the node from the tree.
Suppose we want to delete node 58 from the following tree.
Simply delete node 58.
Case 2: Deleting a node with one child.
The following steps are to be done to delete a node that has only a single child.
- Replace the node with its child.
- Remove the child node.
Suppose we want to delete node 47 from the following tree.
Replace node 47 with its child node 43.
Now delete the child node 43 from its original position
Case 3: Deleting a node with two children
The case is when the node we want to delete has 2 children. This case can be handled by the following steps:
- Replace the node's value with its in-order successor.
- Remove the in-order successor from its original position.
Suppose we want to delete node 41 from the following tree.
Replace the value of node 41 with its in-order successor 43.
Now delete node 43 from its original position.
IF TREE = NULL
    Write "VAL not found in the tree"
ELSE IF VAL < TREE -> DATA
    Delete(TREE -> LEFT, VAL)
ELSE IF VAL > TREE -> DATA
    Delete(TREE -> RIGHT, VAL)
ELSE IF TREE -> LEFT AND TREE -> RIGHT
    SET TEMP = findSmallestNode(TREE -> RIGHT)
    SET TREE -> DATA = TEMP -> DATA
    Delete(TREE -> RIGHT, TEMP -> DATA)
ELSE
    SET TEMP = TREE
    IF TREE -> LEFT = NULL AND TREE -> RIGHT = NULL
        SET TREE = NULL
    ELSE IF TREE -> LEFT != NULL
        SET TREE = TREE -> LEFT
    ELSE
        SET TREE = TREE -> RIGHT
    [END OF IF]
    FREE TEMP
[END OF IF]
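Here is a hedged C sketch of the delete operation. It follows the successor-based steps described above (copy the in-order successor's value, then remove the successor from the right subtree) and again assumes the same simple Node structure.

```c
#include <stdlib.h>

typedef struct Node {
    int data;
    struct Node *left, *right;
} Node;

/* Smallest node of a subtree: this is the in-order successor when we
   descend into the right subtree of the node being deleted. */
static Node *find_smallest(Node *root) {
    while (root->left != NULL)
        root = root->left;
    return root;
}

/* Deletes val (if present) and returns the new root of the subtree. */
Node *delete_node(Node *root, int val) {
    if (root == NULL)
        return NULL; /* val not found in the tree */
    if (val < root->data) {
        root->left = delete_node(root->left, val);
    } else if (val > root->data) {
        root->right = delete_node(root->right, val);
    } else if (root->left != NULL && root->right != NULL) {
        /* Two children: copy the in-order successor's value here,
           then delete that successor from the right subtree. */
        Node *succ = find_smallest(root->right);
        root->data = succ->data;
        root->right = delete_node(root->right, succ->data);
    } else {
        /* Zero or one child: splice the node out. */
        Node *child = (root->left != NULL) ? root->left : root->right;
        free(root);
        return child;
    }
    return root;
}
```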
Applications of Binary Search Tree
- Used for indexing and multilevel indexing in the database.
- Used to implement various searching algorithms.
- For dynamic sorting.
- Used for managing virtual memory areas in the Unix kernel. |
Procedural programming is a programming paradigm, derived from imperative programming, based on the concept of the procedure call. Procedures (a type of routine or subroutine) simply contain a series of computational steps to be carried out. Any given procedure might be called at any point during a program's execution, including by other procedures or itself. The first major procedural programming languages appeared circa 1957–1964, including Fortran, ALGOL, COBOL, PL/I and BASIC. Pascal and C were published circa 1970–1972.
Computer processors provide hardware support for procedural programming through a stack register and instructions for calling procedures and returning from them. Hardware support for other types of programming is possible, but no attempt was commercially successful (for example, Lisp machines or Java processors).
Procedures and modularity
Scoping is another technique that helps keep procedures modular. It prevents the procedure from accessing the variables of other procedures (and vice versa), including previous instances of itself, without explicit authorization.
Because of the ability to specify a simple interface, to be self-contained, and to be reused, procedures are a convenient vehicle for making pieces of code written by different people or different groups, including through programming libraries.
Comparison with other programming paradigms
Procedural programming languages are also imperative languages, because they make explicit references to the state of the execution environment. This could be anything from variables (which may correspond to processor registers) to something like the position of the "turtle" in the Logo programming language.
Often, the terms "procedural programming" and "imperative programming" are used synonymously. However, procedural programming relies heavily on blocks and scope, whereas imperative programming as a whole may or may not have such features. As such, procedural languages generally use reserved words that act on blocks, such as if, while, and for, to implement control flow, whereas non-structured imperative languages use goto statements and branch tables for the same purpose.
The focus of procedural programming is to break down a programming task into a collection of variables, data structures, and subroutines, whereas in object-oriented programming it is to break down a programming task into objects that expose behavior (methods) and data (members or attributes) using interfaces. The most important distinction is that while procedural programming uses procedures to operate on data structures, object-oriented programming bundles the two together, so an "object", which is an instance of a class, operates on its "own" data structure.
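A tiny, hypothetical C example may make the contrast concrete: in the procedural style below, the data structure (struct point) and the procedure that operates on it (scale_point) are defined separately, whereas an object-oriented language would typically bundle them into a class with a scale method.

```c
#include <stdio.h>

/* The data structure: just data, no behavior attached. */
struct point {
    double x;
    double y;
};

/* A reusable procedure that operates on the data structure. */
void scale_point(struct point *p, double factor) {
    p->x *= factor;
    p->y *= factor;
}

int main(void) {
    struct point p = { 3.0, 4.0 };
    scale_point(&p, 2.0);               /* a procedure call on external data */
    printf("(%.1f, %.1f)\n", p.x, p.y); /* prints (6.0, 8.0) */
    return 0;
}
```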
A similar comparison can be made between procedural and functional programming. Nomenclature varies between the two, although they have similar semantics:
- Procedures correspond to functions. Both allow the reuse of the same code in various parts of the programs, and at various points of its execution.
- By the same token, procedure calls correspond to function application.
- Functions and their invocations are modularly separated from each other in the same manner, by the use of function arguments, return values, and variable scopes.
The main difference between the styles is that functional programming languages remove or at least deemphasize the imperative elements of procedural programming. The feature set of functional languages is therefore designed to support writing programs as much as possible in terms of pure functions:
- Whereas procedural languages model execution of the program as a sequence of imperative commands that may implicitly alter shared state, functional programming languages model execution as the evaluation of complex expressions that only depend on each other in terms of arguments and return values. For this reason, functional programs can have a free order of code execution, and the languages may offer little control over the order in which various parts of the program are executed. (For example, the arguments to a procedure invocation in Scheme are executed in an arbitrary order.)
- Functional programming languages support (and heavily use) first-class functions, anonymous functions and closures, although these concepts are being included in newer procedural languages.
- Functional programming languages tend to rely on tail call optimization and higher-order functions instead of imperative looping constructs.
Many functional languages, however, are in fact impurely functional and offer imperative/procedural constructs that allow the programmer to write programs in procedural style, or in a combination of both styles. It is common for input/output code in functional languages to be written in a procedural style.
There do exist a few esoteric functional languages (like Unlambda) that eschew structured programming precepts for the sake of being difficult to program in (and therefore challenging). These languages are the exception to the common ground between procedural and functional languages.
In logic programming, a program is a set of premises, and computation is performed by attempting to prove candidate theorems. From this point of view, logic programs are declarative, focusing on what the problem is, rather than on how to solve it.
However, the backward reasoning technique, implemented by SLD resolution, used to solve problems in logic programming languages such as Prolog, treats programs as goal-reduction procedures. Thus clauses of the form:
- H :- B1, …, Bn.
have a dual interpretation, both as procedures
- to show/solve H, show/solve B1 and … and Bn
and as logical implications:
- B1 and … and Bn implies H.
Experienced logic programmers use the procedural interpretation to write programs that are effective and efficient, and they use the declarative interpretation to help ensure that programs are correct.
- "Programming Paradigms".
- "Welcome to IEEE Xplore 2.0: Use of procedural programming languages for controlling production systems". ieeexplore.ieee.org. doi:10.1109/CAIA.1991.120848. S2CID 58175293. Cite journal requires
- Stevenson, Joseph (August 2013). "Procedural programming vs object-oriented programming". neonbrand.com. Retrieved 2013-08-19. |
History of the Ottoman Empire during World War I
The Ottoman Empire participated in World War I as one of the Central Powers. The Ottomans entered the war when they carried out a surprise attack on Russia's Black Sea coast on 29 October 1914, following which Russia declared war on the empire on 1 November 1914. Ottoman forces fought the Allies in the Balkans and the Middle Eastern theatre of World War I. A naval blockade imposed by the Allies and the conscription of farmers caused severe food shortages in the cities, especially in the winter of 1916-17. The Ottomans' defeat in the war in 1918 was crucial in the eventual dissolution of the empire in 1922.
The Ottoman entry into World War I began on 29 October 1914 when it attacked Russia's Black Sea Coast in a surprise naval action. Following the attack, Russia and its allies, Britain and France, declared war on the Ottomans in November 1914. The Ottoman commencement of military action came after three months of formal neutrality, although it had signed a secret alliance with the Central Powers in August 1914.
The political reasons for the Ottoman Sultan's entry into the war are disputed. The Ottoman Empire was an agricultural state in an age of industrialized warfare. The economic resources of the empire were depleted by the cost of the Balkan Wars of 1912 and 1913.
The great land mass of Anatolia lay between the Ottoman army's headquarters in Istanbul and many of the theatres of war. Civilian communications had improved during Abdulhamit II's reign, but the road and rail network was not ready for war. It took more than a month to reach Syria and nearly two months to reach Mesopotamia. Toward the border with Russia, the railway ran only 60 km east of Ankara, and from there it was a 35-day journey to Erzurum. The army used the port of Trabzon as a logistical shortcut to the east. Given the poor condition of Ottoman supply ships, it took less time to reach any of these fronts from London than from the Ottoman War Department.
The empire fell into disorder after its declaration of war alongside Germany. On 11 November a conspiracy against the Germans and the CUP was discovered in Constantinople, in which some CUP leaders were shot. This was followed on 12 November by a revolt in Adrianople against the German military mission. On 13 November a bomb exploded in Enver Pasha's palace, killing five German officers but missing Enver Pasha himself. More anti-German plots followed on 18 November. Committees formed around the country to rid it of those siding with Germany. Army and navy officers protested against the assumption of authority by Germans. On 4 December widespread riots took place throughout the country. On 13 December there were anti-war demonstrations by women in Konak (Izmir) and Erzurum. Throughout December the CUP dealt with mutinies among soldiers in barracks and among naval crews. A conspiracy was also uncovered against the life of Field Marshal von der Goltz, head of the German military mission.
Military power remained firmly in the hands of War Minister Enver Pasha, domestic (civil) matters rested with Interior Minister Talat Pasha, and Cemal Pasha singlehandedly controlled Ottoman Syria. Elsewhere, provincial governors ran their regions with differing degrees of autonomy. Izmir is an interesting case: its governor, Rahmi Bey, behaved almost as if his region were a neutral zone between the warring states.
War with Russia
The Ottoman entrance into the war greatly increased the Triple Entente's military burdens. Russia had to fight the Caucasus Campaign alone and the Persian Campaign alongside the United Kingdom. İsmail Enver Pasha set off for the Battle of Sarikamish with the intention of recapturing Batum and Kars, overrunning Georgia and occupying north-western Persia and its oil fields. Fighting the Russians in the Caucasus, however, the Ottomans lost ground, and over 100,000 soldiers, in a series of battles. Some 60,000 Ottoman soldiers died in the winter of 1916-17 on the Mus-Bitlis section of the front. The Ottomans preferred to keep the Caucasus front militarily quiet, as they needed to regroup reserves to retake Baghdad and Palestine from the British. 1917 and the first half of 1918 were a period of negotiations. On 5 December 1917 the armistice of Erzincan (Erzincan Cease-fire Agreement) was signed between the Russians and the Ottomans at Erzincan, ending armed conflict between Russia and the Ottoman Empire.
On 3 March, Grand Vizier Talat Pasha signed the Treaty of Brest-Litovsk with the Russian SFSR. It stipulated that Bolshevik Russia cede Batum, Kars, and Ardahan. In addition to these provisions, a secret clause was inserted which obligated the Russians to demobilize Armenian national forces.
Between 14 March and April 1918 the Trabzon peace conference was held between the Ottoman Empire and a delegation of the Transcaucasian Diet. At the end of the negotiations, Enver Pasha offered to surrender all ambitions in the Caucasus in return for recognition of the Ottoman reacquisition of the eastern Anatolian provinces granted at Brest-Litovsk. On 5 April, the head of the Transcaucasian delegation, Akaki Chkhenkeli, accepted the Treaty of Brest-Litovsk as a basis for further negotiations and wired the governing bodies urging them to accept this position. The mood prevailing in Tiflis was very different: Tiflis acknowledged the existence of a state of war between itself and the Ottoman Empire.
In April 1918, the Ottoman 3rd Army finally went on the offensive in Armenia. Opposition from Armenian forces led to the Battle of Sardarapat, the Battle of Kara Killisse (1918), and the Battle of Bash Abaran. On 28 May 1918, the Armenian National Council based in Tiflis declared the Democratic Republic of Armenia. The new Republic of Armenia was forced to sign the Treaty of Batum.
In July 1918, the Ottomans confronted the Centrocaspian Dictatorship at the Battle of Baku, with the goal of taking Baku on the Caspian Sea.
War with Britain
The British captured Basra in November 1914 and marched north into Iraq. Initially Ahmed Djemal Pasha was ordered to gather an army in Palestine to threaten the Suez Canal. In response, the Allies, including the newly formed Australian and New Zealand Army Corps ("ANZACs"), opened another front with the Battle of Gallipoli. The army led by Ahmed Djemal Pasha (the Fourth Army), intended to eject the British from Egypt, was stopped at the Suez Canal in February 1915, and again the next summer. The canal was vital to the British war effort. A locust plague broke out in the Palestine region in 1915; Ottoman military hospital records place it in the period March-October 1915.
The expected, and feared, British invasion came not through Cilicia or northern Syria, but through the straits. The aim of the Dardanelles campaign was to support Russia. Most military observers recognized that the uneducated Ottoman soldier was lost without good leadership, and at Gallipoli Mustafa Kemal realized the capabilities of his men when their officers led from the front. The war belonged to a different era: the agrarian Ottoman Empire faced two industrialized powers in silent predawn attacks in which officers with drawn swords went ahead of the troops, who shouted their battle cry of "Allahu Akbar!" only when they reached the enemy's trenches.
The United Kingdom was obliged to defend India and the southern Persian oil territory by undertaking the Mesopotamian campaign. Britain also had to protect Egypt in the Sinai-Palestine-Syria Campaign. These campaigns strained Allied resources and relieved Germany.
The repulse of British forces in Palestine in the spring of 1917 was followed by the loss of Jerusalem in December of the same year. The Ottoman authorities deported the entire civilian population of Jaffa and Tel Aviv (the Jaffa and Tel Aviv deportation) pursuant to an order from Ahmed Jamal Pasha on 6 April 1917. The Muslim evacuees were allowed to return before long. In the same period the Balfour Declaration was being negotiated (published on 2 November 1917), in which the British government declared its support for the establishment of a Jewish national home in Palestine; Ahmed Jamal Pasha effectively kept the two groups of evacuees apart. The Jewish evacuees returned only after the British conquest of Palestine.
The Ottomans were eventually defeated due to key attacks by the British general Edmund Allenby.
The empire on the home front
The war tested to the limit the empire's relations with its Arab population. In February 1915 in Syria, Cemal Pasha exercised absolute power in both military and civil affairs. He was convinced that an uprising among local Arabs was imminent. Leading Arabs were executed, and notable families were deported to Anatolia. Cemal's policies did nothing to alleviate the famine that was gripping Syria; it was exacerbated by a British and French blockade of the coastal ports, the requisitioning of transports, profiteering and, strikingly, Cemal's preference for spending scarce funds on public works and the restoration of historic monuments. During the war, Britain had been a major sponsor of Arab nationalist thought and ideology, primarily as a weapon to use against the power of the empire. Sharif Hussein ibn Ali rebelled against Ottoman rule during the Arab Revolt of 1916. In August the Ottomans replaced him with Sharif Haydar, but in October Hussein proclaimed himself king of Arabia and in December was recognized by the British as an independent ruler. There was little the empire could do to influence the course of events, other than try to prevent news of the uprising from spreading, demoralizing the army, or serving as propaganda for anti-Ottoman Arab factions. On 3 October 1918 forces of the Arab Revolt entered Damascus accompanied by British troops, ending 400 years of Ottoman rule.
War in Eastern Europe
In order to support the other Central Powers, Enver Pasha sent three army corps, around 100,000 men, to fight in Eastern Europe.
• VI Corps under command of Mustafa Hilmi Pasha participated in the Romanian Campaign between September 1916 and April 1918.
• XV Corps, commanded by Yakub Shevki Pasha and later by Cevat Pasha, fought against the Russians in Galicia between August 1916 and August 1917.
• XX Corps under command of Abdul Kerim Pasha participated in the Salonika Campaign between December 1916 and May 1917.
• The Rumeli Field Detachment (reinforced 177th Infantry Regiment) remained in Macedonia until May 1918.
The Constantinople Agreement of 18 March 1915 was a set of secret assurances by which Great Britain promised the Ottoman capital and the Dardanelles to the Russians in the event of victory. The city of Constantinople was intended to be a free port.
During 1915, British forces invalidated the Anglo-Ottoman Convention, declaring Kuwait to be an "independent sheikdom under British protectorate."
Capitulations and public debt, 1915
On 10 September 1915, Grand Vizier Said Halim Pasha annulled the Capitulations (the grand vizier held the authority to annul them), ending the special privileges they granted to foreign nationals. The capitulatory powers refused to recognize this unilateral action. The American ambassador expressed the Great Power view:
The capitulary regime, as it exists in the Empire, is not an autonomous institution of the Empire, but the result of international treaties, of diplomatic agreements and of contractual acts of various sorts. The regime, consequently, cannot be modified in any of its parts and still less suppressed in its entirety by the Ottoman Government except in consequence of an understanding with the contracting Powers.
— Henry Morgenthau, Sr.
Beside the capitulations, another issue had evolved in their shadow. The debt and the financial control (revenue collection) of the empire were intertwined under a single institution whose board was constituted from the Great Powers rather than the Ottomans, a design that left no room for sovereignty. The public debt administration could and did interfere in state affairs, because it controlled (collected) one-quarter of state revenues. The debt was administered by the Ottoman Public Debt Administration, and its power extended to the Imperial Ottoman Bank (the equivalent of a modern central bank). The Debt Administration controlled many of the important revenues of the empire, and its council had power over financial affairs down to determining the tax on livestock in individual districts. Ottoman public debt was part of a larger scheme of political control, through which the commercial interests of the world sought to gain advantages that were not necessarily in the empire's interest. The immediate purpose of abolishing the capitulations and cancelling foreign debt repayments was to reduce the foreign stranglehold on the Ottoman economy; a second purpose, to which great political weight was attached, was to push non-Muslims out of the economy by transferring assets to Muslim Turks and encouraging their participation through government contracts and subsidies.
The French-Armenian Agreement of 27 October 1916 was reported to the interior minister, Talat Pasha; the negotiations had been conducted under the leadership of Boghos Nubar, chairman of the Armenian National Assembly and one of the founders of the AGBU.
In 1917 the Ottoman Cabinet considered maintaining relations with Washington after the United States had declared war on Germany on 6 April. But the views of the war party prevailed and they insisted on maintaining a common front with their allies. Thus, relations with America were broken on 20 April 1917.
Diplomacy with new Russia, 1917
The 1917 Russian revolution changed the realities. The war had devastated not only Russian soldiers; the Russian economy was also breaking down under the heightened strain of wartime demand by the end of 1915. The tsarist regime's push to secure its southern borders proved ruinous. It desired to control eastern Anatolia and the straits, which it perceived as its vulnerable underbelly, but that ambition helped create the conditions that brought about Russia's own downfall. Being unable to use the straits disrupted the Russian supply chain; Russia might have survived without the straits, but the strain was a tipping point for its war economy. Whether a less aggressive policy toward the Ottoman Empire before the war would have caused Istanbul to maintain neutrality, or whether Russia later might have induced Istanbul to leave the war,[a] was a question left to Soviet historians; the outcome for the tsarist regime might then have been different. Nicholas's inept handling of his country and the war cost him both his reign and his life.
Enver immediately instructed Vehib Pasha of the Third Army to propose a ceasefire to Russia's Caucasus Army. Vehib cautioned against withdrawing forces because, given the politics in Russia, neither Russia's Caucasus Army nor the Caucasian civil authorities could give assurance that an armistice would hold. On 7 November 1917 the Bolshevik Party led by Vladimir Lenin overthrew the Provisional Government in a violent coup, plunging Russia into a multitude of civil wars between ethnic groups. The slow dissolution of Russia's Caucasus Army relieved one form of military threat from the east but brought another: Russia had long been a threat, but it had also kept the civil unrest within its own lands from spilling over violently onto the Ottomans. On 3 December the Ottoman foreign minister Ahmed Nesimi Bey informed the Chamber of Deputies about the prospects, and the chamber discussed the possible outcomes and priorities. On 15 December an armistice between Russia and the Central Powers was signed, and on 18 December the Armistice of Erzincan followed. The Bolsheviks' anti-imperialist formula of peace with no annexations and no indemnities was close to the Ottoman position, but it brought a conflict with Germany's aim of preserving control over the East European lands it occupied and with Bulgaria's claims on Dobruja and parts of Serbia.
In December Enver informed the Quadruple Alliance that the Ottomans would like to see the 1877 border (that of the Russo-Turkish War of 1877-1878) restored, pointing out that only the Ottomans had lost territory and that the 1877 border enclosed Ottoman territories inhabited by Muslims. The Ottomans did not push the 1877 position too hard, fearing they would be thrown back on bilateral agreements. Germany, Austria-Hungary, and Bulgaria, on the other hand, clearly stood behind pulling both Ottoman and Russian forces out of Iran, whereas the Ottomans wanted Muslim Iran to be under its own control. The ambassador to Berlin, Ibrahim Hakki Pasha, wrote: "Although Russia may be in a weakened state today, it is always an awesome enemy and it is probable that in a short time it will recover its former might and power."
On 22 December 1917, at the first meeting between the Ottomans and the Bolsheviks, the temporary head of the delegation, Zeki Pasha (pending Talat Pasha's arrival), requested that Lev Kamenev put an end to the atrocities being committed on Russian-occupied territory by Armenian partisans. Kamenev agreed and added that an international commission should be established to oversee the return of refugees (those who had left by their own consent) and deportees (those forcibly relocated) to eastern Anatolia. The battle of ideals, rhetoric, and material for the fate of eastern Anatolia opened with this dialogue.
The Treaty of Brest-Litovsk represented an enormous success for the empire. Minister of Foreign Affairs Halil Bey announced the achievement of peace to the Chamber of Deputies. He cheered the deputies further with his prediction of the imminent signing of a third peace treaty (the first with Ukraine, the second with Russia, and the third with Romania). Halil Bey expected the Entente to cease hostilities and bring a rapid end to the war. The creation of an independent Ukraine promised to cripple Russia, and the recovery of Kars, Ardahan and Batum gave the CUP a tangible prize. Nationalism emerged at the center of the diplomatic struggle between the Central Powers and the Bolsheviks. The empire recognized that Russia's Muslims, their co-religionists, were too disorganized and dispersed to emerge as a coherent entity in the coming battles of ideals, rhetoric, and material. The Ottomans therefore mobilized the Caucasus Committee to make claims on behalf of the Muslims, even though the committee had earlier declined earnest Ottoman requests to break from Russia and embrace independence. The Caucasian Christians were far ahead in this new world of national self-assertion; helping the Caucasian Muslims to become free, like their neighbors, would be the Ottomans' challenge.
In the overall war effort, the CUP was convinced that the empire's contribution was essential. Ottoman armies had tied down large numbers of Allied troops on various fronts, keeping them away from theatres in Europe where they would otherwise have been used against German and Austrian forces. Moreover, the CUP claimed that its success at Gallipoli had been an important factor in bringing about the collapse of Russia and its revolution in 1917, and that this had turned the war in favor of Germany and her allies. Hopes were initially high that Ottoman losses in the Middle East might be compensated for by successes in the Caucasus Campaign. Enver Pasha maintained an optimistic stance, hid information that made the Ottoman position appear weak, and led most of the Ottoman elite to believe that the war was still winnable.
Diplomacy with new states, 1918
Ottoman policy toward the Caucasus evolved according to the changing demands of the diplomatic and geopolitical environment. What was the Ottoman rationale for involvement with Azerbaijan and the North Caucasus? The empire's leaders, in the parliamentary discussions throughout 1917, understood that Russia's collapse presented a historic window of opportunity to redraw the map of the Caucasus. They were convinced, however, that soon enough Russia would recover, reemerge as the dominant power in the region, and shut that window.
The principle of "self-determination" became the criterion, at least in part, for giving these peoples a chance to stand on their own feet. The Bolsheviks did not regard national separatism in this region as a lasting force. Their expectation was that the whole region would come under a "voluntary and honest union"[b] bearing no resemblance to Lenin's famous description of Russia as a "prison house of peoples." Lenin's arrival in Russia had been formally welcomed by Nikolay Chkheidze, the Menshevik chairman of the Petrograd Soviet.
The Ottomans did not believe these new states stood a chance against a restored Russia on their own; the new Muslim states needed support to emerge as viable independent states. In order to consolidate a buffer zone with Russia (both for the empire and for these new states), however, the Ottomans needed to expel the Bolsheviks from Azerbaijan and the North Caucasus before the end of the war. Based on the 1917 negotiations, Enver concluded that the empire should not expect much military assistance from the Muslims of the Caucasus, as they were the ones in need. Enver also knew the importance of the Kars-Julfa railroad and the adjacent areas for providing this support. The goal was set for 1918 and pursued to the end of the war.
The empire duly recognized the Transcaucasian Democratic Federative Republic in February 1918. The Transcaucasian preference to remain part of Russia led its delegation at the Trebizond peace conference to base its diplomacy on the incoherent assertion that it was an integral part of Russia yet not bound by it. The representatives were Rauf Bey for the empire and Akaki Chkhenkeli for the Transcaucasian delegation.
On 11 May, a new peace conference opened at Batum. The Treaty of Batum was signed on 4 June 1918, in Batum between the Ottoman Empire and three Trans-Caucasus states: First Republic of Armenia, Azerbaijan Democratic Republic and Democratic Republic of Georgia.
The goal was to assist the Azerbaijan Democratic Republic at the Battle of Baku, then turn north to assist the embattled Mountainous Republic of the Northern Caucasus, and then sweep southward to encircle the British in Mesopotamia and retake Baghdad. The British in Mesopotamia were already moving north to establish a foothold, with forty vans (claimed to be loaded with gold and silver for buying mercenaries) accompanied by only a brigade. At the time Baku was under the control of the 26 Baku Commissars, Bolshevik and Left Socialist Revolutionary (SR) members of the Baku Soviet Commune, which had been established in the city. In this plan, the Ottomans expected resistance not only from Bolshevik Russia and Britain but also from Germany, which opposed the extension of Ottoman influence into the Caucasus. The Ottoman decision to side with the Muslims of Azerbaijan and the MRNC thus briefly placed the Bolsheviks of Russia, Britain and Germany on the same side of the conflict.
Developments in Southeast Europe squashed the Ottoman government's hopes. In September 1918, the Allied forces under the command of Louis Franchet d'Espèrey mounted a sudden offensive at the Macedonian Front, which proved quite successful. Bulgaria was forced to sue for peace in the Armistice of Salonica. This development undermined both the German and Ottoman cause simultaneously - the Germans had no troops to spare to defend Austria-Hungary from the newly formed vulnerability in Southeast Europe after the losses it had suffered in France, and the Ottomans suddenly faced having to defend Istanbul against an overland European siege without help from the Bulgarians.
Grand Vizier Talaat Pasha visited both Berlin, and Sofia, in September 1918, and came away with the understanding that the war was no longer winnable. With Germany likely seeking a separate peace, the Ottomans would be forced to as well. Grand Vizier Talaat convinced the other members of the ruling party that they must resign, as the Allies would impose far harsher terms if they thought the people who started the war were still in power. He also sought out the United States to see if he could surrender to them and gain the benefits of the Fourteen Points despite the Ottoman Empire and the United States not being at war; however, the Americans never responded, as they were waiting on British advice as to how to respond which never came. On 13 October, Talaat and the rest of his ministry resigned. Ahmed Izzet Pasha replaced Talaat as Grand Vizier.
Two days after taking office, Ahmed Izzet Pasha sent the captured British general Charles Vere Ferrers Townshend to the Allies to seek terms for an armistice. The British cabinet was eager to negotiate a deal and took the view that not only should Britain conduct the negotiations, but it should conduct them alone, perhaps out of a desire to cut the French out of the territorial "spoils" promised to them in the Sykes-Picot Agreement. Talaat (before resigning) had sent an emissary to the French as well, but that emissary had been slower to respond. The British cabinet empowered Admiral Calthorpe to conduct the negotiations and to explicitly exclude the French from them. The negotiations began on Sunday, 27 October on HMS Agamemnon, a British battleship. The British refused to admit French Vice-Admiral Jean Amet, the senior French naval officer in the area, despite his desire to join; the Ottoman delegation was headed by Minister of Marine Affairs Rauf Bey.
Unknown to each other, both sides were in fact quite eager to sign a deal and willing to give up their objectives to do so. The British delegation had been given a list of 24 demands but was told it could concede on any of them except the occupation of the forts on the Dardanelles and free passage through the Bosphorus, since the British wanted access to the Black Sea for the Rumanian front. Prime Minister David Lloyd George also desired to make a deal quickly, before the United States could step in; according to the diary of Maurice Hankey:
[Lloyd George] was also very contemptuous of President Wilson and anxious to arrange the division of Empire between France, Italy, and G.B. before speaking to America. He also thought it would attract less attention to our enormous gains during the war if we swallowed our share of Empire now, and the German colonies later.
The Ottomans, for their part, believed the war to be lost and would have accepted almost any demands placed on them. As a result, the initial draft prepared by the British was accepted largely unchanged; the Ottomans did not know they could have pushed back on most of the clauses, and the British did not know they could have demanded even more. The Ottomans ceded the rights to the Allies to occupy "in case of disorder" any Ottoman territory, a vague and broad clause. The French were displeased with the precedent; French Premier Clemenceau disliked the British making unilateral decisions in so important a matter. Lloyd George countered that the French had concluded a similar armistice on short notice in the Armistice of Salonica which had been negotiated by French General d'Esperey, and that Great Britain (and Czarist Russia) had committed the vast majority of troops to the campaign against the Ottomans. The French agreed to accept the matter as closed.
On 30 October 1918, the Armistice of Mudros was signed, ending Ottoman involvement in World War I. The Ottoman public, however, was given a misleadingly positive impression of the severity of the armistice terms: they thought the terms were considerably more lenient than they actually were, which later became a source of discontent, since it seemed the Allies had betrayed the terms offered.
|
The Earth‐Moon system's history remains mysterious. Scientists believe the two formed when a Mars‐sized body collided with the proto‐Earth. Earth ended up being the larger daughter of this collision and retained enough heat to become tectonically active. The Moon, being smaller, likely cooled down faster and geologically 'froze'. The apparent early dynamism of the Moon challenges this idea.
New data suggest this is because radioactive elements were distributed uniquely after the catastrophic Moon-forming collision. Earth's Moon, together with the Sun, is a dominant object in our sky and offers many observable features which keep scientists busy trying to explain how our planet and the Solar System formed. Most planets in our solar system have satellites. For example, Mars has two moons, Jupiter has 79 and Neptune has 14. Some moons are icy, some are rocky, some are still geologically active and some relatively inactive. How planets got their satellites and why they have the properties they do are questions which could shed light on many aspects of the evolution of the early Solar System.
The Moon is a relatively cold rocky body, with a limited amount of water and little tectonic processing. Scientists presently believe the Earth‐Moon system formed when a Mars‐sized body dubbed Theia - who in Greek mythology was the mother of Selene, the goddess of the Moon - catastrophically collided with the proto‐Earth, causing the components of both bodies to mix.
The debris of this collision is thought to have separated fairly rapidly, perhaps over a few million years, to form the Earth and Moon. The Earth ended up larger and evolved in a sweet spot, its size being just right for it to become a dynamic planet with an atmosphere and oceans. Earth's Moon ended up smaller and did not have sufficient mass to host these characteristics. Thus whether a body retains volatile substances like water or the gases that form our atmosphere, or retains enough internal heat to maintain long-term planetary volcanism and tectonics, depends on the particulars of how the Earth-Moon forming collision occurred. Decades of observations have demonstrated that lunar history was much more dynamic than expected, with volcanic and magnetic activity occurring as recently as 1 billion years ago, much later than expected.
A clue as to why the near and far side of the Moon are so different comes from strong asymmetry observable in its surface features. On the Moon's perpetually Earth‐facing near side, on any given night, or day, one can observe dark and light patches with the naked eye. Early astronomers named these dark regions 'maria', Latin for 'seas', thinking they were bodies of water by analogy with the Earth. Using telescopes, scientists were able to figure out over a century ago that these were not in fact seas, but more likely craters or volcanic features.
Back then, most scientists assumed the far side of the Moon, which they would never have been able to see, was more or less like the near side.
However, because the Moon is relatively close to the Earth, only about 380,000 km away, the Moon was the first Solar System body humans were able to explore, first using non‐crewed spacecraft and then 'in person'. In the late 1950s and early 1960s, non‐crewed space probes launched by the USSR returned the first images of the far side of the Moon, and scientists were surprised to find that the two sides were very different. The far side had almost no maria. Only 1% of the far side was covered with maria compared with ~31% for the near side. Scientists were puzzled, but they suspected this asymmetry was offering clues as to how the Moon formed.
In the late 1960s and early 1970s, NASA's Apollo missions landed six spacecraft on the Moon, and astronauts brought back 382 kg of Moon rocks to try to understand the origin of the Moon using chemical analysis. Having samples in hand, scientists quickly figured out the relative darkness of these patches was due to their geological composition and they were, in fact, attributable to volcanism. They also identified a new type of rock signature they named KREEP - short for rock enriched in potassium (chemical symbol K), rare‐earth elements (REE, which include cerium, dysprosium, erbium, europium, and other elements which are rare on Earth) and phosphorus (chemical symbol P) - which was associated with the maria. But why volcanism and this KREEP signature should be distributed so unevenly between the near and far sides of the Moon again presented a puzzle.
Now, using a combination of observation, laboratory experiments and computer modelling, scientists from the Earth‐Life Science Institute at Tokyo Institute of Technology, the University of Florida, the Carnegie Institution for Science, Towson University, NASA Johnson Space Center and the University of New Mexico have brought some new clues as to how the Moon gained its near‐ and far‐side asymmetry. These clues are linked to an important property of KREEP.
Potassium (K), thorium (Th) and uranium (U) are, importantly for this story, radioactively unstable elements. This means that they occur in a variety of atomic configurations that have variable numbers of neutrons. These variable composition atoms are known as 'isotopes', some of which are unstable and fall apart to yield other elements, producing heat.
The heat from the radioactive decay of these elements can help melt the rocks they are contained in, which may partly explain their co‐localisation.
This study shows that, in addition to enhanced heating, the inclusion of a KREEP component to rocks also lowers their melting temperature, compounding the expected volcanic activity from simply radiogenic decay models. Because most of these lava flows were emplaced early in lunar history, this study also adds constraints about the timing of the Moon's evolution and the order in which various processes occurred on the Moon.
This work required collaboration among scientists working on theory and experiment. After conducting high temperature melting experiments of rocks with various KREEP components, the team analysed the implications this would have on the timing and volume of volcanic activity at the lunar surface, providing important insight about the early stages of evolution of the Earth‐Moon system.
ELSI co‐author Matthieu Laneuville comments, 'Because of the relative lack of erosion processes, the Moon's surface records geological events from the Solar System's early history. In particular, regions on the Moon's near side have concentrations of radioactive elements like U and Th unlike anywhere else on the Moon. Understanding the origin of these local U and Th enrichments can help explain the early stages of the Moon's formation and, as a consequence, conditions on the early Earth.'
The results from this study suggest that the Moon's KREEP‐enriched maria have influenced lunar evolution since the Moon formed. Laneuville thinks evidence for these kinds of non‐symmetric, self‐amplifying processes might be found in other moons in our Solar System, and may be ubiquitous on rocky bodies throughout the Universe.
Stephen M. Elardo (1,2,3,*), Matthieu Laneuville (4), Francis M. McCubbin (5) and Charles K. Shearer (6), Early crust building enhanced on the Moon's near side by mantle melting-point depression, Nature Geoscience, DOI: 10.1038/s41561-020-0559-4
1. Department of Geological Sciences, University of Florida, Gainesville, FL, USA.
2. Geophysical Laboratory, Carnegie Institution for Science, Washington, DC, USA.
3. Department of Physics, Astronomy, and Geosciences, Towson University, Towson, MD, USA.
4. Earth-Life Science Institute, Tokyo Institute of Technology, Tokyo, Japan.
5. NASA Johnson Space Center, Houston, TX, USA.
6. Institute of Meteoritics, University of New Mexico, Albuquerque, NM, USA.
Tokyo Institute of Technology (Tokyo Tech) stands at the forefront of research and higher education as the leading university for science and technology in Japan. Tokyo Tech researchers excel in fields ranging from materials science to biology, computer science, and physics. Founded in 1881, Tokyo Tech hosts over 10,000 undergraduate and graduate students per year, who develop into scientific leaders and some of the most sought‐after engineers in industry. Embodying the Japanese philosophy of "monotsukuri," meaning "technical ingenuity and innovation," the Tokyo Tech community strives to contribute to society through high‐impact research.
The Earth‐Life Science Institute (ELSI) is one of Japan's ambitious World Premiere International research centers, whose aim is to achieve progress in broadly inter‐disciplinary scientific areas by inspiring the world's greatest minds to come to Japan and collaborate on the most challenging scientific problems. ELSI's primary aim is to address the origin and co‐evolution of the Earth and life.
The World Premier International Research Center Initiative (WPI) was launched in 2007 by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) to help build globally visible research centers in Japan. These institutes promote high research standards and outstanding research environments that attract frontline researchers from around the world. These centers are highly autonomous, allowing them to revolutionize conventional modes of research operation and administration in Japan. |
Within modern cosmology, the Big Bang marks the beginning of the universe and the creation of matter, space and time about 13.8 billion years ago.
Since then, the visible structures of the cosmos have developed: billions of galaxies which bind gas, dust, stars and planets with gravity and host supermassive black holes in their centres. But how could these visible structures have formed from the universe's initial conditions?
To answer this question, theoretical astrophysicists carry out cosmological simulations. They transform their knowledge about the physical processes forming our universe into mathematical models and simulate the evolution of our universe on high-performance computers over billions of years.
A group of theoretical astrophysicists from the LMU led by Klaus Dolag has now, as part of the Magneticum Pathfinder project, performed a new, unique hydrodynamic simulation of the large-scale distribution of the universe's visible matter. The most recent results regarding the three most important cosmic ingredients of the universe are taken into account - the dark energy, the dark matter and the visible matter.
The scientists incorporated a variety of physical processes in the calculations, including three that are considered particularly important for the development of the visible universe: first, the condensation of matter into stars, second, their further evolution when the surrounding matter is heated by stellar winds and supernova explosions and enriched with chemical elements, and third, the feedback of supermassive black holes that eject massive amounts of energy into the universe.
The most comprehensive simulation covers the spatial area of a cube with a box size of 12.5 billion light years. This tremendously large section of the universe had never been part of a simulation before. It was divided into a previously unattained number of 180 billion resolution elements, each representing the detailed properties of the universe and containing about 500 bytes of information.
For the first time, these numerous characteristics make it possible to compare a cosmological simulation in detail with large-scale astronomical surveys. "Astronomical surveys from space telescopes like Planck or Hubble observe a large segment of the visible universe while sophisticated simulations so far could only model very small parts of the universe, making a direct comparison virtually impossible," says Klaus Dolag. "Thus, Magneticum Pathfinder marks the beginning of a new era in computer-based cosmology."
This achievement was preceded by ten years of research and development, accompanied by experts of the Leibniz Supercomputing Centre (LRZ) of the Bavarian Academy of Sciences, one of the most powerful scientific computer centres in Europe. "One of the biggest challenges for such a complex problem is to find the right balance between optimizing the simulation code and the development of the astrophysical modelling," explains Klaus Dolag. "While the code permanently needs to be adjusted to changing technologies and new hardware, the underlying models need to be improved by including better or additional descriptions of the physical processes which form our visible universe."
The realization of this largest simulation within the Magneticum Pathfinder project took about two years. The research group of Klaus Dolag was supported by the physicists of the datacentre C2PAP, which is operated by the Excellence Cluster Universe and located at the LRZ. Within the framework of several one-week workshops, the Magneticum Pathfinder team got the opportunity to use the LRZ's entire highest-performance supercomputer SuperMUC for its simulation. "I do not know any datacentre that would have allowed me to use the entire computing capacity for such a long time," says Klaus Dolag.
Overall, the Magneticum Pathfinder simulation utilised all 86,016 computing cores and the complete usable main memory - 155 out of a total of 194 terabytes - of the expansion stage "Phase 2" of the SuperMUC which was put into operation recently. The entire simulation required 25 million CPU hours and generated 320 terabytes of scientific data.
These data are now available for interested researchers worldwide. The Munich-based astrophysicists are already engaged in further projects: Among others, Klaus Dolag is currently collaborating with scientists from the Planck collaboration to compare observations of the Planck satellite with the calculations of Magneticum Pathfinder. |
NCERT Solutions For Class 10 Maths Chapter 3
The introduction of Chapter 3 presents the fact that each solution (x, y) of a linear equation in two variables, ax + by + c = 0, corresponds to a point on the line representing the equation, and vice versa. Next, a pair of linear equations in two variables is explained as a combination of two linear equations in the same two variables x and y. Several solved examples then walk students through the graphical representation of linear equations in two variables. Thereafter, for the second exercise, the graphical method of solving a pair of linear equations in two variables is presented. The rest of the chapter covers concepts such as the algebraic method of solving a pair of linear equations in two variables, the elimination method, the cross-multiplication method, and equations reducible to a pair of linear equations in two variables.
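As a brief illustration of the algebraic approach described above (a sketch in C rather than part of the NCERT solutions; the coefficient values are invented), the following program eliminates one variable using the quantity a1*b2 - a2*b1, which also decides whether the pair has a unique solution:

    #include <stdio.h>

    /* Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination
       (equivalently, cross-multiplication). Returns 0 on success,
       -1 if the pair has no unique solution (parallel or coincident
       lines). */
    static int solve_pair(double a1, double b1, double c1,
                          double a2, double b2, double c2,
                          double *x, double *y)
    {
        double det = a1 * b2 - a2 * b1;   /* zero when a1/a2 == b1/b2 */
        if (det == 0.0)
            return -1;
        *x = (c1 * b2 - c2 * b1) / det;
        *y = (a1 * c2 - a2 * c1) / det;
        return 0;
    }

    int main(void)
    {
        double x, y;
        /* Example pair: 2x + 3y = 12 and x - y = 1  =>  x = 3, y = 2 */
        if (solve_pair(2, 3, 12, 1, -1, 1, &x, &y) == 0)
            printf("x = %g, y = %g\n", x, y);
        else
            printf("No unique solution\n");
        return 0;
    }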
Download Chapter 3 Pair of Linear Equations in Two Variables Class 10 Maths Solutions PDF for FREE: |
Introduction to Chemistry
This unit gives students an introduction to many of the basic skills that will be needed throughout the year in chemistry. This includes measurement, using the metric system, performing calculations with significant digits, and differentiating between physical and chemical changes.
Icebreaker Activity - Chemistry Trivia and Jokes
Purpose: This is something fun and different to try for the first day of your chemistry class. Print out the worksheet. Make sure you use the double-sided option on your copier or printer. Cut out each of the individual boxes. Half of the boxes have some sort of trivia about a group of elements, while the other half has the answers. Give each student one box and have them find their partner in the room with the matching answer. When the two boxes are placed together, it reveals a chemistry joke that they can share with the class!
Essential Concepts: None for this activity.
Mythbusters Scientific Method - "Who Gets Wetter"?
Purpose: The main premise behind the scientific method is that any phenomenon in nature can be explained given enough research, observation, and experimentation. This episode of Mythbusters provides a great example of the scientific method in action to answer the question of whether one gets wetter by walking or running through a rainstorm. This is one of the few episodes of Mythbusters where the viewer is shown the actual quantitative data collected by Adam and Jamie. Students can perform their own analysis of the data and form conclusions about the validity of the experiment.
Essential Concepts: Scientific method, experimental design, experimental group, control group, independent variable, dependent variable, sample size.
Measuring With Significant Figures Worksheet
Chemistry is one of the first classes where the importance of measuring accurately and precisely becomes clear. This worksheet will give brief instruction on how to use rulers, graduated cylinders, and balances, but the focus is on doing so within the rules for significant figures.
Essential concepts: Measurement, significant figures, significant digits, metric system, length, volume, mass.
Calculating Density with Significant Figures Worksheet
Purpose: Once students understand how to measure accurately and precisely, the next step is understanding the rules of rounding with significant figures when performing calculations. Determining density is a good place to introduce these rules, as both subtraction and division steps are necessary.
Essential Concepts: Significant figures, significant digits, rounding, mass, volume, density.
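To connect the subtraction and division steps mentioned above to a concrete calculation, here is a small, hedged sketch in C (the measurements are invented; the significant-figure bookkeeping is noted in comments rather than computed automatically):

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical readings: mass from a balance (0.01 g), water
           levels from a graduated cylinder (0.1 mL), object volume
           found by displacement. */
        double mass_g        = 25.42;  /* 4 significant figures         */
        double level_initial = 20.0;   /* mL, known to the tenths place */
        double level_final   = 28.3;   /* mL, known to the tenths place */

        /* Subtraction step: result is limited by decimal places (tenths). */
        double volume_mL = level_final - level_initial;   /* 8.3 mL */

        /* Division step: result keeps the fewer significant figures of
           the inputs; 8.3 mL has 2, so the density is reported to 2. */
        double density = mass_g / volume_mL;               /* 3.0626... */

        printf("volume  = %.1f mL\n", volume_mL);
        printf("density = %.2g g/mL\n", density);          /* prints 3.1 */
        return 0;
    }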
The Density Column
Purpose: This is a challenge activity that further reinforces the idea of density. Students will calculate the density of different-colored liquids and objects. They will then sketch how each of these liquids and objects would layer if they were placed in a single beaker.
Essential Concepts: Density, mass, volume.
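A quick way to predict the layering is to sort the measured densities, densest first; the sketch below does this in C with invented example liquids and approximate densities, so the values are illustrative only:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical liquids for a density-column prediction: the densest
       layer settles to the bottom of the beaker. */
    struct liquid { const char *name; double density_g_per_mL; };

    static int by_density_desc(const void *a, const void *b)
    {
        const struct liquid *x = a, *y = b;
        if (x->density_g_per_mL < y->density_g_per_mL) return 1;
        if (x->density_g_per_mL > y->density_g_per_mL) return -1;
        return 0;
    }

    int main(void)
    {
        struct liquid column[] = {
            { "corn syrup",      1.33 },
            { "water",           1.00 },
            { "vegetable oil",   0.92 },
            { "rubbing alcohol", 0.79 },
        };
        size_t n = sizeof column / sizeof column[0];

        qsort(column, n, sizeof column[0], by_density_desc);

        printf("Predicted layers, bottom to top:\n");
        for (size_t i = 0; i < n; i++)
            printf("  %s (%.2f g/mL)\n", column[i].name,
                   column[i].density_g_per_mL);
        return 0;
    }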
Elements, Compounds, and Mixtures
Purpose: This is a simple instructional worksheet that uses drawings to contrast atoms and molecules, as well as elements, compounds, and mixtures.
Essential Concepts: Elements, compounds, mixtures, atoms, molecules, pure substance.
States of Matter and Phase Changes Worksheet
Purpose: This worksheet will give the students some instruction and practice with the behavior of atoms and molecules in the three main states of matter: solid, liquid, and gas. The relationship between these states and temperature will also be explained.
Essential Concepts: Matter, atoms, molecules, states of matter, solid, liquid, gas, temperature.
Classifying Chemical and Physical Properties and Changes Worksheet
Purpose: Chemical and physical properties, as well as chemical and physical changes are one of those concepts that every chemistry class starts with. This worksheet gives students a little background instruction on each idea, then has them practice classifying different properties and changes.
Essential Concepts: Physical properties, chemical properties, physical changes, chemical changes.
Chemthink: Particulate Nature of Matter
Purpose: The Chemthink website has a series of interactive modules that help to instruct students on some of the basic, fundamental concepts of chemistry. The particulate nature of matter module is the first one in the series, and does an excellent job of demonstrating the differences between atoms, molecules, elements, compounds, and mixtures. Additionally, students will be able to visualize how particles behave differently as solids, liquids, and gases.
Essential Concepts: Matter, atoms, molecules, elements, compounds, mixtures, solids, liquids, gases.
Introduction to Chemistry Study Guide
Purpose: Once the instruction for the unit is completed, students can complete this study guide to aid in their preparation for a written test. The study guide is divided into two sections: vocabulary and short answer questions. The vocabulary words can be found scattered throughout the different instructional worksheets from this unit. The short answer questions are conceptual and meant to see if the students are able to apply what they've learned in the unit. |
In condensed matter physics, a Cooper pair or BCS pair (Bardeen–Cooper–Schrieffer pair) is a pair of electrons (or other fermions) bound together at low temperatures in a certain manner first described in 1956 by American physicist Leon Cooper.
Cooper showed that an arbitrarily small attraction between electrons in a metal can cause a paired state of electrons to have a lower energy than the Fermi energy, which implies that the pair is bound. In conventional superconductors, this attraction is due to the electron–phonon interaction. The Cooper pair state is responsible for superconductivity, as described in the BCS theory developed by John Bardeen, Leon Cooper, and John Schrieffer for which they shared the 1972 Nobel Prize.
Although Cooper pairing is a quantum effect, the reason for the pairing can be seen from a simplified classical explanation. An electron in a metal normally behaves as a free particle. The electron is repelled from other electrons due to their negative charge, but it also attracts the positive ions that make up the rigid lattice of the metal. This attraction distorts the ion lattice, moving the ions slightly toward the electron, increasing the positive charge density of the lattice in the vicinity. This positive charge can attract other electrons. At long distances, this attraction between electrons due to the displaced ions can overcome the electrons' repulsion due to their negative charge, and cause them to pair up. The rigorous quantum mechanical explanation shows that the effect is due to electron–phonon interactions, with the phonon being the collective motion of the positively-charged lattice.
The energy of the pairing interaction is quite weak, of the order of 10⁻³ eV, and thermal energy can easily break the pairs. So only at low temperatures, in metals and other substrates, are a significant number of the electrons in Cooper pairs.
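As a rough back-of-the-envelope check (not taken from the article), comparing the quoted pairing energy with the thermal energy k_B*T shows why only low temperatures leave pairs intact:

    #include <stdio.h>

    int main(void)
    {
        /* Rough comparison of the Cooper-pair binding energy quoted above
           (~1e-3 eV) with the thermal energy k_B*T.  Pairs survive only
           when k_B*T is well below the pairing energy. */
        const double k_B_eV_per_K   = 8.617e-5; /* Boltzmann constant, eV/K */
        const double pairing_energy = 1.0e-3;   /* eV, order of magnitude   */

        /* Temperature at which thermal energy matches the pairing energy. */
        double t_break = pairing_energy / k_B_eV_per_K;

        printf("k_B*T equals the pairing energy near T = %.0f K\n", t_break);
        printf("at room temperature (300 K), k_B*T = %.3f eV\n",
               k_B_eV_per_K * 300.0);
        return 0;
    }

The crossover comes out near 10 K, which is the right order of magnitude for the transition temperatures of conventional superconductors.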
The electrons in a pair are not necessarily close together; because the interaction is long range, paired electrons may still be many hundreds of nanometers apart. This distance is usually greater than the average interelectron distance so that many Cooper pairs can occupy the same space. Electrons have spin-1⁄2, so they are fermions, but the total spin of a Cooper pair is integer (0 or 1) so it is a composite boson. This means the wave functions are symmetric under particle interchange. Therefore, unlike electrons, multiple Cooper pairs are allowed to be in the same quantum state, which is responsible for the phenomenon of superconductivity.
The BCS theory is also applicable to other fermion systems, such as helium-3. Indeed, Cooper pairing is responsible for the superfluidity of helium-3 at low temperatures. It has also been recently demonstrated that a Cooper pair can comprise two bosons. Here, the pairing is supported by entanglement in an optical lattice.
Relationship to superconductivity
Cooper originally considered only the case of an isolated pair's formation in a metal. When one considers the more realistic state of many electronic pair formations, as is elucidated in the full BCS theory, one finds that the pairing opens a gap in the continuous spectrum of allowed energy states of the electrons, meaning that all excitations of the system must possess some minimum amount of energy. This gap to excitations leads to superconductivity, since small excitations such as scattering of electrons are forbidden. The gap appears due to many-body effects between electrons feeling the attraction.
R. A. Ogg Jr. was the first to suggest that electrons might act as pairs coupled by lattice vibrations in the material. This was indicated by the isotope effect observed in superconductors: materials with heavier ions (different nuclear isotopes) had lower superconducting transition temperatures. This can be explained by the theory of Cooper pairing: heavier ions are harder for the electrons to attract and move (which is how Cooper pairs are formed), resulting in a smaller binding energy for the pairs.
The theory of Cooper pairs is quite general and does not depend on the specific electron-phonon interaction. Condensed matter theorists have proposed pairing mechanisms based on other attractive interactions such as electron–exciton interactions or electron–plasmon interactions. Currently, none of these other pairing interactions has been observed in any material.
An experiment to create a Cooper pair from positrons would make a great contribution to understanding the formation of an electron pair.
It should be mentioned that Cooper pairing does not involve individual electrons pairing up to form "quasi-bosons". The paired states are energetically favored, and electrons go in and out of those states preferentially. This is a fine distinction that John Bardeen makes:
- "The idea of paired electrons, though not fully accurate, captures the sense of it."
The mathematical description of the second-order coherence involved here is given by Yang.
- Cooper, Leon N. (1956). "Bound electron pairs in a degenerate Fermi gas". Physical Review. 104 (4): 1189–1190. Bibcode:1956PhRv..104.1189C. doi:10.1103/PhysRev.104.1189.
- Nave, Carl R. (2006). "Cooper Pairs". Hyperphysics. Dept. of Physics and Astronomy, Georgia State Univ. Retrieved 2008-07-24.
- Kadin, Alan M. (2005). "Spatial Structure of the Cooper Pair". Journal of Superconductivity and Novel Magnetism. 20 (4): 285–292. arXiv:cond-mat/0510279. doi:10.1007/s10948-006-0198-z.
- Fujita, Shigeji; Ito, Kei; Godoy, Salvador (2009). Quantum Theory of Conducting Matter. Springer Publishing. pp. 15–27. ISBN 978-0-387-88211-6.
- Feynman, Richard P.; Leighton, Robert; Sands, Matthew (1965). Lectures on Physics, Vol.3. Addison–Wesley. pp. 21–7, 8. ISBN 0-201-02118-8.
- "Cooper Pairs of Bosons". Archived from the original on 2015-12-09. Retrieved 2009-09-01.
- Nave, Carl R. (2006). "The BCS Theory of Superconductivity". Hyperphysics. Dept. of Physics and Astronomy, Georgia State Univ. Retrieved 2008-07-24.
- Ogg, Richard A. (1 February 1946). "Bose-Einstein Condensation of Trapped Electron Pairs. Phase Separation and Superconductivity of Metal-Ammonia Solutions". Physical Review. American Physical Society (APS). 69 (5–6): 243–244. doi:10.1103/physrev.69.243. ISSN 0031-899X.
- Poole Jr, Charles P, "Encyclopedic dictionary of condensed matter physics", (Academic Press, 2004), p. 576
- Bardeen, John (1973). "Electron-Phonon Interactions and Superconductivity". In H. Haken and M. Wagner (ed.). Cooperative Phenomena. Berlin, Heidelberg: Springer Berlin Heidelberg. p. 67. doi:10.1007/978-3-642-86003-4_6. ISBN 978-3-642-86005-8.
- Yang, C. N. (1 September 1962). "Concept of Off-Diagonal Long-Range Order and the Quantum Phases of Liquid He and of Superconductors". Reviews of Modern Physics. American Physical Society (APS). 34 (4): 694–704. Bibcode:1962RvMP...34..694Y. doi:10.1103/revmodphys.34.694. ISSN 0034-6861. |
A C source file goes through two main stages, (1) the preprocessor stage where the C source code is processed by the preprocessor utility which looks for preprocessor directives and performs those actions and (2) the compilation stage where the processed C source code is then actually compiled to produce object code files.
The preprocessor is a utility that does text manipulation. It takes as input a file that contains text (usually C source code) that may contain preprocessor directives and outputs a modified version of the file by applying any directives found to the text input to generate a text output.
The file does not have to be C source code, because the preprocessor is only doing text manipulation. I have seen the C preprocessor used to extend the make utility by allowing preprocessor directives to be included in a makefile: the makefile containing the C preprocessor directives is run through the C preprocessor utility, and the resulting output is then fed into make to do the actual build of the make target.
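As a minimal sketch (the file and macro names here are illustrative, not from any particular project), the following source file contains #include and #define directives; running only the preprocessor over it, for example with gcc -E or the standalone cpp, produces plain C text with the directives expanded:

```c
/* greet.c -- hypothetical example of a file containing preprocessor directives */
#include <stdio.h>          /* directive: textually insert the contents of stdio.h    */
#define GREETING "Hello"    /* directive: define a macro the preprocessor will expand */
#define TIMES 3             /* directive: another simple object-like macro            */

int main(void)
{
    /* after preprocessing, TIMES and GREETING below are replaced by their definitions */
    for (int i = 0; i < TIMES; i++)
        printf("%s, world!\n", GREETING);
    return 0;
}
```

In the preprocessor's output, the #include line has been replaced by the declarations from stdio.h, and every use of GREETING and TIMES has been replaced by its definition; no object code has been produced at this stage.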
Libraries and linking
A library is a file that contains the object code of various functions. It is a way to package the output of several source files, once they are compiled, into a single file. A library file is often provided along with a header file (include file), typically with a .h extension. The header file contains the function declarations, global variable declarations, and the preprocessor directives needed for the library. So to use the library, you include the provided header file with the #include directive and you link with the library file.
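A minimal sketch of that arrangement, with hypothetical file and function names and one possible set of GCC/Linux commands shown in the comments: the library author ships mymath.h together with a compiled library built from mymath.c, and the application includes the header and links against the library.

```c
/* mymath.h -- hypothetical header file shipped with the library */
#ifndef MYMATH_H
#define MYMATH_H
int add(int a, int b);        /* declaration only; the object code lives in the library */
#endif

/* mymath.c -- the library author's source, compiled into the library file,
   e.g. (one possible way with GCC):  gcc -c mymath.c  then  ar rcs libmymath.a mymath.o */
#include "mymath.h"
int add(int a, int b) { return a + b; }

/* main.c -- the application; includes the header and links with the library,
   e.g.:  gcc main.c -L. -lmymath -o app */
#include <stdio.h>
#include "mymath.h"
int main(void)
{
    printf("2 + 3 = %d\n", add(2, 3));   /* the linker resolves this call against the library */
    return 0;
}
```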
A nice feature of a library file is that you provide the compiled version of your source code rather than the source code itself. On the other hand, since the library file contains compiled code, the compiler used to generate the library must be compatible with the compiler used to compile your own source files.
There are two types of libraries in common use. The first and older type is the static library. The second and more recent type is the dynamic library (Dynamic Link Library, or DLL, on Windows; shared library, or .so, on Linux). The difference between the two is when the functions in the library are bound to the executable that uses the library file.
The linker is a utility that takes the various object files and library files and creates the executable file. When an external or global function or variable is used in a C source file, a kind of marker is placed in the object code to tell the linker that the address of that function or variable needs to be inserted at that point.
The C compiler only knows what is in the source file it compiles; it does not know what is in other files such as object files or libraries. The linker's job is therefore to take the various object files and libraries and make the final connections between the parts, replacing each marker for a global function or variable with a link to the actual object code that was generated for that function or variable.
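A small sketch of those markers, again with hypothetical names: compiling main.c below produces an object file containing an unresolved reference to log_message, because the compiler has only seen the declaration; the linker later replaces that marker with the address of the definition it finds in logger.o (or in a library).

```c
/* logger.c -- compiled on its own into logger.o, which holds the definition */
#include <stdio.h>
void log_message(const char *msg)
{
    printf("log: %s\n", msg);
}

/* main.c -- compiled into main.o; the compiler only sees the declaration below,
   so the call to log_message() is recorded as an unresolved reference (the "marker") */
void log_message(const char *msg);    /* declaration only; no object code here */

int main(void)
{
    log_message("starting up");       /* the linker replaces the marker with the real address */
    return 0;
}
```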
During the linker stage is when the difference between a static library and a dynamic or shared library becomes evident. When a static library is used, the actual object code of the library is included in the application executable. When a dynamic or shared library is used, the object code included in the application executable is code to find the shared library and connect with it when the application is run.
In some cases the same global function name may be defined in several different object files or libraries. The linker will normally use the first definition it comes across and issue a warning about the others it finds.
Summary of compile and link
So the basic process for a compile and link of a C program is:
- the preprocessor utility generates the (preprocessed) C source to be compiled
- the compiler compiles that C source into object code, generating a set of object files
- the linker links the various object files, along with any libraries, into the executable file
That is the basic process; however, when dynamic libraries are involved it can get more complicated, especially if the application being built also generates dynamic libraries of its own.
There is also the stage at which the application is actually loaded into memory and execution starts. The operating system provides a utility, the loader, which reads the application executable file, loads it into memory, and then starts the application running. The starting point, or entry point, for the executable is specified in the executable file, so after the loader reads the executable into memory it starts the application running by jumping to the entry point's memory address.
One problem the linker can run into is that, while processing the object code files, it may come across a marker that requires an actual memory address. The linker does not know that address, because it will vary depending on where in memory the application is loaded. So the linker marks it as something for the loader to fix up when the loader loads the executable into memory and gets it ready to run.
On modern CPUs with hardware-supported virtual-to-physical address translation, this issue of actual memory addresses is seldom a problem: each application can be loaded at the same virtual address, and the hardware address translation deals with the actual physical address. However, older or lower-cost CPUs such as microcontrollers, which lack a memory management unit (MMU) for address translation, still need this issue addressed.
Entry points and the C Runtime
A final topic is the C Runtime, the main() function, and the executable's entry point.
The C Runtime is object code, provided by the compiler vendor, that contains the entry point for an application written in C. The main() function is the entry point provided by the programmer writing the application, but it is not the entry point that the loader sees. The main() function is called by the C Runtime after the application has started and the C Runtime has set up the environment for the application.
The C Runtime is not the Standard C Library. The purpose of the C Runtime is to manage the runtime environment for the application. The purpose of the Standard C Library is to provide a set of useful utility functions so that a programmer doesn't have to create their own.
When the loader loads the application and jumps to the entry point provided by the C Runtime, the C Runtime performs the various initialization actions needed to provide the proper runtime environment for the application. Once this is done, the C Runtime calls the main() function so that the code written by the application developer starts to run. When main() returns, or when the exit() function is called, the C Runtime performs whatever actions are needed to clean up and close out the application.
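The details of the C Runtime's startup code vary by compiler and platform, and the real thing is usually written at least partly in assembly, but a deliberately simplified sketch of the idea (the function name below is made up; on Linux the conventional entry symbol is _start) looks roughly like this:

```c
/* A deliberately simplified sketch -- not any real vendor's startup code. */
#include <stdlib.h>                       /* for exit() */

extern int main(int argc, char **argv);   /* the programmer's entry point */

void startup_sketch(void)                 /* the loader would jump here, not to main() */
{
    /* 1. set up the runtime environment: stack/heap bookkeeping, stdio,
          initialization of static and global variables, and so on            */
    int argc = 0;                         /* 2. in reality argc/argv come from the OS */
    char **argv = NULL;
    int status = main(argc, argv);        /* 3. hand control to the application's code */
    exit(status);                         /* 4. run cleanup handlers, report exit status */
}
```

The only point of the sketch is the ordering: environment setup first, then the call to main(), then exit() with main()'s return value.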
- Confirmation bias
Confirmation bias (also called confirmatory bias or myside bias) is a tendency for people to favor information that confirms their preconceptions or hypotheses regardless of whether the information is true.[Note 1] As a result, people gather evidence and recall information from memory selectively, and interpret it in a biased way. The biases appear in particular for emotionally significant issues and for established beliefs. For example, in reading about gun control, people usually prefer sources that affirm their existing attitudes. They also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation and/or recall have been invoked to explain attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance (when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a stronger weighting for data encountered early in an arbitrary series) and illusory correlation (in which people falsely perceive an association between two events or situations).
A series of experiments in the 1960s suggested that people are biased towards confirming their existing beliefs. Later work explained these results in terms of a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives. In combination with other effects, this strategy can bias the conclusions that are reached. Explanations for the observed biases include wishful thinking and the limited human capacity to process information. Another proposal is that people show confirmation bias because they are pragmatically assessing the costs of being wrong, rather than investigating in a neutral, scientific way.
Confirmation biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. Hence they can lead to poor decisions, especially in organizational, scientific, military, political and social contexts.
Confirmation biases are effects in information processing, distinct from the behavioral confirmation effect, also called "self-fulfilling prophecy", in which people behave so as to make their expectations come true. Some psychologists use "confirmation bias" to refer to any way in which people avoid rejecting a belief, whether in searching for evidence, interpreting it, or recalling it from memory. Others restrict the term to selective collection of evidence.[Note 2]
Biased search for information
Experiments have repeatedly found that people tend to test hypotheses in a one-sided way, by searching for evidence consistent with the hypothesis they hold at a given time. Rather than searching through all the relevant evidence, they ask questions that are phrased so that an affirmative answer supports their hypothesis. They look for the consequences that they would expect if their hypothesis were true, rather than what would happen if it were false. For example, someone who is trying to identify a number using yes/no questions and suspects that the number is 3 might ask, "Is it an odd number?" People prefer this sort of question, called a "positive test", even when a negative test such as "Is it an even number?" would yield exactly the same information. However, this does not mean that people seek tests that are guaranteed to give a positive answer. In studies where subjects could select either such pseudo-tests or genuinely diagnostic ones, they favored the genuinely diagnostic.
The preference for positive tests is not itself a bias, since positive tests can be highly informative. However, in conjunction with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true. In real-world situations, evidence is often complex and mixed. For example, various contradictory ideas about someone could each be supported by concentrating on one aspect of his or her behavior. Thus any search for evidence in favor of a hypothesis is likely to succeed. One illustration of this is the way the phrasing of a question can significantly change the answer. For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are you unhappy with your social life?"
Even a small change in the wording of a question can affect how people search through available information, and hence the conclusions they reach. This was shown using a fictional child custody case. Subjects read that Parent A was moderately suitable to be the guardian in multiple ways. Parent B had a mix of salient positive and negative qualities: a close relationship with the child but a job that would take him or her away for long periods. When asked, "Which parent should have custody of the child?" the subjects looked for positive attributes and a majority chose Parent B. However, when the question was, "Which parent should be denied custody of the child?" they looked for negative attributes, but again a majority answered Parent B, implying that Parent A should have custody.
Similar studies have demonstrated how people engage in biased search for information, but also that this phenomenon may be limited by a preference for genuine diagnostic tests, where they are available. In an initial experiment, subjects had to rate another person on the introversion-extroversion personality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the subjects chose questions that presumed introversion, such as, "What do you find unpleasant about noisy parties?" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them. However, a later version of the experiment gave the subjects less presumptive questions to choose from, such as, "Do you shy away from social interactions?" Subjects preferred to ask these more diagnostic questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker preference for positive tests, has been replicated in other studies.
Another experiment gave subjects a particularly complex rule-discovery task involving moving objects simulated by a computer. Objects on the computer screen followed specific laws, which the subjects had to figure out. They could "fire" objects across the screen to test their hypotheses. Despite making many attempts over a ten hour session, none of the subjects worked out the rules of the system. They typically sought to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. Even after seeing evidence that objectively refuted their working hypotheses, they frequently continued doing the same tests. Some of the subjects were instructed in proper hypothesis-testing, but these instructions had almost no effect.
Biased interpretation
Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased.
A team at Stanford University ran an experiment with subjects who felt strongly about capital punishment, with half in favor and half against. Each of these subjects read descriptions of two studies; a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the subjects were asked whether their opinions had changed. They then read a much more detailed account of each study's procedure and had to rate how well-conducted and convincing that research was. In fact, the studies were fictional. Half the subjects were told that one kind of study supported the deterrent effect and the other undermined it, while for other subjects the conclusions were swapped.
The subjects, whether proponents or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed descriptions of the two studies, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Subjects described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways. Writing about a study that seemed to undermine the deterrence effect, a death penalty proponent wrote, "The research didn't cover a long enough period of time", while an opponent's comment on the same study said, "No strong evidence to contradict the researchers has been presented". The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as "disconfirmation bias", has been supported by other experiments.
A study of biased interpretation took place during the 2004 US presidential election and involved subjects who described themselves as having strong feelings about the candidates. They were shown apparently contradictory pairs of statements, either from Republican candidate George W. Bush, Democratic candidate John Kerry or a politically neutral public figure. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether or not each individual's statements were inconsistent. There were strong differences in these evaluations, with subjects much more likely to interpret statements by the candidate they opposed as contradictory.
In this experiment, the subjects made their judgments while in a magnetic resonance imaging (MRI) scanner which monitored their brain activity. As subjects evaluated contradictory statements by their favored candidate, emotional centers of their brains were aroused. This did not happen with the statements by the other figures. The experimenters inferred that the different responses to the statements were not due to passive reasoning errors. Instead, the subjects were actively reducing the cognitive dissonance induced by reading about their favored candidate's irrational or hypocritical behavior.
Biased interpretation is not restricted to emotionally significant topics. In another experiment, subjects were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character's guilt, they rated statements supporting that hypothesis as more important than conflicting statements.
Biased memory
Even if someone has sought and interpreted evidence in a neutral manner, they may still remember it selectively to reinforce their expectations. This effect is called "selective recall", "confirmatory memory" or "access-biased memory". Psychological theories differ in their predictions about selective recall. Schema theory predicts that information matching prior expectations will be more easily stored and recalled. Some alternative approaches say that surprising information stands out more and so is more memorable. Predictions from both these theories have been confirmed in different experimental contexts, with no theory winning outright.
In one study, subjects read a profile of a woman which described a mix of introverted and extroverted behaviors. They later had to recall examples of her introversion and extroversion. One group was told this was to assess the woman for a job as a librarian, while a second group were told it was for a job in real estate sales. There was a significant difference between what these two groups recalled, with the "librarian" group recalling more examples of introversion and the "sales" groups recalling more extraverted behavior. A selective memory effect has also been shown in experiments that manipulate the desirability of personality types. In one of these, a group of subjects were shown evidence that extraverted people are more successful than introverts. Another group were told the opposite. In a subsequent, apparently unrelated, study, they were asked to recall events from their lives in which they had been either introverted or extraverted. Each group of subjects provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.
One study showed how selective memory can maintain belief in extrasensory perception (ESP). Believers and disbelievers were each shown descriptions of ESP experiments. Half of each group were told that the experimental results supported the existence of ESP, while the others were told they did not. In a subsequent test, subjects recalled the material accurately, apart from believers who had read the non-supportive evidence. This group remembered significantly less information and some of them incorrectly remembered the results as supporting ESP.
Polarization of opinion
When people with opposing views interpret new information in a biased way, their views can move even further apart. This is called "attitude polarization". The effect was demonstrated by an experiment that involved drawing a series of red and black balls from one of two concealed "bingo baskets". Subjects knew that one basket contained 60% black and 40% red balls; the other, 40% black and 60% red. The experimenters looked at what happened when balls of alternating color were drawn in turn, a sequence that does not favor either basket. After each ball was drawn, subjects in one group were asked to state out loud their judgments of the probability that the balls were being drawn from one or the other basket. These subjects tended to grow more confident with each successive draw—whether they initially thought the basket with 60% black balls or the one with 60% red balls was the more likely source, their estimate of the probability increased. Another group of subjects were asked to state probability estimates only at the end of a sequence of drawn balls, rather than after each ball. They did not show the polarization effect, suggesting that it does not necessarily occur when people simply hold opposing positions, but rather when they openly commit to them.
A less abstract study was the Stanford biased interpretation experiment in which subjects with strong opinions about the death penalty read about mixed experimental evidence. Twenty-three percent of the subjects reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes. In later experiments, subjects also reported their opinions becoming more extreme in response to ambiguous information. However, comparisons of their attitudes before and after the new evidence showed no significant change, suggesting that the self-reported changes might not be real. Based on these experiments, Deanna Kuhn and Joseph Lao concluded that polarization is a real phenomenon but far from inevitable, only happening in a small minority of cases. They found that it was prompted not only by considering mixed evidence, but by merely thinking about the topic.
Charles Taber and Milton Lodge argued that the Stanford team's result had been hard to replicate because the arguments used in later experiments were too abstract or confusing to evoke an emotional response. The Taber and Lodge study used the emotionally charged topics of gun control and affirmative action. They measured the attitudes of their subjects towards these issues before and after reading arguments on each side of the debate. Two groups of subjects showed attitude polarization; those with strong prior opinions and those who were politically knowledgeable. In part of this study, subjects chose which information sources to read, from a list prepared by the experimenters. For example they could read the National Rifle Association's and the Brady Anti-Handgun Coalition's arguments on gun control. Even when instructed to be even-handed, subjects were more likely to read arguments that supported their existing attitudes. This biased search for information correlated well with the polarization effect.
Persistence of discredited beliefs
Confirmation biases can be used to explain why some beliefs remain when the initial evidence for them is removed. This belief perseverance effect has been shown by a series of experiments using what is called the "debriefing paradigm": subjects examine faked evidence for a hypothesis, their attitude change is measured, then they learn that the evidence was fictitious. Their attitudes are then measured once more to see if their belief returns to its previous level.
A typical finding is that at least some of the initial belief remains even after a full debrief. In one experiment, subjects had to distinguish between real and fake suicide notes. The feedback was random: some were told they had done well while others were told they had performed badly. Even after being fully debriefed, subjects were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.
In another study, subjects read job performance ratings of two firefighters, along with their responses to a risk aversion test. These fictional data were arranged to show either a negative or positive association between risk-taking attitudes and job success. Even if these case studies had been true, they would have been scientifically poor evidence. However, the subjects found them subjectively persuasive. When the case studies were shown to be fictional, subjects' belief in a link diminished, but around half of the original effect remained. Follow-up interviews established that the subjects had understood the debriefing and taken it seriously. Subjects seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.
Preference for early information
Experiments have shown that information is weighted more strongly when it appears early in a series, even when the order is unimportant. For example, people form a more positive impression of someone described as "intelligent, industrious, impulsive, critical, stubborn, envious" than when they are given the same words in reverse order. This irrational primacy effect is independent of the primacy effect in memory in which the earlier items in a series leave a stronger memory trace. Biased interpretation offers an explanation for this effect: seeing the initial evidence, people form a working hypothesis that affects how they interpret the rest of the information.
One demonstration of irrational primacy involved colored chips supposedly drawn from two urns. Subjects were told the color distributions of the urns, and had to estimate the probability of a chip being drawn from one of them. In fact, the colors appeared in a pre-arranged order. The first thirty draws favored one urn and the next thirty favored the other. The series as a whole was neutral, so rationally, the two urns were equally likely. However, after sixty draws, subjects favored the urn suggested by the initial thirty.
Another experiment involved a slide show of a single object, seen as just a blur at first and in slightly better focus with each succeeding slide. After each slide, subjects had to state their best guess of what the object was. Subjects whose early guesses were wrong persisted with those guesses, even when the picture was sufficiently in focus that other people could readily identify the object.
Illusory association between events
Illusory correlation is the tendency to see non-existent correlations in a set of data. This tendency was first demonstrated in a series of experiments in the late 1960s. In one experiment, subjects read a set of psychiatric case studies, including responses to the Rorschach inkblot test. They reported that the homosexual men in the set were more likely to report seeing buttocks, anuses or sexually ambiguous figures in the inkblots. In fact the case studies were fictional and, in one version of the experiment, had been constructed so that the homosexual men were less likely to report this imagery. In a survey, a group of experienced psychoanalysts reported the same set of illusory associations with homosexuality.
Another study recorded the symptoms experienced by arthritic patients, along with weather conditions over a 15-month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero.
This effect is a kind of biased interpretation, in that objectively neutral or unfavorable evidence is interpreted to support existing beliefs. It is also related to biases in hypothesis-testing behavior. In judging whether two events, such as illness and bad weather, are correlated, people rely heavily on the number of positive-positive cases: in this example, instances of both pain and bad weather. They pay relatively little attention to the other kinds of observation (of no pain and/or good weather). This parallels the reliance on positive tests in hypothesis testing. It may also reflect selective recall, in that people may have a sense that two events are correlated because it is easier to recall times when they happened together.
Example:

| Days | Rain | No rain |
|---|---|---|
| Arthritis | 14 | 6 |
| No arthritis | 7 | 2 |
In the above fictional example, arthritic symptoms are actually slightly more likely on days with no rain (6 of 8 days, or 75%) than on rainy days (14 of 21 days, about 67%). However, people are likely to focus on the relatively large number of days which have both rain and symptoms. By concentrating on one cell of the table rather than all four, people can misperceive the relationship, in this case associating rain with arthritic symptoms.
History
Before psychological research on confirmation bias, the phenomenon had been observed anecdotally by writers, including the Greek historian Thucydides (c. 460 BC – c. 395 BC), Italian poet Dante Alighieri (1265–1321), English philosopher and scientist Francis Bacon (1561–1626), and Russian author Leo Tolstoy (1828–1910). Thucydides, in the History of the Peloponnesian War, wrote, "it is a habit of mankind ... to use sovereign reason to thrust aside what they do not fancy." In the Divine Comedy, St. Thomas Aquinas cautions Dante when they meet in Paradise, "opinion—hasty—often can incline to the wrong side, and then affection for one's own opinion binds, confines the mind." Bacon, in the Novum Organum, wrote:

The human understanding when it has once adopted an opinion ... draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.]

Tolstoy, in What is Art?, wrote:

I know that most men—not only those considered clever, but even those who are very clever, and capable of understanding most difficult scientific, mathematical, or philosophic problems—can very seldom discern even the simplest and most obvious truth if it be such as to oblige them to admit the falsity of conclusions they have formed, perhaps with much difficulty—conclusions of which they are proud, which they have taught to others, and on which they have built their lives.
Wason's research on hypothesis-testing
The term "confirmation bias" was coined by English psychologist Peter Wason. For an experiment published in 1960, he challenged subjects to identify a rule applying to triples of numbers. At the outset, they were told that (2,4,6) fits the rule. Subjects could generate their own triples and the experimenter told them whether or not each triple conformed to the rule.
While the actual rule was simply "any ascending sequence", the subjects had a great deal of difficulty in arriving at it, often announcing rules that were far more specific, such as "the middle number is the average of the first and last". The subjects seemed to test only positive examples—triples that obeyed their hypothesized rule. For example, if they thought the rule was, "Each number is two greater than its predecessor", they would offer a triple that fit this rule, such as (11,13,15) rather than a triple that violates it, such as (11,12,19).
Wason accepted falsificationism, according to which a scientific test of a hypothesis is a serious attempt to falsify it. He interpreted his results as showing a preference for confirmation over falsification, hence the term "confirmation bias".[Note 3] Wason also used confirmation bias to explain the results of his selection task experiment. In this task, subjects are given partial information about a set of objects, and have to specify what further information they would need to tell whether or not a conditional rule ("If A, then B") applies. It has been found repeatedly that people perform badly on various forms of this test, in most cases ignoring information that could potentially refute the rule.
Klayman and Ha's critique
A 1987 paper by Joshua Klayman and Young-Won Ha argued that the Wason experiments had not actually demonstrated a bias towards confirmation. Instead, Klayman and Ha interpreted the results in terms of a tendency to make tests that are consistent with the working hypothesis. They called this the "positive test strategy". This strategy is an example of a heuristic: a reasoning shortcut that is imperfect but easy to compute.

Klayman and Ha used Bayesian probability and information theory as their standard of hypothesis-testing, rather than the falsificationism used by Wason. According to these ideas, each answer to a question yields a different amount of information, which depends on the person's prior beliefs. Thus a scientific test of a hypothesis is one that is expected to produce the most information. Since the information content depends on initial probabilities, a positive test can either be highly informative or uninformative. Klayman and Ha argued that when people think about realistic problems, they are looking for a specific answer with a small initial probability. In this case, positive tests are usually more informative than negative tests.

However, in Wason's rule discovery task the answer—three numbers in ascending order—is very broad, so positive tests are unlikely to yield informative answers. Klayman and Ha supported their analysis by citing an experiment that used the labels "DAX" and "MED" in place of "fits the rule" and "doesn't fit the rule". This avoided implying that the aim was to find a low-probability rule. Subjects had much more success with this version of the experiment.
In light of this and other critiques, the focus of research moved away from confirmation versus falsification to examine whether people test hypotheses in an informative way, or an uninformative but positive way. The search for "true" confirmation bias led psychologists to look at a wider range of effects in how people process information.
Explanations
Confirmation bias is often described as a result of automatic processing. Individuals do not use deceptive strategies to fake data, but forms of information processing that take place more or less unintentionally. According to Robert Maccoun, most biased evidence processing occurs unintentionally through a combination of both "hot" (i.e., motivated) and "cold" (i.e., cognitive) mechanisms.
Cognitive explanations for confirmation bias are based on limitations in people's ability to handle complex tasks, and the shortcuts, called "heuristics", that they use. For example, people may judge the reliability of evidence by using the availability heuristic, i.e. how readily a particular idea comes to mind. It is also possible that people can only focus on one thought at a time, so find it difficult to test alternative hypotheses in parallel. Another heuristic is the positive test strategy identified by Klayman and Ha, in which people test a hypothesis by examining cases where they expect a property or event to occur. This heuristic avoids the difficult or impossible task of working out how diagnostic each possible question will be. However, it is not universally reliable, so people can overlook challenges to their existing beliefs.
Motivational explanations involve an effect of desire on belief, sometimes called "wishful thinking". It is known that people prefer pleasant thoughts over unpleasant ones in a number of ways: this is called the "Pollyanna principle". Applied to arguments or sources of evidence, this could explain why desired conclusions are more likely to be believed true. According to experiments that manipulate the desirability of the conclusion, people demand a high standard of evidence for unpalatable ideas and a low standard for preferred ideas. In other words, they ask, "Can I believe this?" for some suggestions and, "Must I believe this?" for others. Although consistency is a desirable feature of attitudes, an excessive drive for consistency is another potential source of bias because it may prevent people from neutrally evaluating new, surprising information. Social psychologist Ziva Kunda combines the cognitive and motivational theories, arguing that motivation creates the bias, but cognitive factors determine the size of the effect.
Explanations in terms of cost-benefit analysis assume that people do not just test hypotheses in a disinterested way, but assess the costs of different errors. Using ideas from evolutionary psychology, James Friedrich suggests that people do not primarily aim at truth in testing hypotheses, but try to avoid the most costly errors. For example, employers might ask one-sided questions in job interviews because they are focused on weeding out unsuitable candidates. Yaacov Trope and Akiva Liberman's refinement of this theory assumes that people compare the two different kinds of error: accepting a false hypothesis or rejecting a true hypothesis. For instance, someone who underestimates a friend's honesty might treat him or her suspiciously and so undermine the friendship. Overestimating the friend's honesty may also be costly, but less so. In this case, it would be rational to seek, evaluate or remember evidence of their honesty in a biased way. When someone gives an initial impression of being introverted or extraverted, questions that match that impression come across as more empathic. This suggests that when talking to someone who seems to be an introvert, it is a sign of better social skills to ask, "Do you feel awkward in social situations?" rather than, "Do you like noisy parties?" The connection between confirmation bias and social skills was corroborated by a study of how college students get to know other people. Highly self-monitoring students, who are more sensitive to their environment and to social norms, asked more matching questions when interviewing a high-status staff member than when getting to know fellow students.
In finance
Confirmation bias can lead investors to be overconfident, ignoring evidence that their strategies will lose money. In studies of political stock markets, investors made more profit when they resisted bias. For example, participants who interpreted a candidate's debate performance in a neutral rather than partisan way were more likely to profit. To combat the effect of confirmation bias, investors can try to adopt a contrary viewpoint "for the sake of argument". One such technique involves imagining that their investments have collapsed and asking why this might happen.
In physical and mental health
Raymond Nickerson, a psychologist, blames confirmation bias for the ineffective medical procedures that were used for centuries before the arrival of scientific medicine. If a patient recovered, medical authorities counted the treatment as successful, rather than looking for alternative explanations such as that the disease had run its natural course. Biased assimilation is a factor in the modern appeal of alternative medicine, whose proponents are swayed by positive anecdotal evidence but treat scientific evidence hyper-critically.
Cognitive therapy was developed by Aaron T. Beck in the early 1960s and has become a popular approach. According to Beck, biased information processing is a factor in depression. His approach teaches people to treat evidence impartially, rather than selectively reinforcing negative outlooks. Phobias and hypochondria have also been shown to involve confirmation bias for threatening information.
In politics and law
Nickerson argues that reasoning in judicial and political contexts is sometimes subconsciously biased, favoring conclusions that judges, juries or governments have already committed to. Since the evidence in a jury trial can be complex, and jurors often reach decisions about the verdict early on, it is reasonable to expect an attitude polarization effect. The prediction that jurors will become more extreme in their views as they see more evidence has been borne out in experiments with mock trials. Both inquisitorial and adversarial criminal justice systems are affected by confirmation bias.
Confirmation bias can be a factor in creating or extending conflicts, from emotionally charged debates to wars: by interpreting the evidence in their favor, each opposing party can become overconfident that it is in the stronger position. On the other hand, confirmation bias can result in people ignoring or misinterpreting the signs of an imminent or incipient conflict. For example, psychologists Stuart Sutherland and Thomas Kida have each argued that US Admiral Husband E. Kimmel showed confirmation bias when playing down the first signs of the Japanese attack on Pearl Harbor.
A two-decade study of political pundits by Philip E. Tetlock found that, on the whole, their predictions were not much better than chance. Tetlock divided experts into "foxes" who maintained multiple hypotheses, and "hedgehogs" who were more dogmatic. In general, the hedgehogs were much less accurate. Tetlock blamed their failure on confirmation bias—specifically, their inability to make use of new information that contradicted their existing theories.
In the paranormal
One factor in the appeal of psychic "readings" is that listeners apply a confirmation bias which fits the psychic's statements to their own lives. By making a large number of ambiguous statements in each sitting, the psychic gives the client more opportunities to find a match. This is one of the techniques of cold reading, with which a psychic can deliver a subjectively impressive reading without any prior information about the client. Investigator James Randi compared the transcript of a reading to the client's report of what the psychic had said, and found that the client showed a strong selective recall of the "hits".
As a "striking illustration" of confirmation bias in the real world, Nickerson mentions numerological pyramidology: the practice of finding meaning in the proportions of the Egyptian pyramids. There are many different length measurements that can be made of, for example, the Great Pyramid of Giza and many ways to combine or manipulate them. Hence it is almost inevitable that people who look at these numbers selectively will find superficially impressive correspondences, for example with the dimensions of the Earth.
In scientific procedure
A distinguishing feature of scientific thinking is the search for falsifying as well as confirming evidence. However, many times in the history of science, scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data.

Previous research has shown that the assessment of the quality of scientific studies seems to be particularly vulnerable to confirmation bias. It has been found several times that scientists rate studies that report findings consistent with their prior beliefs more favorably than studies reporting findings inconsistent with their previous beliefs. However, assuming that the research question is relevant, the experimental design adequate and the data are clearly and comprehensively described, the found results should be of importance to the scientific community and should not be viewed prejudicially—regardless of whether they conform to current theoretical predictions. Confirmation bias may thus be especially harmful to objective evaluations regarding nonconforming results, since biased individuals may regard opposing evidence to be weak in principle and give little serious thought to revising their beliefs.

Scientific innovators often meet with resistance from the scientific community, and research presenting controversial results frequently receives harsh peer review. In the context of scientific research, confirmation biases can sustain theories or research programs in the face of inadequate or even contradictory evidence; the field of parapsychology has been particularly affected.

An experimenter's confirmation bias can potentially affect which data are reported. Data that conflict with the experimenter's expectations may be more readily discarded as unreliable, producing the so-called file drawer effect. To combat this tendency, scientific training teaches ways to avoid bias. Experimental designs involving randomization and double blind trials, along with the social process of peer review, are thought to mitigate the effect of individual scientists' biases, although it has been argued that such biases can play a role in the peer review process itself.
In self-image
Social psychologists have identified two tendencies in the way people seek or interpret information about themselves. Self-verification is the drive to reinforce the existing self-image and self-enhancement is the drive to seek positive feedback. Both are served by confirmation biases. In experiments where people are given feedback that conflicts with their self-image, they are less likely to attend to it or remember it than when given self-verifying feedback. They reduce the impact of such information by interpreting it as unreliable. Similar experiments have found a preference for positive feedback, and the people who give it, over negative feedback.
- ^ David Perkins, a professor at the Harvard Graduate School of Education, coined the term "myside bias" referring to a preference for "my" side of an issue. (Baron 2000, p. 195)
- ^ "Assimilation bias" is another term used for biased interpretation of evidence. (Risen & Gilovich 2007, p. 113)
- ^ Wason also used the term "verification bias". (Poletiek 2001, p. 73)
- ^ a b Plous 1993, p. 233
- ^ Darley, John M.; Gross, Paget H. (2000), "A Hypothesis-Confirming Bias in Labelling Effects", in Stangor, Charles, Stereotypes and prejudice: essential readings, Psychology Press, p. 212, ISBN 978-0-86377-589-5, OCLC 42823720
- ^ Risen & Gilovich 2007
- ^ a b c Zweig, Jason (November 19, 2009), "How to Ignore the Yes-Man in Your Head", Wall Street Journal (Dow Jones & Company), http://online.wsj.com/article/SB10001424052748703811604574533680037778184.html, retrieved 2010-06-13
- ^ Nickerson 1998, pp. 177–178
- ^ a b c d Kunda 1999, pp. 112–115
- ^ a b Baron 2000, pp. 162–164
- ^ Kida 2006, pp. 162–165
- ^ Devine, Patricia G.; Hirt, Edward R.; Gehrke, Elizabeth M. (1990), "Diagnostic and confirmation strategies in trait hypothesis testing", Journal of Personality and Social Psychology (American Psychological Association) 58 (6): 952–963, doi:10.1037/0022-35126.96.36.1992, ISSN 1939-1315
- ^ Trope, Yaacov; Bassok, Miriam (1982), "Confirmatory and diagnosing strategies in social information gathering", Journal of Personality and Social Psychology (American Psychological Association) 43 (1): 22–34, doi:10.1037/0022-35188.8.131.52, ISSN 1939-1315
- ^ a b c Klayman, Joshua; Ha, Young-Won (1987), "Confirmation, Disconfirmation and Information in Hypothesis Testing", Psychological Review (American Psychological Association) 94 (2): 211–228, doi:10.1037/0033-295X.94.2.211, ISSN 0033-295X, http://www.stats.org.uk/statistical-inference/KlaymanHa1987.pdf, retrieved 2009-08-14
- ^ a b c Oswald & Grosjean 2004, pp. 82–83
- ^ Kunda, Ziva; Fong, G.T.; Sanitoso, R.; Reber, E. (1993), "Directional questions direct self-conceptions", Journal of Experimental Social Psychology (Society of Experimental Social Psychology) 29: 62–63, ISSN 0022-1031 via Fine 2006, pp. 63–65
- ^ a b Shafir, E. (1983), "Choosing versus rejecting: why some options are both better and worse than others", Memory and Cognition 21 (4): 546–556, PMID 8350746 via Fine 2006, pp. 63–65
- ^ Snyder, Mark; Swann, Jr., William B. (1978), "Hypothesis-Testing Processes in Social Interaction", Journal of Personality and Social Psychology (American Psychological Association) 36 (11): 1202–1212, doi:10.1037/0022-35184.108.40.2062 via Poletiek 2001, p. 131
- ^ a b Kunda 1999, pp. 117–118
- ^ a b Mynatt, Clifford R.; Doherty, Michael E.; Tweney, Ryan D. (1978), "Consequences of confirmation and disconfirmation in a simulated research environment", Quarterly Journal of Experimental Psychology 30 (3): 395–406, doi:10.1080/00335557843000007
- ^ Kida 2006, p. 157
- ^ a b c d e f Lord, Charles G.; Ross, Lee; Lepper, Mark R. (1979), "Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence", Journal of Personality and Social Psychology (American Psychological Association) 37 (11): 2098–2109, doi:10.1037/0022-35220.127.116.118, ISSN 0022-3514
- ^ a b Baron 2000, pp. 201–202
- ^ Vyse 1997, p. 122
- ^ a b c d Taber, Charles S.; Lodge, Milton (July 2006), "Motivated Skepticism in the Evaluation of Political Beliefs", American Journal of Political Science (Midwest Political Science Association) 50 (3): 755–769, doi:10.1111/j.1540-5907.2006.00214.x, ISSN 0092-5853
- ^ a b Westen, Drew; Blagov, Pavel S.; Harenski, Keith; Kilts, Clint; Hamann, Stephan (2006), "Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election", Journal of Cognitive Neuroscience (Massachusetts Institute of Technology) 18 (11): 1947–1958, doi:10.1162/jocn.2006.18.11.1947, PMID 17069484, http://psychsystems.net/lab/06_Westen_fmri.pdf, retrieved 2009-08-14
- ^ Gadenne, V.; Oswald, M. (1986), "Entstehung und Veränderung von Bestätigungstendenzen beim Testen von Hypothesen [Formation and alteration of confirmatory tendencies during the testing of hypotheses]", Zeitschrift für experimentelle und angewandte Psychologie 33: 360–374 via Oswald & Grosjean 2004, p. 89
- ^ Hastie, Reid; Park, Bernadette (2005), "The Relationship Between Memory and Judgment Depends on Whether the Judgment Task is Memory-Based or On-Line", in Hamilton, David L., Social cognition: key readings, New York: Psychology Press, p. 394, ISBN 0-86377-591-8, OCLC 55078722
- ^ a b c Oswald & Grosjean 2004, pp. 88–89
- ^ Stangor, Charles; McMillan, David (1992), "Memory for expectancy-congruent and expectancy-incongruent information: A review of the social and social developmental literatures", Psychological Bulletin (American Psychological Association) 111 (1): 42–61, doi:10.1037/0033-2909.111.1.42
- ^ a b Snyder, M.; Cantor, N. (1979), "Testing hypotheses about other people: the use of historical knowledge", Journal of Experimental Social Psychology 15 (4): 330–342, doi:10.1016/0022-1031(79)90042-8 via Goldacre 2008, p. 231
- ^ Kunda 1999, pp. 225–232
- ^ Sanitioso, Rasyid; Kunda, Ziva; Fong, G.T. (1990), "Motivated recruitment of autobiographical memories", Journal of Personality and Social Psychology (American Psychological Association) 59 (2): 229–241, doi:10.1037/0022-3518.104.22.168, ISSN 0022-3514, PMID 2213492
- ^ a b Russell, Dan; Jones, Warren H. (1980), "When superstition fails: Reactions to disconfirmation of paranormal beliefs", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 6 (1): 83–88, doi:10.1177/014616728061012, ISSN 1552-7433 via Vyse 1997, p. 121
- ^ a b c Kuhn, Deanna; Lao, Joseph (March 1996), "Effects of Evidence on Attitudes: Is Polarization the Norm?", Psychological Science (American Psychological Society) 7 (2): 115–120, doi:10.1111/j.1467-9280.1996.tb00340.x
- ^ Baron 2000, p. 201
- ^ Miller, A.G.; McHoskey, J.W.; Bane, C.M.; Dowd, T.G. (1993), "The attitude polarization phenomenon: Role of response measure, attitude extremity, and behavioral consequences of reported attitude change", Journal of Personality and Social Psychology 64 (4): 561–574, doi:10.1037/0022-3522.214.171.1241
- ^ a b c d Ross, Lee; Anderson, Craig A. (1982), "Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments", in Kahneman, Daniel; Slovic, Paul; Tversky, Amos, Judgment under uncertainty: Heuristics and biases, Cambridge University Press, pp. 129–152, ISBN 978-0-521-28414-1, OCLC 7578020
- ^ a b c d Nickerson 1998, p. 187
- ^ Kunda 1999, p. 99
- ^ Ross, Lee; Lepper, Mark R.; Hubbard, Michael (1975), "Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm", Journal of Personality and Social Psychology (American Psychological Association) 32 (5): 880–892, doi:10.1037/0022-35126.96.36.1990, ISSN 0022-3514, PMID 1185517 via Kunda 1999, p. 99
- ^ a b c d e Baron 2000, pp. 197–200
- ^ a b c Fine 2006, pp. 66–70
- ^ a b Plous 1993, pp. 164–166
- ^ Redelmeier, D. A.; Tversky, Amos (1996), "On the belief that arthritis pain is related to the weather", Proceedings of the National Academy of Sciences 93 (7): 2895–2896, doi:10.1073/pnas.93.7.2895 via Kunda 1999, p. 127
- ^ a b c Kunda 1999, pp. 127–130
- ^ Plous 1993, pp. 162–164
- ^ Adapted from Fiedler, Klaus (2004), "Illusory correlation", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, p. 103, ISBN 978-1-84169-351-4, OCLC 55124398
- ^ a b Baron 2000, pp. 195–196
- ^ Thucydides; Crawley, Richard (trans) (431 BCE), "XIV", The History of the Peloponnesian War, The Internet Classics Archive, http://classics.mit.edu/Thucydides/pelopwar.mb.txt, retrieved 2010-05-27
- ^ Alighieri, Dante. Paradiso canto XIII: 118–120. Trans. Allen Mandelbaum
- ^ a b Bacon, Francis (1620). Novum Organum. reprinted in Burtt, E.A., ed. (1939), The English philosophers from Bacon to Mill, New York: Random House, p. 36 via Nickerson 1998, p. 176
- ^ Tolstoy, Leo. What is Art? p. 124 (1899). In The Kingdom of God Is Within You (1893), he similarly declared, "The most difficult subjects can be explained to the most slow-witted man if he has not formed any idea of them already; but the simplest thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid before him" (ch. 3). Translated from the Russian by Constance Garnett, New York, 1894. Project Gutenberg edition released November 2002. Retrieved 2009-08-24.
Like thermometers in space, satellites are taking the temperature of the Earth's surface, or "skin." According to scientists, the satellite data confirm the Earth has had an increasing "fever" for decades.
Global land surface temperature, July 2003
This image shows land surface temperature for the entire month of July 2003, one of the warmest months on record throughout much of Europe. This image was derived using data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. Credit: NASA
July Temperatures 1982-1998
This map shows averaged land surface temperature for the month of July from the years 1982 through 1998. The image was derived using data from the National Oceanic and Atmospheric Administration's (NOAA) Advanced Very High Resolution Radiometer sensor. Temperatures are in kelvins. Credit: NASA/NOAA
For the first time, satellites have been used to develop an 18-year record (1981-1998) of global land surface temperatures. The record provides additional proof that Earth's snow-free land surfaces have, on average, warmed during this time period, according to a NASA study appearing in the March issue of the Bulletin of the American Meteorological Society. The satellite record is more detailed and comprehensive than previously available ground measurements. The satellite data will be necessary to improve climate analyses and computer modeling.
Menglin Jin, the lead author, is a visiting scientist at NASA's Goddard Space Flight Center, Greenbelt, Md., and a researcher with the University of Maryland, College Park, Md. Jin commented that, until now, global land surface temperatures used in climate change studies were derived from thousands of on-the-ground World Meteorological Organization (WMO) stations located around the world, a relatively sparse set of readings given Earth's size. These stations actually measure surface air temperature at two to three meters above land, rather than skin temperatures. The satellite skin temperature dataset is a good complement to the traditional ways of measuring temperatures.
Krishna Ramanujan | GSFC
Understanding the Importance of the Central Limit Theorem
By Courtney K. Taylor, Ph.D., Professor of Mathematics, Anderson University. Updated June 23, 2019
The central limit theorem is a result from probability theory. This theorem shows up in a number of places in the field of statistics. Although the central limit theorem can seem abstract and devoid of any application, this theorem is actually quite important to the practice of statistics.
So what exactly is the importance of the central limit theorem? It all has to do with the distribution of our population. This theorem allows you to simplify problems in statistics by allowing you to work with a distribution that is approximately normal.
Statement of the Theorem
The statement of the central limit theorem can seem quite technical but can be understood if we think through the following steps. We begin with a simple random sample with n individuals from a population of interest. From this sample, we can easily form a sample mean that corresponds to the mean of whatever measurement we are curious about in our population.
A sampling distribution for the sample mean is produced by repeatedly selecting simple random samples from the same population and of the same size, and then computing the sample mean for each of these samples. These samples are to be thought of as being independent of one another.
The central limit theorem concerns the sampling distribution of the sample means. We may ask about the overall shape of the sampling distribution. The central limit theorem says that this sampling distribution is approximately normal, commonly known as a bell curve. This approximation improves as we increase the size of the simple random samples that are used to produce the sampling distribution.
There is a very surprising feature concerning the central limit theorem. The astonishing fact is that this theorem says that a normal distribution arises regardless of the initial distribution. Even if our population has a skewed distribution, which occurs when we examine things such as incomes or people's weights, a sampling distribution for a sample with a sufficiently large sample size will be normal.
Central Limit Theorem in Practice
The unexpected appearance of a normal distribution from a population distribution that is skewed (even quite heavily skewed) has some very important applications in statistical practice. Many practices in statistics, such as those involving hypothesis testing or confidence intervals, make some assumptions concerning the population that the data was obtained from. One assumption that is initially made in a statistics course is that the populations that we work with are normally distributed. The assumption that data is from a normal distribution simplifies matters but seems a little unrealistic.
Just a little work with some real-world data shows that outliers, skewness, multiple peaks, and asymmetry show up quite routinely. The use of an appropriate sample size and the central limit theorem help us to get around the problem of data from populations that are not normal. Thus, even though we might not know the shape of the distribution where our data comes from, the central limit theorem says that we can treat the sampling distribution as if it were normal. Of course, in order for the conclusions of the theorem to hold, we do need a sample size that is large enough. Exploratory data analysis can help us to determine how large of a sample is necessary for a given situation.
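To make this concrete, the short simulation below draws repeated samples from a heavily skewed (exponential) population and looks at the resulting sample means. It is only an illustrative sketch: the exponential population, the sample size of 50, the 10,000 repetitions, and the random seed are arbitrary choices made for this example, not something specified in the article.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n = 50              # size of each simple random sample
num_samples = 10_000

# The population is exponential with mean 2 and standard deviation 2:
# heavily right-skewed, nothing like a bell curve.
samples = rng.exponential(scale=2.0, size=(num_samples, n))
sample_means = samples.mean(axis=1)

print("population mean:                          ", 2.0)
print("mean of the sample means:                 ", round(sample_means.mean(), 3))
print("predicted std of sample means (2/sqrt(n)):", round(2.0 / np.sqrt(n), 3))
print("observed std of the sample means:         ", round(sample_means.std(ddof=1), 3))

# Rough normality check: a normal distribution puts about 68% of its mass
# within one standard deviation of its mean.
within_one_se = np.mean(np.abs(sample_means - 2.0) <= 2.0 / np.sqrt(n))
print("fraction within one standard error:       ", round(within_one_se, 3))
```

Despite the strong skew of the underlying population, the sample means should cluster around the population mean with a spread close to sigma divided by the square root of n, and the fraction within one standard error should come out near 0.68, just as a normal distribution would require.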
Basics of the Geostationary Orbit
By Dr. T.S. Kelso
Few aspects of the Space Age have had as much impact on our everyday lives as the invention of the communications satellite. In just a few short decades, they have brought together even the most far-flung reaches of the globe in ways that not that long ago were barely imaginable. In fact, today it is possible to talk directly to climbers at the top of Mount Everest or communicate via the Internet with virtually any computer system on the face of the planet—all with the help of communications satellites.
While communications satellites perform their missions in many types of orbits, from near-earth constellations like Iridium and Globalstar to the highly-inclined, eccentric Molniya orbits used by the Russian Federation, one of the more important classes of orbits for these satellites is the geostationary orbit. In this column, I'd like to examine the unique aspects of this class of orbit which make it suitable for not only satellite communications, but early warning and weather observations, too.
The concept of the geostationary orbit has been around since the early part of the twentieth century. Apparently, the concept was originated by Russian theorist Konstantin Tsiolkovsky—who wrote numerous science and science-fiction articles on space travel at the turn of the century. In the 1920s, Hermann Oberth and Herman Potocnik—perhaps better known by his pseudonym, Herman Noordung—wrote about space stations which maintained a unique vantage over the earth.1 Each author described an orbit at an altitude of 35,900 kilometers whose period exactly matched the earth's rotational period, making it appear to hover over a fixed point on the earth's equator.
However, the person most widely given credit for the concept of using this orbit for communications is Arthur C. Clarke. In an article he published in Wireless World in October 1945 titled "Extra-Terrestrial Relays: Can Rocket Stations Give World-wide Radio Coverage?" Clarke extrapolates from the German rocket research of the time to a day when communications around the world would be possible via a network of three geostationary satellites spaced at equal intervals around the earth's equator (see Figure 1).
Figure 1. Original figure from Clarke's article in the October 1945 edition of Wireless World2
In this article, Clarke not only determines the orbital characteristics necessary for such an orbit, but also discusses the frequencies and power needed for communications to and from space and how to use solar illumination for power—he even calculates the impact of solar eclipses around the vernal and autumnal equinoxes. What makes this article all the more remarkable is that Clarke wrote it more than a dozen years before the first satellite was even launched.
It wasn't until 1963 that NASA set out to test Clarke's concept with the Synchronous Communications Satellite program. Unfortunately, Syncom 1—launched 1963 February 14—reached an inclined, eccentric geosynchronous orbit but failed due to an electronics problem. Syncom 2—launched 1963 July 26—became the first operational geosynchronous communications satellite. Syncom 3—launched 1964 August 19—became the first geostationary satellite, finally fulfilling the prediction made by Clarke almost twenty years earlier.
So just what is a geostationary orbit? In general terms, it is a special orbit for which any satellite in that orbit will appear to hover stationary over a point on the earth's surface. Unlike all other classes of orbits, however, where there can be a family of orbits, there is only one geostationary orbit. Let's examine this orbit's unique characteristics.
For any orbit to be geostationary, it must first be geosynchronous. A geosynchronous orbit is any orbit which has a period equal to the earth's rotational period. As we shall soon see, this requirement is not sufficient to ensure a fixed position relative to the earth. While all geostationary orbits must be geosynchronous, not all geosynchronous orbits are geostationary. Unfortunately, these terms are often used interchangeably.
Before continuing, it is necessary to clarify what is meant by "the earth's rotational period." For most timekeeping, we consider the earth's rotation to be measured relative to the sun's (mean) position. However, since the sun moves relative to the stars (inertial space) as a result of the earth's orbit, one mean solar day is not the rotational period that we're interested in. A geosynchronous satellite completes one orbit around the earth in the same time that it takes the earth to make one rotation in inertial (or fixed) space. This time period is known as one sidereal day and is equivalent to 23h56m04s of mean solar time (for more information, see "Orbital Coordinate Systems, Part I" in the September/October 1995 issue of Satellite Times). Without any other influences, the earth will be oriented the same way in inertial space each time a satellite with this period returns to a particular point in its orbit.
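As a quick check (not part of the original column), the sidereal day can be recovered from the mean solar day by noting that, over one year, the earth makes one more rotation relative to the stars than it does relative to the sun:

```python
# The earth completes ~366.2422 rotations relative to the stars in the
# ~365.2422 mean solar days of one year, so a sidereal day is slightly shorter.
MEAN_SOLAR_DAY = 86_400.0  # seconds
sidereal_day = MEAN_SOLAR_DAY * 365.2422 / 366.2422
print(round(sidereal_day, 1))   # ~86164.1 s, i.e. 23h 56m 04s
```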
To ensure that a satellite remains over a particular point on the earth's surface, the orbit must also be circular and have zero inclination. Figure 2 shows the difference between a geostationary orbit (GSO) and a geosynchronous orbit (GEO) with an inclination of 20 degrees. Both are circular orbits. While each satellite will complete its orbit in the same time it takes the earth to rotate once, it should be obvious that the geosynchronous satellite will move north and south of the equator during its orbit while the geostationary satellite will not.
Figure 2. Geostationary and Geosynchronous Orbits
Orbits with non-zero eccentricity (i.e., elliptical rather than circular orbits) will result in drifts east and west as the satellite goes faster or slower at various points in its orbit. Combinations of non-zero inclination and eccentricity will all result in movement relative to a fixed ground point.
Figure 3 shows some typical results. The figure-eight ground track is that of the geosynchronous orbit (GEO) shown in Figure 2. The geostationary satellite (GSO) sits fixed at the crossover point of the figure eight (over the equator). If we now give the geosynchronous satellite an eccentricity of 0.10, the slanted teardrop shape results. Typically, eccentric geosynchronous orbits will result in a slanted figure eight—this one just happens to have the crossover point at the northern apex of the ground track.
Figure 3. Geosynchronous Ground Tracks
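To see numerically why an inclined, circular geosynchronous orbit traces a figure eight on the ground, the small sketch below computes the sub-satellite latitude and the east-west longitude offset over one orbit. It is an illustrative calculation only; the 20-degree inclination simply matches the GEO example above, and the formulas assume a perfectly spherical, uniformly rotating earth.

```python
import math

INCLINATION = math.radians(20.0)   # inclination of the example geosynchronous orbit

def ground_track_point(u):
    """Sub-satellite latitude and longitude offset (degrees) at argument of
    latitude u (radians), for a circular orbit whose period equals one sidereal day."""
    lat = math.asin(math.sin(INCLINATION) * math.sin(u))
    # Inertial longitude measured from the ascending node, minus the rotation of
    # the earth, which (because the periods match) advances at the same mean rate as u.
    lon = math.atan2(math.cos(INCLINATION) * math.sin(u), math.cos(u)) - u
    lon = (lon + math.pi) % (2 * math.pi) - math.pi   # wrap into [-180, 180) degrees
    return math.degrees(lat), math.degrees(lon)

for step in range(9):                      # sample one full orbit in 45-degree steps
    u = step * 2.0 * math.pi / 8.0
    lat, lon = ground_track_point(u)
    print(f"u = {math.degrees(u):5.1f}  lat = {lat:6.2f}  lon offset = {lon:6.2f}")
```

The latitude swings between plus and minus the inclination while the longitude offset oscillates a degree or two east and west, which is exactly the figure-eight ground track of Figure 3; setting the inclination to zero collapses the track to a single point, the geostationary case.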
It should now be apparent that only satellites which orbit with a period equal to the earth's rotational period and with zero eccentricity and inclination can be geostationary satellites. As such, there is only one geostationary orbit—a belt circling the earth's equator at an altitude of roughly 35,786 kilometers.
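The 35,786-kilometer figure follows directly from Kepler's third law applied to the sidereal period. A minimal check, using standard values for the earth's gravitational parameter and equatorial radius (again, an illustration rather than part of the original column):

```python
import math

MU_EARTH = 398_600.4418        # earth's gravitational parameter, km^3/s^2
T_SIDEREAL = 86_164.0905       # one sidereal day, seconds
R_EARTH_EQ = 6_378.137         # earth's equatorial radius, km

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3/mu)  =>  a = (mu*T^2 / (4*pi^2))^(1/3)
a = (MU_EARTH * T_SIDEREAL**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

print(f"orbital radius:  {a:,.0f} km")                 # ~42,164 km
print(f"altitude:        {a - R_EARTH_EQ:,.0f} km")    # ~35,786 km above the equator
```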
It should also be clear that it is not possible to orbit a satellite which is stationary over a point which is not on the equator. This limitation is not serious, however, since most of the earth's surface is visible from geostationary orbit. In fact, a single geostationary satellite can see 42 percent of the earth's surface and a constellation of geostationary satellites—like the one Clarke suggested—can see all of the earth's surface between 81° S and 81° N.
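Both numbers come from simple spherical geometry: a satellite at radius R + h sees the spherical cap bounded by the central angle whose cosine is R/(R + h). A short check with round values (an illustrative sketch, not the author's derivation):

```python
import math

R = 6_378.137      # earth's equatorial radius, km
h = 35_786.0       # approximate geostationary altitude, km

cos_theta_max = R / (R + h)                      # horizon condition for a spherical earth
theta_max = math.degrees(math.acos(cos_theta_max))
visible_fraction = (1.0 - cos_theta_max) / 2.0   # cap area divided by total sphere area

print(f"maximum central angle: {theta_max:.1f} degrees")            # ~81.3 deg, the highest visible latitude
print(f"visible fraction of the surface: {visible_fraction:.1%}")   # ~42%
```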
Of course, the advantage of a satellite in a geostationary orbit is that it remains stationary relative to the earth's surface. This makes it an ideal orbit for communications since it will not be necessary to track the satellite to determine where to point an antenna. However, there are some disadvantages. Perhaps the first is the long distance between the satellite and the ground. With sufficient power or a large enough antenna, though, this limitation can be overcome.
The fact that there is only one geostationary orbit presents a more serious limitation. Just as in putting beads on a loop of string, there are only so many slots into which geostationary satellites can be placed. The primary limitation here is spacing satellites along the geostationary belt so that the limited frequencies allocated to this purpose don't result in interference between satellites on uplink or downlink. Of course, we also want to make sure the satellites aren't close enough to run into one another since they will have some small movement.
While new communications satellites may be placed in a true geostationary orbit initially, there are several forces which act to alter their orbits over time. Since the geostationary orbital plane is not coincident with the plane of the earth's orbit (the ecliptic) or that of the moon's orbit, the gravitational attraction of the sun and the moon act to pull the geostationary satellites out of their equatorial orbit, gradually increasing each satellite's orbital inclination. In addition, the noncircular shape of the earth's equator causes these satellites to be slowly drawn to one of two stable equilibrium points along the equator, resulting in an east-west libration (drifting back and forth) about these points.
To counteract these perturbations, sufficient fuel is loaded into all geostationary satellites to periodically correct any changes over the planned lifetime of the satellite. These periodic corrections are known as stationkeeping. North-south stationkeeping corrects the slowly increasing inclination back to zero and east-west stationkeeping keeps the satellite at its assigned position within the geostationary belt. These maneuvers are planned to maintain the geostationary satellite within a small distance of its ideal location (both north-south and east-west). This tolerance is normally designed to ensure the satellite remains within the ground antenna beamwidth without tracking.
Once the satellite has exhausted its fuel, its inclination will begin to grow and it will begin to drift in longitude and may present a threat to other geostationary satellites. Oftentimes, geostationary satellites are boosted into a slightly higher orbit at the end of their planned lifetime to prevent them causing havoc with other geostationary satellites. This final maneuver assumes that no unplanned failure has occurred which would prevent it (such as a power or communications failure).
This initial article on geostationary and geosynchronous orbits should give you a basic understanding of some of the fundamental orbital concepts. In our next column, I would like to continue this topic by examining the relationship among the observer, satellite, and the sun to determine a geostationary satellite's longitude, the look angles from a terrestrial observer, and how the position of the sun can affect onboard power management and interference with satellite communications.
As always, if you have any questions, please feel free to write me at TS.Kelso@celestrak.com. Until next time, keep looking up!
1 Oberth, Hermann. Die Rakete zu den Planetenräumen (The Rocket into Interplanetary Space), 1923. Noordung, Herman. Das Problem der Befahrung des Weltraums (The Problem of Space Travel), 1929.
2 Clarke, Arthur C. "Extra-Terrestrial Relays: Can Rocket Stations Give World-wide Radio Coverage?" Wireless World, October 1945, p. 306.
The Dust Bowl was a period of severe dust storms that greatly damaged the ecology and agriculture of the American and Canadian prairies during the 1930s; severe drought and a failure to apply dryland farming methods to prevent the aeolian processes (wind erosion) caused the phenomenon. The drought came in three waves, 1934, 1936, and 1939–1940, but some regions of the high plains experienced drought conditions for as many as eight years. With insufficient understanding of the ecology of the plains, farmers had conducted extensive deep plowing of the virgin topsoil of the Great Plains during the previous decade; this had displaced the native, deep-rooted grasses that normally trapped soil and moisture even during periods of drought and high winds. The rapid mechanization of farm equipment, especially small gasoline tractors, and widespread use of the combine harvester contributed to farmers' decisions to convert arid grassland (much of which received no more than 10 inches (~250 mm) of precipitation per year) to cultivated cropland.
During the drought of the 1930s, the unanchored soil turned to dust, which the prevailing winds blew away in huge clouds that sometimes blackened the sky. These choking billows of dust – named "black blizzards" or "black rollers" – traveled cross country, reaching as far as the East Coast and striking such cities as New York City and Washington, D.C. On the plains, they often reduced visibility to 3 feet (1 m) or less. Associated Press reporter Robert E. Geiger happened to be in Boise City, Oklahoma, to witness the "Black Sunday" black blizzards of April 14, 1935; Edward Stanley, Kansas City news editor of the Associated Press coined the term "Dust Bowl" while rewriting Geiger's news story. While the term "the Dust Bowl" was originally a reference to the geographical area affected by the dust, today it usually refers to the event itself (the term "Dirty Thirties" is also sometimes used).
The drought and erosion of the Dust Bowl affected 100,000,000 acres (400,000 km2) that centered on the panhandles of Texas and Oklahoma and touched adjacent sections of New Mexico, Colorado, and Kansas.
The Dust Bowl forced tens of thousands of poverty-stricken families to abandon their farms, unable to pay mortgages or grow crops, and losses reached $25 million per day by 1936 (equivalent to $450,000,000 in 2018). Many of these families, who were often known as "Okies" because so many of them came from Oklahoma, migrated to California and other states to find that the Great Depression had rendered economic conditions there little better than those they had left.
The Dust Bowl has been the subject of many cultural works, notably the novel The Grapes of Wrath (1939) by John Steinbeck, the folk music of Woody Guthrie, and photographs depicting the conditions of migrants by Dorothea Lange.
Geographic characteristics and early history
The Dust Bowl area lies principally west of the 100th meridian on the High Plains, characterized by plains which vary from rolling in the north to flat in the Llano Estacado. Elevation ranges from 2,500 feet (760 m) in the east to 6,000 feet (1,800 m) at the base of the Rocky Mountains. The area is semiarid, receiving less than 20 inches (510 mm) of rain annually; this rainfall supports the shortgrass prairie biome originally present in the area. The region is also prone to extended drought, alternating with unusual wetness of equivalent duration. During wet years, the rich soil provides bountiful agricultural output, but crops fail during dry years. The region is also subject to high winds. During early European and American exploration of the Great Plains, this region was thought unsuitable for European-style agriculture; explorers called it the Great American Desert. The lack of surface water and timber made the region less attractive than other areas for pioneer settlement and agriculture.
The federal government encouraged settlement and development of the Plains for agriculture via the Homestead Act of 1862, offering settlers 160-acre (65 ha) plots. With the end of the Civil War in 1865 and the completion of the First Transcontinental Railroad in 1869, waves of new migrants and immigrants reached the Great Plains, and they greatly increased the acreage under cultivation. An unusually wet period in the Great Plains mistakenly led settlers and the federal government to believe that "rain follows the plow" (a popular phrase among real estate promoters) and that the climate of the region had changed permanently. While initial agricultural endeavors were primarily cattle ranching, the adverse effect of harsh winters on the cattle, beginning in 1886, a short drought in 1890, and general overgrazing, led many landowners to increase the amount of land under cultivation.
Recognizing the challenge of cultivating marginal arid land, the United States government expanded on the 160 acres (65 ha) offered under the Homestead Act – granting 640 acres (260 ha) to homesteaders in western Nebraska under the Kinkaid Act (1904) and 320 acres (130 ha) elsewhere in the Great Plains under the Enlarged Homestead Act (1909). Waves of European settlers arrived in the plains at the beginning of the 20th century. A return of unusually wet weather seemingly confirmed a previously held opinion that the "formerly" semiarid area could support large-scale agriculture. At the same time, technological improvements such as mechanized plowing and mechanized harvesting made it possible to operate larger properties without increasing labor costs.
The combined effects of the disruption of the Russian Revolution, which decreased the supply of wheat and other commodity crops, and World War I increased agricultural prices; this demand encouraged farmers to dramatically increase cultivation. For example, in the Llano Estacado of eastern New Mexico and northwestern Texas, the area of farmland was doubled between 1900 and 1920, then tripled again between 1925 and 1930. The agricultural methods favored by farmers during this period created the conditions for large-scale erosion under certain environmental conditions. The widespread conversion of the land by deep plowing and other soil preparation methods to enable agriculture eliminated the native grasses which held the soil in place and helped retain moisture during dry periods. Furthermore, cotton farmers left fields bare during winter months, when winds in the High Plains are highest, and burned the stubble as a means to control weeds prior to planting, thereby depriving the soil of organic nutrients and surface vegetation.
Drought and dust storms
After fairly favourable climatic conditions in the 1920s with good rainfall and relatively moderate winters, which permitted increased settlement and cultivation in the Great Plains, the region entered an unusually dry era in the summer of 1930. During the next decade, the northern plains suffered four of their seven driest calendar years since 1895, Kansas four of its twelve driest, and the entire region south to West Texas lacked any period of above-normal rainfall until record rains hit in 1941. When severe drought struck the Great Plains region in the 1930s, it resulted in erosion and loss of topsoil because of farming practices at the time. The drought dried the topsoil and over time it became friable, reduced to a powdery consistency in some places. Without the indigenous grasses in place, the high winds that occur on the plains picked up the topsoil and created the massive dust storms that marked the Dust Bowl period. The persistent dry weather caused crops to fail, leaving the plowed fields exposed to wind erosion. The fine soil of the Great Plains was easily eroded and carried east by strong continental winds.
On November 11, 1933, a very strong dust storm stripped topsoil from desiccated South Dakota farmlands in just one of a series of severe dust storms that year. Beginning on May 9, 1934, a strong, two-day dust storm removed massive amounts of Great Plains topsoil in one of the worst such storms of the Dust Bowl. The dust clouds blew all the way to Chicago, where they deposited 12 million pounds of dust (~ 5500 tonnes). Two days later, the same storm reached cities to the east, such as Cleveland, Buffalo, Boston, New York City, and Washington, D.C. That winter (1934–1935), red snow fell on New England.
On April 14, 1935, known as "Black Sunday", 20 of the worst "black blizzards" occurred across the entire sweep of the Great Plains, from Canada south to Texas. The dust storms caused extensive damage and turned the day to night; witnesses reported that they could not see five feet in front of them at certain points. Denver-based Associated Press reporter Robert E. Geiger happened to be in Boise City, Oklahoma, that day. His story about Black Sunday marked the first appearance of the term Dust Bowl; it was coined by Edward Stanley, Kansas City news editor of the Associated Press, while rewriting Geiger's news story.
Spearman and Hansford County have been literaly [sic] in a cloud of dust for the past week. Ever since Friday of last week, there hasn't been a day pass but what the county was beseieged [sic] with a blast of wind and dirt. On rare occasions when the wind did subside for a period of hours, the air has been so filled with dust that the town appeared to be overhung by a fog cloud. Because of this long seige of dust and every building being filled with it, the air has become stifling to breathe and many people have developed sore throats and dust colds as a result.— Spearman Reporter, March 21, 1935
Much of the farmland was eroded in the aftermath of the Dust Bowl. In 1941, a Kansas agricultural experiment station released a bulletin that suggested reestablishing native grasses by the "hay method". Developed in 1937 to speed up the process and increase returns from pasture, the "hay method" was originally supposed to occur in Kansas naturally over 25–40 years. After much data analysis, the causal mechanism for the droughts can be linked to ocean temperature anomalies. Specifically, Atlantic Ocean sea surface temperatures appear to have had an indirect effect on the general atmospheric circulation, while Pacific sea surface temperatures seem to have had the most direct influence.
This catastrophe intensified the economic impact of the Great Depression in the region.
In 1935, many families were forced to leave their farms and travel to other areas seeking work because of the drought (which at that time had already lasted four years). The abandonment of homesteads and financial ruin resulting from catastrophic topsoil loss led to widespread hunger and poverty. Dust Bowl conditions fomented an exodus of the displaced from Texas, Oklahoma, and the surrounding Great Plains to adjacent regions. More than 500,000 Americans were left homeless. Over 350 houses had to be torn down after one storm alone. The severe drought and dust storms had left many homeless; others had their mortgages foreclosed by banks, or felt they had no choice but to abandon their farms in search of work. Many Americans migrated west looking for work. Parents packed up "jalopies" with their families and a few personal belongings, and headed west in search of work. Some residents of the Plains, especially in Kansas and Oklahoma, fell ill and died of dust pneumonia or malnutrition.
The Dust Bowl exodus was the largest migration in American history within a short period of time. Between 1930 and 1940, approximately 3.5 million people moved out of the Plains states; of those, it is unknown how many moved to California. In just over a year, over 86,000 people migrated to California. This number is more than the number of migrants to that area during the 1849 Gold Rush. Migrants abandoned farms in Oklahoma, Arkansas, Missouri, Iowa, Nebraska, Kansas, Texas, Colorado, and New Mexico, but were often generally referred to as "Okies", "Arkies", or "Texies". Terms such as "Okies" and "Arkies" came to be known in the 1930s as the standard terms for those who had lost everything and were struggling the most during the Great Depression.
Not all migrants traveled long distances; some simply went to the next town or county. So many families left their farms and were on the move that the proportion between migrants and residents was nearly equal in the Great Plains states.
Characteristics of migrants
Historian James N. Gregory examined Census Bureau statistics and other records to learn more about the migrants. Based on a 1939 survey of occupation by the Bureau of Agricultural Economics of about 116,000 families who arrived in California in the 1930s, he learned that only 43 percent of southwesterners were doing farm work immediately before they migrated. Nearly one-third of all migrants were professional or white-collar workers. The poor economy displaced more than just farmers as refugees to California; many teachers, lawyers, and small business owners moved west with their families during this time. After the Great Depression ended, some moved back to their original states. Many others remained where they had resettled. About one-eighth of California's population is of Okie heritage.
U.S. government response
The greatly expanded participation of government in land management and soil conservation was an important outcome of the disaster. Different groups took many different approaches to responding to the disaster. To identify areas that needed attention, groups such as the Soil Conservation Service generated detailed soil maps and took photos of the land from the sky. To create shelterbelts to reduce soil erosion, groups such as the United States Forestry Service's Prairie States Forestry Project planted trees on private lands. Finally, groups like the Resettlement Administration, which later became the Farm Security Administration, encouraged small farm owners to resettle on other lands if they lived in drier parts of the Plains.
During President Franklin D. Roosevelt's first 100 days in office in 1933, his administration quickly initiated programs to conserve soil and restore the ecological balance of the nation. Interior Secretary Harold L. Ickes established the Soil Erosion Service in August 1933 under Hugh Hammond Bennett. In 1935, it was transferred and reorganized under the Department of Agriculture and renamed the Soil Conservation Service. It is now known as the Natural Resources Conservation Service (NRCS).
As part of New Deal programs, Congress passed the Soil Conservation and Domestic Allotment Act in 1936, requiring landowners to share the allocated government subsidies with the laborers who worked on their farms. Under the law, "benefit payments were continued as measures for production control and income support, but they were now financed by direct Congressional appropriations and justified as soil conservation measures. The Act shifted the parity goal from price equality of agricultural commodities and the articles that farmers buy to income equality of farm and non-farm population." Thus, the parity goal was to re-create the ratio between the purchasing power of the net income per person on farms from agriculture and that of the income of persons not on farms that prevailed during 1909–1914.
To stabilize prices, the government paid farmers and ordered more than six million pigs to be slaughtered. It paid to have the meat packed and distributed to the poor and hungry. The Federal Surplus Relief Corporation (FSRC) was established to regulate crop and other surpluses. FDR in an address on the AAA commented,
Let me make one other point clear for the benefit of the millions in cities who have to buy meats. Last year the Nation suffered a drought of unparalleled intensity. If there had been no Government program, if the old order had obtained in 1933 and 1934, that drought on the cattle ranges of America and in the corn belt would have resulted in the marketing of thin cattle, immature hogs and the death of these animals on the range and on the farm, and if the old order had been in effect those years, we would have had a vastly greater shortage than we face today. Our program – we can prove it – saved the lives of millions of head of livestock. They are still on the range, and other millions of heads are today canned and ready for this country to eat.
The FSRC diverted agricultural commodities to relief organizations. Apples, beans, canned beef, flour and pork products were distributed through local relief channels. Cotton goods were later included, to clothe the needy.
In 1935, the federal government formed a Drought Relief Service (DRS) to coordinate relief activities. The DRS bought cattle in counties which were designated emergency areas, for $14 to $20 a head. Animals determined unfit for human consumption were killed; at the beginning of the program, more than 50 percent were so designated in emergency areas. The DRS assigned the remaining cattle to the Federal Surplus Relief Corporation (FSRC) to be used in food distribution to families nationwide. Although it was difficult for farmers to give up their herds, the cattle slaughter program helped many of them avoid bankruptcy. "The government cattle buying program was a blessing to many farmers, as they could not afford to keep their cattle, and the government paid a better price than they could obtain in local markets."
President Roosevelt ordered the Civilian Conservation Corps to plant a huge belt of more than 200 million trees from Canada to Abilene, Texas to break the wind, hold water in the soil, and hold the soil itself in place. The administration also began to educate farmers on soil conservation and anti-erosion techniques, including crop rotation, strip farming, contour plowing, terracing, and other improved farming practices. In 1937, the federal government began an aggressive campaign to encourage farmers in the Dust Bowl to adopt planting and plowing methods that conserved the soil. The government paid reluctant farmers a dollar an acre to practice the new methods. By 1938, the massive conservation effort had reduced the amount of blowing soil by 65%. The land still failed to yield a decent living. In the fall of 1939, after nearly a decade of dirt and dust, the drought ended when regular rainfall finally returned to the region. The government still encouraged continuing the use of conservation methods to protect the soil and ecology of the Plains.
At the end of the drought, the programs which were implemented during these tough times helped to sustain a positive relationship between America's farmers and the federal government.
The President's Drought Committee issued a report in 1935 covering the government's assistance to agriculture during 1934 through mid-1935: it discussed conditions, measures of relief, organization, finances, operations, and results of the government's assistance. Numerous exhibits are included in this report.
Long-term economic impact
In many regions, more than 75% of the topsoil was blown away by the end of the 1930s. Land degradation varied widely. Aside from the short-term economic consequences caused by erosion, there were severe long-term economic consequences caused by the Dust Bowl.
By 1940, counties that had experienced the most significant levels of erosion had a greater decline in agricultural land values. The per-acre value of farmland declined by 28% in high-erosion counties and 17% in medium-erosion counties, relative to land value changes in low-erosion counties. Even over the long term, the agricultural value of the land often failed to recover to pre-Dust Bowl levels. In highly eroded areas, less than 25% of the original agricultural losses were recovered. The economy adjusted predominantly through large relative population declines in more-eroded counties, both during the 1930s and through the 1950s.
The economic effects persisted, in part, because of farmers' failure to switch to more appropriate crops for highly eroded areas. Because the amount of topsoil had been reduced, it would have been more productive to shift from crops and wheat to animals and hay. During the Depression and through at least the 1950s, there was limited relative adjustment of farmland away from activities that became less productive in more-eroded counties.
Some of the failure to shift to more productive agricultural products may be related to ignorance about the benefits of changing land use. A second explanation is a lack of availability of credit, caused by the high rate of failure of banks in the Plains states. Because banks failed in the Dust Bowl region at a higher rate than elsewhere, farmers could not get the credit they needed to buy capital to shift crop production. In addition, profit margins in either animals or hay were still minimal, and farmers had little incentive in the beginning to change their crops.
Capital-intensive agribusiness had transformed the scene; deep wells into the aquifer, intensive irrigation, the use of artificial pesticides and fertilizers, and giant harvesters were creating immense crops year after year whether it rained or not. According to the farmers interviewed by environmental historian Donald Worster, technology had provided the perfect answer to old troubles, so the bad days would not return. In Worster's view, by contrast, the scene demonstrated that America's capitalist high-tech farmers had learned nothing. They were continuing to work in an unsustainable way, devoting far cheaper subsidized energy to growing food than the energy could give back to its ultimate consumers.
In contrast with Worster's pessimism, historian Mathew Bonnifield argued that the long-term significance of the Dust Bowl was "the triumph of the human spirit in its capacity to endure and overcome hardships and reverses."
Influence on the arts and culture
The crisis was documented by photographers, musicians, and authors, many hired during the Great Depression by the federal government. For instance, the Farm Security Administration hired numerous photographers to document the crisis. Artists such as Dorothea Lange were aided by having salaried work during the Depression. She captured what have become classic images of the dust storms and migrant families. Among her most well-known photographs is Destitute Pea Pickers in California. Mother of Seven Children, which depicted a gaunt-looking woman, Florence Owens Thompson, holding three of her children. This picture expressed the struggles of people caught by the Dust Bowl and raised awareness in other parts of the country of its reach and human cost. Decades later, Thompson disliked the boundless circulation of the photo and resented the fact that she did not receive any money from its publication. She felt it cast her as a Dust Bowl "Okie."
The work of independent artists was also influenced by the crises of the Dust Bowl and the Depression. Author John Steinbeck wrote The Grapes of Wrath (1939) about migrant workers and farm families displaced by the Dust Bowl. Sanora Babb's own novel about the lives of the migrant workers, Whose Names Are Unknown, was written in 1939 but was eclipsed and shelved in response to the success of Steinbeck's work, and was finally published in 2004. Many of the songs of folk singer Woody Guthrie, such as those on his 1940 album Dust Bowl Ballads, are about his experiences in the Dust Bowl era during the Great Depression when he traveled with displaced farmers from Oklahoma to California and learned their traditional folk and blues songs, earning him the nickname the "Dust Bowl Troubadour".
Migrants also influenced musical culture wherever they went. Oklahoma migrants, in particular, were rural Southwesterners who carried their traditional country music to California. Today, the "Bakersfield Sound" describes this blend, which developed after the migrants brought country music to the city. Their new music inspired a proliferation of country dance halls as far south as Los Angeles.
The 2014 science fiction film Interstellar features a ravaged 21st-century America which is again scoured by dust storms (caused by a worldwide pathogen affecting all crops). Along with inspiration from the 1930s crisis, director Christopher Nolan features interviews from the 2012 documentary The Dust Bowl to draw further parallels.
In 2017, Americana recording artist Grant Maloy Smith released the album Dust Bowl – American Stories, which was inspired by the history of the Dust Bowl. In a review, the music magazine No Depression wrote that the album's lyrics and music are "as potent as Woody Guthrie, as intense as John Trudell and dusted with the trials and tribulations of Tom Joad – Steinbeck and The Grapes of Wrath."
Aggregate changes in agriculture and population on the Plains
- 1936 North American heat wave
- Ogallala Aquifer
- U.S. Route 66 – notable Dust Bowl migration route to California
- List of environmental disasters
- McLeman, R. A.; Dupre, J.; Berrang Ford, L.; Ford, J.; Gajewski, K.; Marchildon, G. (2014). "What we learned from the Dust Bowl: lessons in science, policy, and adaptation". Population and Environment. 35 (4): 417–440. doi:10.1007/s11111-013-0190-z. PMC 4015056. PMID 24829518.
- Ben Cook; Ron Miller; Richard Seager. "Did dust storms make the Dust Bowl drought worse?". Columbia University. Retrieved November 9, 2018.
- "Drought: A Paleo Perspective – 20th Century Drought". National Climatic Data Center. Retrieved April 5, 2009.
- "The American Experience: Drought". PBS. Retrieved March 15, 2015.
- "The Black Sunday Dust Storm of 14 April 1935". National Weather Service: Norman, Oklahoma. August 24, 2010. Retrieved November 23, 2012.
- Mencken, H. L. (1979). Raven I. McDavid, Jr. (ed.). The American Language (One-Volume Abridged ed.). New York: Alfred A Knopf. p. 206. ISBN 978-0-394-40075-4.
- Hakim, Joy (1995). A History of Us: War, Peace and all that Jazz. New York: Oxford University Press. ISBN 978-0-19-509514-2.[page needed]
- Federal Reserve Bank of Minneapolis Community Development Project. "Consumer Price Index (estimate) 1800–". Federal Reserve Bank of Minneapolis. Retrieved January 2, 2019.
- Bust: America – The Story of Us. A&E Television Networks. 2010. OCLC 783245601.
- "A History of Drought in Colorado: lessons learned and what lies ahead" (PDF). Colorado Water Resources Research Institute. February 2000. Retrieved December 6, 2007.
- "A Report of the Great Plains Area Drought Committee". Hopkins Papers, Franklin D. Roosevelt Library. August 27, 1936. Retrieved December 6, 2007.
- "The Great Plains: from dust to dust". Planning Magazine. December 1987. Archived from the original on October 6, 2007. Retrieved December 6, 2007.
- Regions at Risk: a comparison of threatened environments. United Nations University Press. 1995. Retrieved December 6, 2007.
- Drought in the Dust Bowl Years. US: National Drought Mitigation Center. 2006. Retrieved December 6, 2007.
- "Northern Rockies and Plains Average Temperature – October to March". National Climatic Data Center. Retrieved September 17, 2014.
- "Northern Rockies and Plains Precipitation, 1895–2013". National Climatic Data Center. Retrieved September 17, 2014.
- "Kansas Precipitation 1895 to 2013". National Climatic Data Center. Retrieved September 17, 2014.
- "Texas Climate Division 1 (High Plains): Precipitation 1895–2013". National Climatic Data Center. Retrieved September 17, 2014.
- "The Weather of 1941 in the United States" (PDF). National Oceanic and Atmospheric Administration. Retrieved September 17, 2014.
- Cronin, Francis D; Beers, Howard W (January 1937). Areas of Intense Drought Distress, 1930–1936 (PDF). Research Bulletin. Research Bulletin (United States. Works Progress Administration. Division of Social Research). U.S. Works Progress Administration / Federal Reserve Archival System for Economic Research (FRASER). pp. 1–23. Retrieved October 15, 2014.
- Murphy, Philip G. (July 15, 1935). "The Drought of 1934" (PDF). A Report of The Federal Government's Assistance to Agriculture. U.S. Drought Coordinating Committee / Federal Reserve Archival System for Economic Research (FRASER). Retrieved October 15, 2014.
- "Surviving the Dust Bowl". 1998. Retrieved September 19, 2011.
- Stock, Catherine McNicol (1992). Main Street in Crisis: The Great Depression and the Old Middle Class on the Northern Plains, p. 24. University of North Carolina Press. ISBN 0-8078-4689-9.
- Miller, Bill (March 21, 1935). "Nearly week seige of dust storm in county". Spearman Reporter. Spearman, Texas. hdl:10605/99636.
- Hornbeck, Richard (2012). "The Enduring Impact of the American Dust Bowl: Short and Long-run Adjustments to Environmental Catastrophe". American Economic Review. 102 (4): 1477–1507. doi:10.1257/aer.102.4.1477.
- McLeman, Robert A; Dupre, Juliette; Berrang Ford, Lea; Ford, James; Gajewski, Konrad; Marchildon, Gregory (June 2014). "What we learned from the Dust Bowl: lessons in science, policy, and adaptation". Population and Environment. 35 (4): 417–440. doi:10.1007/s11111-013-0190-z. PMC 4015056. PMID 24829518.
- A Cultural History of the United States – The 1930s. San Diego, California: Lucent Books, Inc., 1999, p. 39.
- Schama, Simon; Hobkinson, Sam (2008). American Plenty. BBC. OCLC 884893188.
- "First Measured Century: Interview:James Gregory". PBS. Retrieved March 11, 2007.
- Babb, Sanora, Dorothy Babb, and Douglas Wixson. On the Dirty Plate Trail. Edited by Douglas Wixson. Austin, Texas: University of Texas Press, 2007, p. 20.
- A Cultural History (1999), p. 19
- Stephen Fender (2011). Nature, Class, and New Deal Literature: The Country Poor in the Great Depression. Routledge. p. 143. ISBN 9781136632280.
- Worster, Donald (1979). Dust Bowl: The Southern Plains in the 1930s. Oxford University Press. p. 49.
- Worster, Donald. Dust Bowl – The Southern Plains in the 1930s, New York: Oxford University Press, 2004, p. 50
- Worster (2004), Dust Bowl, p. 45.
- Gregory, N. James. (1991) American Exodus: The Dust Bowl Migration and Okie Culture in California. Oxford University Press.
- Babb, et al. (2007), On the Dirty Plate Trail, p. 13
- Steiner, Frederick (2008). The Living Landscape, Second Edition: An Ecological Approach to Landscape Planning, p. 188. Island Press. ISBN 1-59726-396-6.
- Rau, Allan. Agricultural Policy and Trade Liberalization in the United States, 1934–1956; a Study of Conflicting Policies. Genève: E. Droz, 1957. p. 81.
- "The American Experience / Surviving the Dust Bowl / Timeline".
- Monthly Catalog, United States Public Documents, By United States Superintendent of Documents, United States Government Printing Office, Published by the G.P.O., 1938
- Federal Writers' Project. Texas. Writers' Program (Tex.): Writers' Program Texas. p. 16.
- Buchanan, James Shannon. Chronicles of Oklahoma. Oklahoma Historical Society. p. 224.
- PBS Timeline of Dust Bowl
- A Cultural History (1999), p.45.
- United States. Agricultural Adjustment Administration and Murphy, Philip G. (1935), "Drought of 1934: The Federal Government's Assistance to Agriculture". Accessed October 15, 2014.
- Hornbeck, Richard (June 2012). "The Enduring Impact of the American Dust Bowl: Short- and Long-Run Adjustments to Environmental Catastrophe" (PDF). American Economic Review. 102 (4): 1477–1507. doi:10.1257/aer.102.4.1477. Retrieved March 31, 2016 – via Harvard University, Department of Economics.
- Landon-Lane, John; Rockoff, Hugh; Steckel, Richard (December 2009). "Droughts, Floods, and Financial Distress in the United States". NBER Working Paper No. 15596: 6. doi:10.3386/w15596.
- Patrick Allitt, A Climate of Crisis: America in the Age of Environmentalism (2014) p 203
- Allitt p 211, paraphrasing William Cronin's evaluation of Mathew Paul Bonnifield, Dust Bowl: Men, Dirt and Depression (1979)
- "Destitute Pea Pickers in California: Mother of Seven Children, Age Thirty-two, Nipomo, California. Migrant Mother". World Digital Library. February 1936. Retrieved February 10, 2013.
- DuBois, Ellen Carol; Dumenil, Lynn (2012). Through Women's Eyes (Third ed.). Bedford/St. Martin's. p. 583. ISBN 978-0-312-67603-2.
- "Whose Names Are Unknown: Sanora Babb". Harry Ransom Center. Retrieved December 22, 2015.
- Dayton Duncan, preface by Ken Burns (2012). "Biographies: Sanora Babb". The Dust Bowl: An Illustrated History. PBS. Retrieved February 13, 2016.
- Lanzendorfer, Joy, "The forgotten Dust Bowl novel that rivaled "The Grapes of Wrath"," Smithsonian.com, 2016 May 23.
- "Sanora Babb," The Dust Bowl: a film by Ken Burns, PBS.org (2012)
- For the role of Tom Collins of the Farm Security Administration in Steinbeck's novel, see: John Steinbeck with Robert Demott, ed., Working Days: The Journals of The Grapes of Wrath, 1938–1941 (New York, New York: Penguin Books, 1990), pp. xxvii–xxviii, 33 (journal entry for 1938 June 24).
- Alarik, Scott. Robert Burns unplugged. The Boston Globe, August 7, 2005. Retrieved on December 5, 2007.
- Rosenberg, Alyssa (November 6, 2014). "How Ken Burns' surprise role in 'Interstellar' explains the movie". The Washington Post. Retrieved November 8, 2014.
- Smith, Hubble (June 1, 2017). "Kingman gets a mention on Dust Bowl album". Kingman Daily Miner. Retrieved June 11, 2017.
- Apice, John (May 22, 2017). "Expressive Original Songs Steeped In the Dirt & Reality of the Dust Bowl-Depression Era". No Depression. Retrieved June 11, 2017.
- Bonnifield, Mathew Paul. (1979) Dust Bowl: Men, Dirt and Depression
- Gregory, James Noble. American exodus: The dust bowl migration and Okie culture in California (Oxford University Press, 1989)
- Lassieur, Allison. (2009) The Dust Bowl: An Interactive History Adventure Capstone Press, ISBN 1-4296-3455-3
- Reis, Ronald A. (2008) The Dust Bowl Chelsea House ISBN 978-0-7910-9737-3
- Sylvester, Kenneth M., and Eric S. A. Rupley, "Revising the Dust Bowl: High above the Kansas Grassland", Environmental History, 17 (July 2012), 603–33.
- Worster, Donald (2004) [1979]. Dust Bowl: The Southern Plains in the 1930s (25th anniversary ed.). Oxford University Press. ISBN 0-19-517489-5
- Woody Guthrie, (1963) The (Nearly) Complete Collection of Woody Guthrie Folk Songs, Ludlow Music, New York.
- Alan Lomax, Woody Guthrie, Pete Seeger, (1967) Hard-Hitting Songs for Hard-Hit People, Oak Publications, New York.
- Timothy Egan (2006) The Worst Hard Time, Houghton Mifflin Company, New York, hardcover. ISBN 0-618-34697-X.
- Katelan Janke, (1935) Survival in the Storm: The Dust Bowl Diary of Grace Edwards, Dalhart, Texas, Scholastic (September 2002). ISBN 0-439-21599-4.
- Karen Hesse (paperback January 1999) Out of the Dust, Scholastic Signature. New York First Edition, 1997, hardcover. ISBN 0-590-37125-8.
- Sanora Babb (2004) Whose Names Are Unknown, University of Oklahoma Press, ISBN 978-0-8061-3579-3.
- Sweeney, Kevin Z. (2016). Prelude to the Dust Bowl: Drought in the Nineteenth-Century Southern Plains Norman, OK: University of Oklahoma Press.
- The Dust Bowl photo collection
- "The Dust Bowl", a PBS television series by filmmaker Ken Burns
- The Dust Bowl (EH.Net Encyclopedia)
- Black Sunday, April 14, 1935, Dodge City, KS
- The Bibliography of Aeolian Research
- Voices from the Dust Bowl: The Charles L. Todd and Robert Sonkin Migrant Worker Collection, 1940–1941 Library of Congress, American Folklife Center Online collection of archival sound recordings, photographs, and manuscripts
- Farming in the 1930s (Wessels Living History Farm)
- Encyclopedia of Oklahoma History and Culture – Dust Bowl
- Dust, Drought, and Dreams Gone Dry: Oklahoma Women in the Dust Bowl Oral History Project, Oklahoma Oral History Research Program
- Voices of Oklahoma interview with Frosty Troy. First person interview conducted on November 30, 2011 with Frosty Troy talking about the Oklahoma Dust Bowl. Original audio and transcript archived with Voices of Oklahoma oral history project.
Multiplication (often denoted by the cross symbol "×") is the mathematical operation of scaling one number by another. It is one of the four basic operations in elementary arithmetic (the others being addition, subtraction and division).
Because the result of scaling by whole numbers can be thought of as consisting of some number of copies of the original, whole-number products greater than 1 can be computed by repeated addition; for example, 3 multiplied by 4 (often said as "3 times 4") can be calculated by adding 4 copies of 3 together:
- 3 × 4 = 3 + 3 + 3 + 3 = 12
Here 3 and 4 are the "factors" and 12 is the "product".
Educators differ as to which number should normally be considered as the number of copies, and whether multiplication should even be introduced as repeated addition. For example, 3 multiplied by 4 can also be calculated by adding 3 copies of 4 together:
- 3 × 4 = 4 + 4 + 4 = 12
Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have given lengths (for numbers generally). The area of a rectangle does not depend on which side is measured first, which illustrates that the order in which numbers are multiplied does not matter.
In general, multiplying two measurements gives a result of a new type that depends on the types of the measurements. For instance, multiplying two lengths gives an area.
The inverse operation of multiplication is division. For example, 4 multiplied by 3 equals 12. Then 12 divided by 3 equals 4. Multiplication by 3, followed by division by 3, yields the original number.
Multiplication is also defined for other types of numbers (such as complex numbers), and for more abstract constructs such as matrices. For these more abstract constructs, the order that the operands are multiplied in sometimes does matter.
Notation and terminology
- addend + addend = sum
- minuend − subtrahend = difference
- multiplicand × multiplier = product
- dividend ÷ divisor = quotient
- degree √ radicand = root (the nth root)
- 2 × 3 = 6 (verbally, "two times three equals six")
There are several other common notations for multiplication. Many of these are intended to reduce confusion between the multiplication sign × and the commonly used variable x:
- The middle dot is standard in the United States, the United Kingdom, and other countries where the period is used as a decimal point. In other countries that use a comma as a decimal point, either the period or a middle dot is used for multiplication. Internationally, the middle dot is commonly associated with a more advanced or scientific use.
- The asterisk (as in 5*2) is often used in programming languages because it appears on every keyboard. This usage originated in the FORTRAN programming language.
- In algebra, multiplication involving variables is often written as a juxtaposition (e.g., xy for x times y or 5x for five times x). This notation can also be used for quantities that are surrounded by parentheses (e.g., 5(2) or (5)(2) for five times two).
- In matrix multiplication, there is actually a distinction between the cross and the dot symbols. The cross symbol generally denotes a vector multiplication, while the dot denotes a scalar multiplication. A similar convention distinguishes between the cross product and the dot product of two vectors.
The numbers to be multiplied are generally called the "factors" or "multiplicands". When thinking of multiplication as repeated addition, the number to be multiplied is called the "multiplicand", while the number of multiples is called the "multiplier". In algebra, a number that is the multiplier of a variable or expression (e.g., the 3 in 3xy2) is called a coefficient.
The result of a multiplication is called a product, and is a multiple of each factor if the other factor is an integer. For example, 15 is the product of 3 and 5, and is both a multiple of 3 and a multiple of 5.
The common methods for multiplying numbers using pencil and paper require a multiplication table of memorized or consulted products of small numbers (typically any two numbers from 0 to 9); however, one method, the peasant multiplication algorithm, does not.
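To make the contrast concrete, here is a minimal Python sketch of the peasant (halve-and-double) method; the function name and structure are illustrative assumptions rather than anything specified in the text.

```python
def peasant_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers by repeated halving and doubling.

    Whenever the halved factor is odd, the current doubled factor is added
    to the running total; this is binary multiplication in disguise.
    """
    total = 0
    while a > 0:
        if a % 2 == 1:      # odd: keep this partial product
            total += b
        a //= 2             # halve one factor (discarding the remainder)
        b *= 2              # double the other
    return total

assert peasant_multiply(13, 21) == 273
```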
Multiplying numbers to more than a couple of decimal places by hand is tedious and error prone. Common logarithms were invented to simplify such calculations. The slide rule allowed numbers to be quickly multiplied to about three places of accuracy. Beginning in the early twentieth century, mechanical calculators, such as the Marchant, automated multiplication of up to 10 digit numbers. Modern electronic computers and calculators have greatly reduced the need for multiplication by hand.
Historical algorithms
The Egyptian method of multiplication of integers and fractions, documented in the Ahmes Papyrus, was by successive additions and doubling. For instance, to find the product of 13 and 21 one had to double 21 three times, obtaining 1 × 21 = 21, 2 × 21 = 42, 4 × 21 = 84, 8 × 21 = 168. The full product could then be found by adding the appropriate terms found in the doubling sequence:
- 13 × 21 = (1 + 4 + 8) × 21 = (1 × 21) + (4 × 21) + (8 × 21) = 21 + 84 + 168 = 273.
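The doubling table above translates directly into code. The following Python sketch (the function name and structure are illustrative assumptions) rebuilds the table of doublings of 21 and adds the rows picked out by the decomposition 13 = 1 + 4 + 8.

```python
def egyptian_multiply(multiplier: int, multiplicand: int) -> int:
    """Multiply by the Egyptian method: build a table of doublings of the
    multiplicand and add the rows whose power of two appears in the multiplier."""
    power, doubled = 1, multiplicand
    table = []                               # rows of (power of two, doubling)
    while power <= multiplier:
        table.append((power, doubled))
        power, doubled = power * 2, doubled * 2

    total, remaining = 0, multiplier
    for power, doubled in reversed(table):   # pick the rows that sum to the multiplier
        if power <= remaining:
            total += doubled
            remaining -= power
    return total

# 13 × 21 = (1 + 4 + 8) × 21 = 21 + 84 + 168 = 273
assert egyptian_multiply(13, 21) == 273
```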
The Babylonians used a sexagesimal positional number system, analogous to the modern day decimal system. Thus, Babylonian multiplication was very similar to modern decimal multiplication. Because of the relative difficulty of remembering 60 × 60 different products, Babylonian mathematicians employed multiplication tables. These tables consisted of a list of the first twenty multiples of a certain principal number n: n, 2n, ..., 20n; followed by the multiples of 10n: 30n, 40n, and 50n. Then to compute any sexagesimal product, say 53n, one only needed to add 50n and 3n computed from the table.
In the mathematical text Zhou Bi Suan Jing, dated prior to 300 BC, and the Nine Chapters on the Mathematical Art, multiplication calculations were written out in words, although the early Chinese mathematicians employed Rod calculus involving place value addition, subtraction, multiplication and division. These place value decimal arithmetic algorithms were introduced by Al Khwarizmi to Arab countries in the early 9th century.
Modern method
The modern method of multiplication based on the Hindu–Arabic numeral system was first described by Brahmagupta. Brahmagupta gave rules for addition, subtraction, multiplication and division. Henry Burchard Fine, then professor of Mathematics at Princeton University, wrote the following:
- The Indians are the inventors not only of the positional decimal system itself, but of most of the processes involved in elementary reckoning with the system. Addition and subtraction they performed quite as they are performed nowadays; multiplication they effected in many ways, ours among them, but division they did cumbrously.
Computer algorithms
The standard method of multiplying two n-digit numbers requires n² simple multiplications. Multiplication algorithms have been designed that reduce the computation time considerably when multiplying large numbers. In particular, for very large numbers, methods based on the Discrete Fourier Transform can reduce the number of simple multiplications to the order of n log₂(n).
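The passage refers to DFT-based methods; as a simpler, hedged illustration of how a divide-and-conquer scheme already improves on the n² schoolbook count, here is a sketch of Karatsuba multiplication (a different sub-quadratic algorithm, shown purely as an example; the function name is an assumption).

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with three recursive half-size
    multiplications instead of four, giving roughly n**1.585 digit operations."""
    if x < 10 or y < 10:                      # small enough: use the built-in product
        return x * y
    m = max(len(str(x)), len(str(y))) // 2    # split point in decimal digits
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

assert karatsuba(1234, 5678) == 1234 * 5678
```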
Products of measurements
When two measurements are multiplied together the product is of a type depending on the types of the measurements. The general theory is given by dimensional analysis. This analysis is routinely applied in physics but has also found applications in finance. One can only meaningfully add or subtract quantities of the same type but can multiply or divide quantities of different types.
A common example is that multiplying speed by time gives distance:
- 50 kilometers per hour × 3 hours = 150 kilometers.
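A minimal sketch of the bookkeeping behind such products of measurements, under the assumption that a quantity is represented as a numeric value plus a dictionary of unit exponents (all names here are illustrative): multiplying quantities multiplies the values and adds the exponents, so the hours cancel in the example above.

```python
def multiply_quantities(value_a, units_a, value_b, units_b):
    """Multiply two measurements: numerical values multiply, unit exponents add."""
    units = dict(units_a)
    for unit, exponent in units_b.items():
        units[unit] = units.get(unit, 0) + exponent
        if units[unit] == 0:          # fully cancelled unit, e.g. hours below
            del units[unit]
    return value_a * value_b, units

# 50 km/h × 3 h = 150 km: the hour exponents (-1 and +1) cancel.
value, units = multiply_quantities(50, {"km": 1, "h": -1}, 3, {"h": 1})
assert (value, units) == (150, {"km": 1})
```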
Products of sequences
Capital Pi notation
The product of a sequence of terms can be written with the product symbol, which derives from the capital letter Π (Pi) in the Greek alphabet. Unicode position U+220F (∏) contains a glyph for denoting such a product, distinct from U+03A0 (Π), the letter. The meaning of this notation is given by:
- ∏ (i = m to n) xi = xm × x(m+1) × ... × x(n−1) × xn
The subscript gives the symbol for a dummy variable (i in this case), called the "index of multiplication", together with its lower bound (m), whereas the superscript (here n) gives its upper bound. The lower and upper bound are expressions denoting integers. The factors of the product are obtained by taking the expression following the product operator, with successive integer values substituted for the index of multiplication, starting from the lower bound and incremented by 1 up to and including the upper bound. So, for example:
- ∏ (i = 1 to 6) i = 1 × 2 × 3 × 4 × 5 × 6 = 720
In case m = n, the value of the product is the same as that of the single factor xm. If m > n, the product is the empty product, with the value 1.
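A short Python sketch of the convention just described, assuming an illustrative helper named product: the index runs from the lower to the upper bound inclusive, and an upper bound below the lower bound yields the empty product 1.

```python
def product(terms, lower, upper):
    """Return the product of terms(i) for i = lower, ..., upper (inclusive).

    If upper < lower the loop body never runs, so the empty product 1 is returned.
    """
    result = 1
    for i in range(lower, upper + 1):
        result *= terms(i)
    return result

assert product(lambda i: i, 1, 6) == 720      # 1 × 2 × 3 × 4 × 5 × 6
assert product(lambda i: i, 5, 4) == 1        # empty product
```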
Infinite products
One may also consider products of infinitely many terms; these are called infinite products. Notationally, we would replace n above by the lemniscate ∞. The product of such a series is defined as the limit of the product of the first n terms, as n grows without bound. That is, by definition,
- ∏ (i = m to ∞) xi = lim (n → ∞) ∏ (i = m to n) xi
One can similarly replace m with negative infinity, and define:
- ∏ (i = −∞ to ∞) xi = [lim (m → −∞) ∏ (i = m to 0) xi] × [lim (n → ∞) ∏ (i = 1 to n) xi]
provided both limits exist.
For the natural numbers, integers, fractions, and real and complex numbers, multiplication has certain properties:
- Commutative property
- The order in which two numbers are multiplied does not matter:
- x × y = y × x
- Associative property
- Expressions solely involving multiplication or addition are invariant with respect to order of operations:
- (x × y) × z = x × (y × z)
- Distributive property
- Holds with respect to multiplication over addition. This identity is of prime importance in simplifying algebraic expressions:
- x × (y + z) = (x × y) + (x × z)
- Identity element
- The multiplicative identity is 1; anything multiplied by one is itself. This is known as the identity property:
- x × 1 = x
- Zero element
- Any number multiplied by zero is zero. This is known as the zero property of multiplication:
- x × 0 = 0
- Zero is sometimes not included amongst the natural numbers.
There are a number of further properties of multiplication not satisfied by all types of numbers.
- Negative one times any number is equal to the opposite of that number: (−1) × x = −x.
- Negative one times negative one is positive one: (−1) × (−1) = 1.
- The natural numbers do not include negative numbers.
- Order preservation
- Multiplication by a positive number preserves order: if a > 0, then if b > c then ab > ac. Multiplication by a negative number reverses order: if a < 0 and b > c then ab < ac.
- The complex numbers do not have an order predicate.
Other mathematical systems that include a multiplication operation may not have all these properties. For example, multiplication is not, in general, commutative for matrices and quaternions.
In the book Arithmetices principia, nova methodo exposita, Giuseppe Peano proposed axioms for arithmetic based on his axioms for natural numbers. Peano arithmetic has two axioms for multiplication:
- x × 0 = 0
- x × S(y) = (x × y) + x
Here S(y) represents the successor of y, or the natural number that follows y. The various properties like associativity can be proved from these and the other axioms of Peano arithmetic, including induction. For instance S(0), denoted by 1, is a multiplicative identity because
- x × 1 = x × S(0) = (x × 0) + x = 0 + x = x.
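A hedged sketch of these two axioms as a recursive Python definition, with natural numbers modelled as non-negative ints and addition defined the same recursive way (all names are illustrative assumptions, not part of the original text):

```python
def successor(n: int) -> int:
    """S(n): the natural number that follows n."""
    return n + 1

def peano_add(x: int, y: int) -> int:
    """Addition from the Peano axioms: x + 0 = x and x + S(y) = S(x + y)."""
    return x if y == 0 else successor(peano_add(x, y - 1))

def peano_multiply(x: int, y: int) -> int:
    """Multiplication from the Peano axioms: x * 0 = 0 and x * S(y) = (x * y) + x."""
    return 0 if y == 0 else peano_add(peano_multiply(x, y - 1), x)

assert peano_multiply(3, 4) == 12
assert peano_multiply(7, 1) == 7      # S(0) = 1 acts as the multiplicative identity
```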
The axioms for integers typically define them as equivalence classes of ordered pairs of natural numbers. The model is based on treating (x,y) as equivalent to x − y when x and y are treated as integers. Thus both (0,1) and (1,2) are equivalent to −1. The multiplication axiom for integers defined this way is
- (x1, x2) × (y1, y2) = (x1 × y1 + x2 × y2, x1 × y2 + x2 × y1)
The rule that −1 × −1 = 1 can then be deduced from
- (0, 1) × (0, 1) = (0 × 0 + 1 × 1, 0 × 1 + 1 × 0) = (1, 0), which represents +1.
Multiplication with set theory
It is possible, though difficult, to create a recursive definition of multiplication with set theory. Such a system usually relies on the Peano definition of multiplication.
Cartesian product
In the product n · a, viewed as the combination of n copies of a, if the n copies of a are to be combined in disjoint union then clearly they must be made disjoint; an obvious way to do this is to use either a or n as the indexing set for the other. Then, the members of n · a are exactly those of the Cartesian product n × a. The properties of the multiplicative operation as applying to natural numbers then follow trivially from the corresponding properties of the Cartesian product.
Multiplication in group theory
There are many sets that, under the operation of multiplication, satisfy the axioms that define group structure. These axioms are closure, associativity, and the inclusion of an identity element and inverses.
A simple example is the set of non-zero rational numbers. Here we have identity 1, as opposed to groups under addition where the identity is typically 0. Note that with the rationals, we must exclude zero because, under multiplication, it does not have an inverse: there is no rational number that can be multiplied by zero to result in 1. In this example we have an abelian group, but that is not always the case.
To see this, look at the set of invertible square matrices of a given dimension, over a given field. Now it is straightforward to verify closure, associativity, and inclusion of identity (the identity matrix) and inverses. However, matrix multiplication is not commutative, therefore this group is nonabelian.
Another fact of note is that the integers under multiplication do not form a group, even if zero is excluded. This is easily seen by the nonexistence of an inverse for all elements other than 1 and −1.
Multiplication in group theory is typically notated either by a dot, or by juxtaposition (the omission of an operation symbol between elements). So multiplying element a by element b could be notated a ⋅ b or ab. When referring to a group via the indication of the set and operation, the dot is used; e.g., our first example could be indicated by (ℚ \ {0}, ⋅).
Multiplication of different kinds of numbers
Numbers can count (3 apples), order (the 3rd apple), or measure (3.5 feet high); as the history of mathematics has progressed from counting on our fingers to modelling quantum mechanics, multiplication has been generalized to more complicated and abstract types of numbers, and to things that are not numbers (such as matrices) or do not look much like numbers (such as quaternions).
- Integers
- N × M is the sum of M copies of N when N and M are positive whole numbers. This gives the number of things in an array N wide and M high. Generalization to negative numbers can be done by N × (−M) = (−N) × M = −(N × M) and (−N) × (−M) = N × M. The same sign rules apply to rational and real numbers.
- Rational numbers
- Generalization to fractions is by multiplying the numerators and denominators respectively: (z/n) × (z′/n′) = (z × z′)/(n × n′). This gives the area of a rectangle z/n high and z′/n′ wide, and is the same as the number of things in an array when the rational numbers happen to be whole numbers.
- Real numbers
- x × y is the limit of the products of the corresponding terms in certain sequences of rationals that converge to x and y, respectively, and is significant in calculus. This gives the area of a rectangle x high and y wide. See Products of sequences, above.
- Complex numbers
- Considering complex numbers z1 and z2 as ordered pairs of real numbers (a1, b1) and (a2, b2), the product z1 × z2 is (a1 × a2 − b1 × b2, a1 × b2 + b1 × a2); see the sketch after this list. This is the same as for reals, a1 × a2, when the imaginary parts b1 and b2 are zero.
- Further generalizations
- See Multiplication in group theory, above, and Multiplicative Group, which for example includes matrix multiplication. A very general, and abstract, concept of multiplication is as the "multiplicatively denoted" (second) binary operation in a ring. An example of a ring that is not any of the above number systems is a polynomial ring (you can add and multiply polynomials, but polynomials are not numbers in any usual sense.)
- Often division, x/y, is the same as multiplication by an inverse, x × (1/y). Multiplication for some types of "numbers" may have corresponding division, without inverses; in an integral domain x may have no inverse "1/x" but x/y may be defined. In a division ring there are inverses, but x/y may be ambiguous in non-commutative rings, since x × (1/y) need not be the same as (1/y) × x.
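As promised in the complex-number item above, here is a minimal Python sketch of multiplication of ordered pairs; the helper name is an assumption, and the last check shows the formula collapsing to the ordinary real product when the imaginary parts are zero.

```python
def complex_multiply(z1, z2):
    """Multiply complex numbers given as ordered pairs (real, imaginary)."""
    a1, b1 = z1
    a2, b2 = z2
    return (a1 * a2 - b1 * b2, a1 * b2 + b1 * a2)

assert complex_multiply((1, 2), (3, 4)) == (-5, 10)    # (1 + 2i)(3 + 4i) = -5 + 10i
assert complex_multiply((2, 0), (5, 0)) == (10, 0)     # reduces to the real product 2 × 5
```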
When multiplication is repeated, the resulting operation is known as exponentiation. For instance, the product of three factors of two (2 × 2 × 2) is "two raised to the third power", and is denoted by 2³, a two with a superscript three. In this example, the number two is the base, and three is the exponent. In general, the exponent (or superscript) indicates how many times to multiply the base by itself, so that the expression
- aⁿ = a × a × ... × a (n factors)
indicates that the base a is to be multiplied by itself n times.
See also
- Makoto Yoshida (2009). "Is Multiplication Just Repeated Addition?".
- Henry B. Fine. The Number System of Algebra – Treated Theoretically and Historically, (2nd edition, with corrections, 1907), page 90, http://www.archive.org/download/numbersystemofal00fineuoft/numbersystemofal00fineuoft.pdf
- PlanetMath: Peano arithmetic
- Boyer, Carl B. (revised by Merzbach, Uta C.) (1991). History of Mathematics. John Wiley and Sons, Inc. ISBN 0-471-54397-7.
- Multiplication and Arithmetic Operations In Various Number Systems at cut-the-knot
- Modern Chinese Multiplication Techniques on an Abacus
A superlens, or super lens, is a lens which uses metamaterials to go beyond the diffraction limit. The diffraction limit is a feature of conventional lenses and microscopes that limits the fineness of their resolution. Many lens designs have been proposed that go beyond the diffraction limit in some way, but constraints and obstacles face each of them.
- 1 History
- 2 Theory
- 3 Development and construction
- 3.1 Perfect lenses
- 3.2 Near-field imaging with magnetic wires
- 3.3 Optical super lens with silver metamaterial
- 3.4 50-nm flat silver layer
- 3.5 Negative index GRIN lenses
- 3.6 Far-field superlens
- 3.7 Hyperlens
- 3.8 Plasmon-assisted microscopy
- 3.9 Super-imaging in the visible frequency range
- 3.10 Super resolution far-field microscopy techniques
- 3.11 Cylindrical superlens via coordinate transformation
- 3.12 Nano-optics with metamaterials
- 3.13 Nanoparticle imaging – quantum dots
- 4 See also
- 5 References
- 6 External links
In 1873 Ernst Abbe reported that conventional lenses are incapable of capturing some fine details of any given image; the super lens is intended to capture such details. The limitation of conventional lenses has inhibited progress in the biological sciences; this is because a virus or DNA molecule cannot be resolved with the highest-powered conventional microscopes. This limitation extends to the minute processes of cellular proteins moving alongside microtubules of a living cell in their natural environments. Additionally, computer chips and the interrelated microelectronics are manufactured to smaller and smaller scales; this requires specialized optical equipment, which is also limited because it relies on conventional lenses. Hence, the principles governing a super lens show that it has potential for imaging a DNA molecule and cellular protein processes, or aiding in the manufacture of even smaller computer chips and microelectronics.
Furthermore, conventional lenses capture only the propagating light waves; these are waves that travel from a light source or an object to a lens, or the human eye. This can alternatively be studied as the far field. In contrast, a superlens captures propagating light waves and waves that stay on top of the surface of an object, which, alternatively, can be studied as both the far field and the near field.
An image of an object can be defined as a tangible or visible representation of the features of that object. A requirement for image formation is interaction with fields of electromagnetic radiation. Furthermore, the level of feature detail, or image resolution, is limited by the wavelength of the radiation. For example, with optical microscopy, image production and resolution depend on the wavelength of visible light. However, with a superlens, this limitation may be removed, and a new class of image generated.
Electron beam lithography can overcome this resolution limit. Optical microscopy, on the other hand, cannot, being limited to some value just above 200 nanometers. However, new technologies combined with optical microscopy are beginning to allow increased feature resolution (see sections below).
One definition of being constrained by the resolution barrier is a resolution cutoff at half the wavelength of light; the visible spectrum has a range that extends from 390 nanometers to 750 nanometers. Green light, halfway in between, is around 500 nanometers. Microscopy takes into account parameters such as lens aperture, distance from the object to the lens, and the refractive index of the observed material; this combination defines the resolution cutoff, or optical limit of microscopy, which works out to about 200 nanometers. Therefore, conventional lenses, which literally construct an image of an object by using "ordinary" light waves, discard the information carried in evanescent waves, which encodes the very fine and minuscule details of the object; these dimensions are less than 200 nanometers. For this reason, conventional optical systems, such as microscopes, have been unable to accurately image very small, nanometer-sized structures or nanometer-sized organisms in vivo, such as individual viruses or DNA molecules.
The limitations of standard optical microscopy (bright-field microscopy) lie in three areas:
- The technique can only image dark or strongly refracting objects effectively.
- Diffraction limits the object, or cell's, resolution to approximately 200 nanometers.
- Out-of-focus light from points outside the focal plane reduces image clarity.
Live biological cells in particular generally lack sufficient contrast to be studied successfully, because the internal structures of the cell are mostly colorless and transparent; the most common way to increase contrast is to stain the different structures with selective dyes, but often this involves killing and fixing the sample. Staining may also introduce artifacts, apparent structural details that are caused by the processing of the specimen and are thus not a legitimate feature of the specimen.
The conventional glass lens is pervasive throughout our society and in the sciences; it is one of the fundamental tools of optics simply because it interacts with various wavelengths of light. At the same time, the wavelength of light can be thought of as the width of the pencil used to draw the ordinary images; the limit becomes noticeable, for example, when the laser used in a digital video system can only detect and deliver details from a DVD based on the wavelength of light. The image cannot be rendered any sharper beyond this limitation.
Thus, when an object emits or reflects light, there are two types of electromagnetic radiation associated with this phenomenon: near-field radiation and far-field radiation. As implied by its description, the far field escapes beyond the object and is then easily captured and manipulated by a conventional glass lens. However, useful (nanometer-sized) resolution details are not observed, because they are hidden in the near field; they remain localized, staying much closer to the light-emitting object, unable to travel and unable to be captured by the conventional lens. Controlling the near-field radiation, for high resolution, can be accomplished with a new class of materials not easily obtained in nature; these are unlike familiar solids, such as crystals, which derive their properties from atomic and molecular units. The new material class, termed metamaterials, obtains its properties from its artificially larger structure. This has resulted in novel properties and novel responses, which allow for details of images that surpass the limitations imposed by the wavelength of light.
This has led to the desire to view live biological cell interactions in a real time, natural environment, and the need for subwavelength imaging. Subwavelength imaging can be defined as optical microscopy with the ability to see details of an object or organism below the wavelength of visible light (see discussion in the above sections). In other words, to have the capability to observe, in real time, below 200 nanometers. Optical microscopy is a non-invasive technique and technology because everyday light is the transmission medium. Imaging below the optical limit in optical microscopy (subwavelength) can be engineered for the cellular level, and nanometer level in principle.
For example, in 2007 a technique was demonstrated where a metamaterials-based lens coupled with a conventional optical lens could manipulate visible light to see (nanoscale) patterns that were too small to be observed with an ordinary optical microscope; this has potential applications not only for observing a whole living cell but also for observing cellular processes, such as how proteins and fats move in and out of cells. In the technology domain, it could be used to improve the first steps of photolithography and nanolithography, essential for manufacturing ever smaller computer chips.
Focusing at subwavelength has become a unique imaging technique which allows visualization of features on the viewed object which are smaller than the wavelength of the photons in use. A photon is the minimum unit of light. While previously thought to be physically impossible, subwavelength imaging has been made possible through the development of metamaterials; this is generally accomplished using a layer of metal such as gold or silver a few atoms thick, which acts as a superlens, or by means of 1D and 2D photonic crystals. There is a subtle interplay between propagating waves, evanescent waves, near field imaging and far field imaging discussed in the sections below.
Early subwavelength imaging
Metamaterial lenses (superlenses) are able to reconstruct nanometer-sized images by producing a negative refractive index in each instance; this compensates for the swiftly decaying evanescent waves. Prior to metamaterials, numerous other techniques had been proposed, and even demonstrated, for creating super-resolution microscopy; as far back as 1928, Irish physicist Edward Hutchinson Synge is given credit for conceiving and developing the idea for what would ultimately become near-field scanning optical microscopy.
In 1974 proposals for two-dimensional fabrication techniques were presented; these proposals included contact imaging to create a pattern in relief, photolithography, electron lithography, X-ray lithography, or ion bombardment, on an appropriate planar substrate. The shared technological goals of the metamaterial lens and the variety of lithography aim to optically resolve features having dimensions much smaller than that of the vacuum wavelength of the exposing light. In 1981 two different techniques of contact imaging of planar (flat) submicroscopic metal patterns with blue light (400 nm) were demonstrated. One demonstration resulted in an image resolution of 100 nm and the other a resolution of 50 to 70 nm.
Since at least 1998 near field optical lithography was designed to create nanometer-scale features. Research on this technology continued as the first experimentally demonstrated negative index metamaterial came into existence in 2000–2001; the effectiveness of electron-beam lithography was also being researched at the beginning of the new millennium for nanometer-scale applications. Imprint lithography was shown to have desirable advantages for nanometer-scaled research and technology.
Advanced deep UV photolithography can now offer sub-100 nm resolution, yet the minimum feature size and spacing between patterns are determined by the diffraction limit of light. Derivative technologies such as evanescent near-field lithography, near-field interference lithography, and phase-shifting mask lithography were developed to overcome the diffraction limit.
Analysis of the diffraction limit
The original problem of the perfect lens: The general expansion of an EM field emanating from a source consists of both propagating waves and near-field or evanescent waves. For example, a 2-D line source with an S-polarized electric field produces plane waves consisting of propagating and evanescent components, which advance parallel to the interface; as both the propagating and the smaller evanescent waves advance in a direction parallel to the medium interface, the evanescent waves decay in the direction of propagation. Ordinary (positive-index) optical elements can refocus the propagating components, but the exponentially decaying inhomogeneous components are always lost, leading to the diffraction limit for focusing to an image.
A superlens is a lens which is capable of subwavelength imaging, allowing for magnification of near field rays. Conventional lenses have a resolution on the order of one wavelength due to the so-called diffraction limit; this limit hinders imaging very small objects, such as individual atoms, which are much smaller than the wavelength of visible light. A superlens is able to beat the diffraction limit. An example is the initial lens described by Pendry, which uses a slab of material with a negative index of refraction as a flat lens. In theory, a perfect lens would be capable of perfect focus – meaning that it could perfectly reproduce the electromagnetic field of the source plane at the image plane.
The diffraction limit as restriction on resolution
The performance limitation of conventional lenses is due to the diffraction limit. Following Pendry (2000), the diffraction limit can be understood as follows. Consider an object and a lens placed along the z-axis so the rays from the object are traveling in the +z direction; the field emanating from the object can be written, in terms of its angular spectrum, as a superposition of plane waves:
- E(x, z, t) = Σ over kx of A(kx) exp(i(kx x + kz z − ωt)),
where kz is a function of kx:
- kz = +√(ω²/c² − kx²).
Only the positive square root is taken as the energy is going in the +z direction. All of the components of the angular spectrum of the image for which kz is real are transmitted and re-focused by an ordinary lens. However, if
- kx² > ω²/c²,
then kz becomes imaginary, and the wave is an evanescent wave, whose amplitude decays as the wave propagates along the z axis. This results in the loss of the high-angular-frequency components of the wave, which contain information about the high-frequency (small-scale) features of the object being imaged; the highest resolution that can be obtained can be expressed in terms of the wavelength:
- Δ ≈ 2πc/ω = λ.
A superlens overcomes the limit. A Pendry-type superlens has an index of n = −1 (ε = −1, µ = −1), and in such a material, transport of energy in the +z direction requires the z component of the wave vector to have the opposite sign:
- k′z = −√(ω²/c² − kx²).
For large angular frequencies, the evanescent wave now grows, so with proper lens thickness, all components of the angular spectrum can be transmitted through the lens undistorted. There are no problems with conservation of energy, as evanescent waves carry none in the direction of growth: the Poynting vector is oriented perpendicularly to the direction of growth. For traveling waves inside a perfect lens, the Poynting vector points in the direction opposite to the phase velocity.
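To put numbers to the decay just described, here is a hedged Python sketch (the variable names and the chosen wavelength and feature size are illustrative assumptions, and the calculation is for propagation in vacuum): components with kx greater than ω/c have imaginary kz and are attenuated by a factor exp(−|kz| z).

```python
import math

def kz_squared(k_x: float, wavelength: float) -> float:
    """Return kz**2 = (2*pi/wavelength)**2 - k_x**2 for one plane-wave component."""
    k0 = 2 * math.pi / wavelength            # k0 = omega / c in vacuum
    return k0 ** 2 - k_x ** 2

def amplitude_after(k_x: float, wavelength: float, z: float) -> float:
    """Amplitude factor after traveling a distance z along the z axis.

    Propagating components (kz real) keep unit amplitude; evanescent ones decay.
    """
    kz2 = kz_squared(k_x, wavelength)
    if kz2 >= 0:
        return 1.0                            # propagating wave: no decay
    return math.exp(-math.sqrt(-kz2) * z)

# Illustrative numbers: a ~100 nm feature illuminated with 365 nm UV light has
# k_x = 2*pi/100 nm > k0, so its evanescent component is almost gone after one wavelength.
print(amplitude_after(k_x=2 * math.pi / 100e-9, wavelength=365e-9, z=365e-9))
```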
Effects of negative index of refraction
Normally, when a wave passes through the interface of two materials, the wave appears on the opposite side of the normal. However, if the interface is between a material with a positive index of refraction and another material with a negative index of refraction, the wave will appear on the same side of the normal. Pendry's idea of a perfect lens is a flat material where n=−1; such a lens allows near-field rays, which normally decay due to the diffraction limit, to focus once within the lens and once outside the lens, allowing subwavelength imaging.
Development and construction
Superlens construction was at one time thought to be impossible. In 2000, Pendry claimed that a simple slab of left-handed material would do the job; the experimental realization of such a lens took, however, some more time, because it is not that easy to fabricate metamaterials with both negative permittivity and permeability. Indeed, no such material exists naturally and construction of the required metamaterials is non-trivial. Furthermore, it was shown that the parameters of the material are extremely sensitive (the index must equal −1); small deviations make the subwavelength resolution unobservable. Due to the resonant nature of metamaterials, on which many (proposed) implementations of superlenses depend, metamaterials are highly dispersive; the sensitivity of the superlens to the material parameters causes superlenses based on metamaterials to have a limited usable frequency range. This initial theoretical superlens design consisted of a metamaterial that compensated for wave decay and reconstructed images in the near field. Both propagating and evanescent waves could contribute to the resolution of the image.
Pendry also suggested that a lens having only one negative parameter would form an approximate superlens, provided that the distances involved are also very small and provided that the source polarization is appropriate. For visible light this is a useful substitute, since engineering metamaterials with a negative permeability at the frequency of visible light is difficult. Metals are then a good alternative as they have negative permittivity (but not negative permeability). Pendry suggested using silver due to its relatively low loss at the predicted wavelength of operation (356 nm). In 2003 Pendry's theory was first experimentally demonstrated at RF/microwave frequencies. In 2005, two independent groups verified Pendry's lens in the UV range, both using thin layers of silver illuminated with UV light to produce "photographs" of objects smaller than the wavelength. Negative refraction of visible light was experimentally verified in an yttrium orthovanadate (YVO4) bicrystal in 2003.
In 2004, the first superlens with a negative refractive index provided resolution three times better than the diffraction limit and was demonstrated at microwave frequencies. In 2005, the first near field superlens was demonstrated by N. Fang et al., but the lens did not rely on negative refraction. Instead, a thin silver film was used to enhance the evanescent modes through surface plasmon coupling. Almost at the same time Melville and Blaikie succeeded with a near field superlens. Other groups followed. Two developments in superlens research were reported in 2008. In the second case, a metamaterial was formed from silver nanowires which were electrochemically deposited in porous aluminium oxide; the material exhibited negative refraction. The imaging performance of such isotropic negative dielectric constant slab lenses was also analyzed with respect to the slab material and thickness. Subwavelength imaging opportunities with planar uniaxial anisotropic lenses, where the dielectric tensor components are of the opposite sign, have also been studied as a function of the structure parameters.
The superlens has not yet been demonstrated at visible or near-infrared frequencies (Nielsen, R. B.; 2010). Furthermore, as dispersive materials, these are limited to functioning at a single wavelength. Proposed solutions are metal–dielectric composites (MDCs) and multilayer lens structures; the multi-layer superlens appears to have better subwavelength resolution than the single layer superlens. Losses are less of a concern with the multi-layer system, but so far it appears to be impractical because of impedance mismatch.
While the evolution of nanofabrication techniques continues to push the limits in fabrication of nanostructures, surface roughness remains an inevitable source of concern in the design of nano-photonic devices; the impact of this surface roughness on the effective dielectric constants and subwavelength image resolution of multilayer metal–insulator stack lenses has also been studied.
When the world is observed through conventional lenses, the sharpness of the image is determined by and limited to the wavelength of light. Around the year 2000, a slab of negative index metamaterial was theorized to create a lens with capabilities beyond conventional (positive index) lenses. Pendry proposed that a thin slab of negative refractive metamaterial might overcome known problems with common lenses to achieve a "perfect" lens that would focus the entire spectrum, both the propagating as well as the evanescent spectra.
A slab of silver was proposed as the metamaterial. More specifically, such silver thin film can be regarded as a metasurface; as light moves away (propagates) from the source, it acquires an arbitrary phase. Through a conventional lens the phase remains consistent, but the evanescent waves decay exponentially. In the flat metamaterial DNG slab, normally decaying evanescent waves are contrarily amplified. Furthermore, as the evanescent waves are now amplified, the phase is reversed.
Therefore, a type of lens was proposed, consisting of a metal film metamaterial; when illuminated near its plasma frequency, the lens could be used for superresolution imaging that compensates for wave decay and reconstructs images in the near-field. In addition, both propagating and evanescent waves contribute to the resolution of the image.
Pendry suggested that left-handed slabs allow "perfect imaging" if they are completely lossless, impedance matched, and their refractive index is −1 relative to the surrounding medium. Theoretically, this would be a breakthrough in that the optical version resolves objects as minuscule as nanometers across. Pendry predicted that Double negative metamaterials (DNG) with a refractive index of n=−1, can act, at least in principle, as a "perfect lens" allowing imaging resolution which is limited not by the wavelength, but rather by material quality.
Other studies concerning the perfect lens
Further research demonstrated that Pendry's theory behind the perfect lens was not exactly correct; the analysis of the focusing of the evanescent spectrum (equations 13–21 in reference ) was flawed. In addition, this applies to only one (theoretical) instance, and that is one particular medium that is lossless, nondispersive and the constituent parameters are defined as:
- ε(ω)/ε0 = µ(ω)/µ0 = −1, which in turn results in a negative refractive index of n = −1
However, the final intuitive result of this theory that both the propagating and evanescent waves are focused, resulting in a converging focal point within the slab and another convergence (focal point) beyond the slab turned out to be correct.
If the DNG metamaterial medium has a large negative index or becomes lossy or dispersive, Pendry's perfect lens effect cannot be realized; as a result, the perfect lens effect does not exist in general. According to FDTD simulations at the time (2001), the DNG slab acts like a converter from a pulsed cylindrical wave to a pulsed beam. Furthermore, in reality (in practice), a DNG medium must be and is dispersive and lossy, which can have either desirable or undesirable effects, depending on the research or application. Consequently, Pendry's perfect lens effect is inaccessible with any metamaterial designed to be a DNG medium.
Another analysis, in 2002, of the perfect lens concept showed it to be in error while using the lossless, dispersionless DNG as the subject; this analysis mathematically demonstrated that subtleties of evanescent waves, restriction to a finite slab and absorption had led to inconsistencies and divergencies that contradict the basic mathematical properties of scattered wave fields. For example, this analysis stated that absorption, which is linked to dispersion, is always present in practice, and absorption tends to transform amplified waves into decaying ones inside this medium (DNG).
A third analysis of Pendry's perfect lens concept, published in 2003, used the recent demonstration of negative refraction at microwave frequencies as confirming the viability of the fundamental concept of the perfect lens. In addition, this demonstration was thought to be experimental evidence that a planar DNG metamaterial would refocus the far field radiation of a point source. However, the perfect lens would require significantly different values for permittivity, permeability, and spatial periodicity than the demonstrated negative refractive sample.
This study agrees that any deviation from conditions where ε=µ=−1 results in the normal, conventional, imperfect image that degrades exponentially i.e., the diffraction limit. The perfect lens solution in the absence of losses is again, not practical, and can lead to paradoxical interpretations.
It was determined that although resonant surface plasmons are undesirable for imaging, these turn out to be essential for recovery of decaying evanescent waves; this analysis discovered that metamaterial periodicity has a significant effect on the recovery of types of evanescent components. In addition, achieving subwavelength resolution is possible with current technologies. Negative refractive indices have been demonstrated in structured metamaterials; such materials can be engineered to have tunable material parameters, and so achieve the optimal conditions. Losses can be minimized in structures utilizing superconducting elements. Furthermore, consideration of alternate structures may lead to configurations of left-handed materials that can achieve subwavelength focusing; such structures were being studied at the time.
An effective approach for the compensation of losses in metamaterials, called the plasmon injection scheme, has recently been proposed; the plasmon injection scheme has been applied theoretically to imperfect negative index flat lenses with reasonable material losses and in the presence of noise, as well as to hyperlenses. It has been shown that even imperfect negative index flat lenses assisted with the plasmon injection scheme can enable subdiffraction imaging of objects, which is otherwise not possible due to the losses and noise. Although the plasmon injection scheme was originally conceptualized for plasmonic metamaterials, the concept is general and applicable to all types of electromagnetic modes; the main idea of the scheme is the coherent superposition of the lossy modes in the metamaterial with an appropriately structured external auxiliary field. This auxiliary field accounts for the losses in the metamaterial, hence effectively reduces the losses experienced by the signal beam or object field in the case of a metamaterial lens; the plasmon injection scheme can be implemented either physically or equivalently through a deconvolution post-processing method. However, the physical implementation has been shown to be more effective than the deconvolution. Physical construction of convolution and selective amplification of the spatial frequencies within a narrow bandwidth are the keys to the physical implementation of the plasmon injection scheme; this loss compensation scheme is ideally suited for metamaterial lenses since it does not require a gain medium, nonlinearity, or any interaction with phonons. Experimental demonstration of the plasmon injection scheme has not yet been shown, partly because the theory is rather new.
Near-field imaging with magnetic wires
Pendry's theoretical lens was designed to focus both propagating waves and the near-field evanescent waves. From permittivity "ε" and magnetic permeability "µ" an index of refraction "n" is derived; the index of refraction determines how light is bent on traversing from one material to another. In 2003, it was suggested that a metamaterial constructed with alternating, parallel layers of n = −1 materials and n = +1 materials would be a more effective design for a metamaterial lens. It is an effective medium made up of a multi-layer stack, which exhibits birefringence, nz = ∞, nx = 0. The effective refractive indices are then perpendicular and parallel, respectively.
Like a conventional lens, the z-direction is along the axis of the roll; the resonant frequency (ω0) – close to 21.3 MHz – is determined by the construction of the roll. Damping is achieved by the inherent resistance of the layers and the lossy part of permittivity.
Simply put, as the field pattern is transferred from the input to the output face of a slab, so the image information is transported across each layer; this was experimentally demonstrated. To test the two-dimensional imaging performance of the material, an antenna was constructed from a pair of anti-parallel wires in the shape of the letter M; this generated a line of magnetic flux, so providing a characteristic field pattern for imaging. It was placed horizontally, and the material, consisting of 271 Swiss rolls tuned to 21.5 MHz, was positioned on top of it. The material does indeed act as an image transfer device for the magnetic field; the shape of the antenna is faithfully reproduced in the output plane, both in the distribution of the peak intensity, and in the “valleys” that bound the M.
A consistent characteristic of the very near (evanescent) field is that the electric and magnetic fields are largely decoupled; this allows for nearly independent manipulation of the electric field with the permittivity and the magnetic field with the permeability.
Furthermore, this is a highly anisotropic system. Therefore, the transverse (perpendicular) components of the EM field which radiate the material, that is the wavevector components kx and ky, are decoupled from the longitudinal component kz. So, the field pattern should be transferred from the input to the output face of a slab of material without degradation of the image information.
Optical super lens with silver metamaterial
In 2003, a group of researchers showed that optical evanescent waves would be enhanced as they passed through a silver metamaterial lens; this was referred to as a diffraction-free lens. Although a coherent, high-resolution, image was not intended, nor achieved, regeneration of the evanescent field was experimentally demonstrated.
By 2003, it had been known for decades that evanescent waves could be enhanced by producing excited states at the interface surfaces. However, the use of surface plasmons to reconstruct evanescent components was not tried until Pendry's proposal (see "Perfect lens" above). By studying films of varying thickness it was noted that a rapidly growing transmission coefficient occurs under the appropriate conditions; this demonstration provided direct evidence that the foundation of superlensing is solid, and suggested the path that would enable the observation of superlensing at optical wavelengths.
In 2005, a coherent, high-resolution image was produced (building on the 2003 results). A thinner slab of silver (35 nm) was better for sub-diffraction-limited imaging, yielding a resolution of about one-sixth of the illumination wavelength; this type of lens was used to compensate for wave decay and reconstruct images in the near field. Prior attempts to create a working superlens used a slab of silver that was too thick.
Objects were imaged as small as 40 nm across. In 2005 the imaging resolution limit for optical microscopes was about one tenth the diameter of a red blood cell; with the silver superlens this improved to a resolution of one hundredth of the diameter of a red blood cell.
Conventional lenses, whether man-made or natural, create images by capturing the propagating light waves all objects emit and then bending them; the angle of the bend is determined by the index of refraction, which had always been positive until the fabrication of artificial negative index materials. Objects also emit evanescent waves that carry details of the object but are unobtainable with conventional optics; such evanescent waves decay exponentially and thus never become part of the image resolution, an optics threshold known as the diffraction limit. Breaking this diffraction limit and capturing evanescent waves are critical to the creation of a 100-percent perfect representation of an object.
In addition, conventional optical materials suffer a diffraction limit because only the propagating components are transmitted (by the optical material) from a light source; the non-propagating components, the evanescent waves, are not transmitted. Moreover, lenses that improve image resolution by increasing the index of refraction are limited by the availability of high-index materials, and point by point subwavelength imaging of electron microscopy also has limitations when compared to the potential of a working superlens. Scanning electron and atomic force microscopes are now used to capture detail down to a few nanometers. However, such microscopes create images by scanning objects point by point, which means they are typically limited to non-living samples, and image capture times can take up to several minutes.
With current optical microscopes, scientists can only make out relatively large structures within a cell, such as its nucleus and mitochondria. With a superlens, optical microscopes could one day reveal the movements of individual proteins traveling along the microtubules that make up a cell's skeleton, the researchers said. Optical microscopes can capture an entire frame with a single snapshot in a fraction of a second. With superlenses this opens up nanoscale imaging to living materials, which can help biologists better understand cell structure and function in real time.
Advances in magnetic coupling in the THz and infrared regimes pointed toward the realization of a possible metamaterial superlens. However, in the near field, the electric and magnetic responses of materials are decoupled. Therefore, for transverse magnetic (TM) waves, only the permittivity needed to be considered. Noble metals then become natural selections for superlensing because negative permittivity is easily achieved.
By designing the thin metal slab so that the surface current oscillations (the surface plasmons) match the evanescent waves from the object, the superlens is able to substantially enhance the amplitude of the field. Superlensing results from the enhancement of evanescent waves by surface plasmons.
The key to the superlens is its ability to significantly enhance and recover the evanescent waves that carry information at very small scales; this enables imaging well below the diffraction limit. No lens is yet able to completely reconstitute all the evanescent waves emitted by an object, so the goal of a 100-percent perfect image will persist. However, many scientists believe that a true perfect lens is not possible because there will always be some energy absorption loss as the waves pass through any known material. In comparison, the superlens image is substantially better than the one created without the silver superlens.
50-nm flat silver layer
In February 2004, an electromagnetic radiation focusing system, based on a negative index metamaterial plate, accomplished subwavelength imaging in the microwave domain; this showed that obtaining separated images at much less than the wavelength of light is possible. Also, in 2004, a silver layer was used for sub-micrometre near-field imaging. Super high resolution was not achieved, but this was intended; the silver layer was too thick to allow significant enhancements of evanescent field components.
In early 2005, feature resolution was achieved with a different silver layer. Though this was not an actual image, it was the intended result. Dense feature resolution down to 250 nm was produced in a 50 nm thick photoresist using illumination from a mercury lamp. Using simulations (FDTD), the study noted that resolution improvements could be expected from imaging through silver lenses rather than from other methods of near-field imaging.
Building on this prior research, super resolution was achieved at optical frequencies using a 50 nm flat silver layer; the capability of resolving an image beyond the diffraction limit, for far-field imaging, is defined here as superresolution.
The image fidelity is much improved over earlier results of the previous experimental lens stack. Imaging of sub-micrometre features has been greatly improved by using thinner silver and spacer layers, and by reducing the surface roughness of the lens stack; the ability of the silver lenses to image the gratings has been used as the ultimate resolution test, as there is a concrete limit for the ability of a conventional (far field) lens to image a periodic object – in this case the image is a diffraction grating. For normal-incidence illumination the minimum spatial period that can be resolved with wavelength λ through a medium with refractive index n is λ/n. Zero contrast would therefore be expected in any (conventional) far-field image below this limit, no matter how good the imaging resist might be.
The (super)lens stack used here yields a computed diffraction-limited resolution of 243 nm. Gratings with periods from 500 nm down to 170 nm are imaged, with the depth of the modulation in the resist reducing as the grating period reduces. All of the gratings with periods above the diffraction limit (243 nm) are well resolved; the key results of this experiment are super-imaging below the diffraction limit for the 200 nm and 170 nm periods. In both cases the gratings are resolved even though the contrast is diminished, giving experimental confirmation of Pendry's superlensing proposal.
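As a rough check of the figure quoted above, the λ/n relation can be evaluated directly. The following minimal Python sketch assumes a mercury i-line illumination wavelength of 365 nm and a resist refractive index of about 1.5; both values are illustrative assumptions rather than figures stated in the experiment.

```python
# Minimum resolvable grating period for normal-incidence illumination: period_min = wavelength / n.
wavelength_nm = 365.0  # assumed mercury i-line illumination wavelength (illustrative)
n_medium = 1.5         # assumed refractive index of the imaging medium (illustrative)

period_min_nm = wavelength_nm / n_medium
print(f"Diffraction-limited minimum period: {period_min_nm:.0f} nm")  # ~243 nm

# Gratings with 200 nm and 170 nm periods fall below this limit, so resolving them
# requires recovery of evanescent components, as with the silver superlens.
```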
Negative index GRIN lenses
Gradient Index (GRIN) – The larger range of material response available in metamaterials should lead to improved GRIN lens design. In particular, since the permittivity and permeability of a metamaterial can be adjusted independently, metamaterial GRIN lenses can presumably be better matched to free space; the GRIN lens is constructed by using a slab of NIM with a variable index of refraction in the y direction, perpendicular to the direction of propagation z.
In 2005, a group proposed a theoretical way to overcome the near-field limitation using a new device termed a far-field superlens (FSL), which is a properly designed periodically corrugated metallic slab-based superlens.
Imaging was experimentally demonstrated in the far field, taking the next step after near-field experiments; the key element is termed a far-field superlens (FSL), which consists of a conventional superlens and a nanoscale coupler.
Focusing beyond the diffraction limit with far-field time reversal
An approach is presented for subwavelength focusing of microwaves using both a time-reversal mirror placed in the far field and a random distribution of scatterers placed in the near field of the focusing point.
Once capability for near-field imaging was demonstrated, the next step was to project a near-field image into the far field; this concept, including technique and materials, is dubbed "hyperlens".
The capability of a metamaterial-hyperlens for sub-diffraction-limited imaging is shown below.
Sub-diffraction imaging in the far field
With conventional optical lenses, the far field is a limit that is too distant for evanescent waves to arrive intact; when imaging an object, this limits the optical resolution of lenses to the order of the wavelength of light. These non-propagating waves carry detailed information in the form of high spatial resolution, and recovering them would overcome this limitation. Therefore, projecting image details normally limited by diffraction into the far field requires recovery of the evanescent waves.
In essence, the steps leading up to this investigation and demonstration involved the employment of an anisotropic metamaterial with a hyperbolic dispersion, such that ordinary evanescent waves propagate along the radial direction of the layered metamaterial. On a microscopic level the large spatial frequency waves propagate through coupled surface plasmon excitations between the metallic layers.
In 2007, just such an anisotropic metamaterial was employed as a magnifying optical hyperlens; the hyperlens consisted of a curved periodic stack of thin silver and alumina (at 35 nanometers thick) deposited on a half-cylindrical cavity, and fabricated on a quartz substrate. The radial and tangential permittivities have different signs.
Upon illumination, the scattered evanescent field from the object enters the anisotropic medium and propagates along the radial direction. Combined with another effect of the metamaterial, a magnified image at the outer diffraction limit-boundary of the hyperlens occurs. Once the magnified feature is larger than (beyond) the diffraction limit, it can then be imaged with a conventional optical microscope, thus demonstrating magnification and projection of a sub-diffraction-limited image into the far field.
The hyperlens magnifies the object by transforming the scattered evanescent waves into propagating waves in the anisotropic medium, projecting a high-resolution image into the far field; this type of metamaterials-based lens, paired with a conventional optical lens, is therefore able to reveal patterns too small to be discerned with an ordinary optical microscope. In one experiment, the lens was able to distinguish two 35-nanometer lines etched 150 nanometers apart. Without the metamaterials, the microscope showed only one thick line.
In a control experiment, the line pair object was imaged without the hyperlens; the line pair could not be resolved because the diffraction limit of the (optical) aperture was 260 nm. Because the hyperlens supports the propagation of a very broad spectrum of wave vectors, it can magnify arbitrary objects with sub-diffraction-limited resolution.
Although this work appears to be limited by being only a cylindrical hyperlens, the next step is to design a spherical lens; that lens will exhibit three-dimensional capability. Near-field optical microscopy uses a tip to scan an object. In contrast, this optical hyperlens magnifies an image that is sub-diffraction-limited; the magnified sub-diffraction image is then projected into the far field.
The optical hyperlens shows a notable potential for applications, such as real-time biomolecular imaging and nanolithography; such a lens could be used to watch cellular processes that have been impossible to see. Conversely, it could be used to project an image with extremely fine features onto a photoresist as a first step in photolithography, a process used to make computer chips; the hyperlens also has applications for DVD technology.
In 2010, a spherical hyperlens for two-dimensional imaging at visible frequencies was demonstrated experimentally; the spherical hyperlens, based on alternating layers of silver and titanium oxide, has strong anisotropic hyperbolic dispersion allowing super-resolution in the visible spectrum. The resolution was 160 nm in the visible spectrum, enabling biological imaging, such as of cells and DNA, with the strong benefit of magnifying sub-diffraction resolution into the far field.
Super-imaging in the visible frequency range
Continual improvements in optical microscopy are needed to keep up with the progress in nanotechnology and microbiology. Advancement in spatial resolution is key. Conventional optical microscopy is limited by a diffraction limit which is on the order of 200 nanometers (wavelength); this means that viruses, proteins, DNA molecules and many other samples are hard to observe with a regular (optical) microscope. The lens previously demonstrated with negative refractive index material, a thin planar superlens, does not provide magnification beyond the diffraction limit of conventional microscopes. Therefore, images smaller than the conventional diffraction limit will still be unavailable.
Another approach to achieving super-resolution at visible wavelengths is the recently developed spherical hyperlens based on alternating layers of silver and titanium oxide; it has a strong anisotropic hyperbolic dispersion allowing super-resolution by converting evanescent waves into propagating waves. This method is non-fluorescence-based super-resolution imaging, which results in real-time imaging without any reconstruction of images and information.
Super resolution far-field microscopy techniques
By 2008 the diffraction limit had been surpassed and lateral imaging resolutions of 20 to 50 nm had been achieved by several "super-resolution" far-field microscopy techniques, including stimulated emission depletion (STED) and its related RESOLFT (reversible saturable optically linear fluorescent transitions) microscopy; saturated structured illumination microscopy (SSIM); stochastic optical reconstruction microscopy (STORM); photoactivated localization microscopy (PALM); and other methods using similar principles.
Cylindrical superlens via coordinate transformation
This began with a proposal by Pendry in 2003. Magnifying the image required a new design concept in which the surface of the negatively refracting lens is curved. One cylinder touches another cylinder, resulting in a curved cylindrical lens which reproduced the contents of the smaller cylinder in magnified but undistorted form outside the larger cylinder. Coordinate transformations are required to curve the original perfect lens into the cylindrical lens structure.
In 2007, a superlens utilizing coordinate transformation was again the subject. However, in addition to image transfer other useful operations were discussed; translation, rotation, mirroring and inversion as well as the superlens effect. Furthermore, elements that perform magnification are described, which are free from geometric aberrations, on both the input and output sides while utilizing free space sourcing (rather than waveguide); these magnifying elements also operate in the near and far field, transferring the image from near field to far field.
Nano-optics with metamaterials
Nanohole array as a lens
Work in 2007 demonstrated that a quasi-periodic array of nanoholes in a metal screen was able to focus the optical energy of a plane wave to form subwavelength spots (hot spots); the distances to the spots were a few tens of wavelengths on the other side of the array, in other words, on the side opposite the incident plane wave. The quasi-periodic array of nanoholes functioned as a light concentrator.
In June 2008, this was followed by the demonstrated capability of an array of quasi-crystal nanoholes in a metal screen. Beyond concentrating hot spots, an image of a point source is displayed a few tens of wavelengths from the array, on the other side of the array (the image plane). In addition, this type of array exhibited a 1-to-1 linear displacement from the location of the point source to its respective, parallel location on the image plane; in other words, from x to x + δx. For example, other point sources were similarly displaced from x′ to x′ + δx′, from x″ to x″ + δx″, and so on. Instead of functioning as a light concentrator, this performs the function of conventional lens imaging with a 1-to-1 correspondence, albeit with a point source.
However, resolution of more complicated structures can be achieved as constructions of multiple point sources; the fine details, and brighter image, that are normally associated with the high numerical apertures of conventional lenses can be reliably produced. Notable applications for this technology arise when conventional optics is not suitable for the task at hand. For example, this technology is better suited for X-ray imaging, or nano-optical circuits, and so forth.
The metamaterial nanolens was constructed of millions of nanowires, 20 nanometers in diameter; these were precisely aligned in a packaged configuration. The lens is able to depict a clear, high-resolution image of nano-sized objects because it uses both normal propagating EM radiation and evanescent waves to construct the image. Super-resolution imaging was demonstrated over a distance of 6 times the wavelength (λ), in the far field, with a resolution of at least λ/4; this is a significant improvement over previous research and demonstrations of other near-field and far-field imaging, including the nanohole arrays discussed above.
Light transmission properties of holey metal films
In December 2009, the light transmission properties of holey metal films in the metamaterial limit, where the unit length of the periodic structures is much smaller than the operating wavelength, were analyzed theoretically.
Transporting an Image through a subwavelength hole
Theoretically it appears possible to transport a complex electromagnetic image through a tiny subwavelength hole with diameter considerably smaller than the diameter of the image, without losing the subwavelength details.
Nanoparticle imaging – quantum dots
When observing the complex processes in a living cell, significant processes (changes) or details are easy to overlook; this can more easily occur when watching changes that take a long time to unfold and require high-spatial-resolution imaging. However, recent research offers a solution to scrutinize activities that occur over hours or even days inside cells, potentially solving many of the mysteries associated with molecular-scale events occurring in these tiny organisms.
A joint research team, working at the National Institute of Standards and Technology (NIST) and the National Institute of Allergy and Infectious Diseases (NIAID), has discovered a method of using nanoparticles to illuminate the cellular interior to reveal these slow processes. Nanoparticles, thousands of times smaller than a cell, have a variety of applications. One type of nanoparticle called a quantum dot glows when exposed to light; these semiconductor particles can be coated with organic materials, which are tailored to be attracted to specific proteins within the part of a cell a scientist wishes to examine.
Notably, quantum dots last longer than many organic dyes and fluorescent proteins previously used to illuminate the interiors of cells. They also have the advantage of monitoring changes in cellular processes, whereas most high-resolution techniques, such as electron microscopy, only provide images of cellular processes frozen at one moment. Using quantum dots, cellular processes involving the dynamic motions of proteins can be observed.
The research focused primarily on characterizing quantum dot properties, contrasting them with other imaging techniques. In one example, quantum dots were designed to target a specific type of human red blood cell protein that forms part of a network structure in the cell's inner membrane; when these proteins cluster together in a healthy cell, the network provides mechanical flexibility to the cell so it can squeeze through narrow capillaries and other tight spaces. But when the cell gets infected with the malaria parasite, the structure of the network protein changes.
Because the clustering mechanism is not well understood, it was decided to examine it with the quantum dots. If a technique could be developed to visualize the clustering, then the progress of a malaria infection could be understood, which has several distinct developmental stages.
Research efforts revealed that as the membrane proteins bunch up, the quantum dots attached to them are induced to cluster themselves and glow more brightly, permitting real time observation as the clustering of proteins progresses. More broadly, the research discovered that when quantum dots attach themselves to other nanomaterials, the dots' optical properties change in unique ways in each case. Furthermore, evidence was discovered that quantum dot optical properties are altered as the nanoscale environment changes, offering greater possibility of using quantum dots to sense the local biochemical environment inside cells.
Some concerns remain over toxicity and other properties. However, the overall findings indicate that quantum dots could be a valuable tool to investigate dynamic cellular processes.
The abstract from the related published research paper states (in part): Results are presented regarding the dynamic fluorescence properties of bioconjugated nanocrystals or quantum dots (QDs) in different chemical and physical environments. A variety of QD samples was prepared and compared: isolated individual QDs, QD aggregates, and QDs conjugated to other nanoscale materials...
- Pendry, J. B. (2000). "Negative Refraction Makes a Perfect Lens" (PDF). Physical Review Letters. 85 (18): 3966–3969. Bibcode:2000PhRvL..85.3966P. doi:10.1103/PhysRevLett.85.3966. PMID 11041972.
- Zhang, Xiang; Liu, Zhaowei (2008). "Superlenses to overcome the diffraction limit" (Free PDF download). Nature Materials. 7 (6): 435–441. Bibcode:2008NatMa...7..435Z. doi:10.1038/nmat2141. PMID 18497850. Retrieved 2013-06-03.
- Aguirre, Edwin L. (2012-09-18). "Creating a 'Perfect' Lens for Super-Resolution Imaging". Journal of Nanophotonics. 4 (1): 043514. Bibcode:2010JNano...4d3514K. doi:10.1117/1.3484153. Retrieved 2013-06-02.
- Kawata, S.; Inouye, Y.; Verma, P. (2009). "Plasmonics for near-field nano-imaging and superlensing". Nature Photonics. 3 (7): 388–394. Bibcode:2009NaPho...3..388K. doi:10.1038/nphoton.2009.111.
- Vinson, V; Chin, G. (2007). "Introduction to special issue – Lights, Camera, Action". Science. 316 (5828): 1143. doi:10.1126/science.316.5828.1143.
- Pendry, John (September 2004). "Manipulating the Near Field" (PDF). Optics & Photonics News.
- Anantha, S. Ramakrishna; J.B. Pendry; M.C.K. Wiltshire; W.J. Stewart (2003). "Imaging the Near Field" (PDF). Journal of Modern Optics. 50 (9): 1419–1430. doi:10.1080/0950034021000020824.
- GB 541753, Dennis Gabor, "Improvements in or relating to optical systems composed of lenticules", published 1941
- Lauterbur, P. (1973). "Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance". Nature. 242 (5394): 190–191. Bibcode:1973Natur.242..190L. doi:10.1038/242190a0.
- "Prof. Sir John Pendry, Imperial College, London". Colloquia Series. Research Laboratory of Electronics. 13 March 2007. Retrieved 2010-04-07.
- Yeager, A. (28 March 2009). "Cornering The Terahertz Gap". Science News. Retrieved 2010-03-02.
- Savo, S.; Andreone, A.; Di Gennaro, E. (2009). "Superlensing properties of one-dimensional dielectric photonic crystals". Optics Express. 17 (22): 19848–19856. arXiv:0907.3821. Bibcode:2009OExpr..1719848S. doi:10.1364/OE.17.019848. PMID 19997206.
- Parimi, P.; et al. (2003). "Imaging by Flat Lens using Negative Refraction". Nature. 426 (6965): 404. Bibcode:2003Natur.426..404P. doi:10.1038/426404a. PMID 14647372.
- Bullis, Kevin (2007-03-27). "Superlenses and Smaller Computer Chips". Technology Review magazine of Massachusetts Institute of Technology. Retrieved 2010-01-13.
- Novotny, Lukas (November 2007). "Adapted from "The History of Near-field Optics"" (PDF). In Wolf, Emil (ed.). Progress in Optics. Progress In Optics series. 50. Amsterdam: Elsevier. pp. 142–150. ISBN 978-0-444-53023-3.
- Synge, E.H. (1928). "A suggested method for extending the microscopic resolution into the ultramicroscopic region". Philosophical Magazine and Journal of Science. Series 7. 6 (35): 356–362. doi:10.1080/14786440808564615.
- Synge, E.H. (1932). "An application of piezoelectricity to microscopy". Philos. Mag. 13 (83): 297. doi:10.1080/14786443209461931.
- Smith, H.I. (1974). "Fabrication techniques for surface-acoustic-wave and thin-film optical devices". Proceedings of the IEEE. 62 (10): 1361–1387. doi:10.1109/PROC.1974.9627.
- Srituravanich, W.; et al. (2004). "Plasmonic Nanolithography" (PDF). Nano Letters. 4 (6): 1085–1088. Bibcode:2004NanoL...4.1085S. doi:10.1021/nl049573q. Archived from the original (PDF) on April 15, 2010.
- Fischer, U. Ch.; Zingsheim, H. P. (1981). "Submicroscopic pattern replication with visible light". Journal of Vacuum Science and Technology. 19 (4): 881. Bibcode:1981JVST...19..881F. doi:10.1116/1.571227.
- Schmid, H.; et al. (1998). "Light-coupling masks for lensless, sub-wavelength optical lithography" (PDF). Applied Physics Letters. 73 (19): 237. Bibcode:1998ApPhL..72.2379S. doi:10.1063/1.121362.
- Fang, N.; et al. (2005). "Sub–Diffraction-Limited Optical Imaging with a Silver Superlens". Science. 308 (5721): 534–537. Bibcode:2005Sci...308..534F. doi:10.1126/science.1108759. PMID 15845849.
- Garcia, N.; Nieto-Vesperinas, M. (2002). "Left-Handed Materials Do Not Make a Perfect Lens". Physical Review Letters. 88 (20): 207403. Bibcode:2002PhRvL..88t7403G. doi:10.1103/PhysRevLett.88.207403. PMID 12005605.
- "David R. Smith (May 10, 2004). "Breaking the diffraction limit". Institute of Physics. Retrieved May 31, 2009.
- Pendry, J. B. (2000). "Negative refraction makes a perfect lens". Phys. Rev. Lett. 85 (18): 3966–3969. Bibcode:2000PhRvL..85.3966P. doi:10.1103/PhysRevLett.85.3966. PMID 11041972.
- Podolskiy, V.A.; Narimanov, EE (2005). "Near-sighted superlens". Opt. Lett. 30 (1): 75–7. arXiv:physics/0403139. Bibcode:2005OptL...30...75P. doi:10.1364/OL.30.000075. PMID 15648643.
- Tassin, P.; Veretennicoff, I; Vandersande, G (2006). "Veselago's lens consisting of left-handed materials with arbitrary index of refraction". Opt. Commun. 264 (1): 130–134. Bibcode:2006OptCo.264..130T. doi:10.1016/j.optcom.2006.02.013.
- Brumfiel, G (2009). "Metamaterials: Ideal focus". Nature News. 459 (7246): 504–505. doi:10.1038/459504a. PMID 19478762.
- Melville, David; Blaikie, Richard (2005-03-21). "Super-resolution imaging through a planar silver layer" (PDF). Optics Express. 13 (6): 2127–2134. Bibcode:2005OExpr..13.2127M. doi:10.1364/OPEX.13.002127. PMID 19495100. Retrieved 2009-10-23.
- Fang, Nicholas; Lee, H; Sun, C; Zhang, X (2005). "Sub–Diffraction-Limited Optical Imaging with a Silver Superlens". Science. 308 (5721): 534–537. Bibcode:2005Sci...308..534F. doi:10.1126/science.1108759. PMID 15845849.
- Zhang, Yong; Fluegel, B.; Mascarenhas, A. (2003). "Total Negative Refraction in Real Crystals for Ballistic Electrons and Light". Physical Review Letters. 91 (15): 157404. Bibcode:2003PhRvL..91o7404Z. doi:10.1103/PhysRevLett.91.157404. PMID 14611495.
- Belov, Pavel; Simovski, Constantin (2005). "Canalization of subwavelength images by electromagnetic crystals". Physical Review B. 71 (19): 193105. Bibcode:2005PhRvB..71s3105B. doi:10.1103/PhysRevB.71.193105.
- Grbic, A.; Eleftheriades, G. V. (2004). "Overcoming the Diffraction Limit with a Planar Left-handed Transmission-line Lens". Physical Review Letters. 92 (11): 117403. Bibcode:2004PhRvL..92k7403G. doi:10.1103/PhysRevLett.92.117403. PMID 15089166.
- Nielsen, R. B.; Thoreson, M. D.; Chen, W.; Kristensen, A.; Hvam, J. M.; Shalaev, V. M.; Boltasseva, A. (2010). "Toward superlensing with metal–dielectric composites and multilayers" (PDF). Applied Physics B. 100 (1): 93–100. Bibcode:2010ApPhB.100...93N. doi:10.1007/s00340-010-4065-z. Archived from the original (Free PDF download) on September 8, 2014.
- Fang, N.; Lee, H; Sun, C; Zhang, X (2005). "Sub-Diffraction-Limited Optical Imaging with a Silver Superlens". Science. 308 (5721): 534–537. Bibcode:2005Sci...308..534F. doi:10.1126/science.1108759. PMID 15845849.
- Jeppesen, C.; Nielsen, R. B.; Boltasseva, A.; Xiao, S.; Mortensen, N. A.; Kristensen, A. (2009). "Thin film Ag superlens towards lab-on-a-chip integration" (PDF). Optics Express. 17 (25): 22543–52. Bibcode:2009OExpr..1722543J. doi:10.1364/OE.17.022543. PMID 20052179.
- Valentine, J.; et al. (2008). "Three-dimensional optical metamaterial with a negative refractive index". Nature. 455 (7211): 376–379. Bibcode:2008Natur.455..376V. doi:10.1038/nature07247. PMID 18690249.
- Yao, J.; et al. (2008). "Optical Negative Refraction in Bulk Metamaterials of Nanowires". Science. 321 (5891): 930. Bibcode:2008Sci...321..930Y. CiteSeerX 10.1.1.716.4426. doi:10.1126/science.1157566. PMID 18703734.
- Shivanand; Liu, Huikan; Webb, K.J. (2008). "Imaging performance of an isotropic negative dielectric constant slab". Opt. Lett. 33 (21): 2562. Bibcode:2008OptL...33.2562S. doi:10.1364/OL.33.002562.
- Liu, Huikan; Shivanand; Webb, K.J. (2008). "Subwavelength imaging opportunities with planar uniaxial anisotropic lenses". Opt. Lett. 33 (21): 2568. Bibcode:2008OptL...33.2568L. doi:10.1364/OL.33.002568.
- W. Cai, D.A. Genov, V.M. Shalaev, Phys. Rev. B 72, 193101 (2005)
- A.V. Kildishev, W. Cai, U.K. Chettiar, H.-K. Yuan, A.K. Sarychev, V.P. Drachev, V.M. Shalaev, J. Opt. Soc. Am. B 23, 423 (2006)
- L. Shi, L. Gao, S. He, B. Li, Phys. Rev. B 76, 045116 (2007)
- Z. Jacob, L.V. Alekseyev, E. Narimanov, Opt. Express 14, 8247 (2006)
- P.A. Belov, Y. Hao, Phys. Rev. B 73, 113110 (2006)
- B. Wood, J.B. Pendry, D.P. Tsai, Phys. Rev. B 74, 115116 (2006)
- E. Shamonina, V.A. Kalinin, K.H. Ringhofer, L. Solymar, Electron. Lett. 37, 1243 (2001)
- Shivanand; Ludwig, Alon; Webb, K.J. (2012). "Impact of surface roughness on the effective dielectric constants and subwavelength image resolution of metal–insulator stack lenses". Opt. Lett. 37 (20): 4317. Bibcode:2012OptL...37.4317S. doi:10.1364/OL.37.004317.
- Ziolkowski, R. W.; Heyman, E. (2001). "Wave propagation in media having negative permittivity and permeability" (PDF). Physical Review E. 64 (5): 056625. Bibcode:2001PhRvE..64e6625Z. doi:10.1103/PhysRevE.64.056625. PMID 11736134. Archived from the original (PDF) on July 17, 2010.
- Smolyaninov, Igor I.; Hung, YJ; Davis, CC (2007-03-27). "Magnifying Superlens in the Visible Frequency Range". Science. 315 (5819): 1699–1701. arXiv:physics/0610230. Bibcode:2007Sci...315.1699S. doi:10.1126/science.1138746. PMID 17379804.
- Dumé, B. (21 April 2005). "Superlens breakthrough". Physics World.
- Pendry, J. B. (18 February 2005). "Collection of photonics references".
- Smith, D.R.; et al. (2003). "Limitations on subdiffraction imaging with a negative refractive index slab" (PDF). Applied Physics Letters. 82 (10): 1506–1508. arXiv:cond-mat/0206568. Bibcode:2003ApPhL..82.1506S. doi:10.1063/1.1554779.
- Shelby, R. A.; Smith, D. R.; Schultz, S. (2001). "Experimental Verification of a Negative Index of Refraction". Science. 292 (5514): 77–9. Bibcode:2001Sci...292...77S. CiteSeerX 10.1.1.119.1617. doi:10.1126/science.1058847. PMID 11292865.
- Sadatgol, M.; Ozdemir, S. K.; Yang, L.; Guney, D. O. (2015). "Plasmon injection to compensate and control losses in negative index metamaterials". Physical Review Letters. 115 (3): 035502. arXiv:1506.06282. Bibcode:2015PhRvL.115c5502S. doi:10.1103/physrevlett.115.035502. PMID 26230802.
- Adams, W.; Sadatgol, M.; Zhang, X.; Guney, D. O. (2016). "Bringing the 'perfect lens' into focus by near-perfect compensation of losses without gain media". New Journal of Physics. 18 (12): 125004. arXiv:1607.07464. Bibcode:2016NJPh...18l5004A. doi:10.1088/1367-2630/aa4f9e.
- A. Ghoshroy, W. Adams, X. Zhang, and D. O. Guney, Active plasmon injection scheme for subdiffraction imaging with imperfect negative index flat lens, arXiv: 1706.03886
- Zhang, Xu; Adams, Wyatt; Guney, Durdu O. (2017). "Analytical description of inverse filter emulating the plasmon injection loss compensation scheme and implementation for ultrahigh-resolution hyperlens". J. Opt. Soc. Am. B. 34 (6): 1310. Bibcode:2017JOSAB..34.1310Z. doi:10.1364/josab.34.001310.
- Wiltshire, M. C. K.; et al. (2003). "Metamaterial endoscope for magnetic field transfer: near field imaging with magnetic wires" (PDF). Optics Express. 11 (7): 709–715. Bibcode:2003OExpr..11..709W. doi:10.1364/OE.11.000709. PMID 19461782. Archived from the original (PDF) on 2009-04-19.
- Dumé, B. (4 April 2005). "Superlens breakthrough". Physics World. Retrieved 2009-11-10.
- Liu, Z.; et al. (2003). "Rapid growth of evanescent wave by a silver superlens" (PDF). Applied Physics Letters. 83 (25): 5184. Bibcode:2003ApPhL...83.5184L. doi:10.1063/1.1636250. Archived from the original (PDF) on June 24, 2010.
- Lagarkov, A. N.; V. N. Kissel (2004-02-18). "Near-Perfect Imaging in a Focusing System Based on a Left-Handed-Material Plate". Phys. Rev. Lett. 92 (7): 077401 [4 pages]. Bibcode:2004PhRvL..92g7401L. doi:10.1103/PhysRevLett.92.077401. PMID 14995884.
- Blaikie, Richard J; Melville, David O. S. (2005-01-20). "Imaging through planar silver lenses in the optical near field". J. Opt. Soc. Am. A. 7 (2): S176–S183. Bibcode:2005JOptA...7S.176B. doi:10.1088/1464-4258/7/2/023.
- Greegor RB, et al. (2005-08-25). "Simulation and testing of a graded negative index of refraction lens" (PDF). Applied Physics Letters. 87 (9): 091114. Bibcode:2005ApPhL..87i1114G. doi:10.1063/1.2037202. Archived from the original (PDF) on June 18, 2010. Retrieved 2009-11-01.
- Durant, Stéphane; et al. (2005-12-02). "Theory of the transmission properties of an optical far-field superlens for imaging beyond the diffraction limit" (PDF). J. Opt. Soc. Am. B. 23 (11): 2383–2392. Bibcode:2006JOSAB..23.2383D. doi:10.1364/JOSAB.23.002383. Retrieved 2009-10-26.
- Liu, Zhaowei; et al. (2007-05-22). "Experimental studies of far-field superlens for sub-diffractional optical imaging" (PDF). Optics Express. 15 (11): 6947–6954. Bibcode:2007OExpr..15.6947L. doi:10.1364/OE.15.006947. PMID 19547010. Archived from the original (PDF) on June 24, 2010. Retrieved 2009-10-26.
- Geoffroy, Lerosey; et al. (2007-02-27). "Focusing Beyond the Diffraction Limit with Far-Field Time Reversal". Science. 315 (5815): 1120–1122. Bibcode:2007Sci...315.1120L. doi:10.1126/science.1134824. PMID 17322059.
- Jacob, Z.; Alekseyev, L.; Narimanov, E. (2005). "Optical Hyperlens: Far-field imaging beyond the diffraction limit". Optics Express. 14 (18): 8247–8256. arXiv:physics/0607277. Bibcode:2006OExpr..14.8247J. doi:10.1364/OE.14.008247. PMID 19529199.
- Salandrino, Alessandro; Nader Engheta (2006-08-16). "Far-field subdiffraction optical microscopy using metamaterial crystals: Theory and simulations". Phys. Rev. B. 74 (7): 075103. Bibcode:2006PhRvB..74g5103S. doi:10.1103/PhysRevB.74.075103. hdl:1808/21743.
- Wang, Junxia; Yang Xu Hongsheng Chen; Zhang, Baile (2012). "Ultraviolet dielectric hyperlens with layered graphene and boron nitride". arXiv:1205.4823 [physics.chem-ph].
- Hart, William S; Bak, Alexey O; Phillips, Chris C (7 February 2018). "Ultra low-loss super-resolution with extremely anisotropic semiconductor metamaterials". AIP Advances. 8 (2): 025203. Bibcode:2018AIPA....8b5203H. doi:10.1063/1.5013084.
- Liu, Z.; et al. (2007-03-27). "Far-Field Optical Hyperlens Magnifying Sub-Diffraction-Limited Objects" (PDF). Science. 315 (5819): 1686. Bibcode:2007Sci...315.1686L. CiteSeerX 10.1.1.708.3342. doi:10.1126/science.1137368. PMID 17379801. Archived from the original (PDF) on September 20, 2009.
- Rho, Junsuk; Ye, Ziliang; Xiong, Yi; Yin, Xiaobo; Liu, Zhaowei; Choi, Hyeunseok; Bartal, Guy; Zhang, Xiang (1 December 2010). "Spherical hyperlens for two-dimensional sub-diffractional imaging at visible frequencies" (PDF). Nature Communications. 1 (9): 143. Bibcode:2010NatCo...1E.143R. doi:10.1038/ncomms1148. PMID 21266993. Archived from the original (PDF) on August 31, 2012.
- Huang, Bo; Wang, W.; Bates, M.; Zhuang, X. (2008-02-08). "Three-Dimensional Super-Resolution Imaging by Stochastic Optical Reconstruction Microscopy". Science. 319 (5864): 810–813. Bibcode:2008Sci...319..810H. doi:10.1126/science.1153529. PMC 2633023. PMID 18174397.
- Pendry, John (2003-04-07). "Perfect cylindrical lenses" (PDF). Optics Express. 11 (7): 755. Bibcode:2003OExpr..11..755P. doi:10.1364/OE.11.000755. Retrieved 2009-11-04.
- Milton, Graeme W.; Nicorovici, Nicolae-Alexandru P.; McPhedran, Ross C.; Podolskiy, Viktor A. (2005-12-08). "A proof of superlensing in the quasistatic regime, and limitations of superlenses in this regime due to anomalous localized resonance". Proceedings of the Royal Society A. 461 (2064): 3999 [36 pages]. Bibcode:2005RSPSA.461.3999M. doi:10.1098/rspa.2005.1570.
- Schurig, D.; J. B. Pendry; D. R. Smith (2007-10-24). "Transformation-designed optical elements". Optics Express. 15 (22): 14772 [10 pages]. Bibcode:2007OExpr..1514772S. doi:10.1364/OE.15.014772.
- Tsang, Mankei; Psaltis, Demetri (2008). "Magnifying perfect lens and superlens design by coordinate transformation". Physical Review B. 77 (3): 035122. arXiv:0708.0262. Bibcode:2008PhRvB..77c5122T. doi:10.1103/PhysRevB.77.035122.
- Huang FM, et al. (2008-06-24). "Nanohole Array as a Lens" (PDF). Nano Lett. 8 (8): 2469–2472. Bibcode:2008NanoL...8.2469H. doi:10.1021/nl801476v. PMID 18572971. Retrieved 2009-12-21.
- "Northeastern physicists develop 3D metamaterial nanolens that achieves super-resolution imaging". prototype super-resolution metamaterial nanonlens. Nanotechwire.com. 2010-01-18. Retrieved 2010-01-20.
- Casse, B. D. F.; Lu, W. T.; Huang, Y. J.; Gultepe, E.; Menon, L.; Sridhar, S. (2010). "Super-resolution imaging using a three-dimensional metamaterials nanolens". Applied Physics Letters. 96 (2): 023114. Bibcode:2010ApPhL..96b3114C. doi:10.1063/1.3291677. hdl:2047/d20002681.
- Jung, J.; Martín-Moreno, L.; García-Vidal, F. J. (2009-12-09). "Light transmission properties of holey metal films in the metamaterial limit: effective medium theory and subwavelength imaging". New Journal of Physics. 11 (12): 123013. Bibcode:2009NJPh...11l3013J. doi:10.1088/1367-2630/11/12/123013.
- Silveirinha, Mário G.; Engheta, Nader (2009-03-13). "Transporting an Image through a Subwavelength Hole". Physical Review Letters. 102 (10): 103902. Bibcode:2009PhRvL.102j3902S. doi:10.1103/PhysRevLett.102.103902. PMID 19392114.
- Kang, Hyeong-Gon; Tokumasu, Fuyuki; Clarke, Matthew; Zhou, Zhenping; Tang, Jianyong; Nguyen, Tinh; Hwang, Jeeseong (2010). "Probing dynamic fluorescence properties of single and clustered quantum dots toward quantitative biomedical imaging of cells". Wiley Interdisciplinary Reviews: Nanomedicine and Nanobiotechnology. 2 (1): 48–58. doi:10.1002/wnan.62. PMID 20049830.
- "The Quest for the Superlens" by John B. Pendry and David R. Smith. Scientific American. July 2006. PDF Imperial College.
- Subwavelength imaging
- Professor Sir John Pendry at MIT – "The Perfect Lens: Resolution Beyond the Limits of Wavelength"
- "Surface plasmon subwavelength optics" 2009-12-05
- "Superlenses to overcome the diffraction limit"
- "Breaking the diffracion limit" Overview of superlens theory
- "Flat Superlens Simulation" EM Talk
- "Superlens microscope gets up close"
- "Superlens breakthrough"
- "Superlens breaks optical barrier"
- "Materials with negative index of refraction" by V.A. Podolskiy
- "Optimizing the superlens: Manipulating geometry to enhance the resolution" by V.A. Podolskiy and Nicholas A. Kuhta
- "Now you see it, now you don't: cloaking device is not just sci-fi"
- "Initial page describes first demonstration of negative refraction in a natural material"
- "Negative-index materials made easy"
- "Simple 'superlens' sharpens focusing power" – A lens able to focus 10 times more intensely than any conventional design could significantly enhance wireless power transmission and photolithography (New Scientist, 24 April 2008)
- "Far-Field Optical Nanoscopy" by Stefan W. Hell. Vol. 316. Science. 25 May 2007
- "Ultraviolet dielectric hyperlens with layered graphene and boron nitride", 22 May 2012
- Andrei, Mihai (2018-01-04). "New, revolutionary metalens focuses entire visible spectrum into a single point". ZME Science. Retrieved 2018-01-05.
In arithmetic, short division is a division algorithm which breaks down a division problem into a series of easy steps. It is an abbreviated form of long division — whereby the products are omitted and the partial remainders are notated as superscripts.
As a result, a short division tableau is always more notationally efficient than its long division counterpart — though sometimes at the expense of relying on mental arithmetic, which could limit the size of the divisor. For most people, small integer divisors up to 12 are handled using memorised multiplication tables, although the procedure can also be adapted to larger divisors.
As in all division problems, a number called the dividend is divided by another, called the divisor. The answer to the problem would be the quotient, and in the case of Euclidean division, the remainder would be included as well.
Using short division, one can solve a division problem with a very large dividend by following a series of easy steps.
Short division does not use the slash (/) or obelus (÷) symbols. Instead, it displays the dividend, divisor, and quotient (when it is found) in a tableau. An example is shown below, representing the division of 500 by 4. The quotient is 125.
Alternatively, the bar may be placed below the number, which means the sum proceeds down the page. This is in distinction to long division, where the space under the dividend is required for workings.
The procedure involves several steps. As an example, consider 950 divided by 4:
- The dividend and divisor are written in the short division tableau:
- The first number to be divided by the divisor (4) is the partial dividend (9). We write the integer part of the result (2) above the division bar over the leftmost digit of the dividend, and we write the remainder (1) as a small digit above and to the right of the partial dividend (9).
- Next we repeat step 2, using the small digit concatenated with the next digit of the dividend to form a new partial dividend (15). Dividing the new partial dividend by the divisor (4), we write the result as before — the quotient above the next digit of the dividend, and the remainder as a small digit to the upper right. (Here 15 divided by 4 is 3, with a remainder of 3.)
- We continue repeating step 2 until there are no digits remaining in the dividend. In this example, we see that 30 divided by 4 is 7 with a remainder of 2. The number written above the bar (237) is the quotient, and the last small digit (2) is the remainder.
- The answer in this example is 237 with a remainder of 2. Alternatively, we can continue the above procedure to produce a decimal answer, by adding a decimal point and zeroes as necessary at the right of the dividend and then treating each zero as another digit of the dividend. In this example, the carried remainder 2 together with an appended 0 forms 20, and 20 divided by 4 is 5 exactly, so the decimal answer is 237.5.
With the alternative layout, the same workings proceed down the page below the dividend.
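The digit-by-digit procedure described above can be sketched in a few lines of Python; the function name short_division is illustrative, and the example reproduces the worked case of 950 divided by 4 (quotient 237, remainder 2).

```python
def short_division(dividend: int, divisor: int):
    """Digit-by-digit short division, returning (quotient, remainder).

    Each step divides the carried remainder, concatenated with the next
    digit of the dividend, by the divisor - mirroring the tableau above.
    """
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        partial = remainder * 10 + int(digit)         # new partial dividend
        quotient_digits.append(str(partial // divisor))
        remainder = partial % divisor                 # the small "superscript" carry
    return int("".join(quotient_digits)), remainder

print(short_division(950, 4))  # (237, 2)
print(short_division(500, 4))  # (125, 0)
```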
A common requirement is to reduce a number to its prime factors. This is used particularly in working with vulgar fractions. The dividend is successively divided by prime numbers, repeating where possible:
So 950 = 2 × 5² × 19
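The same repeated division can be carried out programmatically; the sketch below uses simple trial division (the helper name prime_factors is illustrative) and reproduces 950 = 2 × 5² × 19.

```python
def prime_factors(n: int):
    """Return the prime factors of n (with repetition) by repeated division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:      # divide out each factor as often as possible
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(950))  # [2, 5, 5, 19]
```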
When one is interested only in the remainder of the division, this procedure (a variation of short division) ignores the quotient and tallies only the remainders. It can be used for manual modulo calculation or as a test for even divisibility. The quotient digits are not written down.
For example, what is the remainder of 16762109 divided by 7?
The remainder is zero, so 16762109 is exactly divisible by 7.
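This remainder-only variant is the same carry process with the quotient digits discarded. A minimal Python sketch follows (the function name is illustrative):

```python
def remainder_only(dividend: int, divisor: int) -> int:
    """Tally only the remainders, digit by digit, as in the divisibility test above."""
    remainder = 0
    for digit in str(dividend):
        remainder = (remainder * 10 + int(digit)) % divisor
    return remainder

print(remainder_only(16762109, 7))  # 0, so 16762109 is exactly divisible by 7
```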
- Arbitrary-precision arithmetic
- Chunking (division)
- Division algorithm
- Elementary arithmetic
- Fourier division
- Long division
- Polynomial long division
- Synthetic division
- "The Definitive Higher Math Guide to Long Division and Its Variants — for Integers". Math Vault. 2019-02-24. Retrieved 2019-06-23.
- G.P Quackenbos, LL.D. (1874). "Chapter VII: Division". A Practical Arithmetic. D. Appleton & Company.
- "Dividing whole numbers -- A complete course in arithmetic". www.themathpage.com. Retrieved 2019-06-23. |
A galaxy is a gravitationally bound system of stars, stellar remnants, interstellar gas, dust, and dark matter. The word galaxy is derived from the Greek galaxias (γαλαξίας), literally "milky", a reference to the Milky Way. Galaxies range in size from dwarfs with just a few hundred million (10⁸) stars to giants with one hundred trillion (10¹⁴) stars, each orbiting its galaxy's center of mass.
Galaxies are categorized according to their visual morphology as elliptical, spiral, or irregular. Many galaxies are thought to have supermassive black holes at their centers. The Milky Way's central black hole, known as Sagittarius A*, has a mass four million times greater than the Sun. As of March 2016, GN-z11 is the oldest and most distant observed galaxy with a comoving distance of 32 billion light-years from Earth, and observed as it existed just 400 million years after the Big Bang.
Research released in 2016 revised the number of galaxies in the observable universe from a previous estimate of 200 billion (2×10¹¹) to a suggested 2 trillion (2×10¹²) or more, containing more stars than all the grains of sand on planet Earth. Most of the galaxies are 1,000 to 100,000 parsecs in diameter (approximately 3,000 to 300,000 light-years) and separated by distances on the order of millions of parsecs (or megaparsecs). For comparison, the Milky Way has a diameter of at least 30,000 parsecs (100,000 ly) and is separated from the Andromeda Galaxy, its nearest large neighbor, by 780,000 parsecs (2.5 million ly).
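As a quick consistency check of the unit conversions quoted above, the parsec figures can be converted to light-years (using 1 parsec ≈ 3.26 light-years); the snippet below is illustrative only.

```python
LY_PER_PARSEC = 3.2616  # approximate light-years per parsec

for parsecs in (1_000, 30_000, 100_000, 780_000):
    print(f"{parsecs:>7,} pc ≈ {parsecs * LY_PER_PARSEC:,.0f} ly")
# 1,000 pc ≈ 3,262 ly;  30,000 pc ≈ 97,848 ly (the ~100,000 ly Milky Way diameter)
# 100,000 pc ≈ 326,160 ly;  780,000 pc ≈ 2,544,048 ly (~2.5 million ly to Andromeda)
```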
The space between galaxies is filled with a tenuous gas (the intergalactic medium) having an average density of less than one atom per cubic meter. The majority of galaxies are gravitationally organized into groups, clusters, and superclusters. The Milky Way is part of the Local Group, which is dominated by it and the Andromeda Galaxy and is part of the Virgo Supercluster. At the largest scale, these associations are generally arranged into sheets and filaments surrounded by immense voids. The largest structure of galaxies yet recognised is a cluster of superclusters that has been named Laniakea, which contains the Virgo supercluster.
The origin of the word galaxy derives from the Greek term for the Milky Way, galaxias (γαλαξίας, "milky one"), or kyklos galaktikos ("milky circle") due to its appearance as a "milky" band of light in the sky. In Greek mythology, Zeus places his son born by a mortal woman, the infant Heracles, on Hera's breast while she is asleep so that the baby will drink her divine milk and will thus become immortal. Hera wakes up while breastfeeding and then realizes she is nursing an unknown baby: she pushes the baby away, some of her milk spills, and it produces the faint band of light known as the Milky Way.
In the astronomical literature, the capitalized word "Galaxy" is often used to refer to our galaxy, the Milky Way, to distinguish it from the other galaxies in our universe. The English term Milky Way can be traced back to a story by Chaucer c. 1380:
See yonder, lo, the Galaxyë
Which men clepeth the Milky Wey,
For hit is whyt.
Galaxies were initially discovered telescopically and were known as spiral nebulae. Most 18th- to 19th-century astronomers considered them either unresolved star clusters or anagalactic nebulae, and they were simply thought to be a part of the Milky Way; their true composition and nature remained a mystery. Observations using larger telescopes of a few nearby bright galaxies, like the Andromeda Galaxy, began resolving them into huge conglomerations of stars, but based simply on the apparent faintness and sheer population of stars, the true distances of these objects placed them well beyond the Milky Way. For this reason they were popularly called island universes, but this term quickly fell into disuse, as the word universe implied the entirety of existence. Instead, they became known simply as galaxies.
Tens of thousands of galaxies have been catalogued, but only a few have well-established names, such as the Andromeda Galaxy, the Magellanic Clouds, the Whirlpool Galaxy, and the Sombrero Galaxy. Astronomers work with numbers from certain catalogues, such as the Messier catalogue, the NGC (New General Catalogue), the IC (Index Catalogue), the CGCG (Catalogue of Galaxies and of Clusters of Galaxies), the MCG (Morphological Catalogue of Galaxies) and UGC (Uppsala General Catalogue of Galaxies). All of the well-known galaxies appear in one or more of these catalogues but each time under a different number. For example, Messier 109 is a spiral galaxy having the number 109 in the catalogue of Messier, and also having the designations NGC 3992, UGC 6937, CGCG 269-023, MCG +09-20-044, and PGC 37617.
The realization that we live in a galaxy which is one among many galaxies, parallels major discoveries that were made about the Milky Way and other nebulae.
The Greek philosopher Democritus (450–370 BCE) proposed that the bright band on the night sky known as the Milky Way might consist of distant stars. Aristotle (384–322 BCE), however, believed the Milky Way to be caused by "the ignition of the fiery exhalation of some stars that were large, numerous and close together" and that the "ignition takes place in the upper part of the atmosphere, in the region of the World that is continuous with the heavenly motions." The Neoplatonist philosopher Olympiodorus the Younger (c. 495–570 CE) was critical of this view, arguing that if the Milky Way is sublunary (situated between Earth and the Moon) it should appear different at different times and places on Earth, and that it should have parallax, which it does not. In his view, the Milky Way is celestial.
According to Mohani Mohamed, the Arabian astronomer Alhazen (965–1037) made the first attempt at observing and measuring the Milky Way's parallax, and he thus "determined that because the Milky Way had no parallax, it must be remote from the Earth, not belonging to the atmosphere." The Persian astronomer al-Bīrūnī (973–1048) proposed the Milky Way galaxy to be "a collection of countless fragments of the nature of nebulous stars." The Andalusian astronomer Ibn Bâjjah ("Avempace", d. 1138) proposed that the Milky Way is made up of many stars that almost touch one another and appear to be a continuous image due to the effect of refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars as evidence of this occurring when two objects are near. In the 14th century, the Syrian-born Ibn Qayyim proposed the Milky Way galaxy to be "a myriad of tiny stars packed together in the sphere of the fixed stars."
Actual proof of the Milky Way consisting of many stars came in 1610 when the Italian astronomer Galileo Galilei used a telescope to study the Milky Way and discovered that it is composed of a huge number of faint stars. In 1750 the English astronomer Thomas Wright, in his An original theory or new hypothesis of the Universe, speculated (correctly) that the galaxy might be a rotating body of a huge number of stars held together by gravitational forces, akin to the Solar System but on a much larger scale. The resulting disk of stars can be seen as a band on the sky from our perspective inside the disk. In a treatise in 1755, Immanuel Kant elaborated on Wright's idea about the structure of the Milky Way.
The first project to describe the shape of the Milky Way and the position of the Sun was undertaken by William Herschel in 1785 by counting the number of stars in different regions of the sky. He produced a diagram of the shape of the galaxy with the Solar System close to the center. Using a refined approach, Kapteyn in 1920 arrived at the picture of a small (diameter about 15 kiloparsecs) ellipsoid galaxy with the Sun close to the center. A different method by Harlow Shapley based on the cataloguing of globular clusters led to a radically different picture: a flat disk with diameter approximately 70 kiloparsecs and the Sun far from the center. Both analyses failed to take into account the absorption of light by interstellar dust present in the galactic plane, but after Robert Julius Trumpler quantified this effect in 1930 by studying open clusters, the present picture of our host galaxy, the Milky Way, emerged.
Distinction from other nebulae
A few galaxies outside the Milky Way are visible on a dark night to the unaided eye, including the Andromeda Galaxy, the Large Magellanic Cloud, the Small Magellanic Cloud, and the Triangulum Galaxy. In the 10th century, the Persian astronomer Al-Sufi made the earliest recorded identification of the Andromeda Galaxy, describing it as a "small cloud". In 964, Al-Sufi probably mentioned the Large Magellanic Cloud in his Book of Fixed Stars (referring to "Al Bakr of the southern Arabs", since at a declination of about 70° south it was not visible where he lived); it was not well known to Europeans until Magellan's voyage in the 16th century. The Andromeda Galaxy was later independently noted by Simon Marius in 1612. In 1734, philosopher Emanuel Swedenborg in his Principia speculated that there may be galaxies outside our own that are formed into galactic clusters that are minuscule parts of the universe which extends far beyond what we can see. These views "are remarkably close to the present-day views of the cosmos." In 1750, Thomas Wright speculated (correctly) that the Milky Way is a flattened disk of stars, and that some of the nebulae visible in the night sky might be separate Milky Ways. In 1755, Immanuel Kant used the term "island Universe" to describe these distant nebulae.
Toward the end of the 18th century, Charles Messier compiled a catalog containing the 109 brightest celestial objects having nebulous appearance. Subsequently, William Herschel assembled a catalog of 5,000 nebulae. In 1845, Lord Rosse constructed a new telescope and was able to distinguish between elliptical and spiral nebulae. He also managed to make out individual point sources in some of these nebulae, lending credence to Kant's earlier conjecture.
In 1912, Vesto Slipher made spectrographic studies of the brightest spiral nebulae to determine their composition. Slipher discovered that the spiral nebulae have high Doppler shifts, indicating that they are moving at a rate exceeding the velocity of the stars he had measured. He found that the majority of these nebulae are moving away from us.
In 1917, Heber Curtis observed nova S Andromedae within the "Great Andromeda Nebula" (as the Andromeda Galaxy, Messier object M31, was then known). Searching the photographic record, he found 11 more novae. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred within our galaxy. As a result, he was able to come up with a distance estimate of 150,000 parsecs. He became a proponent of the so-called "island universes" hypothesis, which holds that spiral nebulae are actually independent galaxies.
In 1920 a debate took place between Harlow Shapley and Heber Curtis (the Great Debate), concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the Universe. To support his claim that the Great Andromeda Nebula is an external galaxy, Curtis noted the appearance of dark lanes resembling the dust clouds in the Milky Way, as well as the significant Doppler shift.
In 1922, the Estonian astronomer Ernst Öpik gave a distance determination that supported the theory that the Andromeda Nebula is indeed a distant extra-galactic object. Using the new 100 inch Mt. Wilson telescope, Edwin Hubble was able to resolve the outer parts of some spiral nebulae as collections of individual stars and identified some Cepheid variables, thus allowing him to estimate the distance to the nebulae: they were far too distant to be part of the Milky Way. In 1936 Hubble produced a classification of galactic morphology that is used to this day.
In 1944, Hendrik van de Hulst predicted that microwave radiation with wavelength of 21 cm would be detectable from interstellar atomic hydrogen gas; and in 1951 it was observed. This radiation is not affected by dust absorption, and so its Doppler shift can be used to map the motion of the gas in our galaxy. These observations led to the hypothesis of a rotating bar structure in the center of our galaxy. With improved radio telescopes, hydrogen gas could also be traced in other galaxies. In the 1970s, Vera Rubin uncovered a discrepancy between observed galactic rotation speed and that predicted by the visible mass of stars and gas. Today, the galaxy rotation problem is thought to be explained by the presence of large quantities of unseen dark matter.
Beginning in the 1990s, the Hubble Space Telescope yielded improved observations. Among other things, Hubble data helped establish that the missing dark matter in our galaxy cannot solely consist of inherently faint and small stars. The Hubble Deep Field, an extremely long exposure of a relatively empty part of the sky, provided evidence that there are about 125 billion (1.25×10^11) galaxies in the observable universe. Improved technology for detecting the spectra invisible to humans (radio telescopes, infrared cameras, and X-ray telescopes) allows detection of other galaxies that are not detected by Hubble. In particular, galaxy surveys in the Zone of Avoidance (the region of the sky blocked at visible-light wavelengths by the Milky Way) have revealed a number of new galaxies.
In 2016, a study published in The Astrophysical Journal and led by Christopher Conselice of the University of Nottingham, using 3D modeling of images collected over 20 years by the Hubble Space Telescope, concluded that there are over 2 trillion (2×10^12) galaxies in the observable universe.
Types and morphology
Galaxies come in three main types: ellipticals, spirals, and irregulars. A slightly more extensive description of galaxy types based on their appearance is given by the Hubble sequence. Since the Hubble sequence is entirely based upon visual morphological type (shape), it may miss certain important characteristics of galaxies such as star formation rate in starburst galaxies and activity in the cores of active galaxies.
The Hubble classification system rates elliptical galaxies on the basis of their ellipticity, ranging from E0, being nearly spherical, up to E7, which is highly elongated. These galaxies have an ellipsoidal profile, giving them an elliptical appearance regardless of the viewing angle. Their appearance shows little structure and they typically have relatively little interstellar matter. Consequently, these galaxies also have a low proportion of open clusters and a reduced rate of new star formation. Instead they are dominated by generally older, more evolved stars that are orbiting the common center of gravity in random directions. The stars contain low abundances of heavy elements because star formation ceases after the initial burst. In this sense they have some similarity to the much smaller globular clusters.
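The E-number in this scheme is conventionally derived from the galaxy's apparent axis ratio, n = 10(1 − b/a). A small sketch of that convention (the cap at E7 reflects the range described above):

```python
def hubble_e_class(a: float, b: float) -> str:
    """Hubble E-type from apparent semi-major (a) and semi-minor (b) axes:
    En with n = 10 * (1 - b/a), capped at E7."""
    n = round(10 * (1 - b / a))
    return f"E{min(n, 7)}"

print(hubble_e_class(1.0, 1.0))   # E0 -- nearly spherical
print(hubble_e_class(1.0, 0.3))   # E7 -- highly elongated
```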
The largest galaxies are giant ellipticals. Many elliptical galaxies are believed to form due to the interaction of galaxies, resulting in a collision and merger. They can grow to enormous sizes (compared to spiral galaxies, for example), and giant elliptical galaxies are often found near the core of large galaxy clusters.
A shell galaxy is a type of elliptical galaxy where the stars in the galaxy's halo are arranged in concentric shells. About one-tenth of elliptical galaxies have a shell-like structure, which has never been observed in spiral galaxies. The shell-like structures are thought to develop when a larger galaxy absorbs a smaller companion galaxy. As the two galaxy centers approach, they begin to oscillate around a center point; the oscillation creates gravitational ripples that form the shells of stars, similar to ripples spreading on water. For example, galaxy NGC 3923 has over twenty shells.
Spiral galaxies resemble spiraling pinwheels. Though the stars and other visible material contained in such a galaxy lie mostly on a plane, the majority of mass in spiral galaxies exists in a roughly spherical halo of dark matter that extends beyond the visible component, as demonstrated by the universal rotation curve concept.
Spiral galaxies consist of a rotating disk of stars and interstellar medium, along with a central bulge of generally older stars. Extending outward from the bulge are relatively bright arms. In the Hubble classification scheme, spiral galaxies are listed as type S, followed by a letter (a, b, or c) that indicates the degree of tightness of the spiral arms and the size of the central bulge. An Sa galaxy has tightly wound, poorly defined arms and possesses a relatively large core region. At the other extreme, an Sc galaxy has open, well-defined arms and a small core region. A galaxy with poorly defined arms is sometimes referred to as a flocculent spiral galaxy, in contrast to the grand design spiral galaxy that has prominent and well-defined spiral arms. The speed at which a galaxy rotates is thought to correlate with the flatness of the disc: some spiral galaxies have thick bulges, while others are thin and dense.
In spiral galaxies, the spiral arms do have the shape of approximate logarithmic spirals, a pattern that can be theoretically shown to result from a disturbance in a uniformly rotating mass of stars. Like the stars, the spiral arms rotate around the center, but they do so with constant angular velocity. The spiral arms are thought to be areas of high-density matter, or "density waves". As stars move through an arm, the space velocity of each stellar system is modified by the gravitational force of the higher density. (The velocity returns to normal after the stars depart on the other side of the arm.) This effect is akin to a "wave" of slowdowns moving along a highway full of moving cars. The arms are visible because the high density facilitates star formation, and therefore they harbor many bright and young stars.
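A logarithmic spiral is one whose radius grows exponentially with azimuthal angle, r(θ) = r0·e^(kθ), so the arm crosses every radius at the same pitch angle. A minimal sketch, using an illustrative pitch angle of about 12° (a value typical of Milky-Way-like spirals, not taken from the text):

```python
import math

def log_spiral_radius(theta: float, r0: float = 1.0, pitch_deg: float = 12.0) -> float:
    """Radius of a logarithmic spiral at angle theta (radians):
    r = r0 * exp(k * theta), with k = tan(pitch angle)."""
    k = math.tan(math.radians(pitch_deg))
    return r0 * math.exp(k * theta)

# Radius after 0, 1 and 2 full turns -- each turn multiplies r by the same factor.
for turn in range(3):
    print(round(log_spiral_radius(turn * 2 * math.pi), 2))   # ~1.0, ~3.8, ~14.5
```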
Barred spiral galaxy
A majority of spiral galaxies, including our own Milky Way galaxy, have a linear, bar-shaped band of stars that extends outward to either side of the core, then merges into the spiral arm structure. In the Hubble classification scheme, these are designated by an SB, followed by a lower-case letter (a, b or c) that indicates the form of the spiral arms (in the same manner as the categorization of normal spiral galaxies). Bars are thought to be temporary structures that can occur as a result of a density wave radiating outward from the core, or else due to a tidal interaction with another galaxy. Many barred spiral galaxies are active, possibly as a result of gas being channeled into the core along the arms.
Our own galaxy, the Milky Way, is a large disk-shaped barred-spiral galaxy about 30 kiloparsecs in diameter and a kiloparsec thick. It contains about two hundred billion (2×10^11) stars and has a total mass of about six hundred billion (6×10^11) times the mass of the Sun.
Recently, researchers described galaxies called super-luminous spirals. They are very large, with diameters of up to 437,000 light-years (compared to the Milky Way's 100,000 light-year diameter). With a mass of 340 billion solar masses, they generate a significant amount of ultraviolet and mid-infrared light. They are thought to have a star formation rate around 30 times that of the Milky Way.
- Peculiar galaxies are galactic formations that develop unusual properties due to tidal interactions with other galaxies.
- A ring galaxy has a ring-like structure of stars and interstellar medium surrounding a bare core. A ring galaxy is thought to occur when a smaller galaxy passes through the core of a spiral galaxy. Such an event may have affected the Andromeda Galaxy, as it displays a multi-ring-like structure when viewed in infrared radiation.
- A lenticular galaxy is an intermediate form that has properties of both elliptical and spiral galaxies. These are categorized as Hubble type S0, and they possess ill-defined spiral arms with an elliptical halo of stars (barred lenticular galaxies receive Hubble classification SB0).
- Irregular galaxies are galaxies that can not be readily classified into an elliptical or spiral morphology.
- An ultra diffuse galaxy (UDG) is an extremely-low-density galaxy. Such a galaxy may be the same size as the Milky Way but have a visible star count of only 1% of the Milky Way's. Its low luminosity reflects a lack of star-forming gas, which leaves the galaxy with an old stellar population.
Despite the prominence of large elliptical and spiral galaxies, most galaxies in the Universe are dwarf galaxies. These galaxies are relatively small when compared with other galactic formations, being about one hundredth the size of the Milky Way, containing only a few billion stars. Ultra-compact dwarf galaxies have recently been discovered that are only 100 parsecs across.
Many dwarf galaxies may orbit a single larger galaxy; the Milky Way has at least a dozen such satellites, with an estimated 300–500 yet to be discovered. Dwarf galaxies may also be classified as elliptical, spiral, or irregular. Since small dwarf ellipticals bear little resemblance to large ellipticals, they are often called dwarf spheroidal galaxies instead.
A study of 27 Milky Way neighbors found that in all dwarf galaxies, the central mass is approximately 10 million solar masses, regardless of whether the galaxy has thousands or millions of stars. This has led to the suggestion that galaxies are largely formed by dark matter, and that the minimum size may indicate a form of warm dark matter incapable of gravitational coalescence on a smaller scale.
Other types of galaxies
Interactions between galaxies are relatively frequent, and they can play an important role in galactic evolution. Near misses between galaxies result in warping distortions due to tidal interactions, and may cause some exchange of gas and dust. Collisions occur when two galaxies pass directly through each other and have sufficient relative momentum not to merge. The stars of interacting galaxies will usually not collide, but the gas and dust within the two forms will interact, sometimes triggering star formation. A collision can severely distort the shape of the galaxies, forming bars, rings or tail-like structures.
At the extreme of interactions are galactic mergers. In this case the relative momentum of the two galaxies is insufficient to allow the galaxies to pass through each other. Instead, they gradually merge to form a single, larger galaxy. Mergers can result in significant changes to morphology, as compared to the original galaxies. If one of the merging galaxies is much more massive than the other, the result is known as cannibalism: the more massive galaxy remains relatively undisturbed by the merger, while the smaller galaxy is torn apart. The Milky Way galaxy is currently in the process of cannibalizing the Sagittarius Dwarf Elliptical Galaxy and the Canis Major Dwarf Galaxy.
Stars are created within galaxies from a reserve of cold gas that forms into giant molecular clouds. Some galaxies have been observed to form stars at an exceptional rate, which is known as a starburst. Were they to continue doing so, they would consume their reserve of gas in a time span shorter than the lifespan of the galaxy. Hence starburst activity usually lasts for only about ten million years, a relatively brief period in the history of a galaxy. Starburst galaxies were more common during the early history of the Universe, and, at present, still contribute an estimated 15% to the total star production rate.
Starburst galaxies are characterized by dusty concentrations of gas and the appearance of newly formed stars, including massive stars that ionize the surrounding clouds to create H II regions. These massive stars produce supernova explosions, resulting in expanding remnants that interact powerfully with the surrounding gas. These outbursts trigger a chain reaction of star building that spreads throughout the gaseous region. Only when the available gas is nearly consumed or dispersed does the starburst activity end.
Starbursts are often associated with merging or interacting galaxies. The prototype example of such a starburst-forming interaction is M82, which experienced a close encounter with the larger M81. Irregular galaxies often exhibit spaced knots of starburst activity.
Some observable galaxies are classified as active galaxies if they contain an active galactic nucleus (AGN). In these galaxies, a significant portion of the total energy output is emitted by the active galactic nucleus rather than by the stars, dust, and interstellar medium of the galaxy.
The standard model for an active galactic nucleus is based upon an accretion disc that forms around a supermassive black hole (SMBH) at the core region of the galaxy. The radiation from an active galactic nucleus results from the gravitational energy of matter as it falls toward the black hole from the disc. In about 10% of these galaxies, a diametrically opposed pair of energetic jets ejects particles from the galaxy core at velocities close to the speed of light. The mechanism for producing these jets is not well understood.
- Seyfert galaxies and quasars, which are distinguished by their luminosity, are active galaxies that emit high-energy radiation in the form of X-rays.
A blazar is believed to be an active galaxy with a relativistic jet that is pointed in the direction of Earth. A radio galaxy emits radio frequencies from relativistic jets. A unified model of these types of active galaxies explains their differences based on the viewing angle of the observer.
Possibly related to active galactic nuclei (as well as starburst regions) are low-ionization nuclear emission-line regions (LINERs). The emission from LINER-type galaxies is dominated by weakly ionized elements. The excitation sources for the weakly ionized lines include post-AGB stars, AGN, and shocks. Approximately one-third of nearby galaxies are classified as containing LINER nuclei.
Seyfert galaxies are one of the two largest groups of active galaxies, along with quasars. They have quasar-like nuclei (very luminous, distant and bright sources of electromagnetic radiation) with very high surface brightnesses but unlike quasars, their host galaxies are clearly detectable. Seyfert galaxies account for about 10% of all galaxies. Seen in visible light, most Seyfert galaxies look like normal spiral galaxies, but when studied under other wavelengths, the luminosity of their cores is equivalent to the luminosity of whole galaxies the size of the Milky Way.
Quasars (/ˈkweɪzɑr/) or quasi-stellar radio sources are the most energetic and distant members of active galactic nuclei. Quasars are extremely luminous and were first identified as being high redshift sources of electromagnetic energy, including radio waves and visible light, that appeared to be similar to stars, rather than extended sources similar to galaxies. Their luminosity can be 100 times greater than that of the Milky Way.
Luminous infrared galaxy
Luminous infrared galaxies or LIRGs are galaxies with luminosities (a measure of brightness) above 10^11 L☉. LIRGs are more abundant than starburst galaxies, Seyfert galaxies and quasi-stellar objects at comparable total luminosity. Infrared galaxies emit more energy in the infrared than at all other wavelengths combined. A LIRG's luminosity is thus at least 100 billion times that of our Sun.
Galaxies have magnetic fields of their own. They are strong enough to be dynamically important: they drive mass inflow into the centers of galaxies, they modify the formation of spiral arms and they can affect the rotation of gas in the outer regions of galaxies. Magnetic fields provide the transport of angular momentum required for the collapse of gas clouds and hence the formation of new stars.
The typical average equipartition strength for spiral galaxies is about 10 μG (microgauss) or 1 nT (nanotesla). For comparison, the Earth's magnetic field has an average strength of about 0.3 G (gauss), or 30 μT (microtesla). Radio-faint galaxies like M 31 and M 33, our Milky Way's neighbors, have weaker fields (about 5 μG), while gas-rich galaxies with high star-formation rates, like M 51, M 83 and NGC 6946, have 15 μG on average. In prominent spiral arms the field strength can be up to 25 μG, in regions where cold gas and dust are also concentrated. The strongest total equipartition fields (50–100 μG) were found in starburst galaxies, for example in M 82 and the Antennae, and in nuclear starburst regions, for example in the centers of NGC 1097 and of other barred galaxies.
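The gauss and tesla figures above are the same field strengths expressed in two unit systems (1 G = 10^-4 T). A small conversion check:

```python
GAUSS_TO_TESLA = 1e-4    # 1 gauss = 1e-4 tesla

def gauss_to_tesla(b_gauss: float) -> float:
    return b_gauss * GAUSS_TO_TESLA

print(gauss_to_tesla(10e-6))   # 10 microgauss -> 1e-09 T, i.e. 1 nanotesla
print(gauss_to_tesla(0.3))     # Earth's mean field -> 3e-05 T, i.e. 30 microtesla
```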
Formation and evolution
Galactic formation and evolution is an active area of research in astrophysics.
Current cosmological models of the early Universe are based on the Big Bang theory. About 300,000 years after this event, atoms of hydrogen and helium began to form, in an event called recombination. Nearly all the hydrogen was neutral (non-ionized) and readily absorbed light, and no stars had yet formed. As a result, this period has been called the "dark ages". It was from density fluctuations (or anisotropic irregularities) in this primordial matter that larger structures began to appear. As a result, masses of baryonic matter started to condense within cold dark matter halos. These primordial structures would eventually become the galaxies we see today.
Evidence for the early appearance of galaxies was found in 2006, when it was discovered that the galaxy IOK-1 has an unusually high redshift of 6.96, corresponding to just 750 million years after the Big Bang and making it the most distant and primordial galaxy yet seen. While some scientists have claimed other objects (such as Abell 1835 IR1916) have higher redshifts (and therefore are seen in an earlier stage of the Universe's evolution), IOK-1's age and composition have been more reliably established. In December 2012, astronomers reported that UDFj-39546284 is the most distant object known, with a redshift value of 11.9. The object, estimated to have existed around 380 million years after the Big Bang (which was about 13.8 billion years ago), is about 13.42 billion light-years away in light-travel distance. The existence of such early protogalaxies suggests that they must have grown in the so-called "dark ages". As of May 5, 2015, the galaxy EGS-zs8-1 is the most distant and earliest galaxy measured, forming 670 million years after the Big Bang. The light from EGS-zs8-1 took 13 billion years to reach Earth; because of the expansion of the universe over that time, the galaxy is now about 30 billion light-years away.
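The quoted ages (roughly 750, 670, and 380 million years after the Big Bang) can be sanity-checked from the redshifts with a standard cosmology package. A sketch using astropy's built-in Planck 2015 parameters; astropy is an assumed dependency, the z = 7.73 value for EGS-zs8-1 is taken from the spectroscopic measurement cited in the references, and the exact numbers shift slightly with the adopted cosmological parameters:

```python
# Rough check of the ages quoted above, using astropy's Planck 2015 cosmology.
from astropy.cosmology import Planck15

redshifts = {"IOK-1": 6.96, "EGS-zs8-1": 7.73, "UDFj-39546284": 11.9}

for name, z in redshifts.items():
    age_myr = Planck15.age(z).to("Myr").value    # age of the universe at that redshift
    print(f"{name} (z={z}): ~{age_myr:.0f} Myr after the Big Bang")
```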
Early galaxy formation
The detailed process by which early galaxies formed is an open question in astrophysics. Theories can be divided into two categories: top-down and bottom-up. In top-down theories (such as the Eggen–Lynden-Bell–Sandage [ELS] model), protogalaxies form in a large-scale simultaneous collapse lasting about one hundred million years. In bottom-up theories (such as the Searle–Zinn [SZ] model), small structures such as globular clusters form first, and then a number of such bodies accrete to form a larger galaxy.
Once protogalaxies began to form and contract, the first halo stars (called Population III stars) appeared within them. These were composed almost entirely of hydrogen and helium, and may have been massive. If so, these huge stars would have quickly consumed their supply of fuel and become supernovae, releasing heavy elements into the interstellar medium. This first generation of stars re-ionized the surrounding neutral hydrogen, creating expanding bubbles of space through which light could readily travel.
In June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy at z = 6.60. Such stars are likely to have existed in the very early universe (i.e., at high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life as we know it.
Within a billion years of a galaxy's formation, key structures begin to appear. Globular clusters, the central supermassive black hole, and a galactic bulge of metal-poor Population II stars form. The creation of a supermassive black hole appears to play a key role in actively regulating the growth of galaxies by limiting the total amount of additional matter added. During this early epoch, galaxies undergo a major burst of star formation.
During the following two billion years, the accumulated matter settles into a galactic disc. A galaxy will continue to absorb infalling material from high-velocity clouds and dwarf galaxies throughout its life. This matter is mostly hydrogen and helium. The cycle of stellar birth and death slowly increases the abundance of heavy elements, eventually allowing the formation of planets.
The evolution of galaxies can be significantly affected by interactions and collisions. Mergers of galaxies were common during the early epoch, and the majority of galaxies were peculiar in morphology. Given the distances between the stars, the great majority of stellar systems in colliding galaxies will be unaffected. However, gravitational stripping of the interstellar gas and dust that makes up the spiral arms produces a long train of stars known as tidal tails. Examples of these formations can be seen in NGC 4676 or the Antennae Galaxies.
The Milky Way galaxy and the nearby Andromeda Galaxy are moving toward each other at about 130 km/s, and—depending upon the lateral movements—the two might collide in about five to six billion years. Although the Milky Way has never collided with a galaxy as large as Andromeda before, evidence of past collisions of the Milky Way with smaller dwarf galaxies is increasing.
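The five-to-six-billion-year figure is roughly what a straight-line estimate gives. A minimal sketch, assuming a present Milky Way–Andromeda separation of about 780 kpc (a commonly quoted value that is not stated in the text); gravity accelerates the approach, so detailed models predict a somewhat earlier encounter:

```python
PC_IN_KM = 3.086e13          # kilometres per parsec
SECONDS_PER_YEAR = 3.156e7

separation_km = 780_000 * PC_IN_KM    # assumed separation: ~780 kpc
approach_speed_km_s = 130.0           # approach speed quoted above

time_gyr = separation_km / approach_speed_km_s / SECONDS_PER_YEAR / 1e9
print(f"~{time_gyr:.1f} billion years")   # ~5.9, consistent with "five to six billion years"
```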
Such large-scale interactions are rare. As time passes, mergers of two systems of equal size become less common. Most bright galaxies have remained fundamentally unchanged for the last few billion years, and the net rate of star formation probably also peaked approximately ten billion years ago.
Spiral galaxies, like the Milky Way, produce new generations of stars as long as they have dense molecular clouds of interstellar hydrogen in their spiral arms. Elliptical galaxies are largely devoid of this gas, and so form few new stars. The supply of star-forming material is finite; once stars have converted the available supply of hydrogen into heavier elements, new star formation will come to an end.
The current era of star formation is expected to continue for up to one hundred billion years, and then the "stellar age" will wind down after about ten trillion to one hundred trillion years (10^13–10^14 years), as the smallest, longest-lived stars in our universe, tiny red dwarfs, begin to fade. At the end of the stellar age, galaxies will be composed of compact objects: brown dwarfs, white dwarfs that are cooling or cold ("black dwarfs"), neutron stars, and black holes. Eventually, as a result of gravitational relaxation, all stars will either fall into central supermassive black holes or be flung into intergalactic space as a result of collisions.
Deep sky surveys show that galaxies are often found in groups and clusters. Solitary galaxies that have not significantly interacted with another galaxy of comparable mass during the past billion years are relatively scarce. Only about 5% of the galaxies surveyed have been found to be truly isolated; however, these isolated formations may have interacted and even merged with other galaxies in the past, and may still be orbited by smaller, satellite galaxies. Isolated galaxies[note 2] can produce stars at a higher rate than normal, as their gas is not being stripped by other nearby galaxies.
On the largest scale, the Universe is continually expanding, resulting in an average increase in the separation between individual galaxies (see Hubble's law). Associations of galaxies can overcome this expansion on a local scale through their mutual gravitational attraction. These associations formed early in the Universe, as clumps of dark matter pulled their respective galaxies together. Nearby groups later merged to form larger-scale clusters. This on-going merger process (as well as an influx of infalling gas) heats the inter-galactic gas within a cluster to very high temperatures, reaching 30–100 megakelvins. About 70–80% of the mass in a cluster is in the form of dark matter, with 10–30% consisting of this heated gas and the remaining few percent of the matter in the form of galaxies.
Most galaxies in the Universe are gravitationally bound to a number of other galaxies. These form a fractal-like hierarchical distribution of clustered structures, with the smallest such associations being termed groups. A group of galaxies is the most common type of galactic cluster, and these formations contain a majority of the galaxies (as well as most of the baryonic mass) in the Universe. To remain gravitationally bound to such a group, each member galaxy must have a sufficiently low velocity to prevent it from escaping (see Virial theorem). If there is insufficient kinetic energy, however, the group may evolve into a smaller number of galaxies through mergers.
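For orientation, the virial theorem gives an order-of-magnitude mass for such a bound group, M ~ σ²R/G. A sketch with illustrative numbers (a velocity dispersion of ~150 km/s and a radius of ~1 Mpc, neither taken from the text; the numerical prefactor depends on the assumed mass profile):

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22      # metres per megaparsec
SOLAR_MASS_KG = 1.989e30

sigma_m_s = 150e3        # illustrative line-of-sight velocity dispersion
radius_m = 1.0 * MPC_IN_M

virial_mass_kg = sigma_m_s**2 * radius_m / G    # order-of-magnitude estimate
print(f"~{virial_mass_kg / SOLAR_MASS_KG:.0e} solar masses")   # ~5e12
```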
Clusters of galaxies consist of hundreds to thousands of galaxies bound together by gravity. Clusters of galaxies are often dominated by a single giant elliptical galaxy, known as the brightest cluster galaxy, which, over time, tidally destroys its satellite galaxies and adds their mass to its own.
Superclusters contain tens of thousands of galaxies, which are found in clusters, groups and sometimes individually. At the supercluster scale, galaxies are arranged into sheets and filaments surrounding vast empty voids. Above this scale, the Universe appears to be the same in all directions (isotropic and homogeneous).
The Milky Way galaxy is a member of an association named the Local Group, a relatively small group of galaxies that has a diameter of approximately one megaparsec. The Milky Way and the Andromeda Galaxy are the two brightest galaxies within the group; many of the other member galaxies are dwarf companions of these two galaxies. The Local Group itself is a part of a cloud-like structure within the Virgo Supercluster, a large, extended structure of groups and clusters of galaxies centered on the Virgo Cluster. And the Virgo Supercluster itself is a part of the Pisces-Cetus Supercluster Complex, a giant galaxy filament.
The peak radiation of most stars lies in the visible spectrum, so the observation of the stars that form galaxies has been a major component of optical astronomy. It is also a favorable portion of the spectrum for observing ionized H II regions, and for examining the distribution of dusty arms.
The dust present in the interstellar medium is opaque to visual light. It is more transparent to far-infrared, which can be used to observe the interior regions of giant molecular clouds and galactic cores in great detail. Infrared is also used to observe distant, red-shifted galaxies that were formed much earlier in the history of the Universe. Water vapor and carbon dioxide absorb a number of useful portions of the infrared spectrum, so high-altitude or space-based telescopes are used for infrared astronomy.
The first non-visual study of galaxies, particularly active galaxies, was made using radio frequencies. The Earth's atmosphere is nearly transparent to radio between 5 MHz and 30 GHz. (The ionosphere blocks signals below this range.) Large radio interferometers have been used to map the active jets emitted from active nuclei. Radio telescopes can also be used to observe neutral hydrogen (via 21 cm radiation), including, potentially, the non-ionized matter in the early Universe that later collapsed to form galaxies.
Ultraviolet and X-ray telescopes can observe highly energetic galactic phenomena. Ultraviolet flares are sometimes observed when a star in a distant galaxy is torn apart by the tidal forces of a nearby black hole. The distribution of hot gas in galactic clusters can be mapped by X-rays. The existence of supermassive black holes at the cores of galaxies was confirmed through X-ray astronomy.
- Galaxies to the left side of the Hubble classification scheme are sometimes referred to as "early-type", while those to the right are "late-type".
- The term "field galaxy" is sometimes used to mean an isolated galaxy, although the same term is also used to describe galaxies that do not belong to a cluster but may be a member of a group of galaxies.
- Sparke & Gallagher III 2000, p. i
- Hupp, E.; Roy, S.; Watzke, M. (August 12, 2006). "NASA Finds Direct Proof of Dark Matter". NASA. Retrieved April 17, 2007.
- Uson, J. M.; Boughn, S. P.; Kuhn, J. R. (1990). "The central galaxy in Abell 2029 – An old supergiant". Science. 250 (4980): 539–540. Bibcode:1990Sci...250..539U. doi:10.1126/science.250.4980.539. PMID 17751483.
- Hoover, A. (June 16, 2003). "UF Astronomers: Universe Slightly Simpler Than Expected". Hubble News Desk. Archived from the original on July 20, 2011. Retrieved March 4, 2011. Based upon:
- Graham, A. W.; Guzman, R. (2003). "HST Photometry of Dwarf Elliptical Galaxies in Coma, and an Explanation for the Alleged Structural Dichotomy between Dwarf and Bright Elliptical Galaxies". The Astronomical Journal. 125 (6): 2936–2950. arXiv:astro-ph/0303391. Bibcode:2003AJ....125.2936G. doi:10.1086/374992.
- Jarrett, T. H. "Near-Infrared Galaxy Morphology Atlas". California Institute of Technology. Retrieved January 9, 2007.
- Finley, D.; Aguilar, D. (November 2, 2005). "Astronomers Get Closest Look Yet At Milky Way's Mysterious Core". National Radio Astronomy Observatory. Retrieved August 10, 2006.
- Gott III, J. R.; et al. (2005). "A Map of the Universe". The Astrophysical Journal. 624 (2): 463–484. arXiv:astro-ph/0310571. Bibcode:2005ApJ...624..463G. doi:10.1086/428890.
- Christopher J. Conselice; et al. (2016). "The Evolution of Galaxy Number Density at z < 8 and its Implications". The Astrophysical Journal. 830 (2): 83. arXiv:1607.03909v2. Bibcode:2016ApJ...830...83C. doi:10.3847/0004-637X/830/2/83.
- Fountain, Henry (October 17, 2016). "Two Trillion Galaxies, at the Very Least". The New York Times. Retrieved October 17, 2016.
- Mackie, Glen (February 1, 2002). "To see the Universe in a Grain of Taranaki Sand". Centre for Astrophysics and Supercomputing. Retrieved January 28, 2017.
- "Galaxy Clusters and Large-Scale Structure". University of Cambridge. Retrieved January 15, 2007.
- Gibney, Elizabeth (2014). "Earth's new address: 'Solar System, Milky Way, Laniakea'". Nature. doi:10.1038/nature.2014.15819.
- Harper, D. "galaxy". Online Etymology Dictionary. Retrieved November 11, 2011.
- Waller & Hodge 2003, p. 91
- Konečný, Lubomír. "Emblematics, Agriculture, and Mythography in The Origin of the Milky Way" (PDF). Academy of Sciences of the Czech Republic. Archived from the original (PDF) on July 20, 2006. Retrieved January 5, 2007.
- Rao, J. (September 2, 2005). "Explore the Archer's Realm". Space.com. Retrieved January 3, 2007.
- Plutarch (2006). The Complete Works Volume 3: Essays and Miscellanies. Chapter 3: Echo Library. p. 66. ISBN 978-1-4068-3224-2.
- Montada, J. P. (September 28, 2007). "Ibn Bâjja". Stanford Encyclopedia of Philosophy. Retrieved July 11, 2008.
- Heidarzadeh 2008, pp. 23–25
- Mohamed 2000, pp. 49–50
- Bouali, H.-E.; Zghal, M.; Lakhdar, Z. B. (2005). "Popularisation of Optical Phenomena: Establishing the First Ibn Al-Haytham Workshop on Photography" (PDF). The Education and Training in Optics and Photonics Conference. Retrieved July 8, 2008.
- O'Connor, John J.; Robertson, Edmund F., "Abu Arrayhan Muhammad ibn Ahmad al-Biruni", MacTutor History of Mathematics archive, University of St Andrews.
- Al-Biruni 2004, p. 87
- Heidarzadeh 2008, p. 25, Table 2.1
- Livingston, J. W. (1971). "Ibn Qayyim al-Jawziyyah: A Fourteenth Century Defense against Astrological Divination and Alchemical Transmutation". Journal of the American Oriental Society. 91 (1): 96–103. doi:10.2307/600445. JSTOR 600445.
- Galileo Galilei, Sidereus Nuncius (Venice, (Italy): Thomas Baglioni, 1610), pages 15 and 16.
English translation: Galileo Galilei with Edward Stafford Carlos, trans., The Sidereal Messenger (London, England: Rivingtons, 1880), pages 42 and 43.
- O'Connor, J. J.; Robertson, E. F. (November 2002). "Galileo Galilei". University of St. Andrews. Retrieved January 8, 2007.
- Thomas Wright, An Original Theory or New Hypothesis of the Universe … (London, England: H. Chapelle, 1750). From p.48: " … the stars are not infinitely dispersed and distributed in a promiscuous manner throughout all the mundane space, without order or design, … this phænomenon [is] no other than a certain effect arising from the observer's situation, … To a spectator placed in an indefinite space, … it [i.e., the Milky Way (Via Lactea)] [is] a vast ring of stars … "
On page 73, Wright called the Milky Way the Vortex Magnus (the great whirlpool) and estimated its diameter at 8.64×10^12 miles (13.9×10^12 km).
- Evans, J. C. (November 24, 1998). "Our Galaxy". George Mason University. Archived from the original on June 30, 2012. Retrieved January 4, 2007.
Immanuel Kant, Allgemeine Naturgeschichte und Theorie des Himmels … [Universal Natural History and Theory of the Heavens … ], (Königsberg and Leipzig, (Germany): Johann Friederich Petersen, 1755).
Available in English translation by Ian Johnston at: Vancouver Island University, British Columbia, Canada. Archived August 29, 2014, at the Wayback Machine.
- William Herschel (1785). "XII. On the construction of the heavens". Giving Some Accounts of the Present Undertakings, Studies, and Labours, of the Ingenious, in Many Considerable Parts of the World. Philosophical Transactions of the Royal Society of London. vol. 75. London. pp. 213–266. doi:10.1098/rstl.1785.0012. ISSN 0261-0523. Herschel's diagram of the galaxy appears immediately after the article's last page.
- Paul 1993, pp. 16–18
- Trimble, V. (1999). "Robert Trumpler and the (Non)transparency of Space". Bulletin of the American Astronomical Society. 31 (31): 1479. Bibcode:1999AAS...195.7409T.
- Kepple & Sanner 1998, p. 18
- "The Large Magellanic Cloud, LMC". Observatoire de Paris. March 11, 2004. Archived from the original on June 22, 2017.
- "Abd-al-Rahman Al Sufi (December 7, 903 – May 25, 986 A.D.)". Observatoire de Paris. Retrieved April 19, 2007.
- Gordon, Kurtiss J. "History of our Understanding of a Spiral Galaxy: Messier 33". Caltech.edu. Retrieved June 11, 2018.
- See text quoted from Wright's An original theory or new hypothesis of the Universe in Dyson, F. (1979). Disturbing the Universe. Pan Books. p. 245. ISBN 978-0-330-26324-5.
- "Parsonstown | The genius of the Parsons family | William Rosse". parsonstown.info.
- Slipher, V. M. (1913). "The radial velocity of the Andromeda Nebula". Lowell Observatory Bulletin. 1: 56–57. Bibcode:1913LowOB...2...56S.
- Slipher, V. M. (1915). "Spectrographic Observations of Nebulae". Popular Astronomy. Vol. 23. pp. 21–24. Bibcode:1915PA.....23...21S.
- Curtis, H. D. (1988). "Novae in Spiral Nebulae and the Island Universe Theory". Publications of the Astronomical Society of the Pacific. 100: 6. Bibcode:1988PASP..100....6C. doi:10.1086/132128.
- Weaver, H. F. "Robert Julius Trumpler". US National Academy of Sciences. Retrieved January 5, 2007.
- Öpik, E. (1922). "An estimate of the distance of the Andromeda Nebula". The Astrophysical Journal. 55: 406. Bibcode:1922ApJ....55..406O. doi:10.1086/142680.
- Hubble, E. P. (1929). "A spiral nebula as a stellar system, Messier 31". The Astrophysical Journal. 69: 103–158. Bibcode:1929ApJ....69..103H. doi:10.1086/143167.
- Sandage, A. (1989). "Edwin Hubble, 1889–1953". Journal of the Royal Astronomical Society of Canada. 83 (6): 351–362. Bibcode:1989JRASC..83..351S. Retrieved January 8, 2007.
- Tenn, J. "Hendrik Christoffel van de Hulst". Sonoma State University. Retrieved January 5, 2007.
- López-Corredoira, M.; et al. (2001). "Searching for the in-plane Galactic bar and ring in DENIS". Astronomy and Astrophysics. 373 (1): 139–152. arXiv:astro-ph/0104307. Bibcode:2001A&A...373..139L. doi:10.1051/0004-6361:20010560.
- Rubin, V. C. (1983). "Dark matter in spiral galaxies". Scientific American. Vol. 248 no. 6. pp. 96–106. Bibcode:1983SciAm.248f..96R. doi:10.1038/scientificamerican0683-96.
- Rubin, V. C. (2000). "One Hundred Years of Rotating Galaxies". Publications of the Astronomical Society of the Pacific. 112 (772): 747–750. Bibcode:2000PASP..112..747R. doi:10.1086/316573.
- "Observable Universe contains ten times more galaxies than previously thought". www.spacetelescope.org. Retrieved October 17, 2016.
- "Hubble Rules Out a Leading Explanation for Dark Matter". Hubble News Desk. October 17, 1994. Retrieved January 8, 2007.
- "How many galaxies are there?". NASA. November 27, 2002. Retrieved January 8, 2007.
- Kraan-Korteweg, R. C.; Juraszek, S. (2000). "Mapping the hidden Universe: The galaxy distribution in the Zone of Avoidance". Publications of the Astronomical Society of Australia. 17 (1): 6–12. arXiv:astro-ph/9910572. Bibcode:2000PASA...17....6K. doi:10.1071/AS00006.
- "Universe has two trillion galaxies, astronomers say". The Guardian. October 13, 2016. Retrieved October 14, 2016.
- "The Universe Has 10 Times More Galaxies Than Scientists Thought". space.com. October 13, 2016. Retrieved October 14, 2016.
- Barstow, M. A. (2005). "Elliptical Galaxies". Leicester University Physics Department. Archived from the original on July 29, 2012. Retrieved June 8, 2006.
- "Galaxies". Cornell University. October 20, 2005. Archived from the original on June 29, 2014. Retrieved August 10, 2006.
- "Galactic onion". www.spacetelescope.org. Retrieved May 11, 2015.
- Williams, M. J.; Bureau, M.; Cappellari, M. (2010). "Kinematic constraints on the stellar and dark matter content of spiral and S0 galaxies". Monthly Notices of the Royal Astronomical Society. 400 (4): 1665–1689. arXiv:0909.0680. Bibcode:2009MNRAS.400.1665W. doi:10.1111/j.1365-2966.2009.15582.x.
- Smith, G. (March 6, 2000). "Galaxies — The Spiral Nebulae". University of California, San Diego Center for Astrophysics & Space Sciences. Archived from the original on July 10, 2012. Retrieved November 30, 2006.
- Van den Bergh 1998, p. 17
- "Fat or flat: Getting galaxies into shape". phys.org. February 2014
- Bertin & Lin 1996, pp. 65–85
- Belkora 2003, p. 355
- Eskridge, P. B.; Frogel, J. A. (1999). "What is the True Fraction of Barred Spiral Galaxies?". Astrophysics and Space Science. 269/270: 427–430. Bibcode:1999Ap&SS.269..427E. doi:10.1023/A:1017025820201.
- Bournaud, F.; Combes, F. (2002). "Gas accretion on spiral galaxies: Bar formation and renewal". Astronomy and Astrophysics. 392 (1): 83–102. arXiv:astro-ph/0206273. Bibcode:2002A&A...392...83B. doi:10.1051/0004-6361:20020920.
- Knapen, J. H.; Perez-Ramirez, D.; Laine, S. (2002). "Circumnuclear regions in barred spiral galaxies — II. Relations to host galaxies". Monthly Notices of the Royal Astronomical Society. 337 (3): 808–828. arXiv:astro-ph/0207258. Bibcode:2002MNRAS.337..808K. doi:10.1046/j.1365-8711.2002.05840.x.
- Alard, C. (2001). "Another bar in the Bulge". Astronomy and Astrophysics Letters. 379 (2): L44–L47. arXiv:astro-ph/0110491. Bibcode:2001A&A...379L..44A. doi:10.1051/0004-6361:20011487.
- Sanders, R. (January 9, 2006). "Milky Way galaxy is warped and vibrating like a drum". UCBerkeley News. Retrieved May 24, 2006.
- Bell, G. R.; Levine, S. E. (1997). "Mass of the Milky Way and Dwarf Spheroidal Stream Membership". Bulletin of the American Astronomical Society. 29 (2): 1384. Bibcode:1997AAS...19110806B.
- "We Just Discovered a New Type of Colossal Galaxy". Futurism. March 21, 2016. Retrieved March 21, 2016.
- Ogle, Patrick M.; Lanz, Lauranne; Nader, Cyril; Helou, George (January 1, 2016). "Superluminous Spiral Galaxies". The Astrophysical Journal. 817 (2): 109. arXiv:1511.00659. Bibcode:2016ApJ...817..109O. doi:10.3847/0004-637X/817/2/109. ISSN 0004-637X.
- Gerber, R. A.; Lamb, S. A.; Balsara, D. S. (1994). "Ring Galaxy Evolution as a Function of "Intruder" Mass". Bulletin of the American Astronomical Society. 26: 911. Bibcode:1994AAS...184.3204G.
- "ISO unveils the hidden rings of Andromeda" (Press release). European Space Agency. October 14, 1998. Archived from the original on August 28, 1999. Retrieved May 24, 2006.
- "Spitzer Reveals What Edwin Hubble Missed". Harvard-Smithsonian Center for Astrophysics. May 31, 2004. Archived from the original on September 7, 2006. Retrieved December 6, 2006.
- Barstow, M. A. (2005). "Irregular Galaxies". University of Leicester. Archived from the original on February 27, 2012. Retrieved December 5, 2006.
- Phillipps, S.; Drinkwater, M. J.; Gregg, M. D.; Jones, J. B. (2001). "Ultracompact Dwarf Galaxies in the Fornax Cluster". The Astrophysical Journal. 560 (1): 201–206. arXiv:astro-ph/0106377. Bibcode:2001ApJ...560..201P. doi:10.1086/322517.
- Groshong, K. (April 24, 2006). "Strange satellite galaxies revealed around Milky Way". New Scientist. Retrieved January 10, 2007.
- Schirber, M. (August 27, 2008). "No Slimming Down for Dwarf Galaxies". ScienceNOW. Retrieved August 27, 2008.
- "Galaxy Interactions". University of Maryland Department of Astronomy. Archived from the original on May 9, 2006. Retrieved December 19, 2006.
- "Interacting Galaxies". Swinburne University. Retrieved December 19, 2006.
- "Happy Sweet Sixteen, Hubble Telescope!". NASA. April 24, 2006. Retrieved August 10, 2006.
- "Starburst Galaxies". Harvard-Smithsonian Center for Astrophysics. August 29, 2006. Retrieved August 10, 2006.
- Kennicutt Jr., R. C.; et al. (2005). Demographics and Host Galaxies of Starbursts. Starbursts: From 30 Doradus to Lyman Break Galaxies. Springer. p. 187. Bibcode:2005ASSL..329..187K. doi:10.1007/1-4020-3539-X_33.
- Smith, G. (July 13, 2006). "Starbursts & Colliding Galaxies". University of California, San Diego Center for Astrophysics & Space Sciences. Archived from the original on July 7, 2012. Retrieved August 10, 2006.
- Keel, B. (September 2006). "Starburst Galaxies". University of Alabama. Retrieved December 11, 2006.
- Keel, W. C. (2000). "Introducing Active Galactic Nuclei". University of Alabama. Retrieved December 6, 2006.
- Lochner, J.; Gibb, M. "A Monster in the Middle". NASA. Retrieved December 20, 2006.
- Heckman, T. M. (1980). "An optical and radio survey of the nuclei of bright galaxies — Activity in normal galactic nuclei". Astronomy and Astrophysics. 87: 152–164. Bibcode:1980A&A....87..152H.
- Ho, L. C.; Filippenko, A. V.; Sargent, W. L. W. (1997). "A Search for "Dwarf" Seyfert Nuclei. V. Demographics of Nuclear Activity in Nearby Galaxies". The Astrophysical Journal. 487 (2): 568–578. arXiv:astro-ph/9704108. Bibcode:1997ApJ...487..568H. doi:10.1086/304638.
- Beck, Rainer (2007). "Galactic magnetic fields". Scholarpedia. 2. p. 2411. Bibcode:2007SchpJ...2.2411B. doi:10.4249/scholarpedia.2411. Retrieved November 5, 2015.
- "Construction Secrets of a Galactic Metropolis". www.eso.org. ESO Press Release. Retrieved October 15, 2014.
- "Protogalaxies". Harvard-Smithsonian Center for Astrophysics. November 18, 1999. Archived from the original on March 25, 2008. Retrieved January 10, 2007.
- Firmani, C.; Avila-Reese, V. (2003). "Physical processes behind the morphological Hubble sequence". Revista Mexicana de Astronomía y Astrofísica. 17: 107–120. arXiv:astro-ph/0303543. Bibcode:2003RMxAC..17..107F.
- McMahon, R. (2006). "Astronomy: Dawn after the dark age". Nature. 443 (7108): 151–2. Bibcode:2006Natur.443..151M. doi:10.1038/443151a. PMID 16971933.
- Wall, Mike (December 12, 2012). "Ancient Galaxy May Be Most Distant Ever Seen". Space.com. Retrieved December 12, 2012.
- "Cosmic Detectives". The European Space Agency (ESA). April 2, 2013. Retrieved April 15, 2013.
- "HubbleSite – NewsCenter – Astronomers Set a New Galaxy Distance Record (05/05/2015) – Introduction". hubblesite.org. Retrieved May 7, 2015.
- "This Galaxy Far, Far Away Is the Farthest One Yet Found". Retrieved May 7, 2015.
- "Astronomers unveil the farthest galaxy". Retrieved May 7, 2015.
- Overbye, Dennis (May 5, 2015). "Astronomers Measure Distance to Farthest Galaxy Yet". The New York Times. ISSN 0362-4331. Retrieved May 7, 2015.
- Oesch, P. A.; van Dokkum, P. G.; Illingworth, G. D.; Bouwens, R. J.; Momcheva, I.; Holden, B.; Roberts-Borsani, G. W.; Smit, R.; Franx, M. (February 18, 2015). "A Spectroscopic Redshift Measurement for a Luminous Lyman Break Galaxy at z=7.730 using Keck/MOSFIRE". The Astrophysical Journal. 804 (2): L30. arXiv:1502.05399. Bibcode:2015ApJ...804L..30O. doi:10.1088/2041-8205/804/2/L30.
- "Signatures of the Earliest Galaxies". Retrieved September 15, 2015.
- Eggen, O. J.; Lynden-Bell, D.; Sandage, A. R. (1962). "Evidence from the motions of old stars that the Galaxy collapsed". The Astrophysical Journal. 136: 748. Bibcode:1962ApJ...136..748E. doi:10.1086/147433.
- Searle, L.; Zinn, R. (1978). "Compositions of halo clusters and the formation of the galactic halo". The Astrophysical Journal. 225 (1): 357–379. Bibcode:1978ApJ...225..357S. doi:10.1086/156499.
- Heger, A.; Woosley, S. E. (2002). "The Nucleosynthetic Signature of Population III". The Astrophysical Journal. 567 (1): 532–543. arXiv:astro-ph/0107037. Bibcode:2002ApJ...567..532H. doi:10.1086/338487.
- Barkana, R.; Loeb, A. (2001). "In the beginning: the first sources of light and the reionization of the Universe" (PDF). Physics Reports (Submitted manuscript). 349 (2): 125–238. arXiv:astro-ph/0010468. Bibcode:2001PhR...349..125B. doi:10.1016/S0370-1573(01)00019-9.
- Sobral, David; Matthee, Jorryt; Darvish, Behnam; Schaerer, Daniel; Mobasher, Bahram; Röttgering, Huub J. A.; Santos, Sérgio; Hemmati, Shoubaneh (June 4, 2015). "Evidence for POPIII-like Stellar Populations in the Most Luminous LYMAN-α Emitters at the Epoch of Re-ionisation: Spectroscopic Confirmation". The Astrophysical Journal. 808 (2): 139. arXiv:1504.01734. Bibcode:2015ApJ...808..139S. doi:10.1088/0004-637x/808/2/139.
- Overbye, Dennis (June 17, 2015). "Traces of Earliest Stars That Enriched Cosmos Are Spied". The New York Times. Retrieved June 17, 2015.
- "Simulations Show How Growing Black Holes Regulate Galaxy Formation". Carnegie Mellon University. February 9, 2005. Archived from the original on June 4, 2012. Retrieved January 7, 2007.
- Massey, R. (April 21, 2007). "Caught in the act; forming galaxies captured in the young Universe". Royal Astronomical Society. Archived from the original on November 15, 2013. Retrieved April 20, 2007.
- Noguchi, M. (1999). "Early Evolution of Disk Galaxies: Formation of Bulges in Clumpy Young Galactic Disks". The Astrophysical Journal. 514 (1): 77–95. arXiv:astro-ph/9806355. Bibcode:1999ApJ...514...77N. doi:10.1086/306932.
- Baugh, C.; Frenk, C. (May 1999). "How are galaxies made?". PhysicsWeb. Archived from the original on April 26, 2007. Retrieved January 16, 2007.
- Gonzalez, G. (1998). The Stellar Metallicity — Planet Connection. Brown dwarfs and extrasolar planets: Proceedings of a workshop ... p. 431. Bibcode:1998ASPC..134..431G.
- Moskowitz, Clara (September 25, 2012). "Hubble Telescope Reveals Farthest View Into Universe Ever". Space.com. Retrieved September 26, 2012.
- Conselice, C. J. (February 2007). "The Universe's Invisible Hand". Scientific American. Vol. 296 no. 2. pp. 35–41. Bibcode:2007SciAm.296b..34C. doi:10.1038/scientificamerican0207-34.
- Ford, H.; et al. (April 30, 2002). "The Mice (NGC 4676): Colliding Galaxies With Tails of Stars and Gas". Hubble News Desk. Retrieved May 8, 2007.
- Struck, C. (1999). "Galaxy Collisions". Physics Reports. 321 (1–3): 1–137. arXiv:astro-ph/9908269. Bibcode:1999PhR...321....1S. doi:10.1016/S0370-1573(99)00030-7.
- Wong, J. (April 14, 2000). "Astrophysicist maps out our own galaxy's end". University of Toronto. Archived from the original on January 8, 2007. Retrieved January 11, 2007.
- Panter, B.; Jimenez, R.; Heavens, A. F.; Charlot, S. (2007). "The star formation histories of galaxies in the Sloan Digital Sky Survey". Monthly Notices of the Royal Astronomical Society. 378 (4): 1550–1564. arXiv:astro-ph/0608531. Bibcode:2007MNRAS.378.1550P. doi:10.1111/j.1365-2966.2007.11909.x.
- Kennicutt Jr., R. C.; Tamblyn, P.; Congdon, C. E. (1994). "Past and future star formation in disk galaxies". The Astrophysical Journal. 435 (1): 22–36. Bibcode:1994ApJ...435...22K. doi:10.1086/174790.
- Knapp, G. R. (1999). Star Formation in Early Type Galaxies. Star Formation in Early Type Galaxies. 163. Astronomical Society of the Pacific. p. 119. arXiv:astro-ph/9808266. Bibcode:1999ASPC..163..119K. ISBN 978-1-886733-84-8. OCLC 41302839.
- Adams, Fred; Laughlin, Greg (July 13, 2006). "The Great Cosmic Battle". Astronomical Society of the Pacific. Retrieved January 16, 2007.
- "Cosmic 'Murder Mystery' Solved: Galaxies Are 'Strangled to Death'". Retrieved May 14, 2015.
- Pobojewski, S. (January 21, 1997). "Physics offers glimpse into the dark side of the Universe". University of Michigan. Retrieved January 13, 2007.
- McKee, M. (June 7, 2005). "Galactic loners produce more stars". New Scientist. Retrieved January 15, 2007.
- "Groups & Clusters of Galaxies". NASA/Chandra. Retrieved January 15, 2007.
- Ricker, P. "When Galaxy Clusters Collide". San Diego Supercomputer Center. Retrieved August 27, 2008.
- Dahlem, M. (November 24, 2006). "Optical and radio survey of Southern Compact Groups of galaxies". University of Birmingham Astrophysics and Space Research Group. Archived from the original on June 13, 2007. Retrieved January 15, 2007.
- Ponman, T. (February 25, 2005). "Galaxy Systems: Groups". University of Birmingham Astrophysics and Space Research Group. Archived from the original on February 15, 2009. Retrieved January 15, 2007.
- Girardi, M.; Giuricin, G. (2000). "The Observational Mass Function of Loose Galaxy Groups". The Astrophysical Journal. 540 (1): 45–56. arXiv:astro-ph/0004149. Bibcode:2000ApJ...540...45G. doi:10.1086/309314.
- "Hubble Pinpoints Furthest Protocluster of Galaxies Ever Seen". ESA/Hubble Press Release. Retrieved January 22, 2015.
- Dubinski, J. (1998). "The Origin of the Brightest Cluster Galaxies". The Astrophysical Journal. 502 (2): 141–149. arXiv:astro-ph/9709102. Bibcode:1998ApJ...502..141D. doi:10.1086/305901. Archived from the original on May 14, 2011. Retrieved January 16, 2007.
- Bahcall, N. A. (1988). "Large-scale structure in the Universe indicated by galaxy clusters". Annual Review of Astronomy and Astrophysics. 26 (1): 631–686. Bibcode:1988ARA&A..26..631B. doi:10.1146/annurev.aa.26.090188.003215.
- Mandolesi, N.; et al. (1986). "Large-scale homogeneity of the Universe measured by the microwave background". Letters to Nature. 319 (6056): 751–753. Bibcode:1986Natur.319..751M. doi:10.1038/319751a0.
- van den Bergh, S. (2000). "Updated Information on the Local Group". Publications of the Astronomical Society of the Pacific. 112 (770): 529–536. arXiv:astro-ph/0001040. Bibcode:2000PASP..112..529V. doi:10.1086/316548.
- Tully, R. B. (1982). "The Local Supercluster". The Astrophysical Journal. 257: 389–422. Bibcode:1982ApJ...257..389T. doi:10.1086/159999.
- "Near, Mid & Far Infrared". IPAC/NASA. Archived from the original on December 30, 2006. Retrieved January 2, 2007.
- "ATLASGAL Survey of Milky Way Completed". Retrieved March 7, 2016.
- "The Effects of Earth's Upper Atmosphere on Radio Signals". NASA. Retrieved August 10, 2006.
- "Giant Radio Telescope Imaging Could Make Dark Matter Visible". ScienceDaily. December 14, 2006. Retrieved January 2, 2007.
- "NASA Telescope Sees Black Hole Munch on a Star". NASA. December 5, 2006. Retrieved January 2, 2007.
- Dunn, R. "An Introduction to X-ray Astronomy". Institute of Astronomy X-Ray Group. Retrieved January 2, 2007.
- Al-Biruni (2004). The Book of Instruction in the Elements of the Art of Astrology. R. Ramsay Wright (transl.). Kessinger Publishing. ISBN 978-0-7661-9307-9.
- Belkora, L. (2003). Minding the Heavens: the Story of our Discovery of the Milky Way. CRC Press. ISBN 978-0-7503-0730-7.
- Bertin, G.; Lin, C.-C. (1996). Spiral Structure in Galaxies: a Density Wave Theory. MIT Press. ISBN 978-0-262-02396-2.
- Binney, J.; Merrifield, M. (1998). Galactic Astronomy. Princeton University Press. ISBN 978-0-691-00402-0. OCLC 39108765.
- Dickinson, T. (2004). The Universe and Beyond (4th ed.). Firefly Books. ISBN 978-1-55297-901-3. OCLC 55596414.
- Heidarzadeh, T. (2008). A History of Physical Theories of Comets, from Aristotle to Whipple. Springer. ISBN 978-1-4020-8322-8.
- Mo, Houjun; van den Bosch, Frank; White, Simon (2010). Galaxy Formation and Evolution (1 ed.). Cambridge University Press. ISBN 978-0-521-85793-2.
- Kepple, G. R.; Sanner, G. W. (1998). The Night Sky Observer's Guide, Volume 1. Willmann-Bell. ISBN 978-0-943396-58-3.
- Merritt, D. (2013). Dynamics and Evolution of Galactic Nuclei. Princeton University Press. ISBN 978-1-4008-4612-2.
- Mohamed, M. (2000). Great Muslim Mathematicians. Penerbit UTM. ISBN 978-983-52-0157-8. OCLC 48759017.
- Paul, E. R. (1993). The Milky Way Galaxy and Statistical Cosmology, 1890–1924. Cambridge University Press. ISBN 978-0-521-35363-2.
- Sparke, L. S.; Gallagher III, J. S. (2000). Galaxies in the Universe: An Introduction. Cambridge University Press. ISBN 978-0-521-59740-1.
- Van den Bergh, S. (1998). Galaxy Morphology and Classification. Cambridge University Press. ISBN 978-0-521-62335-3.
- Waller, W. H.; Hodge, P. W. (2003). Galaxies and the Cosmic Frontier. Harvard University Press. ISBN 978-0-674-01079-6.
- NASA/IPAC Extragalactic Database (NED) (NED-Distances)
- Galaxies on In Our Time at the BBC
- An Atlas of The Universe
- Galaxies — Information and amateur observations
- The Oldest Galaxy Yet Found
- Galaxy classification project, harnessing the power of the internet and the human brain
- How many galaxies are in our Universe?
- The most beautiful galaxies on Astronoo
- 3-D Video (01:46) – Over a Million Galaxies of Billions of Stars each – BerkeleyLab/animated.
Earth’s Hydrosphere: Hands-on Activities for Upper Elementary Students
Are your upper elementary students looking to explore the wonders of Earth’s hydrosphere? Engage them in some hands-on activities about water and ice! From experiments with saltwater to exploring the water cycle, there is an array of fun and educational ways for your students to learn more about Earth’s hydrosphere. We’ve gathered our favorite activities here so you can spend less time researching how to engage learners and more time watching their learning take off.
Quick Links for Teaching Ideas about the Earth’s Hydrosphere
What is the Earth’s Hydrosphere?
The Earth’s hydrosphere is the layer of water on the planet. It consists of all the water on the surface, in the atmosphere, and underground. This includes rivers, lakes, oceans, and even the moisture in the air. The hydrosphere is important because it helps to keep our planet cool and provides us with a source of drinking water. It also influences storm systems, flooding, and other weather-related events. Without this layer of water surrounding us, life on Earth would not be possible.
Hands-on Activities that Teach Students about the Earth’s Hydrosphere’s Interaction with Other Spheres
Dive into explorations that highlight the vital role of water in shaping Earth’s systems. Students will explore water filtration experiments, watershed modeling, and simulations of the water cycle. By engaging in these activities, they will develop a deep appreciation for the Earth’s hydrosphere’s impact on the geosphere, the biosphere, and even the atmosphere.
Water Cycle in a Bag
Engaging in hands-on activities about the hydrosphere is a great way to help students understand the vital role of water in shaping Earth’s systems. One such activity is the Water Cycle in a Bag, which involves filling a zip-lock bag with water and observing condensation, evaporation, and precipitation as the water cycle occurs. Through this simple yet effective experiment, students will gain an appreciation for how the hydrosphere influences other spheres on Earth.
Fill a ziplock bag with water, hang it near a window, and observe condensation, evaporation, and precipitation as the water cycle occurs. Label the bag with the water cycle process.
Water Cycle Simulation
The water cycle simulation using heat is a great way to explore the importance of the hydrosphere, as it helps students understand how evaporation, condensation, and precipitation are connected within the Earth’s atmosphere. By providing a closed system and applying heat to the jar, students can observe the condensation and precipitation process.
Set up a water cycle simulation using a large jar, a heat source, and a plastic cover. Fill the jar with water and cover it with plastic wrap to create a closed system. Apply heat to the jar, causing evaporation, and observe the condensation on the plastic wrap and the precipitation that follows as the big drops fall from the plastic wrap back into the glass. Explain how the Earth’s hydrosphere undergoes constant processes of evaporation, condensation, and precipitation, which connect to the atmosphere (air) and form the water cycle.
This second-grade science activity is all about clouds! It’s a great way to investigate the water cycle by using ice and a glass jar. Students read about the water cycle. They observe the water cycle in action using a glass jar and ice cubes.
Watershed Model
A watershed model is a great way to demonstrate how water interacts with the Earth’s geosphere, biosphere, and atmosphere. By creating a mini version of an actual watershed, students can observe how water flows through different land features such as mountains and valleys, and gain an understanding of how pollution affects water quality. This activity provides a hands-on experience that brings the concept of the hydrosphere to life for students and allows them to explore the interconnectedness between all three spheres.
Create a mini watershed model by using a large container, sand, rocks, and a water source to demonstrate how water flows through a landscape and the effects of pollution on water quality. Help students pour water into the model to observe how it flows through different land features like mountains, valleys, and rivers. Discuss how the hydrosphere (water) interacts with the geosphere (land) by shaping the landforms and creating erosion and deposition. Highlight how the biosphere (plants and animals) in and around the water is affected by the characteristics of the watershed.
In this fifth-grade science activity, students read about the watersheds and the water cycle. They make their own watershed and investigate how water moves through it. Students can then answer questions about the investigation.
Water Filtration Experiment
One of the most engaging activities to learn about the Earth’s hydrosphere is a water filtration experiment. This hands-on activity helps students understand how the hydrosphere interacts with the geosphere and biosphere and teaches them about processes like filtration and clean water sources. Students will be provided with dirty water samples and materials such as sand, gravel, activated charcoal, and filter paper to design their own systems for cleaning the water. Through this experiment, they will gain an understanding of why clean water is essential for ecosystems and human consumption.
Provide students with dirty water samples (you can make them using non-toxic materials like soil and food coloring). Set up a filtration station with materials like sand, gravel, activated charcoal, and filter paper. Guide students to design and build their own water filtration systems to clean the dirty water. Discuss how the hydrosphere interacts with the geosphere (land) through processes like filtration and explain the importance of clean water for ecosystems and human consumption.
Ocean Acidification Experiment
The ocean acidification experiment is a great way to demonstrate how increased carbon dioxide levels can cause seashells, coral skeletons, or chalk to dissolve or weaken due to a decrease in pH levels. This activity provides students with an engaging and hands-on way to explore the effects of ocean acidification on marine life and ecosystems.
Prepare containers with water (representing the ocean) and add vinegar (representing increased carbon dioxide levels). Provide students with seashells, coral skeletons, or chalk, and place them in the containers. Observe and discuss how the shells or coral dissolve or weaken due to increased acidity, representing the impact of ocean acidification. Connect this activity to the biosphere (marine life) and discuss the effects of ocean acidification on marine organisms and ecosystems.
Science Stations about the Earth’s Hydrosphere
We have a variety of science activities that focus on the hydrosphere. Students can learn about how water exists on Earth in different forms and how it impacts other systems. Each science center has a reading passage, hands-on activity, and differentiated questions to extend the learning.
In this science activity, students read about the negative impacts people have on Earth’s water and what is being done to solve the problems. They explore natural filters in a hands-on activity.
In this sorting activity, students read about water on Earth. They will sort several water systems into freshwater and saltwater categories.
In this exploration, students read about water distribution on Earth, water scarcity, and saltwater on Earth. They explore saltwater in a hands-on activity, then answer questions about the exploration.
With this DIAGRAM science station, students can get an in-depth understanding of the water cycle and how water is distributed around Earth. Students create graphs that diagram water distribution.
Science Stations about the Earth’s Hydrosphere’s Effect on the Geosphere
While not specifically about the hydrosphere, the following science stations explore how water affects the land. They include explorations into flood waters, river systems, and water erosion.
In this EXPLORE science station, students explore floods and how to keep a house safe from flooding.
In this DIAGRAM science station, students read about river systems. They put together a diagram of a river.
In this INVESTIGATE science station, students read about weathering and erosion caused by water. They investigate this in a hands-on activity.
In this MODEL science station, students read about problems that come with weathering and erosion. Students draw models of solutions to erosion problems.
Science Stations about the Earth’s Hydrosphere’s Effect on the Biosphere
These science activities focus on how the Earth’s hydrosphere affects the biosphere by looking at water habitats.
Conduct a Water Quality Test
Have students conduct a water quality test to determine the presence of contaminants and pollutants in different sources of water. Through this activity, they will gain a better understanding of how pollution can affect the environment and its inhabitants. Additionally, it is important to discuss with students the importance of conserving our limited resources and reducing waste production. This is a great way for students to learn about the importance of clean water and the effects of pollution on our water supply.
These hands-on activities will help students understand the importance of the Earth’s hydrosphere and how it interacts with other spheres, fostering a deeper understanding of Earth’s interconnected systems.
Activities about the Cryosphere
Did you know that the icy part of the Earth’s hydrosphere has its own name? The cryosphere is the icy realm that encompasses glaciers, ice caps, and frozen bodies of water. Students will participate in activities such as glacier formation simulations and investigations into the effects of melting ice. Through these experiences, they will discover the cryosphere’s vital role in shaping landscapes, influencing sea levels, and impacting the broader Earth system.
Melting Ice Investigation
In this investigation, students will explore the cryosphere and its role in shaping landscapes, influencing sea levels, and impacting the broader Earth system. They will use different types of ice to observe and record the rate of melting under various conditions (temperature, exposure to sunlight, etc.). By doing so, they will gain a better understanding of how changes in the cryosphere can affect other Earth systems.
Provide students with different types of ice (cubes, crushed, or blocks) and ask them to observe and record the rate of melting under various conditions (temperature, exposure to sunlight, etc.). Have a discussion about the melting polar ice caps and how this investigation relates to climate change.
Water is one of the most important components of Earth’s spheres, and its icy form – known as the cryosphere – plays an essential role in shaping landscapes, influencing sea levels, and impacting other Earth systems. To help students understand these complex processes better, try this hands-on experiment about glacier formation! Using sand, ice, and a tray to simulate glacial movement, kids will learn how glaciers can push soil and rocks to create moraines and other glacial features. It’s a great way for them to explore how changes in the cryosphere affect our planet.
Use sand, ice, and a tray to demonstrate how glaciers form, move, and shape the land by pushing soil and rocks, creating moraines and other glacial features. Place the tray on a flat surface and spread the sand evenly over it. Place ice cubes on top of the sand and observe how they move and shape it as they melt. Use a magnifying glass to observe the surface of the tray in more detail (optional). Discuss what is happening with your students as they watch the glacier form and move.
Science Stations about the Cryosphere
We also have several science stations that focus specifically on the Earth’s Cryosphere. These stations focus on glaciers and erosion by glaciers.
More Activities about the Earth’s Hydrosphere
PBS Learning Media has a great site with videos and interactive lessons that are aligned with the NGSS. The Earth’s Hydrosphere section focuses on the role of water in Earth’s systems, components of the hydrosphere, the water cycle, and ocean systems. There are a variety of lessons for grades 3-12.
This lesson from NASA shows students how to use measurements and data to investigate freshwater sources.
A Complete Unit of 5th Grade Science Activities about Earth’s Spheres and Systems
We have a set of science centers that focus on the Earth’s Spheres’ interactions with each other and ecosystems. These Earth’s Spheres and Systems Next Generation Science Stations include eight different science stations in which students deepen their understanding of the Earth’s Spheres, the five layers of the atmosphere, the kingdoms in the biosphere, and the systems in the geosphere. The focus is on 5-ESS2-1.
Key Concepts of the Earth’s Spheres and Systems Science Stations
- General overview of the four spheres on Earth and how they affect each other
- How the spheres interact and create landforms in the geosphere and how that affects coastal erosion and deposition
- How the Earth’s hydrosphere interacts with eight major ecosystems (changes in ecosystems)
- How the geosphere affects the hydrosphere
- Five layers of the atmosphere
- Kingdoms in the biosphere
For more information about the 5th grade Earth’s Spheres and Systems Science Stations, see this blog post. It goes into detail about each of the stations.
Consumer Price Index and How It Measures Inflation
Why You Should Pay Attention to the Core CPI
The Consumer Price Index (CPI) is a monthly measurement of U.S. prices for household goods and services. It reports inflation (rising prices) and deflation (falling prices). Both can hurt a healthy economy.
The Federal Reserve, the U.S. central bank, monitors price changes to ensure economic growth remains stable. If the Federal Reserve detects too much inflation or deflation, it uses monetary policy tools to intervene.
What Is the CPI?
The CPI is the U.S. government's measurement of price changes in a typical "basket" of goods and services bought by urban consumers.
- Alternate Name: CPI for All Urban Consumers (CPI-U).
- Acronym: CPI
CPI and inflation are often used interchangeably, as inflation is the percentage increase or decrease of CPI over a certain period of time.
What's in the CPI Basket?
The basket represents the prices of a cross-section of goods and services commonly bought by urban households. The cross-section represents around 93% of the U.S. population, and factors in a sample of 14,500 families and 80,000 consumer prices.
Here are the major categories in the basket and how much each contributed to the CPI as of February 2021.
|Consumer Price Index Category|Share of the Index|
|---|---|
|Energy (incl. gasoline)|7%|
|Commodities (incl. medication and autos)|20%|
For those who own their homes, the CPI calculates owners' equivalent rent of primary residence (OER) instead of the monthly mortgage payment. The OER is the rent owners estimate they would have to pay if they rented their own home.
The CPI could give a false low-inflation reading due to low rents, even when home prices are high. Low rents can result from fewer renters and increased vacancies, as low interest rates spur more home purchases. At the same time, housing prices could rise due to increased market activity.
Conversely, rising interest rates might lead to fewer buyers in the market and falling home prices. As more people compete for apartments, rents go up.
This is why the CPI didn't warn of asset inflation during the housing bubble of 2005. The CPI includes sales taxes. It excludes income taxes and the prices of investments, such as stocks and bonds.
How the CPI Is Calculated
The BLS computes the CPI by taking the average weighted cost of the basket of goods in a given month and dividing it by the cost of the same basket in the 1982–1984 base period. It then multiplies this ratio by 100 to get the index number.
Consumer Price Index =
Cost of Basket (Current Month) / Cost of Basket (1982–1984 Base Period) X 100
The index shows how much prices have changed since the base period. For example, in March 2021 the index was 264.9, meaning prices had risen roughly 165% since the base period of 1982 to 1984, which is set at 100.
The BLS conveniently publishes the percentage change since last month or last year. In March 2021, prices rose by 0.6% from February. In February 2021, there was an increase of 0.4% in the index from January.
The CPI for March 2021 was 0.6% higher than February 2021 and increased by 2.6% from March 2020.
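To make the arithmetic concrete, here is a small R sketch of the index and percentage-change calculations described above; the basket costs are invented placeholder numbers, not actual BLS data:

# Hypothetical weighted basket costs (placeholder values, not BLS data)
base_cost    <- 100.0   # cost of the basket in the 1982-1984 base period
prior_cost   <- 263.2   # cost of the basket last month
current_cost <- 264.9   # cost of the basket this month

cpi_current <- current_cost / base_cost * 100   # index level, e.g. 264.9
cpi_prior   <- prior_cost / base_cost * 100

# Month-over-month percentage change, as the BLS reports it
monthly_change <- (cpi_current - cpi_prior) / cpi_prior * 100
round(monthly_change, 1)   # about 0.6 with these placeholder numbers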
Why the CPI Is Important
The CPI measures inflation, which is one of the greatest threats to a healthy economy. Inflation eats away at your standard of living if your income doesn't keep pace with rising prices. Over time, your cost of living increases.
A high inflation rate can hurt the economy. Since everything costs more, manufacturers produce less and may be forced to lay off workers.
CPI Affects the Fed
The Fed uses the CPI to determine whether economic policies need to be modified to prevent inflation.
In the past, when recognizing inflation was on the horizon, the Fed used contractionary monetary policy to slow economic growth. It changed the fed funds rate to make loans more expensive, which tightened the money supply–the total amount of credit allowed into the market. Slowed economic growth and demand put downward pressure on prices and returned the economy to a healthy growth rate of 2% to 3% a year.
On Aug. 27, 2020, the Federal Reserve announced a change—it will allow a target inflation rate of more than 2% to help ensure maximum employment. Over time, 2% inflation growth is preferred, but the Fed is willing to allow higher rates if inflation has been low for a while.
CPI Affects Other Government Agencies
The government uses the CPI to improve benefit levels for recipients of Social Security and other government programs that provide financial assistance.
CPI Affects Housing and Investments
Landlords use the CPI forecasts to determine future rent increases in contracts.
An increased CPI can depress bond prices, too. Fixed-income investments tend to lose value during inflation. As a result, investors demand higher yields on these investments to make up for the loss in value.
These yield demands can increase interest rates, which then increases costs for businesses borrowing money to expand. The net effect is a decrease in earnings, which could depress the stock market.
The CPI measures two commodities with wild price swings: food and energy commodities (oil and gasoline). These products are traded constantly on the commodities market. Traders can bid prices up or down based on news such as wars in oil-producing countries or droughts. As a result, the CPI often reflects these price swings.
The "core" CPI solves the problem of volatile food and energy prices by excluding food and energy. In the past, the Fed considered core CPI when deciding whether to raise the fed funds rate. The core CPI is useful because food, oil, and gas prices are volatile, and the Fed's tools are slow-acting.
Historical CPI & Inflation
The U.S. inflation rate by year shows that fluctuations in CPI used to be much worse. In 1946, inflation hit a record annual high of 18.1% year-on-year.
Inflation next broke a record in 1974, when it hit 12.3% year-on-year while the economy contracted 0.5%. That anomaly is called stagflation.
Deflation occurred between 1930 and 1933. Prices fell 10.7% in September 1932 compared to September 1931. Congress had imposed the Smoot-Hawley Tariff two years earlier, which created a trade war that lowered prices and worsened the Great Depression.
The BLS publishes a handy inflation calculator: plug in a dollar amount and any year from 1913 to the present, and it will tell you what that amount is worth in another year. It uses the average Consumer Price Index for each calendar year, and the latest monthly index for the current year.
We will learn:
1. Vector Math (using a scalar value with a vector)
2. Scatter Plots
3. N/A Values
E. Vector Math
Now we will try to add a scalar value to a vector. What is a scalar? A scalar is a single, real value. In arithmetic, you will see operations like these:
x = 1 + 1
y = 4 / 2
24 m -> scalar (a magnitude only)
24 m east -> vector (a magnitude plus a direction)
Note for my math teacher, Drs. Sutikno, at SMA Negeri 1 Bukit Kemuning, North Lampung, Lampung Province (where I studied in 1992): "This is the first time I have truly understood what scalars and vectors are ... and I only understood it once I understood English ... (crying)."
In R, as in our previous lesson, we have:
> a <- c(3, 9, 7)
Then, we add "2" to each values in c vectors, as in:
> a + 2
Result: 5 11 9
This is a simple example where the values of a are 3, 9, and 7. Each element of a is increased by 2 (the scalar).
You can do the same with the other arithmetic operations, as follows:
> a * 3
Result: 9 27 21
> a / 3
Result: 1.000000 3.000000 2.333333
Now, suppose we create a second vector, b, after the previous a <- c(3, 9, 7):
b <- c(2, 1, 4)
and add the two vectors together (a + b), as follows:
> b <- c(2,1,4)
> a + b
Result: 5 10 11
The result is coming from:
3 (in a) + 2 (in b)
9 (in a) + 1 (in b)
7 (in a) + 4 (in b)
Try the other operations too, for example subtracting b - a or vice versa!
Note what happens when you compare the existing a vector with a new set of values:
existing a vector: > a <- c(3, 9, 7)
values to compare against: c(1, 9, 7)
> a == c(1, 9, 7)
Result: FALSE TRUE TRUE
R does not combine the two vectors; it compares them element by element and returns a logical value for each position.
Now we use > (greater than) to compare each value of a against the same set of values, as follows:
> a > c(1, 9, 7)
Result: TRUE FALSE FALSE
Vectors in Trigonometric Functions
When you use a trigonometric function such as sin, cos, or tan, R applies the function to every value of the vector, as follows:
> sin(a)
Result: 0.1411200 0.4121185 0.6569866
> cos(a)
Result: -0.9899925 -0.9111303 0.7539023
> tan(a)
Result: -0.1425465 -0.4523157 0.8714480
Now, using the sqrt function on the vector a:
> sqrt(a)
Result: 1.732051 3.000000 2.645751
F. Scatter Plots
In R, the plot() function handles graphics. Its first two arguments are:
x: the coordinates of the points in the plot (alternatively, a single plotting structure, function, or any R object with a plot method)
y: the y coordinates of the points in the plot (optional)
By using plot() we can draw a graph relating x coordinates to y coordinates. To draw it, we need data for both the x and y coordinates.
> x <- seq(1, 20, 0.1)
> y <- sin(x)
(taken from: http://tryr.codeschool.com/levels/2/challenges/35)
Then we plot the two vectors, as follows:
> plot(x, y)
Result: a scatter plot tracing out a sine wave (see the graph, awesome!)
Now, let's take another example: use negative values in the first vector, then assign the absolute value of the first vector to the second vector, as follows:
1. First vector values (using negative values)
> mylesson <- -2:7
2. Second vector values (using the absolute values of the first)
> mygrade <- abs(mylesson)
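The original post does not show the plotting call for these two vectors, but a minimal sketch to see the resulting V-shaped pattern would be:

mylesson <- -2:7          # first vector, includes negative values
mygrade  <- abs(mylesson) # second vector, absolute values of the first
plot(mylesson, mygrade)   # scatter plot: a V shape with its lowest point at zero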
G. N/A Values
Sometimes one of the values in a vector is not available. This often happens with a database column that a user left blank on an optional form field; in a database we usually store it as NULL. In R, such a missing value is written as NA, which stands for "Not Available" (R also has a separate NULL object, which means the value does not exist at all).
> a <- c(1, 3, NA, 7, 9)
Here the third value of the vector is set to NA. When we compute a result from the vector, R returns NA, as in:
> sum(a)
Result: NA
In this case we use the sum function to test it. sum returns NA because the calculation cannot be completed while a value is missing. See the sum documentation in R by typing help(sum).
"As you see in the documentation, sum can take an optional named argument, na.rm. It's set to FALSE by default, but if you set it to TRUE, all NA arguments will be removed from the vector before the calculation is performed." (http://tryr.codeschool.com/levels/2/challenges/38)However, R can ignore the NA values by calling na.rm and set it to TRUE, as in:
Result: 20 #where 20 is coming from: 1+3+7+9.
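The same na.rm idea applies to other summary functions, and is.na() lets you find the missing positions. A small extra sketch (not from the original lesson):

a <- c(1, 3, NA, 7, 9)
sum(a, na.rm = TRUE)    # 20  - NA removed before summing
mean(a, na.rm = TRUE)   # 5   - mean() accepts na.rm the same way
is.na(a)                # FALSE FALSE TRUE FALSE FALSE
a[!is.na(a)]            # 1 3 7 9 - drop the missing value explicitly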
See you on the next lesson!
My best regards.
Since the dawn of humanity, we have been looking up at the stars and trying to understand them. We learned to use them for navigation, and watched as they moved around our skies, signalling a change of season, or time for an important ritual.
The recent discovery of an ancient Babylonian tablet has revealed that our ability to accurately predict the movement of celestial bodies dates back way further than we thought, suggesting that there is more to be learned about ancient scientific knowledge.
How we calculate movement
Up until recently, we credited a group of Oxford boffins from the 14th century with the idea of astronomical geometry. It requires a good handle on conceptual, abstract thinking, a firm grasp of arithmetic, and an understanding of geometry, to be able to look at the stars and planets, and accurately predict where they will appear over time.
This was an early form of calculus (that scary thing from maths class that you’ve tried to forget about ever since). It was a more sophisticated method of tracking celestial bodies than the basic arithmetic previously used. Where arithmetic is about numbers, and algebra is about relationships between numbers, calculus is about relationships between equations.
A new study of ancient clay tablets, dated between 350 BCE and 50 BCE, reveals that some clever clogs living in ancient Babylonia, a state within Mesopotamia (now Iraq), were using a similar method of calculus to plot the movements of Jupiter.
Apart from having a very cool title, astroarchaeologist Matthieu Ossendrijver of Humboldt University has spent the last few years studying these clay tablets, housed in the British Museum in London.
"I couldn't understand what they were about," he told the Washington Post. "I couldn't understand anything about them, neither did anyone else. I could only see that they dealt with geometrical stuff."
Only after the text of another, unstudied tablet was revealed, did it all click into place.
Measuring just 5cm by 5cm, this tiny tablet, marked with the tick-shaped imprints of the ancient cuneiform script, contained “numbers and computations, additions, divisions, multiplications," says Ossendrijver. "It doesn’t actually mention Jupiter. It’s a highly abbreviated version of a more complete computation that I already knew from five, six, seven other tablets."
It was the missing piece that proved that the ancient Babylonians were using sophisticated methods of tracking the stars and planets some 1,400 years earlier than we thought possible.
What's the big idea?
So, besides revealing that historians have been horribly wrong about the origin of calculus all this time, why is this discovery significant?
Well, if no other evidence emerges that this Babylonian knowledge was preserved and carried through the ages to 14th-century Europe, it shows that new knowledge can emerge independently in different parts of the world. This is useful when exploring other aspects of human development, like language, agriculture, or religion.
It shifts our understanding of an “enlightened Europe", and gives more credence to the idea that the ancient Babylonians were a sophisticated civilisation.
Lastly, it begs the question: what other amazing discoveries are yet to be made from artifacts gathering dust in the back rooms of museums around the world?
Vectors can use simple arithmetic expressions (+, -, *, /) to perform basic operations. Let's first look at addition, then discuss a caveat of vector arithmetic.
You can add or subtract the corresponding elements of two vectors of the same length.
> c(1,2,3) + c(99,98,97)
100 100 100
> c(1,2,3) + c(4,5,6)
5 7 9
> c(1,2,3) - c(1,1,1)
0 1 2
But what would happen if all the vectors weren't of the same length? Instead of erroring out, R performs recycling.
Recycling occurs when vector arithmetic is performed on vectors of different lengths. R takes the shorter vector and repeats it until it becomes long enough to match the longer one.
> c(1,2,3,4,5,6) + c(1,3)
2 5 4 7 6 9
As you can see, the c(1,3) vector repeated itself to form c(1,3,1,3,1,3) so that it could match the length of the longer vector.
If the length of the longer vector is not a multiple of the shorter vector's length, a warning message appears, but the operation still takes place.
> c(1,2,3,4,5) + c(1,3)
2 5 4 7 6
Warning message:
In c(1, 2, 3, 4, 5) + c(1, 3) : longer object length is not a multiple of shorter object length
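One way to convince yourself of what recycling does is to build the repeated vector explicitly with rep() and check that the two results match; this quick check is an extra illustration, not part of the original lesson:

long  <- c(1, 2, 3, 4, 5, 6)
short <- c(1, 3)
long + short                       # 2 5 4 7 6 9, via recycling
long + rep(short, length.out = 6)  # same result, with the recycling made explicit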
Multiplying or dividing vectors is similar to addition and subtraction in that each corresponding element matches up and a product is formed. When the sizes differ, recycling occurs.
> c(1,2,3) * c(0,3,6)
0 6 18
> c(1,3,5) * c(2,4)
2 12 10
Warning message:
In c(1, 3, 5) * c(2, 4) : longer object length is not a multiple of shorter object length
One small detail to notice is that these common arithmetic expressions are actually functions. Thus, they can also be called with ordinary function notation.
> "*"(5,6) 30
We can also use the modulo operator (%%), which outputs the remainder after division of two numbers.
> c(55,54,53) %% c(3)
1 0 2
You can also apply linear algebra to your vectors in R. To calculate the crossproduct t(x) %*% y, use crossprod():
> crossprod(1:3, 4:6)
     [,1]
[1,]   32
You'll notice that the return type isn't a new vector, but a 1 x 1 matrix. We'll look at matrices in the next lesson.
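Note that for two plain numeric vectors, crossprod(x, y) computes the dot (inner) product t(x) %*% y rather than the geometric 3-D cross product. A quick check, assuming the same vectors as above:

x <- 1:3
y <- 4:6
crossprod(x, y)   # 1 x 1 matrix containing 32
sum(x * y)        # 32, the same dot product as an ordinary number
t(x) %*% y        # equivalent matrix form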
Lesson Title: On Target Challenge
Students will modify a paper cup so it can zip down a line and drop a marble onto a target.
• Apply the engineering design process.
• Modify a cup to carry a marble down a zip line.
• Test their cup by sliding it down the zip line, releasing the marble and trying to hit a target on the floor.
• Improve their system based on testing results.
Lesson Activities and Sequence
Access the On Target
Keywords: engineering challenge, scientific method, rockets, Newton's laws, engineering design process, moon, Mars, acceleration, vector, trajectory, potential energy, kinetic energy, LCROSS
- Introduce the challenge: Tell kids how NASA will use the LCROSS spacecraft to search for water on the moon (scripted in the Leader Notes).
- Brainstorm and design: Students should be working in cooperative groups to develop a group design and using individual journals to record their decisions, design sketches, test results, etc.
- Build, test, evaluate and redesign: Test data, solutions, modifications, etc., should all be recorded in their journals individually.
- Discuss what happened: Ask the students to show each other their modified cups and talk about how they solved any problems that came up.
- Evaluation: Using the students' journal entries, assess their mastery of content, skills and the engineering design process.
National Science Education Standards, NSTA
Science as Inquiry
• Understanding of scientific concepts.
• Understanding of the nature of science.
• Skills necessary to become independent inquirers about the natural world.
• The dispositions to use the skills, abilities and attitudes associated with science.
• Position and motion of objects.
Common Core State Standards for Mathematics, NCTM
Expressions and Equations
• Apply and extend previous understandings of arithmetic to algebraic expressions.
• Solve real-life and mathematical problems using numerical and algebraic expressions and equations.
• Understand the connections between proportional relationships, lines and linear equations.
ISTE NETS and Performance Indicators for Students, ISTE
Creativity and Innovation
• Apply existing knowledge to generate new ideas, products or processes.
• Create original works as a means of personal or group expression.
• Use models and simulations to explore complex systems and issues.
• Identify trends and forecast possibilities.
Critical Thinking, Problem Solving and Decision Making
• Identify and define authentic problems and significant questions for investigation.
• Plan and manage activities to develop a solution or complete a project.
• Collect and analyze data to identify solutions and/or make informed decisions.
• Use multiple processes and diverse perspectives to explore alternative solutions.
CS 1440 Lab 8
· Continued practice with C++ classes
1. First, we'll look at a main function that uses Dates.
2. Then, we'll define the Date class.
3. Then, we'll begin implementing the Date class.
1. Main -- client of our Date class
Look over this main function that uses Dates.
int main()
{
Date today; /* Creates Date object */
int d, m, y;
cout << "Enter month: ";
cin >> m;
cout << "Enter day: ";
cin >> d;
cout << "Enter year: ";
cin >> y;
/* Some member function calls */
today.set(m,d,y); /* Sets object data members */
today.ShortDisplay(); /* Displays: mm/dd/yyyy */
cout << endl;
today.LongDisplay(); /* Displays: dd Month yyyy */
cout << endl;
return 0;
}
2. Defining the Date class
We know that a Date keeps track of month, day, and year. So, we'll need some data members for these. (If we wanted a really great Date class we could have another data member for day of week, but we won't worry about that now!) From seeing our client, we need some member functions: set, LongDisplay, and ShortDisplay. So, let's define it!
If you're brave, try defining it on your own first -- then compare yours with the one below.
class Date
{
public:
void set(int m, int d, int y);
void ShortDisplay();
void LongDisplay();
private:
int day, month, year;
};
3. Implementing the Date class
Now, we need to actually write the functions that the class definition only prototypes. Remember that you need to connect your function definition to the prototype in the class definition by preceding the function name with the class name and the :: symbol.
void Date::set(int m, int d, int y)
{
/* Set "my" data members with values
passed in from client as parameters. */
year = y;
day = d;
month = m;
}
void Date::ShortDisplay()
{
/* Display as: mm/dd/yyyy
for example: 3/31/2002 */
/* YOU DO IT!! */
}
void Date::LongDisplay()
{
/* Display as: dd Month yyyy
for example: 31 March 2002 */
cout << day;
if (month == 1) cout << " January ";
else if (month == 2) cout << " February ";
else if (month == 3) cout << " March ";
else if (month == 4) cout << " April ";
else if (month == 5) cout << " May ";
else if (month == 6) cout << " June ";
else if (month == 7) cout << " July ";
else if (month == 8) cout << " August ";
else if (month == 9) cout << " September ";
else if (month == 10) cout << " October ";
else if (month == 11) cout << " November ";
else if (month == 12) cout << " December ";
else cout << "UnknownMonth";
cout << year;
}
Cut and paste the above pieces of code into a file called Act2.C. (Be careful to put the class definition before the main function.)
Finish implementing the Date::ShortDisplay function.
Make sure you include the necessary headers, and add a comment block.
Verify your program works by compiling and running it.
C2 Worksheets – Core Advanced Mathematics
The C2 worksheets have questions on the following topic areas: Algebra & Functions, Sine & Cosine rule, Logarithms, Equation of a Circle, Binomial Expansion, Radians, Arcs & Sectors, Geometric Sequences & Series, Sketching Trigonometric Functions, Differentiation, Solving Trigonometric Equations and Definite Integration.
These worksheets are based on the UK A-Level C2 module but the worksheets can be used anywhere. See below to find out what the content of each question is based on.
The C2 Worksheets have 11 questions on the following topic areas, note that a working knowledge of the C1 syllabus is assumed:
- Algebra & Functions – simplify algebraic fractions by finding common factors on the top and bottom or by using polynomial division. Understand and apply factor and remainder theorem.
- Sine & Cosine rule – understand and possibly prove the sine and cosine rule. Use them to find missing angles and lengths in a non-right-angled triangle.
- Logarithms – understand and use logarithms of various bases. Use the logarithm laws to manipulate and solve equations involving logarithms.
- Equation of a Circle – find the equation of a circle given two points that lie on it. Find the equation of a circle given the location of its centre and its radius.
- Binomial Expansion – find the first few terms, or possibly all, of the expansion of (a + b)^n, where n is a positive integer. Use the expansion to estimate values of real numbers raised to a given power.
- Radians, Arcs & Sectors – understand how to measure angles in radians. Find the length of an arc or the area of a sector in complex problems.
- Geometric Sequences & Series – find the nth term and the sum of the first n terms of a geometric sequence. Know how to show that the formula for the sum of the first n terms is derived (a short sketch of this derivation follows the list below). Know when geometric series can be summed to infinity.
- Sketching Trigonometric Functions – know how to sketch the graphs of familiar trigonometric functions particularly when angles are measured in radians. Evaluate the values of trigonometric functions at simple angle values. Use triangles to evaluate trigonometric functions at not-so-simple angle values.
- Differentiation – use the derivative (from C1) to show when a function is increasing or decreasing. Know that the derivative is zero at a stationary point. Use the second derivative to classify it as a maximum or minimum.
- Solving Trigonometric Equations – know and apply the identities tan x = sin x / cos x and sin²x + cos²x = 1. Solve equations involving trigonometric functions.
- Definite Integration – Evaluate integrals including those with limits. Use integration to find areas beneath curves and between curves.
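For reference, the derivation mentioned in the geometric series bullet above can be sketched in a few lines, for first term a and common ratio r:
S_n = a + ar + ar^2 + ... + ar^{n-1}
rS_n = ar + ar^2 + ... + ar^{n-1} + ar^n
S_n - rS_n = a - ar^n
S_n = a(1 - r^n) / (1 - r), provided r ≠ 1
When |r| < 1, r^n tends to 0 as n grows, so the series can be summed to infinity, giving S_∞ = a / (1 - r).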
Representing Motion Chapter 2 (pg 30-55)
Do Now: Why is it important to describe and analyze motion? How fast? How far? Slowing down or speeding up? At rest or at constant velocity? What are some of the different types of motion? Translational – motion along a straight line. Circular – motion along a circular path. Rotational – rotation about a fixed point.
Chapter Objectives ● Represent motion through the use of words, motion diagrams, and graphs. ● Use the terms position, distance, displacement, and time interval in a scientific manner to describe motion.
Motion Motion is instinctive Eyes will notice moving objects more readily than stationary ones Object changes position Motion can occur in many directions and paths
(2.1)Picturing Motion (2.2) Where and When? Analyze motion diagrams to describe motion. Develop a particle model to represent a moving object. Define coordinate systems for motion problems and recognize that it affects the sign of an object’s position. Define displacement. Use a motion diagram to answer questions about an object’s position or displacement.
Picturing Motion A description of motion relates to a PLACE and a TIME. It answers the questions WHERE? and WHEN?
Motion Diagrams Series of images showing the positions of a moving object at equal time intervals.
Particle Model Simplified version of a motion diagram in which the object in motion is replaced by a series of single points. Size of object much less than the distance moved Internal motions of object ignored
Motion Diagram & Particle Model
Practice Draw a particle model for the motion diagram above of a car coming to a stop.
Describe the motion of the bird… Draw a particle model….
Use the particle model to draw motion diagrams for two runners in a race. When the first runner crosses the finish line, the second runner is ¾ of the way to the finish line. First Runner Second Runner Must have same number of particles to represent equal time.
How are the two particle models different? Describe the motion of each. A. B. A.Eight time intervals & Constant velocity (equal spacing) B.Five time Intervals & speeding up (spacing is getting farther)
Which statement describes best the motion diagram of an object in motion? A.a graph of the time data on a horizontal axis and the position on a vertical axis B.a series of images showing the positions of a moving object at equal time intervals C.a diagram in which the object in motion is replaced by a series of single points D.a diagram that tells us the location of the zero point of the object in motion and the direction in which the object is moving
What is the purpose of drawing a motion diagram or a particle model? A.to calculate the speed of the object in motion B.to calculate the distance covered by the object in a particular time C.to check whether an object is in motion D.to calculate the instantaneous velocity of the object in motion
Coordinate System Tells you the location of the zero point of the variable you are studying and the direction in which the values of the variable increase. ORIGIN The point at which both variables have the value zero
Coordinate System Motion is RELATIVE You can define a coordinate system any way you want, but some are more useful than others.
Coordinate systems This coordinate system works as well but is not as convenient to use as the first one. Try to always pick your origin where motion begins. Axis of the coordinate system
Position & Distance You can indicate how far an object is from the origin by drawing an arrow from the origin to the point representing the object. The two arrows indicate the runner’s POSITION at two different times; position is a vector, the separation between an object and the origin. The length of the arrow, how far the object is from the origin, indicates DISTANCE, which is a scalar.
Refer to the figure and calculate the distance between the two signals? A.3 m B.8 m C.5 m D.5 cm
Vectors & Scalars SCALARS: quantities that are just numbers without any direction Magnitude (number) only Examples: time, volume, mass, temperature VECTORS: quantities that have both magnitude (size) and direction Represented by arrows Examples: velocity, acceleration, force, momentum Tail Tip
Vector Addition: Tail to Tip Just like you can add scalars, you can also add vectors, either algebraically (we will look at this later) or graphically. Place the vectors tail to tip: put the tail of the 2nd vector at the tip of the 1st vector. RESULTANT - the vector that represents the sum of two or more other vectors is drawn from the tail of the first vector to the tip of the last vector.
Example: Add the three vectors Vector 1 Vector 2 Vector 3
Resultant: Like scalars, vectors can be added in a different order and still have the same resultant (Vector 1 + Vector 2 + Vector 3 gives the same RESULTANT in any order).
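The tail-to-tip picture has an arithmetic counterpart: add the components of each vector, then take the length of the result. A rough R sketch with made-up displacement vectors (these numbers are not from the slides):

v1 <- c(4, 0)   # hypothetical (x, y) components in metres
v2 <- c(0, 3)
v3 <- c(2, 2)
resultant <- v1 + v2 + v3            # 6 5
magnitude <- sqrt(sum(resultant^2))  # length of the resultant, about 7.81 m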
Distance vs. Displacement DISTANCE: actual length traveled; a scalar measurement; path dependent. DISPLACEMENT: change in position; a vector measurement; path independent. ∆x = x_f – x_i
Distance vs. Displacement Find distance and displacement for the following races:
|Race|Distance|Displacement|
|---|---|---|
|100 m| | |
|400 m| | |
|1 mile| | |
Practice Ex: Jared walks 75 m down the block heading east when he realizes he dropped his book. He turns around and walks 15 m until he finds his book. Draw vectors to represent Jared’s motion. Find the distance that Jarred walked. Find Jared’s displacement.
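One way to check the answers to this practice problem: treat east as positive, add the signed legs for displacement, and add their absolute values for distance. A small R sketch:

legs <- c(75, -15)              # 75 m east, then 15 m back (west is negative)
distance     <- sum(abs(legs))  # 90 m walked in total
displacement <- sum(legs)       # 60 m east of the starting point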
What is displacement? A.the vector drawn from the initial position to the final position of the motion in a coordinate system B.the distance between the initial position and the final position of the motion in a coordinate system C.the amount by which the object is displaced from the initial position D.the amount by which the object moved from the initial position
Ch. 2.3 Objectives Develop position-time graphs for moving objects. Use position-time graphs to interpret an object’s position or displacement. Make motion diagrams, pictorial representations, and position-time graphs that are equivalent in describing an object’s motion.
Graphs are named as y-axis vs. x-axis, or dependent vs. independent variable. Ex: position vs. time – place position on the y-axis and time on the x-axis. Always pay close attention to the units. Units are key to analyzing graphs…
Analyzing Graphs Slope: Look at the units of the slope to see if it corresponds to a measurement. Area: look at the units for the area under the curve to see if it corresponds to a measurement.
Position-Time Graphs: time (the independent variable) goes on the x-axis and position (the dependent variable) on the y-axis. The graph answers Where? and When?; its slope gives velocity, while the area under the curve (units of m*s) has no meaning.
From the following position-time graph of two brothers running a 100-m dash, at what time do both brothers have the same position? At what position? At about 6 seconds At about 60 meters
Position-Time Graphs When does runner B pass runner A? 45 seconds into the race Where does runner B pass runner A? 190 m
Position-Time Graphs What is the displacement of the runner between 5 s and 10 s? 10 m
What is happening in this graph?
What is happening in each? A. B. C. D.
Position-Time Graph - SLOPE Analyze the units. Slope = rise over run, m = ∆y / ∆x, so the slope has units of m / s, which is the unit for velocity. The slope of a position-time graph is the average velocity.
Position-Time Graph - AREA Analyze the units only. Area under the curve: area of a triangle, A = ½ b * h, where ½ is a constant with no units, the base has units of time (s), and the height has units of position (m), so Area = (m)(s). We do not have any measurements that have the units (s)(m); thus the area of a position-time graph does not have any meaning.
Ch. 2.4 Objectives Define Velocity Differentiate between speed and velocity Create pictorial, physical, and mathematical models of motion problems
Average Velocity Defined as the change in position, divided by the time during which the change occurred. How fast in a given direction? Vector quantity Same direction as the displacement (Δx)
Average Speed The absolute value of the slope of a position-time graph. It describes how fast the object is moving.
Instantaneous Velocity Velocity at a given instant Slope of the line drawn on the x-t graph at the given instant Need calculus to find unless object moving at a constant velocity
Velocity vs. Speed Speed is the distance divided by the time during which the distance was traveled. It is a scalar (no direction).
|Race|Distance (m)|Displacement (m)|Time (s)|Speed (m/s)|Velocity (m/s)|
|---|---|---|---|---|---|
|100 m| | |20| | |
|400 m| | |80| | |
|1 mile| | |400| | |
Position-Time Graphs - Velocity Suppose you recorded two joggers in one motion diagram, as shown in the figure above. From one frame to the next, you can see that the position of the jogger in red shorts changes more than that of the one wearing blue. What would the x-t graph look like if they both started at the same time?
Position-Time Graphs You can create a position-time graph if you know the position and time of the joggers at different points. Need a minimum of two data points in order to create a x-t graph.
Position-Time Graphs (velocity) You can find the velocity of each jogger by calculating the slope of the line.
Red jogger: v = m = (6 m – 2 m) / (3 s – 1 s) = 2 m/s
Blue jogger: v = m = (3 m – 2 m) / (3 s – 2 s) = 1 m/s
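The same slope calculation can be done numerically; a quick R sketch using the jogger positions and times quoted above:

# positions (m) and times (s) read off the position-time graph
v_red  <- (6 - 2) / (3 - 1)   # red jogger: 2 m/s
v_blue <- (3 - 2) / (3 - 2)   # blue jogger: 1 m/s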
What is happening? 1 – Compare the velocities for each of the segments. 2 – Rank the segments in order of increasing speed.
Eventually, humans figured out that the varying seasons were caused by the Earth’s tilt in combination with its orbit around the sun. You might have learned that the Earth spins like a top. It does not spin straight up and down, though; it spins at a 23 ½ degree tilt. The imaginary line that the Earth spins around is called its axis. When the top of the Earth is tilted towards the sun, it is summer in the northern hemisphere. Since the bottom half of the Earth (the southern hemisphere) is tilted away from the sun, it is winter there.
On around June 21st, the northern hemisphere is at its maximum tilt towards the sun. This is called the summer solstice, and it is also the longest period of daylight of the year in the northern hemisphere. On around December 21st, the northern hemisphere is at its maximum tilt away from the sun, which is known as the winter solstice and is the shortest period of daylight. June 21st is the winter solstice for the southern hemisphere, and December 21st is its summer solstice. Just think about it: kids in Australia get to enjoy long summer days and winter holidays at the same time!
In addition to solstices, our planet also experiences equinoxes. During its journey around the sun, the Earth reaches two points in its orbit where the tilt is neither towards nor away from the sun, and the lengths of day and night are equal. These are called the equinoxes. On about March 21st, it is the spring equinox in the northern hemisphere and the fall equinox in the southern hemisphere. On about September 21st, it is the spring equinox in the southern hemisphere and the fall equinox in the northern hemisphere.
The tilt correctly predicts the seasons, but how does the tilt cause warmer or colder temperatures? You can see for yourself!
How does the tilt of the Earth cause the seasons?
- Graph paper
- Room you can make dark
- Tape a piece of graph paper over the Northern hemisphere of your globe.
- Tape another piece of graph paper over the Southern hemisphere of your globe.
- Using the protractor, have your partner tilt the Northern hemisphere of your globe toward you 23 ½ degrees (Note: some globes might already be tilted on the correct axis).
- Have your partner continue to hold the globe in position.
- Standing about one foot away from the globe, shine the flashlight at a point just above the equator.
- Congratulations! You just modeled the way the Earth and Sun are positioned during the summer solstice in the Northern hemisphere, which occurs on June 21st.
- Now, make the room dark.
- Ask your partner to trace the circle of light made by the flashlight. He or she should be tracing on the paper, not the globe itself.
- Next, ask your partner to lower the tilted globe (without changing its tilt) so that the circle of light is now on the middle of the Northern hemisphere of the globe. Keep the flashlight in the same position.
- Ask your partner to trace the circle of light made by the flashlight in this region. Do you notice anything about how the brightness and shape of the circle of light changes?
- Next, ask your partner to lower the tilted globe so that the circle of light is now on the upper part of the northern hemisphere of the globe. Again, describe how the shape of the light circle has changed.
- Next, ask your partner to raise the tilted globe so that the circle of light is now just below the equator. Make sure that this hemisphere is still tilted away from the flashlight.
- Ask your partner to trace the circle of light made by the flashlight.
- Next, ask your partner to raise the tilted globe so that the circle of light is now on the middle of the southern hemisphere of the globe. Keep the flashlight in the same position.
- Again, ask your partner to trace the circle made by the flashlight.
- Finally, ask your partner to raise the tilted globe so that the light is nearest to bottom of the globe. What do you notice about the light circle?
- Have your partner do his or her best to draw the shape of light on the graph paper.
- Turn on the lights.
- Compare the number of squares in the light tracings your partner drew.
The number of squares you count will vary depending on the size of your globe, graph paper squares, and flashlight. When you moved the flashlight over the surface of the globe, you probably noticed that the circle of light emitted by the flashlight was brighter and rounder near the equator. The circle of light became bigger and not as bright as you moved it towards the poles. There should be more graph paper squares in those circles. The light tracings also should start to look less like circles and more like stretched-out ovals. When you moved the flashlight along the middle part of your globe’s southern hemisphere, you were likely to notice that the circle of light was not as bright as it was in the northern hemisphere. When you got to the bottom of the globe, the light from the flashlight shouldn’t have reached the globe’s South Pole at all.
In addition to demonstrating how seasons are caused, this experiment shows why some parts of the earth are warmer than others year-round. Remember that the flashlight always emits the same amount of light. When the light shines on the equator, the circle is bright and small, meaning lots of sunlight is concentrated in a small area. This is why the parts of the Earth near the equator are hot and have tropical vegetation. As you moved the light along the curve of the Earth towards the middle of either hemisphere, the same amount of light was spread out over a larger area. These parts of Earth’s surface get only a medium amount of light energy, explaining why these parts of the Earth are not as warm as parts located near the equator. Your light tracings near the poles were the biggest, meaning the sun’s light was spread very thin. That is why the North and South Poles are so much colder than the equator.
The Earth’s tilt creates seasonal differences in light intensity. Since the northern part of the Earth was tilted towards the sun, the light circles were smaller and brighter. This causes these parts of the Earth to be warmer during the summer. By the time your light reached the North Pole, the light circle was bigger (meaning the light is more spread out), but since the pole was tilted toward the light, it can still experience daylight in the summer. When you shined the light toward the South Pole, it probably didn’t reach the South Pole at all because this pole is tilted away from the sun during the winter. This is why the poles experience almost total darkness in the winter.
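If you want to put rough numbers on what the tracings show, one simple model (an extra illustration, not part of the activity) is that the same beam covers an area proportional to 1 / cos(angle of incidence), so the brightness per square falls off as the cosine of that angle. A short R sketch with illustrative angles:

angle_deg <- c(0, 30, 60, 80)             # 0 = light hitting straight on, as near the equator
angle_rad <- angle_deg * pi / 180
relative_area      <- 1 / cos(angle_rad)  # how much the light circle spreads out
relative_intensity <- cos(angle_rad)      # brightness per unit area
round(relative_area, 2)       # 1.00 1.15 2.00 5.76
round(relative_intensity, 2)  # 1.00 0.87 0.50 0.17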
Aside from our genes, what makes humans different from other animals?
Humans (Homo sapiens) are the only extant members of the subtribe Hominina. Together with chimpanzees, gorillas, and orangutans, they are part of the family Hominidae (the great apes, or hominids). A terrestrial animal, humans are characterized by their erect posture and bipedal locomotion; high manual dexterity and heavy tool use compared to other animals; open-ended and complex language use compared to other animal communications; larger, more complex brains than other animals; and highly advanced and organized societies.
Early hominins—particularly the australopithecines, whose brains and anatomy are in many ways more similar to ancestral non-human apes—are less often referred to as “human” than hominins of the genus Homo. Several of these hominins used fire, occupied much of Eurasia, and gave rise to anatomically modern Homo sapiens in Africa about 315,000 years ago. Humans began to exhibit evidence of behavioral modernity around 50,000 years ago, and in several waves of migration, they ventured out of Africa and populated most of the world.
The spread of the large and increasing population of humans has profoundly affected much of the biosphere and millions of species worldwide. Advantages that explain this evolutionary success include a larger brain with a well-developed neocortex, prefrontal cortex and temporal lobes, which enable advanced abstract reasoning, language, problem solving, sociality, and culture through social learning. Humans use tools more frequently and effectively than any other animal; and are the only extant species to build fires, cook food, clothe themselves, and create and use numerous other technologies and arts.
Humans uniquely use such systems of symbolic communication as language and art to express themselves and exchange ideas, and also organize themselves into purposeful groups. Humans create complex social structures composed of many cooperating and competing groups, from families and kinship networks to political states. Social interactions between humans have established an extremely wide variety of values, social norms, and rituals, which together undergird human society. Curiosity and the human desire to understand and influence the environment and to explain and manipulate phenomena (or events) have motivated humanity’s development of science, philosophy, mythology, religion, anthropology, and numerous other fields of knowledge.
Though most of human existence has been sustained by hunting and gathering in band societies, many human societies transitioned to sedentary agriculture around 10,000 years ago, domesticating plants and animals and thus enabling the growth of civilization. These human societies subsequently expanded, establishing various forms of government, religion, and culture around the world, and unifying people within regions to form states and empires. The rapid advancement of scientific and medical understanding in the 19th and 20th centuries permitted the development of fuel-driven technologies and increased lifespans, causing the human population to rise exponentially. The global human population was estimated to be near 7.7 billion in 2019.
Interpreting Paleolithic and Neolithic Art
Humans have been producing art works for at least seventy-three thousand years.
Look at the following art, which dates back to the Paleolithic Age – the Old Stone Age, before humans discovered how to farm. For each piece, respond to the following:
Describe what you see with your eyes (figures, colors, size, etc)
Offer an interpretation – What was this artist trying to communicate?/What was the purpose of this art?
What can we learn about this artist’s way of life from this art?
What modern artwork or form of expression does this ancient piece remind you of, and why?
A. Grotte de Niaux.
B. Laas Geel.
C. Cueva de las Manos.
D. Venus of Willendorf.
E. Bradshaw Rock.
What art will you leave behind as a testament to your presence on Earth? Create your own piece of “rock art” – though please don’t paint it on the classroom wall – depicting the important things in your life.
More information about each piece can be found here.
This article lists many benefits of living in a medina – list them, adding any additional benefits that strike you. Then, create a list of drawbacks.
Should cities in your country build neighborhoods that look more like this? Would you live in one?
Design your own ideal neighborhood – create a map that considers space to live, work, and play, as well as transportation and utilities like power and water.
A medina quarter (Arabic: المدينة القديمة al-madīnah al-qadīmah “the old city”) is a distinct city section found in a number of North African cities, including in Morocco. A medina is typically walled, with many narrow and maze-like streets. The word “medina” (Arabic: مدينة madīnah) itself simply means “city” or “town” in modern-day Arabic.
Medina quarters have usually been inhabited for a thousand years or more, and often contain historical fountains, palaces, public squares, mosques, and churches.
Because of their very narrow streets, medinas are generally free from car traffic, and in some cases even motorcycle and bicycle traffic. The streets can be less than a metre wide. This makes them unique among highly populated urban centres. The Medina of Fes, or Fes el Bali, is considered one of the largest car-free urban areas in the world.
Aside from the addition of some electrical wires and modern plumbing, most medinas in Morocco today look a lot like they did in those bygone glory days of the trans-Saharan trade one thousand years ago. The streets are rarely wider than six or seven feet, and are sometimes as narrow as two or three. Mules and men with carts do most of the heavy lifting in the streets, delivering or carrying away what can’t be done by hand – no trucks or cars can penetrate, so most people buy groceries for today, and maybe tomorrow. Furniture and modern appliances are often transported into the medina over a neighbor’s rooftop, and down into a home through the central, open-air courtyard. Anger your neighbors, and you might have a hard time remodeling your house.
Fresh fish – caught this morning – for sale in the souq, or market. It will likely be carried home by the purchaser, wrapped in yesterday’s newspaper. Sometimes a man pushing a cart full of iced fish will deliver it straight to residents’ doors.
Mosques – identifiable by their lighted minarets – serve each neighborhood within a medina, offering a space to pray, socialize, connect, and resolve disputes within the community.
The crowded street of the medina in Fez. The ditch is in the center of the street, channeling water away from buildings and allowing pedestrians to stay dry even in the rain, beneath the awnings that are common in front of homes and shops.
In this cafe, refrigeration is provided by cool mountain water fed from a small mountain stream – there is no electricity.
Don’t think of a city like this as “backward” – think of it as a template for a life many in our modern times have forgotten… But which many of us are trying to rediscover. What appear to be drawbacks at first glance, described another way, are what many Americans and Europeans list as desirable qualities in a neighborhood.
It is walkable, by necessity. Most anything you need is available in a five to ten minute walk from your door.
It is communal – there are basically no police present, so most problems are solved in the community. Violence is squashed through neighbors’ intervention and social pressure. Public fountains with fresh, clean water can be found at most major intersections. Same with mosques, which, in addition to the square full of small shops, are at the center of residents’ spiritual and social lives.
Most all food is organic, fresh, and affordable, sold with zero plastic packaging.
The narrow streets, paved with stone, between high mud-walled homes are many degrees cooler than the open air outside the medina, meaning that while most who live within don’t have air conditioning, they don’t really need it either.
Dusk falls on the densely packed medina of Fez. In this rooftop photo, it is easy to see the density, the open courtyards, and the mosques – recognizable from their tall towers – stretching into the distance.
Since there are few to no vehicles in the medina, the streets are designed for human traffic – stairways are common in mountain towns.
At their narrowest, the streets of the medina can be narrower than the hallway in your house. Extended families might build enclosed hallways that pass over the street to join two households.
In an effort to maximize living space, some families build additions to their homes – which extend over the street. This does provide additional shade to pedestrians below.
The souq is the market section of the medina. Many small shops – usually highly specialized, selling only meat, only women’s clothing, only fruit, sometimes only one kind of fruit – characterize the shopping experience. All of this is within a few minutes’ walk from home.
Larger deliveries in the medina might be made by mules. As a result, pedestrian fatalities – a real problem in every American community – are almost unheard of in the medina.
Occasionally, streets of the medina are wide enough to accommodate small motorbikes, such as this one making deliveries in the souk of Essaouira, Morocco.
For hundreds of years before running water was widely available to every home, Moroccan rulers built fountains at close intervals throughout the medina – fed sometimes by springs, sometimes by aqueducts carrying clean water from distant mountains to the corner by your house.
In Muslim countries, the symbol for a pharmacy is often the green crescent moon. This pharmacy serves a small neighborhood in the medina – and offers shade to pedestrians below.
A small public mailbox serves the neighborhood.
In the medina of Chefchaouen, residents have cultivated a vast, interwoven web of grape vines that grow overhead. The grapes can be eaten or turned into wine; their leaves provide fresh air and shade, and can also be eaten.
In the medina, streetlights are usually fixed overhead, right to the side of buildings – there is far less light pollution, because it takes far fewer lamps to light such narrow streets.
Electrical wires, added long after these ancient cities were originally built, are often run directly alongside buildings, or buried beneath the streets alongside waterlines.
Chickens often range freely, living on rooftops or in the courtyards of homes, providing fresh eggs and meat to their owners.
This public street runs completely underneath a multi-story home. Public parking typically means room for a bicycle or a hitch for a mule.
The public street continues into this tunnel.
Living in the medina can mean cramped corners, however. Each of these doors leads to a different home.
Traditional doors in the medinas of Morocco feature a smaller door nestled within a larger one, each with a separate knocker which resonates with a distinct tone. The smaller door is for close family, as well as for ventilation while cooking – it allows for a modest amount of privacy within. The larger door is opened to welcome company or celebrate special occasions, symbolically opening the home to the wider community.
Air conditioning is rare in traditional medinas. Thick brick or mud walls and windows open at the right time of day help to keep indoor spaces cool.
This wall is made from sun-baked mud and straw, which insulates well against the heat of the day – and is durable in the arid, rainless climate that covers much of Morocco.
In most medinas, there are at least a few public squares filled with restaurants, shops, kids playing soccer, as well as musicians and other entertainers, such as snake charmers. Much of life outside of work and school takes place in open, public spaces like these; this square in Marrakesh, Morocco is by far one of the largest and busiest.
This cart is loaded with scrap metal for recycling. Any waste collection within the medina is done this way, on human- or mule-drawn carts. In truth, residents of the medina purchase most food without packaging in the local souq, meaning that they produce little inorganic waste. Large trash trucks are not really necessary here, even if they were possible.
Medinas are traditionally walled, guarding against attack from raiders or rival nations.
Industry can take place very close to residential areas. This tannery emits the strong smell of ammonia, which radiates for blocks around – neighbors live with it.
Without glamorizing the social problems like poverty and sanitation issues that persist in some medinas, there is a good reason that this way of life has persisted since prehistoric times, while the patterns of American suburbanization are barely a century and a half old – but are killing the people who live there via social isolation, urban sprawl, excess carbon emissions, and water wasted irrigating green lawns.
Which is really the ideal here? Can we learn something from our human past?
It’s not backward if it offers answers to questions of sustainability and community that so many in countries like the United States are trying in vain and at great cost to reinvent.
THIS LESSON WAS MADE POSSIBLE THROUGH A GENEROUS GRANT FROM THE QATAR FOUNDATION.
Who are the Berber? Briefly describe their culture.
What do Berbers call themselves, and what does it mean in English?
Write your name in the Berber alphabet.
An anthropologist is someone who examines culture, artifacts, religion, language, lifestyles, and traditions to describe and understand a group of people, either from the present or the past. How would an anthropologist describe your community’s culture and history?
The main ethnic group inhabiting the Maghreb – which literally means “the west” in Arabic, and includes Morocco, Algeria, and Tunisia – are known as the Berber people. They and their ancestors have inhabited North Africa for more than 10,000 years, and possess a rich history and culture shaped by the varied geography of the area, as well as by their interactions with other groups, including the Phoenicians, the Romans, the Arabs, the Spanish, and the French.
The Berbers call themselves Imazighen, which means free or noble people in their own language. It is a fitting descriptor.
Historically, the Berbers have been successful in trade, navigating the harsh conditions of the Sahara and the Atlas Mountains, linking Sub-Saharan Africa to the Mediterranean world when other groups struggled to do so. In ancient times, this wealth – as well as Berber prowess on horseback – meant that groups such as the Carthaginians were paying them tribute in North Africa.
Unlike the conquests of previous religions and cultures, the coming of Islam, which was spread by Arabs, was to have extensive and long-lasting effects on the Maghreb. The new faith, in its various forms, would penetrate nearly all segments of Berber society, bringing with it armies, learned men, and fervent mystics, and in large part replacing tribal practices and loyalties with new social norms and political traditions influenced by the Arab world.
Traditionally, Berber men take care of livestock such as sheep, goats, cows, horses, and camels. Families migrate by following the natural cycle of grazing, and seeking water and shelter with the changing seasons. They are thus assured of an abundance of wool, cotton, and plants used for dyeing. For their part, women look after the family and produce handicrafts like clothing, rugs, or blankets – first for their personal use, and secondly for sale in local souqs, or markets. While many Berber still live according to these patterns, many more no longer do – they now have jobs, homes, and lifestyles similar to any of those found in your country.
The Berber are experts of irrigation, drawing water from mountain rivers and feeding it via gravity into green oases of productivity.
Traditionally, Berber men have raised livestock like these sheep, which provide wool, meat, and leather.
The goats are nearly as resourceful as the Berber themselves.
Staple crops of the modern Berber are wheat, corn, dates, and tomato.
The Berber inhabit a wide range of climate zones, including these harsh foothills of the Atlas Mountains, on the edge of the Sahara Desert. Rainfall and grass are sparse here, and the hardy goats herded by the Berber graze in the argan trees that thrive in this arid landscape.
Berber women have traditionally created intricately woven patterns on looms such as this one.
Berber women hand stitch the lushly detailed patterns seen on these kaftans. The work is exacting – and can earn a good income for the skilled artisan.
The Berber also cultivate alfalfa as a feed for livestock. This alfalfa is cut, bundled, and carried home from a family garden plot, usually by women.
Other Berber men work in tanneries, turning animal hides into leather in these vats of ammonia. The ammonia is sourced from the waste of animals.
Increasingly, the Berber are sedentary, but traditionally, many have been nomadic, following the green grass with their herds on a seasonal basis. This is the mobile home of one family that still follows such a nomadic existence.
Like kids almost anywhere, modern Berber children love to play soccer – anywhere, any time.
Some Berber men create impressive tiled mosaics. These are the plain backsides of vibrantly-colored tiles, which will be held together with concrete. When flipped over, they will create a stunning geometric pattern – avoiding the depiction of the human form, as prescribed by Islam.
The traditional social structure of the Berbers is tribal. A leader is appointed to command the tribe through a generally democratic process. In the Middle Ages, many women had the power to govern. The majority of Berber tribes currently have men as heads of the tribe.
Imazighen (Berber) cuisine draws influence and flavors from distinct regions across North Africa and the Mediterranean world.
Principal Berber foods include:
Couscous, a staple dish made from a grain called semolina
Tajine, a stew made in various forms
Pastilla, a meat pie traditionally made with squab (fledgling pigeon) often today using chicken
Morocco is a former French colony, and French-style cafe culture has also influenced the country. Men in particular can be found at most hours of the day drinking espresso or tea, and possibly eating a pastry in one of the country’s thousands of cafes.
A tajine (Standard Moroccan Berber: ⵜⴰⵊⵉⵏ) is a Maghrebi dish which is named after the earthenware pot in which it is cooked. The earliest writings about the concept of cooking in a tajine appear in the famous One Thousand and One Nights, though the dish would have been already famous amongst the nomadic Bedouin people of the Arabian Peninsula, who added dried fruits like dates, apricots and plums to meat like mutton, chicken, or camel, giving tajine its unique taste. Tagine is now often eaten with french fries, either on the top or on the side.
Couscous (Berber : ⵙⴽⵙⵓ seksu, Arabic: كُسْكُس kuskus) is originally a Maghrebi dish of small (about 3 millimetres (0.12 in) diameter) steamed balls of crushed durum wheat semolina that is traditionally served with a stew spooned on top. It is a staple of the Moroccan diet, meaning that it is eaten routinely and in such quantities that it constitutes a dominant portion of a standard diet for a given people, supplying a large fraction of energy needs and generally forming a significant proportion of the intake of other nutrients as well. In Tunisia, Algeria, Morocco, and Libya, couscous is generally served with vegetables (carrots, potatoes, and turnips) cooked in a spicy or mild broth or stew, and some meat (generally, chicken, lamb or mutton).
Pastilla (Moroccan Arabic: بسطيلة, romanized: bəsṭila) is a traditional Moroccan dish of Andalusian origin consumed in countries of the Maghreb. It is a pie which combines sweet and salty flavours; a combination of crisp layers of the crêpe-like werqa dough (a thinner cousin of phyllo dough), savory meat slow-cooked in broth and spices and then shredded, and a crunchy layer of toasted and ground almonds, cinnamon, and sugar. Pastilla is said to be “uniquely Moroccan, intricate and grand, fabulously rich and fantastical.”
In Morocco, Tunisia, Libya, Saudi Arabia, Jordan, and other parts of the Middle East, prickly pears of the yellow and orange varieties are grown by the side of farms, beside railway tracks, and on other otherwise noncultivable land. They are sold in summer by street vendors, and are considered a refreshing fruit for that season.
Writing in 1377, the scholar Ibn Khaldun offered a general description of the Berber that applies nearly as well in the twenty-first century:
“As for [their] moral virtues, one can cite: respect for one’s neighbours; the protection of guests; the observance of obligations and commitments; faithful adherence to promises and treaties; resolve in misfortune; indulgence towards the failings of others; renouncement of vengeance; kindness to the unfortunate; respect for the elderly; veneration for men of science; hatred of oppression; resolve before states; determination to win in matters of power; devotion to God in matters of religion.”
Indeed, nearly eight hundred years later the anthropologist Ahmed Skounti echoed these sentiments:
“The Imazighen (singular Amazigh) also known as the Berbers are among the original peoples of North Africa. Their myths, legends and history span 9,000 years, back to the Proto-Mediterraneans. They have achieved unity by keeping up their unique language and culture which are, like their land, both African and Mediterranean.
The Berbers of Morocco share this duality, reflecting the diversity of their nature and stormy history. Through contact with other peoples of the Mediterranean, they created kingdoms but also vast territories organised into powerful, democratic, war-mongering, tribal communities. Both aspects of this social political organisation have left a mark on recent historical events and the two millenia of the country’s history. As opposed to the pagan Mediterranean kingdoms of Antiquity, Berber empires developed inland and were Muslim. Judaism continued to be practiced, and the Sunni Islam majority gradually took on a Berber hue with its brotherhoods, zaouias, marabouts, and rituals.
The roots of the Berber culture go deep down into Morocco’s proto-history. They are illustrated by a strong link with their land, a sense of community, hospitality, the sharing of food, and a specific relationship with spirituality. Its openness to many influences, whether Mediterranean, African, Oriental, European or international, has defined its current characteristics.
The Berber language, an Afro-Asian idiom, is the melting pot of the history and culture of the country. It has outlived most languages of Antiquity such as Ancient Greek, Phoenician, Latin and Egyptian. It used to be written but is now mainly oral. Though there are fewer now that can speak it, the language is nevertheless still used by a substantial number of Moroccans.”
The hamsa (Berber: ⵜⴰⴼⵓⵙⵜ tafust) is a palm-shaped amulet popular throughout the Middle East and North Africa and commonly used in jewelry and wall hangings. Depicting the open right hand, an image recognized and used as a sign of protection in many times throughout history, the hamsa is believed by some, predominantly Muslims and Jews, to provide defense against the evil eye. It has been theorized that its origins lie in Ancient Egypt or Carthage (modern-day Tunisia) and may have been associated with the Goddess Tanit. The Hamsa is also known as the Hand of Fatima after the daughter of the prophet Muhammad.
The Berber are known for their skills in working with silver. Its color is associated with purity and piety. The vibrant colors that highlight the silver are called enamel, which is a technique that probably arrived in Morocco in the 1400s, as many Muslims and Jews were expelled from Christian Spain.
Inseparable from poetry and associated with dance, Amazigh music takes many forms, but two popular folk forms are Ahwach and Rouaiss. AHWACH is a collective dance performed to musical rhythms with an accompaniment of songs; the group is composed of flute players (aouad), flat-drum percussionists (bendir), percussionists playing metal instruments (naqos), and dancers. ROUAISS is a musical group that sings Amazigh poetry (amerg); the instruments used are the three-stringed lute (guembri), the monochord violin (rebab), the flat drum (bendir), and a metal percussion instrument (naqoss).
If men cover their heads, it is often with a wrapping as seen in this photo, which shields their heads and faces from the heat of the sun and any sand on the wind.
Two men in traditional Berber clothing. The long tunic worn by both men and women is called a kaftan.
The djellaba is a long, loose-fitting outer robe with full sleeves that is worn in the Maghreb region of North Africa. Djellabas are made of wool in different shapes and colors, but lightweight cotton djellabas have now become popular. Among the Berbers, or Imazighen, such as the Imilchil in the Atlas Mountains, the color of a djellaba traditionally indicates the marital status (single or married) of the bearer: a dark brown djellaba indicating bachelorhood. Almost all djellabas of both styles (male or female) include a baggy hood called a qob (Arabic: قب) that comes to a point at the back. The hood is important for both sexes, as it protects the wearer from the sun, and in earlier times, it was used as a defence against sand being blown into the wearer’s face by strong desert winds. In colder climes, as in the mountains of Morocco and Algeria, it also serves the same function as a winter hat, preventing heat loss through the head and protecting the face from snow and rain. It is common for the roomy hood to be used as a pocket during times of warm weather; it can fit loaves of bread or bags of groceries.
Many women cover their heads in accordance with Muslim tradition, but many more do not.
No matter how traditional the dress, Berbers are not stuck in the past – this man carries a messenger bag containing his cell phone and other modern necessities.
The unique Berber alphabet is called tifinagh. Like the Berbers themselves, the writing has been attributed in turn to having Egyptian, Greek, Phoeno-Punic or South-Arabic origins, though none of these theories is definitive. Other research points toward the indigenous origins of Berber writing, linking it closely to cave art. The undecoded signs and symbols that accompany the depiction of humans, animals, weapons and ritual or combat scenes create a sort of visual vocabulary which may have later developed into the writing system.
Historically, Berber writing had limited uses, primarily in memorials and commemorative stone carvings. It was largely replaced by Arabic around the fifth or sixth centuries, and later by French in the twentieth century. Berber was originally written vertically from top to bottom, but today is oriented from right to left, like Arabic. The alphabet is composed of a distinctive geometric written form, in which 33 characters are created from three basic shapes: the circle, the line, and the dot.
This ancient alphabet serves as the basis for the formation of the modern tifinagh alphabet adopted since 2003 by Morocco in order to write the Berber language.
THIS LESSON WAS MADE POSSIBLE THROUGH A GENEROUS GRANT FROM THE QATAR FOUNDATION.
(Information on the Berber alphabet was adapted from the work of Aline Star, anthropologist at the Institut National des Sciences de L’Archéologie et du Patrimoine. Rabat)
“the Fugitive Slave Law of 1850 penalized officials who did not arrest an alleged runaway slave, and made them liable to a fine of $1,000 (about $29,000 in present-day value). Law-enforcement officials everywhere were required to arrest people suspected of being a runaway slave on as little as a claimant‘s sworn testimony of ownership. The suspected slave could not ask for a jury trial or testify on his or her own behalf. In addition, any person aiding a runaway slave by providing food or shelter was subject to six months’ imprisonment and a $1,000 fine. Officers who captured a fugitive slave were entitled to a bonus or promotion for their work.
Slave owners needed only to supply an affidavit to a Federal marshal to capture an escaped slave. Since a suspected slave was not eligible for a trial, the law resulted in the kidnapping and conscription of free blacks into slavery, as suspected fugitive slaves had no rights in court and could not defend themselves against accusations.”
You’ve probably grown up seeing political ads on TV. Most of these are sponsored by PACs or Political Action Committees – groups that aren’t candidates in an election, but wish to influence the outcome with money spent on advertisements.
Imagine that TV and PACs existed in 1850. Create a television spot either opposing or supporting the Compromise of 1850. In your ad, be sure to explain the components of the compromise. Also mention the alternatives – do you have a better plan, or are there alternatives worse than the unpalatable elements found in the compromise? Be creative, but in order to get 100% on this assignment, in addition to taking an editorial point of view, you will need to include lots of rich historical details, such as who in Congress supports this compromise, who opposes it, and why. TV ads should be one to two minutes in length. They may be filmed and uploaded to YouTube or performed in class.
For writing (Approximately 250 words): In politics, is it better to compromise to solve disputes, even if that compromise is ugly, or is it better to “stay the course” – sticking to your beliefs about what is right, no matter what, even if it means greater conflict and division? Make sure that your answer uses historical examples such as the Compromise of 1850.
The Oregon Trail was a 2,170-mile, historic East–West, large-wheeled wagon route and emigrant trail in the United States that connected the Missouri River to valleys in Oregon. From the early to mid-1830s (and particularly through the years 1846–69) the Oregon Trail and its many offshoots were used by about 400,000 settlers, farmers, miners, ranchers, and business owners and their families.
The Oregon Trail is a computer game developed by the Minnesota Educational Computing Consortium and first released in 1985. It was designed to teach students about the realities of 19th-century pioneer life on the Oregon Trail. In the game, the player assumes the role of a wagon leader guiding a party of settlers from Independence, Missouri, to Oregon‘s Willamette Valley via a covered wagon in 1848.
Play several rounds of the game, embedded below. While you play, devise a research question about the real life Oregon Trail. For example, what was the leading cause of death for pioneers traveling west? Are there many grave markers left along the old route of the trail, and if so, what do they say? What was hunting like in the 1800s, and what impact did it have on animals like American Bison? What were covered wagons really like, and did settlers actually carry spare parts for them?
Create an infographic with facts, figures, images, and at least three paragraphs worth of information on the realities of some aspect of the game. Be sure to include information about your sources at the bottom of your infographic. You can see an example of a student infographic here and here.
Imagine that our class is a committee appointed by Congress to select one reformer from the Antebellum (pre-Civil War) era to replace nasty old Andrew Jackson on the $20 bill and to simultaneously celebrate the US’s rich history of forward-thinking individuals. You should base your decision on your knowledge of what these people accomplished in their lifetimes, as well as the lasting impact they have had on our overall society. You will need to research what these people did using your textbook or the Internet. You may use whatever criteria for inclusion that you choose, however, you may not just say you’re voting for some guy because he’s rich or fat or some such reason that lacks historical substance. (Remember this is a history class.)
William Lloyd Garrison
Harriet Beecher Stowe
Elizabeth Cady Stanton
Susan B. Anthony
Henry David Thoreau
Spiritual Leaders and Communalists
Charles G. Finney
John Humphrey Noyes
You will compose a persuasive essay – including a brief biographical overview, an explanation of the reformer’s accomplishments/lasting legacy, a direct quote from your reformer’s writings (if available), and a clear argument for why this person deserves to be the face of the 20 dollar bill. You should also create a physical life-size mock up of your new 20 dollar bill (it can be creative, colorful, and impressionistic). Make sure you cite your sources!
Linearity is the property of a mathematical relationship or function which means that it can be graphically represented as a straight line. Examples are the relationship of voltage and current across a resistor (Ohm's law), or the mass and weight of an object. Proportionality implies linearity, but linearity does not imply proportionality.
The homogeneity and additivity properties together are called the superposition principle. It can be shown that additivity implies homogeneity in all cases where α is rational; this is done by proving the case where α is a natural number by mathematical induction and then extending the result to arbitrary rational numbers. If f is assumed to be continuous as well, then this can be extended to show homogeneity for any real number α, using the fact that rationals form a dense subset of the reals.
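As a quick illustration of the induction argument mentioned above (a standard derivation, not specific to this article), additivity alone forces homogeneity for natural-number scalars:

```latex
% Additivity f(x+y) = f(x) + f(y) gives homogeneity for natural numbers n by induction:
f\bigl((n+1)x\bigr) = f(nx + x) = f(nx) + f(x) = n f(x) + f(x) = (n+1) f(x),
\qquad\text{with base case } f(1\cdot x) = f(x).
```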
In this definition, x is not necessarily a real number, but can in general be a member of any vector space. A more specific definition of linear function, not coinciding with the definition of linear map, is used in elementary mathematics.
The concept of linearity can be extended to linear operators. Important examples of linear operators include the derivative considered as a differential operator, and many constructed from it, such as del and the Laplacian. When a differential equation can be expressed in linear form, it is generally straightforward to solve by breaking the equation up into smaller pieces, solving each of those pieces, and summing the solutions.
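For instance (a standard textbook illustration, not taken from this article), if L is a linear differential operator, the pieces of the right-hand side can be solved separately and the solutions summed:

```latex
% If L is a linear operator (e.g. L[y] = y'' + p(x)\,y' + q(x)\,y) and
% L[y_1] = f_1, \; L[y_2] = f_2, then by additivity
L[y_1 + y_2] = L[y_1] + L[y_2] = f_1 + f_2 .
```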
Linear algebra is the branch of mathematics concerned with the study of vectors, vector spaces (also called linear spaces), linear transformations (also called linear maps), and systems of linear equations.
The word linear comes from the Latin word linearis, which means pertaining to or resembling a line. For a description of linear and nonlinear equations, see linear equation. Nonlinear equations and functions are of interest to physicists and mathematicians because they can be used to represent many natural phenomena, including chaos.
Over the reals, a linear equation is one of the form f(x) = mx + b, where m and b are constants.
Note that this usage of the term linear is not the same as in the section above, because linear polynomials over the real numbers do not in general satisfy either additivity or homogeneity. In fact, they do so if and only if b = 0. Hence, if b ≠ 0, the function is often called an affine function (see in greater generality affine transformation).
In Boolean algebra, a linear function is a function f for which there exist a0, a1, ..., an ∈ {F, T} such that
- f(b1, ..., bn) = a0 ⊕ (a1 ∧ b1) ⊕ ... ⊕ (an ∧ bn), where b1, ..., bn ∈ {F, T} and ⊕ denotes exclusive or (XOR).
Note that if a0 = T, the above function is considered affine in linear algebra (i.e. not linear).
A Boolean function is linear if one of the following holds for the function's truth table:
- In every row in which the truth value of the function is T, there are an odd number of Ts assigned to the arguments, and in every row in which the function is F there is an even number of Ts assigned to arguments. Specifically, f(F, F, ..., F) = F, and these functions correspond to linear maps over the Boolean vector space.
- In every row in which the value of the function is T, there is an even number of Ts assigned to the arguments of the function; and in every row in which the truth value of the function is F, there are an odd number of Ts assigned to arguments. In this case, f(F, F, ..., F) = T.
Another way to express this is that each variable always makes a difference in the truth value of the operation or it never makes a difference.
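As an illustration of the truth-table test above, here is a small Java sketch (hypothetical helper code, not part of the article) that checks the parity condition for a Boolean function of n variables:

```java
import java.util.function.Predicate;

public class BooleanLinearity {
    // Checks the truth-table parity condition for a function of n boolean inputs.
    static boolean isLinear(int n, Predicate<boolean[]> f) {
        boolean firstKind = true;   // T-rows have an odd number of Ts, f(F,...,F) = F
        boolean secondKind = true;  // T-rows have an even number of Ts, f(F,...,F) = T
        for (int mask = 0; mask < (1 << n); mask++) {
            boolean[] args = new boolean[n];
            int ones = 0;
            for (int i = 0; i < n; i++) {
                args[i] = ((mask >> i) & 1) == 1;
                if (args[i]) ones++;
            }
            boolean value = f.test(args);
            boolean oddOnes = (ones % 2) == 1;
            if (value != oddOnes) firstKind = false;   // violates the first pattern
            if (value == oddOnes) secondKind = false;  // violates the second pattern
        }
        return firstKind || secondKind;
    }

    public static void main(String[] args) {
        System.out.println(isLinear(2, a -> a[0] ^ a[1]));  // XOR is linear: true
        System.out.println(isLinear(2, a -> a[0] && a[1])); // AND is not: false
    }
}
```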
In instrumentation, linearity means that for every change in the variable you are observing, you get the same change in the output of the measurement apparatus – this is highly desirable in scientific work. In general, instruments are close to linear over a certain useful range, and most useful within that range. In contrast, human senses are highly nonlinear – for instance, the brain totally ignores incoming light unless it exceeds a certain absolute threshold number of photons.
In electronics, the linear operating region of a device, for example a transistor, is where a dependent variable (such as the transistor collector current) is directly proportional to an independent variable (such as the base current). This ensures that an analog output is an accurate representation of an input, typically with higher amplitude (amplified). A typical example of linear equipment is a high fidelity audio amplifier, which must amplify a signal without changing its waveform. Others are linear filters, linear regulators, and linear amplifiers in general.
In most scientific and technological, as distinct from mathematical, applications, something may be described as linear if the characteristic is approximately but not exactly a straight line; and linearity may be valid only within a certain operating region—for example, a high-fidelity amplifier may distort a small signal, but sufficiently little to be acceptable (acceptable but imperfect linearity); and may distort very badly if the input exceeds a certain value, taking it away from the approximately linear part of the transfer function.
There are three basic definitions for integral linearity in common use: independent linearity, zero-based linearity, and terminal, or end-point, linearity. In each case, linearity defines how well the device's actual performance across a specified operating range approximates a straight line. Linearity is usually measured in terms of a deviation, or non-linearity, from an ideal straight line and it is typically expressed in terms of percent of full scale, or in ppm (parts per million) of full scale. Typically, the straight line is obtained by performing a least-squares fit of the data. The three definitions vary in the manner in which the straight line is positioned relative to the actual device's performance. Also, all three of these definitions ignore any gain, or offset errors that may be present in the actual device's performance characteristics.
Many times a device's specifications will simply refer to linearity, with no other explanation as to which type of linearity is intended. In cases where a specification is expressed simply as linearity, it is assumed to imply independent linearity.
Independent linearity is probably the most commonly used linearity definition and is often found in the specifications for DMMs and ADCs, as well as devices like potentiometers. Independent linearity is defined as the maximum deviation of actual performance relative to a straight line, located such that it minimizes the maximum deviation. In that case there are no constraints placed upon the positioning of the straight line and it may be wherever necessary to minimize the deviations between it and the device's actual performance characteristic.
Zero-based linearity forces the lower range value of the straight line to be equal to the actual lower range value of the device's characteristic, but it does allow the line to be rotated to minimize the maximum deviation. In this case, since the positioning of the straight line is constrained by the requirement that the lower range values of the line and the device's characteristic be coincident, the non-linearity based on this definition will generally be larger than for independent linearity.
For terminal linearity, there is no flexibility allowed in the placement of the straight line in order to minimize the deviations. The straight line must be located such that each of its end-points coincides with the device's actual upper and lower range values. This means that the non-linearity measured by this definition will typically be larger than that measured by the independent, or the zero-based linearity definitions. This definition of linearity is often associated with ADCs, DACs and various sensors.
A fourth linearity definition, absolute linearity, is sometimes also encountered. Absolute linearity is a variation of terminal linearity, in that it allows no flexibility in the placement of the straight line, however in this case the gain and offset errors of the actual device are included in the linearity measurement, making this the most difficult measure of a device's performance. For absolute linearity the end points of the straight line are defined by the ideal upper and lower range values for the device, rather than the actual values. The linearity error in this instance is the maximum deviation of the actual device's performance from ideal.
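To make the end-point (terminal) definition concrete, the following Java sketch (illustrative code with made-up calibration data, not from the article) draws a straight line through a device's first and last calibration points and reports the maximum deviation as a percent of full scale:

```java
public class TerminalLinearity {
    // x: applied input values, y: measured outputs (same length, at least 2 points).
    // Returns the maximum deviation from the end-point straight line, in % of full scale.
    static double terminalNonlinearityPercent(double[] x, double[] y) {
        int n = x.length;
        double slope = (y[n - 1] - y[0]) / (x[n - 1] - x[0]);
        double maxDev = 0.0;
        for (int i = 0; i < n; i++) {
            double ideal = y[0] + slope * (x[i] - x[0]);   // end-point straight line
            maxDev = Math.max(maxDev, Math.abs(y[i] - ideal));
        }
        double fullScale = Math.abs(y[n - 1] - y[0]);
        return 100.0 * maxDev / fullScale;
    }

    public static void main(String[] args) {
        double[] x = {0, 1, 2, 3, 4};
        double[] y = {0.00, 1.02, 2.05, 3.01, 4.00};   // slightly non-linear readings
        System.out.printf("Terminal non-linearity: %.2f%% of full scale%n",
                terminalNonlinearityPercent(x, y));
    }
}
```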
Military tactical formations
In military tactical formations, "linear formations" were adapted from phalanx-like formations of pike protected by handgunners towards shallow formations of handgunners protected by progressively fewer pikes. This kind of formation would get thinner until its extreme in the age of Wellington with the 'Thin Red Line'. It would eventually be replaced by skirmish order at the time of the invention of the breech-loading rifle that allowed soldiers to move and fire independently of the large-scale formations and fight in small, mobile units.
Linear is one of the five categories proposed by Swiss art historian Heinrich Wölfflin to distinguish "Classic", or Renaissance art, from the Baroque. According to Wölfflin, painters of the fifteenth and early sixteenth centuries (Leonardo da Vinci, Raphael or Albrecht Dürer) are more linear than "painterly" Baroque painters of the seventeenth century (Peter Paul Rubens, Rembrandt, and Velázquez) because they primarily use outline to create shape. Linearity in art can also be referenced in digital art. For example, hypertext fiction can be an example of nonlinear narrative, but there are also websites designed to go in a specified, organized manner, following a linear path.
In measurement, the term "linear foot" refers to the number of feet in a straight line of material (such as lumber or fabric) generally without regard to the width. It is sometimes incorrectly referred to as "lineal feet"; however, "lineal" is typically reserved for usage when referring to ancestry or heredity. The words "linear" & "lineal" both descend from the same root meaning, the Latin word for line, which is "linea".
- Linear actuator
- Linear element
- Linear system
- Linear medium
- Linear programming
- Linear differential equation
- Linear motor
- Linear A and Linear B scripts.
- Linear interpolation
16.1 Pole strength, magnetic dipole and magnetic dipole moment:
A magnet always has two poles `N' and `S'; like poles of two magnets repel each other and unlike poles of two magnets attract each other, and these forces form an action-reaction pair.
Fig: Magnetic poles
The poles of the same magnet do not come to meet each other despite this attraction; they remain separated. We cannot get two isolated poles by cutting the magnet through the middle: the other end becomes a pole of the opposite nature, so `N' and `S' always exist together.
Therefore, they are known as +ve and -ve poles. The north pole is treated as the positive pole (or positive magnetic charge) and the south pole is treated as the -ve pole (or -ve magnetic charge). They are quantitatively represented by their "POLE STRENGTH" +m and -m respectively (just like we have charges +q and -q in electrostatics). Pole strength is a scalar quantity and represents the strength of the pole (hence, of the magnet also).
A magnet can be treated as a dipole since it always has two opposite poles (just like in an electric dipole we have two opposite charges -q and +q). It is called a MAGNETIC DIPOLE, and its direction is from -m to +m (that means from `S' to `N').
Its magnetic dipole moment is M = m·lm, where lm is the magnetic length of the magnet. lm is slightly less than lg (the geometrical length of the magnet, i.e. the end-to-end distance), since `N' and `S' are not located exactly at the ends of the magnet. For calculation purposes we can assume lm ≈ lg (actually lm ≈ 0.84 lg).
The units of m and M will be mentioned afterwards, where they will be easier to remember and understand.
16.2 Magnetic field and strength of magnetic field:
The physical space around a magnetic pole has a special influence due to which another pole experiences a force. That special influence is called the MAGNETIC FIELD and that force is called the "MAGNETIC FORCE". This field is quantitatively represented by the "STRENGTH OF MAGNETIC FIELD" or "MAGNETIC INDUCTION" or "MAGNETIC FLUX DENSITY". It is represented by B. It is a vector quantity.
Definition of B: The magnetic force experienced by a north pole of unit pole strength at a point due to some other poles (called the source) is called the strength of the magnetic field at that point due to the source.
B = F/m, where F is the magnetic force on a pole of pole strength m; m may be +ve or -ve and of any value. The S.I. unit of B is tesla or weber/m^2 (abbreviated as T and Wb/m^2).
We can also write F = mB. According to this, the direction of F on a +ve pole (north pole) will be in the direction of the field, and on a -ve pole (south pole) it will be opposite to the direction of B.
The field generated by sources does not depend on the test pole (for any value and any sign of the test pole).
(A) B due to various sources:
(i) Due to a single pole:
(Similar to the case of a point charge in electrostatics)
B = (μ0/4π)(m/r^2) .....(23)
This is the magnitude.
Direction of B due to north pole and due to south poles are as shown.
In vector form, B = (μ0/4π)(m r / r^3) ...(24)
Here m is taken with its sign, and r = position vector of the test point with respect to the pole.
(ii) Due to a bar magnet:
(Same as the case of an electric dipole in electrostatics.) An isolated pole is never found; `N' and `S' always exist together as a magnet.
B at A (on the axis) = (μ0/4π)(2M/r^3) for a << r ...(25)
B at B (on the equatorial line) = (μ0/4π)(M/r^3), directed opposite to M, for a << r ...(26)
At a general point:
Br = (μ0/4π)(2M cos θ / r^3)
Bθ = (μ0/4π)(M sin θ / r^3)
Bres = √(Br^2 + Bθ^2) = (μ0/4π)(M/r^3)·√(1 + 3 cos^2 θ) ...(27 (a))
tan α = Bθ/Br = (tan θ)/2 ...(28 (b))
Ex. 36: Find the magnetic force on a short magnet of magnetic dipole moment M2 due to another short magnet of magnetic dipole moment M1.
Ans: To find the magnetic force we will use the formula of `B' due to a magnet. We will also assume m and -m as pole strengths of `N' and `S' of M2. Also length of M2 as 2a. B1 and B2 are the strengths of the magnetic field due to M1 at +m and -m respectively. They experience magnetic forces F1 and F2 as shown.
F1 = mB1 = (μ0/4π)·(2M1 m)/(r - a)^3 and F2 = mB2 = (μ0/4π)·(2M1 m)/(r + a)^3
Therefore, Fres = F1 - F2 = (μ0/4π)·2M1 m [1/(r - a)^3 - 1/(r + a)^3]
By using the binomial expansion, and neglecting terms of higher power, we get
Fres = (μ0/4π)·2M1 m·(6a/r^4) = (μ0/4π)·6M1 (m·2a)/r^4 = (μ0/4π)·(6M1 M2)/r^4
Direction of Fres is towards right.
Alternative method: B = (μ0/4π)(2M1/x^3) ⇒ dB/dx = -(μ0/4π)(6M1/x^4)
F = M2 (dB/dx) in magnitude ⇒ F = (μ0/4π)(6M1M2/x^4)
Ex. 37: Two short magnets A and B of magnetic dipole moments M1 and M2 respectively are placed as shown. The axis of `A' and the equatorial line of `B' are the same. Find the magnetic force on one magnet due to the other.
Ans: upwards on M1
downwards on M2
Ex 38. A magnet is 10 cm long and its pole strength is 120 CGS units (1 CGS unit of pole strength = 0.1 A-m). Find the magnitude of the magnetic field B at a point on its axis at a distance 20 cm from it.
Ans: The pole strength is m = 120 CGS units = 12 A-m
Magnetic length is 2l = 10 cm or l = 0.05 m
Distance from the magnet is d = 20 cm = 0.2 m. The field B at a point in end-on position is
B = (μ0/4π)·(2Md)/(d^2 - l^2)^2, with M = m·(2l) = 12 × 0.1 = 1.2 A-m^2
= (10^-7) × (2 × 1.2 × 0.2)/(0.2^2 - 0.05^2)^2 ≈ 3.4 × 10^-5 T
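A quick numerical cross-check of Ex. 38 in Java (an illustrative sketch only; the formula and numbers are the ones used above):

```java
public class BarMagnetAxialField {
    public static void main(String[] args) {
        double mu0Over4Pi = 1e-7;        // T·m/A
        double m = 12.0;                 // pole strength, A-m
        double halfLength = 0.05;        // l in metres (the magnet is 10 cm long)
        double d = 0.20;                 // distance from the centre, metres

        double M = m * 2 * halfLength;   // dipole moment = 1.2 A-m^2
        // Field at an end-on (axial) point, without the short-magnet approximation:
        double B = mu0Over4Pi * 2 * M * d / Math.pow(d * d - halfLength * halfLength, 2);
        System.out.printf("B ≈ %.2e T%n", B);   // ≈ 3.4e-5 T, as in the example
    }
}
```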
Ex. 39: Find the magnetic field due to a dipole of magnetic moment 1.2 A-m2 at a point 1 m away from it in a direction making an angle of 60° with the dipole-axis.
Ans: The magnitude of the field is
B = (μ0/4π)·(M/r^3)·√(1 + 3 cos^2 θ) = (10^-7) × (1.2/1^3) × √(1 + 3 cos^2 60°) ≈ 1.6 × 10^-7 T
The direction of the field makes an angle α with the radial line, where
tan α = (tan θ)/2 = (tan 60°)/2 = √3/2
Ex. 40: Figure shows two identical magnetic dipoles a and b of magnetic moments M each, placed at a separation d, with their axes perpendicular to each other. Find the magnetic field at the point P midway between the dipoles.
Ans: The point P is in the end-on position for dipole a and in the broadside-on position for dipole b. The magnetic field at P due to a is Ba = (μ0/4π)·2M/(d/2)^3 = 4μ0M/(π d^3), along the axis of a, and that due to b is Bb = (μ0/4π)·M/(d/2)^3 = 2μ0M/(π d^3), parallel to the axis of b as shown in the figure. The resultant field at P is, therefore, B = √(Ba^2 + Bb^2) = 2√5·μ0M/(π d^3).
The direction of this field makes an angle a with Ba such that tan a = Bb/Ba = 1/2.
16.3 Magnet in an external uniform magnetic field:
(same as case of electric dipole)
Fres = 0 (for any angle)
τ = MB sin θ
* here θ is the angle between M and B
The torque τ acts such that it tries to make M align with B.
τ is the same about every point of the dipole. Its potential energy is U = -MB cos θ = -M·B
θ= 0° is stable equilibrium
θ = 180° is the unstable equilibrium
for small θ the dipole performs SHM about the θ = 0° position
τ = - MB sin θ;
I α = - M B sin θ
α = -(MB/I) sin θ
for small θ, sin θ ≈ θ
Angular frequency of SHM
ω = √(MB/I) ⇒ T = 2π√(I/MB)
here I = Icm if the dipole is free to rotate
= Ihinge if the dipole is hinged
Ex. 41: A bar magnet having a magnetic moment of 1.0 × 10^4 J/T is free to rotate in a horizontal plane. A horizontal magnetic field B = 4 × 10^-5 T exists in the space. Find the work done in rotating the magnet slowly from a direction parallel to the field to a direction 60° from the field.
Ans: The work done by the external agent = change in potential energy
= (-MB cosθ2) - (-MB cosθ1)
= - MB (cos60° - cos 0°)
= (1/2) × (1.0 × 10^4 J/T) × (4 × 10^-5 T) = 0.2 J
Ex. 42: A magnet of magnetic dipole moment M is released in a uniform magnetic field of induction B from the position shown in the figure.
(i) Its kinetic energy at θ = 90°
(ii) its maximum kinetic energy during the motion.
(iii) Will it perform SHM? Oscillation? Periodic motion? What is its amplitude?
Ans: (i) Apply energy conservation at θ = 120° and θ = 90°
Total energy at θ = 120°: E = -MB cos 120° + 0
Total energy at θ = 90°: E = -MB cos 90° + (K.E.)
Equating the two: K.E. = -MB cos 120° = MB/2  Ans.
(ii) K.E. will be maximum where P.E. is minimum. P.E. is minimum at θ = 0º.
Now apply energy conservation between θ = 120º and θ = 0º.
-MB cos 120º + 0 = -MB cos 0º + (KE)max
⇒ (KE)max = MB cos 0º - MB cos 120º = MB + MB/2 = 3MB/2  Ans.
That the K.E. is maximum at θ = 0° can also be shown by the torque method. From θ = 120° to θ = 0° the torque always acts on the dipole in the same direction (here it is clockwise), so its K.E. keeps on increasing till θ = 0°. Beyond that, the torque reverses its direction and the K.E. starts decreasing. Therefore, θ = 0° is the orientation of M with the maximum K.E.
(iii) Since θ is not small, the motion is not S.H.M., but it is oscillatory and periodic; the amplitude is 120°.
Ex. 43: A bar magnet of mass 100 g, length 7.0 cm, width 1.0 cm and height 0.50 cm takes π/2 seconds to complete an oscillation in an oscillation magnetometer placed in a horizontal magnetic field of 25 μT.
(a) Find the magnetic moment of the magnet.
(b) If the magnet is put in the magnetometer with its 0.50 cm edge horizontal, what would be the time period?
Ans: (a) The moment of inertia of the magnet about the axis of rotation is
I = (m/12)(L^2 + b^2) = (0.100/12) × [(7 × 10^-2)^2 + (1 × 10^-2)^2] kg-m^2 ≈ 4.2 × 10^-5 kg-m^2
We have, T = 2π√(I/MB) ...(i)
or, M = 4π^2 I/(B T^2) = [4π^2 × 4.2 × 10^-5] / [(25 × 10^-6) × (π/2)^2] ≈ 27 A-m^2
(b) In this case the moment of inertia becomes
I' = (m/12)(L^2 + b'^2), where b' = 0.5 cm.
The time period would be
T' = 2π√(I'/MB) ...(ii)
Dividing by equation (i),
T'/T = √(I'/I) = √[(7^2 + 0.5^2)/(7^2 + 1^2)] = 0.992
or, T' = 0.992 × (π/2) s ≈ 0.496π s.
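As a cross-check of the arithmetic in Ex. 43, here is a short Java sketch (illustrative only, reusing the numbers from the example) that computes the magnetic moment from the observed period and then the new period for part (b):

```java
public class OscillationMagnetometer {
    public static void main(String[] args) {
        double mass = 0.100;            // kg
        double L = 7.0e-2, b = 1.0e-2;  // length and width in metres
        double bPrime = 0.5e-2;         // height, used as the new "width" in part (b)
        double B = 25e-6;               // horizontal field in tesla
        double T = Math.PI / 2;         // observed period in seconds

        // Moment of inertia of a rectangular bar about a vertical axis through its centre.
        double I = mass * (L * L + b * b) / 12.0;

        // From T = 2*pi*sqrt(I/(M*B))  =>  M = 4*pi^2*I / (B*T^2)
        double M = 4 * Math.PI * Math.PI * I / (B * T * T);
        System.out.printf("Magnetic moment M ≈ %.1f A-m^2%n", M);   // ≈ 27, as in the example

        // Part (b): new moment of inertia and period
        double Iprime = mass * (L * L + bPrime * bPrime) / 12.0;
        double Tprime = T * Math.sqrt(Iprime / I);
        System.out.printf("New period T' ≈ %.3f s (≈ 0.496*pi s)%n", Tprime);
    }
}
```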
16.4 Magnet in an External Non-uniform Magnetic field:
No special formulas are applied in such problems. Instead, find the force on the individual poles and calculate the resultant force and torque on the dipole.
- Force due to Non-uniform Magnetic field:
- If a source of magnetic moment has dimensions much smaller than the distance to the point of application, then we can replace it with a magnet (point dipole) of equal magnetic moment.
17. TERRESTRIAL MAGNETISM:
Earth is a natural source of magnetic field.
17.1 Elements of the Earth's Magnetic Field:
The earth's magnetic field at a point on its surface is usually characterised by three quantities :
(a) declination, (b) inclination or dip, and
(c) horizontal component of the field. These are known as the elements of the earth's magnetic field.
A plane passing through the geographical poles (that is, through the axis of rotation of the earth) and a given point P on the earth's surface is called the geographical meridian at the point P. Similarly, the plane passing through the geomagnetic poles (that is, through the dipole-axis of the earth) and the point P is called the magnetic meridian at the point P.
The angle made by the magnetic meridian at a point with the geographical meridian is called the declination at that point.
(b) Inclination or dip:
The angle made by the earth's magnetic field with the horizontal direction in the magnetic meridian, is called the inclination or dip at that point.
(c) Horizontal component of the earth's magnetic field:
As the name indicates, the horizontal component is the component of the earth's magnetic field in the horizontal direction in the magnetic meridian. This direction is towards the magnetic north.
Figure shows the three elements. Starting from the geographical meridian we draw the magnetic meridian at an angle θ (declination). In the magnetic meridian we draw the horizontal direction specifying magnetic north. The magnetic field is at an angle δ (dip) from this direction. The horizontal component BH and the total field B are related as
BH = B cos δ
or, B = BH / cos δ
Thus, from the knowledge of the three elements, both the magnitude and direction of the earth's magnetic field can be obtained.
Ex. 45 The horizontal component of the earth's magnetic field is 3.6 × 10-5 T where the dip is 60º. Find the magnitude of the earth's magnetic field.
Ans: We have BH = B cos δ, so B = BH / cos δ = (3.6 × 10^-5 T)/(cos 60°) = 7.2 × 10^-5 T.
Word problems (or story problems) allow kids to apply what they have learned in math class to real-world situations. Word problems build higher-order thinking, critical problem-solving, and reasoning skills.
Decimal Addition & Subtraction. Find sums and differences for pairs of decimals on these worksheets. These practice pages have decimals in tenths, hundredths, and thousandths.
Money Addition. These printables have horizontal and vertical money addition problems, as well as word problems.
2-Digit Addition (No Regrouping). The addition worksheets on this page have no regrouping or carrying. Approx. levels: 1st grade, 2nd grade.
Volume flow rate
In physics and engineering, in particular fluid dynamics, the volumetric flow rate (also known as volume flow rate, rate of fluid flow, or volume velocity) is the volume of fluid which passes per unit time; usually it is represented by the symbol Q (sometimes V̇). The SI unit is cubic metres per second (m3/s). Another unit used is standard cubic centimetres per minute (SCCM). In hydrometry, it is known as discharge.
Volumetric flow rate should not be confused with volumetric flux, as defined by Darcy's law and represented by the symbol q, with units of m3/(m2·s), that is, m·s−1. The integration of a flux over an area gives the volumetric flow rate.
That is, the flow of volume of fluid V through a surface per unit time t.
Since this is only the time derivative of volume, a scalar quantity, the volumetric flow rate is also a scalar quantity. The change in volume is the amount that flows after crossing the boundary for some time duration, not simply the initial amount of volume at the boundary minus the final amount at the boundary, since the change in volume flowing through the area would be zero for steady flow.
Volumetric flow rate can also be defined by:
The above equation is only true for flat, plane cross-sections. In general, including curved surfaces, the equation becomes a surface integral:
This is the definition used in practice. The area required to calculate the volumetric flow rate is real or imaginary, flat or curved, either as a cross-sectional area or a surface. The vector area is a combination of the magnitude of the area through which the volume passes through, A, and a unit vector normal to the area, n̂. The relation is A = An̂.
where θ is the angle between the unit normal n̂ and the velocity vector v of the substance elements. The amount passing through the cross-section is reduced by the factor cos θ. As θ increases, less volume passes through. Substance which passes tangential to the area, that is perpendicular to the unit normal, does not pass through the area. This occurs when θ = π/2, and so this amount of the volumetric flow rate is zero:
These results are equivalent to the dot product between velocity and the normal direction to the area.
When the mass flow rate ṁ is known, and the density ρ can be assumed constant, this is an easy way to get Q: Q = ṁ/ρ.
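For example, a short Java sketch (illustrative, with made-up numbers) computing a volumetric flow rate both from the velocity through a flat cross-section, Q = vA cos θ, and from a known mass flow rate with constant density, Q = ṁ/ρ:

```java
public class VolumetricFlowRate {
    // Q through a flat cross-section: speed * area * cos(theta),
    // where theta is the angle between the velocity and the area's unit normal.
    static double fromVelocity(double speed, double area, double thetaRad) {
        return speed * area * Math.cos(thetaRad);
    }

    // Q from a known mass flow rate (kg/s) and a constant density (kg/m^3).
    static double fromMassFlow(double massFlowRate, double density) {
        return massFlowRate / density;
    }

    public static void main(String[] args) {
        // Water (about 1000 kg/m^3) moving at 2 m/s straight through a 0.01 m^2 pipe section.
        double q1 = fromVelocity(2.0, 0.01, 0.0);
        double q2 = fromMassFlow(20.0, 1000.0);   // 20 kg/s of water
        System.out.printf("Q from velocity:  %.3f m^3/s%n", q1);  // 0.020
        System.out.printf("Q from mass flow: %.3f m^3/s%n", q2);  // 0.020
    }
}
```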
In internal combustion engines, the time area integral is considered over the range of valve opening. The time lift integral is given by:
where T is the time per revolution, R is the distance from the camshaft centreline to the cam tip, r is the radius of the camshaft (that is, R − r is the maximum lift), θ1 is the angle where opening begins, and θ2 is where the valve closes (seconds, mm, radians). This has to be factored by the width (circumference) of the valve throat. The answer is usually related to the cylinder's swept volume.
- Air to cloth ratio
- Discharge (hydrology)
- List of rivers by discharge
- List of waterfalls by flow rate
- Flow measurement
- Orifice plate
- Poiseuille's law
- Stokes flow
- Weir#Flow measurement
The outer reaches of the Milky Way galaxy are a different place. Stars are much harder to come by, with most of this “galactic halo” being made up of empty space. But scientists theorize that there is an abundance of one particular thing in this desolate area – dark matter. Now, a team from Harvard and the University of Arizona (UA) spent some time studying and modeling one of the galaxy’s nearest neighbors to try to tease out more information about that dark matter, and as a result came up with an all new way to look at the halo itself.
The neighbor they used is the Large Magellanic Cloud (LMC), a satellite galaxy of the Milky Way made up of several billion stars. It is positioned such that it floats around the outer reaches of the halo, where it creates a “wake” as it moves, similar to how a boat creates a wake when it travels through water.
Given the paucity of normal matter in the halo, the wake is made through dark matter, which interacts with the universe only through the influence of gravity. Tracking the progress of the LMC through the halo, the UA and Harvard teams were able to discern an outline of the dark matter wake by utilizing a tool they created – the first ever detailed star map of the outer halo.
That map required some inventive sleuthing to determine which stars were actually separate from the Milky Way or the LMC. The team used a two-tiered approach, first analyzing data from Gaia, which can accurately pinpoint stars’ locations but not their distances, and then combining it with data from NEOWISE, which looked at a specific type of giant star and supplied the data needed to determine distances.
The resulting star map starts about 200,000 light years away from the center of the Milky Way and continues to about 325,000 light years beyond it. This swath of the outer halo is also the same area the LMC is moving through, and the Harvard team that originally developed the map contacted the UA team who had separately come up with a model for predicting how dark matter would look in the galactic halo.
The combined team found that one of UA’s models accurately predicted the dispersion of stars in the map the Harvard team had developed. UA’s model used the popular dark matter theory known as “cold dark matter”, and while it seemed to fit the star profile pretty well, there was some room for improvement. The UA team is continuing to tweak the model to see if they can get a better fit to the observed star pattern.
One outcome of the combined model and star map is more information about the LMC itself. It appears that the LMC is just completing its first orbit around the Milky Way after being formed in the M31 galaxy more than 13 billion years ago. Eventually it will merge with the Milky Way itself, though only after another few billion years of spiraling around it.
That dance offers insight into how galaxies merge more generally, and the combined model and map seem to confirm the general theory of how those mergers happen. With the better understanding of dark matter’s effects that this paper provides, it will become easier than ever to model these gigantic galactic mergers.
JPL – Astronomers Release New All-Sky Map of Milky Way’s Outer Reaches
Nature – All-sky dynamical response of the Galactic halo to the Large Magellanic Cloud
UA – Astrophysicists Help Chart Dark Matter’s Invisible ‘Ocean’
ScienceDaily – The Milky Way galaxy has a clumpy halo
UT – Decaying Dark Matter Should be Visible Here in the Milky Way as a Halo Around the Galaxy |
In this course, we are going to learn what a linked list is and how to implement one using Java. We will code real implementations of these data structures and solve problems with them in Java, and you will also learn how to debug your Java code in an IDE.
In addition, there are plenty of drawings to help you visualize the structures and get comfortable with coding a linked list data structure. The material is presented in such a way that an average person without a CS background, but with adequate knowledge of Java, can master it. You will also become comfortable enough to carry these visualization techniques over to other data structures such as hash tables, trees, graphs, and many more.
The course has been designed to help you tackle technical interview questions on linked lists and to support college and curious students struggling to understand the concept. 24-hour assistance is provided to all students in need of help.
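As a rough taste of the style of code written throughout the course, here is a minimal sketch of a singly linked list with an insert-at-front operation in Java. This is an illustrative example only, not the instructor's actual implementation:

```java
// Minimal singly linked list sketch (illustrative only; not the course's exact code).
public class SinglyLinkedList {
    // Each node stores a value and a reference to the next node.
    private static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    private Node head;   // first node, or null if the list is empty
    private int size;    // cached size, so size() runs in O(1)

    // Insert a new node at the front of the list in O(1) time.
    public void insertFront(int data) {
        Node node = new Node(data);
        node.next = head;
        head = node;
        size++;
    }

    public boolean isEmpty() { return head == null; }

    public int size() { return size; }

    // Linear search: returns true if the value is present.
    public boolean contains(int data) {
        for (Node cur = head; cur != null; cur = cur.next) {
            if (cur.data == data) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        SinglyLinkedList list = new SinglyLinkedList();
        list.insertFront(3);
        list.insertFront(2);
        list.insertFront(1);
        System.out.println(list.size());      // 3
        System.out.println(list.contains(2)); // true
        System.out.println(list.isEmpty());   // false
    }
}
```

Inserting at the front is O(1) because only the head reference changes; the lessons below walk through these operations (and more) step by step.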
Learn how to simplify the concept of a linked list in Java programming so you can fully understand it. Free e-book/notes are included in the resource folder.
Learn how to insert nodes into a singly linked list using Java programming
Learn how to delete nodes from a linked list using the runner technique in Java programming
Learn how to search for nodes in a linked list in Java programming
Learn one of the fastest ways to get the total size of a linked list in Java programming
Learn how to check if a linked list is empty using Java programming
Learn the concepts of using doubly linked lists in Java programming
Learn how to insert at the front and back of a doubly linked list in Java programming
Display the data in the nodes forwards and backwards using Java programming
Learn how to use a doubly linked list to determine if a string is a palindrome in Java programming
A simple quiz to test your knowledge on the subject
Learn the concept of a double-ended list in Java programming
Learn how to insert into a double-ended list in Java programming
Learn how to traverse a double-ended list
Learn how to delete the last element in a double-ended list
Learn how to delete all nodes in a linked list
Learn the concept of a circular linked list
Learn how to insert into a circular linked list
Learn how to traverse and display node data in a circular linked list
Learn how to make your program find data in a circular linked list
Let's test what you know so far on circular linked lists
If you don't want to use an IDE, this video shows you how to run your Java program using just the terminal and a text editor on a Mac
Running your linked list code using the terminal (macOS and Linux only)
This quiz consists of questions related to all sections of the course
Join over 3,000 students learning how to improve their programming skills
Esther is a mobile/web developer. She spends most of her time as a teaching assistant for courses on data structures and algorithms. She also mentors thousands of students in a MOOC on game development using Unity and C#. In her free time, she enjoys teaching online and helping people learn how to code. |
Exploring the Cosmic Symphony
The Doppler Effect is a fascinating phenomenon that has been observed and studied for over a century. This effect describes the shift in frequency of a wave, such as sound or light, that results from the relative motion between a wave source and an observer. This effect has a wide range of applications and is important in many fields, including physics, astronomy, and even weather forecasting. In this article, we will delve into the phenomenon of the Doppler Effect in astronomy and its significance in our understanding of the universe.
In astronomy, the Doppler Effect is commonly used to determine the velocity of celestial objects, such as stars and galaxies. By observing the shift in the spectral lines of light emitted by these objects, astronomers can calculate the radial velocity of the object, which is the component of its velocity along the line of sight. This information is critical for understanding the motion of celestial objects and their relationship to each other.
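As a simple illustration, using the standard non-relativistic approximation and made-up numbers rather than anything quoted in this article: if a spectral line with rest wavelength λ₀ = 656.3 nm (hydrogen-alpha) is observed at 656.5 nm, the shift is Δλ = 0.2 nm and the radial velocity is roughly

v ≈ c · Δλ/λ₀ ≈ (3.0 × 10⁵ km/s) × (0.2 / 656.3) ≈ 91 km/s,

with the positive (redshifted) value indicating motion away from the observer.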
One of the most important applications of the Doppler Effect in astronomy is in the study of stars. By measuring the Doppler shift of the spectral lines of light emitted by stars, astronomers can determine their velocity and direction of motion. This information is crucial for understanding the dynamics of star systems and the formation and evolution of galaxies.
Another application of the Doppler Effect in astronomy is in the study of galaxies and the large-scale structure of the universe. By measuring the Doppler shift of the spectral lines of light emitted by galaxies, astronomers can determine the radial velocity of the galaxy and its position relative to other galaxies. This information is critical for mapping the large-scale structure of the universe and studying the evolution of galaxies.
The Doppler Effect is employed in the study of planetary systems in addition to its uses in star and galaxy studies. Astronomers can ascertain the velocity of the planet and its orbit around its parent star by measuring the Doppler shift of the spectral lines of light emitted by exoplanets. This knowledge is essential for figuring out how planetary systems arise and evolve as well as for finding exoplanets that might harbour life.
In addition to its applications in physics and astronomy, the Doppler Effect also has practical applications in our daily lives. For example, when an ambulance approaches with its siren on, the sound of the siren appears to get higher in pitch as it approaches and then lower in pitch as it moves away. This is because the sound waves from the siren are compressed or stretched, respectively, as the ambulance moves towards or away from the observer.
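The size of that pitch change can be estimated with the standard Doppler formula for a moving source and a stationary observer (illustrative numbers assumed here, not taken from the article): with a 700 Hz siren, an ambulance moving at 25 m/s, and a speed of sound of 343 m/s, the observed frequency is about 700 × 343/(343 − 25) ≈ 755 Hz while approaching and 700 × 343/(343 + 25) ≈ 652 Hz while receding.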
In conclusion, the Doppler Effect is a fascinating and important phenomenon with a wide range of applications. It is one of the most crucial phenomena in astronomy, providing astronomers with important information about the motion and velocity of celestial objects. Its applications range from the study of stars and galaxies to the study of planetary systems and the large-scale structure of the universe. The Doppler Effect continues to play a vital role in our understanding of the cosmos and the evolution of the universe. |
LESSON PLAN in Separating Mixtures, Physical Properties, Chemical Change, Introduction, Physical Change, History, Lab Safety, Measurements, Significant Figures, SI Units, Chemical Properties. Last updated September 17, 2020.
The AACT High School Classroom Resource library and multimedia collection has everything you need to put together a unit plan for your classroom: lessons, activities, labs, projects, videos, simulations, and animations. We searched through our resource library and constructed a unit plan for introducing the basic chemistry concepts to students: Laboratory Safety, Equipment, and Reports; Periodic Table Basics; Physical and Chemical Properties and Changes; Endothermic and Exothermic Changes; and Classification of Matter. These topics are very important for your students to master before they dig into other chemistry concepts. This unit is designed to be used at the beginning of the school year.
By the end of this unit, students should be able to
- Distinguish between safe and unsafe behavior in the chemistry laboratory.
- Understand the importance of following safety rules in a chemistry laboratory.
- Responsibly follow safety guidelines presented in a chemistry laboratory.
- Correctly identify and name common pieces of laboratory equipment.
- Associate a hazard symbol with its meaning.
- Understand the importance of hazard symbols.
- Write a formal lab write up.
- Determine a method to measure the mass and volume of an irregular object.
- Accurately use laboratory equipment to gather data.
- Calculate the density of an irregular object using their data.
- Create a graph of mass vs. volume using class data and use the slope of the line to calculate the average density of the objects (a short worked example follows this list).
- Create a lab report using tables and graphs, following a provided template.
- Explain how the accuracy of a measurement will change depending upon the measuring tool used to measure.
- Determine the correct measurement based on the markings on the device used.
- Identify the uncertainty value for a measurement based on the markings on a measurement device used.
- Distinguish between the states of matter at the particle level.
- Explain, using examples, how matter is different in one state versus another.
- Identify examples of different states of matter.
- Classify the three states of matter found in the laboratory by molecular level particle representations.
- Identify differences in the particle representations to classify them as pure substances, both elements and compounds, as well as mixtures.
- Verbally explain the classification system their group developed.
- Understand vocabulary related to chemistry.
- Identify whether a physical or chemical change has occurred.
- Provide evidence supporting which change has occurred.
- Identify the physical properties of substances.
- Identify appropriate methods for separating mixtures.
- Better understand how the periodic table is organized.
- Classify elements by family name, group number and period number.
- Identify reactions as either endothermic or exothermic.
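As a short worked example of the density objective above (sample numbers assumed, not taken from any of the linked activities): if two aluminum samples have masses of 27.0 g and 54.0 g and volumes of 10.0 cm³ and 20.0 cm³, the slope of the mass-vs.-volume line is

slope = (54.0 g − 27.0 g) / (20.0 cm³ − 10.0 cm³) = 2.70 g/cm³,

which matches the accepted density of aluminum.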
This unit supports students’ understanding of
- Laboratory Safety
- Laboratory Equipment
- Data collection
- Graphing and analyzing data
- Quantitative Chemistry
- SI Units
- Significant Figures
- States of Matter
- Molecular Motion
- Physical Properties
- States of matter
- Pure substances
- Separating mixtures
- Periodic Table
- Evidence of a chemical reaction
- Physical changes and properties
- Chemical changes and properties
- Endothermic/exothermic reactions
Teacher Preparation: See individual resources.
Lesson: 10 - 15 class periods, depending on class level.
- Refer to the materials list given with each individual activity.
- Refer to the safety instructions given with each individual activity.
- This unit plan begins with an introduction to the laboratory that includes safety guidelines, equipment, and lab reports. It then moves through activities that introduce other chemistry basics topics. You can use this unit plan as written, or change the order to meet the needs of your students.
- The teacher notes, student handouts, and additional materials can be accessed on the page for each individual activity.
- Please note that most of these resources are AACT member benefits.
- Laboratory safety is an important topic for teachers of chemistry, many of whom may not have had access to enough training. Before planning your lab activities and demos, read Keeping the Wow Factor and Controlling the Risks, an article from the September 2016 issue of Chemistry Solutions. This article reviews the safety problems inherent in the “traditional” rainbow experiment and similar demonstrations, includes the responses from various organizations, and also provides safer alternatives.
- Another article that you might find helpful is There’s More to the New Safety Data Sheets than a Missing “M” from the May 2017 issue. This article aims to increase your comfort level with the new SDSs by describing the timeline, some changes over the past 5 years, the pros and cons of the newer format, related hazard communication issues, and information on other available resources.
- Finally, if you’re unsure of what to do with laboratory waste, read Managing Chemical Wastes in the High School Lab from the May 2016 issue. This article provides a solid starting point to determine proper disposal methods for high school lab waste and includes a Quick Disposal Reference Guide that you can download, print, and hang in your chemical prep room.
- Teachers should refer to the ACS published document, Guidelines for Chemical Laboratory Safety in Secondary Schools, as a great resource for teaching lab safety in the chemistry classroom. Additional specific information that could be used when teaching hazard symbols can be found starting on page 21.
Laboratory Safety, Equipment, and Reports
- Use one or more of the videos from the ACS Chemical Safety video series to introduce important safety concepts to your students. This series includes five student videos and one video for teacher use. All of the videos are unlocked so that your students can access them. The RAMP lesson plan includes several activities to help students become more knowledgeable and better able to assess risks and hazards in the lab. Student videos include:
- The final video, RAMP for Teachers, outlines steps you can take to make sure your students are as safe as possible while exploring and experimenting in the lab.
- Use the activity, What Not to do in the Chemistry Lab to introduce laboratory safety and best practices and discuss these important topics with your students. During the activity students examine a cartoon of a chaotic chemistry lab and note the specific behaviors that are dangerous and unsafe in a chemistry laboratory setting.
- The Hazard Symbols activity from the May 2018 issue of Chemistry Solutions is a great way to familiarize your students with common hazard symbols and their meaning. As optional extensions to this activity, use chemical containers that bear these hazard symbols as examples for your students. You could also share an example of an MSDS with students as part of a discussion about these hazard symbols.
- Then use the activity, Analyzing & Creating Safety Labels to further help your students understand the color and symbols on the Safety Diamond and apply their knowledge to interpreting a chemical label.
- Another option is the Lab Safety and Safety Data Sheets (SDS) activity that helps students learn how to identify safe lab practices with a focus on labeling and background safety information for reagents used in the lab.
- Many chemistry students come to the class with a limited understanding of how to write high quality lab reports. Before your students start working in the lab, read Tools and Strategies for Teaching Lab Report Writing from the September 2016 issue of Chemistry Solutions for some ideas that will help your students with their reports. This article includes an accompanying lab, Investigating the Density of an Irregular Solid Object. You may want to use the Density animation to introduce this concept to your students.
- Use our How To Write a Formal Lab lesson plan to teach students how to put the parts of a formal lab report together. Having students familiarize themselves with this format will expedite teacher grading.
- Introduce your students to basic lab equipment with the activity, The Essentials for Survival. In addition to learning about equipment, students will model appropriate group work and class discussions and practice writing efficient Claim-Evidence-Reasoning (CER) reports.
- The Laboratory Equipment Memory Game from the March 2018 issue of Chemistry Solutions is a fun and effective activity to assess your students’ understanding of common lab equipment. This resource includes a card template that you can use to print the game cards on stock paper. You can reuse the cards from year to year by laminating them and then storing them in zip-lock bags.
- Then use the Laboratory Equipment Bingo activity to see how well your students know common equipment that can be found in a chemistry lab. This activity includes 32 unique bingo cards and a PowerPoint presentation with pictures of 49 pieces of lab equipment.
- You may want to start off the year showing one of the videos from our multimedia library.
- The Ancient Chemistry Video traces the history of chemistry from the discovery of fire, through the various metal ages, and finally to the great philosophers. This resource includes an activity sheet for students to use while viewing the video.
- You might prefer to pique your students’ interest in chemistry with the Arsenic Video and hear Sam Kean, author of The Disappearing Spoon, tell stories about arsenic, a deadly element that was once referred to as the "Inheritance Powder". Additional videos about the elements can be found in our multimedia library.
- Another option is the Frontiers of Chemistry video that explores new scientific developments made possible by the application of fundamental chemistry concepts. This video also includes a student activity sheet.
- Students explore the properties of the states of matter with the Categorizing States of Matter activity, which has them analyze both written statements and images that describe the properties of a solid, liquid or gas. Students then determine which state of matter the description best describes and categorize it accordingly.
- The Visualizing States of Matter activity has students view, sort, and classify pure substances and mixtures into the 3 common states of matter found in the laboratory. They then discuss their classification system with their teacher and peers. This resource includes NGSS alignment.
- Assess your students’ understanding of the concept with a 10 question quiz in the States of Matter and Phase Changes simulation. This activity challenges students to identify the correct state of matter and connect it with an animated particle diagram.
- In the Chemical and Physical Changes Lab students observe and analyze a number of interactions and determine if a chemical or physical change occurred. This activity will help students understand vocabulary related to chemistry, identify whether a physical or chemical change has occurred, and provide evidence supporting which change has occurred.
- The Classifying Matter animation will help students become familiar with definitions and examples of several broad classifications of matter, including pure substances (elements and compounds) and mixtures (homogeneous and heterogeneous). The animation includes real-life examples as well as particles for each type of matter.
- Follow up with the activity, Elements, Compounds, & Mixtures – Oh My! to provide students with more practice with classification of matter and particle diagrams.
- Then use the Separation of a Mixture Lab to allow students to devise their own method to separate a mixture of sand, salt, poppy seeds, and iron filings after they identify the physical properties of each and identify appropriate methods for separating them.
- Introduce the Periodic Table to your students with The Scavenger Hunt lesson from our PTable.com Investigations activity. This lesson walks students through an investigation of a large number of topics, from physical properties to history of the elements.
- A second option is to use the video, How the Periodic Table Organizes the Elements from the American Chemical Society to introduce the basics to your students. The resource includes a student activity sheet for students to answer questions during the video and an answer key.
- Students determine whether mixing two chemicals is endothermic or exothermic in the Exothermic and Endothermic Lab. This is a quick, simple lab that allows students to witness endothermic and exothermic processes; one from a physical change, one from a chemical change.
- The Chemistry in a Bag lab will help students distinguish between chemical and physical changes, identify evidence of a chemical change, and understand the difference between exothermic and endothermic reactions. Students also use this lab to understand the Law of Conservation of Mass.
- Finish up the unit with the Chemistry Basics Advanced Crossword Puzzle to assess your students’ understanding of fundamental chemistry topics. There is also a version of the puzzle written for middle school students.
- If time permits, you may want to use one of the following resources as a culminating event for this introductory unit.
- In the activity, Lab Safety, You’re Fired!, students read an account of a laboratory tour that details numerous safety infractions. They are then charged with identifying the safety violations and determining which scientist working in the lab should be fired.
- Students research an actual industrial chemical accident in the project, Chemical Disasters: Good Chemicals Gone Bad! by examining the chemicals involved including uses, hazards, chemical and physical properties. |
Solar flares are powerful bursts of energy. The Sun has emitted a powerful (X-class) solar flare, which peaked at 12:52 PM on March 3, 2023. NASA’s Solar Dynamics Observatory, which continuously observes the Sun, took an image of the event.
Close-up photo of the Sun showing red and orange spots on a dark background. A bright yellow glow is seen from the Sun, showing where the explosion occurred.
Space agencies, researchers, and countries all need to better understand our turbulent Sun, a glowing sphere of gas that is capable of energetic outbursts from its surface. One common such event is a solar flare, an explosion that emits light and energy into space, sometimes toward Earth.
In the core of the Sun, hydrogen is converted into helium. This is called nuclear fusion. It takes four hydrogen atoms to fuse into each helium atom. During the process, some of the mass is converted into energy.
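To put rough numbers on that mass-to-energy conversion (standard textbook values, not figures from this article): four hydrogen-1 atoms have a combined mass of about 4 × 1.0078 u ≈ 4.0312 u, while a helium-4 atom has a mass of about 4.0026 u. The difference, roughly 0.0286 u (about 0.7% of the original mass), is released as energy: E = Δm·c² ≈ 0.0286 u × 931.5 MeV/u ≈ 26.7 MeV per helium nucleus formed.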
NASA’s Solar Dynamics Observatory captured this image of a solar flare—as seen in the bright flash at the upper right—on March 3, 2023. The photo shows a subset of the extreme ultraviolet light that illuminates boiling material in flares, colored orange.
This flare is classified as an X2.1 flare. The X-class refers to the most intense flares, while the number provides more information about its strength.
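For context, based on the standard NOAA GOES classification rather than details given in this post: flare classes A, B, C, M, and X each represent a tenfold increase in peak soft X-ray flux, with an X1 flare corresponding to 10⁻⁴ W/m² in the 1–8 Å band. The suffix is a multiplier, so an X2.1 flare peaks at about 2.1 × 10⁻⁴ W/m².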
Solar flares are powerful bursts of energy. Flares and solar eruptions can affect high-frequency (HF) radio communications, the electric power grid, and navigation signals, and pose a risk to spacecraft and astronauts. Fortunately, Earth's atmosphere shields those of us on the ground from most of the potential harm, though power grids and communications can still be severely disrupted.
To see how this type of space weather could affect Earth, please visit NOAA’s Space Weather Prediction Center at https://spaceweather.gov/, the US government’s official source for space weather forecasts, watches, warnings, and alerts.
NASA serves as the research arm of the nation’s space weather effort. NASA constantly watches the Sun and our space environment with a fleet of spacecraft that study everything from the Sun’s activity to the solar atmosphere and the particles and magnetic fields in space around Earth.
news source: https://blogs.nasa.gov/solarcycle25/2023/03/03/sun-releases-strong-solar-flare-5/ |
A giant star is a star with substantially larger radius and luminosity than a main-sequence (or dwarf) star of the same surface temperature. They lie above the main sequence (luminosity class V in the Yerkes spectral classification) on the Hertzsprung–Russell diagram and correspond to luminosity classes II and III. The terms giant and dwarf were coined for stars of quite different luminosity despite similar temperature or spectral type by Ejnar Hertzsprung about 1905.
Giant stars have radii up to a few hundred times that of the Sun and luminosities between 10 and a few thousand times that of the Sun. Stars still more luminous than giants are referred to as supergiants and hypergiants.
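Those numbers are tied together by the usual blackbody relation (a standard result, not something specific to this article): L/L☉ = (R/R☉)² · (T/T☉)⁴. For example, a giant with the Sun's surface temperature but 30 times its radius would be roughly 30² ≈ 900 times as luminous, while a cooler giant of the same radius would be correspondingly less luminous.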
A hot, luminous main-sequence star may also be referred to as a giant, but any main-sequence star is properly called a dwarf no matter how large and luminous it is.
A star becomes a giant after all the hydrogen available for fusion at its core has been depleted and, as a result, leaves the main sequence. The behaviour of a post-main-sequence star depends largely on its mass.
For a star with a mass above about 0.25 solar masses, once the core is depleted of hydrogen it contracts and heats up so that hydrogen starts to fuse in a shell around the core. The portion of the star outside the shell expands and cools, but with only a small increase in luminosity, and the star becomes a subgiant. The inert helium core continues to grow and increase in temperature as it accretes helium from the shell, but in all except the most massive stars it does not become hot enough at this stage to start helium burning (such higher-mass stars are supergiants and evolve differently). Instead, after just a few million years the core reaches the Schönberg–Chandrasekhar limit, rapidly collapses, and may become degenerate. This causes the outer layers to expand even further and generates a strong convective zone that brings heavy elements to the surface in a process called the first dredge-up. This strong convection also increases the transport of energy to the surface, the luminosity increases dramatically, and the star moves onto the red-giant branch, where it will stably burn hydrogen in a shell for a substantial fraction of its entire life (roughly 10% for a Sun-like star). The core continues to gain mass, contract, and increase in temperature, whereas there is some mass loss in the outer layers.
If the star's mass, when on the main sequence, was below a certain threshold (roughly half the Sun's mass), it will never reach the central temperatures necessary to fuse helium. It will therefore remain a hydrogen-fusing red giant until it runs out of hydrogen, at which point it will become a helium white dwarf. According to stellar evolution theory, no star of such low mass can have evolved to that stage within the age of the Universe.
In stars above this threshold, the core temperature eventually reaches 10⁸ K and helium begins to fuse to carbon and oxygen in the core by the triple-alpha process. When the core is degenerate, helium fusion begins explosively, but most of the energy goes into lifting the degeneracy and the core becomes convective. The energy generated by helium fusion reduces the pressure in the surrounding hydrogen-burning shell, which reduces its energy-generation rate. The overall luminosity of the star decreases, its outer envelope contracts again, and the star moves from the red-giant branch to the horizontal branch.
When the core helium is exhausted, a star that is not massive enough to ignite carbon has a carbon–oxygen core that becomes degenerate and starts helium burning in a shell. As with the earlier collapse of the helium core, this starts convection in the outer layers, triggers a second dredge-up, and causes a dramatic increase in size and luminosity. This is the asymptotic giant branch (AGB), analogous to the red-giant branch but more luminous, with a hydrogen-burning shell contributing most of the energy. Stars only remain on the AGB for around a million years, becoming increasingly unstable until they exhaust their fuel, go through a planetary nebula phase, and then become a carbon–oxygen white dwarf.
Main-sequence stars above a certain mass are already very luminous and move horizontally across the HR diagram when they leave the main sequence, briefly becoming blue giants before they expand further into blue supergiants. They start core-helium burning before the core becomes degenerate and develop smoothly into red supergiants without a strong increase in luminosity. At this stage they have comparable luminosities to bright AGB stars although they have much higher masses, but will further increase in luminosity as they burn heavier elements and eventually become a supernova. Stars in the mass range between these two groups have somewhat intermediate properties and have been called super-AGB stars. They largely follow the tracks of lighter stars through the RGB, HB, and AGB phases, but are massive enough to initiate core carbon burning and even some neon burning. They form oxygen–magnesium–neon cores, which may collapse in an electron-capture supernova, or they may leave behind an oxygen–neon white dwarf.
O class main sequence stars are already highly luminous. The giant phase for such stars is a brief phase of slightly increased size and luminosity before developing a supergiant spectral luminosity class. Type O giants may be more than a hundred thousand times as luminous as the sun, brighter than many supergiants. Classification is complex and difficult with small differences between luminosity classes and a continuous range of intermediate forms. The most massive stars develop giant or supergiant spectral features while still burning hydrogen in their cores, due to mixing of heavy elements to the surface and high luminosity which produces a powerful stellar wind and causes the star's atmosphere to expand.
A star whose initial mass is less than about 0.25 solar masses will not become a giant star at all. For most of their lifetimes, such stars have their interiors thoroughly mixed by convection and so they can continue fusing hydrogen for a time in excess of 10¹² years, much longer than the current age of the Universe. They steadily become hotter and more luminous throughout this time. Eventually they do develop a radiative core, subsequently exhausting hydrogen in the core and burning hydrogen in a shell surrounding the core. (Stars near the upper end of this mass range may expand at this point, but will never become very large.) Shortly thereafter, the star's supply of hydrogen will be completely exhausted and it will become a helium white dwarf. Again, the universe is too young for any such stars to be observed.
There are a wide range of giant-class stars and several subdivisions are commonly used to identify smaller groups of stars.
See main article: Subgiant. Subgiants are an entirely separate spectroscopic luminosity class (IV) from giants, but share many features with them. Although some subgiants are simply over-luminous main-sequence stars due to chemical variation or age, others are on a distinct evolutionary track towards true giants.
See main article: Bright giant. Another luminosity class is the bright giants (class II), differentiated from normal giants (class III) simply by being a little larger and more luminous. These have luminosities between the normal giants and the supergiants, around absolute magnitude -3.
See main article: Red giant. Within any giant luminosity class, the cooler stars of spectral class K, M, S, and C, (and sometimes some G-type stars) are called red giants. Red giants include stars in a number of distinct evolutionary phases of their lives: a main red-giant branch (RGB); a red horizontal branch or red clump; the asymptotic giant branch (AGB), although AGB stars are often large enough and luminous enough to get classified as supergiants; and sometimes other large cool stars such as immediate post-AGB stars. The RGB stars are by far the most common type of giant star due to their moderate mass, relatively long stable lives, and luminosity. They are the most obvious grouping of stars after the main sequence on most HR diagrams, although white dwarfs are more numerous but far less luminous.
See main article: Yellow giant star. Giant stars with intermediate temperatures (spectral class G, F, and at least some A) are called yellow giants. They are far less numerous than red giants, partly because they only form from stars with somewhat higher masses, and partly because they spend less time in that phase of their lives. However, they include a number of important classes of variable stars. High-luminosity yellow stars are generally unstable, leading to the instability strip on the HR diagram where the majority of stars are pulsating variables. The instability strip reaches from the main sequence up to hypergiant luminosities, but at the luminosities of giants it contains several classes of variable stars.
Yellow giants may be moderate-mass stars evolving for the first time towards the red-giant branch, or they may be more evolved stars on the horizontal branch. Evolution towards the red-giant branch for the first time is very rapid, whereas stars can spend much longer on the horizontal branch. Horizontal-branch stars, with more heavy elements and lower mass, are more unstable.
The blue giants are a very heterogeneous grouping, ranging from high-mass, high-luminosity stars just leaving the main sequence to low-mass, horizontal-branch stars. Higher-mass stars leave the main sequence to become blue giants, then bright blue giants, and then blue supergiants, before expanding into red supergiants, although at the very highest masses the giant stage is so brief and narrow that it can hardly be distinguished from a blue supergiant.
Lower-mass, core-helium-burning stars evolve from red giants along the horizontal branch and then back again to the asymptotic giant branch, and depending on mass and metallicity they can become blue giants. It is thought that some post-AGB stars experiencing a late thermal pulse can become peculiar blue giants. |
The History Of Manifest Destiny History Essay
Americans have always believed that they have a unique mission. They have always thought that they possess a destiny to do exceptional things and accomplish extraordinary feats. This is evident from the first American "City Upon a Hill", to the just democratic cause of the American Revolution, to Jefferson's Empire of Liberty, and, today, in American foreign policy. Nowhere is the self-righteous principle more evident than in the idea of Manifest Destiny. Manifest Destiny has served as the basis and inspiration for outstanding acts of American unity and nationalism, but it is also responsible for the horrible mistreatment of Native Americans and bloody wars, including the American Civil War. Although a loose, abstract idea, Manifest Destiny has had an unparalleled effect on the course this great nation has taken.
The definition of Manifest Destiny is a topic of much historical disagreement and inconsistency, but it is largely agreed upon that Manifest Destiny refers to the God-given right of the United States to further develop and expand its institutions and democratic values. Before delving into the complex meaning of Manifest Destiny, it is imperative to understand the origins and basis for the idea.
The term "Manifest Destiny" was first seen in writing in 1845 in an essay entitled Annexation by John L. O'Sullivan. In his essay, O'Sullivan urges the United States to annex Texas not only because the Texans wanted us to, but also because it was our "Manifest Destiny" to annex areas and spread our democratic values. "Manifest Destiny" went relatively unnoticed in O'Sullivan's Annexation and received minimal public response. This all changed the second time O'Sullivan mentioned his idea of Manifest Destiny. O'Sullivan, in his article entitled Manifest Destiny in the New York Morning News, argued that the United States held the higher ground in the dispute between itself and Britain over the Oregon country because we had the God-given right to expand throughout North America. From this article, American exceptionalism and America's Manifest Destiny were born (Steffen).
Both Manifest Destiny and American Exceptionalism are most commonly associated with the 19th century Era of Expansion in American history starting with the Louisiana Purchase.
In 1803, President Thomas Jefferson pushed through the Senate the best business deal ever negotiated and what was to become the future of the United States of America: the Louisiana Purchase. The Louisiana Purchase was the most important event of President Thomas Jefferson's first administration. In this transaction, the United States bought 827,987 square miles of land from France for about fifteen million dollars. This vast area lay between the Mississippi River and the Rocky Mountains, stretching from the Gulf of Mexico to the Canadian border. The purchase of this land greatly increased the economic resources of the United States and cemented the geographic presence of America. This purchase more than doubled the size of America. This vast, unexplored backyard that the US now had sole ownership of posed many questions and mysteries to the American people. Thomas Jefferson commissioned Meriwether Lewis and William Clark to answer these questions and solve these mysteries. In 1804, Lewis and Clark set out with their Corps of Discovery on the first government-funded transcontinental expedition, which would bring back glorious stories of the west and contribute immensely to the westward expansion of the United States ("Louisiana Purchase").
The Corps of Discovery started their journey in St. Louis, Missouri in May of 1804. From there, they began their long journey up the Missouri River. By late October, the Corps had reached the Hidatsa villages in present-day North Dakota, where they decided to hole up for the winter. Here, they enlisted the help of a Shoshone woman named Sacajawea, whose experience living among the native peoples of the western territory proved to be an invaluable asset to the Corps. As the group pressed westward, Lewis kept maps of terrain they had covered, notes on new vegetation that they encountered, and journals on the types of people that inhabited the places they traveled through. Early in 1805, Lewis dispatched 12 members of the Corps back to Washington with many scientific and topographic discoveries that would intrigue both Thomas Jefferson and the American people alike. By November 1805, the expedition had accomplished its goal of traveling across the territory to the Pacific Ocean and decided to spend the remainder of the winter close to the temperate coast. When springtime came, they retraced their footsteps all the way back to the Missouri River, which they took back down to St. Louis. From St. Louis they then carried the remainder of their findings back to the nation's capital. In Washington, they were greeted as heroes and conquerors of the west. The findings and stories that the Lewis and Clark expedition brought back both fascinated the American people and inspired them to take hold of their Manifest Destiny and become the conquerors of the western territory. What Meriwether Lewis and William Clark accomplished on their expedition was more than just scientific findings, knowledge of terrain, and information on the peoples of the west; through this expedition, they became the forefathers of a great American westward movement that truly embodied the idea of Manifest Destiny ("Lewis and Clark Expedition").
The accomplishments and discoveries that Lewis and Clark brought back with them from their transcontinental expedition sparked an interest among the American people as to what lay beyond the Mississippi River. People began flocking to the western territories in hopes of amassing the wealth and prosperity that the wide-open plains and promising coastal lands had to offer. The main route taken by these adventurous people was the Oregon Trail. Starting in Independence, Missouri and extending across the continent to the Columbia River in the Oregon Territory, the Oregon Trail was first used by westbound fur traders, but eventually evolved into the main artery for westbound citizens ("Oregon Trail").
The two-thousand-mile trek usually took no less than six months in covered wagons (the most common form of long-distance travel). The trip was so treacherous that it is estimated that one traveler died for every eighty yards of trail. To make matters worse, most everyone except the extremely old, the extremely young, and the crippled walked alongside the ox-drawn wagons for the entire two-thousand-mile journey. The wagons themselves were usually four feet wide by ten feet long and were reserved for provisions, clothing, family heirlooms, and pieces of expensive family furniture or treasure that were almost always abandoned along the trail. The abandonment of such treasure was so commonplace that, along the trail, there were always pieces of luxury furniture or expensive pianos that had undoubtedly slowed down the wagons they previously belonged to. These pieces of luxury were there for the taking for anyone who could fit them in their wagon, but more often than not they were left to rot along the side of the trail. The trail was marked by the carcasses of dead animals, the graves of dead humans, and debris from overloaded wagons. The vast immigration of American peoples across the Oregon Trail to the Oregon territory in essence accomplished just what O'Sullivan wanted: an American Oregon ("Oregon Trail").
Myths about the Natives along the Oregon Trail both intrigued and plagued the west-bound Americans. They heard of savages that would ravage wagon trains, but also of peaceful relations between the white settlers and Natives beyond the Mississippi. Early on in the westward march of American settlers, peaceful relations seemed to be the status quo. Many natives aided the travelers in ways such as showing possible navigational routes through tricky waterways, pointing out the most efficient routes of travel, and even performing tasks for the white man such as cutting wood or carrying mail eastward. Most of the early relations between natives and American settlers relied on a rudimentary form of trade and barter. Goods such as moccasins, robes, horses, and food supplies were exchanged for small trinkets, weapons, ammunition, and other relevant items that the travelers carried along with them. Contrary to commonly held belief, most of this early trade occurred between Idaho and the coast of the Oregon territory, not along the Great Plains ("Oregon Trail").
As the immigration intensified, relations between the white man and the natives gradually declined to the level of military confrontation, leading to the removal and near extinction of many peoples native to the western territory. Fueled by a sense of entitlement aroused by the idea of Manifest Destiny, the immigrating Americans helped themselves to, and all but depleted, the natural resources available along the trail that the natives had relied on for thousands of years. Examples of this overuse of natural resources include the depletion of American bison populations that once roamed as far as the eye could see, the spoiling of water sources, deforestation, and the depletion of grazing land for livestock. The gradual escalation of tension between the Natives and immigrating citizens made for an ugly scene along the trail until the later transcontinental railroad was built, rendering the Oregon Trail obsolete ("Oregon Trail").
In 1492, when Christopher Columbus landed on the shores of the foreign land, which he mistakenly believed to be India, he immediately claimed that land for Queen Isabel and King Ferdinand of Spain. The problem with his claim was that Natives had already occupied this land for thousands of years. This immediate and utter disregard for the claims of the Natives became an integral part of the American Manifest Destiny and characterized the general attitude Americans embraced as they moved west into occupied land.
In the early 1800s, the growing American population started creeping into the lower south. As they moved farther west, the white settlers came across an obstacle, the Cherokee, Creek, Choctaw, Chickasaw, and Seminole nations. The Indian nations were, as the settlers saw it, in the way of divine progress (Ball).
Americans believed that all native tribes were "savage" bands of "hunters" who did not fully utilize the land they occupied. Therefore, the removal of such tribes for the use of their fertile land was a justified action. The process of removal included much bribery and intimidation on the part of not only the American people, but also the government in Washington (Ball).
Originating in the late 18th century with George Washington, governmental removal of Natives permeated the presidencies of many men including Thomas Jefferson, James Monroe, and Andrew Jackson (Ball).
Andrew Jackson, a lifelong proponent of American expansion and the idea of American Exceptionalism, is perhaps the most notorious advocate of Indian removal during this era. In 1814, he led United States military forces successfully against the Creek Nation. In their defeat, the Creeks relinquished twenty-two million acres to the American government. The United States garnered more native land in 1818 when Jackson and his troops invaded the Seminole land in present-day Florida ("Indian Removal").
From 1814 to 1824, Jackson was instrumental in the negotiating of nine different treaties with southern Indian Nations which divested the Indian people of their eastern land claims in exchange for lands west of the Mississippi. The Indian nations agreed to these harsh treaties partly because they believed that if they conceded to the white man, then he would be more inclined to let them retain some of their original lands. Instead of satisfying the American desire for the natives' land, the policies of appeasement that the natives employed only heightened American desires ("Indian Removal").
While some Indian nations attempted to deter the influx of Americans through violence, the Cherokee nation made an effort to adjust and assimilate to the new societal expectations of the white man. They adopted the practice of large scale farming and some even bought into the slave trade. The Cherokee Nation, through the direction of the American government and people, established schools, abandoned their nomadic ways in favor of the more "civil" towns and cities, and even attempted to assimilate into the American religions through churches and schools set up by missionaries. Despite all of their attempts to adjust and assimilate into American society, the Cherokee nation proved to be no match for the American expansionist spirit (Ball).
The vision that the American expansionists saw for their future was one of total American domination of continental America with no room for the Natives. So, a series of court decisions and legislative acts began the removal and almost elimination of native peoples in the eastern portion of America.
In 1823, the Supreme Court ruled in the case of Johnson v. M'Intosh that Native Americans, as mere occupants of American land, did not have the legal right to own or sell any land. Furthermore, Chief Justice John Marshall stated his "discovery doctrine": that the Native Americans were a conquered people and were to be classified only as occupants of American land. Marshall held that the American "right of discovery" trumped the natives' "right of occupancy". This court ruling was instrumental in the removal of Indians from their homelands in order to make way for the expanding American population ("Indian Removal").
Seven years after the Supreme court ruling that stated the dominance of Americans over the Natives in matters of land rights, Andrew Jackson, the famed advocate of Indian removal and now President, pushed through Congress the Indian Removal Act. This piece of legislation gave the President the authority to negotiate "removal" treaties with all of the Indian tribes east of the Mississippi River. Under these agreements, each tribe would surrender its homeland in the east and relocate within a stated period of time to a territory west of that great waterway. The Senate passed the bill by a vote of 28 to 19, the House of Representatives by a vote of 102 to 97. Jackson then moved quickly to bring about a general removal of all of the eastern tribes, in the North and South alike (Garrison).
The Cherokee people, who had tried so earnestly to adapt themselves to the demanding societal expectations of the Americans, were now set to be relocated to the territory that occupies present-day Oklahoma. The new land that they were set to move to soon became known as the Indian Territory. Beginning in May 1838, the United States Army oversaw the mass exodus of the Cherokee people from their native lands to the Oklahoma territory. The journey that they had to endure was so difficult and deadly that thousands perished along the way. The trail they followed became known as the Trail of Tears (Blackhawk).
American ideas of exceptionalism and Manifest Destiny provided both the impetus and the justification for the removal of Natives from the lands they had occupied for thousands of years. The Indian Removal Act is a prime example of how Americans truly believed they had a divine right to expand their institutions and values at any cost. As America successfully removed the Native people from the southeast, they continued to explore and expand westward. This expansion would cause the next great clash between Americans and the people who were perceived to be impeding the divine growth of America.
According to the idea of Manifest Destiny, the land to the west of the Mississippi was divinely given American property. Believers in Manifest Destiny held no doubts as to the legitimacy of their claim to that land, but Mexico, which also claimed much of this supposedly manifested American land, disagreed vehemently. Before Mexico won its independence from Spain, American settlers and farmers were living in the northernmost province of Mexico, Texas. Once Mexico became its own republic, the American Texans remained in this area, but refused to recognize the land as Mexican and refused to obey any Mexican law or authority. This refusal of authority led to constantly rising tensions between the Texans and the Mexican government. In 1835, the Texans revolted against the Mexican government, and a year later they declared their independence as the Lone Star Republic. The Republic successfully lasted on its own for nine years, but growing pressure from the Southern expansionists and even President Polk himself persuaded Texas to formally join the United States. Mexico, infuriated by this complete disregard for its claimed authority, immediately severed diplomatic ties with Washington. The period just after the annexation was a very critical time for relations between the two countries, because the matters at hand could have been solved peacefully had it not been for the overwhelming American desire for massive amounts of Mexican land in what the Americans believed to be the western part of their own country (Stout). The Americans held that the southern border of their newly acquired state was the Rio Grande River. Mexico, however, contested that the Texas territory had never extended farther than the Nueces River. This discrepancy of land claims, and also a growing American sentiment of Manifest Destiny, led to what we now know as the Mexican-American War (Stout).
America, ready to acquire the Mexican territory that it believed was its "divine right" to acquire, employed one last diplomatic option. It sent John Slidell to the Mexican capital of Mexico City with orders to attempt to buy the territory. Slidell offered the Mexicans twenty-five million dollars for the land in what is now present-day California and New Mexico. Mexico at the time was engrossed in political turmoil and refused to hear Slidell's proposal. Slidell was forced to return to Washington, and upon arrival is rumored to have told President Polk that Mexico must be "chastised" (Stout). During all of this diplomatic talk, President Polk ordered American troops under the direction of General Zachary Taylor to the Nueces River. Taylor made sure that he stayed out of the disputed "no man's land" between the Nueces in the north and the Rio Grande in the south. On the morning of April 25, 1846, a small contingent of Mexican soldiers surprised and defeated a small American cavalry unit north of the Rio Grande. Since Americans claimed that all land north of the Rio Grande was theirs, this was an overt act of war and deserved an immediate response. From this attack, the phrase "American blood on American soil" was born. This small act of violence was all that Polk and the American people (especially Southern expansionists) needed to justify a war with Mexico. The Mexican government and people were also eager for a fight. They were fed up with the ever-encroaching American people and wanted to take back what they perceived to be theirs. Moral righteousness on both sides added fuel to the progressing conflict. Once war broke out, the Americans mounted one successful campaign after another, and victory seemed all but guaranteed. Polk, eager to end the shooting, sent Nicholas Trist to Mexico City, where he arranged for an armistice with the Mexicans at a cost of ten thousand dollars. General Santa Anna, the dictator of Mexico, agreed, but instead of calling off the fighting, used the money to further fund his army. President Polk, infuriated by this blatant double-crossing, immediately recalled Trist to Washington. Trist, holding on to the idea of peace between America and Mexico, disobeyed his orders and stayed in Mexico. Trist's stay in Mexico proved to be beneficial to America. While there he negotiated and drafted the Treaty of Guadalupe Hidalgo. The terms of the treaty were heavily in the United States' favor. They confirmed the American claim to Texas and included massive amounts of land westward extending to the Pacific Ocean. In return, the Americans agreed to pay fifteen million dollars to the Mexican government (Bailey). The Mexican War and the Treaty of Guadalupe Hidalgo gave the United States more than half a million square miles of land. To the Americans, the victory and favorable treaty terms were just another example of how they were divinely favored. Although the treaty yielded massive amounts of land to the American government and people, it also revived the sectional quarrels over slavery (Stout).
Slavery had been a part of American culture since the very beginning of the Colonial Era. As America manifested its destiny and expanded westward, the question of how slavery would expand was the source of much civil disagreement and ultimately led to the bloodiest conflict America has ever seen.
The first slaves in America were brought to the Jamestown Colony in 1619. At first, slaves were brought to America as indentured servants who would eventually become free, but a series of laws called the Slave Codes replaced indentured servitude with slavery and ensured the lifelong bondage of slaves (Gunderson).
As the colonies grew in population and began to establish their own specialized economies, slave labor became the most cost and time effective means of getting large amounts of work done (Gunderson).
The climatic differences between the North and the South fueled the development of decidedly different economies. While the soil in the North was infertile and ill-suited for large farming plantations, the mild climate and fertile soil of the Middle and Southern colonies promoted the growth of a plantation-based economy (Bailey).
The plantations in the Middle and Southern colonies harnessed massive amounts of land in order to plant and harvest cash crops such as tobacco, rice, and grain ("Regional Development of Colonies"). The Middle colonies were commonly referred to as the bread colonies because they produced massive amounts of grain. The Southern colonies were well known for their rural, farm-dominated economies. The extremely fertile land was well suited for large-scale farming. This cash-crop-based economy gave rise to a heavy reliance on slaves as the work force. Without slaves, the Southern economy would have been devastated (Gunderson). This Southern reliance upon slave labor caused many sectional clashes between the North and South up until the American Civil War.
As America began to acquire more and more western territory and expand its state count, the issue of sectional difference was manifested in the form of the balance of power. The South, fearful of a majority anti-slavery Congress, clashed with the North over how slavery would expand into the new western territories.
In 1819, Missouri, part of the Louisiana Purchase, applied to Congress for statehood as a slave state. The fertile land of Missouri was ideal for the large-scale, slave-based farming economy already present in the South. At the time, the balance of slave and free states in Congress was even, and the admission of Missouri as a slave state would upset that balance. A very heated debate ensued in Congress, resulting in James Tallmadge proposing an amendment to the bill that stipulated that Missouri could be admitted as a slave state provided that no more slaves would be brought into the state and that all children born to slave parents would be free. The slave-minded Southerners were disgruntled by this proposal because they were bent on expanding the profitable practice of slave labor. The House of Representatives, dominated by the free North, quickly passed the amended bill for admission, but it was shot down by the Senate. However, during the next congressional session, Maine applied for statehood as a free state. Missouri and Maine could both be admitted, and the balance of power would remain intact. With Maine's application for free statehood, the Missouri Compromise became feasible (Foley).
Henry Clay, a congressman from Kentucky and gifted mediator, assumed a leading role in the formulation of the Missouri Compromise. Congress (amidst cries for abolition from the North) agreed to admit Missouri as a slave state, but also admitted Maine as a free state, so as to maintain the balance of power. Another stipulation of the Missouri Compromise was that all future slavery in the Louisiana Purchase was outlawed north of the line at 36 degrees 30 minutes latitude, the 36°30′ parallel (Foley).
The politically balanced terms of the compromise made it so neither the North nor the South could rightfully claim that they got the short end of the stick. So, in an immediate sense, the Missouri Compromise was extremely effective. Although effective in the short run, the Missouri Compromise achieved a peace that was somewhat superficial, because it only delayed the inevitable sectional clash until a time when the North and the South would be even further apart, not only in their economic stance on slavery but also in their social and political fabric.
The history and progression of slavery in the western portion of the United States is characterized by compromises that pacify the growing sectional differences between the North and the South over the issue of human bondage.
The Treaty of Guadalupe Hidalgo settled the Mexican-American War, but it also revived and added fuel to the fiery issue of the western progress of slavery. The main issue that arose out of the treaty was this: how would slavery expand into this newly acquired half million square miles of land? At the time, the balance between North and South in Congress that had been achieved through the Missouri Compromise was still intact, but the land granted to the United States in the Treaty of Guadalupe Hidalgo threatened to upset that balance. When California applied for statehood in 1849, both the North and the South feared deeply that California would tip the scales in favor of the other. Southern fears were so fervid that threats of secession were heard should California be admitted as a free state. After much deliberation and encouragement coming from both sides, the Compromise of 1850 was passed on the following terms: California would enter the Union as a free state, the slave trade would be abolished in Washington D.C., Texas would be paid ten million dollars for abandoning land claims in the New Mexico territory, a stricter fugitive slave law would be enforced, and Utah and New Mexico would be open to popular sovereignty. The Compromise of 1850 pacified the desires of both the North and the South and was followed by what many consider to be a "second era of good feelings". The secession threats subsided, and the American people wholeheartedly hoped this time around that the compromise would be final (Dalzell). Much like the Missouri Compromise, the Compromise of 1850 was a superficial peace that only put off confrontation, only this time for a much shorter period of time.
Sectional differences surfaced again just four short years later, this time over the balance of power and the location of the eastern terminus of the transcontinental railroad.
The land that the United States received as a part of the Treaty of Guadalupe Hidalgo was several thousand miles away from the nearest American city. Washington, fearing that its newly acquired prize might slip through its fingers, was determined to facilitate an efficient means of traveling to this new territory. After many ideas for cross-country travel were proposed and voted down, railroad promoters from both the North and the South proposed a transcontinental railroad. In addition to facilitating the population growth of the West, the railroad would open up the West and East to enormous economic opportunities. As the plan progressed, and an accurate cost was drawn up, it became apparent that only one of these railroads could feasibly be built. The impossibility of multiple railroads became a subject of much sectional debate and confrontation. The section that possessed such a jewel would be propelled far beyond the other in terms of economic opportunity and success. The Southern states were rapidly falling behind in the economic race and were extremely eager to have this railroad end in their territory. In hopes of having their railroad, the South plotted out the most geographically ideal route through the South. Unfortunately for Southerners, a piece of the proposed southern route still belonged to Mexico. So in 1854, on orders from Secretary of War Jefferson Davis, James Gadsden offered Mexico ten million dollars for this tract of land. The Mexican dictator, Santa Anna, desperately in need of funds, accepted the proposed deal. The deal triggered an animated response from the Northerners. They strongly opposed paying for a small piece of land that would solely benefit the southern section of the United States. The Gadsden Purchase enabled the South to put the finishing touches on their case for a southerly transcontinental railroad. The South now boasted a route that would cross smaller mountains than its northern counterpart and would not have to travel through unorganized territory. The North could not pose such a compelling argument because there was no direct northern route to the Pacific Ocean that did not have to go through unorganized territory. Set back by the Gadsden Purchase, the Northerners quickly retorted that if traveling through unorganized territory was the problem, then Nebraska ought to be organized (Bailey).
This proposal to organize the Nebraska Territory was abhorred by the Southerners, who saw it as a death sentence to the already precarious balance of power. The proposal was made by Stephen A. Douglas of Illinois, a long-standing proponent of popular sovereignty, in the form of the Kansas-Nebraska Act. The Act called for the separation of the Nebraska territory into two sections, Kansas and Nebraska, and for these two territories to be open to popular sovereignty in decisions regarding the legalization of slavery. The theory behind Douglas's proposal was that Kansas, due west of slave state Missouri, would become a slave state and that Nebraska, due west of free-soil Iowa, would become a free state. The Act blatantly ignored the Missouri Compromise, which forbade any slavery in the Nebraska territory north of the 36°30′ line. The passionate debate that ensued as a result of the proposal was so brutal that it nearly erupted into bloodshed. Although facing strong opposition from the Northerners who remained loyal to the Missouri Compromise terms, Douglas was able to push the bill through Congress, and it was passed in 1854 (Dalzell).
The Northerners saw this act as an inexcusable injustice toward their cherished Missouri Compromise terms and were enraged when it passed in Congress. The Northern opponents of the act warned that any form of compromise with the South would now be vastly more difficult, and they feared that without compromise, conflict was inevitable (American Pageant). The Kansas-Nebraska Act only further encouraged sectional differences, which in turn accelerated the downward spiral toward civil war.
According to the Kansas-Nebraska Act, the territories of Kansas and Nebraska were now open to popular sovereignty, meaning that the general public would vote and decide the fate of slavery in their own territory. Both proslavery and abolitionist groups recognized the potential that popular sovereignty offered: if enough people who favored a certain side could be present in Kansas during the voting, it would be possible to swing the local vote toward that side, thereby swinging the national balance of power in their favor. The influx of radicals from both sections of the country created a time bomb that kept inching closer and closer to explosion. The time ran out in 1856 when a group of proslavery fanatics raided and set fire to an antislavery settlement. This small act of violence was the beginning of what would become the American Civil War. As the proslavery and abolitionist forces quarreled back and forth in Kansas, the rest of the nation was slowly slipping into a civil war. The fighting in Kansas that started in 1856 did not end until the close of the American Civil War in 1865 (Woodworth).
The sectional clashes that evolved and developed over the 19th century all had their roots in the issue of slavery and its expansion into the western territories, an issue that nearly proved fatal to the United States of America. The idea of Manifest Destiny can be seen in the descent into civil war in that the North and the South both believed that they were pursuing their God-given destiny.
The idea that Americans are a divine people, that we have a God-given right to expand and overtake whatever gets in the way, has had an unparalleled effect, not just on North American affairs, but also on the world.
The significance of the idea of Manifest Destiny can be seen in almost all aspects of present-day American society. Had the Americans never believed it was their right to wrest the land west of the Mississippi away from the Natives and settle it, the whole western portion of the United States and all it has accomplished would not exist.
The ideas of Manifest Destiny and American exceptionalism have been so important to the growth of the nation because they have been the guiding forces and beliefs on our track to greatness. These great nationalistic forces have provided us with reason and justification to grow as a nation.
Throughout the history of this nation, we truly have believed that our destiny was to become great. Whether or not we are actually a divinely ordained people doesn't really matter as long as the national sentiment is that we are. As long as we believe that we are the greatest people on Earth and were put here to do truly special things, our actual status relative to other populations doesn't matter at all. What matters is how we work together and progress to become the greatest country we can possibly be.
This lesson is students’ first encounter with trigonometry although they won’t encounter the word trigonometry yet. They start with the essential concept of connecting angle measurements with the ratios of side lengths in a right triangle. In this lesson, students are focused on looking for patterns, estimating, and moving back and forth between predicting angle measures from ratios and ratios from angle measures. Taking time to build students’ intuition helps them make sense of trigonometry so that they will be able to ask themselves, “is that a reasonable answer?” in subsequent lessons (MP1).
Students start by measuring side lengths and calculating ratios in several different-sized triangles with the same angle measures. This reinforces students’ understanding of similarity. Do not name these ratios yet; the long descriptions are important for students to build understanding. The decision to put the columns in the order that will eventually be named “cosine, sine, tangent” is purposeful. Because cosine represents the \(x\)-coordinate in the unit circle, while sine represents the \(y\)-coordinate, tables with cosine first correctly correspond to the \((x,y)\) coordinates that students will see later.
As students measure side lengths and compute ratios there is an opportunity to discuss measurement error and the relationships between precision in measurement and precision in values calculated with those measurements. In this unit, we recommend rounding side lengths to the nearest tenth and angle measures to the nearest degree. Students are instructed to calculate the ratios of side lengths based on measured lengths to the hundredths place, and, when using digital tools, to use ratios calculated to the thousandths place.
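For teachers who want to spot-check the class table, the short sketch below (not part of the student-facing materials; the angle list and the hypotenuse length of 10 are arbitrary choices) computes the three ratios in the same column order used above and applies the rounding conventions recommended in this unit.

```python
import math

def ratio_row(angle_degrees, hypotenuse=10.0):
    """Return (adjacent/hypotenuse, opposite/hypotenuse, opposite/adjacent)."""
    theta = math.radians(angle_degrees)
    adjacent = round(hypotenuse * math.cos(theta), 1)  # side next to the angle, nearest tenth
    opposite = round(hypotenuse * math.sin(theta), 1)  # side across from the angle, nearest tenth
    return (round(adjacent / hypotenuse, 2),
            round(opposite / hypotenuse, 2),
            round(opposite / adjacent, 2))

for angle in (20, 40, 60, 80):
    print(angle, ratio_row(angle))
```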
When students examine the class table they might notice that:
- angles with larger measures have larger ratios of the opposite side to the adjacent side or hypotenuse
- the larger the ratio of the opposite side to the hypotenuse, the smaller the ratio of the adjacent side to the hypotenuse
- the ratio of the adjacent side to the hypotenuse for one acute angle equals the ratio of the opposite side to the hypotenuse for its complement (two angles whose measures sum to 90 degrees)
These observations will be topics in subsequent lessons, so students need not justify their conjectures at this point.
- Comprehend that knowing one acute angle of a right triangle determines all the ratios of side lengths in that triangle.
- Generate ratios of side lengths of right triangles (using words and other representations).
- Let’s investigate ratios in the side lengths of right triangles.
- I can build a table of ratios of side lengths of right triangles.
Two angles are complementary to each other if their measures add up to \(90^\circ\). The two acute angles in a right triangle are complementary to each other.
Ground-penetrating radar (GPR) is a geophysical method that uses radar pulses to image the subsurface. It is a non-intrusive method of surveying the sub-surface to investigate underground utilities such as pipes and cables, as well as structures made of concrete, asphalt, metal, or masonry. This nondestructive method uses electromagnetic radiation in the microwave band (UHF/VHF frequencies) of the radio spectrum, and detects the reflected signals from subsurface structures. GPR can have applications in a variety of media, including rock, soil, ice, fresh water, pavements and structures. In the right conditions, practitioners can use GPR to detect subsurface objects, changes in material properties, and voids and cracks.
GPR uses high-frequency (usually polarized) radio waves, usually in the range 10 MHz to 2.6 GHz. A GPR transmitter and antenna emit electromagnetic energy into the ground. When the energy encounters a buried object or a boundary between materials having different permittivities, it may be reflected, refracted, or scattered back to the surface. A receiving antenna can then record the variations in the return signal. The principles involved are similar to those of seismology, except that GPR methods use electromagnetic energy rather than acoustic energy, and energy may be reflected at boundaries where subsurface electrical properties change rather than subsurface mechanical properties, as is the case with seismic energy.
The electrical conductivity of the ground, the transmitted center frequency, and the radiated power all may limit the effective depth range of GPR investigation. Increases in electrical conductivity attenuate the introduced electromagnetic wave, and thus the penetration depth decreases. Because of frequency-dependent attenuation mechanisms, higher frequencies do not penetrate as far as lower frequencies. However, higher frequencies may provide improved resolution. Thus operating frequency is always a trade-off between resolution and penetration. Optimal depth of subsurface penetration is achieved in ice where the depth of penetration can achieve several thousand metres (to bedrock in Greenland) at low GPR frequencies. Dry sandy soils or massive dry materials such as granite, limestone, and concrete tend to be resistive rather than conductive, and the depth of penetration could be up to 15 metres (49 ft). However, in moist or clay-laden soils and materials with high electrical conductivity, penetration may be as little as a few centimetres.
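As a rough illustration of this trade-off (not taken from the article; the conductivity and permittivity figures below are example values only), the standard low-loss plane-wave approximation for attenuation shows how quickly conductive ground eats into the signal:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
EPS0 = 8.854e-12           # vacuum permittivity, F/m

def attenuation_db_per_m(sigma, eps_r):
    """Low-loss approximation: alpha ~ (sigma / 2) * sqrt(mu0 / epsilon), converted to dB/m."""
    alpha_nepers = (sigma / 2.0) * math.sqrt(MU0 / (eps_r * EPS0))
    return 8.686 * alpha_nepers  # 1 neper = 8.686 dB

print(attenuation_db_per_m(1e-4, 5))   # dry sand-like ground: ~0.07 dB/m, metres of penetration
print(attenuation_db_per_m(1e-1, 25))  # wet clay-like ground: ~33 dB/m, only centimetres
```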
Ground-penetrating radar antennas are generally in contact with the ground for the strongest signal strength; however, GPR air-launched antennas can be used above the ground.
The first patent for a system designed to use continuous-wave radar to locate buried objects was submitted by Gotthelf Leimbach and Heinrich Löwy in 1910, six years after the first patent for radar itself (patent DE 237 944). A patent for a system using radar pulses rather than a continuous wave was filed in 1926 by Dr. Hülsenbeck (DE 489 434), leading to improved depth resolution. A glacier's depth was measured using ground penetrating radar in 1929 by W. Stern.
Further developments in the field remained sparse until the 1970s, when military applications began driving research. Commercial applications followed and the first affordable consumer equipment was sold in 1975.
In 1972, the Apollo 17 mission carried a ground penetrating radar called ALSE (Apollo Lunar Sounder Experiment) in orbit around the Moon. It was able to record depth information up to 1.3 km and recorded the results on film due to the lack of suitable computer storage at the time.
GPR has many applications in a number of fields. In the Earth sciences it is used to study bedrock, soils, groundwater, and ice. It is of some utility in prospecting for gold nuggets and for diamonds in alluvial gravel beds, by finding natural traps in buried stream beds that have the potential for accumulating heavier particles. The Chinese lunar rover Yutu has a GPR on its underside to investigate the soil and crust of the Moon.
Engineering applications include nondestructive testing (NDT) of structures and pavements, locating buried structures and utility lines, and studying soils and bedrock. In environmental remediation, GPR is used to define landfills, contaminant plumes, and other remediation sites, while in archaeology it is used for mapping archaeological features and cemeteries. GPR is used in law enforcement for locating clandestine graves and buried evidence. Military uses include detection of mines, unexploded ordnance, and tunnels.
Borehole radars utilizing GPR are used to map the structures from a borehole in underground mining applications. Modern directional borehole radar systems are able to produce three-dimensional images from measurements in a single borehole.
One of the other main applications for ground-penetrating radars is for locating underground utilities. Standard electromagnetic induction utility locating tools require utilities to be conductive. These tools are ineffective for locating plastic conduits or concrete storm and sanitary sewers. Since GPR detects variations in dielectric properties in the subsurface, it can be highly effective for locating non-conductive utilities.
GPR was often used on the Channel 4 television programme Time Team which used the technology to determine a suitable area for examination by means of excavations. GPR was also used to recover £150,000 in cash ransom that Michael Sams had buried in a field, following his 1992 kidnapping of an estate agent.
Military applications of ground-penetrating radar include detection of unexploded ordnance and detecting tunnels. In military applications and other common GPR applications, practitioners often use GPR in conjunction with other available geophysical techniques such as electrical resistivity and electromagnetic induction methods.
In May 2020, the U.S. military ordered a ground-penetrating radar system from Chemring Sensors and Electronics Systems (CSES) to detect improvised explosive devices (IEDs) buried in roadways, in a $200.2 million deal.
A novel approach to vehicle localization using prior map-based images from ground-penetrating radar has been demonstrated. Termed "Localizing Ground Penetrating Radar" (LGPR), the approach has demonstrated centimeter-level accuracy at speeds up to 60 mph. Closed-loop operation was first demonstrated in 2012 for autonomous vehicle steering and fielded for military operation in 2013. Highway-speed, centimeter-level localization during a night-time snowstorm was demonstrated in 2016. This technology was exclusively licensed and commercialized for vehicle safety in ADAS and autonomous-vehicle positioning and lane-keeping systems by GPR Inc. and marketed as Ground Positioning Radar™.
The concept of radar is familiar to most people. With ground penetrating radar, the radar signal – an electromagnetic pulse – is directed into the ground. Subsurface objects and stratigraphy (layering) will cause reflections that are picked up by a receiver. The travel time of the reflected signal indicates the depth. Data may be plotted as profiles, as planview maps isolating specific depths, or as three-dimensional models.
GPR can be a powerful tool in favorable conditions (uniform sandy soils are ideal). Like other geophysical methods used in archaeology (and unlike excavation) it can locate artifacts and map features without any risk of damaging them. Among methods used in archaeological geophysics, it is unique both in its ability to detect some small objects at relatively great depths, and in its ability to distinguish the depth of anomaly sources.
The principal disadvantage of GPR is that it is severely limited by less-than-ideal environmental conditions. Fine-grained sediments (clays and silts) are often problematic because their high electrical conductivity causes loss of signal strength; rocky or heterogeneous sediments scatter the GPR signal, weakening the useful signal while increasing extraneous noise.
In the field of cultural heritage GPR with high frequency antenna is also used for investigating historical masonry structures, detecting cracks and decay patterns of columns and detachment of frescoes.
GPR is used by criminologists, historians, and archaeologists to search burial sites. In his publication, Interpreting Ground-penetrating Radar for Archaeology, Lawrence Conyers, who is "one of the first archaeological specialists in GPR", described the process. Conyers published research using GPR in El Salvador in 1996, on Chaco-period sites in the Four Corners region and in southern Arizona in 1997, and at a medieval site in Ireland in 2018. Informed by Conyers' research, the Institute of Prairie and Indigenous Archaeology at the University of Alberta, in collaboration with the National Centre for Truth and Reconciliation, has been using GPR in its survey of Indian Residential Schools in Canada. By June 2021, the Institute had used GPR to locate suspected unmarked graves in areas near historic cemeteries and Indian Residential Schools. On May 27, 2021, it was reported that the remains of 215 children were found using GPR at a burial site at the Kamloops Indian Residential School on Tk’emlúps te Secwépemc First Nation land in British Columbia. In June 2021, GPR technology was used by the Cowessess First Nation in Saskatchewan to locate 751 unmarked gravesites on the Marieval Indian Residential School site, which had been in operation for a century until it was closed down in 1996.
Advancements in GPR technology, integrated with various 3D software modelling platforms, generate three-dimensional reconstructions of subsurface "shapes and their spatial relationships"; by 2021, this was "emerging as the new standard".
Individual lines of GPR data represent a sectional (profile) view of the subsurface. Multiple lines of data systematically collected over an area may be used to construct three-dimensional or tomographic images. Data may be presented as three-dimensional blocks, or as horizontal or vertical slices. Horizontal slices (known as "depth slices" or "time slices") are essentially planview maps isolating specific depths. Time-slicing has become standard practice in archaeological applications, because horizontal patterning is often the most important indicator of cultural activities.
The most significant performance limitation of GPR is in high-conductivity materials such as clay soils and soils that are salt contaminated. Performance is also limited by signal scattering in heterogeneous conditions (e.g. rocky soils).
Other disadvantages of currently available GPR systems include:
- Interpretation of radar-grams is generally non-intuitive to the novice.
- Considerable expertise is necessary to effectively design, conduct, and interpret GPR surveys.
- Relatively high energy consumption can be problematic for extensive field surveys.
Radar is sensitive to changes in material composition, but detecting those changes requires movement. When looking through stationary items using surface-penetrating or ground-penetrating radar, the equipment needs to be moved in order for the radar to examine the specified area by looking for differences in material composition. While it can identify items such as pipes, voids, and soil, it cannot identify the specific materials, such as gold and precious gems. It can, however, be useful in providing subsurface mapping of potential gem-bearing pockets, or "vugs". The readings can be confused by moisture in the ground, and they cannot separate gem-bearing pockets from the non-gem-bearing ones.
The frequency range of the antenna dictates both the size of the antenna and its depth capability. The grid spacing used for a scan is based on the size of the targets that need to be identified and the results required. Typical grid spacings are 1 meter, 3 ft, 5 ft, 10 ft, or 20 ft for ground surveys, and 1 inch to 1 ft for walls and floors.
The speed at which a radar signal travels is dependent upon the composition of the material being penetrated. The depth to a target is determined based on the amount of time it takes for the radar signal to reflect back to the unit’s antenna. Radar signals travel at different velocities through different types of materials. It is possible to use the depth to a known object to determine a specific velocity and then calibrate the depth calculations.
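A minimal sketch of these relationships (an illustration, not part of the source text; the permittivity, travel times, and depths below are made-up example values): velocity can be estimated from the ground's relative permittivity, or calibrated directly from a target of known depth, and the depth of other reflectors then follows from half the two-way travel time.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def velocity_from_permittivity(eps_r):
    """Approximate wave speed in a low-loss material."""
    return C / eps_r ** 0.5

def depth_from_travel_time(two_way_time_s, velocity_m_per_s):
    """Half the two-way travel time multiplied by velocity gives depth."""
    return velocity_m_per_s * two_way_time_s / 2.0

def velocity_from_known_target(known_depth_m, two_way_time_s):
    """Calibrate velocity from an object whose depth is already known."""
    return 2.0 * known_depth_m / two_way_time_s

v = velocity_from_permittivity(9.0)             # e.g. damp sand, roughly 1e8 m/s
print(depth_from_travel_time(20e-9, v))          # a 20 ns reflection -> about 1 m deep

v_cal = velocity_from_known_target(0.5, 12e-9)   # pipe known to lie at 0.5 m
print(depth_from_travel_time(30e-9, v_cal))      # depth of another reflector in the same ground
```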
In 2005, the European Telecommunications Standards Institute introduced legislation to regulate GPR equipment and GPR operators to control excess emissions of electromagnetic radiation. The European GPR association (EuroGPR) was formed as a trade association to represent and protect the legitimate use of GPR in Europe.
Ground-penetrating radar uses a variety of technologies to generate the radar signal: these are impulse, stepped frequency, frequency-modulated continuous-wave (FMCW), and noise. Systems on the market in 2009 also use Digital signal processing (DSP) to process the data during survey work rather than off-line.
A special kind of GPR uses unmodulated continuous-wave signals. This holographic subsurface radar differs from other GPR types in that it records plan-view subsurface holograms. Depth penetration of this kind of radar is rather small (20–30 cm), but lateral resolution is enough to discriminate different types of landmines in the soil, or cavities, defects, bugging devices, or other hidden objects in walls, floors, and structural elements.
GPR is used on vehicles for close-in high-speed road survey and landmine detection as well as in stand-off mode.
In-Pipe-Penetrating Radar (IPPR) and In-Sewer GPR (ISGPR) are applications of GPR technologies in non-metallic pipes, where the signals are directed through pipe and conduit walls to detect pipe wall thickness and voids behind the pipe walls.
Wall-penetrating radar can read through non-metallic structures, as first demonstrated by ASIO and the Australian Police in 1984 while surveying a former Russian embassy in Canberra. The demonstration showed how people could be watched up to two rooms away laterally, and through floors vertically, and how metal lumps that might be weapons could be seen; GPR can even act as a motion sensor for military guards and police.
SewerVUE Technology, an advanced pipe condition assessment company, utilizes Pipe Penetrating Radar (PPR) as an in-pipe GPR application to measure remaining wall thickness, rebar cover, and delamination, and to detect the presence of voids developing outside the pipe.
EU Detect Force Technology, an advanced soil research company, utilizes the X6 Plus Grounding Radar (XGR) as a hybrid GPR application for military mine detection and police bomb detection.
- "How Ground Penetrating Radar Works". Tech27.
- Srivastav, A.; Nguyen, P.; McConnell, M.; Loparo, K. N.; Mandal, S. (October 2020). "A Highly Digital Multiantenna Ground-Penetrating Radar System". IEEE Transactions on Instrumentation and Measurement. 69: 7422–7436. doi:10.1109/TIM.2020.2984415. S2CID 216338273.
- Daniels DJ (2004). Ground Penetrating Radar (2nd ed.). Knoval (Institution of Engineering and Technology). pp. 1–4. ISBN 978-0-86341-360-5.
- "History of Ground Penetrating Radar Technology". Ingenieurbüro obonic. Archived from the original on 2 February 2017. Retrieved 13 February 2016.
- "The Apollo Lunar Sounder Radar System" - Proceedings of the IEEE, June 1974
- "Lunar Sounder Experiment". Lunar and Planetary Institute (LPI). Apollo 17 Experiments. Retrieved 24 June 2021.
- Wilson, M. G. C.; Henry, G.; Marshall, T. R. (2006). "A review of the alluvial diamond industry and the gravels of the North West Province, South Africa" (PDF). South African Journal of Geology. 109 (3): 301–314. doi:10.2113/gssajg.109.3.301. Archived (PDF) from the original on 5 July 2013. Retrieved 9 December 2012.
- Hofinghoff, Jan-Florian (2013). "Resistive Loaded Antenna for Ground Penetrating Radar Inside a Bottom Hole Assembly". IEEE Transactions on Antennas and Propagation. 61 (12): 6201–6205. Bibcode:2013ITAP...61.6201H. doi:10.1109/TAP.2013.2283604. S2CID 43083872.
- Birmingham Mail
- "Army orders ground-penetrating radar system from CSES for detecting hidden IEDs in $200.2 million deal". Military & Aerospace Electronics. 13 May 2020.
- Cornick, Matthew; Koechling, Jeffrey; Stanley, Byron; Zhang, Beijia (1 January 2016). "Localizing ground penetrating RADAR: A step toward robust autonomous ground vehicle localization". Journal of Field Robotics. 33 (1): 82–102. doi:10.1002/rob.21605. ISSN 1556-4967.
- Enabling autonomous vehicles to drive in the snow with localizing ground penetrating radar (video). MIT Lincoln Laboratory. 24 June 2016. Archived from the original on 19 January 2017. Retrieved 31 May 2017 – via YouTube.
- "MIT Lincoln Laboratory: News: Lincoln Laboratory demonstrates highly accurate vehicle localization under adverse weather conditions". www.ll.mit.edu. Archived from the original on 31 May 2017. Retrieved 31 May 2017.
- Lowe, Kelsey M; Wallis, Lynley A.; Pardoe, Colin; Marwick, Benjamin; Clarkson, Christopher J; Manne, Tiina; Smith, M.A.; Fullagar, Richard (2014). "Ground-penetrating radar and burial practices in western Arnhem Land, Australia". Archaeology in Oceania. 49 (3): 148–157. doi:10.1002/arco.5039.
- Masini, N; Persico, R; Rizzo, E (2010). "Some examples of GPR prospecting for monitoring of the monumental heritage". Journal of Geophysics and Engineering. 7 (2): 190. Bibcode:2010JGE.....7..190M. doi:10.1088/1742-2132/7/2/S05.
- Mazurkiewicz, Ewelina; Tadeusiewicz, Ryszard; Tomecka-Suchoń, Sylwia (20 October 2016). "Application of Neural Network Enhanced Ground-Penetrating Radar to Localization of Burial Sites". Applied Artificial Intelligence. 30 (9): 844–860. doi:10.1080/08839514.2016.1274250. ISSN 0883-9514. S2CID 36779388. Retrieved 24 June 2021.
- Conyers, Lawrence B. (1 April 2014) . Interpreting Ground-penetrating Radar for Archaeology. Routledge & CRC Press. p. 220. ISBN 9781611322170. Retrieved 24 June 2021.
- Conyers, Lawrence (1 October 1996). "Archaeological evidence for dating the Loma Caldera eruption, Ceren, El Salvador". Geoarchaeology. 11 (5): 377–391. doi:10.1002/(SICI)1520-6548(199610)11:5<377::AID-GEA1>3.0.CO;2-5.
- Conyers, Lawrence B. (1 September 2006). "Ground-Penetrating Radar Techniques to Discover and Map Historic Graves". Historical Archaeology. 40 (3): 64–73. doi:10.1007/BF03376733. ISSN 2328-1103. S2CID 31432686. Retrieved 24 June 2021.
- Conyers, Lawrence B; Goodman, Dean (1997). Ground-penetrating radar: an introduction for archaeologists. Walnut Creek, CA: AltaMira Press. ISBN 978-0-7619-8927-1. OCLC 36817059.
- Conyers, Lawrence B. (2018). "Medieval Site in Ireland". Ground-penetrating Radar and Magnetometry for Buried Landscape Analysis. SpringerBriefs in Geography. Cham: Springer International Publishing. pp. 75–90. doi:10.1007/978-3-319-70890-4_7. ISBN 978-3-319-70890-4. Retrieved 24 June 2021.
- Wadsworth, William T. D. (22 July 2020). "Geophysics and Unmarked Graves: a Short Introduction for Communities". ArcGIS StoryMaps. Retrieved 24 June 2021.
- "Remains of 215 children found at former residential school in B.C." The Canadian Press via APTN News. 28 May 2021. Retrieved 4 June 2021.
- "Saskatchewan First Nation discovers hundreds of unmarked graves at former residential school site". CTV News. 23 June 2021. Retrieved 24 June 2021.
- Kelly, T. B.; Angel, M. N.; O’Connor, D. E.; Huff, C. C.; Morris, L.; Wach, G. D. (22 June 2021). "A novel approach to 3D modelling ground-penetrating radar (GPR) data – a case study of a cemetery and applications for criminal investigation". Forensic Science International. 325: 110882. doi:10.1016/j.forsciint.2021.110882. ISSN 0379-0738. PMID 34182205. S2CID 235673352. Retrieved 24 June 2021.
- "Gems and Technology – Vision Underground". The Ganoksin Project. Archived from the original on 22 February 2014. Retrieved 5 February 2014.
- Electromagnetic compatibility and Radio spectrum Matters (ERM). Code of Practice in respect of the control, use and application of Ground Probing Radar (GPR) and Wall Probing Radar (WPR) systems and equipment. European Telecommunications Standards Institute. September 2009. ETSI EG 202 730 V1.1.1.
- "An impulse generator for the ground penetrating radar" (PDF). Archived (PDF) from the original on 18 April 2015. Retrieved 25 March 2013.
- Zhuravlev, A.V.; Ivashov, S.I.; Razevig, V.V.; Vasiliev, I.A.; Türk, A.S.; Kizilay, A. (2013). "Holographic subsurface imaging radar for applications in civil engineering" (PDF). IET International Radar Conference 2013. IET International Radar Conference. Xi'an, China: IET. p. 0065. doi:10.1049/cp.2013.0111. ISBN 978-1-84919-603-1. Archived (PDF) from the original on 29 September 2013. Retrieved 26 September 2013.
- Ivashov, S. I.; Razevig, V. V.; Vasiliev, I. A.; Zhuravlev, A. V.; Bechtel, T. D.; Capineri, L. (2011). "Holographic Subsurface Radar of RASCAN Type: Development and Application" (PDF). IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 4 (4): 763–778. Bibcode:2011IJSTA...4..763I. doi:10.1109/JSTARS.2011.2161755. S2CID 12663279. Archived (PDF) from the original on 29 September 2013. Retrieved 26 September 2013.
- "Ground Penetrating Radar(GPR) Systems – Murphysurveys". www.murphysurveys.co.uk. Archived from the original on 10 September 2017. Retrieved 10 September 2017.
- Ékes, C.; Neducza, B.; Takacs, P. (2014). Proceedings of the 15th International Conference on Ground Penetrating Radar. pp. 368–371. doi:10.1109/ICGPR.2014.6970448. ISBN 978-1-4799-6789-6. S2CID 22956188.
- "International No-Dig Meets in Singapore - Trenchless Technology Magazine". Trenchless Technology Magazine. 30 December 2010. Retrieved 10 September 2017.
- Borchert, Olaf (2008). "Receiver Design for a Directional Borehole Radar System (dissertation)". University of Wuppertal.
- Jaufer, Rakeeb M., Amine Ihamouten, Yann Goyat, Shreedhar S. Todkar, David Guilbert, Ali Assaf, and Xavier Dérobert. 2022. "A Preliminary Numerical Study to Compare the Physical Method and Machine Learning Methods Applied to GPR Data for Underground Utility Network Characterization" Remote Sensing 14, no. 4: 1047. https://doi.org/10.3390/rs14041047
An overview of scientific and engineering applications can be found in:
- Jol, H. M., ed. (2008). Ground Penetrating Radar Theory and Applications. Elsevier.
- Persico, Raffaele (2014). Introduction to ground penetrating radar: inverse scattering and data processing. John Wiley & Sons.
A general overview of geophysical methods in archaeology can be found in the following works:
- Clark, Anthony J. (1996). Seeing Beneath the Soil. Prospecting Methods in Archaeology. London, United Kingdom: B.T. Batsford Ltd.
- Conyers, Lawrence B; Goodman, Dean (1997). Ground-penetrating radar: an introduction for archaeologists. Walnut Creek, CA: AltaMira Press. ISBN 978-0-7619-8927-1. OCLC 36817059.
- Gaffney, Chris; John Gater (2003). Revealing the Buried Past: Geophysics for Archaeologists. Stroud, United Kingdom: Tempus.
- "EUROGPR – The European GPR regulatory body".
- "GprMax – GPR numerical simulator based on the FDTD method".
- "Short movie showing acquisition, processing and accuracy of GPR readings". Archived from the original on 22 December 2021 – via YouTube.
- "FDTD Animation of sample GPR propagation". Archived from the original on 22 December 2021 – via YouTube.
- "GPR Electromagnetic Emissions Safety Information". 17 May 2016. Archived from the original on 13 September 2018. Retrieved 15 February 2017. |
The learning objectives in mathematics typically include developing skills in problem-solving, critical thinking, logical reasoning, and numerical fluency, as well as understanding mathematical concepts and formulas.
Mathematics is a critical subject in education as it lays the foundation for understanding several fields of study. The learning objectives in mathematics are numerous, but they generally share a few key components, including developing critical thinking, logical reasoning, and numerical fluency. Here are some of the specific learning objectives in mathematics:
Understanding Mathematical Concepts: The primary learning objective in mathematics is to develop a deep understanding of mathematical concepts and formulas. Students need to understand the underlying principles and how they apply to various real-life situations.
Problem-solving: An essential aspect of mathematics is problem-solving, which involves the application of critical thinking, logic, and reasoning to find solutions to mathematical problems. Students should learn to solve problems using various techniques and methods.
Numerical Fluency: Students should be able to manipulate numbers fluently, calculate with ease, and perform mathematical operations with precision and speed.
Mathematical Communication: Students should learn to communicate mathematical ideas clearly and concisely through verbal or written communication.
Mathematical Reasoning: Critical thinking and logical reasoning are vital aspects of mathematical learning. Students should develop the ability to reason logically, analyze and evaluate information, and draw conclusions.
Mathematics is a fundamental subject that plays a crucial role in our daily lives. “Pure mathematics is, in its way, the poetry of logical ideas.” (Albert Einstein)
Here are some interesting facts about mathematics:
- The world’s oldest mathematical object is the Lebombo bone, dated to around 35,000 BC.
- The word “mathematics” comes from the Greek word μάθημα (mathema) which means learning, study, or science.
- The Fibonacci sequence, named after the Italian mathematician Leonardo Fibonacci, appears in many natural phenomena, including the pattern of leaves on a stem, the branching of trees, and the spiral shells of snails and seashells.
- The Pythagorean theorem, which states that the sum of the squares of the lengths of the two shorter sides of a right triangle is equal to the square of the length of the longest side, is named after the Greek mathematician Pythagoras.
- The number zero was invented independently by the Mayans, Indians, and Babylonians.
- Zero is the only number that cannot be represented in Roman numerals.
Here is a table summarizing the learning objectives in mathematics:
| Learning Objective | Sub-objectives |
|---|---|
| Understanding Mathematical Concepts | Principles & Formulas |
| Problem-solving | Techniques & Methods |
| Mathematical Communication | Verbal & Written Communication |
| Mathematical Reasoning | Critical Thinking & Logical Reasoning |
In conclusion, mathematics is an intricate subject that has several learning objectives. Developing a deep understanding of mathematical concepts, problem-solving, logical reasoning, numerical fluency, and mathematical communication and reasoning are among its critical learning objectives. As Albert Einstein states, mathematics is one of the most artistic and logical subjects that we can learn and apply in our daily lives.
This video contains the answer to your query
This video discusses the difference between goals, objectives, and learning outcomes. Goals are broad aims of a course or project, while objectives are specific actions needed to attain the goals. Learning outcomes are what learners can perform as a result of the course activities, and can be articulated using action verbs, learning statements, and criteria. Bloom’s taxonomy is suggested as a framework to help choose action verbs that align with learning levels associated with objectives. Clear distinctions between objectives and learning outcomes can help students understand what the activity entails and what benefits they can receive from it.
Found more answers on the internet
Mathematics Learning Objectives and Assessment Plan: be able to apply problem-solving and logical skills; have a deeper understanding of mathematical theory; have a solid knowledge of elementary statistics; be able to communicate mathematical/logical ideas in writing.
The aims of teaching and learning mathematics are to encourage and enable students to: recognize that mathematics permeates the world around us; appreciate the usefulness, power and beauty of mathematics; and enjoy mathematics and develop patience and persistence when solving problems.
These objectives have included: the teaching of basic numeracy skills to all pupils; the teaching of practical mathematics (arithmetic, elementary algebra, plane and solid geometry, trigonometry) to most pupils, to equip them to follow a trade or craft; and the teaching of abstract mathematical concepts (such as set and function) at an early age.
Common Core Math contains 11 Standards to cover such topics as counting, one-to-one correspondence, addition and multiplication, measurement of time, distance, and money, and fractions and decimals. Depending on the student’s grade, there are 4 or 5 standards to be covered that school year. These standards are the Objectives in our math Goals.
Mathematics Undergraduate Student Learning Objectives The Mathematics program promotes mathematical skills and knowledge for their intrinsic beauty, effectiveness in developing proficiency in analytical reasoning, and utility in modeling and solving real world problems.
People also ask
- Identify the Level of Knowledge Necessary to Achieve Your Objective.
- Select an Action Verb.
- Create Your Very Own Objective.
- Check Your Objective.
- Repeat, Repeat, Repeat.
Sample learning objectives for a math class might be: "State theorems" (implies memorization and recall); "Prove theorems" (implies applying knowledge); "Apply theorems to solve problems" (implies applying knowledge).
Let's explore these ideas for the class of objects known as triangles. What are the necessary conditions for a triangle?
Necessary properties/conditions/characteristics of a triangle
Not Necessary properties/conditions/characteristics of a triangle
Which of the following necessary conditions must be included so that, taken together, they are sufficient to describe a triangle, that is, sufficient for a figure to be a triangle?
SUMMARY - Does this list include sufficient conditions, or only the minimum number of necessary conditions, to represent a triangle?
The list is sufficient, and every condition in it is necessary, for a plane figure to be a triangle; therefore the list can be used to define a triangle.
This comparison can be used to categorize or group objects by their common properties.
It is also useful when considering whether two things are equivalent or congruent. Triangles with identical necessary and sufficient properties are congruent.
Example of equivalent descriptions of numbers.
Two sets of properties that represent a whole number.
To prove these are equivalent conditions we must show they are both necessary and sufficient. They are, since both lead to the same conclusion. This makes it possible to write: the whole number greater than one and less than three is equal to the smallest positive even whole number. Further, it would also be valid to set both equal to 2.
Take your time and make this information YOURS.
If you do, then you can use it to create questions that probe your students' depth of understanding: What is necessary? What is sufficient? What more needs to be added to make something sufficient that is not? What statements or groups of statements are equal or equivalent? What does it look and sound like if we put them together as equals?
If we make a conjecture and then find an example that doesn't fit it, we can claim we have disproved the conjecture by discovering a counterexample.
Deductive proof - uses a general definition, or a premise, applied to a specific instance to reach a conclusion. It generalizes to all instances with the same characteristics that satisfy the givens or represent the given class, definition, or law.
Start with a set of assumptions and use them to derive a valid conclusion.
What is it?
Conclusion - The only number among 49, 64, and 81 that has a factor of three is 81.
Inductive reasoning - uses the specific to create the general conclusion, for example rolling two dice hundreds of times to find the probability of the different sums. Inductive reasoning is the stepchild of deductive reasoning since it is based on experiment and variables. For example, statistical reasoning is not accepted in a courtroom.
Reasoning and proof - if you include ideas of sufficient, necessary, equivalent, and independent conditions, you have the tools for reasoning and proof.
Independent conditions are properties or conditions that vary independently of each other.
Example: size and shape. It is possible for a square to come in an infinite variety of sizes and still have the shape of a square. Similarly, the properties necessary to make a shape a square do not include size.
Proof by analogy or metaphor - mathematicians might distinguish analogies as a directly related comparison of two like arguments and metaphor as indirectly related.
The relationships are clearly definable relations of the respective parts or artifacts to animals, as well as parts or artifacts to parts or artifacts and animals to animals.
Metaphors are not as tight. Consider "At night all cats are black." We cannot substitute terms as in the previous analogies; there is a violation of syntactic structure. Night relates to black as light relates to dark, and black relates to cats as a color (white, brown, yellow, black), while the figurative meaning of the metaphor is something unrelated.
Proof of the Commutative Property
Showing students a specific example can be a starting point for a proof, especially for very young children. For example, 3 + 5 = 5 + 3 can be demonstrated by gluing three beans to one piece of card stock and five beans to another, then showing that either order of the cards has a cardinality of eight beans. However, if we stop here we must recognize it is not a proof; treating a specific example or a selected instance as equivalent to all possible examples is exactly what keeps it from being a proof.
To make it a proof the idea must be generalized for all possible cases, or the infinite number of cases that might exist. This can be suggested by asking what happens if the number of beans is increased. Again, young children usually reach a limit because of their limited understanding of the number system. However, as students' understanding of number systems increases toward a conceptualization of infinity, the generalization can be pushed toward infinite possibilities and the equivalence for all of them. For children to be able to do this they will need to have achieved all the conditions for conservation. When you believe this is so, the next example can be presented.
Prove that when you add any two numbers the sum will be the same no matter the order of the numbers. While all the students may be convinced of this it is necessary to push them to explain why it would work for addition of all numbers (infinity again) for the case of whole numbers. Have them create an explanation that they could give to another person (art, music, PE. teacher or principal).
To move toward infinity, students should be able to show, with manipulatives and a drawing, representations of larger or smaller numbers. You can prompt this by repeatedly asking about larger or smaller numbers.
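A small sketch (an added illustration, not part of the original activity) that mirrors the empirical stage described above: checking a + b = b + a for a handful of randomly chosen whole numbers can convince students, but, as the text stresses, checking examples is not a proof of the claim for all numbers.

```python
import random

# Check commutativity of addition on a few random whole-number pairs.
# Convincing, but not a proof: only a general argument covers every case.
for _ in range(5):
    a, b = random.randint(0, 1000), random.randint(0, 1000)
    print(a, b, a + b == b + a)
```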
Geometric construction as proof. Prove that a particular construction of a triangle will yield an equilateral triangle. The accompanying figure shows how a compass can be used to draw three equal segments that create a triangle with all three sides congruent.
Proof that when an even number is added to another even number, the sum is an even number. See three different levels: concrete, semi-concrete, and formal proofs.
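As an illustration of the formal level (a compact version added here, not drawn from the linked materials): any even number can be written as 2m for some whole number m, so the sum of two even numbers is 2m + 2n = 2(m + n), which is again a multiple of 2 and therefore even.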
Proof of a specific case translated to the general
Students found the mass of a dry sponge (98 g), then soaked it in water and found its mass (145g). When they were asked to find the mass of the water they wanted to subtract. The teacher asked them to prove it or convince her that it was an accurate solution.
The wet sponge's mass was 145 grams and the dry sponge's mass was 98 grams. Students decided to use nice numbers and adjust in order to subtract, so they subtracted 100 g: 145 - 100 = 45. Then they were not sure if they should subtract two more or add two more.
To represent the problem they drew a rectangular shape (sponge) with an enclosed blob to represent the water. They labeled the area outside the blob 98g for sponge and labeled the blob - water with a question mark. Then they wrote 145g beside the rectangular shape and said that represented the sponge and the water.
They proceeded to explain that taking 100g away was 2g too much so they would need to add two back.
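As a worked version of the students' compensation strategy (using the numbers from the problem above): 145 - 98 = (145 - 100) + 2 = 45 + 2 = 47, so the water has a mass of 47 g. Subtracting 100 removes 2 g too much, which is why the 2 must be added back rather than subtracted again.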
To get to the generalization the teacher asked this follow up question: Would this always be true for subtraction?
What would the statement be? Or rule?
Like - If you take away smaller you end up bigger so you need to ... Or If you take away bigger you end up smaller so you need to ...
With the hopes to get to the idea of: How to argue for an infinite claim.
Think that problem is too complicated to start?
What if we use the same numbers only change the scenario?
Max has 145 collector cards and gives 98 to Chris. How many collector cards does Max have left?
This can also be solved with subtraction and the numbers are the same as the sponge problem. However, the representation of the problem isn't as complicated because of the type of subtraction problem it is.
Why is there such a difference? The sponge problem is stated as an addition problem with the start and finish numbers known, but not the change. This makes it more complicated than the second problem, in which the starting total and the amount to remove or separate are known, and the unknown is what is left.
See samples of different addition and subtraction problems
The following are examples of specific instances that can use proof by analogy if the analogy is stated so that it extends to all cases.
Prove why you can exchange numbers of equal value with equations of equal value.
Prove the validity of this equation:
Prove that two unknowns x and y, when added together and squared, are represented accurately with the following: (x + y)^2 = x^2 + 2xy + y^2.
Can you prove the validity of the distributive property by analogy?
x^2 - 1 = (x + 1)(x - 1)
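As an illustration (added here, not part of the original list), the distributive property shows why this identity holds: (x + 1)(x - 1) = x(x - 1) + 1(x - 1) = x^2 - x + x - 1 = x^2 - 1.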
Arguments from children's representations (like the 3 + 5 = 5 + 3 ) and from other student stories where numbers are involved can be an effective way to move students toward a proof as a more general idea. The key is for the teacher to facilitate the inclusion of questions such as:
The purpose is to push thinking to generalize the explanation to include all possible examples that are true for all cases, or move to infinity.
A = length * width
Does multiplying length and width determine area for every rectangle and square? Why?
We can use graph paper or tiles to represent a rectangular area, modeling how, every time a row of square tiles is added, the total is represented by multiplying the number of columns by a number of rows that has increased by one.
Elementary students will need to act this out multiple times, talking about how the rows and the columns are increasing and how that can be represented in the product of the length and width. Ultimately they should recognize that it doesn't matter what the number of rows or columns is; therefore, the expression L * W will determine the area. To be able to do so, students need to be familiar with all the necessary conditions or characteristics for conservation, specifically conservation of area.
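A small sketch of the tiling argument (an added illustration, not part of the lesson): counting unit tiles row by row always gives the same total as multiplying length by width, whatever the numbers of rows and columns happen to be.

```python
def area_by_counting(rows, columns):
    """Count unit tiles one row at a time."""
    count = 0
    for _ in range(rows):
        count += columns
    return count

# Every case checked agrees with length * width.
for rows in range(1, 6):
    for columns in range(1, 6):
        assert area_by_counting(rows, columns) == rows * columns
print("counting tiles matches length * width for every case checked")
```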
Identify all examples and justify their inclusion, as well as give reasons for eliminating the others, so that all possibilities are accounted for.
What are all of the possible four-cube-high towers that can be made with red and yellow cubes?
Proof by exhaustion using a systematic process of recursion.
Students must take time to build meaning, which must come from concrete experiences: time to notice patterns among the physical objects and to retain these ideas in short-term memory long enough to act on them, create relationships, and communicate those relationships.
It is the communication element that pushes us to identify the limits, systematically explain all possibilities even if they extend to infinity.
There are several ways to systematically try to insure the discovery of all possibilities of the cube towers. Some examples follow.
First, towers can be made to show the numbers of red and yellow cubes that could be used to make a four-cube-tall tower. The illustration below shows that if four cubes are used for each tower, it is possible to build a tower with 4 red; 3 red and 1 yellow; 2 red and 2 yellow; 1 red and 3 yellow; or 4 yellow. There is no other combination of reds and yellows for a 4-cube tower.
However, that doesn't account for the different placements of the two colors within each combination. For example, there is only one way to arrange a tower with all red, and likewise only one way to arrange a tower with four yellow. However, there are several ways to arrange three red cubes and one yellow cube, and likewise for two red and two yellow, and for three yellow and one red.
These can be found by rearranging the cubes for each of the above combinations of colors and summing the totals 1 + __ + __ + __ + 1 = __ .
Another way is to use a tree. Start with R and Y, put an R-Y pair below each, and continue until there is a row for each of the four levels in the towers. Then find all the different paths through the tree of connections and label each one.
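For completeness, a short sketch (an added illustration, not part of the activity) that enumerates every four-cube tower of red (R) and yellow (Y) cubes and groups them by the number of red cubes, matching the tally described above.

```python
from itertools import product
from collections import Counter

towers = list(product("RY", repeat=4))            # all 2**4 = 16 possible towers
by_reds = Counter(tower.count("R") for tower in towers)

print(sorted(by_reds.items()))  # [(0, 1), (1, 4), (2, 6), (3, 4), (4, 1)]
print(len(towers))              # 16 towers in all
```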
Another example of proof by exhaustion, or discovery of all the cases, is the following:
How many squares are in a 5 × 5 square?
A systematic solution could find the number of 1 × 1, 2 × 2, 3 × 3, 4 × 4, and 5 × 5 squares.
Then total the five groups.
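A brief sketch of that systematic count (an added illustration; it uses the observation that a k × k square can start at 6 − k positions in each direction of a 5 × 5 grid):

```python
total = 0
for k in range(1, 6):
    count = (6 - k) ** 2        # number of k-by-k squares in a 5-by-5 grid
    print(f"{k} x {k} squares: {count}")
    total += count
print("total:", total)          # 25 + 16 + 9 + 4 + 1 = 55
```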
See example of card trick explanation using elimination and identification.
How many handshakes will there be if everyone shakes hands with everyone else in your classroom?
Systematically - if there are five people, the first person shakes hands with the four other people in the classroom. Since the first person has shaken everyones hand, the first person can relax and sit down. The second person has shaken the first person's hand, so the second person begins by shaking the third person's hand. When the second person has shaken everyones hands the person may relax and sit down. The third person now has shaken hands with person one and two, so has to shake the fourth and fifth persons' hands. Then the fourth person only has to shake the fifth person's hand and when it becomes the fifth persons turn, by golly the fifth person has shaken all the hands and is done. To summarize.
Person one - 4, person two - 3, person three - 2, person four - 1, person five - done - 0.
Add them and get: 4 + 3 + 2 + 1 = __ .
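A short sketch (again assuming Python, as an illustration only) confirms the handshake count by listing every distinct pair of people:

```python
from itertools import combinations

# Each unordered pair of people shakes hands exactly once.
people = ["P1", "P2", "P3", "P4", "P5"]
handshakes = list(combinations(people, 2))
print(len(handshakes))                     # 10 handshakes
print(sum(range(len(people))))             # 0 + 1 + 2 + 3 + 4 = 10, the same total
```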
An ecological fallacy is thinking that relationships observed for groups necessarily hold for individuals.
An exception fallacy is roughly the reverse of the ecological fallacy: a conclusion about a group is made on the basis of a specific case. This fallacious reasoning is often at the core of sexism and racism, as in the stereotypical response to seeing a woman make a driving error: "women are bad drivers." That conclusion is a fallacy.
Students at this level appear unaware of the need to provide a mathematical justification to demonstrate the truth of a proposition or statement. For example, a student might accept a proposition as true because a teacher, parent, or text "says" it's true (cf. Harel & Sowder, 1998); in this case, the justification is "non-mathematical." In other cases, a student might simply state a proposition is true without any reference to why the proposition is true (e.g., "the sum of two even numbers is even because that is just the way it is," "yes the numbers will be equal because they will always be equal").
Students at this level appear to be aware of the need to provide a mathematical justification, but their justifications are not general; in the majority of cases, students' justifications are empirically based. Among the empirically based justifications, we recognize distinctions (at sub-levels) among students who consider checking a few cases, students who consider systematically checking a few cases (e.g., even and odd numbers), students who consider checking extreme cases or "random" cases, and students who consider the use of a generic example (proof for a class of objects) (cf. Balacheff, 1987).
Students at this level appear to be aware of the need for a general argument, and often attempt to produce such arguments themselves; the arguments, however, fall short of being acceptable proofs. "Falling short" may happen in one of two ways: (1) Students express recognition of the need to provide a general argument and attempt to produce such an argument, however, the argument provided is not a viable argument (i.e., the argument is either incorrect mathematically or it would not lead to an acceptable proof). (2) Students express recognition of the need to provide a general argument and attempt to produce such an argument, however, the argument is incomplete (if completed, the argument would be an acceptable proof). In both situations, the point is that students are attempting to treat the general case. In addition, Level 2 justifications also include responses from those students who demonstrate an awareness that empirical evidence does not suffice as proof—by either expressing recognition of the need to deal with all cases or expressing recognition of the limitation of examples as proof—but who are unable to produce (or attempt) a general argument.
Students at this level appear to be aware of the need for a general argument, and are able to successfully produce such arguments themselves. We consider the arguments students produce at this level to be acceptable proofs; that is, their arguments demonstrate that a proposition or statement is true in all cases. Arguments categorized at this level typically involve a reference to any assumptions or givens, a chain of deductions used to build the argument, and finally an explicit concluding statement. Although the arguments students produce may lack the rigor or formality typically associated with a proof, their arguments, nonetheless, do prove the general case.
See the original article for explanations and examples of the levels.
From Middle School Students' Production of Mathematical Justifications. Eric J. Knuth, Jeffrey M. Choppin, and Kristen N. Bieda. In Teaching and Learning Proof Across the Grades. A K-16 Perspective (2009).
Research suggests current practice is insufficient and that only change can improve students' abilities to reason and use proofs. Goswami, and Hatano and Sakakibara (2004, ISBN 0-8058-4945-9), observed children in natural settings using everyday language while attempting to use the language of mathematics to reason mathematically. That language was mostly lacking in spontaneous analogies. The authors attribute these findings to limitations in a child's mathematical knowledge as well as to the norms and practices in the students' instructional setting.
A mathematical community or culture must include many opportunities with experiences that emphasize problem solving, representation, reasoning, and communication, where students make conjectures and develop skill in using these tools of reasoning and proof through systematic representation to justify their ideas. |
Advanced Preparation: Create a two row survey with a question at the top and the rows labeled yes or no (see picture in lesson resource).
As students enter the classroom for math, I instruct them to read the question and then answer it with a tally mark. Once everyone has answered I will start the conversation.
"Let's count the results for each category. How many people said "yes?" How many people said "no." Now I would like you to give me one way we could represent our results using an equation. What are some I Notice statements that you could make? Who could write an expression using the < or > sign?" The Core expects 1st graders to be able to represent a number of objects with a written numeral (CCSS.Math.Content.1.NBT.A.1) and also compare different quantities using symbols (CCSS.Math.Content.1.NBT.B.3).
"The students are reasoning abstractly and quantitatively. It is expected that mathematically proficient students make sense of quantities and their relationships in problem situations (CCSS.Math.Practice.MP2)." In this case, the students are distinguish the two categories as separate entities that also are parts of the whole.
The picture in the section resource exhibits the results of the above questions.
I continue to throw in surveys throughout the year. This allows me to spend less time in the isolated context of a unit and more time applying the concepts to real life situations. The CCSS expect that first graders can "organize, represent, and interpret data with up to three categories; ask and answer questions about the total number of data points, how many in each category, and how many more or less are in one category than in another (CCSS.Math.Content.1.MD.C.4)."
I have the students gather as a group and face the Smart Board in my room. On the whiteboard, I have a blank ten frame. I use a magnetic one but you could always just draw one in.
*I have used these ten frame cards throughout the year. If this is the first time you are using them, you will want to make sure that the students understand the structure of a ten frame: there are 5 boxes on top and 5 boxes on the bottom.
"We are going to have a ten party today! The rest of math class will focus entirely on making ten.I am really excited and I hope you are too!"
"I am going to flash a tens frame card the Smart Board. I will only flash it for about two seconds. I want you to figure out how many dots are on the card. Once you know, I want you to fill your ten frame card with some of the round counters that I gave you. I will then flash the card one more time and you can check your work. I will then leave the image up and ask you to tell the group how you figured out the total number of dots"
There are two videos in the resource section that allow you to view this discussion in action.
I am asking the students to communicate their thinking to their peers and allowing the students to hear and see a variety of approaches that people are using to solve a task. The CCSS expect students to "communicate precisely to others. They try to use clear definitions in discussion with others and in their own reasoning." (CCSS.Math.Practice.MP6) I am also trying to allow as many students as possible to be called on to articulate their thinking by saying how many dots they saw and how they saw it. Although I ask kids to put their thumb on their chin (once they have an answer), I call on any child. This is something that my students have come to expect. You will have to figure out a system that works for you and your students.
Through this activity, students are also working on developing fact fluency within 10 and the ability to add and subtract within 20 (CCSS.Math.Content.1.OA.C.6).
I repeat this procedure several times or as time allows.
Advanced Preparation: Before class started, I went ahead and set up the stations. Since the students will be able to choose which activity they want to do, I want to minimize congestion and confusion about where to find the materials for each game. Watch the video to see what I am talking about.
I start station time by walking the students through each of the activities. Although all of these activities have been introduced in previous lessons, we are coming off a 12-day vacation and I want to make sure that the students are focused, understand the games, and truly focus on recording how they made 10 in each activity. They will need their recording sheets for the end-of-lesson wrap up (next section).
"I want to quickly review each of your station time choices. I am going to ask that you join me on a quick tour as we visit each station. With your help, I will quickly model each activity."
10's Go Fish: Advanced Preparation: You will need a set of number cards or playing cards for each team. To play the game you will only use the 1-9 cards. The rest should be discarded for this activity.
Remember your goal here is to make 10. You start by dealing out 5 cards to each player and then put the remaining cards face down on the table. On your turn, you should look to see if you have any cards (using only 2 cards) in your hand that can make ten. If you do, you pair them up and put them down on the table. You then can pick two new cards up from the pile that is face down on the table. If you can't make 10, on your turn, you can ask the other person if they have a card that you would need. For example, if I had a 3, what card would I want to ask for?
Each time I get a new card, I check to see if I can make 10 with that card and a card that's already in my hand. If I can, I put the pair down. If I can't, then my turn is over. If I run out of cards, I can pick two new cards from the deck."
"When you have run out of cards, the game is over. You then use the recording sheet (see section resource) to record each combination you collected to make ten. Let's play a sample round, as a class, to make sure everyone understands how to play."
In this activity, I am asking students to make sense of quantities and their relationships in a problem situation (CCSS.Math.Practice.MP2). The students are developing an understanding that if I have a 4, I will always need a 6 to make 10.
What's Under the Sheet: Advanced Preparation: You will need a piece of construction paper for each team, connecting cubes, and a recording sheet for each team member. The recording sheet can be found in the section resource.
"Remember, in this game, you use strategies to find out how many cubes are hidden under the sheet. You will be doing the work of master mathematicians and solving for what mathematicians call the unknown."
I then introduce the game to them and model it by playing with another student.
"You will start by grabbing 10 cubes. You will choose one person to go first and they will be the person that hides the cubes first. Once you have filled out the total number of cubes you are starting with (on your recording sheet), you will then hide some cubes under the sheet (construction paper) and leave some showing on top of the sheet. Let's say that you start with ten cubes. You put 6 under the sheet and four on top. Your partner will then fill out how many cubes are not hidden, leave the hidden space blank and then write the equation
___ + 4 = 10 or 10 = ___ + 4. Then that person tries to figure out how many cubes are under the sheet and describes how they figured it out. Then he/she fills in the missing part on the recording sheet."
In this activity, students are determining the unknown whole number in an addition equation (CCSS.Math.Content.1.OA.D.8). This is a complicated skill and must be worked on throughout the year in order for students to develop a sound understanding and mastery by the end of the year.
Making Ten With Number Cards: I use ten frame cards for this. You can use playing cards (minus the 10's, J-K's).
"The goal of the game is to make as many combinations of 10 as you can. To play the game you will need a deck of ten frame cards (or paling cards). You will take out the tens and just use the 1-9 cards and you will play with a partner (I did make one team of 3 because I had an odd number). To start, you make 4 rows of 5 cards with the numbers facing up. You put the extra cards on the side, face down. The first person scans the cards to find two numbers that make ten, and then picks them up as they say the two numbers. Your partner needs to agree that the cards do make ten. You can check by counting the dots. If you need to do this, I want you to count on from the highest number. After you take two cards, you replace them with two new ones from the "extra" pile that you made at the start.
Once you can no longer make any more combinations, you will use the recording sheet (see section resource) to write an equation for each combination. Let's play a few rounds together to make sure everyone understands."
The students are recording their equations on a recording sheet and using standard notation in doing so. They are modeling their answer with mathematics, which allows them to engage in MP4 (CCSS.Math.Practice.MP4), which states that "mathematically proficient students can apply the mathematics they know to solve problems arising in everyday life, society, and the workplace. In early grades, this might be as simple as writing an addition equation to describe a situation."
It is expected that "1st grade students can add and subtract within 20, demonstrating fluency for addition and subtraction within 10" (CCSS.Math.Content.1.OA.C.6). This activity's repetitiveness allows for this fluency to build.
I gather all of the students back to the carpet area. I ask them to bring their recording sheets from their station time activities and to sit and face the easel. The focus of this conversation is for them to generate all of the two addend combinations of ten.
"I hope you enjoyed our 10 Party! We have done a lot of work on combinations of ten using two addends. I now want to make a list of all the ways to make ten with two addends. Who can give me one way to make ten?"
I then start recording their answers on the easel. I set up the chart so that I can easily write the "flip facts/turn around facts" next to each other. There is a picture in the section resource that shows the chart we created.
I finish the lesson with a quick activity that focuses on the fluency of producing complements of ten. There is a video in the section resource that shows the students doing this activity.
"We have been spending so much time on our complements of ten because I want you to be fluent or be able to say them very quickly. We are now going to se show quick you are. I will say a number and I want you to tell me the complement that would make ten. You can shout it out and be as quick as possible."
I find that this makes the idea of being quick fun and is a way to get them to think about producing complements fluently. |
Shifts in demand and supply change the equilibrium price and quantity; a key assumption of the model is that information is costless. The equilibrium price lies at the intersection of the supply and demand curves, and markets reach equilibrium because prices above or below the equilibrium price lead to surpluses and shortages. Since reductions in demand and supply, considered separately, each cause the equilibrium quantity to fall, both curves shifting simultaneously to the left means that the new equilibrium quantity of coffee is less than the old equilibrium quantity. A change in demand, in supply, or in both changes the equilibrium price and the equilibrium quantity.
Demand, supply, and market equilibrium: refer to the diagram in which S1 and D1 represent the original supply and demand curves and S2 and D2 the new ones. Whenever a curve shifts, the original equilibrium price will no longer equate demand with supply, and the price will adjust to bring about a return to equilibrium. For example, if there is a particularly hot summer, students may prefer to drink more soft drinks at all prices, as indicated in the new demand schedule QD1. Equilibrium prices would also rise if there were a decrease in the supply of new housing due to higher production costs, as shown in the diagram.
A supply and demand curve shows the relationship between price and quantity on a graph: the price of the product is on the y-axis, whereas the quantity of the product is on the x-axis. Practice questions for this material ask you to identify a competitive equilibrium of demand and supply and to find the new equilibrium price.
Every market has a demand side and a supply side; where these two forces are in balance, the market is said to be at equilibrium. The demand side can be represented by the law of the downward-sloping demand curve. Because equilibrium corresponds to the point where the demand and supply curves intersect, anything that shifts the demand or supply curve establishes a new equilibrium; the illustration shows what happens when demand increases. Supply, demand, and equilibrium are the core ideas in microeconomics. It is the non-price determinants of demand and supply that push prices to a new equilibrium. The equilibrium price is the price at which the quantity demanded equals the quantity supplied.
A quick and comprehensive introduction to supply and demand defines the demand curve, the supply curve, and the equilibrium price and quantity, and draws a demand and supply diagram. With linear supply and demand functions, to find the new equilibrium price with a tax we again set the demand equation equal to the supply equation and solve. In the supply and demand model, the equilibrium price and quantity in a market are located at the intersection of the market supply and market demand curves; note that the equilibrium price is generally referred to as P and the market quantity as Q.
An equilibrium price and quantity calculator can compute both the equilibrium price and the equilibrium quantity when you have a demand function and a supply function, both dependent on price. Other events, like those outlined here, will cause either the demand for labor or the supply of labor to shift, and thus will move the labor market to a new equilibrium salary and quantity. There are two approaches to finding market equilibrium. In the graphical approach, we are by now familiar with graphs of supply curves and demand curves; to find market equilibrium, we combine the two curves onto one graph.
Test your knowledge with these 10 supply and demand practice questions that come from previously administered GRE economics tests, such as finding the new equilibrium price. The price and quantity that equate the quantity demanded and the quantity supplied also equate the demand price and the supply price and achieve market equilibrium; in other words, the market is "cleared" of shortages and surpluses. The supply-and-demand model is a partial equilibrium model, in which the clearing of the market for a specific good is determined independently of prices and quantities in other markets. At this point supply and demand are in balance, or equilibrium. At any price below P, the quantity demanded is greater than the quantity supplied; in this situation consumers would be anxious to acquire a product the producer is unwilling to supply, resulting in a product shortage.
In supply and demand analysis, equilibrium means that the upward pressure on price is exactly offset by the downward pressure on price; the equilibrium price is the price the market tends toward. When supply and demand are equal (i.e., when the supply function and the demand function intersect), the market is said to be at equilibrium; at this point the allocation of goods is at its most efficient. A new demand equation, representing a shift in demand, also causes a shift in market equilibrium, which we can find by setting the new demand equation equal to supply: Qs = Qd. A simultaneous increase in demand and supply will always result in an increase in the equilibrium quantity, but the new equilibrium price may be more than, less than, or the same as the old equilibrium price; when demand remains the same and supply decreases, the equilibrium quantity falls and the equilibrium price rises.
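To make the "set demand equal to supply and solve" step concrete, here is a minimal sketch in Python with linear curves. The coefficient values are illustrative placeholders, not figures taken from this text.

```python
# Demand: Qd = a - b*P, Supply: Qs = c + d*P.
# Setting Qd = Qs gives P* = (a - c) / (b + d); Q* follows from either curve.

def equilibrium(a: float, b: float, c: float, d: float) -> tuple[float, float]:
    """Return (price, quantity) where linear demand and supply intersect."""
    price = (a - c) / (b + d)
    quantity = a - b * price          # equals c + d * price at equilibrium
    return price, quantity

p_star, q_star = equilibrium(a=100, b=2, c=10, d=4)
print(f"equilibrium price = {p_star:.2f}, quantity = {q_star:.2f}")  # 15.00, 70.00
```

Changing any coefficient (for example, raising a to represent an increase in demand) and re-running the function shows how the equilibrium price and quantity move.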
Mosaic of the Caloris basin based on photographs by the MESSENGER orbiter.
Diameter: 1,550 km (963 mi)
Eponym: Latin for "heat"
Caloris Planitia is a plain within a large impact basin on Mercury, informally named Caloris, about 1,550 km (960 mi) in diameter. It is one of the largest impact basins in the Solar System. The plain itself is about 685 km in diameter. "Calor" is Latin for "heat" and the basin is so-named because the Sun is almost directly overhead every second time Mercury passes perihelion. The crater, discovered in 1974, is surrounded by a ring of mountains approximately 2 km (1.2 mi) tall.
Caloris was discovered on images taken by the Mariner 10 probe in 1974. It was situated on the terminator—the line dividing the daytime and nighttime hemispheres—at the time the probe passed by, and so half of the crater could not be imaged. Later, on January 15, 2008, one of the first photos of the planet taken by the MESSENGER probe revealed the crater in its entirety.
The basin was initially estimated to be about 1,300 km (810 mi) in diameter, though this was increased to 1,540 km (960 mi) based on subsequent images taken by MESSENGER. It is ringed by mountains up to 2 km (1.2 mi) high. Inside the crater walls, the floor of the crater is filled by lava plains, similar to the maria of the Moon. These plains are superposed by explosive vents associated with pyroclastic material. Outside the walls, material ejected in the impact which created the basin extends for 1,000 km (620 mi), and concentric rings surround the crater.
In the center of the basin is a region containing numerous radial troughs that appear to be extensional faults, with a 40 km (25 mi) crater located near the center of the pattern. The exact cause of this pattern of troughs is not currently known. The feature is named Pantheon Fossae.
The impacting body is estimated to have been at least 100 km (62 miles) in diameter.
Bodies in the inner Solar System experienced a heavy bombardment of large rocky bodies in the first billion years or so of the Solar System. The impact which created Caloris must have occurred after most of the heavy bombardment had finished, because fewer impact craters are seen on its floor than exist on comparably-sized regions outside the crater. Similar impact basins on the Moon such as the Mare Imbrium and Mare Orientale are believed to have formed at about the same time, possibly indicating that there was a 'spike' of large impacts towards the end of the heavy bombardment phase of the early Solar System. Based on MESSENGER's photographs, Caloris' age has been determined to be between 3.8 and 3.9 billion years.
Antipodal chaotic terrain and global effects
The giant impact believed to have formed Caloris may have had global consequences for the planet. At the exact antipode of the basin is a large area of hilly, grooved terrain with few small impact craters, known as chaotic terrain (also "weird terrain"). It is thought by some to have been created as seismic waves from the impact converged on the opposite side of the planet. Alternatively, it has been suggested that this terrain formed as a result of the convergence of ejecta at the basin's antipode. The impact is also believed to have triggered volcanic activity on Mercury, resulting in the formation of smooth plains. Surrounding Caloris is a series of geologic formations thought to have been produced by the basin's ejecta, collectively called the Caloris Group.
Emissions of gas
Mercury has a very tenuous and transient atmosphere, containing small amounts of hydrogen and helium captured from the solar wind, as well as heavier elements such as sodium and potassium. These are thought to originate within the planet, being "out-gassed" from beneath its crust. The Caloris basin has been found to be a significant source of sodium and potassium, indicating that the fractures created by the impact facilitate the release of gases from within the planet. The weird terrain is also a source of these gases.
No part of math is more confusing than geometry, mainly because of the numerous terms in which students get entangled. In this ScienceStruck post, we give you a list of the basic terms used in geometry to make understanding this branch of mathematics easier.
Did You Know?
The word ‘Geometry’ comes from the ancient Greek words ‘geo’ and ‘metron’, that mean earth and measurement respectively.
Geometry can be called a study of the shape, size, and position of objects. It is considered to be difficult due to the many terms used in the subject, and also due to the fact that one has to understand the subject; it cannot be learned by rote. Most of these terms can be confusing to students, and in addition to these terms, there are various theorems, laws, and definitions that have to be understood as well.
In this article, we have given you a list of the definitions of some of the basic terms used in geometry. It should be noted that this list is neither for beginners nor for advanced students. It is meant to be a ready reference for those who have studied geometry earlier.
List of Basic Terms Used in Geometry
If the angle formed by two lines intersecting is less than 90°, then the angle is known as an acute angle.
An acute triangle can be defined as a triangle where all the 3 angles that make up the triangle measure less than 90° each.
The altitude of a triangle is a line segment that connects a vertex to the opposite line of the triangle, and is perpendicular to that line.
When two line segments intersect or have a common end point, the inclination of one line with the other is called an angle. Angles are measured in degrees.
Angle-Angle-Angle (AAA) Similarity
The angle-angle-angle (AAA) similarity test says that if two triangles have corresponding angles that are congruent, then the triangles are similar.
A ray, that divides any angle into exactly half, making two equal angles in the process, is known as an angle bisector.
Arc of a Circle
An arc of a circle is a connected section of the circumference of a circle. Arcs are measured in two ways: as the measure of the central angle, or as the length of the arc itself.
A central angle is defined as one which has its vertex at the center of a circle.
The meeting point of the three medians of a triangle is called the centroid. This point is the center of mass of the triangle.
A circle can be defined as the set of all points in a plane that are equidistant from a given point in the plane, which is the center of the circle.
The meeting point of the three perpendicular bisectors of a triangle is known as its circumcenter. This point is equidistant from the three vertices of the triangle.
The circumference of a circle is said to be the length of the boundary or the border of a circle.
Any polygon that has at least one interior angle that measures more than 180° is called a concave polygon. Such polygons look like they have one or more angles caved in.
When three or more lines meet at a single point, they are said to be concurrent. The three medians, three perpendicular bisectors, three angle bisectors, and three altitudes of a triangle are each concurrent.
A cone is a three-dimensional figure with a single base tapering to an apex.
Two figures are said to be congruent if their corresponding lengths are the same, and the measure of their corresponding angles are same. Such figures are generally said to be the same shape and size.
Congruent triangles are triangles that have the same size and shape. In particular, corresponding angles have the same measure, and corresponding sides have the same length.
Converse means the “if” and “then” parts of a sentence are switched. For example, “If two numbers are both even, then their sum is even” is a true statement. The converse would be “If the sum of two numbers is even, then the numbers are even,” which is not a true statement.
A convex polygon is any polygon that is not concave, or where none of the angles measure more than 180°.
The coordinates of a point describe where it is located with respect to the x- and y-axes. The form (x,y) is a standard convention that allows everyone to mean the same thing when they reference any point.
If A is an acute angle in a right angle triangle, then the cosine of A is defined as the length of the side adjacent to angle A, divided by the length of the hypotenuse of the triangle. This is written as cos A = (adjacent)/(hypotenuse).
The face obtained after making a slice through an object is known as the cross section of that object.
A segment that passes through the center of the circle, and has its endpoints on the circle, is referred to as the diameter of the circle.
A line segment where two faces of an object intersect is known as an edge.
A triangle where the length of all its three sides are equal is known as an equilateral triangle.
A face is a polygon by which a solid object is bound. For example, a cube has six faces, with each face being a square.
A frieze pattern is an infinite strip containing a symmetric pattern.
A glide reflection is a combination of two transformations: a reflection over a line followed by a translation in the same direction as the line.
In a right angle triangle, the side of the triangle that is opposite to the right angle, is called the hypotenuse.
The meeting point of the three angle bisectors of a triangle is known as its incenter. The incenter is equidistant from each of the three sides of the triangle.
An angle whose vertex lies on the circle, and rays intersect the circle is called an inscribed angle.
In geometry, an intercept is defined as an intersection of a graph with one of the axes. An intersection with the horizontal axis is referred to as an x-intercept, and an intersection with the vertical axis is referred to as a y-intercept.
An irregular polygon is any polygon that is not regular, that is, a polygon whose sides and angles are not all equal.
An isosceles trapezoid is a trapezoid (a quadrilateral with one pair of parallel sides) whose base angles are congruent.
An isosceles triangle is a triangle where the lengths of at least two sides are equal.
If two pairs of adjacent sides in a quadrilateral have equal length, then it is called a kite.
A line can be defined as having only one dimension: length. It continues forever in two directions (so it has infinite length), but it has no width at all. A line connects two points via the shortest path, and then continues on in both directions.
The part of a line that lies between two points is called a line segment. Line segments have a finite length and no width.
Line Symmetry or Reflection Symmetry
An object is said to have line symmetry, or reflection symmetry, if it can be folded in half along a line so that the two halves match exactly. This folding line is called the line of symmetry.
A segment that connects any vertex of a triangle to the midpoint of the opposite side is called a median.
A midline is a segment that connects two consecutive midpoints of a triangle.
The midline theorem states that a midline of a triangle creates a segment that is parallel to the base and half as long.
A net is a two-dimensional representation of a three-dimensional object.
If the angle formed by the intersection of two lines is more than 90° but less than 180°, then the angle is known as an obtuse angle.
An obtuse triangle is a triangle where at least one angle measures more than 90°.
The orthocenter of a triangle is the point where the three altitudes meet, making them concurrent.
Parallel lines are two lines in the same plane that never intersect. No matter where you measure such lines, their perpendicular distance is always constant.
A parallelogram is a quadrilateral that has two pairs of opposite sides that are parallel.
The perpendicular bisector of a line segment is perpendicular to that segment and bisects it; that is, it goes through the midpoint of the segment, creating two equal segments.
A plane is a flat, two-dimensional object. A plane must continue infinitely in all directions and have no thickness at all. It can be defined by two intersecting lines or by three non-collinear points.
A Platonic solid is a solid such that all of its faces are congruent regular polygons and the same number of regular polygons meet at each vertex.
A point can be defined as having only location; it has no length, width, or depth.
A polygon is a two-dimensional geometric figure made of straight line segments, where each segment touches exactly two other segments, one at each of its endpoints. A polygon divides the plane into two distinct regions, the inside and the outside of the polygon.
A polyhedron is a closed three-dimensional figure, where all the faces are made up of polygons.
A prism is a solid that has parallel congruent bases which are both polygons. The bases must be oriented identically. The lateral faces of a prism are all parallelograms or rectangles.
The Pythagorean theorem states that if you have a right triangle, then the square built on the hypotenuse is equal to the sum of the squares built on the other two sides.
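For example, a right triangle with legs of length 3 and 4 has a hypotenuse of length 5, since 3² + 4² = 9 + 16 = 25 = 5².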
A quadrilateral is a polygon with exactly four sides.
The radius of a circle is the distance from the center of the circle to a point on the circle, and is constant for a given circle.
A ray is a line that has a point on one end, and extends infinitely in the other direction.
A rectangle is a quadrilateral with four right angles.
Reflection is a rigid motion, where an object changes its position but not its size or shape, due to the creation of a mirror image of the object.
A regular polygon has sides that are all the same length and angles that all measure the same.
A rhombus is a quadrilateral that has all four sides congruent.
If two lines intersect at an angle of 90°, then the angle formed is said to be a right angle.
Right Angle Triangle
A right angle triangle is a triangle where exactly one angle measures 90°.
Rotation is a rigid motion, meaning an object changes its position but not its size or shape. In a rotation, an object is turned about a “center” point, through a particular angle.
A figure has rotation symmetry if you can rotate (or turn) that figure around a center point by fewer than 360° and the figure appears unchanged.
A scalene triangle is a triangle where all three sides have a different length.
Sector of a Circle
A sector of a circle is a part of the interior of a circle bounded by two radii and an arc.
Side-Angle-Side (SAS) Congruence
Side-angle-side (SAS) congruence states that if any two sides of a triangle are equal in length to two sides of another triangle, and the angles between each pair of sides have the same measure, then the two triangles are congruent; that is, they have the same shape and size.
Side-Angle-Side (SAS) Similarity
The side-angle-side (SAS) similarity test says that if two triangles have two pairs of sides that are proportional, and the included angles are congruent, then the triangles are similar.
Side-Side-Side (SSS) Congruence
The side-side-side (SSS) congruence states that if the three sides of one triangle have the same lengths as the three sides of another triangle, then the two triangles are congruent.
Side-Side-Side (SSS) Similarity
The side-side-side (SSS) similarity test says that if two triangles have all three pairs of sides in proportion, the triangles must be similar.
Two objects are said to be similar if their corresponding angles have the same measure and their corresponding sides are in proportion.
If angle A is an acute angle in a right angle triangle, the sine of A is the length of the side opposite to angle A, divided by the length of the hypotenuse of the triangle. This is written as sin A = (opposite)/(hypotenuse).
Supplementary angles are two angles that add up to 180°.
A square is a regular quadrilateral: all four sides are equal in length and all four angles are right angles.
A design has symmetry if you can move the entire design by either rotation, reflection, or translation, and the design appears unchanged.
If angle A is an acute angle in a right angle triangle, the tangent of A is the length of the side opposite to angle A, divided by the length of the side adjacent to angle A. We write this as tan A = (opposite)/(adjacent).
A tangram is a seven-piece puzzle made from a square that contains two large isosceles right triangles, one medium isosceles right triangle, two small isosceles right triangles, a square, and a parallelogram.
A theorem in mathematics is a proven fact. A theorem about right triangles must be true for every right triangle; there can be no exceptions. Just showing that an idea works in several cases is not enough to make an idea into a theorem.
Translation is a rigid motion, meaning an object changes its position but not its size or shape. In a translation, an object is moved in a given direction for a particular distance. A translation is, therefore, usually described by a vector, pointing in the direction of movement and with the appropriate length.
Translation symmetry can be found only on an infinite strip. For translation symmetry, you can slide the whole strip some distance, and the pattern will land back on itself.
A transversal is a line that passes through (transverses) two other lines.
A trapezoid is a quadrilateral that has one pair of opposite sides that are parallel.
The triangle inequality says that for three lengths to make a triangle, the sum of the lengths of any two sides must be greater than the third length.
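For example, lengths 2, 3, and 6 cannot form a triangle, because 2 + 3 = 5 is not greater than 6.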
Van Hiele Levels
Van Hiele levels are a theory of five levels of geometric thought. The levels are (0) visualization, (1) analysis, (2) informal deduction, (3) deduction, and (4) rigor.
A vector can be used to describe a translation. It is drawn as an arrow. The arrowhead points in the direction of the translation, and the length of the vector tells you the length of the translation.
A Venn diagram uses circles to represent relationships among sets of objects.
A vertex is the point where two sides of an object meet.
There are about a hundred other geometry terms which are mostly derivatives of the terms discussed above. Once you are thorough with these basic terminologies, it is not difficult to understand the other terms and definitions in geometry. |
This article will describe how to plot a Poisson distribution in Excel based on sample data. First, we will discuss what the Poisson distribution is, what it means, and which Excel function we use to calculate it. Then we will see how to plot the graph.
Look at the following image for a brief and quick idea.
What Is Poisson Distribution?
To put it very simply, the Poisson distribution shows the probability of an event occurring a specific number of times in a certain period.
For example, say, a football player scores the following number of goals in 5 matches: 0, 1, 1, 3, 0. Now, what is the probability of scoring a certain number of goals in a certain number of matches? The Poisson distribution graph will give an idea of this.
The Poisson distribution can be described by the following probability function: f(X, λ) = (λ^X · e^(−λ)) / X!
- Here, X is the number of successful incidents in a given interval of time
- λ is the mean number of successes during the same interval of time.
To understand in detail, read the whole article, we have discussed an example for your better understanding.
Introduction to POISSON.DIST Function in Excel
MS Excel has provided the POISSON.DIST function since Excel 2010 (earlier versions offer the equivalent POISSON function). The objective of this function is simply to return the Poisson distribution in Excel.
It has the following arguments, all of which are required.
- X refers to the number of incidents for which you want the probability.
- Mean refers to the expected number of incidents (λ).
- Cumulative is a logical value that determines the type of probability returned. If it is TRUE, the function returns the cumulative probability for zero to X inclusive. If it is FALSE, it returns the Poisson probability of exactly X.
How to Plot Poisson Distribution of a Sample Data in Excel: Easy Steps
Let’s get introduced to our sample data first. You have to gather sample data like this first off.
The following is data from a toll plaza. It describes the number of bus arrivals in 15-minute intervals. In this example, we will find the probability that a certain number of buses arrive during a 15-minute interval.
📌 Step 1: Calculate Mean, λ
- The first thing is to calculate the mean, λ of this data. It’s very simple. Just divide the number of occurrences by the number of cases.
- In this particular case, we have calculated λ in cell E5 with the formula =SUM(B5:B101)/COUNT(B5:B101).
- Here, SUM(B5:B101) returns the number of buses that arrive at the toll plaza.
- COUNT(B5:B101) is the number of case-count.
📌 Step 2: Calculate Poisson Distribution
- Now, arrange a table like the following.
What are we going to calculate here? Suppose you want to know the probability that 5 buses will arrive at the toll plaza during a 15-minute interval. We will calculate this now.
- To do that, insert the formula =POISSON.DIST(B5,'Source Data'!$E$5,FALSE) in cell C5, and drag the fill handle icon down to the last cell of the column.
Here, B5 is the number of incidents X, ‘Source Data’!$E$5 refers to the mean λ, and FALSE is for non-cumulative Poisson Distribution. If we do this calculation in the same sheet as the source data, the formula would be =POISSON.DIST(B5,$E$5,FALSE). Not a big issue, right? Don’t forget to use Absolute Reference for mean. Otherwise, you will not get the correct result.
📌 Step 3: Plot Poisson Distribution Results
We are almost done!
- Now, select the f(X,λ) column and go to the Insert tab. Then from the Chart group, click on the Insert Line or Area Chart drop-down menu, and select the first icon of the 2-D Area section.
The following graph will appear.
- We have completed our job here. But we can add some formatting to get a better interpretation and to have an eye-soothing graph. After applying the percentage format for Poisson Distribution results, style 2 for the chart, and changing the chart title, we have the following result.
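If you want to double-check the spreadsheet outside Excel, the same non-cumulative probabilities can be reproduced with a short Python sketch (assuming SciPy and Matplotlib are installed; the mean of 4.6 below is an illustrative placeholder, not the value from this workbook):

```python
import numpy as np
from scipy.stats import poisson
import matplotlib.pyplot as plt

lam = 4.6                             # stand-in for the mean computed in cell E5
x = np.arange(0, 16)                  # numbers of bus arrivals to evaluate
pmf = poisson.pmf(x, lam)             # same values as POISSON.DIST(x, lam, FALSE)

plt.fill_between(x, pmf, alpha=0.5)   # area chart, similar to Excel's 2-D Area
plt.plot(x, pmf)
plt.title("Poisson Distribution of Bus Arrivals")
plt.xlabel("Number of arrivals, X")
plt.ylabel("f(X, λ)")
plt.show()
```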
Read More: How to Create a Distribution Chart in Excel
- In the Poisson Distribution formula, you must use absolute reference for the mean, or hard code the mean in the formula.
- All the inputs must be numeric, otherwise #VALUE! error occurs.
- X and λ must not be less than zero; otherwise, you will get #NUM! errors.
- If X is not a whole number, Excel will truncate it to a whole number.
- If you choose TRUE instead of FALSE in the formula =POISSON.DIST(B5,’Source Data’!$E$5,FALSE), you will get cumulative results that indicate the probability of 0 to X. But choosing FALSE will return the probability of exactly X.
Download Practice Workbook
Download the following sample workbook from the link below to practice along with it.
So, in this article, we have discussed how you can plot Poisson Distribution in Excel. If you have found this write-up useful, please let us know in the comment box. Also, don’t hesitate to ask if you have any queries. |